How this founder built a personal language model to mimic your behavior, knowledge, and style.

Suman Kanuganti is the Co-Founder and CEO of Personal.ai. Before Personal.ai, Suman founded Aira. He holds a BE in Engineering, an MS in Robotics, an MBA in Entrepreneurship, and ten patents in emerging technologies. Personal AI is a GPT-style implementation designed to mimic an individual's behavior and speak like them. In today's episode, Suman shares his goal of creating AI systems that understand and replicate users' communication patterns and cognitive abilities. He emphasizes that their language model has time awareness, allowing it to adapt its knowledge base depending on the user's age or point in their timeline. Suman also shares some lessons learned for AI product creators.


Full transcript:

Dhaval:
So today we have Suman Kanuganti on the show. Suman is the co-founder of Personal AI, a product that he will talk about, and we'll learn a lot more about his product journey and the lessons he has learned along the way. So yeah, let's get started. Suman, would you mind introducing yourself and telling us a little bit about your AI product?

Suman Kanuganti:
Sure, Dhaval. Thanks for having me. I'm Suman Kanuganti. My background is in engineering. Over 10 years ago I started creating companies; this is my second company. Previously I built a company called Aira, A-I-R-A. My philosophy has always been: how do you use technology to solve hard human problems? Aira was about using technology to fill the gap of missing visual information for people who are blind or have low vision. And Personal AI is about augmenting people's minds, where memory, cognition, and time are limited. We want to augment that using technology by creating a personal language model of every individual that essentially learns to behave, act, and be you. So that's a little bit of my background and who I am.

Dhaval:
Wonderful. Thank you, Suman. Thank you for that introduction. Tell us a little bit about Personal AI's narrative. Who is your target customer? What specific problem are you solving with the product? What is the overall narrative, and how is it different from your competitors?

Suman Kanuganti:
Yeah, totally. As individual people, on a day-to-day basis we create and consume a lot of information. We have a lot of experiences and a lot of conversations. But obviously 80% of that is lost or forgotten. Our goal with Personal AI is to create a model that actually learns your knowledge, your style, and your voice, more or less to be a digital version of you. Imagine being able to surface relevant pieces of your own knowledge on demand, whenever you need it. Or imagine you are having conversations in a chat or text message, with people talking to you, and relevant pieces of information start surfacing as you speak. So our intention is to augment humans with an extension of their own mind, because, one, cognition is limited, and two, time is also limited. You mentioned the target market. Our goal is to go after everyday consumers. Our intention is for everybody to have their own personal AI that is trusted by them, where the data belongs to them, and where the model gets trained over a period of time and grows alongside them. Unlike public or general models that exist, such as OpenAI's, Google's, or Alexa, which are mostly trained on public internet data, Personal AI is a unique model that uses a similar architecture to GPT but actually trains on an individual person's data. And it does so stylistically, relevantly, and authentically, to replicate as you would. So we are essentially trying to replicate your thought process and your mind and give you an extension of yourself.

Dhaval:
Wow. So it is adding the stylistic, tonal, and personal attributes to the replica that you are building for someone, which is not there in current GPT or any of those products, right? There's a bias where GPT gives very confident answers, but they don't necessarily align with your style or the way you communicate. And what you are saying is that not only does Personal AI help you do that, but it also creates the model in the first place using your personal attributes. Did I get that right, Suman?

Suman Kanuganti:
Yeah, totally. So you will create what we refer to as a memory stack, which is essentially taking all the unstructured data that you have in your digital world. Let's say you're having conversations online, you are texting with people, and you have probably written a bunch of different knowledge pieces out there. We create this memory stack, which is essentially a digital representation of your memory vault. In other words, we break your data down into blocks that are associated with time. And imagine, over a period of time, as you create and as you learn, your AI would also be training alongside you. That's conceptually how we have architected the system.
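Suman doesn't go into implementation specifics here, but to make the memory stack idea concrete, here is a minimal sketch of time-stamped memory blocks in Python. All class and field names are illustrative assumptions, not Personal.ai's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class MemoryBlock:
    """One time-stamped block of unstructured personal data (hypothetical schema)."""
    text: str             # the raw content, e.g. a message or note
    created_at: datetime  # when the memory was created
    source: str = "chat"  # e.g. "chat", "email", "document"

@dataclass
class MemoryStack:
    """A person's memory vault: an append-only list of time-stamped blocks."""
    blocks: List[MemoryBlock] = field(default_factory=list)

    def add(self, text: str, source: str = "chat") -> MemoryBlock:
        block = MemoryBlock(text=text, created_at=datetime.utcnow(), source=source)
        self.blocks.append(block)
        return block

# Usage: every new conversation or note becomes another block in the stack.
stack = MemoryStack()
stack.add("Met Dhaval to discuss the podcast episode about personal language models.")
```

The point of the sketch is simply that each unit of personal data carries its timestamp, which is what the later discussion of time decay and "time travel" builds on.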

Dhaval:
Got it. Now, if you were to dive a little bit into your product's architecture or the core AI engine, since the audience of this show are people who aspire to either create an AI product or add AI to their existing product: without getting into confidential details, what does the product stack look like? What does it take to build a memory stack, a knowledge replica that captures someone's knowledge, brain, and personality? What do you call that entity?

Suman Kanuganti:
Yeah, I'll try to provide answers, and then I'll try to provide some contrast with technologies that exist out there so that we can wrap our heads around it. The first thing is, at the core, we are essentially an AI-first company. We built an algorithm we call a personal language model. This is in contrast, or kind of opposite in concept, to a large language model. Large language models such as GPT-3, or any other open language models that exist out there, are close to around 170 billion parameters. In our case, our language models are around 140 million parameters. At the core, it revolves around an individual's data, not public data. And you can keep adding data to your model so that it gets more sophisticated with regard to the purposes of your mind; you can go broad and you can also go deep into specific topics. So yeah, at the core we build this personal language model for every individual to essentially mimic the behavior, knowledge, and style of that person. And the transformer that we have developed, we call a Generative Grounded Transformer. If you think about GPT as a generative pre-trained transformer, the subtle difference of our transformer is that it is grounded in your personal data. And whenever I refer to personal data, it's nothing but the memory stack I was explaining earlier. So every AI response that your personal language model generates has an attribution, and the attribution is nothing but a link back to the data: which data elements, which memories within your stack, are responsible for creating a particular response. One of the challenges for large language models is that there is no attribution, primarily because they are driven by aggregation of data, and there is quite extensive anonymization involved. So technically you cannot create that attribution, or it's extremely hard to create. Our goal is exactly the opposite. We want to have that attribution, we want to have the ownership, and we want to create that value for every individual consumer by creating their own individual model. At the foundation, it's a personal language model.
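The Generative Grounded Transformer itself is proprietary, but the grounding-plus-attribution idea Suman describes can be sketched roughly like this: retrieve the memory blocks most relevant to the query, condition the reply on them, and return those blocks as the attribution. Everything below (the toy keyword scorer, the function names) is a hypothetical illustration, not Personal.ai's code:

```python
from typing import List, Tuple

def score(query: str, block_text: str) -> int:
    """Toy relevance score: count of shared words (stand-in for a real retriever)."""
    return len(set(query.lower().split()) & set(block_text.lower().split()))

def grounded_reply(query: str, blocks: List[str], top_k: int = 3) -> Tuple[str, List[str]]:
    """
    Illustrative grounded generation:
    1. retrieve the personal memory blocks most relevant to the query,
    2. generate a reply conditioned only on those blocks,
    3. return the reply together with the blocks used, as the attribution.
    """
    ranked = sorted(blocks, key=lambda b: score(query, b), reverse=True)
    grounding = [b for b in ranked[:top_k] if score(query, b) > 0]
    # A real system would call the personal language model here; we just echo the grounding.
    reply = " ".join(grounding) if grounding else "I don't have a memory about that yet."
    return reply, grounding  # attribution: every reply points back to its source memories

reply, sources = grounded_reply(
    "What did I think about the first autonomous car?",
    ["I rode in the first autonomous car demo in 2013 and found it surprisingly smooth.",
     "Lunch with the team about the product launch."],
)
```

The contrast with a large language model is that here the response can always name its sources, because generation never leaves the user's own memory stack.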

Dhaval:
So this Generative Grounded Transformer that you are referring to, is it adaptive to human changes, as human behavior changes over the lifespan of the user?

Suman Kanuganti:
Exactly. The transformer also has a sense of time. For example, if you were talking about AI maybe three years ago, how you referred to your transformer or your technology may be different from your latest and greatest creations or thought process around your AI. So it has a time decay component. When you are chatting with your own AI, it normally anchors around the latest and greatest thought process of how you would respond. However, let's say you are contextually trying to fetch something from the past that happened ten years ago, say we are talking about the first autonomous car and my experience with it, then it'll go back in time and be able to fetch that response for you. So it's designed to work very similarly, or akin, to how a human mind would function. You can also think about potentially being able to drop your AI at a certain period in time. Given these are small models, and given that data is going in on a day-to-day basis, there is a new version of the model, so we are technically able to time travel your model to, let's say, 2020. Then, when you are having the conversation, it would replicate the information density as if you were functioning at that time, if that makes sense.
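One plausible way to picture the time decay and "time travel" behavior Suman describes is an exponential recency weight plus an as-of cutoff. The half-life value and function names below are assumptions for illustration only, not the actual mechanism:

```python
import math
from datetime import datetime
from typing import List, Optional, Tuple

def recency_weight(created_at: datetime, now: datetime, half_life_days: float = 365.0) -> float:
    """Exponential time decay: newer memories weigh more, older ones fade but never vanish."""
    age_days = (now - created_at).total_seconds() / 86400.0
    return 0.5 ** (age_days / half_life_days)

def rank_memories(memories: List[Tuple[str, datetime]],
                  as_of: Optional[datetime] = None) -> List[str]:
    """
    Rank memories by recency weight. Passing `as_of` "time travels" the model:
    anything created after that date is excluded, and weights are computed
    relative to that date instead of today.
    """
    now = as_of or datetime.utcnow()
    visible = [(text, ts) for text, ts in memories if ts <= now]
    visible.sort(key=lambda m: recency_weight(m[1], now), reverse=True)
    return [text for text, _ in visible]

memories = [
    ("My take on transformers, 2023 edition", datetime(2023, 1, 10)),
    ("First impressions of the first autonomous car", datetime(2013, 6, 1)),
]
print(rank_memories(memories))                               # anchors on the latest thinking
print(rank_memories(memories, as_of=datetime(2020, 1, 1)))   # "time travel" to 2020
```

With the cutoff at 2020, only the 2013 memory is visible, which mirrors the idea of the AI answering as you would have at that point in your timeline.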

Dhaval:
That makes sense. Yeah. So this language model has a time sensitivity, a time awareness, to it. If you go back to a younger age, it would have a smaller set of parameters, and depending on the size or scope, it may change depending on what point in the timeline you place it at. So what are the use cases of something like this for an average consumer? How would they benefit?

Suman Kanuganti:
Yeah. I'll give you a few different use cases, but I'll also tell you the focus of the use case that we are going after. The different use cases could be anywhere from simply being able to remember everything that happens in your life and being able to recall it. It's almost like a data store where you're not searching for the data, but you're actually interacting with yourself, and you're able to recall pieces of information and facts as well. Our personal language models also have different capabilities: as you increase your stack size and get more data in, the model moves from simply answering questions, to generating content for you, to having a conversation like you as well. That unlocks being able to generate responses, draft emails for you, write tweets for you, and even long-form content. And over time it could technically be a conversational mind that exists on the internet for anybody to interact with 24/7, and an asset that could potentially live on the internet forever, for your future generations to even have a conversation with as well. So it's designed to be a digital asset that essentially grows over a period of time, and you own it. The use case that we are going after is drafting responses where a human communicates with other humans, currently across multiple different platforms. One of the downsides today is that everything is kind of lost, and you are pushing the data to large big-tech companies. Our intention is to create a personal AI chat system where people communicate, and every conversation that people have within that system is trained upon. So if they give a piece of knowledge or information once, it'll be useful, or reusable, at a later point in time, given the particular context, again and again. So think about it as automatically drafting responses almost all the time. And you can potentially put your AI in a co-pilot mode or an auto-pilot mode that is communicating on your behalf with other people, if you so choose, all the time. So it saves time, but it also augments your cognitive capacity, because it's very hard to process all the information that is needed all the time.
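As a rough sketch of the co-pilot versus auto-pilot distinction, the same drafting path can serve both modes, with only the approval step differing. The mode names and drafting stub below are hypothetical, not the actual product API:

```python
from enum import Enum

class Mode(Enum):
    COPILOT = "copilot"      # AI drafts, the human reviews before sending
    AUTOPILOT = "autopilot"  # AI drafts and sends on the human's behalf

def draft_reply(incoming: str) -> str:
    """Stand-in for the personal language model drafting a reply in your voice."""
    return f"Thanks for the note! Re: '{incoming}', here's my take..."

def handle_message(incoming: str, mode: Mode, send) -> str:
    draft = draft_reply(incoming)
    if mode is Mode.AUTOPILOT:
        send(draft)          # sent automatically, in your style
        return draft
    return draft             # co-pilot: surfaced as a suggestion for the human to approve

# Usage: the same drafting path powers both modes; only the approval step differs.
handle_message("Can you share your thoughts on the launch?", Mode.COPILOT, send=print)
```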

Dhaval:
Wonderful. Thank you, Suman. A couple of quick follow-up questions and then we'll wrap. First, where are you in your product journey? Have you launched your MVP? How many users do you have? Where are you in the stage of the company? And second, what are some of the learning lessons you had from your product journey, any big lessons for other AI product creators?

Suman Kanuganti:
Yeah. We are a little less than three years old as a company. We've been focusing mostly on creating our language models, testing with a few groups of people, and experimenting to figure out what our market is. We will be launching the first version of the product in March. There is a version of the product where people come in and train their AI to essentially remember and recall information and train the model over a period of time, but technically the first launch, the personal AI chat application, is what we'll be launching in March. The learnings are interesting. First of all, almost two and a half years ago, when we talked about personal AI and creating a digital version of your mind, it was almost too hard for people to believe; they would think about it and say, I'm not sure the technology is there. And then GPT-3 came along. When GPT-3 came along in 2021, it was good for AI awareness, and then it was treated almost like a marketing tool because it was really good at content generation. There are a good number of startups that have evolved building on top of GPT-3, which are content generation tools. We were heads-down, essentially finishing our development on the tech as well as on the product, to create this personal language model. It has the generative capabilities, but it also has conversational capabilities. One of the insights was that we first went after content generation. But the interesting thing is, large language models are so great at content generation, and content creation normally happens with new content. Among people who would want to create content from their existing knowledge, the appetite is very low, or maybe it's a different use case. So the purpose of personal AI is more valuable where the existing knowledge is more valuable, where the existing interactions and experiences are more valuable. And that's where we landed on identifying our product-market fit: an everyday consumer application for augmenting your conversations and communications, making it more human-to-human instead of human-to-AI, because human-to-AI is very much a bot conversation. Our goal is to create this AI to be like you, not necessarily for you to talk to a bot. So there are several nuances around how we create these experiences, which was exciting in a way. So essentially where we are driving towards right now: ChatGPT came along, and the awareness around AI being able to chat is more acceptable, because people got a taste of what is possible, and it is great, but it's all trained on public data. So when we talk about personal AI, which is essentially ChatGPT, if you will, but for personal data, the promise is very exciting, and there is awareness and there is acceptance. So here we are, basically going to market and launching our personal AI chat application, which has been in development for the past year. We're excited to give it to everyday consumers, and everybody will have their own personal AI that will essentially grow with them, and it'll be theirs, right? It can make money for you. It can live in the cloud. While you are sleeping, it can do work. So yeah, that's where things are headed.

Dhaval:
What are you most excited about for personal AI in 2023?

Suman Kanuganti:
The most exciting thing is this: we've been at it for a while now, and finally things are coming together. What that means is we are finally able to fulfill this idea of democratizing AI, where everybody will have access to their own AI. We are also excited about this core idea of access between people. Think about what it means to have access to loved ones who have potentially passed away, or to people who are knowledgeable but whom we do not have access to. So there is an interaction gap that exists; an information gap may not exist, because the internet exists. I think personal AI is the next level, almost like an Internet 2.0, that will unlock this exchange of information in a much more meaningful way.

Dhaval:
And you say that generative AI is over a trillion-dollar market. Where does personal AI fit in this market space, and how much of it do you think belongs to Personal AI?

Suman Kanuganti:
I think it fits in the everyday consumer market space. It would likely be sitting next to any of the Google Assistant-type services. We are more focused on everyday consumers, so, let's say, your personal AI would be living on your mobile phone. It's about human-to-human communication, establishing those connections, increasing access between people, and reducing the burden of having to remember, recall, and generate responses. So I think it sits as a communication augmentation tool within the consumer space, and not necessarily in customer support or business engagement, where general AI is pretty good. We really want to tap into the personal nature of personal AI.

Dhaval:
Now, if I were to use personal AI in my work communication, is that part of your vision, or is that considered professional work and business AI?

Suman Kanuganti:
You could; there's nothing stopping you. At the end of the day, I think every company that starts with a consumer application or a consumer focus eventually penetrates into the business markets. We want to build this from the ground up, rather than going after existing business application data and building the models over there, because there are technologies that do that really well, right? We don't have to do it. There are not a lot of technologies that are actually really good at working on smaller amounts of data, people's data. So eventually, if you want to use it for work, there is nothing stopping you. In fact, there are people who use it for specific projects and to augment their conversations within a professional setting. But when I talk about everyday consumers, we are not going to start off by aggregating data in a business setting. We are going to start with personal data, individual data, starting with an individual consumer.

Dhaval:
Got it. Well, fascinating conversation, Suman. Thank you so much for making time to chat with me and for helping other AI product creators cut their learning curve. I really appreciate it. Looking forward to chatting with you again once your product is launched and you have a few more things to share with us. In the meantime, I wish you all the best with your upcoming launch, and we'll be keeping up with your news.

Suman Kanuganti:
Thank you, Dhaval.
