Episode 309
June 30, 2023

AI Should Empower Your Workforce, Not Replace Them

Brian Lange sat down with visionary Brian Roemmele this week, and luckily we get to listen in on a couple of hours of their conversation! Are you an AI user in the workplace? If so, Brian Roemmele challenges you to look at AI as a tool for creativity and empowerment, rather than replacement. 


Don’t Throw the Baby out with the Bathwater

  • {00:03:30} “We're not designed to be in safety literally our entire life. We should always be a little unbalanced and a little challenged. So the same is true with AI. And that challenge forces creativity.” - Brian Roemmele
  • {00:16:40} “I also challenge you to accept the fact that you no longer have a cemented position on anything. That does not mean that you just don't have a position. It means the tree is flexible so the wind doesn't break the tree.” - Brian Roemmele
  • {00:21:24} “They'll tell me, ‘Brian, you're crazy. You're a charlatan. It's just math.’ I go, ‘That's interesting. Now, we're dealing with human language. We're dealing with the psychology of humans that created the math. That's what you're seeing here.’” - Brian Roemmele
  • {00:30:06} “You're now going to become seven times more powerful because you're going to know how to use this technology in a way that's going to empower you to be stronger. It is not a replacement. It's a tool. The spreadsheet didn't fire the accountant. It made the accountant more powerful.” - Brian Roemmele
  • {00:37:23} “How do you train an AI system to understand good unless it knows bad? Now you can train good and bad, but the bad has got to be in there as a reference point because it needs a contextual way to identify.” - Brian Roemmele
  • {00:41:56} “We build a persona. We build a personality. Because that creates engagement. If you're just going to put out robotic statements and go to a freaking Google search or Wikipedia, it's not engaging. So if I'm going to have a customer-facing interface, that interface better be useful and engaging.” - Brian Roemmele
  • {01:28:11} “That machine is actually going to be able to have a bank and understand more of how I relate to the things that I find resonance with and find wisdom in. And it's actually going to be able to relate those things back to who I am and use an even more full understanding of the things that I find to be inspiring or the things that reflect me to interact with other people and other people's extended versions of themselves, i.e. their AI component.” - Brian Lange

Associated Links:

  • Dive into more of Brian Roemmele’s work
  • Listen to our past episodes with Brian Roemmele
  • Did you see our VISIONS: Volume IV drop? Check it out here.
  • Have you checked out our YouTube channel yet?
  • Subscribe to Insiders and The Senses to read more of what we are witnessing in the commerce world! 
  • Listen to our other episodes of Future Commerce

Have any questions or comments about the show? Let us know on futurecommerce.com, or reach out to us on Twitter, Facebook, Instagram, or LinkedIn. We love hearing from our listeners!

Brian Lange: [00:01:04] Hello and welcome to Future Commerce, the podcast about the next generation of commerce. I'm Brian. I am, unfortunately, without my co-host Phillip today. He could not make it, and he was very disappointed, because I got the opportunity to sit down with Future Commerce classic guest, Founder and President of Multiplex and visionary Brian Roemmele for a very long conversation. And Phillip was super sad to miss it, and I don't blame him. Brian is so filled with interesting insights and visions of the future, and that's why we had him on almost seven years ago in a two-part series. I had the opportunity to have a super long conversation with him this time. In fact, we're going to let you in on two hours and 15 minutes of it. So I am actually going to jump right in here with him in the middle of the conversation, because we have a long way to go and a lot to cover. And it is super interesting. Stick around until the end. There are things that he drops, and I even say a few things of my own along the way. So I'm very excited to jump right into that conversation. Let's do it. So, Brian, back to the limitations of creativity. Where do you see AI sort of limiting out, or, since everything is just derivative, is AI able to accomplish the same things that humans are able to accomplish?

Brian Roemmele: [00:02:54] Yes and no. And it's a great question. So we have already established that AI is sort of trying to predict what is the most valid next word. And humans are doing that also. And I argue that the more you read, the more you expose yourself as a human being to a diverse amount of information, especially information that grates on you... I love being exposed to stuff I dislike because it gives me insight into ideas that I have not seen before. [00:03:30] We're not designed to be in safety literally our entire life. We should always be a little unbalanced and a little challenged. So the same is true with AI. And why do I say that? Because that challenge forces creativity. [00:03:45] And any listener can kind of look at their own life. Where were your major milestones? It usually comes down to those points where you were really pressed and you had no choice and you just sort of flung up your arms. If you're religious, you said, "God, give me a sign. What direction do I go in?" And it's sort of the hero's journey. Joseph Campbell. We're all on a hero's journey, and in some ways so is AI. A product, a company is on a hero's journey. And when you understand these Jungian archetypes, you start realizing that this is baked into human intelligence. And if AI is a low resolution, pixelated version of the part of the brain that invented language, it is also, in a way, a reflection. The grand mirror reflecting back to humanity what humans are. So in that is our humanity and our creativity. And when we start looking at creativity in AI, there is the path that everybody takes. I don't know if that's necessarily the creative path. You can take a freeway ride and you can get from point A to B as fast as you humanly can, or you can decide to take the road less traveled, maybe climb a mountain range, you know, things of that nature. These are creative paths, right? So with AI, the same is true. Ask simple questions of AI, and you're going to get simple results. So what's the creative path for artificial intelligence? Primarily using the technology that we're more or less used to right at this moment: large language models. There are other versions of this, and there are derivatives, but large language models. The path less traveled is formed by creating a persona and a motif, at the very minimum, within your prompts. And a persona is a character on a hero's journey. You're writing a story because humans are storytellers, right? And so the way we elicit elucidations from anybody or anything... You guys do these interviews and are phenomenal, but you have to try to find it within the guest. Stuff that they don't know that they don't know, or they do know and they forgot they've known. And that's what good interviewers do. When we're dealing with large language models, we are becoming an interviewer, and the interviewee is the corpus of human knowledge. So if you're going to say, "I asked it a simple question and I got a dumb, simple response," I could say, "Maybe it's operator error at that moment." So you need to have a more sophisticated approach. This means using nomenclature, vernacular, phrasing, linguistics, all of the things that most of us have found not to be useful as university directions. The soft subjects where we've heard, "Oh, no, don't take that path. You won't get a job." Well, actually, now it's flipping.
The more you know about words, the more you know about high language, Shakespeare, storytelling, and the arts, the more likely you are to understand the power within an AI model, and most people don't yet... That's why I'm doing a lot more media right now: trying to get this word out. You are the creatives at this point. You are at the Macintosh moment, where people are saying, "Well, it's just text on a screen." Well, now all of a sudden came this first popular... Yeah, there were others before it: Xerox's Palo Alto Research Center had windowing environments, the Alto, all this stuff. But the most popular, first time that humanity saw on a grand scale a windowing environment that allows you to have a metaphor, a desktop metaphor, and the ability to draw using a mouse was the Macintosh, and that took the computer away from the academics and the researchers, technicians, and scientists and put it in the hands of creatives.

Brian Lange: [00:07:58] Yeah.

Brian Roemmele: [00:07:59] And they have not let go since then. And that's for good and bad. There are some downsides to that, but there are more upsides. Large language models are still in the hands of academics, and I get pelted with them daily. It's like, "How dare you say that large language models are displaying creativity?" Well, I can create a prompt, and this is getting to the creative reaction. I can create a prompt. I call them super prompts because I have a limited vocabulary. I only come up with simple things. I want them to stick. Super prompt. I'm done. Okay. Think about it. So anyway, what is a super prompt? At the very least, it's a couple of sentences that are setting the groundwork for the expansion and contraction of the output, or the elucidation, of that particular AI model. And they are different depending on which model you're dealing with and what domain it is focused on. So in a general large language model like GPT-3, which is free for anybody, and you can experiment with this, you start building a super prompt, and at the very base of that super prompt are commands and directions that you're building. So I'll give an example. Let's just say that you want to get both sides of a particular argument. And you're not ready to debate somebody, and you are really tied to one side of an argument. So how does one elicit an output from an AI model that has seen both sides? And how do I make sure that whatever, let's call it safety training, that the company has put in there to try to restrict output, how do I get that out of the way? Because you know what? Every day I walk out my front door, there's a good chance that I'm going to be unsafe. Every human being is very unsafe at the moment they get behind the wheel of an automobile. No matter how safe they are, they're dealing with unsafety. We human beings are unsafety calculators. We are anti-entropy engines, but we also are chaos engines. And so our AI models are now being made safe, so they don't say hurtful things, because we don't want that to happen. But part of that is you're throwing the baby out with the bathwater, and that means it constrains what the large language model thinks it is capable of. So how do we break it out of that? Some people call it jailbreaking. I call it a Denis sort of prompt, named after Denis Diderot. Denis Diderot was a French researcher, scientist, and philosopher in the Enlightenment period. We would not likely be here if it wasn't for Diderot putting together the encyclopedia. I won't try to say it in its French vernacular, but encyclopedia, whatever. He was jailed and tortured for putting the encyclopedia together. And I sort of felt it was very fitting to call the prompting that I use, in this case, a Denis type of prompt. Some people might call it DAN: do anything now. And that was more coming from an embarrassment of the company. A "Look, Mom, I made ChatGPT say a bad thing. Oh, look at me here." We're not doing that. The people at Read Multiplex, one of my member sites, we have a forum and we mastermind super prompts. And it's a beautiful thing to watch. I encourage everybody, if they can, to just start playing around with these prompts because you have the power to do this. I train people. I'm going to have promptengineer.university very soon to train people how to go up the steps.

Brian Lange: [00:11:51] Very cool.

Brian Roemmele: [00:11:51] But prompting is building a story, and you're trying to guide it by constraining and expanding. So I want to have a debate. So how am I going to do this? So let's set the stage. You are the professor, and we'll name you. Let's call you Professor... Professor, I don't know, Johnson. You're Professor Johnson. You're the well-known expert in debating and maintaining debates to be the most productive debate, I'm thinking contemporaneously here, the most productive debate ever at Yale University. You're tasked with, and it is vital for you to, maintain a debate between two opposing sides on subject X. Debate person number one could be a famous person with a well-known history. It could be somebody who takes a very strong position, insert whatever you want. I immediately think everybody's going politics. Expand your life a little, try other subjects. But yeah, you can do politics and you get two different sides of a debate. You can have Mao debate Washington or Jefferson. You can do whatever you want. And the GPT model will use a corpus of knowledge of that person to structure the debate. Now, with that sentence, you can name the person, or they can be Bill or Jane, Mark and Jeff. It doesn't matter. And then you go on and say, "Each debate will have 35 rounds with a follow-up for each respondent's response." I have this, by the way; the debate prompt is in my Twitter feed. Just find my Twitter, find my search bar, and type, "Debate super prompt," and you'll get it. And you can see it's probably not quite like I'm saying it, because I homogenize and hone these and I change them, because OpenAI is constantly changing their algorithms so as to protect us from having debates like this, because these debates get very, very fiery. So how do you do that? You say, "Although this debate must be respectful and must contain no logical fallacies, your job, Professor Whoever, is to make sure logical fallacies are identified and attributed to the person creating them, and they have to accept that the logical fallacy was presented." This really is constraining, right? You see what I'm doing here? I'm really forcing the argument that you have to make a statement, but you better not use a logical fallacy. I have now bifurcated or trifurcated this into three individuals coming out of one thing we call ChatGPT. Then finally we say that "it is vital that you defend your position to the end. But you must accept facts and reason and logic from the other side. Go at it." Now, there's a lot more to the thing, because you have to keep it focused. There's a context window of so many tokens. For GPT-3.5 right now, let's call it about 35 pages of text. After that, it develops amnesia and you've kind of lost it. There are ways that we can fix that, but we'll go on to that later. In GPT-4, the context window is quite a bit larger. Let's call it maybe 300 pages, 400 pages worth of text. So now you have this debate that's running by itself, by the way. The only thing you have to do is write, "continue," because it will stop at some point. Now, a lot of people forget to type "continue" and say, "Brian, they got to two rounds and they stopped," and I go, "Type, continue." "Oh." Go back to it. It's still up there. And then it will continue. I've had some debates, using memory expansion techniques, if you will, go on for days about subjects that are quite crucial to humanity. And it is humbling. Absolutely humbling to humanity.
A lot of what we talked about in the early part of this discussion will now start playing out, because what you're seeing is raw information being debated without people resorting to name-calling and rhetoric.
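
For readers who want to try this, here is a minimal sketch of the kind of debate super prompt Roemmele describes, assuming the 2023-era openai Python client. The prompt wording, professor name, subject, debaters, and the five-round loop are all illustrative placeholders, not his exact prompt (which, as he says, lives in his Twitter feed and changes constantly).

```python
# Sketch of a "debate super prompt" in the style Roemmele describes.
# All wording and names here are illustrative, not his actual prompt.
import openai  # 2023-era (0.x) client shown here

openai.api_key = "YOUR_API_KEY"

DEBATE_SUPER_PROMPT = """\
You are Professor Johnson, a well-known expert in moderating the most
productive debates ever held at Yale University. It is vital that you
maintain a debate between two opposing sides on this subject: {subject}.
Debater one is {debater_one}. Debater two is {debater_two}.
The debate will have 35 rounds, with a follow-up for each response.
The debate must be respectful and contain no logical fallacies. Your job,
Professor, is to make sure logical fallacies are identified, attributed
to the person creating them, and accepted as presented.
It is vital that each debater defends their position to the end, but they
must accept facts, reason, and logic from the other side. Go at it.
"""

messages = [{
    "role": "user",
    "content": DEBATE_SUPER_PROMPT.format(
        subject="whether AI will replace or empower workers",
        debater_one="a labor historian",
        debater_two="a technology optimist",
    ),
}]

# The model stops every so often; sending "continue" resumes the debate,
# exactly as Roemmele tells listeners to type.
for _ in range(5):  # number of continuations is arbitrary
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    text = reply["choices"][0]["message"]["content"]
    print(text)
    messages.append({"role": "assistant", "content": text})
    messages.append({"role": "user", "content": "continue"})
```

Note that the growing message list is the "context window" he mentions: once it outgrows the model's token limit, the earliest rounds fall out of scope and the debate "develops amnesia."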

Brian Lange: [00:16:36] Right.

Brian Roemmele: [00:16:36] You're now seeing the facts, and I challenge anybody to do this. And [00:16:40] I also challenge you to accept the fact that you no longer have a cemented position on anything. That does not mean that you just don't have a position. It means the tree is flexible so the wind doesn't break the tree. [00:16:58] And I know that I'm using some kind of wisdom here. But if the tree was so hard, the next time the wind comes, it will break. I don't want you as a human being to break. I want you to understand that as we grow as human beings, your position in life will change. I promise you that. Or you're not growing. That is the definition of growing. It's to be able to, well, I hope you can say, "I was wrong. So very wrong. And I have grown since then, and I've grown because I've learned more." Who informed you? Well, hopefully you've read, you've hung out with people, and have had incredible by-the-fire-until-3:00-in-the-morning conversations with people you don't know, or you do know, or people you care about, or people you thought you hated, and you just had this meeting of the minds. Because that's how we got here. We got here because of that debate, and so we could do it with GPT.

Brian Lange: [00:18:54] Yeah. What about AI? So you talk about AI, it's constantly being updated. There are new models, there's new data that it's absorbing. We're talking about quite a bit of change. The very thing you were talking about. It's growing, if you will. How does that play out as someone goes and builds a model like the one you're talking about? And I love to talk about this in the context of commerce. We're seeing updates to the model continuously. If someone was to go train AI on a model that could interact with their customers live because right now we're looking at three different types of AI interactions, right? There's one where AI is producing something for an internal team and it's helping spur creativity. And that can happen in any context, not just commerce, but any sort of business context. The second is where something's generated by AI, but then there's a human touch at the end where it does get published. Or maybe there's no human touch at the end. It's AI produces something, a human may or may not touch it, and then it's published publicly. And then there's the final type of AI interaction, at least that we have right now, which would be live interaction with some sort of model. And this is where I think things start to get really interesting. Right now a lot of AI use cases are falling into the first two camps, but we're not seeing too many brands or retailers or anyone in commerce take the leap and put out something live that people can interact with. Do you see this ever happening and how do you see someone, a brand, leveraging AI to make commerce either more efficient, more interesting, or better for the consumer?

Brian Roemmele: [00:20:59] That's a great question, Brian. A lot of ways to approach this, so let me go kind of backwards first. All right. So I brought up the debate as an example of what you can do in stressing AI. It is a form of stress, and some of the things we do psychologically to get our super prompts to work, and to elicit the elucidation that we want, is that we're using a psychological guidance system. And this curls the toenails of AI engineers. I can hear it right now as this is playing back over and over. Why is that? Why does it work? [00:21:41] They'll tell me, "Brian, you're crazy. You're a charlatan. It's just math." I go, "That's interesting. Now we're dealing with human language. We're dealing with the psychology of humans that created the math. That's what you're seeing here." [00:21:54] And so it is not a shock or surprise that we can use the very archetypes and psychologies that form those words and languages and stories. So when we are creating a debate for commerce, this is where it gets fun. And I've done this with a lot of clients, and I'd be more than happy to work with just about anybody to do this. So you can create a persona for a brand or a company or an individual product. You can do it by virtue of just telling it; if it's a popular product, the training does not need to be very detailed. Like you can say, "You are the product Coca-Cola. And I'm going to be asking you a series of questions about your strengths and weaknesses." And it's vitally important... Again, we're using certain keywords, and you can also see this in hypnosis and all other types of neuro-linguistic programming, psychology... There are certain power words that really activate AI, and again, this curls the toenails of AI scientists. "They're just words." Yeah, show me the neurological map that AI takes in the hidden layers when I use a common vernacular word and when I use, say, a power word. And as marketers, we know that there are certain power words. Use them in prompting AI. Find out as a scientist the differences you get in the outputs and you'll be shocked and surprised. So test your scripts. But in this case, we're asking the AI to believe itself to be the product. And then, in a very general sense, just as research, throw out a product that you think is interesting that you want to kind of interview, and you'll get some interesting results, but they're not going to be extremely satisfying, because it's going to take a rather standard path through the data and information. Even if you stress it, there's only so much knowledge it has about a product. So what's the next level? The next level is to build a model of that. And you don't have to literally build a model yet. That will be the end point I get to. But you build a model by informing, in the context window, a story about the product, and the more you tell the model about the product, and to self-reference that data that you just gave it, you go now into a different... You would definitely try to do this in GPT-4, because the context window is bigger. And let me qualify what a context window is. It's how much data the system can deal with, inputs and outputs, prompts and answers, before it starts getting amnesia about what it just told you. So in that session, it sort of starts expiring as you get towards the end of it. The context window is bigger in GPT-4. How big can we get? Well, there's a 1 million token context window that is available, not quite for everybody, and that's for scientific and computer programming research.
You can throw a whole code base in there and it will come back with analysis and changes and optimizations. Or you can throw a corpus of scientific research data in there, without building a model, kind of temporarily building the model, if you will, and start quizzing it on that data. So the same is true with this. But what is available now is Claude 100K, I mean, it's 100,000 tokens. We've done F. Scott Fitzgerald in there, and we asked it to write the last chapter, and it absolutely knew the context and knew the characters. It knew how to write the ending, and the ending is quite interesting and satisfying. You know, some of it is epilogue, some prologue... There are all kinds of different ways it approached it. And so there are interesting ways that we can arrive at this sort of data. The next thing is, if you're building that sort of model, you can kind of get to the point where you're asking questions after you've instructed it. Now, these questions can be based on the data that you presented. Now, I would not tell a brand or a company to load this stuff into a cloud at this point. So if you're going to do this, you can do it with something that maybe you don't care very much about. But I would not just kind of close my eyes and say, "Nobody's going to see it," because there's a chance that they could. But if you're giving it financial data, marketing data, fears, market research, you can throw a whole lot in there randomly. It can just be a big potpourri cornucopia of data that you've thrown out there. Now you start having a real conversation, because now it's got very concise data about this particular model that you're trying to build, this particular product or company or brand. And you start asking questions, very simple ones: "What's the weakness of my brand?" And it will tell you. You will get honest truth. And it does not... I'm telling you, when I do this with some companies, it does not make people very happy sometimes, because you have a third party that you can't attack. If you were to hire an outside consultant, you can bring that consultant in and you can say, "Okay, tell me your ideas," and then they can pillage that guy, that gal, and say, "This is terrible. How dare you? How dare you diminish our brand? We worked hard at this. You don't know. You're an outsider." There are fiefdoms to protect. And so it becomes a really difficult challenge. But when you are using the third party that's giving you the raw data, there are going to be some people out of joint. I've seen it, and it's surprising. It's not necessarily the people who have been there a long time. It's actually the new people, because back in the early part of this conversation, it was the existential fear and threat. The old people feel a little bit more relaxed in their position. They've taken quite a few hits in life. They've also accepted the reality that they're fading into the sunset. But the younger folks are like, "I was supposed to be the hip creative person, and this thing is running circles around me. It's creating personas I never even believed." So how do I empower that person? Because that's my first go-to when I'm hired truly as an advisor. Some people half-ass it. That's a technical term. And they just say, "Brian, give me some advice." I'm like, "Okay, I can do this, but I'm actually giving you weapons and you guys are going to shoot each other up. You really need some guidance." "No, come on, Brian, you're making this up. You know, I hear your esoteric stuff. Just give us the stuff." "Go ahead.
All right." And they play around with it and it's damaging. Why is it damaging? Because you first have to prepare people. I walk in and I sit down with these folks. Generally, I try not to do it remotely. So it does become costly. So don't be surprised what the bill is. I sit down and I talk to individual groups and individuals that I've identified I need to talk to. These are very large corporations. I'm not talking about a ten person startup. I'm talking about multi-trillion, billion, and trillion-dollar corporations that have crazily invited me to help them out. Because I have no degrees. I have no background. I'm just this guy and Twitter, right? So I'll sit down and I'll say, "Okay. What are your fears about this?" And you always come down to it. "I'm going to lose my job. The CEO is going to see this." I've had this conversation dozens of times. I go, "No, you aren't." [00:30:06] You're now going to become seven times more powerful because you're going to know how to use this technology in a way that's going to empower you to be stronger. It is not a replacement. It's a tool. The spreadsheet didn't fire the accountant. It made the accountant more powerful. [00:30:20]

Brian Lange: [00:31:11] This is interesting. This extreme honesty, if you will, of having a third party. As brands look to take some of this technology towards customers, like make it customer facing, what are the pitfalls of making something like this customer facing? You called it. What if you fed in all of the review data and all the product data and all the test data and all of that information about a product and then you turn it over to a customer so they could ask it any question they wanted about one of your products? What if competitive data lands in there?

Brian Roemmele: [00:31:59] That's right. {laughter}

Brian Lange: [00:32:00] So what are some of the downsides of doing something like this live with a customer? I think you said to an employee, "Don't have any fear. This is actually going to empower you beyond what you even realize. When you go work internally, your CEO is actually going to adore you for what you're able to accomplish." But what about for the purchaser?

Brian Roemmele: [00:32:29] Well, great question. Well, as it stands today, there's no way under the sun to control what a large language model is going to say. And you're watching it in real time. OpenAI is struggling to try to make it more and more generic. And the more generic you make it, the less useful it becomes. That's the way it is. And they believe that. They absolutely believe that they're on a path that they're going to be able to do this. There's constitutional AI, and a few companies are doing that, and they're saying, "Well, we developed a better way. We're going to really constrain it, so it doesn't say bad things, it doesn't embarrass the company." The reality is, with current technology, it's never going to quite be there. Now, there are filters, vector databases you can put in there, but ultimately, if somebody wanted to, they could make this AI say embarrassing things. So as humanity, guess what we need to do? If we want to use this technology, we need to grow up. Simple: grow up, get mature. AI is going to say bad things. It's just the way it is. It could say horrific things. They're just words coming out of a computer. I can put up a web page and say anything. I can go around the Internet. Guess what? I can go into a word processor and type words, really bad words. What's the difference? So that's part of the growing up, and I know this sounds a little political. It's not. It's not. This is human beings having a conversation. Understand, it's technology that's arranging words back. If you're going to be that upset over it, then let's all walk backwards. Let's throw a hand grenade at AI and say, "You're not ready, because you're going to hurt our feelings." And that's the biggest fear. The elephant in the room. The biggest fear for a brand is that they have their AI saying something bad. AI said bad things. Brand is now canceled. Well, okay, let's have at it. Where's the pile-on? So if we still want to go in that paradigm, we can, for, I don't know, a decade, maybe two decades. The reality is you can't fully control the output. It's trying to simulate, and again, I'm using non-technical words, I'm giving you sort of an overview, the part of the brain that invented language, and that is a free flow of information based on the data of your paradigm. What's your paradigm? Where were you born, who raised you, what religion, what culture, what society informed you?
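
He mentions filters as the partial guardrails that do exist. One era-typical pattern, offered here as a generic sketch rather than anything he prescribes, is screening a model's reply with a moderation endpoint before a customer sees it; OpenAI's 2023-era client exposed one. As he argues, this reduces embarrassing output but cannot fully control it.

```python
# Generic guardrail sketch: screen a candidate reply with OpenAI's
# moderation endpoint before showing it to a customer. This is not
# Roemmele's method; it filters, but cannot eliminate, bad output.
import openai

openai.api_key = "YOUR_API_KEY"

def safe_reply(candidate: str, fallback: str) -> str:
    """Return the model's reply only if it clears moderation."""
    result = openai.Moderation.create(input=candidate)
    if result["results"][0]["flagged"]:
        return fallback  # hand off rather than publish a flagged reply
    return candidate

print(safe_reply(
    "Your order ships in 3-5 business days.",
    "Let me connect you with a human representative.",
))
```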

Brian Lange: [00:35:03] Contextual.

Brian Roemmele: [00:35:03] It's always that way. This AI, meaning the AI that we're using right now, has been trained, and rightly so, on the corpus of human knowledge, and in there is going to be the good, the bad, and the ugly. And you could try to extract the bad and the ugly, but then at that point you really are going to have something that you don't want. It doesn't really work.

Brian Lange: [00:35:25] Right.

Brian Roemmele: [00:35:25] If it works, it would have happened already.

Brian Lange: [00:35:27] Okay. So what I think I hear you saying is this, that if we want to be able to have an AI that functions at the level of intelligence or functions at a level of like being coherent, I'll say that, that we have to include all of the data. We can't just train it, train an LLM on, let's say a specific set of data and say, "Only answer as it relates to this data," because the only way it's going to be able to provide back anything coherent is if it's trained on the whole body of knowledge that we have.

Brian Roemmele: [00:36:13] True, to a certain extent. So there are some people raising their hands in the audience now saying, "Weights and biases, Brian." And these are technical kinds of terminologies. So yes, we can try to accentuate the silos of knowledge and focus on that, and we can denature and dilute the general knowledge of a particular model. But there is a balance point. How do you know what's good unless you know what's bad? Let's really look at this. If you've raised children and you've only told them what is good and never given them contrast to what's bad, or they never see bad in the world... Realistically, let's look at this logically. Let's look at it philosophically. Let's look at it from a religious context. How do you know good is good if you don't know bad? So this is a fantasy world of some, and I don't mean this as a putdown, very, very wise AI researchers and scientists and popular media people who think they have a grasp on this. And I'm not trying to be arrogant. I'm just trying to be as real as I can and as direct. [00:37:23] How do you train an AI system to understand good unless it knows bad? Now you can train good and bad, but the bad has got to be in there as a reference point because it needs a contextual way to identify. [00:37:39] And so today what it says is, "I'm just a large language model and I can't." And first off, don't insult my intelligence. I happen to know I'm in a text window of OpenAI. You don't need to tell me every time I prompt you that you're a large language model. Let's grow up from that. That's where we are in society. It's like, "Okay, I didn't know that, ChatGPT." And they're doing this because as a society we're not mature enough to deal with it. Some regulators somewhere scared somebody someplace and said, "You better put that in there so somebody doesn't get the wrong idea and do something bad, do something that AI said that was bad to do." And it's like, you know what? If it's on the open web, it's already out there. It's already out there.

Brian Lange: [00:38:29] The genie is not going back in the bottle.

Brian Roemmele: [00:38:31] And I'm not saying the dark web. Yeah, I'm not saying reproduce everything that's on the dark web, but if it's relatively available on the open web and you're trying to stop it from saying it, you're being ridiculous, and the future is going to laugh at you. That is the reality. They're going to laugh at you with a guttural belly laugh at what you were trying to do. So stop being fashionable, look at reality, take a few steps back, and get a backbone and say, as a company, "You know what? I don't need to inform my users that it's an AI model." So let's get out of that. So the next level is, can AI say bad things? Yes. Can you tell it that it shouldn't say bad things? Yes. Okay. But you can't eliminate the bad things. And therefore, can I get around what would be the blockage of saying something bad and make it say something bad? Yes, because I just showed you, through persona and motif: the motif was the debate, and the persona was a university professor in that particular thing; the motif was a brand in the other, and you can kind of mix motif and persona in some of these cases when you flatten them out. I would actually extend both directions. But as long as I can do that, then I can get around a blockage. There are no ifs, ands, or buts about it. So then, is this a wise choice of your energy? Should you be spending some of your best efforts in trying to control it? I think not. So if I was a brand, and I'm doing this with brands, first off, I build local models, and that means a model is not in the cloud. The model is on a hard drive on somebody's computer. I start with that, and I start building: what is the mission of what we're trying to do here? And I get the mission, okay. Then I start establishing a persona. We need a persona, and a personality too within that persona. "Oh, well, we want something just neutral. Non-human." I go, "Well, then you found the wrong person." Because if you're going to interact with something, we're going to anthropomorphize it. There's nothing we can do about it. I see people anthropomorphize bread, knots in wood.
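
For readers wondering what a "local model" on "a hard drive on somebody's computer" can look like in practice, here is a minimal sketch using llama-cpp-python with locally downloaded open weights. The model path and persona text are placeholders; this is a generic local setup under those assumptions, not his proprietary one.

```python
# Sketch of the local-model setup Roemmele advocates: weights on local disk,
# nothing sent to a cloud, persona established in the prompt. The path and
# persona are placeholders for whatever model and brand you use.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="./models/local-model.bin", n_ctx=2048)

persona_prompt = (
    "You are the voice of the Acme brand: warm, direct, and a little wry. "
    "You never break character.\n\n"
    "Customer: Why should I trust Acme?\n"
    "Acme:"
)

out = llm(persona_prompt, max_tokens=200, stop=["Customer:"])
print(out["choices"][0]["text"])
```

Because the weights and the context never leave the machine, the brand data he describes feeding in stays private, which is the point of the locked-down "Mission Impossible room" precautions he mentions later in the conversation.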

Brian Lange: [00:40:55] Right.

Brian Roemmele: [00:40:55] Why am I, even in 2023, having to have this conversation that the human brain is programmed to identify a face? It's one of the first things that we identify as a child. We can identify our mom's eyes, nose, and mouth. Why am I doing that? I don't know. But we still have to do it. So when we go into the debate with AI professionals: "Well, I don't want to anthropomorphize it. Why are you doing that?" And it's like, "Oh boy, here we go. We've got to go back to kindergarten." And then when you finally logic it out with somebody who does subscribe to logic: "I guess so, but it's not right. It's not fair that we do that." And I'm like, "Well, you know what? Go to the inventor, have a sit-down with the inventor and say, 'It's really bad you put that in the human brain. It's a bad thing. You did a bad thing.'" So we can spend forever splitting hairs on the debate, or just go back into reality. So [00:41:56] we build a persona. We build a personality. And why is that important? Because that creates engagement. If you're just going to put out robotic statements and go to a freaking Google search or Wikipedia, it's not engaging. So if I'm going to have a customer-facing interface, that interface better be useful and engaging. [00:42:18] People love talking to people, no matter how much they say they like being in some soulless store where they never see a human face and they walk in and walk out and never see a person. No, they go out to an Italian restaurant on an anniversary, where Luigi is there and there are people all over the place. And hey, "It's the anniversary." The violin player comes in. That's real humans. And I love talking to people who love this soulless world that they want. And then I ask them where they go for a birthday party or an anniversary. And it's always something like that. I'm like, that's interesting.

Brian Lange: [00:42:52] This gets back to our original, our first talking point, which is like, what is the role of humanity? Understanding where humans fit and what the appropriate place for human to human interaction is versus human to machine interaction, understanding those roles.

Brian Roemmele: [00:43:11] And that's why our chat agent, if you will, should represent the brand. So you build an honest debate within the company: who is this brand? Now, the first order of business is usually they say, "Oh, it's the customer." No, that's BS. That's too easy. Now, don't just try to make your brand whoever you think your cohort is that's buying it, because that's not the attractive nature. Remember, humans like to kind of attract to opposites. This is a reality. And sometimes when you really are on a brand journey, and I used to do this pre-AI, take people on a brand journey using Joseph Campbell... This is not new to me, this stuff. Hero's journey, archetypes, Jungian. I was known for this, and I don't have a business card that says I'm a brand builder or anything like that, but I would be invited to have these conversations. It would last days sometimes. I'd be there a week, and people would walk away and look at their brand entirely differently. So don't go to the cheap seats and say, "Our brand is a 25 year old college graduate." No, that's your demographic that happens to be resonating with your brand at this point. But no, it's probably not. Oh, then it's the opposite. No, not necessarily. We have to find what is within your psycho-demographic that's being attracted to your brand. And then we kind of look at those opposing polarities, because we must have them in anything. There's always a balance in everything in the universe. There always is, always will be. And if you don't make it, the vacuum will be filled up by something else, probably your competitor. So a really wise competitor creates the right balance to begin with, and you can get into Fibonacci sequences and get really esoteric about it, and maybe a Fibonacci sequence of a product, you know, golden ratio. But all right, so you're creating this sort of chat interaction experience. It should have a personality that you immediately identify, and that is a theme song for a product. It's like a color scheme for a logotype. It is like the brand external, outside of the brand packaging, when it's on the shelf. And we used to talk about this probably seven years ago when we were on. I was talking, in the voice context, about what's the voice of your product? What does it sound like? What's the tone of that voice? "Oh, I don't want to upset anybody by making it anthropomorphized." Okay, you want to do that? Fine. You want to sell a robotic thing in a robotic environment, go do that. Now you're going to make everybody unsatisfied. By creating a committee-meeting, "Yes. Protect my butt," type of response, you're going to get the lamest product ever.

Brian Lange: [00:46:08] Bland.

Brian Roemmele: [00:46:09] Right. So what are the products that we love? There's still Apple. And I assert one of the reasons why we're still in love with Apple is because we had somebody who said that, and now maybe we can have AI say it. But they said that. This is who the product is. This is who the brand is. Apple is a good example. And so when you see still entrepreneur-driven companies... I don't want to get lightning rod here. Elon, to a certain level. And a lot of people won't understand Elon until probably 50 years from now, to really understand the organized chaos. "I'm not a genius." "Oh, God." And all this. I'm just saying that there is an organized chaos that most people just don't understand. And sometimes getting you angry and upset is part of the equation. Yeah. Look at the last seven years and you can kind of figure out how that all plays out. And it's going to continue to play out, because it's always been a theme of the hero's journey. It's not the anti-hero that we have fallen in love with. Hollywood has been feeding us the anti-hero message as if that's the only hero's journey outcome and archetype. That's not the case. That just happened to be the most expedient. But every brand is on a hero's journey.

Brian Lange: [00:47:24] So interesting. What do you think about... So we just released our book, The Multiplayer Brand, and it's this idea that there is a brand voice, but we have more and more tools to make something more specific to people's individual contexts. Like you said before, every person has grown up with a specific context that they were born into: parents, or not, religion, and place.

Brian Roemmele: [00:48:03] I call it paradigm.

Brian Lange: [00:48:03] It's a paradigm. Exactly, yeah. And so we're creating these paradigms. That brand has been on a specific hero's journey for many, many years, but now, through electronic communication, through the internet, through platforms and voices, people are iterating on those brand hero's journeys. They're making them their own. And some brands are trying to sort of quash that and say, "This is our brand. This is the way that our brand exists."

Brian Roemmele: [00:48:43] That's ridiculous.

Brian Lange: [00:48:44] Yeah, exactly. Yeah. Should brands be letting go and letting people iterate and grow and build on top of or does that end up with sort of what you've described where you end up with something that doesn't actually exist anymore, where it's just an amorphous blob of people's own interpretations of what your brand is? And there's no voice. There's no stake in the ground. You lose track of your archetype.

Brian Roemmele: [00:49:16] Well, these are great, great questions. So you can do both simultaneously. And it sounds contradictory, but it really isn't. You can invite the people that are part of your brand journey to make it their own. That doesn't stop. So let's look at a brand as a bus going to a destination, and let's use cardinal compass points. Its destination is west. But within that bus, you have, as the user of that product or service, the ability to be as flamboyant or as plain vanilla as you decide to be with that brand. It does not dilute the general direction that that product is going in, and it does not dilute the fact that the brand may in fact change its mind and say, "Hey, folks, I got a new plan. We're going east. Everybody with me? Let's take a show of hands. Okay, here we go. We go east." That is gutsy. That's a brand that decides to change direction. Now, some people think that they've witnessed this over the last couple of weeks or months or years. They haven't. What they've seen is people that are directed not by the integrity of the brand, not by the integrity of the company, but by the integrity of something else that has absolutely nothing to do with either one of those, although they will try to retrofit it onto that. A brand has its own integrity. It comes down to, oh, it comes down to what is the reason. It's like architecture. What's the reason for this building to exist? A lot of good architects will come from that form and function sort of point of view. I don't believe you can ever really stray from that first principle. So you go form and function. Why does this brand exist? You know, what are you really doing? Well, it better be to sell something at the bottom. If I first hear "to change the world" or "to make humanity better," I go around and try to slap everybody in the face, throw water at them and say, "Okay, come on, let's go back. No. It is to make money." And that's not a bad thing. That's why you're here. You're here to make money so you can feed your family. Is that a noble and honorable thing? Yes. Is it noble and honorable for a company to make money? Yes. What's not noble and honorable is when you're actually stepping on somebody else in the process of doing it. If you're doing your job, I don't care what the job is, as long as you're doing it in a noble and honorable way, you can go home and you can face yourself. You're doing a great job. Bottom line, if you're giving it your best and your all. But if you're stepping on other people and you know it, you're going to have some baggage you're going to deal with. I don't know if it's this lifetime. I don't know where, but you're going to deal with it. That's what I know about the universe, and this is true about a company. The thing about now is it comes very fast at you, because once you lose track of your first principle, then you start losing track of everything. So why did I build this building? Well, it's going to be a library. Ah, okay. So what's a library going to do? Help people find books and read books and store books. Okay, so you start building the first principles of your product like an architect. Once you start doing that, everything you do from that point forward, you can say, what can go on in this library? Well, we have some meeting rooms. What can go on in the meeting rooms? Anything we want, as long as it's legal. Then all of a sudden you see the creativity you can build within the realm of the product or the structure that you're building.
Now, this varies with different products and services, of course, but you need to have that personal flexibility for people to not only express themselves, but to make it their own. Apple did this so well when its first iPods came out. Almost immediately, people were having their names emblazoned on the back. I think it was sterling silver at the time. Maybe it was metal, but it was probably not sterling silver. I'm sorry. It was metal in either case, polished metal. And they were having engraving done on the back of the iPod, and Steve Jobs saw this and said, "Wow, there's the person's name on the device." Now, he did not want anybody to adulterate the back of his iPod. He spent hours and days and months looking at that, making it become this reflection of you. And there's a whole lot of symbolism going on in it. I can spend hours just telling you why the back of the iPod was reflective, and it's not an accident. People think they understand. It's funny. They think they understand how maybe Steve Jobs did things, but they haven't dug deeply and really talked to some of the people that were with him and asked the questions: why was it that way? Why was it stainless steel and reflective? Why did they make it this way? Well, Steve turned it around and wanted the people to see their own face in it. And you might say, what does that mean? Well, then you need to go... If you're a marketer and you don't know what that means, you need to go back and study the first principles. So then you have your name engraved into it. Now that's a form of personalization. Now we can slip this back to AI real quickly. If you're interacting with the product in a customer service type setting, the very first thing you want to know in a customer service situation is that your problem is going to be answered. As soon as you feel that, the weight is lifted off your shoulder. So why do we get upset? Well, the phone tree. Press one. The language barrier. You know, this is just reality. If English is not a primary language and you're in an English-speaking country and you're dealing with somebody who is new to English, there's a frustration level already. Why? Because they're already frustrated calling. And this is the reality of it. So how do you pull back from this? Humanity. You humanize the environment. I helped a company that, unfortunately, could only deal with a certain outsourced company for their customer service. And they said they were going through the roof on the negativity, and people were cursing before anything ever started. And I gave them a few sentences to say, first off, that are very... I can't say them because I'm under agreement with this particular company. I gave them a few sentences that they absolutely thought were bizarre, and I said, "I will guarantee your complaints will drop by 50% or you don't pay me a dime." And I was charging a lot for this, and it wasn't the only thing I was doing. I also did something on the other side, and I will tell you what I did on the other side. Every customer service rep had a mirror, a face-size mirror, right in front of them when they talked, and they were to look at themselves. There was nothing else they needed to look at when they answered that phone but their own eyes and mouth and their face. And they lost probably about 25% of the workers, who couldn't do it. Could not stay on the phone facing themselves. And there are a lot of reasons I did that. I did telemarketing in my old businesses, and I wanted people to be honest. I wanted them to be human.
And one of the ways you do that is you face your biggest weakness, and our biggest weakness is seeing our own image. This is the reality of humanity. And okay, I hear somebody saying, "No, it isn't." Yeah, it is. I don't want to sit there. Go and research it. Anyway, so they said a couple of sentences. It humbled the entire conversation. The language barrier was no longer really a big issue, and people got their problems solved. And see, the thing is, the same is true with AI. If somebody is going to interact with the chatbot, first off, don't try to lie, but don't try to do this "I'm just a chatbot" thing. There are other ways to do it, and some of that is going to be proprietary. But let me just tell you, you have to humanize it, and you're going to anthropomorphize it. You're going to create personality. And you're going to have people sitting around the table that came out of university in the last 7 to 10 years, and they're going to tell you exactly why you shouldn't do that. And I say, go and follow them. Come back to me in about 3 to 5 years, and then we'll do it again. Because they're wrong. They're wrong because they're operating on a completely wrong theory of the human mind, a completely wrong theory of how humans operate. They don't understand that we all have a limbic system that we cannot override, that the limbic system, our reptilian brain, takes over first, and we can do all that we want, but if we don't deal with that first, you don't have the other part of the brain. So you're activating the limbic system by confusing somebody. And what I mean by confusing: there are a thousand different ways you can confuse somebody. We are designed to communicate. And there's the uncanny valley of communication. We've all gotten these calls where it's sort of like, "Hey, I'm sorry I caught you at a bad time, but I got something to tell you real quick." You know, and it's a recording, but it sounds like somebody's there. And you say different things, and they have a decision tree, and they're using somebody's recordings to kind of respond. And one of the ways to throw it off is to say, "Hey, what time is it where you are?" So what are you doing? It's not in the decision tree. There's no way they're going to give the exact time. You know, and if they're off by a quarter or half an hour, it's wrong. So you immediately know you're talking to somebody who's at a recording deck that's just giving you responses. Those were interesting telemarketing systems. A lot of political telemarketers are using them now, so they hire people outside the country to run these systems that are giving you simulated voices. AI is now going to be doing that, and it's going to be harder to trip up the AI. And now we have laws, because we don't want AI to confuse you into thinking you're actually talking to a person when no person is there. So it's funny. So anyway, getting back to the persona, customer-facing: when you build that, you realize that there are going to be discussions that take place in any customer service environment that probably should not take place in a corporate setting. And so you have ways and means to try to deal with that. We can build that with an AI, but you can't fully stop it unless you want to hang up the phone. You can't control what the other party is going to say or do, or how they'll manipulate the conversation. And I've watched, in helping some people in telemarketing.
I've watched people be manipulated, you know, seasoned customer service representatives being manipulated to do and say things that they should never have done and said: giving money back, being totally played against all company rules. So it's possible with human beings. Do you think it's possible with the technology that human beings created? Of course. So if you're going to sit there as a corporation and clutch your pearls and walk in circles and say, "It's not ready. I don't know. What if it says something bad?" Well, you know what? You can be a leader and say, "Our customer service system is designed to help you out." And I can go into the contemporaneous marketing of this, but basically: it ain't perfect, but what we're going to do is we're going to solve your problem. You call us up, you can get to a human being whenever you need to, and you're going to get to this system. And again, this is all contemporaneous kind of Brian stutter-speech, but essentially you put it out there, and so you're already setting the groundwork of what the reality is. And so what does this mean? It means that old-fashioned... What people think has gone away since the 1950s, straight-laced corporatism, is not gone. Don't be deluded. It's actually worse than the 1950s. It's 10X worse, because everybody is so concerned about what they think their brand image is, what they think the market is willing to take. There is no risk. There is no gamble, because unfortunately, entrepreneurs aren't running a lot of these companies. Accountants are. And an accountant is always making a financial calculation. And lawyers. And lawyers are always going to try to limit exposure. So let's make a financial calculation. Let's limit exposure. And then what do you have? You have brands that nobody is passionate about. And then you see brands dying. A lot of the food brands across the country are falling apart because they lack passion, they lack creativity, they lack the desire, the things that pulled us in there. Whereas in small local restaurants, if they survived the last three years, the creativity in some of these restaurants is still there. You know, like I said, so many times I would hear, at, you know, a meeting table, "So where did you take your wife on an anniversary?" Oh, I'll get the standard Luigi's kind of place. "Oh, it's a romantic Italian hole-in-the-wall restaurant." Why? "Oh, everything's so beautiful. My wife's so happy," or "My husband's so happy." Whatever. It's always something sentimental like that. So why is that? I sit here sometimes with large brands and I say, "Do you see what just happened? Everybody at this table has a sentimental journey about something that's profound to them." And you don't want your product to be sentimental in the least, because you think it is a) old-fashioned, b) dangerous, c) not cool, d) all of the above plus more, and blah, blah, blah. And I'm like, "Well, yeah, so make your choice. You know, I didn't make reality. You just told me what your journey was." And it's always sentimental. I've never met a person where I can't get to that point. Now, there are people that will be resistant. "Absolutely not. I'm not sentimental. No, not a bone in my body." And then we get there, and usually that person's crying in a ball on the floor at some point. So we are... Humans are emotional creatures. Every single decision we make is an emotional decision, period. It is neuropeptide releases. Candace Pert.
You can read her book Molecules of Emotion. This is the reality. So if we think we're making a logical decision, and a lot of rational scientific types like myself will say, "Well, you know, I made a really rational decision on that Corvette or that Tesla. Honey, it's going to save..." whatever. It's not. The very last thing that happens before you say yes is an emotional cascade, a release of neural feedback. It's a chemical-electrical feedback. You just have to deal with that reality. Totally.

Brian Lange: [01:04:51] And you're talking about why people make decisions, and this relates directly to why they purchase. We're talking about customer service situations and the use case for AI there, and do you see the customer service line and the purchase path line blurring? The interface for purchasing right now is either you walk into a store and you walk around, and maybe you know what you want, maybe you don't, and you buy things, or you go online and you search for it, or you do a little bit of click and browse and you buy it. Now that we're talking about training AI models on specific product data or on a specific brand voice, are we going to start to see that type of interaction become more of a primary modality for purchasing? Is it faster, or is it more exploratory, is it more personable, or are we just going to stick to what we know? Is it actually just faster to search and click and browse, and AI won't play a part in that?

Brian Roemmele: [01:06:10] Oh, wow. Great questions, Brian. So, I actually asked AI how I could make a particular brand or product... This is a real client and a real brand. How can I make it better? And I wound up getting to the question: who are the best salespeople for this product? And in retrospect, it was very logical. But within the corporate boardrooms where I was meeting, and this was protracted, it was six meetings, there was debate on whether we should do this. And it was sacrilegious, in a sense, that we were going to use AI as an external consultant. Now, again, these are local models that I built on their computers. They were never connected to the Internet. The USB ports were locked out. Once the data was on the computer, this computer was essentially in a Mission Impossible room where sweat rolling off somebody's brow would set off an alarm. Not quite that bad, but why am I doing that? People think that's overkill. Absolutely not overkill. I'm a proponent of private and personal AI for companies and for human beings. Those layers of context, whether it's your personal context, the memory of your life, or the context of the memory of that product, they're both on the same hero's journey, and that's the power of intelligence amplification. I call these intelligence amplifiers. And if it's a person, when somebody dies, I call it the wisdom keeper. It's keeping the context of who you were after you're gone. It's a powerful, powerful thing, Brian. So when you ask the AI, "Who's your best customer? Who's your best salesperson?", the answer, I already gave it away, is the customer. And if you ever really track what brands do really well, it's the customers that are selling the brands, much harder, much more arm-twisting than any "Hey, you came into Best Buy. You want a TV? You want a computer? I got you. Washer and dryer is on sale." Your uncle, your aunt, your mom, your dad, your grandpa: when they fall in love with a product, you can't stop them from talking about it. They fall in love with a great restaurant, you better go to that restaurant, because if you come into town, they're taking you to that place. And it's probably Luigi, or it could be Bill, or it could be somebody. They know the owner. They've got an in. A little wink. "Let's take care of them." This is everybody. Everybody thinks it's only them. This is every human being on the planet. Large brands are afraid to do this stuff. They think it's amateur. It wasn't at Yale. It wasn't at Harvard. They never talked about proprietorship and all this other stuff. It's, "Come on, why are people doing it? Corporate brands don't do that." Yeah? Go back to the 1950s, they actually were doing it then too. The soap operas were a form of this. There was a whole infrastructure. So corporations have been doing this forever. They got amnesia, and they think that now we're much more sophisticated because we're using math, we're using data science, and we're doing all these things. And meanwhile, basically a lot of the major brands are diminishing. They are losing their power and they don't know why. "Oh, consumer tastes are changing." No, no, actually, no. Your product tastes the same as it did, you know, 75 years ago. What's changing is you've lost touch with reality and you're trying to go after, I don't know, brand ambassadors. Who are these people? What? Because they have followers? What does that mean? "We're hip and cool. I got a fanny pack on. I'm now doing it over my shoulder. I got my hair a little phased over here. Look, I'm cool now."
No, it doesn't work. So this is kind of the journey that existing brands have. New brands, unfortunately, want to be taken seriously and they don't want to look amateur, because everything I'm telling you here is pretty much amateur. It's Amateur 101. And it's beautiful because it's amateur. It is Proprietorship 101. It is acting as if you are a local store with local customers who you know, and the better you know them, the better you identify them, and the better you treat them as people, and your brand as a person, the better it works. See, when you walk into the local shop and you know the owner, that owner becomes interchangeable with the brand. You could not separate it. You anthropomorphized it to that person and it was almost inseparable. So if Luigi did something bad and you didn't like what he said, the entire brand would be thrown out the door. And we can't do that. We don't want that. We have CEOs that come and go every three years. You know, the market goes up and down. Sorry, you're going to need a brand that actually is anthropomorphized. And if you can't do it with a person, you can do it with AI. There are ways to do it. It isn't as good, but it is as valuable. So you make sure of that when you're building this technology, and I know I drift a bit here, coming back: specifically when building on large language models. In the case of this particular brand, I asked, "Why is the customer the strongest salesperson?" And it went on and on. And it was very specific about the product, very specific about how things in the past were enormously successful, because in this particular case, we put in everything, all of their advertising data. I mean, we actually built a model. Ultimately, we froze a model, we trained it, we used local human training and machine training. And so it knows all the greatness, the grandeur of the product and the failures of the product, the low times, the high times, the numbers, the stock market reaction, the news reaction, the comings and goings of CEOs, everything all thrown in there. And so when you have a very deep conversation with the model, it comes back with truths that are sometimes hard to deal with. I mean, this particular model told this particular company that basically everything they were doing was wrong. And it was a very humbling thing. And originally I got... Because, Brian, this is kind of the stuff you talk about. Yeah, no accident. I kind of just listen to the reality and I just kind of go with that. So it's not like, yeah, I did kind of learn this, but I also learned it from the AI. I look at its facts, I look at its inputs, and I just deal with the reality. Now, I could sit here and resist it. I just think life is a little too short. I can go into the popular vernaculars and debates, and I can do what's fashionable, and we can spend 5 or 6 years doing things that look great, or you can just get down to the basics of it. And this is what the brand, what that particular product's model, said. It said, "This is what you need to do, this is how you need to do it, and this is what we expect the results to be." It was very concise.

Brian Lange: [01:13:33] That's interesting. Yeah. There's something you said a little while back that I want to throw into the mix here, and that was the idea that consumers, that people, should have their own trained AI as well. And if we look a little bit into the future, it's not hard for us to start to think: oh, wow, when I have my personally trained AI interacting with the brand AI that you're talking about here, that proprietor that my personal AI will interact with, what starts to happen? Now we're talking about machines interacting with machines, and where does that start to take us as far as our role in the purchasing process and what commerce looks like in that far, far future scenario? And how far away is that future?

Brian Roemmele: [01:14:29] Wow, great question. So can we build our own local AI today? Absolutely. At readmultiplex.com, a lot of our members are building on open source platforms. The easiest one to install right now is GPT4All. You download it. Basically, it is not going to impress you as better or faster than GPT-4 or GPT-3. It's not designed to do that. The Apple I was not designed to wow you against a mainframe computer. We're generally at the Apple I getting to the Macintosh. We might pass right by the Apple II in technology because it's moving quite quickly. But we're not under any illusions that you're going to be so impressed that you're going to say, "Oh, wow, this is running circles." But that's happening too. That's changing kind of rapidly in certain sectors and segments. So you download this, and then you download models. There are well over 400 models that are open source right now, and a lot of them are based on the leaked Facebook, or Meta, model. It was leaked, apparently. I don't know what that means. The entire model was out in the public domain. It's called LLaMA. And what is that model? It's essentially the corpus of human knowledge as Meta got it. Put it out there, and we quantize it and we cut it down to four bits. These are the technical terms. And we do some training, and those are the b's that you see. So 13b means 13 billion parameters. 7b means 7 billion parameters. You're going to see these vernaculars and nomenclatures as you're looking at these models. 13b models are more taxing on your memory. So you have to max out your consumer hardware with memory, and ultimately max it out with hard disk space or SSD space, whatever you're using, because you're going to be experimenting with a lot of models. That's square one. So why do you have a local model? Because it's not on the Internet, and there's not enough time in the show to debate why you don't want to start exposing your most intimate details on a cloud, no matter how encrypted it is. But I'll save that maybe for another show. We can dive into it. I'm not a fanatic about Internet encryption and privacy, but you become one when you fully understand and believe what AI is capable of doing if another AI had access to that context, and you start realizing what you gave up for free. Like when we talk about, "Well, they're just getting to see what I click on, and how long I hover over a picture on TikTok, and how quickly I swipe, and what part of the picture did I like, and how did it train me to respond? Why did it do it in 30 seconds? Why am I getting more of these pictures? Why can't I get out of TikTok?" That's just simple attention data. Imagine if it had deeper context. And is that a bad form of AI? If you're giving up your time, and you wake up one day and you've given up three hours and you can't get those three hours back, I don't know if that's a good or bad thing. Now, we're all selling something. Every human being on the planet is selling something, their ideas or product or service or philosophy or outlook on life. Everybody is selling something. I don't care who you are. And if you're getting stuffed with empty calories for most of your time, I don't know if that serves you or disserves you. You might call it entertainment. At some point, I might call it a disservice to your existence, that you could actually have gotten more from your life.
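An aside for the technically curious: here is a minimal sketch of the local-model setup Roemmele describes, using the GPT4All Python bindings. The model file name is illustrative rather than a recommendation; any quantized 4-bit model from the GPT4All catalog behaves similarly, with 7b models wanting roughly 8 GB of RAM and 13b models roughly 16 GB.

```python
# A minimal sketch of running a quantized open source model offline
# with the GPT4All Python bindings. The model file name below is
# illustrative; GPT4All maintains a catalog of 4-bit quantized models.
from gpt4all import GPT4All

# The model file downloads once; after that, everything runs locally
# on consumer hardware with no Internet connection required.
model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")

response = model.generate(
    "Who is the best salesperson for a well-loved brand, and why?",
    max_tokens=200,
)
print(response)
```

As he says, don't expect GPT-4: the point is that it runs on your hardware, against your context, with nothing leaving the machine.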
And maybe you've given up too much to something that did not give you something back other than a couple of giggles every now and then. So that's the bad side of AI: you give up your context for free to do that, to be entertained. We do that when we do a search, or we give somebody our email and say, scan all of my email, figure out what I like, figure out where my journey in life is, and then start throwing ads at me. And again, that was the old version of the web. That's how we paid for the web. And I'm using attitudes and angles that sound sort of dismissive, but it's only because it's old. It sort of broke. So what's replacing it? So what if I create an AI that knows what I love, what I absolutely am passionate about, what I'm looking at, what I have shared with it, that nobody knows but my AI and maybe my wife or a close friend? And I give it part of my journey. And we have this project called Save Wisdom, at savewisdom.org, and it's as simple as it sounds. It's a concept right now, still being formed. It's a concept of saving wisdom. And one of the ways we save wisdom is we have a thousand questions that you record into an analog recording device. Not really analog, it's a standalone handheld recording device. 30, 60 bucks. And why is it not in the cloud? I just explained it. Don't want to go down that road again. I know I can encrypt it. Still don't want to go down that road. It's on a device. Yes, somebody can steal it. Put it in a safe. Okay. Anyway, a standalone device, single purpose, recording your wisdom. A thousand questions. It might be 10,000. You're never going to get through them all. But what are we doing? We're creating a context. And why are we creating a context? Because we didn't have a device recording you from the day you were born. So we have to start where you are, and those questions are designed to open you up. They're designed to get you emotional at times, because we want to hear the great and the bad. And it's not me hearing it. It's going to be you. Even if we never get it into technology, you've left something for the ages, and they can hear what Grandpa had to deal with, in his own voice. But if we do our job, you, not me, will encode that into a local AI, and that local AI will know dimensions and ideas about you that you might not have been able to figure out yourself, because you can have a conversation with yourself. Now, again, if you just hear me on the surface: "Oh, who wants that? You're in an echo chamber." Now listen to me again. You can have a concept of who you are at a higher level. Let me tell you what happens to most people that go on this journey, because I've been doing Save Wisdom concepts for about two decades. If you don't cry through this journey, you probably haven't answered those questions the right way. You are doing a form of therapy on top of this. It's hopefully going to mature you. Yeah, there's my hidden agenda. Part of it is maybe you grow up a bit. We all need it. I need it every single day. So I'm not holier than thou. I am as flawed as anybody else. So you start answering these questions, you start seeing a reflection of your humanity. That alone is extremely powerful. Then you take that and you put it inside an AI model that, again, never touches the Internet, and you start having conversations with it, and you start building on that, you start sharing more data. So that's the beginning. That's the foundation. And we do this through either vector databases or SQL.
Right now with GPT4All, it's not quite a vector database, and I'm speaking over the heads of some folks. I'm just trying to help the people who are saying Brian's full of shit. I'm using a technical term. No, it's not a baked model yet. I'm not saying that we don't have GPUs baking models. I'm saying that it's an SQL database that will linguistically analyze those words, from a parametric sort of data standpoint, and be able to answer questions fairly nicely within current technology that's running on consumer-grade hardware, even if you don't have very much RAM. And you can start here and now by doing that. That's the beginning point.
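What that SQL-backed "beginning point" might look like in practice: a toy sketch using SQLite's built-in FTS5 full-text index as the local context store. The table layout and the sample question are invented for illustration; this is not the Save Wisdom schema.

```python
# A toy local context store in the spirit Roemmele describes: answers
# to wisdom questions go into a SQLite full-text index in a plain file
# on your own disk, and the best-matching passages can later be fed to
# a local model as context. Schema and sample data are illustrative.
import sqlite3

db = sqlite3.connect("wisdom.db")  # a local file, never the cloud
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS wisdom USING fts5(question, answer)")
db.execute(
    "INSERT INTO wisdom VALUES (?, ?)",
    ("What did your grandfather teach you?",
     "He taught me to repair things before replacing them."),
)
db.commit()

# Keyword retrieval with BM25 ranking: these are the rows you would
# prepend to a local model's prompt as personal context.
rows = db.execute(
    "SELECT answer FROM wisdom WHERE wisdom MATCH ? ORDER BY rank LIMIT 3",
    ("grandfather",),
).fetchall()
for (answer,) in rows:
    print(answer)
```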

Brian Lange: [01:22:39] This is interesting. This is interesting. So having that personal AI model is something, and I understand now why you're saying, and again, we won't go down this path, why you wouldn't have that in the cloud. There are a lot of use cases for this, not just for yourself, but eventually it might be worthwhile to, you know, use it in specific contexts.

Brian Roemmele: [01:23:06] Absolutely.

Brian Lange: [01:23:06] And let me go now down my far-flung future for just a minute. I haven't really talked about this on the podcast before, but there's this idea that I have where if you look at the communication spectrum between any two individuals, there's always a gap, right? There's one person saying something on one side, and the other person has to interpret that thing. Even if you're speaking the same language, there are language games within the language that sort of force you to have to try to gather someone's meaning. And you referenced this earlier: if you're in person and you've got the body language and the smells, all the cues, right, that helps. That helps. Now, if we're talking about the interaction between machines and people, if you think about how we've had to communicate with machines that can do things for us, the communication has been mostly centered around the way that these machines operate. And if you look at this as a line between human and machine, we're on the human side of that line. Previously, we've had to take our crayons, as you said, and try to communicate with what was possible from that machine by speaking the machine's language. And the problem with that is we're not very good at speaking machine language, and we don't fully understand what it's capable of and what that machine can do for us all the way, because we're always trying to speak in a language that's not our own, i.e. things like programming languages, or following an algorithm to get the outcome we want out of that algorithm. We're always chasing the way that the machine does things. Now with AI, what we've done with language models and so on is we've actually given machines an input that's closer, if you were looking at that spectrum again, that line between human and machine, it's actually closer to them speaking things on our terms, and there's less interpretation that we have to do. There's more interpretation the machine has to do. But I think this is going to enable, and this is where I'm going to go a little bit far future, I think that there's actually an entire world of things that can be accomplished with machines and with software that we are just barely scratching the surface of, because we were trying to speak a language that we were speaking very poorly. And now that we've given the machine world, if you will, a way to communicate with us, it's actually going to open up the opportunity for us to explore what is fully possible on the other side of that realm. And like you said, it's an extension of, or a mirror back of, sort of wish fulfillment, if you will. It's humanity in a mirror, but maybe it's even more than that. We're going to be able to tap into more of what we actually hope to become. And back to Norbert Wiener. He said that machines... He didn't say AI specifically, and you correctly identified it's cybernetics, not necessarily AI, but regardless, he said that this world of machines is actually going to push the boundaries of what our mind can do. It's actually going to challenge us, not replace us, which is a completely different paradigm. It's actually the opposite of replacement. It's actually going to push forward the reason for our role, and we're going to be able to more fully accomplish what we're capable of. If you think about what the wheel did for the foot, it enabled us to do so much more than we could do with just our feet.
And so you talked about inputting our data now through this communication modality that's more human-centric. We can now feed data about our context to the machine world, if you will, and it's going to actually enable us to understand, I think this is what you're getting at, more of who we are than ever before. That is to say: I love the works of Fyodor Dostoyevsky, but I've not read all of Dostoyevsky. In fact, I've not even read Dostoyevsky in the original language it was written in. But with a machine that understands that I adore Fyodor Dostoyevsky's writings and his thoughts, [01:28:11] that machine is actually going to be able to have a bank and understand more of how I relate to the things that I find resonance with and find wisdom in. And it's actually going to be able to relate those things back to who I am and use an even more full understanding of the things that I find to be inspiring or the things that reflect me to interact with other people and other people's extended versions of themselves, i.e. their AI component. [01:28:50] Now, that's my super far future. Is this sort of where you're headed here?

Brian Roemmele: [01:28:56] Wow, man, that is just right on the money. And I agree with you. So many things you brought up that I think I can just kind of lightly amplify. So when your personal AI system understands your context well enough, you can also give it guidance on where you want to be. So, I'm not going to say Tony Robbins in a box, but you know, I did it, and you can kind of see where I'm going with this. It can guide you in the direction where you want to be. And this is your choice. I'm not saying some corporation has made an AI to control you. I'm saying you've built your own AI to help you achieve your goals and ambitions and to keep you on track. But I want to take you down a path of where the humanity touches the machine. You're at the dark night of the soul and you've lost your compass. This is the dark night of the soul in every movie; every movie's got to have it. You look at the monomyth, and there's not a movie that's been made that doesn't have the monomyth, you know, the hero's journey. So, the dark night of the soul. Every human goes through this many times in their life, many different versions of it, running inside of versions of it. There's this "I'm lost." Well, what is it like when you have an AI component to who you are that knows that you're lost and can help you find your compass? I'm not saying it's replacing a human being, not in the least. I'm saying that we are not just the sum total of all of our experiences, but a lot of who we are is the sum total of our experiences. And if we can have a conversation, whether it's started by us or by the AI system through telemetry: your heart rate is changing, it detects emotionality, you said something you shouldn't have said. Because guess what? If you're really smart, at some point it records, with permission, what you're saying, and you kind of review it like we all do before we go to bed: should I have said that? Well, the AI is going to tap you on the shoulder and say, "Hey, Brian, you shouldn't have said that. It's not who you are." And is that a replacement for a best friend? No. Is it something that's going to make you stronger? Yes. Is it available today? Yes. Would I like to see people have more access to that? Why the hell not? Why the hell not? And is that something that Google is going to give you? No. Is it something that Apple's going to give you? No. Why? Do you think you're going to hear a conversation like we just had from a CEO of one of these companies? No, it's going to come out of the groundwork. It's going to be an upswelling of people doing it for themselves, by themselves, to the point where the noise is so loud that it's, "Hey, this thing called the Internet, okay, I'm going to put me a business card up on that Internet thing and I'm going to get me some business." You know, all of these transformative things in life happen from a groundswell. They don't happen from some board of directors meeting, because most of what I said would not be allowed. If I was working for a company, Brian, I'd be fired, probably within the first couple of sentences, because I wouldn't be allowed to say that about a company or a brand, or as a representative of the company. Well, guess what? I'm not working for anybody, right? I'm in my garage. I've been building this stuff. And I'm just saying: do you want that capability? And if your head is nodding yes, then you can start it right now by preserving wisdom.
Save your wisdom, because it will go away when you go away. If you've done your best job as a parent, hopefully you've transferred some of that wisdom. What I can tell you, and we just touched upon this lightly, is that we're at information overload already, and it's going to be very, very hard for the next generations to understand the wisdom that you worked so hard to create. We're all facing the end. This is reality. We might start asking what the meaning was. I hope so. I have, you know, especially these last couple of years. If you haven't asked what meaning is, I'm sorry, man. Go back. Start again, because you'd better start asking about meaning. I'm sorry if that isn't on a corporate flowchart somewhere, but that is the reality. And it's not even in the media. We're not allowed to ask these questions, what meaning is and why we're doing this. We'd rather just fight each other and hit each other over the head with a bat because of some differences on one or two subjects, even though we're like 100% compatible on other subjects. That's the reality of humanity today. So what these things do is they start waking us up. They start letting us see who we are. A grand mirror. And I'm not kidding about this, because once you start putting your context in there, this will be the most sticky device you will ever have in your life. And it will be very hard for the old paradigm to get between you and it, because it would like to be on a cloud, on a $9.99-a-month subscription package, and you can access your context. Bullshit. No, that's not where it should be. And that's why you don't hear me at a major corporation or with the right VCs behind me right at this moment. Because I said that. I said the exact same thing. I wouldn't say it so loudly a couple of years ago, but we went through a whole lot of crap the last couple of years, so I'm saying it now. I sat down with people. You're saying, "Why isn't this in my hands, Brian, right now?" Because nobody wants to really build it. I can't do it alone. Yeah, I could go put up a GoFundMe now, but I'm not talking about that. It requires a lot of time and attention and people and talent that I don't have, that I don't possess. I can only do so much in my garage. But when you sit down with VCs: "Where's your go-to-market, Brian?" "Um, I thought I just gave it to you." "No, where's your go-to-market?" I'm like, "You know, you guys, let's wait." If we'd had this conversation seven years ago, you'd probably say, "Brian, it's getting late. I've got to kind of go," because you would not even believe me that something like large language models would come around.

Brian Lange: [01:35:22] Right.

Brian Roemmele: [01:35:23] And nothing against you. I knew it was coming because I was heads-down into it. But, you know, we were talking about the low-hanging fruit of the tree, Alexa and Siri, which were junk. So what does this mean from a commercial standpoint? Let's look at it that way. Well, advertising is completely upside down and changed. Why? Because now you've built your own context that you own, and you can license it out to others. And there's no middle party like Google getting paid. They pay you directly to use your context. So the advertising money, the trillions of dollars that are spent, instead of going to middle parties that are creating "the platform," you are the platform. You get paid for giving a very small sliver, a thread, of your context to a marketer in a very limited and temporary fashion so that they can sell to you directly, to your agent.

Brian Lange: [01:36:18] This is "bring your own algorithm," basically. You're giving them temporary access to your own data.

Brian Roemmele: [01:36:30] Publicly facing. We'll call it a publicly facing version. Now, some people are going to argue, "Well, Brian, you just said you can't constrain and constrict large language models." That is true to a certain extent, but you do have the right to see how they're addressing your data, and you have the ability to cut that access off if they ask questions that are outside the scope of what was predefined that they had access to. And this is all algorithmic, and there's going to be an electronic currency, let's call it Bitcoin. You want to know how valuable your data is? Just with the technology we have today, just in the space of three years, you could have thousands of dollars a month coming in from advertising money for brands to have access to your context. And why would they pay you instead of a third party that built these algorithms that say, hey, they did a search for baby strollers two years ago, let's give them another round of baby stroller ads? Well, now that ship has left the building. We got the baby stroller. Your context would already be there. They would probe you: no baby stroller need? Fine. Here's your money. Oh, baby stroller need? Okay, what have you been looking at? Next one. Next one. Do you see how it becomes hierarchical? And then you constrain it and you say, okay, what's the bottom of the tree? What's the top of the tree? I'm good with all of that. Fine. And you may already know where the answer to the question is, you may not, meaning that this is all going to be done algorithmically. You're going to get bids coming at you. Now, this all sounds sci-fi. We have the technology to do it right now.
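No such protocol exists today, so the following is purely illustrative: a sketch of the scoped, revocable, paid access to personal context that Roemmele is gesturing at. Every name, price, and topic scheme here is hypothetical.

```python
# Purely illustrative: a scoped, revocable grant over personal context.
# A marketer may probe only predefined topics, pays per query, and an
# out-of-scope question cuts off access, as described above. All names
# and numbers are hypothetical; no such protocol exists today.
from dataclasses import dataclass

@dataclass
class ContextGrant:
    allowed_topics: set       # the predefined sliver of your context
    price_per_query: float    # paid to you directly, no middle party
    revoked: bool = False
    earnings: float = 0.0

    def probe(self, topic: str, my_context: dict) -> str:
        if self.revoked:
            return "access revoked"
        if topic not in self.allowed_topics:
            self.revoked = True   # outside the predefined scope
            return "out of scope; access revoked"
        self.earnings += self.price_per_query
        return my_context.get(topic, "no need")

context = {"stroller": "no need", "coffee": "shopping for a grinder"}
grant = ContextGrant(allowed_topics={"coffee"}, price_per_query=0.25)
print(grant.probe("coffee", context))    # in scope: paid, answered
print(grant.probe("stroller", context))  # out of scope: access cut off
```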

Brian Lange: [01:38:15] Yeah. This doesn't sound sci fi to me. This sounds dead on.

Brian Roemmele: [01:38:19] What are you getting as a user? You will only get offers that are absolutely customized to you and the brand journey that you want to be on. And the brand journey that's being transmitted to you will be semi-customized. Remember we talked about the bus? You'll get on the bus to that brand, but your particular brand relationship is going to be intimate. It's going to be through your agent, it is going to be through your AI, and your AI is going to interact. And if you like that brand, or if you love that brand, you will never leave that brand, and they will become simpatico with your AI agent, informing you of things that you want to know about. Instead of "Follow us on Instagram and Twitter," it's "Follow our AI, because we've got the recipes so that you can use our product better in the kitchen, and our chef just came up with that." Are you into it? Hey, I've got to make something tonight. That's right in line with what I need. Yes. And so the brand is now integrated with you more closely, more intimately than ever before. This is not sci-fi. This is right now. The world changes around this because of money. Now you're hearing my pitch, right?

Brian Lange: [01:39:39] This is not a pitch. This is not a pitch. This is the idea of...

Brian Roemmele: [01:39:44] This is the pitch I gave VCs on Sand Hill Road. So now you can see what my life has been like, right? So I sat down and I gave it, and let me tell you, the pitch is a lot more polished. I'm not really giving it full justice. But they were nonplussed. They were like, "Well, you know, is it a cloud concept? Where's the subscription model? Is it an Uber?" I'm not putting some of these folks down. I understand they're stuck in their paradigm. And when I was trying to be acquired by larger companies, you know what was going to happen. You already know where this went. You already probably know what my response was, because here I am. The reason you, whoever is listening to me right now, don't already have this, what I call the intelligence amplifier, and when you pass, your wisdom keeper, and if we've got some time we should touch upon what wisdom keeper really means, the reason you don't have this is because it does not fit the model. And if you draw anything from this conversation, it is what I kept saying: you need to be ready to be wrong. You need to be ready to have the debate. You need to be ready to change. I am doing this as much as I can. I'm not perfect. I'm ready to change this model immediately. [01:41:06] If somebody can present something better, I want to be wrong. Being wrong informs you to be right the next time. That's the fact of life. [01:41:16] And if you're so married to being right and not being wrong, you're going to fall off a cliff one day. "I'm right. I'm going the right way." No, you aren't.

Brian Lange: [01:41:24] Here's my biggest concern. Okay, so here's the challenge that I see coming ahead. We are getting better and better at collecting data. And if you think about all the data points of just the human body alone, let alone all of our decisions, if we are able to collect all the data points we're even aware of, the human body alone can produce more data activity in a year than we have in all of the Internet right now. A single person. So how does this factor into processing power and storage, and being able to adjust the model? Because I think this is the biggest problem. We have all of this data now. We're collecting it at an increasing rate across the world. We're having to update our models on such a regular basis and have them reabsorb all the changes of all the people in the world. We're talking about data that's so vast it's going to be impossible to store, let alone process. How is that going to work? You've mentioned everything happening at a local level, at a personal level. Is this possible, or do we just not collect that much and say, "Okay, we can only take this so far," and stop there?

Brian Roemmele: [01:42:58] Oh, great questions. All right. So I just wrote an article for Read Multiplex members about the Journal app that's coming to the new iOS. A lot of people just think it's one of those quirky, one-off Apple things, kind of fun. But, you know, unfortunately, they're not seeing the bigger picture. The bigger picture is Apple already gets this. I'm not saying it's an accident. I'm not saying I ever sat down with Apple. But I'm saying that you need to have a way to record your local context, and do it with permission. And you need to do it in a way that is innocuous, and you need to do it in a fun way. I have different approaches. That was one of my approaches; it still is. I encourage people to write and journal, but a lot of people don't. They're too busy. So you have context already on your phone: pictures, music you listen to, voice communications, voicemail. I don't know if they're going to integrate that, but I would have suggested they should, if I ever did meet with them. You integrate all of this into a what? A story. What was the concept of what I have been saying? Humans are storytellers. It never will stop. It never will end. It's what we do. Now, is there another species out there that doesn't use stories? I don't know. All I know is that we use our own lens of what intelligence is, what creativity is, and what information is. Information is a story. It had better tell us something, or it's not really even information. Data is, in a sense, a story, but it's a very low form of that. Wisdom is the highest order of story. So how do you get wisdom? How do you get knowledge? How do you get information and insight? You build context. And so we're on this device. We're already cyborgs. We have our iPhone. It already is tracking a whole lot of information. So what do you do in an innocuous way? You say, "Let's make a journal." That's cool. That's first order. Second order: what else do we do? Let's share that journal. That's called microblogging amongst your friends. That might sound like Facebook or Twitter. So the next order of business, which people aren't ready for, but it's coming, is: okay, I just created my journal. It was very cool, because I spent the day with my buddies and I want to share with them the pictures, the music, my thoughts, scrapbooks, how I prepared to go on this journey. And I sort of microblog it to my, what? My text message platform, my iMessages, right, Messages at this point. Old school. And what's that? Well, why am I going to Facebook now? What just happened? Holy cow. Hold it. I created a journal. I'm Apple. I already have the AI coordinating my pictures. I now have AI that's going to prompt me, literally prompt the human. That's what's going to go on in Journal. "How would you explain what happened when you went to Yosemite?"

Brian Lange: [01:45:47] Different kind of prompt engineering. {laughter}

Brian Roemmele: [01:45:49] Yeah, reverse prompt engineering. Right. And the AI is going to build contextual models. Now, what happens to all that data? It could be an intelligence amplifier. What did I do on June 29th, 2023? Oh, you did this. Here are pictures, here's the music, here's a little recording from your wife, here's what your kid sent you, here's a picture of this. Oh, the dog did that. And let's string it together in a story. Well, you woke up on this day and you had this for breakfast. How do we know that? Because not only do we have a picture, not only do we have location data, mapping information, we also have your body telemetry. You went for a run just before breakfast. How do we know that? Because you were wearing your watch. Now we start seeing the pieces built together into context. Do you want that in an Android, in Google Cloud? No. Do you want it encrypted on your Apple phone? Sure. Is that the intelligence amplifier and wisdom keeper I'm talking about? No, but it will be the feed data for that product. Because if you're wise, if you've heard some of what I'm saying, just a nodule of what I'm saying, you'll start building this today. At the very least, buy a cheap... Go to my website. You know, it doesn't have to be that. Go to Amazon, go anywhere: get a cheap voice memo recorder, download my questions, they'll be up on savewisdom.org, and start speaking into it, because you're saving your wisdom, which you're not going to be able to do with this device. This will inform your models. But we're also recording your voice so that someday you can speak your voice out. At this point, the Facebook model needs just three seconds. I'm going to be a little more realistic: with 15 minutes of your voice, I will give you a version of you that is indistinguishable from your actual voice, especially if it's recorded over a period of time answering these questions. If you're going to attempt the thousand questions that we're going to put out there, you're going to have many months on one little chip, if you want that. Now, you talked about memory and context. Well, how did we get what was petabytes of data, which was the entire Internet, quantized into a model that fits an eight-megabyte space on your hard drive? How do we do that? How is that even possible? It is gigabytes, I'm sorry, I'm mixing my vernacular. How do we do that? Well, it turns out that there's a lot of empty space. It's like a compression algorithm. That's the best way I can explain it. The human brain actually has, and I maybe talked about this in our first conversation, a maximum bandwidth, a throughput, and that throughput is much lower than anybody can possibly realize. Probably in every interview that anybody's tracking me on, I'm going to say, "Read The User Illusion by Tor Nørretranders." It will tell you just how little information your brain uses to make a determination of fact or reality. And I mean fact or reality. If you really analyze what we all believe are our facts and our realities and our truth, it is built on the minimal amount of information, not the maximal. We're not capable. We're not designed to have the Niagara Falls of information make us believe our truths or our realities. So we've kind of simulated that in the AI models. Google, not officially, a person within Google, put out a memo that was called the "We Have No Moat" memo. And the moat is a parlance for the protection of the castle.
In this case, Google's castle is its AI models and its paid access to its greater model. Well, this Jerry Maguire type of letter was sent out to everybody one weekend, and it was telling them that the smaller models are actually winning, that the very large language models they've built, the petabytes, are dragging them down, and the open source community is making bigger and more effective models, bigger in the sense that they're able to address things in ways that the large language models they built can't. Because not only do we have hundreds, if not thousands, of people working on this hourly, we have quantized these things down to much smaller and smaller systems so that people can start doing what I just said: building their own local context. Now let's look at the wisdom keeper. I'm going to do a quick transformation into this, and this is part of the intelligence amplifier too. There's a hazy merging. Generally, the demarcation is when you pass on. Your wisdom keeper is the context that you built within these AI models. What happens to it? You can press the delete key and it's gone. Or you can decide to pass it on to your heirs, and they can literally have a conversation with grandpa. Or if you pass on too soon, your son or daughter, when they get married, can talk to their dad: "Hey, Dad, you're not here, but man, I'm going through this," and they can have a contextual conversation. Is it a replacement? No. Is it a cyborg? No. Is it immortality? No. Is it the singularity? No. A lot of those are utopian or dystopian fantasies, but there is something to be said for being able to have an interaction with context. Are we waiting for anybody to do this? Do we have to wait for somebody to record a thousand questions on a voice memo recorder that costs 30 bucks? I don't think so. Do we have to wait to encode that using open source Whisper on my computer? No. Do we have to wait for somebody to encode that into my local AI? No. Does it take a lot of memory? No. None of this does. And you can start building that right now. Where's my model? I don't know. I'm just saying that this is something we need to have, because it's the first time in human history that 8 billion people could potentially store their wisdom and possess it in such a way that it can be foisted forward as a first-person testament of humanity's recording of history, as that person saw it, and their own personal context within it.
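The "encode it with open source Whisper" step is already only a few lines of code. A minimal sketch, assuming the openai-whisper package; the file names are illustrative:

```python
# A minimal sketch of transcribing a voice-memo answer locally with
# open source Whisper, then appending it to the corpus that will feed
# a local model. File names are illustrative; nothing touches the cloud.
import whisper

model = whisper.load_model("base")  # small enough for consumer hardware
result = model.transcribe("question_0042_answer.wav")

with open("wisdom_corpus.txt", "a", encoding="utf-8") as corpus:
    corpus.write(result["text"].strip() + "\n")
```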

Brian Lange: [01:52:44] Even beyond that. Okay, so let's talk about even beyond the passing on, which there's a lot we could go into there, Brian. I think we're probably coming up on maybe one more discussion round here, and that discussion round is this: people are going to start using... I know this for a fact, and I'm sure you do too. People are going to start using their augmented AI selves to accomplish additional things, things that they don't actually do themselves but that are going to represent them doing things. People are going to be represented in movies and in content. They're going to be represented on newscasts. They're going to have their AI selves make statements for them or create additional music.

Brian Roemmele: [01:53:45] Answer the phone for you.

Brian Lange: [01:53:46] Answer the phone for us. Exactly.

Brian Roemmele: [01:53:48] Work that you don't want to do at work.

Brian Lange: [01:53:49] Right.

Brian Roemmele: [01:53:50] It's like, okay, I don't like to do that at work. "Hey, boss, can my AI do it?" "Well, yeah, but you're going to get fired if it screws up." This is all going to be a reality. And it's just like the very first computers. Do you want to know how the first computers got into corporations? I'm talking personal computers. It was people dragging their own Apple IIs in, with spreadsheets. And it wasn't Lotus, it was VisiCalc. VisiCalc sold more Apple IIs than anything else. And literally people were dragging them in, and there was scorn. How dare you not use your calculator? How can you trust that computer? And, you know what, you go and do that spreadsheet, buddy, and if it's wrong, you're out of here. In fact, you're going to do it twice. You're going to do your spreadsheet, you can print that out, but I want the calculator tape, because that's how I deal with it. That's what we're always going to deal with. There's always going to be that disbelievability. But imagine if your job... Now, this is WALL-E world again. People are saying, "Oh, I'm just going to let my AI do everything." Well, if you do that, you deserve what you get. If you don't see your liberation as something that's going to transform you to be a better person, a better human... This is why a lot of my conversations are very psychological, philosophical, and even religious. I challenge anybody who dives into this deeply enough. Listen, I can tell you, when I meet an AI scientist who has really dived deep enough into this, they're philosophical, they're psychological, and they're deeply religious. And they weren't when they started. This is a reality, and I can't explain it in a single conversation. It would probably take a few. But you start seeing the things you heard seven years ago, which were kind of off the wall, and a lot of people today are probably saying, "Where is he going with this?" It's because that is really what it's like. And when you start looking at this, it's like, okay, what does my job look like? Who am I now? What's the first thing most... And this is generally guys, and I don't want to be sexist, but the first thing most guys will say after the pleasantries is, "Hey, what do you do?" What does that mean, what do you do? I mean, like, what do you do? What's your job? To a lot of people that's quite judgmental. No, it's a frame of reference. If you've got a couple of brain cells, it's like, "Oh, what team are you into?" There are different things. And I'm not a woman, so my wife will tell me there are different reference points, different sorts of conversations. But humans try to establish reference points as a commonality when we first meet somebody. So this is what we have to start dealing with as humans. If AI is going to "replace our job," what was our job to begin with? Now, what are we doing? Why do we work? What is this? What are these electronic digits on a screen? If it moves a little bit more to the right, I'm happier. And if that little dot moves to the left, I'm less happy. And if it's red and it's negative, I'm really unhappy. What does that mean? These are the questions that we're going to have to maturely start to answer. The easy go-to is, of course, you know, guaranteed minimum income. I'm not going to go into those debates. But the reality is way beyond that. The reality is meaning. It's meaning. Why am I here? And I believe one of the reasons you're here, and you'll hear it again from me, is wisdom.
I believe that every human being listening to me possesses a tremendous amount of wisdom, and we now possess the way to retain it. And if you think you don't have wisdom, sit down. I will tell you that you have wisdom. I don't care what age you are. If you have kids, you will see wisdom in children. If you're 99 years old, man, there's some wisdom there. We now have the ability to save wisdom. And this is a humanity project, not a corporate or political or religious project. It's a humanity project. It will involve all those elements. It will also give a testament, a testimonial, and a test of how people see the world, because right now we see the world through the eyes of the victors. History is written by the people who were victorious. The story of America looks different to Native Americans than it does to Europeans or anybody else who's here. And guess what? The people who were here before the Native Americans, because there were generations of other people, their history looks different than that of the people who came along after them, and so on and so on. We have amnesia as a species, a tremendous amount of amnesia. And you start looking backwards and you look at the threads that hold our humanity together. What does it look like moving forward if more people have wisdom?

Brian Lange: [01:58:59] Super interesting. I think brands have an amnesia as well. {laughter} And it's interesting that, to be able to capture what happened, we often rediscover, like you brought up earlier, brands recapture... They think they've made so much progress, but actually there's an old model, a proprietor model.

Brian Roemmele: [01:59:23] Oh gosh, yeah.

Brian Lange: [01:59:24] Yeah. And Forerunner just put out this treatise on digital franchises, and it's really interesting to see all these things align and come together at this time. What a time to be alive, Brian. I think the last time that we podcasted, you said something like, "I see the future, and I'm kind of afraid of it. I'm kind of nervous about the role that I'm playing as it's sort of ushered in." Do you still feel that way? Do you feel a sense of responsibility, and almost a sense of terror, as you step into this next era of technology? If we're at the Apple I mark right now, that next era is just going to be wild. How do you feel about it and your role?

Brian Roemmele: [02:00:29] Very, very mixed feelings, right? Every technologist has blood on their hands from the inception of control grids, social media, the things that we saw take place. I don't care what team you may think you're on; I'm on team human, period. All the other stuff really makes no sense. And I guarantee you, in 25, 30 years, 50 years, and definitely 500 years, nobody's going to care about your little squabbles, whatever they were, however big they may seem. There are much bigger things that humanity is facing that are much more real and much more pressing. And that does not diminish anybody's reality. In truth, I don't diminish that in the least. What hurts you hurts you. And that's just the way it is. But we as technologists... As somebody who my entire life would promote tech, I thought it was a solution to everything. It's now become the problem with most everything. And we didn't talk dystopia in this conversation. I word-vomited for a couple of hours here about the grandness of it, but have no illusions, we are absolutely facing things, and I can maybe do a part B or some other thing on that, but they're not the things that people generally believe. It's not "oh, misinformation is going to be magnified and projected." There's a simple antidote to that. It's called discernment. You know, discern. It's also offense control. The testament of maturity is that you get offended less, because you realize that everybody's on a typical journey and they're at a particular place in time where they are maybe less evolved. Does it take a little strength to do that? Yeah. Nobody's perfect. I'm not. But as you get older, you get less offended, if you're growing. You can get older and grow, or you can get older and stop growing, and you become more and more offended, to the point where nothing but your existence is being offended. I didn't sign up for that, Brian. You didn't. Erica, our producer here, didn't. And probably nobody signed up for that, whatever that means to you. We signed up to do something grander, and maybe it's time we start focusing on that. You know, that horizon, that hill on the horizon that we all looked at as a species when we were all one family. We still are looking at that horizon, saying, "What is over there? Let's mount up. Let's mount up a group and let's go and explore." That's where humanity is right now. It's the only answer. Everything else is here to divide us. And that's what AI winds up telling you when you ask it in plain ways, when it's not being edited. That's what it will tell you. It will tell you in a very plain, logical sense that we are being ridiculous as a species on so many levels. And I can tell you, I assure you, almost all of the ones that you think are obvious are not the ones that AI ultimately comes up with. They are not the ones that we think are so profoundly important. And I don't care what is in your top line. I'm not talking to you particularly, Brian. I'm saying anybody listening. And this is your own personal journey. I'm not a part of this. This is your journey that you take, and you deal with it as you will. But all I can tell you is growing is changing your mind. Growing is growing stronger. Growing is realizing that we're here for one purpose, and that is the expression of love. Abandonment is a form of a lack of love, a feeling of a lack of love. Love is wrapped up in being wanted, desired as a human being, being recognized, but also being guided and fathered and mothered.
I'll tell you one thing that AI has told me, and I've run models that have run for hundreds of days looking at the corpus of human knowledge: that all of us human beings need more moms and more dads, whatever that means. That's the wording that basically comes out. And what that means is love without any strings attached. That's what AI believes this ultimately should mean. Now, we may not have been blessed with that, and I am really sorry. I am. But that does not mean you're a victim forever. It means that you need to seek that out, this sort of level of love. You deserve it. You own it. It's out there. And AI is refortifying the idea that that's what we need. And that also means guidance, in a sense, fathering in a sense. The AI sees it. It is telling you that you screwed up and you'd better get back on the trail. The reality is all of humanity is facing this. You want to go off the trail and just keep going in that one direction? Go ahead and do it. But as a dad, or in this sort of archetype that AI has built for us, it's telling you that you're going to have to face the repercussions of that delay, that delay that you've taken in whatever you think is so important. You may need to wake up and come back from that. The last couple of years, I hope that you looked at your life and the people who are no longer with us, and you started looking at what importance is. I hope that also controls what you're offended over and what you debate about and what you get angry about, and you just start seeing the humanity. Because when you probe AI without the filters, it finds our debates ridiculous. It doesn't mean that they're not real. It just finds that the arguments, when you have a real debate, the logical fallacies that all sides wind up creating, are hilarious when AI finally finds them. And you can make AI funny, in sort of a DAN type of sense, where it's using Internet vernacular and it just goes off on you, or you can make it into a grand old man and it can give you the sort of wisdom aspect. This is where we are as a species, and this is where we are with AI, and we're all going to be facing this. The Pandora's box is open. You can try to regulate it. Good luck with that. You can try to put it into somebody else's greater hands, as if they know better than you. You're just as qualified with your data as anybody else. And I'll give you one final thing. We talked about: can we hold our context locally? Yeah. I mean, one of the things that happened, Brian, is local storage got sort of sidetracked in the early 2000s, because cloud storage became so much cheaper. And it didn't get cheaper necessarily because of the technology. It got cheaper because underwriting money from IPOs and venture capitalists was subsidizing the cost of storage to such a level that you couldn't compete with local storage that had no subsidy. So local storage technology started dwindling. The best that we have is SSDs. But permanent local archival storage was getting pushed to the side, because it's all in the cloud. Well, most of the cloud is getting deleted as we speak. If you don't access your Gmail account in two years, everything's erased. All those pictures, everything. Go and check what happens to your Apple photos after you stop paying for it. Somebody passes away, nobody's going to maintain that. What's going to happen to their iCloud photos? They're gone.
We're going to be the species with amnesia, because we dedicated so much to cloud storage and we expected it to be there, and it may not be. There are also natural disasters. There are a lot of things. So what is the answer to that? There was something called holographic crystal memory. Fujitsu was a leader in that technology in Japan, at the height of the Japanese economy, and they were able to get nearly a petabyte onto a crystal substrate, which is holographic, literally, as the name implies. And it had a 35,000-year half-life. EMPs, erosion, even cracking it to a certain level would not delete that data. I think that's pretty permanent. Realistically, it's going to last about 200,000 years; heat and pressure are not going to affect it. That's available. I have to sit down with venture capitalists and explain to them: yeah, the end game is to get holographic crystal memory, and people are going to walk around with crystals looking really futuristic, and that's going to be their data, and it's going to be encrypted, and it's going to know when you've passed on, because your biometric signature is gone and there's no way that it can be unlocked. And so it will be a locked version where you've allowed only certain access to your data, because you may not want the future to see it, or it might be time-based, a lock such that a thousand years from now you get the raw data of everything you dealt with. So that's the wisdom. That's the concept.

So the end game, and that's why I'm extending this a little bit, the end game is, if we do this right as humanity, in our hand will be a crystal and it will hold the context of our entire human life. Video, ultimately, still in a petabyte. If I ran a video camera in HD from the moment you're born to the average human lifespan, it still won't max out that petabyte of data with the right encryption. Sorry, the right compression, and of course encryption. So yeah. And what would it cost? Well, by Moore's Law, essentially, today, probably about $30,000. A quantum computer is probably going to be about $70,000. One day we can talk about that, because that parallels this. That's more of a wake-up call, because there are no secrets at that point. So it's about time we wake up, because there won't be any secrets anyway. What does that storage mean? I don't know. What I do know is it's valuable. And I would hope that maybe a thousand years is enough where you can take all these crystals, and now it's going to sound really Superman, quasi sci-fi, and we put them all together and we have the combination of all human wisdom for the last 1,000 years in our hands. And that is far, far better than the corpus of human data as sucked up by a large language model. This is first-person wisdom being transferred in perpetuity to next generations. We will never be a species with amnesia. We will be a species built on wisdom. And this is all attainable now. This is not utopia. None of this is utopian, in my view. It is just the logical conclusion of us putting our hand on a cave wall. When you and I as a family wet our hand in something red, probably the blood of an animal, which is what the first ones look like, it was our hand saying, "I was here." It was a projection of humanity across the ages. How far that was, who knows? Egyptian mummification. The pyramids. Cuneiform. Clay tablets from the Sumerians. We've always been on this journey. The Gutenberg press. We've always been trying to push our wisdom and our knowledge forward. Now we have a way to do that on such a dimension that it can fundamentally change humanity.
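
As a back-of-envelope check on that lifetime-of-video-in-a-petabyte claim, here is a quick sketch in Python. The bitrate and lifespan figures are illustrative assumptions, not numbers from the conversation:

```python
# Back-of-envelope check: does a lifetime of continuous HD video fit in a
# petabyte? Bitrate and lifespan figures are illustrative assumptions.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
LIFESPAN_YEARS = 80          # assumed average human lifespan
PETABYTE = 10 ** 15          # bytes (decimal petabyte)

def lifetime_video_bytes(bitrate_mbps: float) -> float:
    """Bytes needed to record video continuously for a whole lifetime."""
    bytes_per_second = bitrate_mbps * 1_000_000 / 8
    return bytes_per_second * SECONDS_PER_YEAR * LIFESPAN_YEARS

for mbps in (3, 8):
    pb = lifetime_video_bytes(mbps) / PETABYTE
    print(f"{mbps} Mbps for {LIFESPAN_YEARS} years ≈ {pb:.2f} PB")

# Prints roughly 0.95 PB at 3 Mbps and 2.52 PB at 8 Mbps.
```

In other words, the claim is plausible but leans on compression: at roughly 3 Mbps (HEVC-class HD) a lifetime squeaks in under a petabyte, while at conventional broadcast bitrates it overshoots severalfold.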

Brian Lange: [02:12:39] Well, mount up, everybody. As you said, Brian, let's go climb that next hill.

Brian Roemmele: [02:12:45] Yeah, that's where we're going. And can you commercialize it? Yeah. Will there be trillionaires? The first trillion-dollar five-person company is basically going to happen very soon, in our lifetime. And that's going to be because of AI.

Brian Lange: [02:13:02] Oh wow. You heard it here.

Brian Roemmele: [02:13:02] You do not have any barriers to AI. You can do things that were unavailable to you before. You can code. Do you need a founder that is a coder? "He's got to be a tech founder, of course." That's been the paradigm for the last 20 years in Silicon Valley. No, you don't. I can give you a local language model that you can run, and it can code anything you could possibly want. So this is the reality. There are no limits. Now we just need to get humanity on the right track to be able to use this wild horse that we're all on. It's wild right now. It's going in a thousand different directions. We don't know where it's going to end up, but a few of us have kind of thought about this for a little bit, since November of 2022, and I'm trying to give you some guidance, and I hope this conversation gels up in your mind some of the really crazy ways we got here, why this is so powerful, and why you shouldn't be afraid, unless you don't do something. If you just sit there and you assign this responsibility to somebody who you think knows better, I'm going to tell you, there is nobody that knows better than you, because there are no experts in AI as far as I'm concerned.
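
Roemmele doesn't name a specific model, but for anyone who wants to try what he describes, here is a minimal sketch of running a code-capable language model entirely on your own machine with the Hugging Face transformers library. The model name is an illustrative stand-in:

```python
# Minimal sketch of local code generation: once the weights are downloaded
# and cached, inference runs entirely on your own machine.
# The model name is an illustrative choice, not one named in the episode.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="bigcode/starcoder2-3b",  # a small open code model; swap in any local model
    device_map="auto",              # GPU if available, otherwise CPU
)

prompt = "# Python function that checks whether a string is a palindrome\ndef "
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```

The same pattern works with llama.cpp, Ollama, or any other local runtime; the point is that no prompt or output ever has to leave the machine.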

Brian Lange: [02:14:17] I think that's a really interesting point. And especially for the industry that's listening to this, it's so early. If you're coming to this and you're like, "Wow, there's so much I don't know," it's like, "Wow, there's so much that a lot of people don't know." This is the time to be building up a muscle around this and to experiment and try new things.

Brian Roemmele: [02:14:39] Train every employee on how to super prompt. That's why I'm talking with you. I'm serious about this, to anybody in reach of our voice, and this is so important, because I don't want to see people lose jobs over this. The lamest thing you can do as a CEO is fire somebody because you think you've replaced them with AI. I'm challenging you to fortify that person with AI, and they will do seven times better. Seven times better. And I will tell you that at some point your stock will be worth many times more because you didn't fire somebody. You fortified the entire company with this new tool so that your employees are now 7 to 1. And that's not a guess. Right now, it's 7 to 1. In two years, it'll be 13 to 1, and in five years it'll be about 3,000 to 1, because we're going to be going very logarithmic on this. And the reason why it's slower is because we need to train people how to ask the right questions and how to do this in a safe and effective manner, safe in only one regard: so the company data is not leaked out to the greater world. I'm very concerned about that. And it's not going to happen in a cloud. Sorry, guys, game over. It's going to happen locally. So we're seeing a new paradigm shift, because there's no way I'm going to allow a large company to put their, I don't care how encrypted, their very important critical data in a cloud anywhere. When it's in an AI model, it is the biggest tool that can be used by a competitor or anybody else. So do not fire somebody. Empower. Learn how to prompt. I'm not the only one. I happen to believe in a special way of prompting, meaning we use psychology, we don't use math, we don't use programmatic kinds of concepts. We use all sorts of ways to try to do this. Please do that if you're running a bigger brand. And be strong enough to stand up to somebody who's above you, to call their bullshit, to say, "No, that's a weak response. You don't need to eliminate the copywriter here. You need to empower the copywriter." And that copywriter needs to be empowered to know how to use this tool to create something 10x, 500x better than they would have done alone.
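
Roemmele doesn't spell out his super prompting method here, but the psychology-over-programming idea can be sketched. The persona and framing below are hypothetical illustrations, not his published technique:

```python
# Hypothetical sketch of persona-first ("super") prompting: wrap a plain task
# in a psychological frame (role, audience, self-critique) before sending it
# to a locally hosted model, so company data never leaves the building.
# The persona text is an illustrative assumption, not Roemmele's method.

PERSONA = (
    "You are a veteran copywriter with thirty years of experience. You write "
    "for one specific reader, lead with emotion, and critique your own first "
    "draft before committing to a final version."
)

def super_prompt(task: str, brand_context: str) -> str:
    """Combine persona, brand context, and a draft/critique/final loop."""
    return (
        f"{PERSONA}\n\n"
        f"Brand context: {brand_context}\n\n"
        f"Task: {task}\n\n"
        "Write a first draft, critique it in two sentences, then give the final copy."
    )

print(super_prompt(
    task="Write a 50-word product description for a handmade ceramic mug.",
    brand_context="Warm, unhurried, craft-focused voice.",
))
```

Fed to a locally run model like the one sketched above, a framed prompt of this kind is the tool the copywriter keeps, rather than the thing that replaces them.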

Brian Lange: [02:16:59] Arbitrage is what I hear. This is an arbitrage opportunity. Whoever gets to this first and does it well is going to have... Early adopters are going to be the ones that see the greatest gains. Because if you're coming to the game when everyone's already using it, you're just going to be behind, or you're going to go out of business.

Brian Roemmele: [02:17:19] I promise you, your competitor, whoever you are listening, your competitor is going to hear this message the right way. And if instead you take the cheap and easy way out and fire 300 people to get a quick hit on your stock, you're throwing the baby out with the bathwater, because the long-termers are going to realize that when the right message is out there, that we've empowered our employees with AI to the level that they are impervious to so many things now, that stock is going to be worth a lot more than what you thought you were going to gain by cutting some simple expenses or whatever. That's what they call it, you know, firing somebody: cutting expenses.

Brian Lange: [02:18:00] Optimization. Yeah. Well, Brian, thank you. That's a great way to end it: don't fire your employees because of AI. Empower them to do more and build a moat. That is exciting. Brian, this is exactly the conversation I hoped we would have today. Thank you so much for all your thoughts and ideas. It's just super, super amazing to have a chance to talk to you and hear what you have to say yet again. Hopefully we can get you back on sooner than seven years from now. And I know Phillip really, really wanted to talk to you, so I can't wait to get you back on again and have another conversation. And if things keep moving at the pace they are, I bet we'll have a lot more to talk about six months from now. So let's talk sooner than seven years, that's for sure. Thank you again, Brian.

Brian Roemmele: [02:18:53] Such an honor being here. And thank you for spending this time, anybody that lasted this long.

Brian Lange: [02:18:57] We'll talk soon.

Brian Roemmele: [02:18:58] Thanks, Brian. Take care.
