Episode 20
January 6, 2017

Possibilities with AI and Bots

Part 2/2 of a deep discussion with Brian Roemmele where he dives into more possibilities with AI and Voice.


Phillip: [00:00:24] Welcome to Part 2 of 2 of a conversation that we had at great length with Mr. Brian Roemmele, who is a prolific blogger and writer who has been published in just about every magazine, journal, and business journal that exists. And he's a great thinker around payments, AI, Virtual Reality... He pretty much... He's a renaissance man. He knows a little bit about all this stuff, and he's a great thinker. We love that he spent a good amount of time with us on the show. And if you listened to Part 1, which was Episode 19, you know that we were sort of getting into the dark side of all these amazing technologies. So sit back and relax. You may want to buckle your seat belt, because I think you're going to be surprised at some of the things he brings out here. So without any further ado, let's get into the show.

Brian R: [00:01:32] No. But the thing is, we forget how much energy we're dedicating. Now, I'll go to my phone. Now the phone sounds like it's really a lifesaver. But no, if you really look at it, it actually made the decision more complex. Now you have the tragedy of too many choices. Now you've got to distill it. Now it's a debate. "Honey, would you like...?" "No." "Remember the...?" "Yes." And there's this interaction. "What do you guys like..." "No, don't go there." You wind up getting these debates. Everything gets crazy.

Phillip: [00:02:10] Right.

Brian R: [00:02:10] And maybe forty five minutes later, you say, you know, let's just put some spaghetti on or something. I don't know, something crazy happens. Instead, when you activate that, and you look at the decision trees that take place, your mind is blown. And that's what I call the God moment for a lot of people, because most people will debate with me. "Hey, Brian, I can order faster through my phone." "Go ahead, try it." I go, "Here's your challenge. Go ahead and try it." And it's gonna get sloppy, because what the person's gonna do is a mixture of searching on maybe some social platforms. You know, maybe they use a restaurant review platform. Maybe they'll just do raw searches on Google. Maybe they'll try to hit up a few friends that like certain types of food. All these different things. And at the end of the day, it's not going to be very convenient. If you can enact a chain of events by using your voice... Your voice is just the input mechanism. It isn't doing all the work. I'm not saying "cursor, move up three spaces. OK, turn page, go left." That's what some people think I mean by voice first. It has nothing to do with voice first. That's voice command and control on a computer, a modality that doesn't even exist. It's ridiculous. And you could have done that 10 years ago with Dragon and LNS and a few other systems. What I'm talking about is the convergence that just happened, in 2014 really. And that is good enough voice recognition in the cloud. All of the voice recognition of the devices we're talking about... Google, Alexa, Siri... it's all cloud: intent extraction, you know, speech recognition, etc. It's all being done by thousands of computers that are breaking it up into small parts. Not for every command, but if the problem is hard enough and the task is demanding enough, it's going to be the entire AWS platform working on it. So it's going to do that. And so when I go to order some pizza, it might wind up giving me just one choice. And it will come back very quickly. Sometimes within a second, sometimes a few seconds, never more than a minute, even if it's really complex. And it's going out and doing a lot of the things you and I would do. It will go out and look at sites. It will scan them. It knows the usual suspects. It knows my past, my history. And it's going to choose a place that makes sense. And if it can't, it will ask me, just like a kid. Right? How are you going to program voice in the future? Like you train children. And if you don't want to do it, fine. You'll have kids that aren't very smart.
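[Editor's note: for readers who want the mechanics, here is a toy sketch of the pipeline Brian describes: the voice is only the input, and the work is transcription, intent extraction, and fulfillment against your history. Every name below is a hypothetical stand-in, not any vendor's actual API.]

```python
# Hypothetical sketch of a voice first request, not any real assistant's API.

def transcribe(audio: bytes) -> str:
    # Stand-in for cloud speech recognition spread across many machines.
    return "order me a pizza"

def extract_intent(utterance: str) -> dict:
    # Real systems use trained models; a keyword check stands in here.
    if "pizza" in utterance:
        return {"intent": "order_food", "item": "pizza"}
    return {"intent": "unknown"}

def fulfill(intent: dict, history: list) -> str:
    # Narrow the "usual suspects" with past orders and return one answer,
    # rather than a page of search results to debate over.
    if intent["intent"] == "order_food":
        favorite = history[-1] if history else "the closest pizzeria"
        return f"Ordering {intent['item']} from {favorite}."
    return "Could you say that another way?"

print(fulfill(extract_intent(transcribe(b"...")), ["Tony's Pizza"]))
# -> Ordering pizza from Tony's Pizza.
```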

Phillip: [00:05:04] Wow. Yeah.

Brian L: [00:05:04] We talked about that as well. Totally.

Brian R: [00:05:06] Yeah. So what's that like? Me, I like it to be a lot of fun. I think we need to remove all these facades. You know, Siri is a little snarky, which is better than Google. Google has no sense of humor. And again, that's done out of not wanting to offend or hurt anybody's feelings. And then there's the really stupid sense of humor, like with the weather bot. I mean, that's the other side of it. The other side of it is the plain stupidity of Facebook's weather bot... or it's not really Facebook's. But it's fun the first time, sort of annoying the second time, and you ain't going back the third time. Right? The novelty is gone.

Phillip: [00:05:47] So true.

Brian R: [00:05:49] So novelty is good when it's always novel, and novelty is good when it knows you. Your friends have their own novelty, and you get to love it. Right? You love the interactions. You know their quirks, and you live with them. You know, sometimes it's a little overbearing. Like me right here, you know, and you kind of put up with it, hopefully, and don't pull me off too early. And you kind of pull it back a little bit and say, OK, that novelty's too much. I'm in that experimental mode right now. I mean, I'm cobbling this together, and I can't even pretend to tell you that I know what enough and too much is. But I can tell you that nobody does. That's how early we are. We are before the Homebrew Computer Club existed, if we want to use the analogy of the personal computer being discovered. The Homebrew Computer Club didn't even exist yet. That's how early we are in this technology, even though obviously we already have it. We're so early in what it ultimately is going to do. When we look back at it, we'll say, oh my God, we were so primitive. We actually thought this is how we were gonna do it. You know, AirPods are a good example of another modality. I mean, they are a room based voice first system. Personalized bone conduction in your canal. I mean, there are many modalities. I would say there are 27 modalities I've identified. And it all makes sense. Definitely in your car. It's absolutely utter ridiculousness that we don't have profoundly powerful voice systems in our cars.

Brian L: [00:07:31] I so agree.

Phillip: [00:07:32] Yeah.

Brian R: [00:07:32] And by the way, a self-driving car by definition is a voice first device period. End of story. Right?

Phillip: [00:07:41] Right. Yeah.

Brian R: [00:07:42] You're going to say "Stop." Because if you're sitting there with, you know, a martini, with your back towards the front window, and you look over the back edge... holy cow, you're about to fly off the road. You're going to say, "Hey, Tesla, stop. I mean, stop right now." You know, you're not going to go find the brake pedal. So it's a voice first device, even if it has controls in the car. Maybe if you're sitting by the wheel, you might take the wheel, whatever, but you probably are going to use your voice because it's quicker. And it's probably what's going to happen first. And there are a lot of other things. We already talked about appliances. In 10 years, all the complexity is going to be completely hidden from you. For example, I'm looking at OS X right now, right? OS X is a patina painted over a Unix environment, and everything that I do clicking around a screen is an analogy to a rudimentary Unix kernel command that can be accessed from the command line inside of Terminal. So if I pop up Terminal in OS X, I'm literally talking to that kernel, in a sense. Obviously, programmatically, I have to do some more things to simulate some of the things I do by clicking. So it's a patina over the complexity. It's not that it's a modality we're not used to. It's just not explained to us that way. It's like, oh, that's how a computer works, because you grew up with a computer that looks that way. You just think that's the way it is. In reality, this is painted on there. All the stuff below, in the engine compartment, has even more going on at the machine language and kernel level, down to the processor itself... address codes and all that stuff. We're so far removed. No programmer even deals with that junk anymore. There are compilers. You don't deal with machine code. I grew up programming in machine language, so I have an affinity for hexadecimal numbers and binary. At the bottom of the receipt for Spectacles...

Phillip: [00:09:52] Yeah, yeah. I saw this.

Brian R: [00:09:52] There's a binary code in there, and I discovered it. And I guess I was the first one to discover it, so an executive at Snap DMed me saying, "How did you get it so quick?" "What?" "Everyone who looks at it just thinks it's art." It's binary, in groups of eight ones and zeros. In fact, I read it. I kept reading it again because it says... Pot of GGLD! Pot of gold, I thought, is what it said. Maybe it did. Maybe I'm still bleary eyed. But the whole receipt is a rainbow, and at the bottom it says pot of gold in binary. It's, you know, obviously nerd humor, but it's GG gold. And I guess GG is Good Game. I don't know, maybe that's what they meant. I guess he said I got it right. But I thought it was pot of gold. Anyway. Getting back to the point of the patina. Voice is a patina that is going to gear up this new revolution. So anyway, that's sort of the monologue. I wanted to let you guys start getting in here, because I'll talk until three o'clock in the morning. Does some of this stuff make sense?
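[Editor's note: the decoding Brian describes is easy to try yourself: read the ones and zeros in groups of eight and map each byte to its ASCII character. A minimal sketch follows; the bit string below spells "POT OF GOLD" as an illustration and is not copied from the actual receipt.]

```python
# Decode a run of binary digits into text, eight bits (one byte) at a time.
bits = (
    "01010000010011110101010000100000"   # P, O, T, space
    "01001111010001100010000001000111"   # O, F, space, G
    "010011110100110001000100"           # O, L, D
)

# Split into 8-bit groups and convert each byte to its ASCII character.
text = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
print(text)  # -> POT OF GOLD
```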

Phillip: [00:11:06] Oh, gosh. I mean...

Brian L: [00:11:08] A ton of it makes sense.

Phillip: [00:11:10] You're hitting on so much of what we've really... That's really the whole charter of...

Brian R: [00:11:16] Jump back in. Push back on some of my assumptions. Tell me where you really buy into it. Give me some of your feedback, and tell me where you think I'm just kind of going off.

Phillip: [00:11:25] Brian. Take it away. I know you're champing at the bit.

Brian L: [00:11:27] Yeah, I don't know. As far as off the deep end, I think it's not really off the deep end. It's about what's actually going to be adopted, and when. There are so many things I want to touch on and questions I want to ask. But you mentioned personal assistants, and we talked briefly about how you're going to have to sort of raise them, if you will. And honestly, I've thought about this concept quite a bit. There's going to be sort of your personal assistant, and there are going to be a lot of other bots and assistants out there that are going to interact with your personal assistant.

Brian R: [00:12:06] Absolutely. Yes.

Brian L: [00:12:08] And then you're going to... The question is, are we going to make these bots in our own likeness? Are they going to be some sort of a surrogate? Or not surrogate. That's the wrong word. More than an avatar, but sort of the signet ring of our personal identity.

Brian R: [00:12:27] I love that. Yeah.

Brian L: [00:12:29] And so, are we going to invest in them and make them in our own image, or are we going to treat them as having their own identity and give them their own name? And we will also have to make a lot of decisions around what they can't do. Permissions.

Brian R: [00:12:44] That's right.

Brian L: [00:12:44] Are they going to make purchases for us? Schedule meetings for us?

Phillip: [00:12:48] Yeah.

Brian L: [00:12:48] What are they going to accomplish for us?

Brian R: [00:12:50] What are they going to expose? When do we let them give out personal data?

Brian L: [00:12:56] Exactly. Yes. Nailed it. Yeah. Exactly. And they're also going to talk to us. I think this is one thing that I really want more of your thoughts on. Alexa is rumored to be getting push notifications soon. And frankly, in my mind, that's one of the most realistic next steps.

Brian R: [00:13:14] Landmark.

Brian L: [00:13:15] Yes, absolutely. Because the thing is, right now...

Brian R: [00:13:20] I call that proactivity, by the way.

Brian L: [00:13:22] Proactivity. Ok. Good word. Good word.

Brian R: [00:13:25] It's all under that umbrella, and that is rudimentary. Not even, you know, not even preschool. I mean, this is actually not even sperm or egg, if you want to go real backwards here. This is not even a thought. That's how early we are in that form of proactivity. The things that we call notifications today are ridiculous. Because they're annoying. I mean, you and I... I'm sure... I've never met you guys before. We go way back... an hour ago. We're all buds. But, you know, we connect. Warped Tour. But most of our notifications are clogged with bullshit right now. And the challenge is, programmatically, let me try to guess what I need to send you. And data scientists... I mean, they lose sleep. I hang with these guys. What is the proper number of notifications? How do we filter it? What tools do we give? Nobody uses our tools. As soon as they create the tools to filter the notifications, one or two people out of two thousand or ten thousand or a hundred thousand actually set them. The rest of the people just let it go full on or full off. The reason is that you're looking at the problem the wrong way. I don't want to add another level of complexity. I want it to figure it out. I want it to know me. The only way it knows me is it's got to get close to me, and if it gets close to me, I want it to be hygienic. I want to know where it's been. I want to know who it's been with. You know, I want to know who it's going to be with. And if it comes with a whole lot of warts, "Hey buddy, I don't know who you are anymore." And so the whole problem... This is, again, our challenge. We're at the precipice of this. It all comes down to our privacy. What is really private?

Phillip: [00:15:03] Yeah.

Brian R: [00:15:04] Our personas. What is really our true persona versus our public persona. All of us are liars. We all create facades of who we really are in the public space. That's normal. That's human behavior.

Brian L: [00:15:15] It's going to be verified data versus unverified data. There are gonna be dating sites where it's like, "We verify this data is accurate." We've talked about it on the show.

Phillip: [00:15:19] That's true.

Brian L: [00:15:19] And then, think about what that's gonna do, coming up not too long from now.

Brian R: [00:15:34] Everybody's athletic. Everybody on a dating site has got an athletic build now.

Brian L: [00:15:38] Yeah. So, you know, is it real? Is it not?

Phillip: [00:15:43] I mean, I have an athlete's body. It's Warren Sapp's. Number 99 on the Tampa Bay Buccaneers.

Brian R: [00:15:52] I'm running a marathon in Kenya right now.

Brian L: [00:15:57] {laughter}

Phillip: [00:15:57] {laughter}

Brian R: [00:15:57] And I'm dribbling two basketballs, so date me. Swipe left.

Phillip: [00:15:59] Yeah, one of the things that you kind of mentioned and I'm sorry to cut in, Brian.

Brian L: [00:16:06] No, keep going.

Phillip: [00:16:06] I just realized that both of you are Brian. It took me two and a half hours to figure that out. {laughter}.

Brian R: [00:16:14] My AI is actually lighting up with the right Brian.

Phillip: [00:16:16] Great. So I'm not usually a downer in these conversations, but we have such a hard time...

Brian L: [00:16:25] What are you talking about? Yes you are.

Phillip: [00:16:25] I know. I am such a Negative Nancy. We have such a hard time, in just human interaction on a personal level, understanding things like consent, which is a big conversation right now, just interpersonally. So how do we now translate that, in a meta sense, to an extension of our personality? Or to consent around our most personal interactions, the true us, with an AI that understands exactly who we are?

Brian R: [00:17:02] It will know you actually better, absolutely better, than any significant other, any best friend. And it will know you better than you.

Phillip: [00:17:09] Well, today, relationship official is Facebook official. Right? Or whatever it used to be. So in the future we're talking about a new level of consent, of really, like, taking off... You know, a new level of connection will be to show me the real you, and to allow me to see the real you without the facade getting put up.

Brian R: [00:17:30] You're getting to a really interesting part. And this is where a lot of people get scared because we're now walking down a really dark alleyway.

Phillip: [00:17:36] Right.

Brian R: [00:17:37] Because, you know, on Warped Tour, I did a lot of research. We talked about this pre-show. What I detected about four years ago, and this was just as the rise of the smartphone was matriculating down to somebody who was 14, you know, their first Warped Tour... And they were able to afford it in, you know, middle America, not in technology centers. Where years ago they had dad's hand-me-down phone, or mom's hand-me-down, now they literally had their first device, maybe their first iPhone. And I noticed one of the demarcation points of trust and verification within a relationship is that you gave your significant other the password to your phone. And whenever that came up, I tell you, everybody got a very serious look, because that was the deepest level of touching a human being. Now, take that feeling of somebody handing over your cell phone. Okay, honey, anywhere you want to go, look at my history. You know, whatever. Anthony Weiner aside here. But, you know, the bottom line is, there are going to be the Anthony Weiners of AI. I hope we don't live to see how.

Phillip: [00:18:49] Yeah. Yeah.

Brian R: [00:18:50] But what I mean by that is this. If you think that's a personal diving into you, and I think any of us would feel it today, you wouldn't have guessed this 10 years ago. If you'd said 10 years ago that they're going to hand over a phone to somebody...

Phillip: [00:19:03] Yeah.

Brian R: [00:19:05] What, my phone list? I don't care. Here you go. Nobody cared. You say it now, even to people who are older. Let's even call them senior citizens. They're still going to be like, no, I really don't want you to have my phone. And this happened recently with my son, with a relative I won't name. He was just going to take a quick picture, and she really did not want to give up the phone. And she used "I don't want it to break" as an excuse, but I knew it was very personal.

Phillip: [00:19:37] Oh yeah. We know what that means. Yeah.

Brian R: [00:19:39] So we as a society are going to have to deal with this. We're not even dealing with that yet. Right?

Phillip: [00:19:45] Whoa yeah. Exactly.

Brian R: [00:19:46] There's this whole thing of, do you really have that right? You know, and what do you have to hide? We're in a world now where if you want your privacy, then the next question is, oh, then what do you have to hide? And that comes from people who have grown up not studying history. The only human being in America who would say that is somebody who did not study the history of human beings, and who doesn't understand that we formed a country that allowed us not to have to answer that question, and not to have to assume that I have something to hide because I want my privacy. And I say that now because it's more important right at this moment, right now, with what happened this week, than ever before. And I think we as a society have to deal with this. And I bring this meta problem up because I'm advocating this junk. I'm coming out here, and some people say I'm the biggest cheerleader of the voice first revolution. And maybe that's true. Maybe it isn't. And I also feel, gentlemen, a really deep sense of responsibility that at the very same time I'm opening up an ugly black hole that I don't think we're mature enough as a species to deal with. And that is, what happens when something knows you that well? And then what happens when it's the embodiment of somebody else's computer that you don't own? We have this vague notion of ownership. When all of us were going out to Warped Tour and buying CDs, we "owned" our music. We're angry old men.

Phillip: [00:21:16] Yeah.

Brian R: [00:21:16] I want to own my music. And there was a whole ethos around that. You know, you were proud to own the CD that somebody made in their garage and then played and screamed their heart out on tour. And you wanted to reward them. You wanted to be able to take that piece home with you. That doesn't exist anymore. There is a generation growing up that doesn't feel that sense of ownership. Now, extend that into the ownership of your identity, if it isn't embodied in your own hardware, ostensibly. Right? You have to believe that it's in something in your home. Something you can block somebody from getting access to by unplugging it, hopefully. At least you can unplug it, and hopefully a battery doesn't pop up and say, no, I'm not off. No, I'm still broadcasting. Sort of like my camera right now. It's not on, but somebody is watching me. I know, because we're on Skype. I have tape over it. So does Mark Zuckerberg.

Phillip: [00:22:17] Yeah. That speaks volumes, right?

Brian R: [00:22:19] That says a whole lot to you. And I'm kind of joking, but it all comes down to that same... I'm going down this little whirlwind that leads to one point, and that is: should we do this? Unfortunately, the answer is that if technology that does it gets produced, man and woman are going to apply that technology whether we like it or not.

Brian L: [00:22:41] Yeah, I think you really nailed it. You know, at this point, it's because of the benefit. We talked about this before. When utility outweighs the privacy concerns, people will do it.

Brian R: [00:22:56] The pendulum swings back and forth, though, right? The generation that has been raised on social media today says that unlimited sharing is OK. You hit a bong back when you were in high school and you put it up on Facebook, and now you're going to be the chief surgeon at a hospital, and everybody puts up with it. You know, maybe at some point you wind up leaving a surgical utensil inside a person, and there's a lawsuit, and you made an honest mistake, because it happens. And they go back and see that you hit a bong when you were a senior in high school, and they bring that up. "Do you still have your drug problem, Mr. Surgeon?" Now all of a sudden your social media past can be used against you in a way that you never thought would be possible. And it sounds obtuse and bizarre. But isn't that how our past is always used against us when there is a record? Haven't we learned that in this last election cycle? We've learned that we can take anybody's past and demonize any human being on this planet. We now have 50 percent of this population, or call it 60, I don't care, demonizing the other 50 percent. And we're doing it on social media, and the leaders of our country are doing it, in a sense. And we're all in the mud doing this together. I bring this up because I hate politics right now. I don't want to even talk about it for the rest of my life. And I think most people are that way, no matter who won and whether you're happy or sad, crying, angry, acting out, you know, kicking somebody's car.

Phillip: [00:24:29] Right.

Brian R: [00:24:29] You know, mad. But what I'm saying is, that's the same thing, but on turbocharge. Because if your intelligent assistant really does its job, it's going to anticipate you in a way that is beyond freaking you out. It is going to statistically know... I mean, I can tell you right now some of the things that it knows about me... All right. One of the things I've done is I've used Alexa and Siri to talk to each other, pretty much since April 1st of 2016. So April 1st, 2016, I started them off, modified, of course. And I do it in a way where I'm constantly shifting IP addresses on Alexa and Siri and depersonalizing it, so that the data scientists over there can't really see what I'm doing even if they wanted to.

Brian L: [00:25:27] {laughter}

Brian R: [00:25:27] All it looks like is random traffic with random questions. Because I literally had... I won't tell you which company, but I had a data scientist tell me, "Brian, you're not doing any of that work. You want to see? Here's a live cam." And he goes, "There's no way you're doing that." And I go, "Yes, there is." "But our data doesn't show that anybody is doing that." Yeah. Because I'm breaking it up. And it's not out of paranoia, to be honest about it. It has nothing to do with that. It's mostly because I wanted to see if it would still operate in the same manner if I was able to do that. And I never decoupled it, because I later started thinking this is a lot of highly sensitive information. Because what Alexa and Siri were doing under my programmable control... again, not through their APIs. It's a sort of API I built around them, using a Raspberry Pi, to have my way with these devices and systems. It started analyzing my schedule first, because that was the thing I wanted to attack. I'd become quite busy. I have open office hours that I dedicate to anybody. And by the way, I'll advertise that right now: if you have anything you want to talk about... your life, voice commerce, voice data, voice first, anything under that realm, payments... contact me on my social platforms. I have open office hours. I dedicate, unfortunately, more time than I have sometimes to it. And I had to mediate my schedule, and I figured, you know, that's a good problem to try to solve. So I tossed a bone, if you will, into that mosh pit of actually three voice first systems interacting pretty much constantly in a closet. It's now soundproofed... with egg cartons? No, it's got real foam rubber. I wanted to use egg cartons because I just like the look. Anyway, it's been talking in that closet forever, in my view. And it's almost a lifetime when you think about it on a data level. And again, AI researchers think I'm absolutely insane and think that I'm doing stuff that doesn't need to be done, that you could do programmatically through APIs and source code and stuff. And I'm like, yeah, I could, but then I would have to do it. And I don't want to do it. I want to let them do it. And it's like a kid, right?

Brian L: [00:27:47] Yup.

Brian R: [00:27:47] If you never had a kid, you look at other people raising kids. You're a big skeptic about what parenthood is like. "Tell that kid to shut up." And why are they picking that kid up? And all of a sudden you have one, the aha moment comes, and you say, "Now I understand."

Phillip: [00:28:04] Yeah.

Brian R: [00:28:04] One of the things I did with my children was, when they were climbing on things, my wife would freak out, and I'd say, "Listen, they need to understand what the barriers are. We can't always be there. We can be here to catch them if they fall. But I want them to fall, because I want them to know where the edges are, their balance and stuff." And it sounds very high and mighty. I didn't quite say it that way at the time. And then right when he hits the ground, I'm grabbing him, and he looks shocked, and he looks around and goes, "Oh, now I know not to do that." Now, I could have said "Don't do that" a thousand different ways. Right? You know this as dads. But once you experience it... let's call it muscle memory, but it's a lot more. It's cognition. There are neurotransmitters that fire and remind you not to do certain things. The neuropeptide release that reminds us. The same thing that tells us not to eat poisonous plants. We eat it once and it tastes bad, and our entire body remembers it. That's why neuropeptide receptors are in every cell of our body; it even bypasses our brain. Certain things, like scratching your fingernails along a blackboard, are built into our neuropeptide system. Once that blackboard-scratching sound starts, it activates all those neuropeptides that say, run away from that. Some people say it was dinosaurs chasing us. Again, that's really funny, because theoretically dinosaurs and man weren't... The screeching sound is a reminder. I don't know. We won't go down that thing. But so now here we are with all of this data, our deeply personal data. Our loves, our hates, our pains, our true personality. Yes, that person did vote for the other guy. Now I can beat him up. Yes, I have proof, right? You're the jerk that did that. Or you're the jerk that didn't do that. You didn't vote. I'll use something topical because it's very emotionalized right now. In 20 years, nobody's gonna care. But everybody cares right now. Do you care what Nixon did right now? No. Do your parents even care, if they lived through it? Or great grandparents, whatever? No. But I'll tell you this. When Nixon was caught doing the things he did, there were people having fistfights at bars out of anger. People saying, "You voted for that SOB." "No, I didn't." Well, if your voice device can prove that you did, now you have a problem. Now we can go into pre-crime. Now we can go into 1984, George Orwell. Now a court order gets to go into your AI to try to see what you did on January 20th, and where were you? That's already kind of happening, isn't it? They want access to your iPhone to prove whether you were there. And it always starts with a really wholesome reason.

Brian L: [00:30:51] Right.

Brian R: [00:30:51] Somebody did something bad. Always starts that way. And then some guy gets blue gloves and he gets to pull out your phone any time anywhere.

Phillip: [00:30:59] Yeah.

Brian R: [00:30:59] To give you a little feeling up and down your phone. And that's going to happen with your AI, and we're going to live through that. And I don't care who's in office. I don't care if they're a fascist, or Chairman Mao reincarnated, or Karl Marx and Lenin all in one. Pick your poison, polarize whoever the hell you want. The bottom line is, we're going to live through that, and there are going to be certain entities that are going to want to have access to that. And there are going to be certain entities, hopefully you, that don't want to give access. And then there is gonna be the question: if you're not guilty, what do you have to hide? And I think that is the challenge of this generation. Are we going to stand up for "I don't have to answer that question"? Not because I'm guilty, but because I have an inalienable right to never even have to deal with that. It becomes more important now than when the founders framed this. When they framed it, it was the privacy of your papers and your home. You know, I'll use all the old lingo if I want to. But basically, there was an assumed sanctity. And we went through post-9/11, which is sort of where it started, where we kind of opened up the gamut, always in the name of good. Every law, everything that happened in the fall of the Greek and Roman empires, was always done for good reasons. Every step, if you really study it. Even the Egyptians. Nobody asks why the Egyptians fell apart. The Egyptians fell apart because the Greeks got there, and the Greeks beat the Egyptians because Egypt was debasing its society; it had lost touch with its roots. And if you look at all failed societies, they always lose touch with their roots. They always say that the world is more complex, that we need more regulation, that we need more policing. And then the group that's policing always grows larger than the population itself. Have you looked at what happened with the Stasi in East Germany? On every floor of every apartment, there was somebody monitoring what everybody else was doing on that floor and reporting it to the secret police. And the files got so big that they were taking up literally hundreds of thousands of square feet of notes. I don't know... "Rudolf flushes toilet at 10 am twice. Make note of that. What is he trying to hide? Is he stealing more water from the rest of us comrades? We work hard for our water." You know, that kind of stuff. And it's always done... and again, it sounds like it's good, because in that society everybody should have the equal number of flushes on their toilet. And back then they didn't have regulators. So in the future, maybe there's a regulator. You can only flush your toilet once a day. Let the crap build up, and then you get to flush it. And it's always done for good reasons, isn't it? Right? Doesn't it sound like a good reason? You don't want to waste water. And it's only fair that if Joe over there only gets one flush, you should only get one flush. Somebody has got to monitor that. So we've got to elect some entity to stand in the middle and make sure everybody only flushes the toilet once, for the good of everybody.

Brian L: [00:34:33] And all of a sudden we have something that's monitoring us all the time.

Brian R: [00:34:38] That's right. And by the way, everything I just talked about so breathlessly, so excitedly... I'm actually helping create that. Understand that I feel that burden. And I try to be a little light about it, but I don't even know that I understand it enough, and I don't know what it looks like a hundred years from now. I don't know if this is actually the destruction. I don't know if that's where society fails, because we're so overwrought with all of this personalized data. The quantified self.

Phillip: [00:35:09] Right.

Brian R: [00:35:10] All right, I'll give you an example. You wear a watch that knows everything about your health. And let's say health care now becomes socialized. And now, instead of you paying the money, "the younger people" are paying for the older people.

Phillip: [00:35:23] That's right.

Brian R: [00:35:23] Well, and it knows that you've made some risky choices. You took a swig of whiskey when you shouldn't have. And you're going to pay the price. What that means is, well, you get a deduction on your life extension. And you don't get extra health care, because you made bad choices when you were twenty-seven.

Phillip: [00:35:42] Yeah. It's true.

Brian R: [00:35:43] It doesn't even have to be socialized. I mean, this could be private health care as well. I think, in all reality, anyone can use these things to create a system for incentivizing certain behaviors or penalizing others.

Phillip: [00:36:01] Yeah.

Brian R: [00:36:02] I mean, we're all nerds, like, OK, I love the idea: I'm only going to pay for insurance based on how many miles I drive, and for that savings I'm going to give up... letting the insurance company have second-by-second knowledge of everywhere I go and how many miles per hour I'm doing.

Phillip: [00:36:17] Yeah. I was gonna make this... We're gonna make this exact point. Yeah. We've already done it. We've already done it. Yeah.

Brian R: [00:36:24] Right. So now it's your self-driving car. So now you don't even have autonomy. You say, "I want to go to..." hopefully you don't say TGI Fridays... you want to go to California Pizza Kitchen or something. Well, it's going to tell you the California Pizza Kitchen that's closest, and maybe at some point, for optimization of your carbon footprint, you can only go to the CPK that's closest. And again, that might be done for the good of everybody. Right? You're going to create a bigger carbon footprint. The ocean might rise a few extra centimeters up the coast because you're free driving down the road. I mean, these monitoring systems and the artificial intelligence and machine learning... it's what Elon Musk was talking about. It's got its dark side. So I don't want to dwell too much on this because we're...

Brian L: [00:37:15] Let me take this in a little bit of a different direction, because I want to hit this. I was talking to Phil about this earlier this week, but IBM is focused on... We've been talking about what AI will look like in different contexts. But there's another aspect of this. We talked about Sentient.ai, which was kind of spun out of Siri. And they've got AI now that they've given the authority to make buy decisions on the market.

Brian R: [00:37:55] Yeah.

Brian L: [00:37:55] And so one of the things that I was talking about recently is what happens when hacktivists start building systems that they give decision making ability, money, and maybe not even hacktivists... Like anyone could do this.

Brian R: [00:38:14] Anybody.  

Brian L: [00:38:14] And there might be good reasons for this. And then they throw the key away and have no way to get back into it. And these AIs go and actually start making decisions autonomously with no way to ever recover control.

Phillip: [00:38:33] That actually is something that's theorized to be possible today. I mean, since we're already down this path, let's just go there.

Brian R: [00:38:40] We're already there with flash trading and things of that nature.

Phillip: [00:38:44] Well, there are neural nets being set up, neural nets being created today, whose purpose is to create other neural nets. And there's always that middle layer. There's always the unknown layer, the hidden layer, that you can't access and you can't see into. So when a neural net creates another neural net, it has already effectively thrown away the key, because you have no access to see the inner workings or even understand the cognition.
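[Editor's note: "hidden layer" is the standard term for the intermediate stage of a network between input and output. Strictly speaking, you can read its values out; the problem is that they are unlabeled numbers with no human-readable meaning, which is the sense in which the workings are closed to inspection. A toy sketch with random weights, purely illustrative:]

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # input (3) -> hidden (4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # hidden (4) -> output (2)

x = np.array([0.5, -1.2, 3.0])
hidden = np.tanh(W1 @ x + b1)  # the "hidden layer" activations
output = W2 @ hidden + b2

# The values are accessible, but opaque: just unlabeled floats whose
# meaning exists only in relation to the rest of the weights.
print(hidden)
print(output)
```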

Brian R: [00:39:11] Unless you build it. Unless you build it with the idea in mind that you need to be able to do that. I mean, the AI I build, I purposely cripple. It's really crippled. And to anybody looking at it, it's ugly as hell, because again, I'm not a programmer, I'm a researcher. I don't care what it looks like. I want to get to an end point. I want to see where the fence line is, if there's a fence line, or what the next mountain looks like. So I forensically create trails that slow it down, that make it a little bit more cumbersome, and that allow me to understand how I got there and whether or not I ever want to get there again. Because there are some things I just completely shut down. And some of it's pretty scary. I won't get into it. But it draws conclusions that are just absolutely phenomenal with very rudimentary AI. I don't know if you guys remember 20 Questions. You can pretty much figure out what somebody is thinking of in 20 questions. Right? It looks magical. And the whole concept is what is going on with neuron growth. And then there's quantum computing, which I'm starting to play a little bit with right now in models. I don't have a quantum computer yet, but you and I will have access to one very, very much sooner than most people think. Not saying tomorrow. But at that point, all passwords that existed from that point backwards don't exist anymore. They're gone. If you encrypted something in 1984, 2016 is a few seconds later. We're already in it, we've already figured out what it is, and we've identified the impact it had. And again, that's post-crime. Right? It could probably figure out you were the guy that was there by pulling out all sorts of data bits. I'd recommend to everybody listening to our voices, and to you folks if you haven't done it: get an old movie on YouTube for free called The Forbin Project. And this is very much it. This was probably before HAL in 2001... it might have been right around that same period. It's from that sort of epoch of movie, overly dramatized, very much a 70s kind of thing. But the AI, which is a voice AI system, took over the life of this guy Forbin, the inventor. It became sentient. It started questioning things about Russia, and it connected with their computer. And again, it was during the Cold War, and it started realizing that our whole premise of the Cold War was pretty stupid. And it was going to solve it by eliminating some human beings. And so basically... how does a computer kill a person? The movie got it down really well. Once the AI got sentient enough to be able to make a dam overflow and kill maybe one or two hundred thousand people, it says, "Get John Smith. Put him out in the square. Shoot him in the head. Leave him there for three or four days. Or else I'm going to let the dam break and kill all these people." Or other things... they had a number of different things. That's called terrorism. And your AI doesn't even need to get sentient if it rationalizes enough, which most of us do; it's a natural tendency of humans. But we also have our humanity. If we were just logical, we'd be making some really dumb decisions. This is the self-driving car quagmire. OK, I'll give it to you. I'm going to iterate down to another level, but I'll draw it all back. You're in a self-driving car. There is an old woman with a walker in the middle of the street. Stage right is a child on a tricycle. Stage left is a wall. You're going 50 miles an hour. How are you going to teach that car to make a decision, and make a value decision?

Phillip: [00:42:56] Right. Yeah. The trolley problem.

Brian R: [00:42:58] The value decision is: kill the old person, because they're more expendable. No... that was the grandma who told the kid to become a neurobiologist who discovered the cure for cancer. And that kid, witnessing grandma dying, will never have had grandma tell... You see what's going on. You cannot be God. You can't predict the future. So what will all of us do? I don't know. I've never talked to you guys about it. You know what we're going to do? We're going to do the impossible. When we hit that wall, we're gonna close our eyes and magically appear on the other side in our minds. And we're going to sacrifice ourselves. And it's going to be very hard for Google to tell you: oh yes, your self-driving car, whenever there is an accident, is going to kill you first, because you happen to be in the car. How's that sound? Sign here. No liability. That's the future we're going towards. Right? And this is all AI. And again, it manifests in different forms, but it's all doing the same thing. It's all making these human-like decisions. So a rational engineer who is using only one side of the brain would say, well, we're going to find ways to do better cost accounting. What we're going to try to do is read the social networks of everybody involved, see who contributes less to society, who's got the power to sue us the most. I mean, let's get real about this. That kind of junk is going to go on. And I don't want to live in that world. I don't think anybody listening right now really wants to live in that world.

Brian L: [00:44:28] Yeah, I've talked to quite a few people about self-driving cars. And honestly, that is the number one complaint that I've heard: I don't want technology making that decision. I don't want somebody else making that decision who has no bearing on, and no knowledge of, what's going on. The counterargument is that driving is actually going to be a lot safer in the end. And so, as you kind of mentioned, we always make these decisions for good reasons, if you will. So ultimately, do I feel a lot safer putting my kids, my teenagers, into a self-driving car than having them go learn how to drive a giant motorized vehicle that travels at insane speeds? You know, I'm probably going to put them in a self-driving car.

Brian R: [00:45:23] All right. All right. So here's the problem with this, about humanity and human growth. And again, because I'm very schizophrenic... I'm an engineer, very logical. I have a physics science background, and I understand the statistical science behind it. But I also understand that the most amazing things that have ever happened in your life, my life, and the life of anybody listening to me did not develop through statistics. They happened through serendipity. And there is no way to quantify these important things that have led you off to become who you are. If you're happy with your life, you can look at the little forks in the road that are not logical, where you kind of winged it and you just kind of figured it out. All right. I grew up in New Jersey, where the ability to feel free as an individual was to get in a car and to drive wherever the heck I felt like. And, unfortunately, go as fast as I wanted. I'm going to be honest. You know, there were a lot of guys I grew up with in Jersey who would go out on the parkway when there was nobody on that road and do 110, you know, even faster, doing all kinds of crazy things, stuff that as mature, sophisticated adults we shouldn't be doing. Some of these people died. Some of them didn't. Some went on to become politicians, surgeons, a car repair person... and I am a parent. It's hard for me to imagine that I started driving a car when I was 12 years old, illegally. I'm going to get arrested now. But, you know, I wasn't living on a farm at that particular moment. I was in more of a semi-city area of New Jersey. Soon after, I moved to a farm area where I couldn't do any damage. But I was driving down the middle of the main street of my town at eleven o'clock at night on a Friday, barely seeing over the steering wheel. And off to my left is a cop. He rolls down the window and says, "Get home right now." I got home right now. He goes, "I'm going to be talking to your dad about this next week when I see him." The guy didn't know me really well, but we were living in a small enough town where that's what went on. Today I'd be tased and dragged out. And not because of the police officer himself; it's the mentality that we're in. And I wasn't wearing my seatbelt, by the way. That was a big one. I also got hit by a car while riding a motorcycle, you know, a motocross motorcycle, in the middle of the road. It slammed into me at 60 miles an hour. And I wasn't wearing a helmet either. What I'm saying is, we as a society, as we get "more sophisticated," as we get more involved with our technology, we can become more fearful, and we become more restrictive. You know, when I grew up, there was no such thing as a car seat. My mom put her arm around me, and I sat in the front, in the middle. And for all the kids that got ejected out the front window, somehow people survived and got through it. And then we develop laws and rules, and we say, well, isn't it nice to put a seat belt on people? Yes. And isn't it nice that we put them in a car seat until they're five, and then till they're 10, and then all of a sudden, isn't everybody? And then, you know, people get in accidents, and you shouldn't drive your own car. And you see where this leads. It leads to Wall-E. Right?

Phillip: [00:48:39] Yeah.

Brian R: [00:48:39] Remember the movie Wall-E? We're all basically in a wheelchair for the rest of our lives. And I say this not to throw a monkey wrench into all of this. I'm seeing that right now, in this moment, literally the next couple of years. We're in that moment where we're starting to make the decisions of, in a sense, our own destruction. Right? Everybody who thinks they have job security today, I don't care who you are: wake up. You don't have job security. I don't care. You're a programmer? Sorry. AI is going to replace you. I'm going to train my computer just by training it like a kid. Your job is done. All right. So don't think you have job security. Oh, you make electronics? No, no job security. You're a lawyer? Sorry. Blockchain might replace you. Just the blockchain.

Phillip: [00:49:25] {laughter} Yes.

Brian R: [00:49:26] All right. You're a doctor? No, sorry. A computer is going to do it better. A Watson, all right, connected to a laser. And I don't have to go down that road. Nobody has job security any longer. So stop being smug, because everybody who hears this argument says, "Oh, yeah, the coal miner that voted for Trump. Yeah. You know, he's pissed off." No. Everybody is gonna be out of a job. All right.

Brian L: [00:49:49] With that, I mean, I hate to say this, but it is 2:30.

Brian R: [00:49:55] Yeah. I'm sorry.  Don't let me leave yet. Let me wrap it up.

Brian L: [00:50:05] Ok wrap it up. {laughter}

Brian R: [00:50:06] So, by the way, I'm not supporting or endorsing, negative or positive. The bottom line is, I want people to be free. And that's my thesis. I'll start with the thesis, and then I'll talk to you about where I think voice is going. My thesis is that we need to start clawing back our humanity. Desperately. As fast as we can. And we should start on our social platforms. You don't have to be me. All right? People think I'm weird. Every time I address somebody on Twitter, if I can discern a name, I will call them by their name. And that is my own way, for myself. I'm not signaling. I'm not trying to tell the world this is who I am. It's really for me, to constantly remind myself that on the other side of that screen is a face I can't see... an avatar that may be some kid's dog or something, or a picture from when they were 10 years younger, some guy holding a Rickenbacker in sunglasses, you know, whoever... I want to remember that that's who I'm talking to. And that if they were face to face with me, hanging out, maybe I wouldn't be so snarky with them. Maybe I wouldn't challenge their belief systems as much. And maybe I would speak my mind, because nobody is recording it. And we might get down a dark alley of talking about different political thoughts that people have bottled up, because you're not allowed to say them. Because if they do say them, one way or the other, there are groups of people that come with the pitchforks and the tar and feathers, and they go after you. If we can't hold on to our humanity, if we can't be humans and interact without self-appointed defenders of a status quo, whatever that status quo is... Let me tell you, your status quo, whatever it is, is bullshit in a thousand years. It will look like a joke. Nobody will even care or remember. All the stuff that we're fighting over right now, no matter what side you're on, ain't going to matter in a thousand years. It's just like when you look back at the little sideways pictures on Egyptian walls and think, what was wrong with those folks? They'll look at us, too, and laugh even harder. So we have to claw back our humanity. We have to do it through our technology. Stop being so frickin smart-ass, stop thinking you're smarter than everybody else, and try to learn. Quora is a great place to learn humanity. And, you know, learn to shut the F up and just listen to people who may have had some experiences firsthand, and see that it's not so cut and dry, that it's not so binary. Our computers are training us to be black or white, binary, whatever. And I don't use those words by accident. We aren't. Because there's no such thing. We're human beings. And the differences that we have are undetectable from even 300 feet away, let alone 3,000 miles away or 3,000 years away. So we claw back our humanity. I'm not pretending to tell you exactly how to do that. I just find little ways to do it. I imagine, any time I'm chatting with somebody or talking with somebody whose face I can't see, that there's a real person there. Not a snowflake, not a cupcake. Because I want to speak my mind. We should have the right to speak our minds, whatever they hold. And you have the right to cry, kick, or scream and speak your mind, too. But at the end of the day, we'll walk away learning a little bit more about each other and saying, you know something? Maybe these things aren't so cut and dry. Maybe there's a little bit of good in all of us. Now, how does this come into voice?
The voice and the AI that we're creating are going to magnify this ugliness to a proportion that no human being is ready for. And it's going to happen whether we like it or not. It is already happening, and it's already in the hands of people that we thought we elected but who have just been sitting inside a government for generations, for good or bad. It's always done for good reasons. But if the wrong person gets involved in it... let's call that person a tyrant, or tyrants, or tyrannical. And again, let history decide who that is. I can't tell you at this moment. They can use all of this stuff in ways that are unbelievable. But you know where it starts? It starts with this indignation that somebody else's belief system is retarded or flawed, or whatever politically incorrect or correct word you have. If we were able to interact with each other like we're all just hanging out, which I try to do in any of my conversations... It gets me in trouble. But now I'm an old man, and I can hopefully not suffer the consequences as much. Maybe that frees us. And I don't think, gentlemen, it's an accident that all of this is exploding upon us right now. The entire political season we've seen was invented because of social networks. The same blunt tool that you use when you don't like somebody was used to change you. You changed it. It changed you. How do you stop that? I don't know. You've got to find that answer. If you're listening to this voice, find the answer. All I can tell you is one thing: study history. And maybe we've taught history too boringly, and maybe we don't have enough money for people to study it. Maybe STEM is better. But I don't think STEM matters unless you are able to understand how many frickin times we've gone through this cycle. How many times we thought we were the smartest. Everybody alive today thinks we're the smartest generation, the most liberated generation, the most open minded generation, the one that eliminated this or the one that eliminated that, the first that had this. I'm sorry. It's not the case. The first woman president happened almost 6,000 years ago. You don't know about it because we committed a lobotomy, and we don't have the memory to understand it. It happened already. All right. So we need to get over ourselves, stop being so dramatic about all that, and say, you know, we're just working our way through it. So our AI is going to do that. That's the negative side. The positive side is, if we grab a hold of this... just like I think Elon said recently, "We're not going to be doing the rudimentary jobs, the draftsman job or, you know, the building-of-something job or the driving-of-something job. We're going to have time to do better things." In the Wall-E example, the Disney movie, the "better things" were sitting around in a VR world and immersing ourselves endlessly in a false reality, thinking that we're somehow going to find Nirvana, or the answers to all of our sadness, the loves that are lost, inside of an artificial world. And maybe somehow remove ourselves and live there. There are some people, you know, and I won't say this as a put-down, some people who want to disembody themselves from their humanity and put themselves in a machine. Maybe it's a Google machine, with pay-per-click advertising in it that you can't ever turn away, for an eternity. This is what hell looks like, right? Hell looks like: you don't have to pay for your embodiment, but you have to deal with all the advertising. So any time you think of coffee...
"Well... coffee. Starbucks. Coffee." And they change your words. So be careful what you wish for. I'm a nerd. I want to see this technology. But on the other side, I'm screaming from the top of my lungs, if you can hear it through all of this. And it's not a sermon. It's more me being frustrated and confused, because I don't have the answers. I'm a student. Everything you heard me talk about today, I'm a student of. I'm not an expert on anything. A student. I'm learning. And I'm saying that we've got to start learning this about ourselves. And if you think you need to take to the street and yell about it, go. Go for it. You think that's productive? Sure. Think acting out is going to fix things? Great. All I can tell you is, go back in history. Tell me if it ever fixed anything. Or even ask your dad or your grandpa. But the good side... I don't want to drop this. The good side is this. We'll unload the burdens of search, and of distilling all the stuff that we think is what our computer experience is but that's nothing but a waste of our time... time that we could really be applying to things that are productive for us. The definition of productivity is going to be very, very personalized. What is enriching us? Is it making us stronger, or is it making us weaker? That's how I define it. Is it making me greater or lesser? If you're spending your time, you know, watching VR porn all the time, is that making you greater or lesser? If you're spending your time in a game where you're shooting somebody and it looks more realistic, is it making you greater or lesser? I'm not telling you that that's a bad or a good thing. I'm just asking the question, and not trying to be ironic. I really don't know if that's greater or lesser. Doing artificial surgeries on five million people in a video game, does that make you a greater or lesser person? These are all things to come. But I'll tell you what you will have a lot more of: time. And yet you're going to feel like you've run out of time. You are going to have all this time available, yet you're going to feel like you don't have enough time because...

Brian L: [01:00:04] Isn't that already true?

Brian R: [01:00:05] That's happening. Right. So along the way, you're going to have this great possibility. And I'm saying fall in love with that and understand that, yeah, you're going to have... Part of my thesis: are screens going away? No. They'll be situational. You're gonna be seeing images on the screen that's most convenient to you when you need to see them. Whether that's a virtual screen, or a real metaphor of a screen, or something that pops up and goes away, or your phone... doesn't matter. Are keyboards and touch screens gonna go away? Of course not. But the keyboard already is going away. We're pounding on glass now, not keys. That's a big step. And are input devices going to go away? No. The punch card never really fully went away either, but we're using it a lot less. So all these things are going to gestate. But the thesis is very simple. The Voice First world is coming at us faster than we can possibly imagine. It's going to matriculate through us in ways that we didn't guess, that I can't guess. It's going to come through our appliances. It's going to be... Look at what happened with the Internet going down a couple of weeks ago. It was somebody's cam that took down the Internet, because somebody left a backdoor. Imagine hundreds and thousands of these things that weren't really thought through. Of course, now you can say, duh, why did they keep a password in the back end of the system? I don't know. You know, you can theorize conspiracies, you can put tin foil on whatever you want. It happened. What's happening right now? What's the backdoor that's happening right now? What's the backdoor into all the voice first devices I'm playing around with? What have I given up by using them? I don't know. I mean, I could go run and live in the woods and run away from this stuff, but I'm boldly moving into it. And I've got to tell you, there are a lot of people who are the thought leaders in technology who think about 90 percent of what I'm talking about right now is a complete bunch of bullshit. These are tough things. Tough things.

Brian L: [01:02:25] I think security is absolutely a huge issue. I mean, anyone who says voice is a completely secure method of interacting with computers is blind. And so in reality, there's sort of a short term version of this and a long term version of this. The short term is, first of all, there's going to be a lot of change due to voice. No doubt. And we're going to be able to interact with voice in much greater ways than we thought were possible, probably more quickly than we thought possible. The long term is, there could be some very dark consequences to going down this road. I think how it actually plays out is definitely up for question. But what a great interview. Thank you so much.

Phillip: [01:03:19] Thank you so much.

Brian R: [01:03:22] Phillip. You're into a lot of this junk. Where do you fall in this?

Phillip: [01:03:26] I mean, the thing that I keep coming back to is that we're at this inflection point where we don't have the tools or the skills or the evolutionary traits necessary to survive this kind of an evolution. And we're going to have to evolve into it. And evolution sometimes comes at a very hard cost... there's an element of randomness that has to happen for the vestigial to die and the... We'll talk for another two and a half hours.

Brian R: [01:04:13] Guys, I would love to do 10 of these shows if we had to. But I wanted to do this mostly because I don't normally go down this path... but to see what's going on in the world.

Phillip: [01:04:29] Yeah. That's really...

Brian L: [01:04:31] This was amazing. I loved it.

Phillip: [01:04:32] This is really what we're all about. Well, thank you so much. Thank you.

Brian R: [01:04:36] Gentlemen, absolute honor to be with you folks.

Phillip: [01:04:38] Yeah. And it was an honor that you spent so much of your time.

Brian L: [01:04:39] We'd love to have you back on the show again and keep up with you. We will definitely let you know when this goes live, and we'll be staying in touch.

Brian R: [01:04:47] You know where I am.

Phillip: [01:04:52] Thank you, sir. Thank you.

Brian L: [01:04:53] Thank you.

Phillip: [01:04:53] Thanks for listening to Future Commerce. We want you to give us some feedback about today's show, so leave that on FutureCommerce.fm in the Disqus comment box. Just click on the episode title, go all the way to the bottom, and you can start that conversation with us. If you're subscribed on iTunes, please leave us a five star review. We need that feedback so that we can get this podcast out to more people. And you can also subscribe to Future Commerce on iTunes and Google Play, or listen right away from your Amazon Echo with the phrase "Alexa, play Future Commerce podcast." Thank you for listening. And until next time, keep looking toward the future.
