We live in a world that doesn't work all the way. Is the amount of automation and machine learning and AI and even robotics that we're implementing in the world displacing people's jobs and having a human impact?
Have any questions or comments about the show? Let us know on FutureCommerce.com, or reach out to us on Twitter, Facebook, Instagram, or LinkedIn. We love hearing from our listeners!
Phillip Jackson: [00:01:55] Hello and welcome to Future Commerce, the podcast about the next generation of commerce and how we can affect its outcome. I'm Phillip and today I am with the Mikes. We're here at the Visions Summit in West Palm Beach, Florida. And we're talking about a really interesting topic. Before we get into it, I'd like to introduce Mike Lackman. Say hello.
Mike Lackman: [00:02:36] I'm Mike Lackman, CEO at Trade Coffee.
Phillip Jackson: [00:02:38] And Michael.
Michael Miraflor: [00:02:39] Michael Miraflor, Chief Brand Officer at Hannah Grey VC.
Phillip Jackson: [00:02:43] And we are diving into the themes of our report and really teasing out some of these heady topics. When we concepted this one at the outset, it was sort of a musing on this idea that the world doesn't work all the way. Like we have a lot of automation, especially on the consumer side. I remember the first Roomba I bought and I thought, "Really? Is that the most efficient path that it can take around the living room?" We have so many of these promises of this future, with The Jetsons setting the standard for the world that we all live in. And for the most part, we live in that world now. Like all of those things. Autonomous cars... It's all there, but it doesn't work quite all the way. And still, there seem to be some latent concerns about what this all means in the future that we're building. So that's really the topic here. We call it our shitty robot future.
Mike Lackman: [00:03:42] Love it.
Phillip Jackson: [00:03:43] But when we actually started doing the research and we got into it, we actually teased out a bunch of really interesting themes. So let's kind of dive into it here today. I think the biggest thing that is on everyone's mind is, is the amount of automation and machine learning and AI and even robotics that we're implementing in the world, displacing people's jobs and having a human impact? I think let's start there and we'll find the thread after that. Mike Lackman, what's your perspective on the amount of automation and how we're deploying it in our businesses?
Mike Lackman: [00:04:19] I'm far from being qualified to talk about the macroeconomic factors of whether it's in fact getting rid of jobs and being a net negative. I guess what I could say is it's creating some dynamics where a much smaller number of people can drive outcomes with a much larger number of dollars. And so I think if we're not really effective at promoting financial literacy and asking some big questions about what's different looking 20 years ahead from what the last 20 years were like, then there's going to be some collateral damage in the system that I think is really bad for everybody involved. And so I think, again, to be somewhat parochial in the way that we manage things at our company, we're very, very big on financial literacy. We try really hard to be a promise-keeping company and we try to then be very clear about what the hazards are with that. And when we hire people, we try really hard to almost convince them not to come based on all the good, bad, and ugly of what's involved in trying to run a business that isn't guaranteed to succeed yet and is trying to do something that we believe to be somewhat exceptional. So I think when you look at those factors, it should be a force for good if managed properly and there are a million ways we can screw it up if we're not really careful.
Phillip Jackson: [00:05:33] This comes up all the time, Mr. Miraflor: that there are inherent biases baked into these types of systems, in particular AI.
Michael Miraflor: [00:05:45] Yeah.
Phillip Jackson: [00:05:46] And it seems that especially in the space in the world that you occupy, there's a lot of venture that's making big bets on the way that AI and machine learning are going to shape the future that we'll all have to inherit. What are your thoughts just broadly on a future that is more biased because we're the ones building it?
Michael Miraflor: [00:06:06] I mean, yeah, it is a question of, can I see the engineering team that is building this product? Because I want to make sure that everything about them that is going into this machine learning project is applicable beyond them and is representative of the population as a whole, especially when it comes to things like computer vision, voice identification, and things like that. But even more so, to take a step back when it comes to automation: when I look at a business plan, or if I'm evaluating a business, I pause when I hear that automation is part of the initial business plan. Because you can automate things to make them more efficient, but you have to know how to do it well in the first place, with a certain level of agency and authority and getting your hands dirty with it. So I think we're getting to the point with some businesses, and maybe industries, where we might be losing a certain amount of knowledge about how to do something from a tradecraft perspective, because we've focused so hard on automating our way out of these otherwise manual things, to the point where, after that older generation that used to do it more manually retires, the generation in charge will literally not know how to do it. To use an analogy, they won't know how to pop the hood and change the oil. And it sounds like I'm talking about woodwork or manual labor type stuff, but I think this applies to digital back end systems as well. There's an entire generation I know of who are steeped in the AdTech MarTech world and never had to get their hands dirty learning how to do the basics of ad trafficking, which no one wants to do. But sometimes you have to learn the mechanics and dynamics of those things, from both a systems level and an interpersonal level, like how it actually works throughout the company, to be able to improve upon that later with a certain level of automation. And I hate to sound like I'm making a sweeping generality. But I've seen instances of this throughout my career, and I'm seeing it more often than not.
Mike Lackman: [00:08:23] And I love that you're pointing that out. I think there are some examples of automation that are particularly well suited to circumstances where you have a very, very high speed and low variable cost of value exploration. Match 3 gaming optimization AI? Cool. Very quickly, I'm watching people play this game and I can tell which version of the gameplay is going to make people more or less engaged, and I can iterate through that at dramatic speed. AI makes a lot of sense in terms of trying to discover what things are worth iterating on. You then try to apply that to physical commerce in some circumstances. You look at a business like ours, where we're going to try to send someone coffee 20 to 50 times a year. Any one of those that goes wrong can offset half of 2 to 3 years of value for that customer. So the cost of value exploration is extraordinarily expensive if you're going to iterate, test, and learn through each one of those customers. And so, to the point you're making, if you're not really connected to those user stories with some pretty reasonably grounded hypotheses, along the lines of when you meet a merchant who knows their customer and they go, "I just don't think anyone's going to buy those shoes, and I don't need a report to tell me that," there's still some wisdom in that. And I think if you kind of excise that from the working population in some of these companies from the beginning, you're not going to introduce enough of the good ideas into those rock tumblers, because there's just too much cost to creating the kind of statistically significant data sets on some of these lower velocity problems that we all have to solve, if we all analogize ourselves to some of the things that are simpler to understand, like what's Google doing with the most common search terms kind of stuff.
Michael Miraflor: [00:10:02] We undervalue that wisdom, to your point.
Mike Lackman: [00:10:03] Totally.
Michael Miraflor: [00:10:05] Yeah.
Phillip Jackson: [00:10:05] But I believe... Let's talk about a different kind of bias. There's sort of a software bias that we have in the world, especially in starting a new business. And maybe that is my own perspective and I'm now telling on myself, but I think in a lot of businesses, rather than thinking of solving the customer problem first, we're actually designing the software stack before we've even got a product. I see this play out in eCommerce quite frequently. It's more about what the software stack for success is and less about the type of product that has such high quality or delivers such high value to a customer that it keeps them coming back for more. And I often wonder... here's a great example: a customer will endure any amount of friction. They'll wait in line outside in the cold for hours. They'll wait on a website that's crashing over and over again for days. They will go through any amount of friction to have a product that they deem worthwhile. And so us just blatantly throwing software at problems, thinking that it's the software that overcomes the deficiencies of an impractical product or business, I think is itself inherently a challenge that we're all facing in our industry. I didn't ask a question, so I don't expect you to have to respond to that. Let's shift gears, because I think another way to look at this in futurism is that we build the future that we believe should exist. We've grown up watching it on TV. We read it in science fiction novels. In many ways, that's the world that we've also inherited. What we found, though, is that if we build that world, we often may not ask ourselves the question, "Is that world one that's equitable or worth having?" Here's a good example. Rosie the Robot from The Jetsons was a mammy archetype: a live-in maid who was of a lower economic class, personified as a robot that has emotions and feelings, and that is inherently problematic. What are some other ways that we could sort of muse on and maybe tease out here?
Mike Lackman: [00:12:21] You trying to go for all the Gen Z audience with The Jetsons? Because they were watching The Jetsons last night.
Phillip Jackson: [00:12:25] No, they were. They know.
Michael Miraflor: [00:12:27] But that's a good IP to reboot actually.
Phillip Jackson: [00:12:30] It could be. It could be. Way too much CGI in the reboot, obviously. But think about that for a second. What are some of those things that we're bringing into being or what we're building in the world that are laden with our biases? One thing could be just the way we design websites. You know, they are inherently probably inaccessible from the start. So what are ways that we can combat that? Is that an organizational shift? Is that an organizational change or does that begin with a person?
Mike Lackman: [00:13:05] I mean, I can start maybe by trying to object to the premise that you started your question with.
Phillip Jackson: [00:13:08] Oh, I love it.
Mike Lackman: [00:13:09] Which was that we create this future that we intended and want to live in. If you take an evolutionary example, an organism doesn't evolve to have the traits that would be best for the organism. If a bird or a wasp or something is going to evolve, it could evolve a machine gun on its back to fight off its predators. That'd be the best thing for it. It's not going to happen. You're going to evolve things that are sequentially pragmatic, that can evolve one step at a time from the places that were happening naturally in that organism's journey. So an insect gets wings because it had a heat diffusion system that eventually, pragmatically, was close enough to start to create something that would become wings, as opposed to jet engines that would have taken it up and off the ground. And so I think when you look at the way that we're going to adapt to these things, the answer for how we should do them in a way that's ethically grounded should probably be rooted in: where were we two, three, four years ago? Where will we be in five years? How do we think about that? Because that's going to be a lot more pragmatic than some of the...
Phillip Jackson: [00:14:13] Way more pragmatic.
Mike Lackman: [00:14:13] Than some of the like, what if we actually step function change to something that doesn't even resemble where we currently are?
Michael Miraflor: [00:14:19] Right. Right. In that way it resembles the move fast and break things mentality of "ship it, let's get it out there, let's see how end users interact with it, and we will fix the deficiencies over time. But net net we will be delivering value out in the marketplace." I'm not so sure that holds anymore. And I think from a consumer perspective, we've seen step changes in technology, like net new things be put out there into the world to initial excitement and a certain level of satisfaction by consumers, and then they kind of drifted off in different directions or kind of stalled. Voice is a great example. Recall, was it 2015/16 when Alexa and Google Home were kind of put out there in the marketplace and the developer community was really excited about developing applications? There were grants aplenty and people put a lot of headspace and thought leadership behind the possibilities of what that could become. And here we are, what, half a decade later? And it's pretty much... and this is, you know, a limitation of the algorithms and maybe the devices and whatever, but a lot of it is being driven by not necessarily a lack of consumer demand, but just the reality that there are only so many things that you want to do from a computational perspective as an end user by using your voice. I'm never going to want to buy a plane ticket. I'm never going to want to do anything more complex than asking a series of questions that is no more than two questions. And in retrospect, it's kind of crazy how we didn't think of that prior. But sometimes maybe it does take that risk of getting it out there and trying to figure out what can be built around it. That's not to say it's the right way. Maybe you can make an easy argument that that was a vast waste of resources and maybe it should have had a bit more of a thoughtful roadmap prior to this technology being put out there in the world. But nonetheless, I mean, now we know. And it still exists and people still use it, just in a more compact way.
Phillip Jackson: [00:16:25] It's utilitarian.
Michael Miraflor: [00:16:26] It's very utilitarian. Like wake me up, set a timer, and maybe a couple of other bit more complex things. But now we know the role that it plays, at least at this point in time. I'm not sure if this directly answers your question, but...
Phillip Jackson: [00:16:42] It does. It's actually the most amazing segue, because the next part of our study was really to try to dive into who is our truest self. And I have found I'm appalled at my own behavior when I go back and listen to my Alexa voice recordings, which is a thing that they should never have allowed me to do, because both myself and my family, including my very small children, speak to this digital assistant in a very dehumanizing way. Not that... well, we have definitely othered Alexa in our house. And when it doesn't do exactly the thing that we wanted to do, it's instant anger, like it's instant frustration. It goes from 0 to 60 immediately. And it makes me want to philosophize: who am I really then? Is that revealing something about me that is latent that I didn't know about? And it makes me wonder if, en masse, we all have that sort of hiding below the surface and it's taken this piece of technology to reveal it. So I guess I would ask you, Mr. Lackman, do you believe that we're just inherently evil and that the digital assistants are drawing it out of us? {laughter}
Mike Lackman: [00:17:59] So, I mean, let me try to root this in commerce for a second, because I think we could have a different conversation over drinks at some point. There is this notion of the uncanny valley, which is actually profoundly human: the most disorienting thing you can have is something that very nearly seems really human and is just that far off. A joke that you have to sit through that really is not funny, where you feel like you have to affect laughter for the person on the other side. These are deeply uncomfortable human things. And so from a commercial perspective, to the extent that there are still companies full of people running companies, and there are still customers you're interacting with and connections that you have, you can assist those authentically and, I think, at least transparently with AI. An example would be, at the scale of our business, we're resolving 20 to 25% of our customer service contacts through AI at this point. But we're doing so very transparently, meaning questions that we can answer through AI, we transparently answer through AI. And part of that answer is, "But if you want a real person, we're here too. Tell us right now." And we're pretty good at making sure it's like, "I'd rather have an answer in 2 seconds" if that's what you're looking for, but if you really want to talk, we're here to talk. That's not super disorienting. Acting like this thing is kind of the real thing and yet I have to treat it... Like you don't have to treat a keypad interface in a driveway with the dignity of another person. And so asking a consumer to treat this AI bot that's pretty poorly built as if it were a person with dignity and respect, I think, is kind of a false paradigm.
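For a picture of the pattern Lackman describes, here is a minimal, hypothetical sketch of AI-first support with a transparent human escape hatch. The FAQ table, function name, and wording are all invented for illustration; nothing here is Trade Coffee's actual system.

```python
# Hypothetical sketch: answer automatically when we confidently can, label the
# answer as automated, and always offer a human. Not Trade's real system.

FAQ_ANSWERS = {
    "where is my order": "Your order is on the way and should arrive soon.",
    "how do i pause my subscription": "You can pause anytime from your account page.",
}

def route_ticket(message: str) -> str:
    """Answer via automation when possible, and always offer a real person."""
    normalized = message.lower().strip(" ?!.")
    answer = FAQ_ANSWERS.get(normalized)
    if answer is None:
        # No confident automated answer: escalate to a human immediately.
        return "Connecting you with a real person now."
    # Transparency: label the automation and offer escalation up front.
    return (
        f"[Automated answer] {answer} "
        "If you'd rather talk to a real person, just reply HUMAN."
    )

print(route_ticket("Where is my order?"))      # fast automated answer
print(route_ticket("My grinder arrived broken"))  # straight to a human
```

The design choice mirrors his point: the automation never pretends to be a person, and the escape hatch is offered before the customer has to ask for it.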
Phillip Jackson: [00:19:35] Perhaps. Perhaps. Yeah.
Mike Lackman: [00:19:37] So I don't think it's covering some kind of deeper, nefarious, misanthropic nature that we have inside of us that then we tip over. I think you can get frustrated with a soda can that won't open. It doesn't make you a bad person.
Michael Miraflor: [00:19:50] That's until the soda can develops a consciousness of its own and is able to enact punitive measures for the way that you're treating it. It'll attack you.
Phillip Jackson: [00:19:57] Do you agree or disagree?
Michael Miraflor: [00:19:59] I mean, from a consumer perspective, one of the most frustrating things about customer service is when you end up on the phone with a system that is trying to replicate a human voice, human interaction, or human way of answering questions. You're a robot. I know you're a robot. You don't have to try to trick me into thinking that you're a person, because you're just not. I would rather hit a series of numbers, in the same way that when I'm typing a search query into Google, I'm trying to just hit the right keywords to get an answer, versus interacting with this AI as if it were human. I do love what you've done, Mike, where you get to the point where whatever heuristic you set up to help answer questions quickly is not it, and I just want to talk to a human. Sure, sometimes I just want to talk to a human. I mean, I think out of frustration, that's the first thing I say to a human after I get past whatever robot is trying to talk to me: "Thank God it's a human," or "It took a little too long. Can you tell someone, whoever's in charge of your department, that it took too long for me to get to a human," or "I had to Google 'How do I speak to a human?'" I think there's an entire website.
Phillip Jackson: [00:21:11] There's a whole website.
Michael Miraflor: [00:21:12] There's a whole website. "Here's the button combination," or whatever, to get past it. Thinking of the everyday consumer, I even overhear my mother struggling with this when I hang out at her house and she's trying to get through customer service. My mother is the last person in the world who deserves to be confused by a poorly programmed AI just to talk to someone to answer a 30 second question. And I hate that there is a bit more prioritization on improving that bit of it versus maybe counterbalancing it with a warmer way of identifying and addressing consumer needs. I mean, in my lifetime, there was certainly a time when I could call my bank and talk to a human. I get that to scale that becomes very expensive. But at the same time, my feelings towards a brand where I received an altogether or partially negative customer experience because of a not great deployment of AI... I just don't forget it. To me, it feels the same as going to a store and having bad customer service. You hold a grudge a little bit.
Phillip Jackson: [00:22:29] It sullies your image of the brand. Yeah.
Mike Lackman: [00:22:32] Yeah. But it's like putting on a garment that doesn't have enough stitches in the sleeves to hold it together or doesn't hold up under washing. It's like, well, I'm not going to pay for this again.
Michael Miraflor: [00:22:39] Yeah, yeah. Why would I?
Mike Lackman: [00:22:41] Exactly. So if the service is an extension of that, it's like, "Oh, you're just trying to cheap out on this thing I need from your company." It's no different than getting the buttons on the shirt the right way.
Phillip Jackson: [00:22:50] I so appreciate that you're rooting it back into commerce. I do believe that's leading to a point here: so much of what we interact with is search queries, and Alexa is not the only place where we interact with voice search. Voice search has proliferated into voice typing and so many other things. So a lot of predictive search is being tailored based on our interactions with AI. I really, truly believe, and our survey respondents believe it as well, the majority of them, that their tastes are being shaped by AI. They believe that their tastes are being shaped by algorithms that are in every product they use today. And some of them are more protective than others. Spotify, for instance, is very protective of that algorithm because they want it to be very taste driven for them. The challenge is when a lot of our tastes for the things that we buy in this world are shaped by the same algorithms and they become reinforcing functions. "Oh, you like this? You must like that." And I believe that starts to narrow in on things. Again, coming back to our default state and our default way of being, I wonder if a lot of the challenges around sustainability or wastefulness, maybe the rise in some of the fast fashion, are really just fueled by our nature and our being wanting to consume and consume and consume and consume. And AI is tapping into that and helping reinforce it.
Mike Lackman: [00:24:24] I mean, maybe I could start somewhat high level.
Phillip Jackson: [00:24:27] Take it.
Mike Lackman: [00:24:30] We need to think about these things in terms of the counterfactual. We need to make a discrete choice between the kinds of things we want to do, not kind of throw tomatoes at the things we don't like in what we see. And so I think the counterpoint to "AI is shaping what we're experiencing" is an element of somewhat tyrannical or oligarchical editorial authority that would then shape what we're going to see beyond what we would vote on ourselves. The element of everyone watched the M*A*S*H finale because a smoke-filled, literally smoke-filled, room of guys in suits in New York decided this is what everyone was going to watch. And the government said there are only three channels that are allowed to go across the public airwaves, and "I'm even going to tell you what we consider to be prurient and how many beds are allowed in the bedroom," and real censorship stuff. That's the counterfactual to just letting the market decide what they want to watch. And I don't think there's a great choice between either one of those, especially when there isn't as homogeneous a sense of mores or institutions or things that we are willing to blindly trust with that kind of editorial authority.
Phillip Jackson: [00:25:31] Wow.
Mike Lackman: [00:25:32] So to bring it back, then, to how you sell coffee, because we do a lot of that. If you try to sell one SKU to people who like good coffee and like variety in that, you won't create enough lifetime value to run a business. And so we need to sell a variety of new things. I also can't ask people to be curating the kinds of coffees I'm sending them 40 times a year, because now I'm making them do work for us. Well, with the cost of value exploration being pretty high, you can bet that our algorithms get you into a routine. And unless we actually see that that's leading to degradation in delight, evangelism, loyalty, those kinds of things from us, then we're not going to go too far out there trying to get you to try this kind of coffee that has a one in three chance of being really disruptive and a lousy experience for what you're doing. Now, if we can move that harmoniously into what we're doing, then let's make that happen. But you can absolutely bet that there are some elements where we're winnowing this thing down and getting you into a track. But part of that's because we've tested the counterfactual, and way more people cancel when you jar that thing up. Now I guess that gets to a point, then, from the last podcast talking with Brian: you've got to be okay with the idea that dopamine is a drug. It is.
Michael Miraflor: [00:26:41] Put that on a t-shirt.
Mike Lackman: [00:26:42] And as long as you're okay with that, the way you... We all drank some wine last night. If we started this podcast by drinking a whole bottle of wine each, it'd be really inappropriate. So wine isn't altogether a problem. It can certainly become a problem. So I think if you're wielding these things we're talking about with completely irresponsible velocity in terms of the consumption of dopamine at the core of those things people want, there are probably going to be some really cataclysmic consequences. Because we could also talk about running a business where we just sell tons of cocaine all the time. And I'm sure we could look at all kinds of neat commercial things, like how good the customer loyalty is and the economics of the last mile. But it would not be an ethically responsible thing to do. So if you look at it through that lens, you need some things that are not necessarily going to be figured out by each particular company to kind of hem those in with mores, if you're going to talk about not just letting the crowd vote on what it's going to see and getting into little rat holes with that stuff.
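To make the routine-versus-exploration tradeoff Lackman described a moment ago concrete, here is a minimal, hypothetical sketch in the epsilon-greedy style: mostly repeat a customer's proven favorites, and only rarely explore, because a jarring shipment is expensive. The coffee names and the 5% explore rate are invented; this is not Trade Coffee's actual algorithm.

```python
import random

# Hedged sketch of the explore/exploit tension: exploit known favorites most
# of the time, explore rarely because the cost of a bad shipment is high.
# All names and the explore rate are illustrative assumptions.

def pick_next_coffee(favorites, candidates, explore_rate=0.05, rng=random):
    """Mostly repeat proven favorites; occasionally try something new."""
    if favorites and rng.random() > explore_rate:
        return rng.choice(favorites)   # stay in the customer's routine
    return rng.choice(candidates)      # rare, deliberate exploration

favorites = ["washed Ethiopian", "Colombian medium roast"]
candidates = ["natural-process Kenyan", "anaerobic Costa Rican"]
for _ in range(3):
    print(pick_next_coffee(favorites, candidates))
```

The design choice mirrors his point: when one failed exploration can erase years of customer value, the explore rate has to stay small, and you widen it only when the data says the routine is getting stale.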
Phillip Jackson: [00:27:39] Wow. So my pushback to you would be: I tend to agree in the micro and the practical. I question broadly the recursive human effect of having our own tastes redefine our own tastes. And I think that is what we have been dealing with over the past five or six years as we become way more polarized in our society, is that we're becoming... We are inherently tribal. We just happen to keep finding new tribes to belong to. And so I'm wondering, Michael, give me your perspective. Do you believe that there is an inherent challenge with us sort of applying AI and machine learning in a tastemaking fashion? Or are there ways that it can be inherently bad net net?
Michael Miraflor: [00:28:39] Yeah. Let me try to answer it this way. As a former trend forecaster and Head of Innovation at an agency where I had clients who would depend on me to look around corners, this is probably the most interesting and difficult time to be a trend or future forecaster, especially when it comes to youth lifestyle trends. TikTok has birthed 1,000 communities of trends that go in every sort of direction. And it's very difficult sometimes, because it is self-reinforcing and you're not sure which of these trends is relegated to TikTok and social media or if it actually exists in the real world, and there's the debate of whether that's a good or bad thing if it goes one direction or another. There are just so many sub-variables that determine whether something is important or not. And you have to think about it, if only for the sake of being able to address the cultural changes that are happening, because a lot of this is a reflection of the mood of a generation. And if we are kind of protectors of that generation, because we're able to enact policy and whatever, you have to get a gauge on what young people are thinking as expressed by what they're wearing and what they're doing. If you can't really get a sense of that, because the dominant form of social media is encouraging the proliferation of micro trends that really aren't a reflection, but they send a smoke signal, then you might have something a bit problematic on your hands. And I'm not saying that going through social media should be any sort of leading indicator for how policymakers should see the mood of an entire generation or a populace. But it counts and it matters. From a commercial perspective, it makes it extremely difficult for brands to forecast what patterns and materials they should be investing in, like 18 to 24 months ahead of time. Because there was a point in time when the editors at the big fashion magazines, and things like Pantone and WGSN and whoever, would more or less push and pull and kind of dictate what is and will be important, to a certain extent. Obviously, it's an interplay with culture, but now you're seeing those companies have to reconfigure their ways of working to accommodate this new reality. And it sounds like I'm picking on TikTok, but I just think it's so dominant right now, and it's so black box, and it's so effective. We're users of TikTok, and we might have kids that are on TikTok, and that to me is the singular example of how much a strong algorithm can affect society at large. From a commercial perspective, but also, I feel, from a societal perspective. I can't walk through a mall without bumping into three sets of teenagers dancing in front of the camera. And that's real-world behavior.
Mike Lackman: [00:31:36] So I'm curious, because you understand these aesthetics way better than I do. I'm a Luddite when it comes to some of these things. How do you handicap the potential downsides of that against something like, when you go back now and watch this Netflix documentary about the history of Abercrombie & Fitch? Where the absence of that is necessarily complemented by the other point of just, yeah, it was these six pretty monstrous people saying, "Hey, you're all going to wear plaid. Deal with it."
Phillip Jackson: [00:32:03] Yeah, yeah.
Michael Miraflor: [00:32:04] It's hard. Okay. Again, I feel like I'm answering questions indirectly, but I have to come to a conclusion by talking my way through it, if you don't mind.
Phillip Jackson: [00:32:13] Please.
Michael Miraflor: [00:32:14] Let me throw a question back at you. Do you believe that there is coffee that is objectively better than other coffee? Or do you think that's all subjective based on palate and lived experiences and what have you?
Mike Lackman: [00:32:24] It's totally in the eye of the beholder.
Michael Miraflor: [00:32:26] Totally in the eye of the beholder. But you can say that there is coffee that is more x, y, z than whatever? There's bad coffee in the world as well. Right? We can say that?
Mike Lackman: [00:32:37] Yes.
Michael Miraflor: [00:32:37] Apply that to pop culture and fashion. Are there styles that are objectively better than others that exist at the same time in the marketplace, or is it in the eye of the beholder? Or is it subjective? Lived experiences? Whatever. There was a certain point in time when you didn't really have to think for yourself too much, because you could rely on the editorial prowess or bias or just authority of Anna Wintour and her contemporaries who run the fashion magazines that basically dictated what American style and fashion would be. That's all falling apart to a large extent. They're still around. They're still influential, but to a much lesser extent. Without there being a guide to what is good, and with the question of "Can something be objectively better than something else?" kind of up in the air, in part because we know that there is an infinite number of available styles, at any given point we might be exposed to 500 trends that maybe five years ago we wouldn't have been aware of, because they hadn't been expressed on a short form video social media platform. It creates confusion, and it makes it very unpredictable to know in which directions culture will run. And then the secondary effects of that on culture will directly affect things like commerce. Maybe a third-order effect is that it affects things like policy. I mean, I don't want to dive into politics, but this whole Libs of TikTok thing that has become quite an issue is another expression of how exposure to and refinement of an algorithm, where you're basically in your own echo chamber, can really affect the way that you think about things outside of that initial subject matter. So when I see things on TikTok, like very young people talking about... Oh, let me take the flip side of what I was going to go into. The uptick in, like, Adderall prescriptions or something, because that's been made so available in part because of the way that TikTok can be gamed in a certain way to amplify that message. Which ostensibly could be seen as a good thing, because you're talking about mental health out in the open, but there's kind of an underlying message there that, "Oh, you can easily address this by doing this direct to consumer kind of pharmaceutical thing." I think that's, net net, not a great thing, right? So again, I don't know if I've answered the question, but I kind of ran in circles there.
Mike Lackman: [00:35:17] I guess the point I'm trying to get at, and it is interesting, is that this exists in this kind of tripartite way: us as a society, all the way down to just hard commerce stuff.
Michael Miraflor: [00:35:30] Yeah.
Mike Lackman: [00:35:30] There is an element where, even when imperfect, mores take a very long time to build and they're very, very quickly ruptured.
Michael Miraflor: [00:35:38] Yeah.
Mike Lackman: [00:35:38] And from a standing on the shoulders of giants perspective, you need to be able to build on those things being taken for granted to make some of those kinds of progress things, those efficiency things you're talking about. The notion of those things being predictable. And so I guess the interesting piece here is that when we talk about this dystopian AI future, where we're all going to get pushed around, this and that, the alternative to that has to be an institution with enough trust and credibility that we're willing to let them become tastemakers: editors of newspapers, curators of fashion, pairers of coffee, those kinds of things.
Michael Miraflor: [00:36:10] You're talking about a return to the monoculture, and is that even possible anymore? The genie's out of the bottle.
Mike Lackman: [00:36:17] I do think the genie is out of the bottle. But I guess the question is, if we're asking about the problem with our robot future, at some point, the counterpoint to that robot deciding, "Well, you're all voting with your dollars. I'll show you what you'd like to buy," is someone going "No, this is what's for sale. Buy it or don't." And I think it's going to require a lot of trust for those kinds of systems to emerge as a prevailing option against what you're positioning this whole conversation around, which is, unchecked, we're all just going to get down these infinite loops that we find ourselves in.
Phillip Jackson: [00:36:52] And you said stand on the shoulders of giants. I would say maybe, you know, tread on the casualties of war. I think that there's a challenge here. Consider the last two years that we all just collectively lived. I don't know that we've really understood, or will understand for a long time, the psychological impacts on very young children who have had to sit in distance learning for hours upon hours upon hours a day. And in my own experience with my children, there is a distractibility that comes along with it that's not inherent in being in person, in a classroom, and having the social norm of everyone sitting, heads up, paying attention. And what I have experienced is that my children's tastes were shaped directly in that period of time by YouTube Kids and by the other various games platforms, in that they became much more aware that there are sort of rabbit holes that they can fall down into, to the point that they're aware now that they have trained something like YouTube. They are aware now that their experience of YouTube is very different from their friends'. And that is not a comforting thing for them, because they never lived through a world where there was a monoculture. For them, it's like, "Why don't I have the same experience as someone else?" And that in itself is very isolating. So for us, it might be refreshing. We've lived through a world that was different. For them, their experience is: there is no collective we. I don't have that we. I'm not living that we. There is only me. And I think that can be, in and of itself, objectively kind of terrifying. You feel alone, and your experience... you know, Camille's experience is different from Samira's experience is different from Hattie's experience. They're all having their own little perspective on the world, because for many years their only window into the world was through a computer screen. And that's the thing that I feel like, when it comes back to commerce and you're talking about niche-ification, Michael, we don't, I think, know how to reckon with the idea that we have to sort of deliver on some kind of now heightened expectation that we can talk to all of those things all at once. It's not just about brands and forecasting. It's about even just delivering digital experience. There's no personalization platform that's capable of doing this. There's no CDP that can power this. There's no stack.
Michael Miraflor: [00:39:26] You can't algorithm your way out of it.
Phillip Jackson: [00:39:27] No, but we definitely algorithmed our way into it.
Michael Miraflor: [00:39:30] Exactly.
Mike Lackman: [00:39:32] If you can indulge me...
Phillip Jackson: [00:39:36] Please...
Mike Lackman: [00:39:36] And we can get a little heady here, but let's just have some fun with it. Sometimes we think of history as either kind of a straight line degradation or a straight line, teleological, up and to the right.
Michael Miraflor: [00:39:48] I think about it as a pendulum.
Mike Lackman: [00:39:51] Or it can be a straight line with some aberrations that are unrelated to one another. Is it even a line graph versus just a scatter? And I guess my point with that is, let's take Italy in the 19th century as an example, since we talked about tribalism and all those kinds of things. That was a country that, from a geographic and sort of Roman institutions perspective, was very, very ancient. From a country perspective, it was very, very incipient, very, very young. And they have this notion of campanilismo: that society is literally night and day different from the earshot of one campanile, or bell tower, to another. And World War I was the first time you brought all these different people called Italians into the same place. They literally could not understand each other. Most of them spoke languages that were more different from one another than Spanish is from French or Spanish is from Italian. So you could argue that we saw an aberrational element of monoculture, and these kinds of things that we're trying to hearken back to, that we're saying we're moving away from... if you see it more as an aberration than as us going on a different course...
Michael Miraflor: [00:40:59] That's interesting.
Mike Lackman: [00:40:59] ...you could find where this is heading by looking into the past in terms of, well, what was Italian society like in 1845, where you have someone from rural Campania near Naples and someone from the city of Modena? How different are they? Is there anything in common with those folks? How are their tastes shaped when these are completely different and fragmented experiences? There is no shared "we all watched Cheers or Frasier."
Michael Miraflor: [00:41:24] But there was a lot of naivete. There wasn't an awareness that there was a multitude of communities that were so different from yours outside of your immediate surroundings. So it's unlike the experience being lived by Phillip's kids, where you're very aware that you might belong to a couple of subcultures that your friends don't, and that dissociation is what creates that kind of feeling of unease. That's maybe the difference; that availability of information is the difference between now and 19th century Italy. But I totally hear what you're saying, where maybe traditional broadcast media is the outlier and the aberration, and the monoculture that was created in post-World War II America and the West is more the exception to the rule.
Mike Lackman: [00:42:18] Yeah, I remember Robert Dahl had a thing on the writing of the Constitution, that it was implicitly tyrannical at some point from a sovereignty perspective. It was supposed to be "We The States," and they just decided, nope, it's "We The People." And "I'm speaking on behalf of everybody. This is what we all agreed to." And it was ultimately somewhat tyrannical, because the commission of the people that went to Philadelphia was to write about We The States. And so I think there was a dark side to those things that we do see as unifying, which is that they tend to have some pretty strong editorial, perhaps even tyrannical, authority associated with them to be able to set those standards in place. And I guess my point in jousting with you on that, when we think about AI, is that the counterpoint to letting the system just run the algorithm unchecked is that someone has to put guardrails on it. And when you look at the craziest, scariest parts of AI, like going to a machine that you can now tell, "Write me 20 pages about the life of a sea turtle in the style of Ernest Hemingway," and it will do it really well. Well, now you have to hem that in with, "But don't talk about this," and "This is immoral," and "Don't put those things in." There's going to be some editorial authority on that stuff. And I think as we all implement that in our businesses, I hope the stakes are way lower, but we're going to reckon with those same kinds of dynamics.
Phillip Jackson: [00:43:31] So you're actually naturally gravitating to our last point, which is how do things like art and information that are generated from machine learning begin to reinform the way that we interpret the world? Here's a good example. Dall-E is this new...
Michael Miraflor: [00:43:51] Oh, my God. Mind blowing.
Phillip Jackson: [00:43:52] Yeah. It's this new generative art AI engine that is a variant of GPT-3, and it can generate images en masse based on text commands. It is kind of incredible when you look at a prompt like "a chair in the shape of an avocado." It's going to give you 6 million variations of that. I'm curious how that begins to shape our perspective: whether generative art in that way is then sort of delineated or devalued or seen as other, as something else. Or whether we begin to mimic it, if there is some sort of human nature for us to sort of engage in mimetics and say, "This is what is connecting." We will reinterpret that in our own way and make it very human. That becomes a really interesting cycle of reinterpretation of something that ultimately learned from us to begin with. Now we're learning from it.
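For readers curious about the mechanics, here is a rough sketch of generating several variations of that avocado-chair prompt through OpenAI's current Python SDK. The model name, image count, and size shown are illustrative choices, not anything specified in this conversation, and the SDK has evolved since Dall-E first launched.

```python
# Rough sketch of prompting Dall-E via OpenAI's Python SDK (openai>=1.0).
# Model, count, and size are illustrative; requires OPENAI_API_KEY to be set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
result = client.images.generate(
    model="dall-e-2",
    prompt="a chair in the shape of an avocado",
    n=6,                 # several variations of the same prompt
    size="512x512",
)
for image in result.data:
    print(image.url)     # each URL is a distinct generated variation
```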
Michael Miraflor: [00:44:57] I haven't encountered a technology, much less an AI, that has stunned me to the degree that the output of Dall-E 2, the works of art, have. Incredible, incredible stuff. And in a way, because it's generative, it's not super sharp and precise; it's not meant to be. With some of the images, it's as if you're looking into someone's mind while they're having a dream. If there were an uncanny valley representation in generative art, I think this would be it, because it's like, "Oh man, that feels like I'm looking into a dream memory or something like that," which makes it pretty terrifying in a way. And I think that is such a provocative question: the outputs are a combination of billions of inputs generated by humans, but in turn, even the aesthetics of the output I can very easily see being applied to things in the commercial sphere. I don't know if that's a good thing or a bad thing.
Phillip Jackson: [00:46:01] Neither do I.
Michael Miraflor: [00:46:01] It's just so bizarre to even think about. And it kind of hit us like a truck. I know that type of AI has been developed and refined over the past decade, but this is the first time that we've seen it put out there in such a way. Whereas like other things that we've been worrying and talking about for so long, like AR, VR, like voice and stuff, it's like it's been like a slow crawl and then there's product and whatever. It's like Dall-E 2 was like, "Oh, this is possible. What are the ramifications going to be?" Question about art... Yeah, there will be. There will probably be a branch of artists who are just like, "I am directly inspired by the output of this AI," and then their output will just be fed back into the machine.
Phillip Jackson: [00:46:39] Yeah. Correct.
Michael Miraflor: [00:46:40] So interesting.
Phillip Jackson: [00:46:41] Yeah.
Michael Miraflor: [00:46:41] So, so interesting.
Mike Lackman: [00:46:44] Yeah. I think again though, I might just draw on a historical analog...
Phillip Jackson: [00:46:48] This is why you're here.
Mike Lackman: [00:46:49] I'm from Philadelphia, and among the historic buildings around Independence Hall, one of them is Carpenters' Hall. At the time, carpenters actually were some of the wealthiest artists and tradespeople in the colonies, because of these very artful crown moldings that they put into all of that era of revolutionary architecture, or colonial architecture, as we call it. It was just an extraordinarily differentiated thing to be able to make that molding set. And you had a shop, almost like when you hear about Michelangelo's shop or something like that: his assistants made this thing; it was part of his shop that was doing that. And then it became commoditized. Toll Brothers can throw up a couple thousand homes with the same crown molding set. And what we replaced that with were forms of things that we found scarcity and value in that could not have possibly been comprehended in that time. And so I think it's very scary to look at the exhaustive set of all the things we can comprehend, see something that does seem to have real, incredible scarcity, and then watch it go away, because you feel like it's going to be replaced with nothingness. But it's possible that this is the one time history is different, or it's possible that the kinds of things that become less scarce because of GPT-3 are replaced with things that become scarce that we couldn't have possibly comprehended in our current situation. And I think it's going to be... Sometimes it's not very interesting. If you analogize that to World War I, it can happen in a way that's very cataclysmic when you have those kinds of epochal shift things. And there's no question that this is going to be one of those big shift things if GPT-3 kind of outputs are a part of our day-to-day lives.
Phillip Jackson: [00:48:18] I personally haven't experienced GPT-3 with the kind of weight or gravity that I think a lot of folks in my social circles have. I look at the 20 paragraph essay it generates and I'm like, that's okay. I don't read 20 paragraph essays that other people write anyway. So what does it matter if the computer generated it or not? I tend to wonder if there are ideas that can begin to alter the way that we're perceiving the world, that maybe could be enlightening. It doesn't all have to be shitty robot future. It could be inspiring robot future. I believe there are questions worth asking, because as we reinterpret it, I can see an application, in particular from Dall-E. It is being used in a way to ask, well, do we need to shoot every single colorway? Do we need to shoot every single...? Do we need to shoot lifestyle in 30 locations? No, not anymore. And that maybe leads us back to our first point of what is the human impact? Is there a creativity angle for the artist who actually makes all of that work, or meaningful work for an artist or model, who are then displaced by this generative algorithm? These are all, I think, impacts that have sort of circular questions that nobody really has an answer to yet, but they do affect commerce. Any last thoughts? I would just give a volley. Anyone?
Michael Miraflor: [00:49:49] What did you just call it? Not this dystopian future but...
Phillip Jackson: [00:49:52] Inspiring robot future.
Michael Miraflor: [00:49:53] Inspiring robot future. I like that better than shitty robot future.
Phillip Jackson: [00:49:56] Okay, I like that.
Michael Miraflor: [00:49:57] Keep it positive.
Phillip Jackson: [00:49:58] Yeah, I love keeping it positive. And thank you so much for joining us. Thank you for watching. Thank you for listening to Future Commerce. You can find more episodes of this podcast at FutureCommerce.fm.