When it comes to AI, machine learning, and big data, there’s really no one I’d rather learn from and talk with than Dr. Michael Wu, recognized industry expert and Chief AI Strategist for PROS, a provider of SaaS solutions optimizing shopping and selling experiences. I recently had a great livestream conversation with him and my CRM Playaz co-host Paul Greenberg, where we took a deep dive into where we are with AI and how it is helping companies make it through the last 18 months.
Below is an edited transcript of a portion of our conversation that touches on the role of ethics and inclusivity as more business interactions and transactions go digital, providing AI with the data it craves to predict and recommend things back to us. To hear the full conversation, click on the embedded SoundCloud player.
Paul Greenberg: How should AI strategy be integrated into the broader business strategy?
Michael Wu: Yeah, I think this is actually a major problem in the industry. There is no doubt that AI is going to be a very pervasive part of our lives moving forward. Whether it’s in business at work or just in our daily lives, it’s going to be there, it’s going to be part of it. What’s preventing a lot of businesses from jumping on board is this: there are consumer applications of AI and there are also business applications of AI. For consumers, it automates daily tasks. For example, you could automate your home routines using smart homes and all that stuff. If it makes a mistake, it’s a minor inconvenience. If you ask Siri something and it doesn’t understand you, you just rephrase it. It’s a little bit of inconvenience, you lose a few minutes of time, and it’s no big deal. So the risk and the cost of a wrong decision by the AI is small.
But that’s totally not the case in business. When a business makes a wrong decision with AI, it could be millions of dollars lost, or a PR crisis. It could be the loss of lots and lots of customers who will never come back to you. The risk and cost of a wrong decision is much higher in a business setting; therefore, businesses are reluctant to jump onto AI. And a lot of the reason is actually not the technical component of AI. A lot of it has to do with the non-technical components, for example, the design and the adoption. If you buy AI technology and people don’t use it because they don’t trust it, because they’re afraid of it, nobody gets any benefit.
If you have AI but you can’t explain it, so people don’t trust it, then you have another issue too. So there are a lot of these non-technical issues, for example, around user experience, adoption, legal, and design. These are issues surrounding the AI technology that need to be addressed in order to move this whole industry forward.
Small Business Trends: Can you maybe talk about how people should be looking at AI in terms of improving things like logistics, fulfillment, and order management? Because I think that has an outsized and still growing impact on customer experience, even if it doesn’t feel like the most direct part of the customer experience.
Michael Wu: Well, I think it does have a direct relationship to customer experience, because let me ask a simple question: what do you think customer experience is? To me, customer experience can be understood in very, very simple terms. It’s the difference between what the company delivers and what the customer expects. If the company delivers more than what the customer expects, that’s a good experience, a delightful experience. If the company delivers less than what the customer expects, then you have a disappointed customer. It’s really simple if you look at it that way. So I think the customer’s expectation is the piece we need to focus on, because that is the piece that actually changes very dramatically.
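Wu’s definition is essentially a signed gap between delivery and expectation. As a toy illustration only (the numeric scale and function name are invented for this sketch, not anything from PROS):

```python
def customer_experience(delivered: float, expected: float) -> str:
    """Wu's framing: experience is the difference between what the
    company delivers and what the customer expects."""
    gap = delivered - expected
    if gap > 0:
        return "delighted"      # delivered more than expected
    elif gap < 0:
        return "disappointed"   # delivered less than expected
    return "neutral"            # expectations exactly met

# The same delivery produces opposite experiences as expectations shift:
print(customer_experience(delivered=8, expected=6))   # delighted
print(customer_experience(delivered=8, expected=9))   # disappointed
```

The point of the sketch is that the company’s side (`delivered`) can stay constant while the experience flips, because the customer’s side (`expected`) moves.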
Paul Greenberg: Right.
Michael Wu: In the post-pandemic era, everything is moving to digital and things are becoming more transparent. People can actually see every other vendor online. So it’s actually very easy for a customer’s expectation to change. For example, if a vendor makes you an offer, you may have one customer experience, but the minute you see another vendor offering the same thing for, say, 10% less, immediately your customer experience changes. This transparency makes customer experience really challenging, because customer expectations can fluctuate and are so influenced by the environment. Even if you wake up on a bad day when it’s raining or something like that, you could have a worse customer experience. So to keep up with these ever-changing, constantly changing customer expectations, you need something like AI to help you online.
In the traditional world, when you’re actually dealing with a human, humans are very good at gauging customers’ expectations. If I’m talking to you face to face, your body language tells me something about whether you’re happy or not happy about what I’m offering. But when you’re talking to a machine, when you’re online, when customers are not engaged with an actual person, it becomes really challenging to gauge what the customer’s expectation is. So how do you do that? To do that, you need a live stream of real-time environmental and contextual data about the customer: what channel they’re coming in on, which region they’re in, all this other contextual data about the customer. That will help the AI understand the customer on the other end. But the key thing to recognize in this age is that even though we have big data, there’s never enough data.
I think there is big data in totality, but any time we’re dealing with a single customer or a single contact, the data that’s available to help you make that decision is dramatically reduced. There is lots of data out there, and a lot of it is actually useful in some contexts. But in this particular context, at this moment, when I’m dealing with this customer at this time, the relevant data that will help you decide what to do is actually fairly small. So the first key thing is to identify that data. The second thing is, when you say there are these new channels where information is coming in: one of the beauties of AI is the ability to learn. AI has a component inside called machine learning that enables it to actually learn from data. That allows it to adapt. When you have learning, you can adapt. This is actually the same as how a human works. When you see this new stream, say, TikTok, coming in, at first you say, “Let’s ignore it.”
But after a while, you see TikTok, TikTok, TikTok, and then you say, “Oh, okay. Maybe I should pay attention to that.” You learn. You see that there’s more coming in, more frequently, so it becomes more and more relevant. Then you actually change your model of how this world operates and put more weight on this particular channel than on the traditional channels you had been paying attention to. This is exactly the same way AI operates. At first you put very little weight on the new channel, but as it comes up more and more frequently, you revise your algorithm to put more and more weight on that channel, if it turns out to be relevant. Maybe it’s very loud and very noisy but actually not relevant; then you would keep the weighting, the impact that channel has, at a fairly low level. It’s a learning process, and the learning is enabled in these AI systems through machine learning.
Small Business Trends: How is that impacting the ethical use of AI? Are we seeing any convergence or divergence? More data, less ethics? Or more data, more ethics? Do they have a relationship at all? Because it seems to me the more data we find, the more tempting it is to use this stuff in any way, like the old Malcolm X line, “by any means necessary.” But is the ethics behind AI getting any better as we get more data thrown at it?
Michael Wu: I think there’s certainly more awareness of it. Right now people talk about fairness and transparency, about this black box issue: with this AI, we don’t know how it’s making decisions. So it is a challenge, but it’s bringing more and more people to pay attention to this area of ethics, fairness, and accountability. All this extra big data we’re using is very tempting. But I think there needs to be an opposing force to challenge these data scientists, a healthy tension between the two groups. It’s not that the AI scientists should dictate everything. Advancement should not drive everything, but it’s not all about regulation either. The two groups need to have this kind of healthy tension, where we raise the issues we do worry about.
And if you don’t raise the issue, scientists will not solve it: “It works. Why should I address this issue?” If you raise the issue, then more and more scientists will become aware of it and say, “Okay, that’s a challenging problem I need to address to make this better, better for humanity, better for everyone.” That’s why that healthy tension needs to be there. To answer your question, yes, more and more of these issues are being raised. Right now there are more questions than solutions, but the pendulum will swing around. Previously, it was all about AI advancement, so the pendulum was all on one side. Now people are aware of the power of AI and all these ethics questions. I’m myself almost half an ethicist, saying, “Hey, you can’t just use AI for whatever you want. You need to look at the ethical use of your data so that you don’t marginalize any group.”
There are actually a lot of voices on that side now, and that raises a lot of concern. We don’t have a solution yet, but more and more people are paying attention to addressing those problems.
Small Business Trends: Well, with ethics itself, whoever’s programming the AI, you’ve got to go by their set of ethics, and not everybody’s set of ethics is the same, so I know that’s going to be difficult. And the other thing, I think, is that the population of folks actually doing the data science is close to being a homogeneous group of people. It’s not very varied. It’s not very diverse. It’s almost like not only do you need ethical AI, you need inclusive AI in order for it to be a little more representative, and I think that may be the thing missing the most: the set of people doing it.
I’m so glad to hear you say that it’s changing, because there was that great Netflix documentary where they talked about facial recognition, and the AI couldn’t properly detect a Black woman’s face, and part of the reason was that the folks who were creating the AI didn’t look like the Black woman it was trying to detect. That’s been one of the missing ingredients in this.
Michael Wu: Yes.
Small Business Trends: Not to say that’s the only reason ethical AI is so hard to do, but when you don’t have certain folks or certain pieces represented in the creation of the technology, you’re automatically going to be losing something that may be very important to it being as successful as it should be.
Michael Wu: Totally. I think that’s actually the big conundrum in AI. With the data you use to train the machine, you don’t actually have the entire world’s data; you use a sample of data. That sample of data is selected, and even though you think it’s random, sometimes there may be biases in there. The inherent bias in the data you select to train the AI will bias how your AI behaves. If you use this AI only in the context where that data was sampled from, only on the population you sampled the data from, there’s no problem. The problem is that we very often over-generalize this AI to a much bigger population, and that’s when you have problems with not including these other perspectives.
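Wu’s sampling-bias conundrum can be sketched with a toy example: fit the simplest possible “model” (a mean) on a sample drawn from only one group, then apply it to the full population. All the numbers and group names here are invented purely to illustrate the effect:

```python
import random
import statistics

random.seed(0)

# Full population: two groups with genuinely different behavior.
group_a = [random.gauss(10, 1) for _ in range(1000)]
group_b = [random.gauss(20, 1) for _ in range(1000)]
population = group_a + group_b

# Training sample drawn only from group A: selection bias.
biased_sample = random.sample(group_a, 100)

# "Model": predict the training sample's mean for everyone.
prediction = statistics.mean(biased_sample)

# Fine within the sampled group, badly wrong when over-generalized.
error_on_a = abs(prediction - statistics.mean(group_a))
error_on_pop = abs(prediction - statistics.mean(population))

print(f"error within sampled group: {error_on_a:.2f}")   # small
print(f"error on full population:  {error_on_pop:.2f}")  # large
```

This is exactly Wu’s point: the model is accurate on the population it was sampled from and only fails when it is applied to a broader population that the training data never represented.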
It may be ethical to you but not ethical to me, so we need to look at these different perspectives as well. That’s where inclusiveness is actually very important. Right now, more and more companies are including these social science disciplines, psychologists, behavioral economists, social scientists, in these discussions of the design of AI systems, which is good. This is actually very good and very healthy. Like I said, the technical aspect of AI is one component, but there’s actually a huge area of non-technical components that’s equally important to drive acceptance and adoption in society.
This is part of the One-on-One Interview series with thought leaders. The transcript has been edited for publication. If it’s an audio or video interview, click on the embedded player above, or subscribe via iTunes or via Stitcher.