Description
In this episode of the IDEMS podcast, Lily Clements and David Stern explore the intersection of technology and humanity through the lens of Amazon’s AI-powered shops. They discuss the concept of “phygital” initiatives, such as the mobile phone-based money transfer service M-PESA, where physical and digital realms merge to create jobs and enhance human interactions. They consider the future role of AI, the importance of community, and the ethical considerations of outsourcing labour.
[00:00:00] Lily: Hello, and welcome to the IDEMS podcast. I’m Lily Clements, a data scientist, and I’m here today with David Stern, a founding director of IDEMS.
Hi, David.
[00:00:15] David: Hi, Lily. What are we going to discuss?
[00:00:18] Lily: Well, I thought today we could discuss, we'll start anyway, who knows where these discussions ever head, and you always say, this didn't go the way I thought it would go. But we could just start by discussing this article on Amazon's AI-powered shops.
[00:00:33] David: You mean their Indian powered shops?
[00:00:35] Lily: Their what powered shops?
[00:00:37] David: Indian powered shops. Because my understanding is that actually, although there was a little bit of AI involved, it was mainly about outsourcing all the cashiers to a sort of processing unit where people watch the videos in India.
[00:00:49] Lily: Well, that has a lot less glitz and glam to it. They aren't going to put that on the poster, are they?
[00:00:53] David: Well, exactly. But that’s the reality of what was actually happening. And this is why, as I understand it, it actually ended up stopping.
[00:01:02] Lily: So maybe we should summarise the article that we're referring to. A few years ago, this Just Walk Out technology came out, which was being used in Amazon food shops. There was one set up in London, in Ealing, and the idea was that you just pick up what you're going to buy, you don't have to go through a checkout, and it works out for you how much everything comes to.
[00:01:29] David: And of course you’re registered and so it’s recognised who you are and it charges your account.
[00:01:34] Lily: Yes. An idea, I guess, to help curb shoplifting and to make things, well, the future, I suppose, to make us less reliant on humans, as we for some reason like to do on this side of the world.
[00:01:47] David: Well, and I think this is a really important point and it’s not where I expected the conversation to go. So this is good.
[00:01:53] Lily: Sorry.
[00:01:53] David: But this is great, you know, this idea that technology is about removing humans is a very high resource environment idea. Really powerful technology in low resource environments creates jobs. It actually involves humans. It makes humans do better things. And my favourite example of this, I don't know if we've discussed it before, but it's always there, is M-PESA.
It's the classic example of mobile money, which came out of Kenya. And you go across Africa, anywhere you go now, and the shops you see the most of are M-PESA shops, where people are making a living offering digital services, using the technology to transfer money in really powerful ways.
[00:02:34] Lily: So M-PESA, my understanding anyway, is this way that you can transfer money from one account to another using mobile phones.
[00:02:42] David: Well, broadly. The story I love about it, which is also where its impact becomes so powerful, is this. Quite a lot of remittances get sent from urban centres back to rural areas. And what was observed was that most people would carry the money back themselves: they'd go on the bus carrying lots of cash. And this is one of the reasons it was so dangerous to travel by bus in Kenya at that point in time, because people would hijack the buses, knowing there was lots of money on board.
But some people found that instead of doing that, they would then buy a mobile phone for their family in the rural area, and then they would send them credit. So they’d buy credit for the phone and they’d send them credit. And the family in the rural area would then have a phone which has credit on.
So if anyone in the village wanted to call someone somewhere else, they'd come and they'd pay them to use the credit. And so that meant that they had converted this credit back into money. And so this became an informal way of sending money across the country without having to carry it on buses.
And there was somebody who noticed this phenomenon happening and decided to actually formalise it. And so there was then a way of taking the credit on your phone and turning it back into money. And so you'd have people who were the M-PESA agents. Pesa is money in Swahili, M is mobile, so this was mobile money. And so you could actually go to an M-PESA agent, and for a really tiny fee, you could put your money into your phone, which is sort of an account on the phone, and you could send it to someone else or take money out of it through the M-PESA agent.
And this became a really formalised sort of job. And it's created employment all across the country in really powerful ways. I love the term: it's phygital, not digital, because it's physical as well as digital. It's a phygital initiative. Powerful, wonderful stuff, creating jobs, engaging society, building human interactions in and through technological innovation.
Exactly the opposite of what we were actually discussing, which was the idea that instead of having people as cashiers in the shop, you can just get rid of those. People just walk out, and then you have a whole set of people who are having to do the actual watching, seeing what people put in their baskets, because the AI is not powerful enough to figure it out.
[00:05:04] Lily: So moving on from that, this article in particular that we're referring to says that they actually used a lot of humans. And as you pointed out at the start, those humans are generally over in India, and I think I read Kenya as well, places where they can be paid a lot less than in the UK. In a way, is that not phygital? I know that it's not, because those human interactions aren't there for us.
[00:05:26] David: This is exactly the point. The human interactions are not there, they are hidden. Actually, almost all AI relies on a huge hidden human labour force, doing all the coding and labelling, actually helping the AI learn.
And the point is that that's not phygital in the sense that it's not combining the physical and the digital. It's trying to remove the visible physical and replace it with hidden, behind-the-scenes, low-paid human effort. And a lot of this comes back to your physical people. I mean, I was in a shop recently, and I had a cashier, and I had a two-minute interaction with them, which was quite nice, and we joked backwards and forwards about the fact that we had bought a lot of anchovies, and that anchovies were good in salad, and they might like to try anchovies in salad.
And you know, it was a two minute interaction, but it left me feeling happy, and I hope the cashier didn’t feel too unhappy either, they’re doing a public service by interacting with people, creating human interaction. And so a good phygital intervention, whatever that might mean, an initiative, creates more human interaction than it removes.
And this is what we want. We don’t want a society where humans don’t interact with one another. I mean, the high resource environment equivalent of this is of course a dating app. You’re actually supposed to meet up in the end. That’s the whole point. But you know, that’s trying to create human interactions with people you might not have otherwise known to meet. And so that’s creating human interactions in certain ways.
Now, I would argue most dating apps are not created as what I would consider phygital interventions, but it'd be interesting to think: what might a phygital dating app look like? I don't have any idea at this point in time, but how could that be conceived differently? I suppose the traditional way of doing this is you have your agony aunt, or somebody who's the connector in a community, bringing people together.
So how do you get a digital tool which supports them to build those human connections in the communities and to be part of that, that might be your phygital, creating roles for people to help people connect and to help them find people who they have things in common and to make the dating experience better.
Because my understanding is, and I'm rather lucky that I met my wife in a more physical sort of way. We didn't meet through digital dating, and I'm not saying anything against digital dating, but the experiences I've heard people have had are not all positive; there has been a whole range of experiences. And I wonder whether as a society we could think more about how we build community. How do we use technology to build community? And that's not about removing humans from interactions. It's about making sure that humans who are part of interactions are able to take part in really positive ways.
I have a few other examples on that, but I feel that I’m going off topic. Should we be coming back to Amazon at some point?
[00:08:38] Lily: Well, this was what I was going to say. I was going to say, okay, this is all incredibly interesting, and I know that we've spoken a bit about phygital in the past, maybe not necessarily on the Responsible AI episodes, though. However, as I said at the start, this conversation probably wasn't going to go in the direction you thought it would, because it never does. So what is the direction you thought this conversation was going to go in?
[00:09:00] David: That’s a good question. I mean, I assumed this conversation would go more in a direction around, you know, when you mentioned this particular example, more really in the direction about, well, what is AI and what is the role of AI? What can AI currently do? What might it be able to do in the future?
There is no reason to imagine that in the future we couldn't build AI systems which could watch a supermarket, see what people are putting into their pockets or their bags or wherever they might be putting it, and actually automate that. That is simply a technical challenge.
The fact that we cannot do it now should tell people a lot about the state of current AI and where there’s a long way for us to go because that’s a solvable problem. It’s just a hard problem to solve. And you know, as mathematicians, we like hard problems, give us enough time and we’ll solve them.
So over the span of a few hundred years, I would certainly expect us to be able to build technology that can do that. What I hope is, of course, that by the time we have the technology to do that, we actually decide to do it in much more human ways. And so we actually have recognized the value that humans can bring.
In a world where we can do that, in a society that I would want to live in, what might it look like? My hope is you’d still have the cashier type person, who, when you’re going out, you have a little conversation with, and they look through what you’ve bought in a sensible way, they have that information there, and there’s an interaction.
So you're not losing the human interaction. Maybe it's not necessary in the same way, but my hope is it would be valued, and that that value would be there, and you'd choose to have an interaction. Maybe you could avoid queues, because they'd be able to manage this better at that point in time, or you don't have to have that interaction; it's a choice. So if the queue's too long, you decide to just walk out, because you can.
And so I don't know what that society might look like, but my hope is that if that society really works well, you won't be taking humans out of those roles. So what might a shop assistant look like when you really don't need one? Well, at the moment, I actually find I don't like shop assistants as much as I did a few years ago.
I remember, back in the day, I suppose I'm showing my age, when shop assistants seemed like they were there to help you. Quite often now I find, and I have talked to people who, as shop assistants, were given these instructions, that shop assistants aren't really there to help you. They're there to make sure you buy extra stuff. Maybe stuff you don't need, that you wouldn't have bought otherwise.
There’s a particular shop which sells sunglasses where I think it was my sister who was working there at some point, and she left at the point where the management told her, no, slip this into their shopping and if they take it out then they don’t need to buy it.
And so you have to actually actively say, no, I don’t want to buy it. No, I don’t need these extra things. And that was the role of the shop assistant. It’s not serving the customer, it’s serving the profitability of the shop. And that’s one of the reasons that I would argue that, you know, online shopping became so attractive. Because you don’t have to deal with shop assistants trying to sell you extra stuff.
[00:12:27] Lily: Absolutely.
[00:12:29] David: What if shop assistants' role was to help you, to help you navigate the world of things that you don't know, and maybe they could even be knowledgeable! That would be really, really useful. And that's what I remember from when I was growing up. And there are still shop assistants who are knowledgeable. And there are shops which encourage that and promote that.
I won’t get sucked into sort of any legal dispute of trying to name or not name them. But they do still exist. And some of them are small and independent and some of them are big. But there are groups that actually value that role, and I hope that they win out in the end. That people choose to use, well, to use shop assistants because they add value. And that to me would be part of that future that I would envisage.
[00:13:16] Lily: Yeah, absolutely. Well, maybe going not quite in that future you'd envisage, but what's quite ironic is that when I'm doing online shopping, I want to have a human to speak to, not a chatbot, and it never gives me the option that I want. You know, I want the human with the knowledge there if I'm doing online shopping.
[00:13:35] David: Well, this is the sort of thing that a good AI could surely solve, and you might not know the difference. I mean, that's the Turing test. There is this question of whether a good AI could solve that. And so it's just that they haven't got a good enough AI chatbot to fool you now into thinking you're interacting with a human and to give you what you want.
But the frustrating thing the other way round is that quite often when you do get a human, they're not knowledgeable, in a lot of different cases, because their role isn't necessarily to help you. And this is one of the other frustrations. So incentive systems creating knowledgeable humans who are able to help, and whose role it is to support: that's what I would love from society.
But these are all interesting questions about what is the future going to bring? And so I guess that’s more the direction I thought it would go, which isn’t so different, I suppose, because it really is about the roles of humans. I want supermarkets with cashiers who I can have a little interaction with, who can brighten my day, and I can tell a little joke, you know, you have a human interaction. Not because it’s needed…
And to be honest, you know, many cashiers might enjoy those human interactions, but they might not enjoy the repetitiveness of their job. So maybe that could be removed by AI and their jobs could become more enjoyable, more people-focused. Because that is the service that they're doing.
I wonder, what if we recognised that as the real service, and we took away the menial tasks of actually scanning the items, but we didn't take away the human interaction, because of the value it can bring? Those are the sort of questions which, as a society right now, we're not ready to ask.
My hope is by the time the technology has got to the stage where it could actually replace the humans scanning and actually watching the video to see what you put in your basket, my hope is that by then, as a society, we might have got to the point where we value human interaction for what it is, which is an essential part of community in society.
[00:15:42] Lily: But I suppose part of the issue with that, or part of the issue of the incentive being the other way, is that humans are expensive, whereas developing the technology is expensive, but once it's developed, it's developed. So what, I guess, is my incentive, as a company owner, to have both humans and this technology?
[00:16:05] David: Ah, very good question. Well, this is about true value. Your short-term calculation might be that you don't gain from having humans, and so you could make cost savings by getting rid of them. But if your competitor didn't make those cost savings, and they had humans, and people liked having humans, then maybe their profitability is lower.
But as long as they're still profitable, they still hang around. And now people prefer going to them rather than to you. And so there is a sort of competitiveness: you're being outcompeted, not on profitability, because it's not necessarily just about maximising short-term profit. And this is good business in general.
Good business is not maximising short-term profit. And that's one of the things we seem to have forgotten. One of the problems is we don't live in a stable society at the moment, because things are changing so fast, but if we get to a more stable society, then it will be good business to offer good service.
And if good service comes at a slight price, so it's just a bit less profitable, then as long as it is profitable to offer that good service, maybe you will be able to outcompete. So I don't know what this is going to look like. I don't know how that's going to play out. I don't think it's going to play out well in the short term, because short-term profit, I would argue, is perceived as being of more importance than long-term market share and the rest of it.
And you’re seeing that playing out in all sorts of industries where the really old firms are struggling nowadays. Because actually quite often it’s some of these very old firms that really have that longevity because they do combine this. They don’t just want short term profits and then to collapse, you know, they want to have a long term process.
I don’t know, but I believe if we take a view of a few hundred years, rather than a view of a few minutes then maybe we would actually look at things differently and there would be other business models which would actually out compete and which would work better for society.
And that's, I think, the thing which at the moment doesn't seem to be working. But it's a general tendency, and I'm certainly not the first to see it. I hate to call myself it, but I'm a businessman now, because I'm an entrepreneur. And I'm not the first businessman to recognise that short-term profits are not necessarily better than a long-term vision.
And that has been throughout history. There are wonderful examples of that. And that’s what I hope over time will win out in interesting ways. So I think as a business owner, you know, having a long term view rather than a short term view would be important. Now, one of the problems is, can you have a long term view if your main obligation is to shareholders who only have a short term view?
So that’s again part of the sort of question of the sort of structures we’ve put into society and how people view shares versus you know, profitability and all these things. So there’s a whole set of things caught up in this.
[00:19:25] Lily: Just one thing to really highlight there, to hone in on. As you said, a hundred years, or hundreds of years: I don't think people have that kind of level of vision. I know through working with you that we'll be working on something and you're like, well, you know, that's a job for 20 years' time. Actually, you said that particular point about five, seven years ago, so that must be a job now for 13 years' time. You'll probably still say 20 years' time…
[00:19:47] David: Well, well, I don’t know, I’m trying to remember, seven years ago?
[00:19:50] Lily: Seven years ago it was to do with R-Instat, which is this open source software.
[00:19:55] David: Which you were working on seven years ago, yeah, yeah.
[00:19:58] Lily: Yeah. Open source front end to R that we’re developing, which is a free statistical software. And I think it was to do with having dialogues, which can fill themselves through code.
[00:20:10] David: But that's no longer 13 years away. We're making progress on that. We're ahead of schedule. That was going to be 20 years' time, and we might get there in 5 to 7. It's incredible! Some of these things which I thought would take 20 years might only take 10 or 15. It's really exciting.
[00:20:30] Lily: Well, still, I think some people, hearing that this will be something in a couple of hundred years, are like, I can't think that far ahead. I don't know what the state of the world's going to be like in a couple of hundred years, so…
[00:20:44] David: Well, I don't know what it's going to be like in a couple of hundred years. But this is the whole point about taking a long-term view: a long-term view can't be limited to what you can imagine. And so if I were to say 20 years, that's way too short for some of these things.
Actually, looking 100, 200 years ahead: there are people who have done this with history and looked at 100-year cycles. There are things which run roughly on 100-year cycles, which are very interesting to look at and think about, because societies do tend to go through cycles of around 100 years in certain ways.
So you have to look at the hundreds of years to be able to really get a perspective, a historical perspective. And everybody at the moment seems to be caught up in the new and the now, but it's exactly in these moments in time, when the new and the now is so urgent, that you need to take a really long view as well. And you need to recognise that yes, some things are really urgent, they have to happen right now, and we have to do things right now, because it is the moment.
But the impacts of these, and the consequences of these, are there for hundreds of years to come. And tying those two views together in certain ways is really important. I don’t know how to do it, but I think it’s really important.
We got to that because the problem Amazon wanted to solve will, my guess is, be solved within the next few hundred years. Without a doubt that problem will be solved, because it's a solvable problem. It's maybe not solvable with the technical capabilities we have now, but things have developed so fast over less than a hundred years of really working towards that.
And the concepts of AI are less than a hundred years old. There were ideas in other forms before that, but more concretely, active work on it is, to my understanding, just over 80 years old, or around 80 years old.
So, thinking on a hundred-year trajectory of where we've come from, where we are now, and where we're going, I think within a hundred years or two we will have the capability of solving it. It's a maths problem; it's not that hard. But it's not the important problem. The important problem is the societal problem, the community problem, the human problem. Not just human in the sense of human, but humane: thinking about the environment, about other species and biodiversity and all these things. These are the problems which are tied into this, which are at the root. And if we can think about things on this longer-term trajectory, then I think we can find that the tools we have now are so incredibly powerful compared to what we had even just a few years ago.
Instead of worrying about what they're not. Okay, they're not yet able to let you walk out of the supermarket without going to a cashier's till or scanning items as you go around. But the walkout part has been there for ages: you just take a scanner with you and scan as you go around. I was doing that over 10 years ago, so you can already do the walkout stuff. The point which I think is so important is that it's not about that borderline of, can we push the boundaries of what technology can do now?
If we take a bit of a longer term view, it’s really about how can we use the technologies which are available now to build communities, to build societies, which are the societies we want to live in. Because that’s what it’s really about. And if we’re looking a hundred years in the future, probably the societies we want to live in are not that different from the societies we want to live in now.
[00:24:38] Lily: Okay.
[00:24:39] David: And that's where I think these things come together. And the societies we want to live in do involve people having jobs, being able to make a livelihood, having societies which function in different ways. That is what we're going to want in the future: communities which bring a sense of community, some of them local, and some of them digital, because that's now possible in a way it wasn't before.
So having global communities as well as local communities. Wow, what an amazing richness we could create, if we could conceive these communities and these societies we want to be living in.
And yes, people have tried, over the last 50 years, to think this through and to do this. And most efforts have broadly not been as successful as people would have liked, but that doesn't stop it being a worthwhile effort. And at this moment in time, there are advances in technology which mean we can do more than we've been able to do in the past.
[00:25:40] Lily: That’s a very interesting and a really kind of powerful idea.
[00:25:45] David: I have no idea how we got there from Amazon.
[00:25:47] Lily: No, me neither, but you’ve explained it really well. I’m a little convinced. No, thank you very much David. Were there any kind of final points you wanted to say?
[00:25:59] David: I guess the final point, to tie this back to the Amazon example, is that I don't think we should be too harsh on them. They were trying to push the boundaries of technology. They then recognised that the boundaries of technology weren't quite where they thought they were, and they employed a whole load of people, and that put them into a situation where what they were doing was unethical. And I would argue that quite strongly: employing a whole set of people to watch videos in India, because you can pay them a lot less than giving them cashier jobs in San Francisco or London, is unethical.
You know, you should be creating those jobs in San Francisco and London for people to be cashiers. But once they recognised where the technology was, and what the current situation was, they stopped it. So we shouldn't be too hard on them for this; it's not their fault that AI carries false promises at this point in time. What it can do right now is amazing, and it's way more than it could do a few years ago. But it's way less than it will be able to do in the future, in 100 years' time, let's say. Let's be patient, let's accept things for where they are now, and actually thrive in this current environment if we can.
That’s the exciting thing. So let’s not be harsh to Amazon for trying to push the boundaries. Great. Especially because they then recognized that what was happening wasn’t ethical and took a step back from it. Good.
[00:27:25] Lily: Sure, sure. Yeah. Thank you very much, David. It’s been a very good conversation. Very insightful as always.
[00:27:32] David: Well, thank you. This has been fun.
[00:27:34] Lily: Thank you.