Description
David and Lily discuss the possible effects that advances in AI might have on the world of work. It seems that AI has the potential to affect almost all work, but should we be worried? What skills do we value as a society, and can AI ever replicate human creativity?
[00:00:00] Lily: Hello and welcome to the IDEMS Responsible AI podcast, our special series of the IDEMS podcast. I’m Lily Clements, an Impact Activation Fellow, and I’m here with David Stern, a founding director of IDEMS.
Hi David.
[00:00:18] David: Hi Lily. I’m looking forward to another discussion. What are we on today?
[00:00:23] Lily: I thought today we could discuss AI in transforming areas of work. There’s a lot of…
[00:00:29] David: It’s done that for you, certainly.
[00:00:31] Lily: Well, yes.
[00:00:33] David: Not just you. Most people within IDEMS have embraced it quite significantly over the last 18 months.
[00:00:41] Lily: And I think that’s a really positive thing as well. So I use it in my work, as you know. Just as an example, I do a lot of coding, so I use it for documenting the code, writing tests for the code, optimizing the code. But I also use it in my writing, to help aid the writing, and to help with courses that we do. It’s really helped in all areas, even with things for the podcast.
[00:01:03] David: Yes. We use it to try and look at titles in different ways. We rejected everything that came out, but that’s exactly what AI is for. You don’t just take AI and use it. You take AI and reject the things where it’s not giving you what you want, and that helps you to recognize what you do want and what you don’t want.
[00:01:19] Lily: Absolutely. And if you work with it, you can question it. So I feel that I’m working with ChatGPT a lot of the time. I’ve got my little assistant which is documenting the code for me, telling me what I can improve on, and so for me personally I’ve found it transformative; I’m much more efficient.
[00:01:38] David: Absolutely. This is, I think, what the evidence is pointing towards as well. My favourite example of this is a relatively recent study where they gave business students a task to come up with business ideas and business plans related to their university and so on.
And they then asked the AI to do the same and then they evaluated them and eight out of the top 10 were AI. Now, please don’t misinterpret this. This does not mean you want AI to do business. But it means that a good, well-trained person using AI effectively can be substantially more effective and efficient than you can be if you don’t use AI. And that’s the key point.
[00:02:33] Lily: Another example I have is I have a friend that’s using it to help them with their teacher training, and they have to create lesson plans each week. Well, they use it to help them with their lesson plans, and they’re actually performing very well in their cohort compared to others who aren’t using AI.
[00:02:48] David: And this is the thing, there’s a whole other podcast I’m sure we’ll dig into about AI in education.
[00:02:54] Lily: Oh, absolutely.
[00:02:55] David: Is that cheating? No, I don’t want people to think of that as cheating. The point is, once you get into work, I don’t mind whether you’re using AI or you’re doing it yourself. What I do mind is the quality of the output.
This is really what’s so important. I do mind if you don’t do any work and AI does the work for you, because then I could just fire you and I’d be just as effective. Now I know for a fact that I couldn’t fire you. I would not be happy with the outputs of AI. You add so much value. But that’s the key point.
[00:03:30] Lily: Yes, so AI is transforming people’s work if they’re using it correctly and effectively. But there’s a lot of questions about jobs with AI. Should we be worried? Should we be concerned about people not using it correctly?
[00:03:45] David: Absolutely. I mean, we’ve got all the scandals which have come about people misinterpreting it. So yes, we have to be careful. But that doesn’t mean that we shouldn’t be embracing it as an opportunity to improve people’s work-life balance, to be able to improve people’s working efficiency and effectiveness.
[00:04:05] Lily: Very interesting. So then we should be encouraging the use of it if people are using it effectively.
[00:04:12] David: And responsibly.
[00:04:14] Lily: And responsibly.
[00:04:15] David: Very simply, when you use AI, would you ever use AI and then send the output directly to me or to someone else?
[00:04:24] Lily: [Laughs]. No, no, no.
[00:04:27] David: I mean, it’s obvious! You use it often enough that you know sometimes it does well, sometimes it does less well. Even as models get trained better, it might just do well more often.
[00:04:38] Lily: I’m using it often enough that I can now tell if someone gives me something which has been AI generated.
[00:04:44] David: Yes.
[00:04:45] Lily: Yeah, I can just read it. It uses the word “crucial” a lot. You know, it’s very strong language at the moment.
[00:04:52] David: And all of that will of course change over time. This is where, if we go back to the education, trying to identify that is going to be an uphill battle. So it’s not really about identifying whether it’s come from AI or not.
[00:05:07] Lily: No.
[00:05:08] David: It’s about recognizing the value of the end product.
[00:05:14] Lily: So do you think, out of interest, that AI could transform anyone’s work? That’s quite a bold statement, I suppose. Or can you see an area where it can’t transform someone’s work?
[00:05:30] David: To your bold statement, I was going to answer: yes, I think it will change the work of everyone who has access to it, in different ways. Even if it’s just for tasks which people find mind-numbingly boring and don’t do well, you know, timekeeping, recording your time. What have you been working on?
Actually, an AI assistant could really help that to happen really well, really nicely. And it would be transformative to free people from having to report on their own time. Again, there’s good implementation and bad implementation. Bad implementation: I, your employer, decide, aha, I can use AI to spy on all my employees and try to understand what they’re doing. So I set it up and I understand what you’re doing all the time. That would be terrible.
[00:06:24] Lily: It wouldn’t be very interesting either, I’m afraid.
[00:06:28] David: Exactly. It’s mind-numbingly stupid as an idea, and yet it’s something people are doing and have been doing, even though it’s so obviously a bad idea. However, I trust you very much in terms of your responsibility. Using AI so you can actually save time and spend less time recording what you spend your time on, that would be fantastic! And that would be so much better data. The quality of that data would be so much better.
Now, could you cheat and could you say, oh, I was spending all this time doing something you’re not? Yes, but that’s what I’d want. I want to hire people who are responsible, who are effective and efficient at what they do.
This is part of our employment contract. There are no fixed times you have to work. I don’t want to control whether you’re actually working for a certain amount of time. But what I do want is to know that you’re working as effectively and as efficiently as you can. Admittedly, I do give you too many tasks, so I know you’ll put in quite a lot of time because you’re having to meet those.
[00:07:44] Lily: To be fair though, if we let you know that we have free time, then we’re given more tasks.
[00:07:51] David: Well… it’s true that there’s this element of conversation between us about what tasks you have time to do and so on. I don’t want staff who then use that time untruthfully. I don’t want to have the wrong incentives in place. I want to incentivize you to use your time as effectively and as efficiently as you can.
And if those incentives are right, we get better value. That, to me, seems obvious, and this is where there’s a big movement towards a four-day week, which could lead to better work-life balance and shorter working hours generally. These are great ideas and great movements. The main reason they’re problematic in certain cases is that there are cases where the shorter time does affect productivity.
There are other cases where it doesn’t. It’s very interesting the research on this. There’s many job areas where actually having shorter working time means that you’re more effective in the time you do work and so you end up being more productive. Fantastic example of exactly what we expect to see. That it’s not about necessarily how long people are spending in front of a computer, or doing certain tasks.
If you’re a shop assistant, it’s different. The shop needs to be open from one point in time to another. If you have fewer hours, you need more people. Very simple. That’s different. So there are jobs where the time does matter, where it is an hourly thing, because that time is really critical. But in other jobs that’s not the case. In most of our jobs, it’s all about what you deliver.
[00:09:46] Lily: Sure. And so, linking this back to AI and work, you’re saying how it should affect work is different in these different contexts.
[00:09:55] David: Yes.
[00:09:57] Lily: Interesting. And you’ve spoken a bit about AI in work, and something that I’ve also found is, you know, you need to have that interpretation at the end, that human element at the end to be able to interpret the output.
[00:10:10] David: Essential.
[00:10:11] Lily: This says to me that this is a new skill that we need, in a way.
[00:10:16] David: Absolutely.
[00:10:17] Lily: Where do we…? And maybe I’m now going to accidentally touch too much on the education side, which I know we will discuss elsewhere, but…
[00:10:25] David: No, this is exactly what we need to discuss in a whole other podcast where education has to embrace AI as a tool that students can use. You should be able to get better results through using AI than you get if you ignore it in education. So that’s a whole other discussion.
[00:10:49] Lily: Sure.
[00:10:50] David: But it should enter into our education systems all the way through in different ways. What’s critical is understanding the difference between how you use it responsibly and what it means to use it irresponsibly, and what the dangers of that are.
Understanding the difference between a good interpretation and an irresponsible interpretation of an AI outcome is part of what needs to become part of our literacy. Data literacy should include how to interpret the outputs of machine learning and AI models.
[00:11:29] Lily: And so then how do you see AI kind of transforming the workplace?
[00:11:34] David: It’s already happening. There have been strikes from the writers’ guild, there have been strikes from actors. These are jobs that you don’t necessarily think of as being so touched by AI, but they are, hugely so. Jobs that were considered creative, you know, artists, would have been thought to be safe from a lot of technological development.
And suddenly we’re seeing that’s not the case. As a society, we need to decide how we value those skills. What are the skills which are valued? Because if you can get a painting made by AI which is as appealing as a painting made by a human, do we still want to pay artists to do something which isn’t needed?
Well, most certainly we do. But what are we paying them for? And who’s paying them? And why are they being paid? What are they actually bringing? Those are the questions which AI is going to force us to actually consider, to think about the societies we want.
[00:12:44] Lily: But AI effectively takes the data that it has, so things like that painting the AI can create are based on data that we already have. Does that mean we’re going to kind of stagnate a bit in our…
[00:13:00] David: Very good question. What’s really interesting, of course, is that it’s not clear that that’s the case because, well, if you’re an artist, what do you do? You take the influences that you’ve seen. You might create something new by bringing things from different areas or different topics, different things you’ve seen together and that creates something new.
But you’re almost certainly building from things that existed in the past. This is one of the things which actually is coming out quite strongly from AI where it is passing creativity tests in ways which are surprising certain people because actually our human process of creativity is built on our experiences, and experiences are in the past. That is historical data.
Are we really as creative as we think, or are we just doing the same thing, which is sort of bringing together different ideas from what we know from the past and putting them together in novel ways? No reason why AI can’t do that.
[00:14:06] Lily: So you think AI can build from the past, can do that literature review, or whatever it is in whichever field, look at the other artists and create something new based on that.
[00:14:19] David: I would argue that AI could probably do that better than we as humans because it can access more data. It can see more things than we could ever see or experience in our lifetime. In theory, these models can be built up on that. What I don’t think it can do is evaluate what is interesting and what is not.
And this is really interesting and exciting. It’s that judgment. It’s that decision making power. This is where, when people start talking of AI as a decision making tool, it is about, in my mind, informing decision making, not actual decision making.
As an artist in the future, you could still be really inspired by going to AI and saying, give me some ideas, I want to bring this and this together, give me some ideas. And then you could create something and decide, oh, I like that, and the reason I like that is because… And then you could maybe implement it yourself or separately, it doesn’t really matter. But it’s that decision-making process which I don’t believe AI is going to compete on.
It can provide options. It can do things much better. It can be, I think, more creative than we ever imagine it could be in the traditional sense of creativity. We as humans, myself included, are not as creative as we think we are. We have new ideas all the time and then we realise somebody else has had that idea before.
It’s not original, we just didn’t know about it. The number of times I’ve had a brainwave in which I thought, oh wow, you know, I’ve just thought of this amazing idea! And then, you know, a few years later after I’ve built on it, I realised somebody else had that idea ages ago.
And they built on it and did something with it in a different way. Maybe it’s a bit different to what I did, but really that idea wasn’t as original as I thought it was at the time. At the heart of it, you know, I loved my maths PhD because there was almost no one in the world who had thought about these things before. I was able to take a definition and say, aha, we could do it differently and better and get different results! I bet that other people have actually had very similar ideas around that and just not formulated them in the way I did. And in fact, a few years later, somebody else reformulated all my work and did it much better than I did. [Laughs].
It’s one of those things that we’re not as creative as we think we are. We are just learning from the past. We’re taking our different experiences, reusing them. Most of what we consider our real core creativity, our wonderful ideas, actually AI can do that. What we do really well as humans, not everybody does it well all the time, but what we do do well and what we don’t want to hand over is judgment.
[00:17:14] Lily: Interesting, because we’ve said before, possibly on this podcast, possibly elsewhere in the series, that AI is great at the past. You know, it’s got all the data about the past, but obviously it has no data about the future, so it can’t be used to predict the future. I mean, it can make models to give ideas about the future. But now you’re saying it can still create new things.
[00:17:45] David: It can create new things, but it doesn’t know whether they’re going to be good or bad in the future. That’s a judgment call.
[00:17:51] Lily: Nice. Okay.
[00:17:53] David: Yeah, and it can create all sorts of new things. Some would be good, some would be bad. It might even be able to have a bias towards good things if you program it well. And don’t get me wrong, humans are really bad at this at times too; there are terrible judgment calls happening all the time. My claim is, part of the reason for the terrible judgment calls is that people are ill-informed.
Now, okay. That’s not the only reason people make bad decisions. People make bad decisions because they have the wrong incentives because they actually have ill intent and all sorts of other reasons. But if you get rid of some of those other reasons, and you actually think about humans who want to make good decisions, big ask, who want to make good decisions and who have good incentives to make good decisions, another big ask, I would argue the main reason that they would still often make bad decisions is because they are ill-informed. And therefore the set of decisions, the set of tools, the set of things that they consider is actually limited by their creativity, so to speak. And very often people just repeat what they know.
That’s where AI could be transformative going forward. Because that’s exactly the tool where AI could enable people to be more creative, to consider more things, to think about things in different ways, and to really consider different angles. Because in theory, and I’m not saying current AI tools are there yet, but in theory, this is what AI should be able to do really well.
And that transition: in the past, this was not a task we could hand over to computers in any shape or form, whereas it is exactly what we are now starting to do. And we are seeing that with the likes of ChatGPT, which can write better than me. I can’t write, I’m dyslexic. I’ve never been able to write.
But that aspect of being able to bring that element of creativity that we would normally consider very human to the table is something which I think is going to transform everything about how we work, the way we work, what we do.
[00:20:17] Lily: And so do you think that this is something we should be worried about?
[00:20:24] David: Yes and no. There’s maybe a whole other podcast to dig into this. These are societies’ choices about how they treat this. We could easily go to a dystopian future or a utopian future. These are both possibilities, and within our societies’ choices I think either is possible. Where are we heading? I don’t particularly like our direction of travel in certain ways, so yes, I am worried. But I’m an eternal optimist. I believe we’ll turn it around. There are reasons to believe that in the long run, once the rapid growth has passed, the pressures will be pushing us as a society in the right direction. I’m an eternal optimist. I don’t know that that’s true. I hope it might be.
[00:21:25] Lily: Well, sorry that’s just given me another, I know that we need to wrap up, but another one which is: is the rapid growth going to stop?
[00:21:35] David: Yes.
[00:21:35] Lily: Into a plateau? Yes. Okay.
[00:21:37] David: I mean, the rapid growth as we are experiencing it now feels like a sort of endgame of the last 200 years or a bit more. There has been continual rapid growth in the world for quite a long time. And you can actually trace it back even further than that; it’s just not the exponential curve of the recent rapid growth.
And all the signs, to me, the crises, the polycrisis we’re facing: this is exactly what you would expect of a society which has had exponential growth and which is reaching the point at which it’s going to plateau, and is therefore overshooting and undershooting what it should be doing.
This is mathematically modelled if you think about growth trajectories. I would argue we are hitting that point. These crises are symptoms of the fact that at some point soon, and soon is relative, it might not be in my lifetime, but on the scale of human development that’s still soon, I would expect us to be plateauing a bit more and actually getting to a position where… Well, of course, I am limiting myself to a future which I can sensibly predict as being one on this earth.
Of course, if you talk to certain other people, they might say, well, okay, we just populate Mars and then we can start our exponential growth all over again. If we just take over space, then we’ve got a much bigger space to continue growing exponentially. But assuming for a second that we really focus on this earth, we’re going to have to plateau at some point, I would argue soon on the scale of millennia.
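[Editor’s note: the “mathematically modelled … growth trajectories” David refers to are classically captured by the logistic model, which looks exponential early on and then plateaus at a carrying capacity. This is an illustrative sketch only, not part of the conversation; the parameter values are arbitrary.]

```python
# Illustrative sketch of logistic growth: dP/dt = r * P * (1 - P/K),
# integrated with a simple Euler step. Growth is near-exponential while
# P is small, then flattens toward the carrying capacity K.

def logistic_trajectory(p0=1.0, r=0.05, K=1000.0, dt=1.0, steps=400):
    """Return the trajectory of P under logistic growth."""
    traj = [p0]
    p = p0
    for _ in range(steps):
        p += r * p * (1 - p / K) * dt
        traj.append(p)
    return traj

traj = logistic_trajectory()
# Early on, each step multiplies P by roughly (1 + r); late on, growth
# has essentially stopped as P approaches K.
early_ratio = traj[10] / traj[0]
late_ratio = traj[-1] / traj[-11]
print(f"early 10-step growth factor: {early_ratio:.2f}")
print(f"late 10-step growth factor:  {late_ratio:.2f}")
```

The point the sketch makes is David’s: a curve that looks exponential from inside can still be the early part of an S-curve heading for a plateau.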
[00:23:38] Lily: Sure. Sorry for the slightly off track question at the end there, but I was just intrigued. We should probably wrap up. Do you have any final thoughts?
[00:23:48] David: I guess we went a bit off track with a very futuristic question, but if we home back in on how work is going to change because of AI, it really comes down to the fact that almost all work will be affected, but in many different ways.
I believe that if society embraces this right, most work should be affected positively, although certain jobs will be lost. But the jobs that survive and continue should, I hope, broadly be jobs where AI is adding value to the working experience. And my hope would be to go one step further: if AI is brought in and used in the right ways, it could get us to much more balanced societies in terms of work-life balance, in terms of prioritization of social well-being, and in the way that AI could help us to have better inclusion, better diversity and so on.
So it is a plausible scenario that AI entering into work as disruptively as it will, could lead to better outcomes. I am certainly not saying it will. I think there’s some really hard work needed to make it do that, but I believe it’s possible. But I’m an eternal optimist, so I would believe that. Nobody else should believe me just because I believe something.
[00:25:35] Lily: Great. Thank you very much, David.
[00:25:38] David: Thank you.