Description
David and Lily consider regulation around the development of AI technologies. They discuss Amazon’s gender-biased AI recruitment debacle, and why many big companies are embracing regulation. Can regulation be designed to protect society at large from the dangers of irresponsible AI, whilst ensuring that the right companies benefit and the wrong ones are disadvantaged?
[00:00:00] Lily: Hello and welcome to the IDEMS Responsible AI Podcast, our special series of the IDEMS podcast. I’m Lily Clements, an Impact Activation Fellow, and I’m here today with David Stern, a founding director of IDEMS. Hi David.
[00:00:19] David: Hi Lily. What are we discussing today?
[00:00:22] Lily: Well, I wanted to talk about regulation and how it impacts businesses today. So the CEO of OpenAI has said that if they can’t comply with the regulations, then they will cease to operate in those regions. We touched on this in a previous podcast. I wanted to talk about good regulation and how it impacts businesses: is it going to mean that some companies say they won’t work under those regulations in that region? Is this going to be good or harmful?
[00:00:51] David: Well, the other way to look at this is in the US, where generally big companies are not that favourable to regulation, there was a meeting of big tech with government where it was agreed that AI regulation was needed. What wasn’t agreed was what it should be. And so this idea that regulation is either good or bad for business per se is totally wrong.
Good regulation can be good for good businesses. And this is sort of where the recognition comes in that getting the regulation right, so that it supports businesses to do things in the right way, is something which is important. And I think the example which I always keep coming back to on this is Amazon. They had a big... it wasn’t a big scandal, because they never released it, but they had this big problem with trying to build out their recruitment AI, if you want. And I think you remember that particular case. Should we talk through it briefly?
[00:01:55] Lily: So my memory of the case is that they obviously get a lot of people contacting them about working with them. So they were going through the CVs and they were using… they were trying to develop AI to help sift through these CVs for recruitment and…
[00:02:11] David: Essentially do the shortlist for them.
[00:02:12] Lily: Yes, but initially they found that, because of the data being fed in, the algorithm was cutting out a lot of the women, because historically…
[00:02:22] David: For the high-pay technical jobs, women were generally being systematically excluded, and they investigated this.
[00:02:29] Lily: And then they found, as you know, that it might be, okay, well, we’ll take out gender. But then it might be that someone has written in there that they were part of the women’s chess team, and then they’ll be discriminated against for that. They even went through gender-neutral terms and there was still some bias in there, and eventually, after all of this investment, they decided it wasn’t worth it.
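A minimal illustration of the proxy problem described here: even with the explicit gender field removed from a CV, other features can still encode gender, which is how terms like “women’s chess team” end up penalised. The Python sketch below uses an entirely made-up toy dataset and an arbitrary threshold (it is not Amazon’s data, system, or method); it simply flags keywords whose historical occurrences split almost entirely along gender lines, the kind of proxy a model trained on that history could re-learn.

```python
# Illustrative sketch only: the applicants, keywords and threshold below are
# invented for this example; this is not Amazon's actual data or system.
from collections import Counter, defaultdict

# Each historical CV reduced to (keywords, gender) from past hiring data.
history = [
    ({"women's chess club", "python"}, "F"),
    ({"rugby captain", "java"}, "M"),
    ({"python", "debate society"}, "M"),
    ({"women's coding society", "python"}, "F"),
    ({"java", "rugby captain"}, "M"),
    ({"python", "women's chess club"}, "F"),
]

# Count, for every keyword, how its occurrences split by gender.
by_keyword = defaultdict(Counter)
for keywords, gender in history:
    for kw in keywords:
        by_keyword[kw][gender] += 1

# Flag keywords that appear (almost) only for one gender: these are the
# proxies a model can use to re-learn gender after the column is dropped.
print("Potential gender proxies:")
for kw, counts in by_keyword.items():
    total = sum(counts.values())
    if total >= 2 and max(counts.values()) / total >= 0.9:
        print(f"  {kw!r}: {dict(counts)}")
```

A shortlisting model trained on this kind of history can pick up these proxies even without a gender column, which is why removing the explicit field alone was not enough.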
[00:02:52] David: And the really important thing is that I would argue Amazon decided it’s not worth it for them because they have too much reputation to lose, and therefore the potential risk of being found to be discriminating by gender was too high compared to the benefit that the tool could bring to them, even despite the investment that they’d already made.
[00:03:20] Lily: So if we’re talking about good regulations helping businesses, then in this case, if it had been a smaller business, they might not have cut their losses.
[00:03:29] David: This is exactly why I believe the big tech companies are agreeing to regulation for AI: because it actually serves their purposes. For them, not doing what regulators would ask for, which is making sure your algorithms are not biased and are transparent in certain ways, is going to be highly costly in terms of reputation loss if they get it wrong. So they’re having to do what one would assume the regulators would ask for anyway, because of their reputation and because of the scrutiny that they’re under. Whereas…
[00:04:05] Lily: I see.
[00:04:05] David: …If the regulators aren’t there, then startups possibly get away with doing what has essentially been the startup mode, which is claiming you’re getting it right first and then fixing the problems later. But with regulators in place, you couldn’t do that. You couldn’t actually have a gender-biased recruitment tool, because it would have to go through processes to get approved, and those processes would probably catch that. And actually fixing that, as Amazon found, is extremely hard.
Now the difference between an AI-based tool which has gone through a process to remove the biases and one which hasn’t is probably almost nothing from a user perspective. Users won’t notice the difference. Therefore, your big tech companies that want to use AI responsibly, rather than just using AI incompetently, are actually going to benefit from the regulators, because other people will have to do what they have to do anyway for their reputation.
So in some sense, the expectation is that the regulators won’t necessarily ask for much more than what one would hope is needed to maintain a good reputation going forward. It’s very interesting, for example, with OpenAI: what do you think of when you hear the word “open” in OpenAI?
[00:05:37] Lily: Well, you think of open source AI?
[00:05:40] David: Yes. Yeah. No, no, no. It’s not open source. OpenAI has nothing to do with open source. What else might you think of?
[00:05:47] Lily: Free?
[00:05:47] David: Yeah, not necessarily. You know, you have services, they have the server, they have the hosting costs.
[00:05:52] Lily: Yep.
[00:05:53] David: Open source doesn’t necessarily mean you can access solutions on other people’s servers for free. But I would certainly think of open in terms of transparency.
[00:06:04] Lily: Ah, yes.
[00:06:05] David: And yet actually there’s a surprising lack of transparency, especially given their name. And this is where the political, I don’t know what you’d call it actually, the trouble at the top of OpenAI comes in. I followed this as much as you broadly can; I don’t believe I know the full set of stories. But I do believe that there are real conceptual differences of opinion between rapid commercialization and, if you want, the scientific progress of the AI field. I think these were probably behind the clashes at the top of OpenAI.
And I think about this statement that OpenAI, if they cannot meet the regulation, would not work in, let’s say, the EU, because I believe that’s where the comment was really aimed…
[00:06:59] Lily: Yeah, it was.
[00:07:00] David: And they are the furthest ahead, I believe, in terms of getting to sensible regulation. They’ve been thinking very hard about this; we’ve been in touch with some of the groups in Germany doing research on this, or towards this, supporting this. And I think the regulatory framework they’re trying to bring in is good, but I think it’s correct that it would be tough for OpenAI to take it into account with all the different products they’re trying to get out.
For some of the elements, I think it will really require research to get there. And I wonder how that implementation would work, because it’s not in the regulators’ interest to have successful companies avoiding their region because of their regulation. And without a doubt, OpenAI has produced a really big step forward in the field recently. So you don’t want OpenAI to be excluded from your region.
[00:08:08] Lily: No.
[00:08:09] David: You also don’t want OpenAI to be determining the regulation.
[00:08:14] Lily: ChatGPT and those sorts of things, you know, OpenAI is exciting though. I want to see what’s going to happen next there.
[00:08:20] David: Absolutely.
[00:08:21] Lily: And you’re saying to me that this regulation will slow down that progress, and I’m like, no, I don’t want to slow it down, I want to see what happens. Obviously I know the dangers of that.
[00:08:30] David: But you’re also quite good at, when you use, let’s say, ChatGPT for R code, you then go through and you check and say oh no, that’s not doing what I asked it to do, that’s doing it differently and you fix it. What about areas where you’re not doing that and where you can’t do that because you don’t have those skills?
[00:08:46] Lily: Areas where I’m not aware.
[00:08:48] David: Where you’re not aware, or where it just gets used as is. You are aware, in certain areas, that you can pick apart what comes out of it, and you know enough to fix it where it needs fixing. And then it’s basically just saved you a lot of time, and so it’s great. But there’s danger if people use it without that expertise, because it might not be doing what they think it’s doing.
I don’t think the regulators will change that, I don’t think they can. But what the regulators can do is ask for things like making sure there are tests which are done, that there are processes which have been gone through, to check whether certain biases are present within the system, and, if they are present, for certain actions to be taken. They can put in place structures where a certain level of transparency is required.
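To give a concrete sense of the kind of bias test described here, the following is a minimal sketch, not any regulator’s actual requirement: it compares the selection rates a shortlisting model produces for different groups and flags disparate impact using the common “four-fifths” rule of thumb. The decision data and the 0.8 threshold are assumptions for illustration only.

```python
# A minimal sketch of the kind of bias test a regulator might require:
# compare a shortlisting model's selection rates across groups and flag
# disparate impact. The decisions and the 0.8 ("four-fifths" rule of
# thumb) threshold below are illustrative assumptions, not a legal test.

def selection_rates(decisions):
    """decisions: iterable of (group, shortlisted) pairs -> rate per group."""
    totals, picked = {}, {}
    for group, shortlisted in decisions:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + (1 if shortlisted else 0)
    return {g: picked[g] / totals[g] for g in totals}

def disparate_impact_check(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the best-treated group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate >= threshold * best) for g, rate in rates.items()}

# Hypothetical audit of one model's shortlisting decisions:
decisions = ([("M", True)] * 40 + [("M", False)] * 60
             + [("F", True)] * 20 + [("F", False)] * 80)
for group, (rate, passes) in disparate_impact_check(decisions).items():
    print(f"group {group}: selection rate {rate:.2f}, passes check: {passes}")
```

A regulator could ask for this kind of check to be run and documented before a tool is approved, with defined actions when a group falls below the threshold.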
Now, OpenAI doesn’t avoid transparency because it’ll slow it down. It doesn’t want transparency because it’ll speed up its competitors. There’s another element there: good regulators may not slow things down at all; they may actually stimulate things and speed them up, if they enforce elements of transparency which lead to, or encourage, higher levels of collaboration in certain ways. This is the nature of open source software, and there are a number of competitors coming out committed to open source.
I think there was this alliance recently between, what was it, IBM and Meta or something, where they were committed to developing open source AI algorithms or AI mechanisms. So maybe the regulators will help support and encourage that, because that can be scrutinised in different ways, and that can be tested better, and so on. So it might not slow things down. It might slow things down for a company that wants to be very private, for its commercial interests.
[00:10:51] Lily: Absolutely. And ultimately we want these things to be done responsibly and safely. There’s that bit in me that’s like, no, I want to keep doing it, and then it’s like, well, no, we need it to be done properly, we need the regulation, and…
[00:11:03] David: Well, it is something where, without the regulation, the incentives are very different. So good regulation changes the incentives, and I believe the regulation is being done well. I know people who are involved in this on the EU side of things, and it is being done extremely thoughtfully. They’re not trying to slow things down. Bad regulation would be aimed at just slowing things down using blunt tools. That would be bad.
Good regulation is possibly aiming to change it so that the incentives are better for people who take a more transparent approach, for people who actually work more collaboratively. So actually, the regulations can change the landscape of business practice, and that could actually lead to things moving faster.
Just going back historically, why has open source software won, despite there being no regulatory authority pushing open source software? It’s won because actually, in the long term, it outcompeted the closed solutions. People who were working with open systems were able to use the transparency to build better products in a certain way. That’s why the whole internet uses Linux.
[00:12:22] Lily: I was going to say, you say open source has won. I know about the Linux and Red Hat example, but are there other big examples where we use open source? Because things like Meta… Meta’s not open source, and that’s a huge company that we use.
[00:12:37] David: Well, when you say Meta’s not open source: one of the big software frameworks is React, and React is all open source, and that’s all developed by Meta, because these are the tools they want to develop to be able to build their solutions. At the level of the underlying software, Meta is deeply involved in building open source. It’s not by chance that Meta has recently come out in support of open source AI, because it’s got a long history in the open source community.
[00:13:03] Lily: Okay.
[00:13:04] David: Okay, it’s a controversial history, so I don’t want to delve into that too much, that’s a whole other podcast. But you took the example of Meta, which is deeply involved, and has been for a long time, in developing open source systems.
[00:13:20] Lily: You’re saying to me that actually there’s a lot more open source there than we realise.
[00:13:23] David: Absolutely. This is what’s changed over the last 20, 30 years.
[00:13:27] Lily: Okay.
[00:13:29] David: You know, 30 years ago, open source was a niche thing, mostly academics who would do this because they didn’t want to commercialise. Now it’s become a commercial thing. Most big commercial organizations, the really big ones, value open source because it’s really cost effective for them. The closed models have actually led to real issues, because you have to have the whole development team internally yourself.
Whereas if you maintain an open source system and you guide it, you get the benefit that when somebody gets really pissed off that your system doesn’t work as they want it to, they do the work themselves and then give it to you for free. So it actually works for them: they’re still guiding it overall, but other people are able to fix the things that really frustrate them.
And it’s more than that. That’s a slightly cynical view on why big tech has really adopted open source. But there are good reasons: it has been shown that the proportion of money which gets fed back into the underlying development is relatively small compared to the number of users on some of these systems. And rather than the software they’re protecting, the real value of Facebook or whatever is brand recognition. People use it because they’re used to being on WhatsApp, they’re used to being on Facebook. It’s not because the software is their proprietary software and does something that other software doesn’t do.
In fact, that’s rare. The software isn’t the heart of their business model, but it is important to their business model. And that’s why it makes perfect sense for them to really go with an open source approach. It makes perfect business sense.
[00:15:23] Lily: Interesting, very interesting.
[00:15:25] David: The point which I think is so important is that the same will probably be true of AI, large language models and all the rest of it, in 15 years’ time. The question is, what happens in the interim? And that’s where a regulator could come in and actually shift the balance, so that instead of it taking 15 years, it happens over the next 5 to 10 years. Because of the way the regulator puts things in place, it pushes people towards the open ecosystems and the open models. Ironically, of course, if you do that well, it benefits big businesses over startups.
[00:16:10] Lily: Well, this was going to be my next big point, but I was very much enjoying that. But yes, I mean, most startups don’t start with a whole bank of money. Who’s paying for this?
[00:16:21] David: Well, AI startups right now do, and it’s a very unique breed, but you’re right. The point is, for small startups, small organizations doing this, the bigger the regulatory framework they’re having to abide by, the harder it is to get going. However, once you’re big, and I’m afraid OpenAI cannot really consider itself a startup anymore, it’s big, it has had success, it needs to be dealing with these things. And so, if OpenAI is essentially not willing to engage with the regulatory framework, well then, this is a bad sign, I would argue, for OpenAI going forward and being responsible. This is not a good sign.
I would argue that they should be at the stage where actually they don’t need to be taking a startup approach and really running with it now. They need to be actually solidifying, doing the things they’ve done well better. And actually making sure that their house is in order, so to speak.
And doing that in a really positive way, with a good regulatory framework, should help them and be to their advantage now. And it would probably actually be detrimental to the upcoming startups which are having to compete with them and which don’t have what they have. So, in theory, the perspective of, you know, oh, I don’t want to engage with the EU because of a regulatory framework, that’s because he’s still got a startup mindset.
Actually, where OpenAI is now, that probably isn’t the right mindset for them to solidify into a long-term business. This is why they might be a boom and bust. The whole point is, the reason that this sort of makes sense at the moment, the reason Amazon did make the right decision not to pursue its recruitment tool, is that if you did that as a startup, you could just go through a boom and bust cycle as the extent of the biases within your algorithms came to light. You go bust, but then you sell on some of the knowledge, and then another startup comes along which now does it better. Or it gets bought out, and when it’s bought out, it gets fixed. But the point is, you get the customer base, you get people on board, even when it’s not working as it should.
And unfortunately, that’s how our startup scene works at the moment. I’m not saying it’s good or bad, but I’m saying this is what the lack of regulation encourages. A lack of regulation means that, well, it’s not treated as your fault as an organization if there are biases in your algorithm. You could try to take them to court with a human rights case, saying I shouldn’t be discriminated against. Although, you know, I probably wouldn’t be discriminated against, this is one of the bad things, I’m rather fortunate, I would probably be discriminated towards. But if the algorithms were discriminatory, then people should be fighting them. And this has real human impact.
And a startup which gets landed with a big lawsuit is very different to Amazon getting landed with a big lawsuit for discrimination. A startup basically goes bankrupt if it has a lawsuit it can’t win, and so on. Whereas Amazon… they don’t want to be brought down by a lawsuit for that. They wouldn’t be brought down by a lawsuit, but they can’t take that risk.
Whereas a startup is taking risks all the time; it has to, to start up. And so it’s sort of got to accept that risk. There’s danger to that. There are a lot of issues around that. It’s not necessarily the world that I believe is most productive. But I think there’s an important aspect that, one way or another, regulation helps some people and disadvantages others. The key is to get it so that it helps the people who are serving society better, and disadvantages the people who are not.
And what that means in different contexts is totally different. This is the interesting thing. I don’t know, and I don’t know that anyone knows, how to get a regulatory framework which will basically help startups over the established big tech, because the imbalance in power there is huge, while also helping society to be protected from irresponsible AI.
I think I have a fairly good idea, and I think the EU framework is fairly similar to what’s coming out of China, and fairly similar to what’s happening in the US, so there’s a growing consensus about what’s needed to do the latter. And I find it really ironic that in other contexts regulatory frameworks are being put in place to try and hold back big tech, and here, for once, you have a regulatory framework which is trying to protect society, but in doing so is almost certainly going to advantage big tech.
From another perspective, there are issues with what I think is going to come out of the AI regulation, because I think it will suppress elements of the startup culture around AI, which is booming at the moment. But I think it’s booming because big tech cannot engage, because the risk is too high: they have to do what the regulatory frameworks would require anyway, because otherwise the reputational risk or the potential lawsuits in the future are just going to be too expensive. Whereas startups… aren’t worrying about that.
[00:21:56] David: That’s my reading of the situation. I don’t know, you know, other people might interpret it differently. I do believe that good regulation will help stimulate innovation. But it might help stimulate innovation within big tech more than it helps stimulate innovation in startups.
[00:22:18] Lily: Interesting, but it would at least head things in a more responsible direction with AI.
[00:22:28] David: I mean, I don’t know if it’s about that. I’m an optimist, I’m an eternal optimist. I think if you look 20 years down the line, responsible AI will win out. Whether the regulatory frameworks come in or whether they don’t, just like open source outcompetes, I think it’ll be the same: responsible AI will outcompete.
The question is how long it takes to get there and how many scandals you have along the way. A good regulatory framework might limit the damage that happens along the way, and that’s why I would support it. It probably won’t make a huge difference in the long term on… I don’t know, I mean, it depends on the regulatory framework, and I was going to say on who owns it, how it works.
I think that would sort of come out broadly the same, but I don’t know that. A really bad regulatory framework might actually do harm there. But I think, broadly, the pressures that will come over the long term on this will mean that, with or without the regulatory framework, big companies are going to be held to account in certain ways, and they will probably control it.
And so that will probably lead to the same outcomes as a regulated, not necessarily highly regulated, but regulated, AI field. But I don’t know. This is all unknown, all speculation.
[00:23:58] Lily: Well, it’ll be very interesting to see how it pans out.
[00:24:01] David: And there’s no way of knowing how it would have panned out if we’d done things differently, because this is such a complex thing that there is no experimental learning. Even if we have different parts of the world who do different things, you can’t say that if others had done this, this is what would have happened, because it’s all so complex.
[00:24:20] Lily: Well maybe the AI can get so good that we can run a little simulated experiment and see what kind of…
[00:24:26] David: That’s not how AI works!
[00:24:27] Lily: No, no. No, no, that’s not quite how it works.
[00:24:30] David: AI looks back, it doesn’t look forward. It is used to look forward, and that’s a bit scary, but essentially, fundamentally, what AI is able to do extremely well and extremely successfully is look backwards. It’s able to do that better than we are because it can have more information, more data at its fingertips than we could ever imagine.
[00:24:51] Lily: And that’s absolutely… a topic for another podcast.
[00:24:54] David: Absolutely.
[00:24:55] Lily: Are there any final remarks you’d want to say before we finish today?
[00:25:00] David: I guess the final thing I do want to say is that I may be being a bit too one-sided in my presentation of the consequences of regulatory frameworks on AI. I don’t know what the regulatory frameworks are going to be, and what those frameworks require is what will determine their impact. I have an idea of what they might be, which I’ve taken from our interactions with people who are working on them. And if those regulatory frameworks were implemented well, that’s what would lead to the sort of changes I’ve described, where we might get equally rapid tech development, or AI development, but a shift towards big tech rather than startups, which would come with deeper investment into the responsibility part, because of the reputation issues and the framework. So that’s a specific scenario, which is what I am seeing in the tea leaves, so to speak, and I’ve spoken quite a lot to that specific scenario. There are many other scenarios, many of which are much scarier to me. I’m not saying that the scenario I’m seeing doesn’t have its problems and its downsides.
But I am saying that there are other scenarios where a regulatory framework could basically shut off whole continents from using AI tools. That is possible. There could be regulatory frameworks which shut down innovation and therefore create a system where illegal AI actually overtakes legal and well-regulated AI. That’s another possibility. Those scenarios are all possible, and many more besides.
[00:26:59] Lily: Very interesting. You said you’re an optimist?
[00:27:03] David: I’m an optimist, but I have no idea what’s going to happen. I do believe, I still believe that used responsibly, AI can be fantastic for humanity. And I think there are paths to achieve responsible AI outcompeting all other forms and hence becoming the dominant approach. And regulatory frameworks could play a positive role in getting to that, maybe sooner than it might happen otherwise. Maybe that’s the place to finish.
[00:27:40] Lily: Great. Well, thank you very much, David. It’s been a very insightful conversation.
[00:27:45] David: Thank you.