073 – AI in Education: Friend or Foe?

The IDEMS Podcast

Description

In this episode, Lily and David discuss the current state of education and the potential that AI creates for the future. They look at what it means to treat AI as a friend for lecturers, and then what it means to treat it as a foe, using case studies. They then ask how lecturers can productively approach teaching in the current AI context.
Note that this podcast is also the third episode of our five-part series on Responsible AI for Lecturers (RAILect).

[00:00:00] Lily: Hello, and welcome to the IDEMS podcast. I’m Lily Clements, a data scientist, and I’m here today with David Stern, a founding director of IDEMS.

Hi, David.

[00:00:08] David: Hi, Lily. We’re on another special episode for our RAILect course, Responsible AI for Lecturers.

[00:00:15] Lily: Yep, and we’re on to our third one today, which is AI in Education, Friend or Foe.

[00:00:19] David: And I’m afraid I’m really interested here in the third part of this, which is: does it really matter whether it’s a friend or a foe? It’s here. We’ve got to deal with it.

Let’s start with the friend part, because we’ve been discussing this quite a lot, haven’t we?

[00:00:37] Lily: We have, and I was really surprised, actually, at how many case studies there were out there of people treating it as a friend. Because I remember when ChatGPT first came out, and people were acknowledging this as well, it felt like universities were asking, what do we do? I remember a university where I was working had blocked ChatGPT from its servers. The immediate response was, okay, let’s just pause this while we work it out.

[00:01:05] David: And I think the growing consensus now is that there are many ways in which, used right, generative AI can be positive for education, at other levels as well, but particularly at university level. As lecturers, if we embrace it, good things can happen.

Should we go through a few case studies? I don’t think we want to dwell on this too much, because we’ve covered it before; it’s at the heart of what we’re really trying to get people to embrace.

[00:01:36] Lily: Yes, yeah, absolutely. But in general there were a lot of different university websites, Northampton University, Reading University, Cambridge, just a few to list off the top of my head, and those were just the ones that came up on an immediate lookup. I’m pretty sure most universities, at least in the UK, have this: tips for their students on using generative AI, and the university’s stance on it.

[00:02:01] David: Yeah. And I think this is one of the important things. That is universities giving advice to students. But for lecturers, the key thing is some of the case studies which are coming out, real research case studies, where lecturers are using it in constructive ways to enhance education. That’s the aim.

[00:02:23] Lily: Yeah, so Emily Donahue from the University of Mississippi was saying that she’s using AI to help students really focus on critical thinking. And I know that next week we’re going to go into changing the question a lot more, but she’s having her students analyse and critique AI-generated arguments and then rewrite them.

[00:02:41] David: And this is exactly what we really dig into next week, when we ask: can we actually get to a higher level of thinking? Can we actually enhance what students can learn, thanks to the advances in generative AI?

[00:02:58] Lily: Yeah, but we have here lecturers who are finding that including it in education, treating it as a friend, is really enhancing the process.

[00:03:11] David: And that’s something where, in some sense, as I said, this is the easy part of this week. We don’t need to dwell on it, because it is the element which we want to bring out and highlight throughout the five-week course for lecturers. But we want to really emphasise that treating it as a friend is to ask: how can we achieve better education thanks to the advances which are coming through generative AI?

[00:03:40] Lily: There were some schools in New York that were looking at integrating AI as an assistant for educators, with AI helping with the more administrative tasks, so that the teachers could focus a lot more on the creative elements in the classroom.

[00:03:56] David: Absolutely, and this is recognising the fact that generative AI can, in certain cases, be used in multiple ways to enhance education. I’m going to bring one more dream out, and it’s not there yet, generative AI is not there yet: the idea that, through generative AI, we could get towards the kind of education you can get from a private tutor.

This goes back a long way, and I believe it was Bloom who had this beautiful visualisation of it. You take your standard, traditional, lecture-focused teaching, as was the case many years ago when Bloom was thinking about this, and it’s still roughly the case now, and you compare that to the results you can get through mastery learning.

The definition of mastery has changed over time, and there are all sorts of discussions around this. I don’t want to get hung up on the details, but it broadly comes down to the idea that students repeat assessments, with appropriate feedback, until they’ve mastered a concept.

And if you have a mastery-based approach, you get results one standard deviation higher, in some sense, so that your average student is now a good student, and your weakest students are what would have been average before. That really shifts the distribution of what students achieve.

But as people who are very wealthy know, if you want to get really good results, you hire good private tutors. And if you hire a good private tutor, you get results two standard deviations higher. So your average student with a good private tutor is considered exceptional, and your weak student is still considered good, on the same learning measurements. There have been critiques of this in the past, and so on, but I buy into the broad idea that if we can get more personalised tuition, then we can get better education, which is broadly what this is saying.
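To put rough numbers on that shift, here is a quick back-of-the-envelope sketch in Python. It assumes normally distributed attainment, which is an illustrative assumption rather than a figure from the episode: shifting the mean by one or two standard deviations moves the average student to roughly the 84th or 98th percentile of the original, conventionally taught cohort.

```python
from scipy.stats import norm

# Bloom's "2 sigma" comparison: if attainment is roughly normal, shifting
# the mean by one or two standard deviations puts the *average* student at
# these percentiles of the original, lecture-taught distribution.
print(f"mastery learning (+1 sigma):    {norm.cdf(1):.0%}")  # ~84th percentile
print(f"one-to-one tutoring (+2 sigma): {norm.cdf(2):.0%}")  # ~98th percentile
```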

And the dream, of course, is that with good AI-assisted learning, we can get to the stage where the normal education on offer to everyone, your basic education, could be the equivalent of what you’d have had with private tutors in the past. And that would mean the level of education within society as a whole could be really increased.

Now, this is not in our case studies. There are people working on such systems, but they don’t exist; they certainly aren’t available at scale at this point in time. But this is where, thinking of AI as a friend, lecturers or educators may get worried: if AI is doing that, what’s my role? And the key point is that I believe you will always have a role doing more than that minimum. It’s just that you’re building up the baseline.

And so the baseline of the education you can provide is so much higher, and the educator’s role is to enhance on top of that. That’s the fun stuff as an educator. That’s the stuff I want to be doing. Really thinking of AI as a friend, in terms of where we hope the end goal may be, I believe we will eventually get to the stage where your baseline education, the education you can provide for everyone, is enhanced substantially.

And so the education levels of society increase to a point that is almost unrecognisable compared with what we’ve been able to do in the past. But that’s years and years away, I still think. And don’t get me wrong, I’m really impressed with some of the efforts; I will single out Khanmigo, because there are elements of the work they’ve done which I think are breakthroughs.

And I’ll just try to articulate this in line with our earlier demystifying AI discussion. One of their breakthroughs was to recognise that if you’re trying to build an AI tutor, it is really important to separate out the substance layer from the tutor layer. So when the AI tutor is responding, what they’ve done is separate it out into two steps of AI. The first step takes what is asked of it, or what the student says, and generates a response which tries to work out, I guess the best way to frame it is, what the answer to that question would be.

And that step is hard; we discussed this in the previous episode, the demystifying data science component. In that step, you want to get the substance right. Whereas the second layer, the layer on top, then says: okay, now that the interpretation is understood, how should I communicate this back to the student as a good tutor? And that, again, is another learning process. What’s powerful about what they’re doing is that they’re separating out these AI models, and therefore separating out the learning of how to interpret and create solutions based on the questions asked from the learning of how to communicate that as a good tutor.
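A minimal sketch of that two-step separation, in Python. Here `call_llm` is a hypothetical placeholder for whatever chat-completion API is available; this illustrates the idea of the separation, not Khanmigo’s actual implementation.

```python
def call_llm(system_prompt: str, user_message: str) -> str:
    """Hypothetical stand-in for a real chat-completion call."""
    raise NotImplementedError("wire up your model provider here")

def tutor_response(student_message: str) -> str:
    # Step 1 -- the substance layer: work out what the correct answer is,
    # prioritising correctness over tone.
    solution = call_llm(
        system_prompt="Solve the student's problem step by step. "
                      "Prioritise correctness; ignore presentation.",
        user_message=student_message,
    )
    # Step 2 -- the tutor layer: given a trusted worked solution, decide how
    # to communicate it pedagogically (hints and guiding questions rather
    # than handing over the answer).
    return call_llm(
        system_prompt="You are a patient tutor. Using the worked solution "
                      "below, guide the student with hints and questions; "
                      "do not reveal the full answer outright.\n\n"
                      "Solution:\n" + solution,
        user_message=student_message,
    )
```

The design point is that the two layers can be prompted, evaluated, and improved independently: the first for getting the substance right, the second for communicating it well.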

And that, to me, is really what it takes to build these ideas responsibly. My slight concern is that I think it’s still going to be years before we can train those two steps really well, because both of those steps have subtleties, and, as we discussed in demystifying data science, subtlety is not AI’s strength. So I do think there are big challenges in achieving this, but there has been progress already.

[00:10:02] Lily: Absolutely. And this is just the dream of if we treated AI as a friend, where we could, where it could go. I find it absolutely fascinating to hear your insights on it.

[00:10:12] David: But of course most people, when they think about it, are thinking about the foe part.

[00:10:18] Lily: Yes, because it’s different and it’s new, and particularly, I guess, if you don’t understand generative AI that well, you’re now forced to look into it and think about it.

[00:10:32] David: And you might not want to. As a lecturer, you were quite happy teaching before. It was part of your job, maybe not the part you were most engaged in; you were maybe really keen on your research, and teaching was something you did. But you knew how to do it, you had those structures in place. Whereas now you’ve got to think about it.

[00:10:50] Lily: Yeah. And a lot has already been done. I remember at the very start, Ofqual, the Office of Qualifications and Examinations Regulation in the UK, the chief regulator there recommended, and this would have been over a year ago now, in those early days of ChatGPT, that coursework start to take place in house, so that it becomes a bit more like an exam, in a way.

[00:11:16] David: Yeah. And this idea was particularly for schools, so not necessarily for lecturers in the same way, but the same idea applies. And we’ve seen many universities think about this, where they had moved towards much more remote or continuous assessment in different ways. The idea that you’re bringing people back in, invigilating them more, and watching people do things to make sure it’s them and not the robots producing whatever they’re producing, is part of the reaction. And in some sense, this is an interesting part of the reaction that I’m not entirely against.

There are elements of the reaction which I am against; we’ll get onto those in a minute. But because AI means you can’t know what people are doing when they’re not there, the idea of doing things in person, with the human element, whatever that may be, even if that’s just invigilation, I’m not against that.

The element of treating it as a foe which I’m less keen on is the emphasis placed on identification.

[00:12:31] Lily: Yeah.

[00:12:32] David: And that’s something I think you’ve also dug into in quite a lot of detail.

[00:12:36] Lily: Again, I’m not sure how different universities are treating this, or whether universities have given out advice on ways to spot that their students have used it. But what I do know from looking it up is that teachers are encouraged to look out for it, and to do so by comparing students’ current work with their previous work. They’re given different indicators to look for: is there American spelling, is it typed when they used to submit handwritten work, is the language overly verbose or hyperbolic?

But on the flip side, I see this, what the teachers are being told, and then the next article down is: okay, now, as a student, how can you hide the fact that you’re doing it? And that has very similar tips in it. Make sure you’re using UK spelling if you’re from the UK.

[00:13:25] David: And this is where it becomes an arms race, and it’s not constructive for anyone. It’s not actually constructive for the student to just be learning how to use AI without getting detected. In my mind there is learning happening there, and I’m sure there are people learning skills there that they will go on to use in real life, but these are not necessarily the skills that I want for society.

And actually, you’ve got an arms race there with high stakes. As we know from the study which recently came out, the standard ways of identifying AI are not really capturing most of it. So maybe you’re getting that number up from two out of 33; maybe you could get it up to 10, or maybe even 20.

[00:14:15] Lily: This study, by the way, being the Reading study, I assume, from the University of Reading that we’ve mentioned in the previous two podcasts.

[00:14:20] David: Absolutely. It’s one of the studies which came out just before we created this course, and therefore we are using it as the background right the way through. Let me repeat the study again, and I’m sorry if it’s repetitive for some, but this is a study where they introduced 33 AI-generated exam submissions, if you want, into a real grading process. Only two of them were identified, and the other 31 passed. In the first and second year exams there were four such submissions, and all of them outperformed the average students and generally got first-class grades. So as a student you now have strong incentives to use AI for first and second year coursework, because AI can do it better than students can.

This is what we’re up against, and this is why you don’t want to be in an arms race: the people who win are the students who learn how to avoid having their AI use identified, not the students who do the hard work themselves, and that’s what we want to avoid. So if you’re going to treat it as a foe, the key thing is not to create situations where you need to identify whether or not someone’s used it.

If you’re going to treat it as a foe, my suggestion, my recommendation, is to create scenarios where it can’t be used. Some universities are looking to go back to oral exams. That’s a great idea; oral exams are fantastic in certain ways. I was privileged to study in Germany as well as the UK, and I did what they called a Diplom, a sort of master’s degree, where I had to do oral exams, and it was a wonderful experience that taught me the value of this.

If your response as a university is to say we need to stop people using AI in high-stakes examination, then that means we need to go back to forms of examination where it’s not even possible to use it, rather than trying to identify when people have used it. The identification is going to be an arms race, and I’m pretty confident that students will win.

[00:16:32] Lily: And what I was going to add to that: you were saying the students who will win in this situation are the students who just learn how to bypass detection, the students who know how to hide that they’re using AI. But on top of that, the students who are going to lose out are the students who get accused of using AI when they actually haven’t.

[00:16:54] David: They’re the big losers. And this is happening.

[00:16:57] Lily: Yeah.

[00:16:58] David: And there are a number of case studies, I think, you’ve put in a case study or two about this, haven’t you?

[00:17:03] Lily: Yes, yeah. There’s this example in Nigeria, and I know we had an episode on this in our regular IDEMS Responsible AI podcast series, but in this case study, I think it was an author who came out and said that if they get an email with the word ‘delve’ in it, they now assume it’s AI generated and disregard it.

[00:17:26] David: And I remember the response from across Nigeria specifically, where people were saying, ‘Look, this is my WhatsApp chain from the last few weeks. I use delve in three or four messages. This is part of our language. This is part of the English language. The fact that, maybe in a US context, you’re not using that language means you’re discriminating against me, not because it’s AI generated, but because you have a narrow vocabulary.’

And this is part of the issue: you do get, and you will have, false negatives and false positives. And so there will be losers; that’s the thing. We’ve got to be really careful about this. Any time you recognise you’re getting into this arms race of trying to identify it, my advice is that’s the wrong way to think about it. I am absolutely encouraging lecturers to think of AI as a foe and to ask: how can we create situations for learning and for assessment where AI is taken out of the equation? Where it is not possible to use it.

But that is very different to saying: we will identify when it has been used and it shouldn’t have been. If you’re going to take it out of the equation, take it out of the equation. Don’t then try to catch people who used it and shouldn’t have. That’s the key, and this is, to me, central to what I would argue is good thinking when you’re treating AI as a foe in education. And I really want to repeat: there are many instances where, pedagogically, I think it is really valuable to have educational situations where AI is taken out of the equation. This is something I encourage.

And I guess this is where we get to the main point we’re hoping to get across in this topic, which is that if you have a housemate, it doesn’t matter, in some sense, whether they’re a friend or a foe. You’ve got to get on and you’ve got to live with them. Same with your family: you don’t choose your family, you choose your friends. You’ve got to get on and live with your family, and it doesn’t really matter whether they’re a friend or a foe. You sometimes just have to accept that they are who they are. They’ll have elements where sometimes they’re a friend and sometimes they’re a foe, and that’s okay, that’s life. Many human interactions fall into that category: it doesn’t really matter if someone’s a friend or a foe, you’ve just got to live with them, and that’s part of standard life.

[00:20:25] Lily: And this is our third part in the course, which is living with AI.

[00:20:29] David: Exactly.

[00:20:30] Lily: We can’t decide; it’s here, it exists. We could go through and say, oh, if only it didn’t exist, but it’s here now.

[00:20:37] David: It’s here, exactly. Generative AI and the advances that have been made recently mean that this is part of the world we live in now, for better or for worse.

And so the real question, and this is why it is sensible to reflect on what to do if it’s a friend and what to do if it’s a foe, is the fact that we need to live with it. We need to find positive ways to treat it as a friend, and positive ways to recognise that treating it as a foe can lead to good educational outcomes as well. Both of these are valid, and both of them are going to be constructive at some point, I’m sure, for almost any lecturer: recognising where you want to lean into AI, into generative AI, and use it to give better education than could be given before, and where you want to create situations, moments in time, where students are engaged in activities which are AI free. Because that still has pedagogical value, being able to create these situations is going to be of value, and I would argue that in some circumstances it is going to be of increasing value.

If I look into a future, we discussed this when we thought of AI as a friend, where basically everybody in the world has an AI private tutor, leading to worldwide educational outcomes which are much better than we currently have.

What are the rich schools going to do? What are the rich universities going to do? It’s obvious: they’re going to complement that with very human interactions. I still come back to the fact that in the UK, Oxford and Cambridge are characterised by being so wealthy that from the first year of undergraduate you have one-on-two interactions with world-class lecturers. A good AI private-tuition system is never going to replace the value that such interactions can bring. This is beyond good private tuition, which can be global; this is human interaction in powerful ways, and the learnings that come from that are irreplaceable.

This is why they give such a good education. It’s one of the reasons; there are others. But that, to me, is central to what we need to be thinking about: that living with AI, and generative AI specifically, means recognising that the roles we can play as educators can evolve in positive ways.

We can focus on the things that, as a lecturer, I actually enjoy: the human interaction with students. A lot of the chore can be taken out. And the flip side of this should be that we can offer an education enhanced above that already higher baseline, by leaning into treating AI as a foe and having these spaces of real human interaction.

And that’s where it gets exciting. To me, this is living with AI done right. And don’t get me wrong, there will be places where it’s not done right. But I believe that as lecturers, right now, we don’t need to worry about the private tutor AI; it doesn’t exist yet, though it might in the future.

But what we do need to think about is how we can enhance the education we’re giving using the AI tools which are coming along, treating AI as a friend; and how we can create spaces for that deeply human interaction, that learning which often includes much softer skills, communication skills, all sorts of other things, using the time which is maybe freed up by the enhanced education and converting it into spaces, times, or moments where we are AI free.

[00:25:02] Lily: Really interesting. Thank you very much, David. Do you have any final points?

[00:25:07] David: I guess the final point I’d have is that this is another episode which is focusing on lecturers. And my hope is that, for any members of the audience who aren’t lecturers, this is still going to be insightful in terms of recognising that we are probably in a situation where our higher education systems are going to have to evolve.

And if done right, this could enhance society, because our institutions could be giving better and better education. But lecturers are now put in a really rather difficult situation: if, as a society, we don’t embrace the role good lecturers can play, the added enhancement they could bring, our whole education systems could be transformed by this in ways which might not always be good.

These transformations are happening, so let me just give the nightmare scenario for me: these private tuition AI systems are developed and they are commercial. And because they’re commercial, they are then used at scale, and the money that would have gone to the academics who would have been enabling this education now gets sucked into commercial AI companies.

[00:26:39] David: Those commercial AI companies are not going to be furthering learning. They’re going to be stagnating learning. And this is where as a society, we need lecturers to step up here and to recognise that they will always be able to add value, no matter what it is, because AI is based on data from the past, whereas good academics are creating the knowledge of the future.

And that distinction is central to the fact that lecturers’ jobs should not be in danger because of AI, as long as, as a society, we recognise the value that they bring. I can say this now because I’m not a lecturer, so this isn’t about me keeping my job safe. But I was a lecturer; this is something I understand quite well.

But I’m now a social entrepreneur. So arguably I should be on the side of saying, what if my enterprise could take this up and take all the money away from universities to… no, that’s not what I want, and it’s not what I believe. We’re a social enterprise, and we need academia to step up and continue to play the role it has played, going back centuries, of helping advance knowledge for society and with society.

But we are almost at this sort of potential tipping point related to education.

[00:28:11] Lily: Very nice. Thank you very much, David. And I look forward to discussing next week with you on Changing the Question.

[00:28:18] David: And that’s where we’re going to really dig into this idea of AI as a friend. What are the things we can do to use it? I’m excited by that. I’m looking forward to it.

[00:28:28] Lily: I am too, because, to me, this is where you really start to think outside the box, anyway.

[00:28:31] David: We’ll see. We’ll see what we discuss next week. I hope everybody’s enjoying this, who’s listening.

Thank you.

[00:28:37] Lily: Thank you.