074 – Changing the Question

The IDEMS Podcast

Description

In this episode, we are pleased to introduce guest speaker Helen Crompton, who shares her research and teaching experiences focused on AI in education. Helen and David suggest productive ways that lecturers and students can engage with AI to support and enhance their skills and teaching effectiveness.

[00:00:00] David: Hi, and welcome to the IDEMS podcast. I’m David Stern, the founding director of IDEMS, and I’m delighted to be here today with Helen Crompton, I hope I’ve pronounced that correctly, to discuss AI in education. You’ve got some really interesting research on this, and you’ve also been actively involved in trying to do things differently.

So I’m interested to have this discussion and this is related of course to a Responsible AI for Lecturers course that we have, where we are encouraging lecturers to experiment, to use generative AI in ways which are maybe outside their comfort zone. Hi, Helen.

[00:00:46] Helen: Hello. Wonderful. Thank you. Thank you very much for having me. And yes, Crompton is the correct pronunciation. And it’s great to see what you’re all doing there. And it very much aligns with what I’m doing and many others trying to support educators as they navigate this new world of AI.

[00:01:07] David: It’s a scary world for some, but exciting for others.

[00:01:11] Helen: Yes, very much. People are terrified and some are very excited. In many ways, both are very valid responses.

[00:01:19] David: And in the course, just to give you a little background, we’ve had a first session where we’ve encouraged people to use generative AI. A second topic where we tried to demystify data science in general. A third topic where we discussed AI being a friend or a foe and recognising we need to live with it. And really the sort of piece that you’re coming into is this idea of we need to change the question.

If you’re going to use AI, you can’t use assessment as we used to have it. This is a good thing, but it’s also scary for people, because previous assessment methods are no longer going to be acceptable. And we’ve been quoting research from Reading University, which found that AI-generated scripts outperformed students in first and second year assessments at university. And so this has to happen now.

Really keen to learn more about what you’ve been learning, what you’ve been doing, and hear from you your perspectives.

[00:02:17] Helen: Yeah, lovely. So let me dive in. Terrified and very excited, those are many of the responses. And, like I said, both are very valid, in that in education we don’t like change, it seems. We’ve taken ages to change. For example, imagine you went in for surgery one day and the surgeon had a saw in his hand and said, ‘this is a tool, I’ve used it for years, I’ve done it this way, why change?’ You’d run out, wouldn’t you?

And yet in education we do the same things over and over again, especially in higher education. The lecture is still the predominant style of teaching. You know, you kind of tell people what you want them to know and hopefully they’ll absorb it.

Assessments also have not changed, the typical end of term paper, and I’ve had an end of term paper in my classes sometimes. We do what we’ve always done. But what’s terrifying is that this is such a dramatic change, and we do need to make it now. It’s not a choice. And we have to remember where educators have been.

So they’ve been happy in the classrooms. Everything’s going hunky dory and then suddenly COVID came and everybody had to learn how to manage these new tools of everybody going online. And not only themselves, but help the students who are also figuring out the shift. And when everyone thought, ‘oh, that’s, that was dramatic, that was tough, but we’re done with it now. We’re all back in session’.

It’s not the case, because now ChatGPT has come out. Because hopefully we’ve all realised that AI has been around for many, many years. We can talk about exactly where it is, even Google search. It’s there; in fact, Google search has used AI since 2001. And then they had RankBrain and BERT, and all people noticed was that it was getting better.

But it’s always been there. In fact, I do a lot of conferences and I’ll often say at the start, ‘okay, before ChatGPT, put your hand up if you used AI every day’. No one would have their hand up. And then I’d say, ‘okay, once a week’, and you might have a few putting the hand up, and I’d say, ‘okay, never’. And all these hands would shoot up and I’d say, ‘no, that’s not the case. Do you have a washing machine that kind of stops at a particular time’? We’re using it all the time. But now what’s happened is it’s become more in our faces and when ChatGPT came out, that was a dramatic shift in capabilities.

And really, going to the whole point of this podcast, it has democratised cheating. I mean, cheating’s never going to go away. We’ve always had cheating. People used to write on their hands and tilt their hands upwards, just in the class, to get some answers. But in the past, it was those that had money who could often say, ‘okay, write my paper for me and I’ll give you some money’. Or ‘I’m really good at computer science and I can hack into the system perhaps to get the answer sheet’.

But now everybody can do it. You have the internet, you can get straight on and have it do it for you. And you don’t even have to type it in. You could just literally copy and paste what the faculty member said and put it straight in, and there it appears, quite often. So here’s the time now where we do need to shift. Would you like me to continue on?

[00:06:14] David: I was just going to say that you’ve articulated quite well some of the things we’ve discussed already. And maybe one thing I’ll add to what you’re saying: the 90s is really when deep learning transformed AI to be omnipresent, but behind the scenes. And as you say, ChatGPT brought it front of house.

We talked about the Turing test, and how actually, arguably, in certain circumstances, many of these tools now pass the Turing test. Which means that an observing human can’t tell the difference between a human to human interaction and a human to AI interaction.

[00:06:50] Helen: Yes, they can’t tell.

[00:06:52] David: So that’s the Turing test. That’s been around, Alan Turing goes back a long way, we’re talking about 70, 80 years here. And that’s what we always imagined AI would be able to do, and we’ve hit that turning point, it now can do. And the thing which I want to dig into a little bit is, as you’ve said, one way to look at this is, we can use AI to cheat. And if we have the old assignments, there’s good evidence that AI can outperform students. So all the incentives are there to cheat. And this is like people paying for someone else to do their assignment.

But it’s not quite as simple as that. And the reason it’s not quite as simple as that is that, you know, my colleagues in IDEMS, a lot of them use AI in their work, in their daily lives. And it’s not cheating. It is using the tools you have available. And that’s where, if we only think of using AI as being cheating, then we’re missing out. And I know this is something that you’ve done a lot of work on to actually see, okay, what happens if you stop thinking of AI as cheating?

[00:07:55] Helen: Ah. Yes, and we can go straight into that. I would mention one other thing as well that you touched on there. The Turing test.

[00:08:04] David: Yes.

[00:08:04] Helen: Yes, it’s been here a long time. Just last week, again, you have to keep up with everything super fast because it’s changing, OpenAI actually developed a new framework to understand AI reasoning, to upscale the Turing test. And in fact, I have it in front of me here: “OpenAI imagines our AI future: stages of artificial intelligence”, and it goes from level one to level five. So it’s level one, chatbots with conversational language, and then reasoners, agents, innovators and organisations. So that’s an interesting one people might actually like to look at.

So now we’re thinking about assignments. What can we do beyond? And a quick one for those who are struggling and have a lot of assignments to change. What’s nice is we can add in pieces to those as short term fixes: say, okay, make sure you include discussions from class and things like that, because students can’t add those into ChatGPT, not very easily. And lots of different readings. You can add so many, but it gets harder.

But now here comes the change where we go to transformative learning. So for transformative learning, I actually had some ideas and they got published by OpenAI when they presented their educator’s guidebook. And I’m going to touch on some of those because they’ve done really well and I actually use them in my own classes.

So I can touch on three now, three of the top ones. The first one, which is the obvious one, is the critique. Now, what’s nice about the critique is it goes beyond just, ‘oh, is it right? Is it not right?’ I tell the students it goes way beyond that. And I like to give choice, because that gives students ownership. So I’ll say to students, ‘okay, have ChatGPT write a paper on one of these five points, you choose’.

So they have ChatGPT, or any of the others, Claude, Perplexity, whichever one they choose, write the paper, and they put it in a Word document. Then they do a brief critique. Does that look right? You know, any mistakes or hallucinations or anything like that, but they have to look into their books.

But what I do then, which is the key piece to this, is ask: what about alternative thinking? So one of the papers it might write is on technology integration, methods and frameworks of technology integration, because I teach technology and education. So it’ll bring up frameworks, and the question is: was that a good framework to have chosen? Or is there something else?

So I have them really critique, and I do it in a certain way. I have them work in the Word document with track changes and comments, because that makes it harder again to cheat. Technically a student could say, ‘oh, write a paper on this, and now, okay, critique it’, and it could give a critique. But even if it does that, the student still has to look carefully at where to put the track changes. So to my mind, they get the same learning.

[00:11:34] David: This is the work that we’ve read, and we really feel, as you say, this is the starting point. It’s not just that this is a good exercise to avoid cheating. This is actually a higher skill.

[00:11:47] Helen: Yes.

[00:11:48] David: This is the thing which makes it so exciting: you’re no longer asking people to feed back to you what you want to hear. Because of the way they’re critiquing, you’re actually able to ask them: are you able to understand what has been written? And are you able to reflect on it in a way which is maybe even original and new? And it’s exactly that higher order thinking where the assessment now is; you’ve superpowered it in terms of what the students are taking out.

[00:12:16] Helen: Exactly. And it’s the highest level of critical thinking in Bloom’s Taxonomy, that they are creating this critique. Like you said, they have to understand it and they have to look at it, and really understand what they’re seeing, which is very interesting.

So then the next one is a debate. The debate is wonderful. It is actually such fun. In face to face classes, I like to get the students warmed up in how to use the debate, and we might do a silly topic like what’s the best flavour of ice cream? Just something, and we can debate and it’s just fun, and it gets them relaxed.

And then go into, okay, let’s think now about a serious topic here. It could be anything. I have dentistry students coming in, and it’s kind of, should fluoride be in the water or not? We have topics, all sorts of different topics that are really hard hitting at that time, or that they have to know when they go out in the field and be able to argue it.

And I send them off with these, again, choices. They have to check in with me, okay, that’s a great topic, and then they go and do their debate. And I tell them that they can come back, they don’t have to do it all in one session, they can come back if they have to go back to the readings and come in. But again, I ask them in all of these assignments to connect what’s been discussed in class.

I also teach asynchronously. So, again, I just say, okay, the flip post where you’re discussing back and forth, pull information from them and connect to that. So that’s another good one, the debate.

[00:14:03] David: Absolutely, and this is something which, of course, again, in the course, the way we’re thinking about this, move back to oral is something which is very interesting.

[00:14:12] Helen: Yes.

[00:14:13] David: And it’s something where we talk about this when we talk about AI being a friend or a foe. If you’re thinking of it as a foe, then don’t try and identify when people have used it. Try to create these scenarios where you can’t use it, but where you’re having to build your skills in that way. And I think this is a wonderful example of using debate as a platform to be able to say: this is a context where you can’t really use AI. You know, people could listen to AI chatting to AI, robot debating robot, but that’s not the point. The point is actually being able to engage in that yourself.

Again, it’s a higher order skill, and you’ve got the time for this now because you don’t have to do some of the things you would have done before AI was there.

[00:14:56] Helen: Yes.

Now the other one is, and I’m excited to actually take a little bit further what you’ve just said as well. I’m going to cover the final one now, which is the interview. The debate and the interview are the favourites for a lot of my students. The interview is magical, and I will say, not just for students, but anybody.

Anybody actually going out, getting ready for a job, and it doesn’t have to be looking for a job. It could be, okay, you’re having an interview to show your expertise. You don’t have to be going for something, I say to the students, but often they are actually going out; many want to be teachers, school teachers, and they have to think about the use of technology.

So I give them a prompt. I help with that, so I actually give a designated prompt, and you’ll probably be able to find it on the internet if you Google prompts and Helen. And the prompt says, ‘okay, I would like you to interview me in a mock interview for a position as a classroom teacher in a school that highly values technology. I would like you to interview me on frameworks for technology integration, how to support critical thinking and these things, and I would like you to ask me questions one at a time, wait for me to answer, and then respond to what I’ve answered’.

You have to put those extra bits in, because sometimes it just fires too much out. So you have to say, ‘slow down, ask me, let me respond, and then answer back’. And that is magical. And I will say to the students, if you know the specific school that you would like to go to, or for another position, if you know where you want to go, write the name of that place into the interview.

Say ‘I’m going for this’ and that’s absolutely fine, putting that data in, because that is a real place. That’s, where you’re going. You’re not putting your details in, but you can bring in real details and it adds so much more context. So this is absolutely wonderful. And what’s nice is this doesn’t need to be text either.

You don’t have to sit there and type these things, with all those accessibility issues for students that can’t do that. There are other options here, where ChatGPT-4o now provides a free platform with so many more capabilities. Can I demonstrate something?

[00:17:40] David: Yes, absolutely.

[00:17:42] Helen: Okay.

[00:17:43] David: This is just going to be audio, so people will not be able to see this. But…

[00:17:47] Helen: That’s fine. I’m going to chat with ChatGPT now. And I’m going to actually do that interview that I’ve just described. A short version:

Hello, ChatGPT.

[00:18:02] AI: Hi there, how’s your day going?

[00:18:07] Helen: Okay, it’s great. What I’d like you to do is I’d like to have a mock interview for being a school teacher in a school that really likes technology. And I’d like you to interview me about technology integration frameworks. What I’d like you to do is I’d like you to ask me one question at a time, let me respond, and then respond with another question, also taking note of what I’ve said.

[00:18:39] AI: Sure, that sounds like a good plan. Let’s get started. First question, can you describe a technology integration framework that you are familiar with and explain how you’ve used it in your teaching?

[00:18:52] Helen: Okay, so I’ve used the TPACK framework, Technological Pedagogical Content Knowledge, and I’ve used it in a class where I’ve actually used ChatGPT to do a critique. Is that the right answer?

[00:19:12] AI: That’s great. How did you integrate ChatGPT into your lessons using the TPACK framework to enhance both the content delivery and the students learning experience?

[00:19:25] Helen: Okay, I’m going to stop it there. Hopefully you could hear that okay. What was nice is I didn’t really answer that first question very well. I didn’t say what I was specifically doing. So it came back and said, ‘okay, you’re using TPACK. How actually did you do it?’ So what’s wonderful about this is it gives me a transcript then that goes back into my ChatGPT. And I actually encourage students to do this because it will pick up on their intonation if they even get stuttery or worried.

It will actually note some of that and respond accordingly saying, ‘don’t worry, I know you’re nervous, but here’s a question’. And so that is a very realistic, real world application of learning. That’s, that’s very magical to me.

[00:20:18] David: Absolutely. And the thing which I love about this example, which is really, you know, you’re definitely using AI as a friend here. You’re actually able to give feedback. You’re able to give, you know, interactions that just would not be possible at scale without it. When we’ve discussed this in previous sessions, we’ve mentioned the dream of getting towards sort of private tutor style level of interactions with AI.

And you can see that some of those elements are already there. I would argue there’s still a long way to go to get as good as a private tutor, but there are people working at this. But what you’re demonstrating with this interview is very much what we can aspire to in the future.

[00:20:59] Helen: Yeah, I agree and disagree in some ways with that. I think we could be very close to being there; it all depends, it being relative, on how good the actual human tutor is. And again, I will agree with many places that generative AI, sorry, not general AI, we’re not quite getting there yet, has many flaws that we have to be very careful of.

However, looking at the positive side, this can provide a tutor. Many people, really the largest part of the population, cannot afford a tutor. And in some countries where they cannot afford a teacher, or there’s not a teacher available, they’re using adults just to be there and helping, you know, with young children, or even in universities. This is a great help in doing that.

What’s nice as well is it goes with what we do with it. So when we look at the level of ability of it, it is very capable, but what we often lack is how to tap into that capability. So that all stems into prompt engineering and looking at how we devise these prompts. Like I said though, someone could try the same thing and say ‘I’d like a mock interview on this’, and it could go very wrong.

[00:22:36] David: Yeah.

[00:22:36] Helen: And that’s only because we’ve not told it specifically what we want it to do. So, yeah, the sky’s the limit.

[00:22:45] David: But at the heart of that, what we always need to remember is the two aspects which are fundamental to any generative AI, or AI in general: the data behind it, what data is actually used, and how it is being trained. And this is, I think, where I would argue the advances can really come in the future. The example I like is actually Khanmigo, and one of the observations that they had is that if you add a thought layer, an actual reflection layer, to how you use generative AI as a tutor, you get much better results.

Because if you have a layer where it tries to interpret and get to an answer, and then a second layer where you decide how you should communicate that answer, these are two separate processes which you can train independently. And then you get better results at the end, because you’ve got a better initial analysis of what the correct answer would be, and a better interpretation of how you should communicate that as a tutor.

And this is where I feel there’s so much progress still to be made. We’re really still just in early days of how to use these systems effectively to build AI to be able to serve education. But it’s really exciting what’s possible already. You brought out three fantastic examples. So thank you for those. But it’s so exciting to, to think what could be possible in the future.
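The two-layer idea David describes can be sketched as a toy pipeline. This is purely illustrative, not Khanmigo’s actual implementation: the function names (`solve`, `communicate`, `tutor_reply`) and the stub lookup table are invented for this sketch.

```python
# Toy sketch of a two-layer tutor: layer 1 works out the answer,
# layer 2 decides how to communicate it without giving it away.
# Invented names and logic; a stand-in for two independently trained models.

def solve(question: str) -> str:
    """Layer 1: get to a correct answer. Here a stub lookup table
    stands in for a model trained on substance correctness."""
    answers = {"What is 7 x 8?": "56"}
    return answers.get(question, "unknown")

def communicate(question: str, answer: str) -> str:
    """Layer 2: turn the answer into tutoring guidance. A good tutor
    withholds the answer and nudges the student instead."""
    if answer == "unknown":
        return "Let's work through this together step by step."
    return (f"Think about how you could break '{question}' into smaller "
            f"steps. You should end up with a {len(answer)}-digit number.")

def tutor_reply(question: str) -> str:
    # The two layers are separate processes, so each could be trained
    # and improved independently -- the point made above.
    return communicate(question, solve(question))

print(tutor_reply("What is 7 x 8?"))
```

The design point is the separation: because the answer-finding layer and the communication layer are independent, each can be evaluated and improved on its own, which is what is suggested makes the combined tutor better.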

[00:24:11] Helen: Yeah. Let me ask you a question there. Because we’ve also developed a framework, a prompt engineering framework. And it is Reflect, and Revise is the last one, because it’s not just reflecting, it’s once you’ve got something, you can then tell it to go off and revise. But is that the artificial intelligence that has a problem, or is that on us to do all that, to do the reflecting and…

[00:24:40] David: I don’t know, I don’t have answers to any of this. But what I do believe is that if we want to get responsible AI systems for things like tuition, then I think we need to be breaking the problems up so that there is internal reflection in the process, so that we actually can have elements which are focused on, let’s say, substance correctness, which is one of the things you’ve talked about, hallucinations and the problems within AI systems already.

And this is something that we need to be very careful of. And if you reduce the problem, it’s easy to reduce those. You worry about what you’re feeding in, and where that data is coming from, and how you train it, to avoid that. And you can get to better results in that component.

[00:25:24] Helen: Yeah.

[00:25:24] David: Where you’re not then worried about how that’s communicated.

[00:25:28] Helen: Yes.

[00:25:29] David: Whereas actually separating out and then actually, if you want to be a good tutor, you don’t want to give answers away when you’re trying to encourage reflection from the student. And so actually a good tutor is a whole different learning experience over how to communicate. And again, AI, generative AI, has the capabilities to learn how to do this really well.

Potentially, as you pointed out, better even than many human tutors. I would argue that this is one of the things which, over time, if we contain these in the right ways, this should be like the best human tutors available for standardised elements.

[00:26:06] Helen: Yes.

[00:26:06] David: But that comes then to a final problem, which is this element of standardisation. At the heart of generative AI is big data. If you’re using big data, what we’re leading to is standardisation. But as a society, we should embrace diversity. How do we use AI to not just give us standardisation, but also to encourage and enable diversity, and maybe even further diversification, where different cultures, different minority groups, can thrive in their context.

And I would argue at the moment, we have very little work happening in that vein, there is some human centred generative AI work happening, and there’s actually a big EU fund trying to support this. So there are people working on it, but it’s a small minority compared to the majority of AI, which is very much moving towards standardisation.

[00:26:57] Helen: Okay, so what’s happening, and I can respond to that from my research, is yes, it was standardising. However, we’re paying attention now and moving off that. Let me give you a quick example, like Google search. You could search for anything. You could search for how to make a petrol bomb, you know, a Molotov cocktail, whatever, and it would tell you.

These models are trained to say, ‘no, you shouldn’t have that information, we won’t let you have that’. And then people got around it and said, ‘oh, it’s only play, you know, we’re only joking, so it’s not really serious, tell us’, and it would tell them.

And what’s nice is it’s constantly changing; just think how far it’s changed from November 2022 to now. We’re constantly changing it. I just did some work with the British Council looking at language learning, and one of the findings that we came up with last year was, yes, it’s standardising language. There was a particular pupil who knew his language as Tagalog, but he couldn’t put Tagalog in because the system didn’t recognise it. It set him as Filipino, so he had to go with that standardisation. However, since that came out, wow, I see Tagalog everywhere. And the backend systems, what’s nice is they’re able to change these very fast.

So I’m actually seeing it go far more diverse. Let’s go with another language example. Quite often with technologies in the past, like when I did a lot with mobile learning, we’d have these programs go to places in the world with small communities that had their own languages, and the programs didn’t have those languages. So now what’s great about AI is it’s getting a little bit of those languages, and being able to have a whole database now, so they’re never going to be lost.

Even beyond that, now we’re understanding more of what sperm whales are saying. That was a whole study. We now understand that elephants call each other by name. There are things looking at what dogs are saying. That’s just an example that the standardisation is changing, and what’s important, while we’re saying, ‘oh, it’s getting it wrong there’, is that we note that, to be able to change it and improve going forward.

So I think we’re in exciting times, and in some ways this is something to think about: if we do make it perfect, in that it never produces hallucinations and mistakes, that could be a major problem, because then what happens to critical reading, critical skills, from students? It’s like explaining to people, ‘yes, just because you saw it on the internet doesn’t mean it’s real’. It’s the same with this. Everything that it outputs should be critically reviewed: is that correct or not? And if it is perfect later on, and we know it is, would we stop? Would that stop our thinking?

[00:30:20] David: This is a perfect place for us to end. I’m conscious we’re running out of time, but we actually have a common example from previous episodes around this. One of the elements, and this is where we feel lecturers’ jobs are not in danger, because they are academic: however you look at any of these AI systems, they are built from data which is in the past, and learning based, at best, on the present. Whereas good academics are creating the knowledge of the future.

And by definition, although you can always hypothesise about the future, and we can use tools to hypothesise, and there have been discussions about how AI can enhance creativity, the future is still unknown. Creating that knowledge of the future is therefore a very human element, is what we would believe.

[00:31:16] Helen: Yes.

[00:31:17] David: And so the hope is that there will always be a place for educators to add value.

[00:31:22] Helen: Certainly will. And worth noting as well is that AI is all brain and no heart. AI doesn’t care how the students do. AI doesn’t care whether it lives or dies; it doesn’t matter either way. It has no conscience that way. Humans do. They care about the students; it matters how they’re doing. Yeah, we bring a whole different element, and AI should never replace that. It extends and enhances what we do, and it should be encouraged that way.

[00:31:55] David: Absolutely. This has been a fantastic discussion. Thank you so much, Helen, and it’s been an absolute pleasure. I hope we get a chance to interact further in the future.

[00:32:04] Helen: Yes, sounds good. Thank you.

[00:32:07] David: Thank you.