4 – Changing the Question

Description

In this fourth episode of the RAILect series on the IDEMS podcast, titled “Changing the Question”, we dive into how AI can revolutionise education by shifting the focus from traditional methods to fostering critical thinking skills. Lily and David discuss the importance of adapting questions in a way that leverages AI capabilities, ensuring students develop deeper analytical skills and greater ownership of their work. They explore various case studies and examples of AI applications in education, highlighting the potential of AI to enhance human intelligence and creativity.

[00:00:00] Lily: Hello, and welcome to the IDEMS podcast. I’m Lily Clements, a data scientist, and I’m here with David Stern, a founding director of IDEMS.

Hi, David.

[00:00:08] David: Hi, Lily. I’m looking forward to today’s discussion. It’s the fourth in our RAILect series.

[00:00:15] Lily: Absolutely, RAILect being Responsible AI for Lectures. And today we’re on “Changing the Question”.

[00:00:21] David: Yes. And this has been something we’ve alluded to a lot over the last episodes as well, because it really is central to what we think is possible with AI, and being hopeful about AI and the value it can bring for education.

[00:00:39] Lily: And it’s something that over the last kind of few weeks, while doing research for these different modules, I’ve actually come across quite a lot of people that have already started to change the question and who really already feel this way that, okay, if we’re going to use AI or AI is here now, then how do I use it? Well, I need to change the question. If a student can answer the question with ChatGPT, then we’re asking the wrong question.

[00:01:09] David: Absolutely, and let’s be clear here, this is not our idea. It’s not a big brainwave we’ve had. This is what people in education are thinking. People who know what they’re doing, who’ve been thinking about AI for a long time, and how changes in technology affect education. This is exactly what good practice would dictate that we do.

[00:01:33] Lily: Absolutely. We've said a few times now that with AI we can focus more on different skills, skills that students can now develop, such as critical thinking. I personally feel this is more relevant than ever, because AI is really integrated into so many aspects of life. And we've seen so many case studies where using AI without thinking, without that human oversight, has caused all sorts of problems: the lawyer, the professor, the mayor, who were each accused of something they didn't do because AI generated it, or scandals like the Dutch welfare scandal that we've spoken about before. So it's really important, I feel, that we think quite critically about the outputs the AI is giving.

[00:02:40] David: And let’s be clear here, human decision making isn’t always perfect either. And so the point which I think is so central is, if we can recognise that good AI systems in the future are going to be developed, we hope, so that they integrate humans in the loop in constructive ways, our hope is that the human thinking can be focusing on some of the more moral aspects, some of the elements which are subtle, some of the elements where there is maybe context specificity, which might have been missed by the large models, whatever they may be.

My hope is that if we can think through the role of humans in AI systems, in an AI-led world, well, "led" is too strong, in a world where AI is embedded, then I think that's where we can see that there are going to be lots of jobs in the future. They just might be different in nature from some of the jobs now, because specific tasks which are currently considered human tasks will be more than adequately performed by AI.

And in education, what I want to bring this back to is that at higher education institutions, we've had a lot of emphasis on writing. In a lot of subjects it's writing essays, it's writing exams in different ways. One of the things I've found more recently, when I'm thinking about education and what I want, is that I want that emphasis to shift towards critical reading.

This idea that, actually, the task is to create something new is maybe not the most creative task. There’s actually a lot of creativity that can come with iterating, with identifying areas of improvement, areas where innovation, where something new could be brought in. I think if we think about, the idea of humans playing a role at critically reading and adapting, I think those are really interesting skills which have been undervalued in higher education.

And when people come out of higher education, those are the first tasks that I tend to give them. I don't get people to first of all do something totally new. When you bring in somebody who has gone through higher education in different ways, the tasks they need often involve adapting, being critical, understanding deeply so that you can add value.

It's always that added value, and that's not going to change when the first draft can be written by an AI system. Great! The blank page syndrome is now gone, and you can focus on: no, that's not what we wanted to say. That doesn't correspond to what we believe or what we think is correct. That difference between what's right and what's wrong, that's the human element.

And that’s where we should be training people to be better and better at being critical, and this will hopefully affect things like the misinformation that’s become widespread. We want to be able to train people, not just in creating new things, because generative AI is now very good at that, but in actually making things good.

And there's been a bit of a loss in actually getting to that. That's where I think we can really have an education system which could help people be part of bigger teams, making things really good.

[00:06:46] Lily: Great. And so how do you feel that students could develop that skill? How can we change the question to develop that?

[00:06:54] David: I think part of the point is that there is no one way; there need to be many different ways. And you've gone through a lot of the case studies. One of the ideas that I really like is this: if you were going to set an essay topic, why not instead give that same topic to ChatGPT, then give the students its output and ask them to critique it.

Now, again, you can ask ChatGPT to critique it, but there are ways in which you can do that so that you can actually understand and you can help students to look through and identify. What can they identify that, maybe needs fact checking? What can they identify where the language is maybe not what you would have hoped?

What can they research to check? To verify? And then they can go in and dig into, sort of, some of the things which it brought out that they maybe didn’t know about, which now they can investigate. This is the key to me. That’s just one example, and we’ve heard of many others. But it’s one which I really quite like, because I like the idea that instead of creating something from scratch, in that context, that same task is all about the fact that, actually, you could get something, but how do you deepen it? How do you make it better? And that sort of critical thinking, where do you need to put attention? What is it that interests you within what’s come out that you want to put your time in?

Now, again, there’s questions about how you do this sort of assessment, and there’s approaches using track changes, or comments, and systems where I think you can actually have a pretty good evaluation of student skill at the critical reflection tasks, based on how they’re able to identify and address elements, within a text, and within something which has been generated and presented to them. That has the potential to me to really be quite exciting as a way to change the education that we provide.

[00:09:13] Lily: Great. I know that we’ve worked before on different things. For example, when we’re working with some students on statistics and the kind of idea is for them to generate a report. And the emphasis isn’t on the report. The emphasis is just on “okay, but the statistics that you’ve done, interpreting it, analysing it. What did you do? What did you find?”

But the emphasis isn't on the actual writing of the report; that's secondary. And with the kinds of powers we can get through AI, it's great that we can have that as a secondary thing. Often the time-consuming task can actually be writing the report. That doesn't need to be the case now.

[00:09:54] David: And I think one of the points you bring out very nicely there is that we wanted students to write the report because actually if they go out into the working world, that’s going to be what they will need to create to be able to communicate whatever they found.

We, of course, prioritise what we’re interested in: how they were able to analyse the data they had, and so on, and draw out insights from their data in responsible ways. But to do that in a way which was realistic, we needed them to produce something which was time consuming for reasons other than the core focus that we had.

Now what’s so exciting is it’s not just the fact that we can change that assessment. It’s that actually the reason that assessment can change is because the real life task is less laborious than it used to be, because of the generative AI. And so it’s not that we’re changing a task like that, we’re not really changing that task.

We're just encouraging people to do that task more efficiently using the tools they have available. And that's something which is realistic, which they can then take into their working life along with the skills they take out. That need to produce reports isn't going to go away, because that's how you make sure that the work you've done, the insights you've drawn out, can be communicated.

But a lot of the sort of time consuming components of that can be reduced, just as you’ve introduced AI into your report writing to increase your efficiencies. That’s part of what now students can do.

[00:11:41] Lily: Yeah, absolutely. I mean, there are countless case studies where using AI has really helped shift the focus onto the task you're interested in developing, taking the pressure off these other aspects, so you can hone in on the skills we're interested in in this module, in this course.

[00:12:06] David: Absolutely. And there are concerns about this, don't get me wrong. We don't want to go too far in any given direction. I love one of the earliest studies I saw, where a number of undergraduate students and a number of AI-generated business cases were ranked on which were the better business plans.

And the AI did very well. And what I loved about that study was that the conclusion wasn’t that AI should be used for writing business plans. It was that actually what you could do is you could use AI to generate ideas which stimulated the human creativity and an individual could therefore write better business plans than they could do without it.

That was a very early study, and there have been many since. And one of the things which I like, which I believe you've mentioned as a case study you've come across, is that the risk of this is that it reduces the diversity of ideas. There has been another recent study which basically showed that the AI generated, what was it, stories?

[00:13:19] Lily: Yeah, it generated ideas for stories. You had two sets of people, and one set could use AI to generate different ideas for stories. For the group where AI generated the ideas for people to then write the stories, those stories were seen a lot more positively by the readers; they were enjoyed more, because they contained aspects like plot twists.

[00:13:44] David: But there was less diversity between them.

[00:13:48] Lily: Which I love.

So then you're in this situation of, okay, AI can really help and can create this enjoyment because of these plot twists. But for how many people can we do that? Because if too many people do it, if we all do it, then actually the stories will become less enjoyable, because they'll all be very similar.

[00:14:10] David: This is the beautiful thing, remembering from the demystifying process that AI is just finding what's common, what's normal. It's an analytic process which, if we're not careful, reduces diversity, because it gets us to what most people would want or most people would do.

And so I love the fact that as lecturers we now have the opportunity to learn from this. The advantage of using AI to create things which are more universally appreciated is great, and it means the skills of the students are there. But it also means we can now think: how do we incentivise going beyond that and building that diversity back in, encouraging students to go beyond what's known and be creative in other ways?

Not because it’s necessarily more popular to everyone, but because, actually, now there’s the potential to know that you’re doing it consciously, rather than because you don’t know. So to take someone who deliberately doesn’t put the plot twist in, even though the AI is suggesting they do, that now is the skill of being able to say, okay, I’m doing this consciously rather than subconsciously, in making those choices.

And this is again where the human element of this can be enhanced. And so part of what we can do, if we integrate AI in the right ways, is recognise and appreciate where people are saying: no, I understand that is a common way to do it, and an appreciated way to do it, but I am choosing to do it differently.

So not only having the skill to be able to do what is widely appreciated, but as lecturers, encouraging and inspiring people who know that and choose to do something different, choose to serve their individuality, is something we can also build into assessments in interesting ways and recognise.

Because these are exactly the sort of things that will matter over time, as the AI components become more common. If all students are using AI, then the question isn't to identify who's using AI. It's to appreciate the people who are creating something substantially different, the ability to come up with something different. And that could enhance human creativity. These are the sort of things we hope for. I'm not saying this is easy. I don't have easy ways to help lecturers imagine this and incentivise it.

But this is the key. If we think about the incentives we’re giving to our students and if the incentives end up being AI content does better than human thought, we’re probably giving the wrong incentives. And so how as lecturers can we play with that to incentivise elements of creativity which build from the new level we can get when working with AI.

[00:17:30] Lily: Really interesting point, I love how you've taken it in that direction. And in this world where we now have AI, there are these really creative, and I think quite exciting, ways of changing the question to develop these new skills, while also being aware of AI.

And we've spoken about this kind of critiquing. And I think another skill to develop is really understanding the output. AI is becoming increasingly used in the world, and so another skill to develop is ensuring that students know the risks, such as, as you say, that it can reduce diversity. So having your own thoughts, or consciously deciding, no, I don't want to have a plot twist in, is one way…

[00:18:25] David: Which becomes a plot twist in its own right. But having to insert that, this is what I think is so powerful. The idea that you can now more easily create something which is generally appreciated, potentially. That’s a skill to gain for students. But, then the ability to go beyond that and to make conscious choices, to do something different, which are thought out, which are thoughtful. This is the thing we’re wanting to do. This is what I believe good education is. It’s not about, just the fact that you can run these things and then do something.

That’s not what we’re wanting to encourage with education. With education, what we’re trying to get out is deeper thought. We’re trying to get people to be more thoughtful. In my perspective, I would hope that in education we’re trying to get them to appreciate different perspectives, to understand that there is a mass perspective which is appreciated, which is broadly what the AI draws from, but that there are reasons why moving away from that can be better for groups and therefore you can create something which is appreciated in a niche or in a small group. And that’s also a value.

And these are things where I think, if we think about the way education can change. And higher education in particular, it used to be very few people who went through higher education and then who were given roles where they had a lot of responsibility because they had learned how to think deeply. Now with the large numbers going through higher education, it’s become the baseline. If you haven’t gone through higher education, actually, some people appreciate that because you have life skills instead of just academic skills. And so there has been an element in certain contexts where that’s been flipped.

I believe that the AI sort of revolution could help higher education to be able to, again, up that depth of thought that you can get through a higher education degree. And to help a larger number of people really be thinking more deeply, which is to me, where higher education should demonstrate its relevance.

We've recruited people who don't have undergraduate degrees, despite the fact that as an organisation a lot of people at IDEMS have PhDs, because we value these life-skills approaches as well. But I do find that at the moment the undergraduate degree is not, in all cases, leading to enough depth of thought. And that's what I believe it should be doing. And so we should be using AI as a tool to enable lecturers to spur that depth of thought.

[00:21:27] David: I hope students get inspired in different ways to then go down rabbit holes, to think deeply about little components in ways which may be able to be recognised better because the nature of that thought process maybe can be brought out more clearly because the ability to communicate it can be aided by AI.

And I think that’s the sort of thing where if we think about how our assessments can really focus on the thinking and the thought iterations, more than the writing and the communication of them. To me, that’s something which I think could really help. And this is the essence of changing the question. To me, we are changing the question from asking people to produce, to asking people to reflect on, to iterate, to think deeply about, and to communicate that.

That’s, I think, the opportunity that we have, thanks to AI, that we can afford to do that. Because there’s so much to learn, so much to understand, that if you spend a long time worrying about how to communicate it, you’re missing sometimes the depth of the thought. And that’s the observation in different areas with different sorts of subjects. I’ve known really strong students struggle because they didn’t have the ability to produce or write despite having the ability to think deeply. And enabling that to come to the forefront I think could benefit all students.

Again, there's no single way that we should be prioritising. But I do think that we haven't been imaginative enough with assessment. And I suppose, to finish this episode, I wouldn't mind you going through just a few of the instances that you've drawn out, which are in the course, to highlight some of these ideas.

[00:23:44] Lily: Yeah, absolutely, I'd love to. There are so many different case studies, and I tried to pick ones which are different to one another, but in essence a lot of them are very similar. A lot of them really focus on this critiquing.

There was one case study describing a teacher who asked her class to use ChatGPT to write papers on a novel they were reading in class. The students also wrote their own papers and compared the results. So they weren't framing ChatGPT as a way to cheat, they weren't encouraging students to use it secretly, but instead really showcasing ChatGPT and generative AI so that they could talk about it.

Another one was Emily Donahue, a writing tutor and educational developer at the University of Mississippi, who uses AI to help students focus on critical thinking by having them analyse and critique AI-generated arguments and then rewrite them. And she's found that this really helps people with that blank page syndrome: you no longer have the problem of starting with a blank page. You've got something, and you can work from there. In a lot of cases you might completely change it from what you started with, but you've got that starting point to jump off from.

[00:25:06] David: And the key point there is that it is bringing out your ability to recognise what you do and don’t want to say.

[00:25:14] Lily: Yes, yeah, absolutely. You think, okay, "I think that this point here is important", and develop it from there. And so you're not really thinking, okay, this is a critiquing exercise. Instead, you're still doing the writing, but you're silently developing that critiquing skill.

[00:25:39] David: But the important element there to me is this element of ownership. Done right, using AI in this way doesn't mean it's less work to create the finished product. But it means that finished product has had more thought go into it, because you've considered what's not there, what could have been there, as well as what is there.

And that process is so important. This is one of the things that I find in our education system we lack. It’s this ability to not only recognise what is on the page, but what’s not on the page, and what has been chosen to be taken out or to be omitted, is at least as important. And that’s something which we don’t generally get the opportunity to highlight.

[00:26:26] Lily: That's a fantastic point, and really interesting that it's about what's not there, about the students themselves deciding what to say. Other case studies that I've come across are more about developing a skill in how to communicate with AI, what kinds of prompts to use. There's a bioscience professor who assesses his students on their ability to craft AI prompts.

So he wants to know "what did you ask the AI?" when he looks at their work. Because sometimes it's subtle, but you can change a word slightly and get a completely different output. A really simple example is when I've been working with AI to generate images.

I just love the image generation examples, because you can physically see how it's interpreting what you're saying. If I say, okay, give me some people with a darker skin colour, it will do that. But if I say, okay, give me someone who's black, it often gives me a kind of outline or a shadow, a silhouette, that's the word. And I'm like, no, I don't want a silhouette, I want diversity in the image you're generating for me.

[00:27:37] David: And I think, in general, the ability to get something good out of AI is an important skill, which, as you say, a number of people are now highlighting and drawing out. And the ability to take what has come out and make it your own is an important skill. But when we change the question, I'm less interested, less worried, about the details of how you ask a question to an AI system at this point in time, because that's going to change over time.

[00:28:20] Lily: Okay.

[00:28:21] David: But I am really interested in the fact that you can think about reflecting how changing what you ask the system changes what you get out. That’s a really interesting question. The specifics are less important than the dynamics in that context. Whereas, when you get it out, whatever is coming out, how do you make it your own and take ownership?

So the other element of changing the question, which I really like, and which comes back to the oral components we can start putting in, is this element of ownership. I've had this problem going back way before there was ChatGPT or anything else, where students would be using references and saying things which, when you dug into them, they didn't really understand or believe.

And I believe that using generative AI, we can make sure that our assessments and the questions we're asking students say, much more: make sure there is nothing there which you don't understand or don't agree with. Because you will be challenged on it. If there's an oral component, we'll pick out things to ask: "why do you believe this?"

And, at the end of the day, I don’t care whether the thoughts which you’ve presented to me, are generated by AI or generated by you writing them, as much as I care that you would stand by them. And this is the thing, that they correspond to communicating what you want to say.

And I think that's something where we can create these cycles where that becomes a priority, as we start to change the question. It's not so much about what is there. Instead of long essays, we could have a short, one-page piece followed by an oral defence of it.

This is also where we've heard about people using debates: changing the question to "can you defend this position?", and getting different people to debate different positions, as another form of assessment which can be brought in.

What I want to finish with from my side, before I let you go through any other examples or finish off, is really this idea that I believe students could have more ownership of what they hand in thanks to AI than before AI. When I say change the question in this context, I mean change the assessment process in such a way that it incentivises students to want to stand by what they've written, and to be able to argue it and debate it, or demonstrate depth of understanding based on it. That to me is really the potential. It's this beautiful irony: I think that through artificial intelligence we can, and should, be able to enhance the importance of human intelligence and bring it to the forefront. And that's really what this is all about for me.

[00:31:37] Lily: And I think that’s a great kind of message to finish on.

[00:31:41] David: Great. This has been fun. Thank you. And I look forward to our final episode, which we’re doing next week.

[00:31:48] Lily: Yes. Yeah. Oh, looking to the future, that will be next week.

[00:31:52] David: Absolutely. Thank you.

[00:31:54] Lily: Thank you very much.