‘It would almost be stupid not to use ChatGPT’

Interview: philosopher Bas Haring on the provocative recent experiment that annoyed a lot of people.
Bas Haring. Photo: Ivar Pel.

Amid widespread concern among lecturers about students’ use of AI tools, public philosopher Bas Haring mostly sees opportunities: ‘Outsourcing part of the thinking process to AI shouldn’t be prohibited.’

Bas Haring annoyed a lot of people with a provocative recent experiment. For one of his students last year, the philosopher and professor of public understanding of science delegated his responsibilities as a thesis supervisor to AI. The student discussed her progress not with Haring, but with ChatGPT – and the results were surprisingly positive.

While Haring may be excited about the outcome of his experiment, not everyone shares his enthusiasm. Some have called it unethical, irresponsible, unimaginative and even disgusting. It has also been suggested that this could provide populists with an excuse to further slash education budgets.

Perspective

Is it a bad idea to hand over some parts of student supervision to AI? And are graduation theses even worth anything anymore when so much of the thinking is done by AI?

When we ask Haring about his experiment, he places it in perspective. ‘Twenty-five years ago, there was another new invention: the internet. The sentiment back then was exactly the same as it is today. At the time, my lectures focused on the possibilities of this new technology. How is this internet thing changing our thinking? It captured the public’s imagination, just like AI has done today.’

We need to think carefully about what we want to teach students

‘The internet made knowledge universally available, and now AI is doing the same for thinking. Do we keep doing all our own thinking, or can we outsource certain tasks to the computer because it’s simply better at them?’

ChatGPT can do it

According to the latest figures from Statistics Netherlands, almost a quarter of Dutch people use AI software. Among 18- to 25-year-olds, the proportion rises to almost half. It should be noted, however, that these figures are more than a year old, and that they do not distinguish between students and other young adults.

‘There’s no avoiding it anymore’, says Alex Reuneker, a linguistics lecturer at Leiden University. ‘You’re not allowed to use AI to write your thesis, but how can you guarantee that something was actually written by a student? We can’t always tell if a text was written by AI, so some things are bound to slip through.’

‘I was seeing more and more student reports that were clearly largely written by AI’, says Meryem Janse, a former lecturer at Saxion University of Applied Sciences. ‘That was also one of the reasons I quit teaching. I noticed that educational institutions weren’t flexible enough to adapt to AI, even though they should be. Otherwise, what’s a degree even worth anymore?’

And now thinking is becoming less relevant as well

For many students, AI has become a fixture in their daily lives. ‘I use AI tools for almost everything’, says ‘Mark’, a Master’s student in biomolecular sciences who asked to be identified by a pseudonym. ‘Officially, you’re not allowed to use AI for writing, but I still do it, and so does everyone I know.’

‘Milou’ (also not her real name), a Master’s student in healthcare management, uses AI tools several times a week, as a search engine and to help her write emails and format references. ‘When I have to do practical tasks that take a lot of time, it’s easy to just go: ChatGPT can do it.’

Speculative

Meanwhile, Bas Haring has noticed a dramatic increase in the quality of theses since the introduction of ChatGPT. ‘That’s because almost all students use AI. Educational institutions just don’t know how to deal with that yet.’

In 1988, Haring was one of the first students of artificial intelligence in the Netherlands. ‘At the time, the field was still highly speculative. The things AI is capable of today seemed impossible back then.’

What does the rise of AI mean for higher education?

‘We need to think carefully about what we want to teach students. Of course students are going to use AI. ChatGPT is available 24/7, so it would almost be stupid not to use it. So how do we ensure that students keep learning?’

‘I always encourage my students to watch a lot of films, read literature and visit museums in preparation for their thesis. Some of them get frustrated because they feel that it’s too time-consuming, and you could argue that it’s inefficient. You can also just talk to ChatGPT, which will give you lots of suggestions and ideas. I don’t know if that’s necessarily the wrong way to go about it.’

Isn’t there value in coming up with your own ideas?

‘Sure, but as a supervisor I also give suggestions. I’ll say things like, “Maybe you should look at it this way.” And ChatGPT happens to be very good at that. The process is not exactly the same: AI is faster and offers a wider range of suggestions. It can help you think things through.’

Surely AI doesn’t inspire much thought if it’s writing your entire thesis for you?

‘No, but if students use it the right way, AI could push them in the right direction. Like a critical supervisor who provides suggestions without spoon-feeding you.’

Does the responsibility for that lie with the students themselves?

‘At the moment, yes. In a few years, I imagine we’ll have an AI tool for education that can handle these kinds of supervisory tasks without spelling everything out for students. That’s also how I use AI myself. When I’m writing a text, I’ll ask things like: how can I rephrase this, what would be an argument against this, what other arguments can I use, am I missing something? That always leads to interesting insights.’

You trusted one of your own students to use AI responsibly, but should we expect the same from every other student?

‘No, I don’t think so. I’m not arguing for thesis supervision to be completely automated. The purpose of my experiment was to suggest that we should think carefully about what we can teach students, and about what tasks we might feel comfortable outsourcing. I should also point out that, in the long run, it might not be sensible to use a commercial product like ChatGPT for this purpose.’

You see parallels between the advent of AI and the early days of the internet. The idea then was that there should be less focus on rote memorisation, and that students should mainly learn critical thinking skills.

‘That wasn’t a bad idea, was it? Everyone has an encyclopaedia at their fingertips these days, which has made knowledge somewhat less relevant. And now thinking is becoming less relevant as well.’

Is that only true for ‘thinking’ or does it also apply to ‘critical thinking’?

‘Critical thinking is still relevant, but some reasoning tasks may become less important.’

What’s the difference?

‘Critical thinking is more inquisitive, more precise. It’s difficult to articulate exactly what the difference is between academic reflection and figuring out how to structure a text in a way that makes sense, but they’re not the same.’

So what will be the primary task of education moving forward?

‘I wouldn’t be surprised if the interpersonal element became much more important in education than the cerebral element. Sitting next to someone, looking someone in the eye, relating to each other – the social aspect. I spoke to GPs about the role of AI in their work, and they told me that a GP has three main tasks: first, they have to make an initial diagnosis, second, they have to treat the patient or refer them to a specialist, and third, they have to provide a listening ear. It’s quite plausible that those first two ‘cerebral’ tasks will become less important, as technology takes over parts of those processes. But we’ll still be doing the ‘human’ work ourselves.’

Open letter

In reality, students often choose the path of least resistance, and many lecturers are concerned. Frans Prins, associate professor and director of educational consultancy and training at Utrecht University: ‘Students can now outsource certain routine tasks, but obviously we have to monitor the quality of education. If we’re not careful, students could miss out on learning experiences.’

Alex Reuneker echoes this sentiment: ‘We need to teach students about the risks of using AI, and how to maintain a critical attitude when interacting with this technology.’

Some lecturers believe that AI has no place in the classroom at all and are advocating for a complete ban. In late June, 500 researchers and professors signed an open letter calling for greater scrutiny of the use of AI in education. ‘The use of AI demonstrably hinders student learning and impairs critical thinking skills’, they wrote.

What are your thoughts on this proposed AI ban?

‘I think it’s an utterly nonsensical suggestion. What do you think these students will do when they get home? Of course they’re going to use AI to write their thesis. If it’s true that students are no longer developing the same thinking skills as they once did – which seems very plausible to me – there’s no point in banning AI. If that’s really the case, we need to think harder about how we are going to use this tool.’

‘We don’t just hand children calculators before teaching them arithmetic. But still, people today are less skilled at mental arithmetic than they were a hundred years ago. Maybe this is a similar situation. We don’t want students to become structurally less intelligent as a result of AI, but we shouldn’t try to ban the technology completely. That wouldn’t even be possible.’

But at the moment, degree programmes don’t have a handle on the situation.

‘This is an unusual time. Everyone is struggling with this new technology, especially in the humanities, where students have to write a lot.’ AI software is, after all, very good at producing texts.

The question then becomes what a degree is even worth anymore.

‘Maybe some unqualified students will manage to slip through the cracks and get a degree. It’s true that universities are concerned about that. Should we make students turn in handwritten assignments? Should we make them take exams without being able to rely on technology? Perhaps we should do that at the beginning of programmes, and then allow more AI use later on.’

What about the thesis?

‘You don’t want a thesis that’s fully written by AI, but when a student has to defend their thesis it becomes clear soon enough whether they really are the ‘owner’ of the work. And as a supervisor, you also speak to students at various points throughout the writing process. Those conversations also allow you to gauge whether they fully understand their subject. But outsourcing part of the thinking process shouldn’t be prohibited. That’s why I thought this would be an interesting – and perhaps somewhat provocative – experiment.’

‘Writing forces you to think things through carefully, but that may be more useful for some people than others. I had a student who was interested in litter, who would ride around on her bicycle and take pictures. She learned more from that than from the writing process.’

Do you think the graduation thesis could become a thing of the past if it no longer accurately reflects students’ abilities?

‘Academia is a conservative world. Educational institutions often want everything to stay the same, so I don’t see the graduation thesis disappearing anytime soon. Still, something has to change. The question is what. I’d suggest we talk to students and ask them about how we should be using AI, because they know the technology – they use it every day. I think a lot of solutions are staring us in the face.’

No AI oversight in higher education for now

Responsibility for the quality of education does not lie solely with degree programmes. There are also legal requirements and guidelines, but these are lagging behind the latest developments and do not address AI. Every six years, degree programmes in higher education are subject to certification by the Accreditation Organisation of the Netherlands and Flanders (NVAO).

Enquiries have revealed that this quality watchdog is not yet concerned with the use of AI in education. During the accreditation process, a panel of experts assesses the programme in question based on a variety of different criteria, following established standards. ‘AI is not mentioned in these standards’, according to a spokesperson. Panellists do not receive training in assessing AI-related aspects of education.

The Inspectorate of Education supervises education at a systemic level. It does not investigate individual degree programmes. In higher education, inspectors look at social safety, equal opportunities and quality assurance systems, among other things. The Inspectorate is also studying ‘digital resilience and AI in education’ to answer the following research question: to what extent are executive boards ensuring digital resilience, and what opportunities and risks do they see when it comes to AI in education? The results of this study have not yet been published.

KNAW

Research universities should teach students about scientific integrity, but what about the role of AI? The debate about this topic is still ongoing, and the Netherlands Code of Conduct for Research Integrity does not yet offer any guidance.

The Royal Netherlands Academy of Arts and Sciences (KNAW) is currently helping to revise the Code of Conduct. In response to questions from HOP, KNAW notes that it is part of a committee ‘tasked with determining how AI can be incorporated into the Code’, but adds that there is ‘little to report on this at present’.
