

AI (for Anxiety Inc.)

Beyond the fears that it arouses, how can artificial intelligence be used in teaching?


I don’t like clowns. They don’t make me laugh and they make me uneasy. Apparently, I’m not alone. The fear of clowns is quite common, well-documented, and even has a scientific name: coulrophobia. But name aside, clowns aren’t cool.

 

My discomfort around clowns goes way back. When I was five, I saw a clown drop dead at a block party in Pointe-Calumet. It was no joke—the clown had a heart attack; first aid didn’t help, and the somewhat panicked parents rushed to gather up their children and leave the scene. But the clown=death association in my young mind doesn’t explain everything. It’s been amply fed by pop culture images of sinister, menacing clowns. And there are other factors, undoubtedly.

 

I’ve done my research, as the saying goes.

 

First, there’s the ambiguity. Clowns are supposed to make you laugh, but anyone who gets close to a clown risks humiliation, a kick in the rear, or a pie in the face (I’m a big fan of the Cirque du Soleil, but I never buy front-row tickets). Second, clowns are unpredictable. Unreadable. The heavy makeup prevents any kind of safe interaction because it makes real emotions and intentions unintelligible. Psychology suggests that this uncertainty may rightly be perceived as a danger or a threat. Finally, another explanation I find quite convincing is that clowns have a human form but are distorted: they look a lot like us, but there’s something that’s not quite right, and that little something puts clowns in an in-between space (the “uncanny valley” theory), which is a powerful source of unease. It’s the same discomfort I feel when I see a ventriloquist with their puppet, or a prosthesis that imperfectly imitates a hand, or, getting to the subject at hand, an intelligent algorithm that responds to me like a human being.

 

That’s what I want to talk about now: the discomfort surrounding artificial intelligence (AI). There has been a lot of talk about it in recent months, and it echoes what we hear about clowns. With AI, you get a sprinkling of anthropomorphism, a pinch of opacity, and a generous dash of unpredictability—which is all it takes to fuel major anxiety. Like clowns, AI has a bright side and a dark side. But compared to the circus arts, it raises far more serious issues: while it certainly has extraordinary potential for serving the common good, it also carries the risk of an uncontrollable threat to democracy, to truth, indeed to humanity itself. And it would seem that this ambivalence is well founded, insofar as it is openly expressed by the leading experts in this technology.

 

Artificial intelligence covers a vast territory, from the processing of colossal amounts of data, to decision-making by powerful algorithms driving autonomous machines, to the production of images, sounds, and texts that closely mimic human language and creativity. But it’s conversational agents such as ChatGPT that have attracted the most attention recently. These chatbots are within the reach of anyone with a computer and Internet access. They are extremely user-friendly. Their performance is at times abysmal, when the machine wanders or goes completely off track, but it remains breathtakingly fast and efficient. Essentially free of charge, for the time being, chatbots can be used in a host of everyday activities, including those engaged in by teachers, learners, and researchers.

 

The new conversational agents are the first major manifestation of the far-reaching impact of artificial intelligence, the first real, large-scale space for conversation between humans and intelligent machines to be recognized as such by their flesh-and-blood interlocutors. Conversational agents irrevocably transform the relationship between humans and knowledge—that is, access to knowledge and the production of knowledge—and can even distort knowledge itself. As such, they alter the conditions of knowledge transmission that are at the heart of the acts of learning and teaching.

 

So, teachers and learners that we are, what should we do with ChatGPT and others of its ilk in the university context? For weeks now, I’ve been trying to find an angle from which to tackle this question that is now on the minds of all those who teach. Study sessions, colloquia, and other roundtables on the subject follow one after the other. But I still haven’t found any satisfying answers to my questions.

 

If you’re a regular reader of this blog, you’ll know that I often reflect on my experience as a veteran professor: “Back in my day...,” I like to say, or “When I started my career...”. You can almost hear the rocking chair creak. I’ve seen innovations that were supposedly going to change the course of the history of teaching: the switch from chalk to overhead projector, the “internets,” laptops in the classroom, smartphones, Wikipedia, PowerPoint, open online courses, bimodal teaching, and more. It’s all well and good to say that higher education has always adapted to these new technologies, but none had the power to become more complex on its own. None had the same potential to do harm. Adopting AI in higher education raises issues that are profoundly, qualitatively different.

 

If we are to tackle AI in higher education, we need to consider two things. First, we cannot ignore AI, nor can we exclude all its applications from our institutions. Artificial intelligence is here to stay. Second, we need to rapidly identify the beneficial uses of AI and the risks it poses, and disseminate this information as widely as possible. On an individual level, professors have neither the time nor the resources to master all the consequences of the emergence of this technology in the educational environment. Collectively, sharing best practices, successes, and mistakes will allow us to reassure many anxious people and gradually tame the beast. But we will inevitably have to define the space devoted to generative AI on our campuses. To preserve its autonomy, the university community bears the primary responsibility of reconciling its values of freedom with the precautionary principle that is essential today.

 

Starting there, we can distinguish three aspects of university education that are affected by the emergence of increasingly powerful conversational agents: learning assessment, educational tools (AI for teaching), and course content (teaching AI).

 

The disruptive potential of conversational agents is measured first and foremost in the field of evaluation. There is justified concern that ChatGPT or its equivalents could be used to cheat. The content produced by these applications is (or soon will be) sufficiently accurate and coherent that it can be presented with impunity as a student’s personal work. The use of AI assistance in writing an essay is already virtually undetectable. In the wake of this, several universities, including Université de Montréal, have adopted rules that qualify such use as plagiarism if it is not explicitly authorized. Conversations in academia are already defining assessment practices that reduce the risk of this happening: having students write in class in a controlled environment, holding oral exams, having them submit methodological notes or drafts with their final essay, etc. There is no shortage of solutions, even if they are, at times, cumbersome and likely to require a little creativity on the part of the teaching staff and a little standard-setting by university administrations. As for me, like a number of observers, I remain convinced that there is only a handful of cheaters in our student body and that the risk of cheating, which is always present, is significantly reduced when we succeed in convincing students that the value of their diploma lies in learning (writing to learn) rather than in the mark awarded to them (writing to be assessed).

 

If I were still teaching, I wouldn’t be questioning the effect of AI on the integrity of exams or assignments. For me, the thorniest questions would be epistemological. They would revolve around the appropriate and secure use of conversational agents as educational tools, and, even more, around what should be added (or subtracted) from the content of my courses to take account of the potential of AI in knowledge acquisition.

 

There is a great deal at stake here, and not just because technology is evolving so quickly. If we manage to overcome the concerns that arise from the educational use of applications that are developed in a more or less hidden way by private actors, outside the university’s digital infrastructure, we still need to establish our pedagogical intentions as clearly as possible. We could, for example:

 

  • achieve a certain level of digital literacy, or even digital citizenship, to enable students in all disciplines to maintain a critical perspective on AI productions;
  • understand the biases inherent in the data used to develop algorithms;
  • acquire the skills needed to formulate queries that make the most of conversational agents and limit their hallucinations (#jobofthefuture: prompt engineering);
  • encourage the imagination required to harness the power of this resource to accelerate and scale up scientific experiments;
  • have the wisdom to identify the problems for which we would be most likely to find an ethical and responsible solution with the help of AI;
  • envisage other objectives we're not yet in a position to name, but whose relevance will become apparent with the use of these initial tools.

 

For each objective, one can imagine a good number of scenarios and corresponding pedagogical exercises, which are beginning to circulate on university websites and from which we can draw inspiration. All these pedagogical intentions have their place in the university curriculum because ChatGPT is not just the hot topic of the moment and AI is not simply a passing fad. It remains to be seen what resources we could reasonably devote to it, individually and collectively, in an environment that is already quite depleted.

 

Beyond these considerations, the emergence of AI in the academic sphere also calls for institutional and ethical reflection. Do we really want to use conversational agents to support student success? To configure virtual teaching aids and help with feedback? To prevent mental-health problems? Do we want to make or support admissions decisions with AI assistance? Does the use of AI in the classroom impose new requirements in terms of course group size, or new parameters in configuring classrooms and timetables?

 

It's all a bit dizzying. I suggest absorbing the pedagogical paradigm shift in small doses: familiarize yourself, understand, experiment, evaluate, try again. This applies to teaching staff and students alike.

 

So repeat after me: “I am not afraid of AI, I am not afraid of AI, I am not afraid of AI… but I recognize that sinister, malevolent AI does exist!” It's a little like clowns, isn't it?

 

P.S. After writing this post, and as a matter of principle, I asked ChatGPT to draft an essay equating the fear of AI with the fear of clowns. And you know what? My text is much better!

 

Daniel Jutras

 

If you’d like to continue the conversation, please drop me a line.