Generative AI in higher education is a source of both fear and hype. Some predict the end of memory, others a revolution in personalised learning. My two-year classroom experiment points to a more modest reality: artificial intelligence (AI) changes some skills, leaves others untouched and forces us to rethink the balance between them.
The way forward, then, is to test, not speculate. My results may not match yours, and that is precisely the point. Here are six simple activities any teacher can use to see what AI really does in their own classroom.
1. Run group experiments
Divide your students into three seminar groups:
- One barred from using AI
- One allowed to use it without further instruction
- One trained in structured prompting and critique
Compare their results using blind grading of recall tests, essays and presentations. I ran this design for more than two years in a course dedicated to the law of AI. The outcome: AI had no effect on memory. Students allowed to use AI performed just as well on multiple-choice recall tests as those who were barred. Rote knowledge is sticky. But AI did affect reasoning and writing: students trained in prompting and critical evaluation produced stronger arguments and higher-quality legal writing. You may find different results in your discipline, but you will not know until you test.
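For those who want the mechanics of the design, here is a minimal sketch in Python that randomly assigns a roster to the three conditions and generates anonymised codes for blind grading. The `roster.csv` file name and the fixed seed are illustrative assumptions, not part of my own setup.

```python
# Sketch: random assignment to three conditions plus anonymised grading codes.
import csv
import random
import uuid

CONDITIONS = ["no AI", "AI without instruction", "AI with structured training"]

with open("roster.csv") as f:  # hypothetical one-column list of student names
    students = [row[0] for row in csv.reader(f) if row]

random.seed(42)  # fixed seed so the assignment is reproducible
random.shuffle(students)

# Round-robin after shuffling gives three groups of near-equal size.
assignments = [
    {
        "name": name,
        "condition": CONDITIONS[i % len(CONDITIONS)],
        "grading_code": uuid.uuid4().hex[:8],  # graders see only this code
    }
    for i, name in enumerate(students)
]

# Keep the name-to-code mapping private; hand graders work labelled by code alone.
for row in assignments:
    print(row)
```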
2. Build an AI research assistant
Ask students to build an AI-powered “daily digest” of news or scholarship relevant to the course. Each student (or group) selects a foundation model and crafts prompts to gather relevant material, feeding the model the course manual so it knows what counts as relevant. At the beginning of each lecture, compare the outputs: which sources did the AI privilege, which arguments did it miss, how did the tone differ across models? Then turn the comparison back to prompting: which formulations produced clearer, more balanced digests, and why?
The aim is not only to practise research but to collectively generate a living record of readings produced week after week by the class. This anchors abstract doctrine in live debates and turns research into a collaborative, iterative process where both prompt quality and model bias become part of the lesson.
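As a starting point, here is a minimal sketch of what a student’s digest script might look like. It assumes the OpenAI Python SDK with an API key in the environment; the file name, model and prompt wording are placeholders for students to adapt. Note that a plain chat completion cannot browse the web, so genuinely current items require a model or tool with web access; the sketch shows the prompting structure only.

```python
# Illustrative sketch of a "daily digest" prompt, not a prescribed setup.
from openai import OpenAI

client = OpenAI()

# Ground the model in the course manual so it knows what counts as relevant.
with open("course_manual.txt") as f:
    course_manual = f.read()

prompt = (
    "You are a research assistant for a university course. Using the course "
    "manual below, produce today's digest: three to five news items or papers "
    "relevant to the course, each with a source, a two-sentence summary and a "
    "line on why it matters.\n\nCOURSE MANUAL:\n" + course_manual
)

response = client.chat.completions.create(
    model="gpt-4o",  # each student or group picks their own foundation model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```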
3. Compare outputs
Assign students a dense text such as a judicial opinion, an academic article or a data report. First, have them produce their own summary in class, without AI. Then ask several foundation models (at least two) to do the same and compare the outputs in class. In my experience, OpenAI’s GPT-4o often generated longer outputs than Anthropic’s Claude Sonnet 4, which sparked interesting discussions about whether clients (and courts) value more detailed or more concise answers. Finally, compare the machine summaries with the student versions. The lesson might be that AI can speed up comprehension but requires careful verification.
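A side-by-side comparison can be scripted in a few lines. The sketch below sends the same summarisation prompt to two providers and prints word counts to make the length differences concrete; it assumes the openai and anthropic Python SDKs with API keys in the environment, and the file name and model identifiers are examples that will date quickly.

```python
# Illustrative sketch: one summarisation prompt sent to two foundation models.
from openai import OpenAI
import anthropic

with open("judicial_opinion.txt") as f:  # hypothetical dense source text
    text = f.read()
prompt = f"Summarise the following text for a legal audience:\n\n{text}"

gpt_summary = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

claude_summary = anthropic.Anthropic().messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

# Word counts make the length differences concrete for class discussion.
for name, summary in [("GPT-4o", gpt_summary), ("Claude", claude_summary)]:
    print(f"--- {name} ({len(summary.split())} words) ---\n{summary}\n")
```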
4. Turn AI into a Socratic partner
Instead of being the sole interrogator, let AI play the role of tutor, client or judge. Have students use AI to question them, simulate cross-examination or push back on weak arguments. New “study modes” now built into several foundation models make this kind of tutoring easy to set up. Professors with more technical skills can go further: design their own GPTs or fine-tune a model on course content, then let students interact with it directly. The point is the habit this creates: students learn that questioning a machine is part of learning to think like a professional.
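For the more hands-on route, a minimal sketch of a cross-examination loop appears below, in which the model asks the questions rather than answering them. It assumes the OpenAI Python SDK; the system prompt and model name are placeholders, not a prescribed setup.

```python
# Illustrative sketch of a Socratic cross-examination loop.
from openai import OpenAI

client = OpenAI()
history = [{
    "role": "system",
    "content": (
        "You are opposing counsel cross-examining a law student. Ask one "
        "pointed question at a time about the student's argument, press on "
        "weak points and never supply the answer yourself."
    ),
}]

print("State your argument, then answer the questions. Type 'quit' to stop.")
while (student_turn := input("> ")) != "quit":
    history.append({"role": "user", "content": student_turn})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)
```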
5. Ask them how it feels
After an AI-assisted task, dedicate time to structured reflection. A quick survey or a round table will do. Did outsourcing part of the work feel empowering or did it feel wrong? Was there a difference between using AI for core ideas and using it for grammar or citations? In my classes, the split was clear. Students were uneasy when AI proposed central arguments. They felt displaced. The same students welcomed grammar fixes or reference clean-up because those interventions removed friction without threatening ownership.
Track these reactions over time and compare them across assignments. Students become more conscious of when they want to rely on the machine and when they prefer to trust themselves. That metacognition (knowing the boundaries of one’s own learning process) is a learning outcome in itself.
6. Ban uniform bans
As far as I can see from my experiment, AI will neither rescue nor ruin higher education. But it will change its contours. If so, the question is not whether to ban or prescribe, but how to give professors the freedom to discover what works best. Used badly, AI fuels shortcuts and laziness. Used carefully, it sharpens judgment and adaptability. The task now is to sort good uses from bad ones. That calls for classrooms to become laboratories, not battlefields of speculation.
The activities I outline are cheap, fast and adaptable. I provide more detail on their parameters in a report for those who want to adapt them. Importantly, these experiments will be possible only if those of us in charge of curricula give faculty the space to run them. Forcing faculty into uniform bans or mandated techniques cuts against academic freedom. Pedagogy thrives on ownership, especially when the target is moving as quickly as generative AI. Higher education institutions will thrive if teachers are free to test and share what they learn from these experiments.
Thibault Schrepel is an associate professor at the Vrije Universiteit Amsterdam.