Universities have spent two years see-sawing between fear and fascination with generative artificial intelligence (GenAI). Most students, meanwhile, just want to know: what’s allowed, what helps me learn, and how do I stay out of trouble?
Our answer was to stop treating AI as a threat to be detected and start treating it as a literacy to be taught. The AI Collaboration Toolkit we developed is a practical, student-first approach that frames AI as a collaborator with clear boundaries, ethical guard rails and inclusive scaffolds that close, rather than widen, attainment gaps.
The toolkit opens with a simple promise: AI is here to augment your work, not replace it. We borrow from the late Clayton Christensen and his Jobs-to-Be-Done theory: students define the specific job to be done, such as brainstorming, structuring, revising or translating, “hire” the right AI for that job, and retain accountability for judgement and originality. This framing gives colleagues a shared language for appropriate use and helps students move beyond the unhelpful binary of “ban it or embrace it”. It’s a discipline of purpose: match the tool to the task and keep the human decisive.
At the heart of the toolkit is a Typology of Student AI Use: a one-page matrix that lists common “jobs”, typical tools, the human role, the AI role, interaction patterns, a Red-Amber-Green (RAG) risk rating and whether a short declaration is required.
For example, brainstorming ideas or seeking formative feedback on a draft sit in the green zone, writing a full assignment with AI sits in the red, and tasks such as paraphrasing complex academic texts or auto-generating references sit in the amber middle, permitted only with stringent verification and a declaration. The typology makes the boundary between support and substitution legible for students and markers alike.
Crucially, the typology spotlights inclusive use-cases that reduce language and confidence barriers. We explicitly name fluency improvement and translation of one’s own ideas as low-risk supports when students write in a second language, as long as authorship and intent remain the student’s and the final text is critically reviewed. We also encourage process-level help for planning, structuring and clarifying arguments, particularly for commuter students, carers and those working long hours who may have less access to one-to-one support. Clarity, here, is an equity intervention: it tells uncertain students what “good help” looks like and dignifies the effort of getting there.
Ethical literacy is strengthened through two lightweight practices. First, an AI Reflection Journal prompts students to record the task, how AI helped, what they learned and how they ensured integrity. Reflection makes the human thinking visible again and turns AI from a magic box into a tool to be interrogated.
Second, a Final Self-Check normalises good habits before submission: “Have I only used AI to support, not replace, my thinking?”, “Can I explain how I created my work if asked?” and “Have I verified every citation?” Both tools nudge students towards deep thinking rather than box-ticking compliance.
Defining misconduct
We also cut through policy fog with a plain-English section on what counts as AI misconduct. The toolkit distinguishes:
- AI commissioning: outsourcing authorship
- AI falsification: invented data or citations
- Unauthorised use: using AI where it’s prohibited
- Plagiarism: presenting AI’s words as one’s own without attribution
Short student scenarios make these categories concrete and proportionate, from finishing an essay with AI after poor time management (commissioning) to submitting fabricated experiment data (falsification). For staff, these vignettes support consistent decisions. For students, they demystify the line between help and harm.
Christensen also warned that innovations often miss the people who need them most. His work on disruptive innovation urges us to design for the underserved, who in this case are the non-consumers of traditional support, by offering simpler, more accessible ways to make progress. The toolkit maps AI use to formative stages: idea generation, structuring, language clarity and feedback, so that students can readily recover time for thinking rather than merely typing.
We compare permitted AI feedback to what a writing centre or peer reviewer might offer, emphasising that the student still revises, decides and owns the final argument. That equivalence helps destigmatise legitimate support, while keeping the assessment of learning anchored in human understanding.
What might this look like in practice? Consider Amara, a first-year international student juggling a part-time job. She “hires” AI to brainstorm three angles for a case study, selects two promising directions and drafts an outline herself. She then asks the tool for formative feedback on structure and clarity, keeps a reflection journal noting which suggestions she accepted and why, and verifies every reference manually.
Before submission she completes the self-check and includes a 70-word declaration: the job she hired AI to do, how she used it, what she verified and what remains her own. When challenged orally to explain her argument, she can, because she built it herself. The technology supported her progress; it didn’t impersonate it.
Our aim for the use of AI is disciplined, transparent pedagogy. The toolkit asks staff to be explicit about what learning they are assessing, and it asks students to show their process. In Christensen’s terms, it helps learners make progress in the circumstances of their lives. That should be the measure by which we judge any AI policy or practice.
For leaders weighing whether to adopt something similar, start with three moves: give students a shared language for tasks (the typology), give them reflective prompts that make thinking visible (the journal and self-check), and give them permission to declare. Then align assessment with the capabilities you truly value, and train staff and students together using the same artefacts. The rest, particularly integrity, belonging and better work, will tend to follow.
When the rules are clear, students stop asking “Can I use AI?” and start asking a better question: “What job am I hiring AI to do and how will I demonstrate my learning?” That is the shift our sector needs: from policing to partnering, from opacity to openness, from fear to capability. We do not pretend AI isn’t here; nor do we give it the pen. We teach students to collaborate with it in ways that are ethical, transparent and inclusive, so that trust, judgement and genuine learning can flourish.
Lucy Gill-Simmen is vice-dean for education and student experience in the School of Business and Management and Will Shüler is vice-dean of education and student experience for the School of Performing and Digital Arts, both at Royal Holloway, University of London.