Assessment tasks that support human skills

By Eliza Compton, 29 September 2025
Assignments that focus on exploration, analysis and authenticity offer a road map for university assessment that incorporates AI while retaining its rigour and human elements

The ban-versus-detect debate around the use of generative artificial intelligence is beginning to feel dated. Artificial intelligence (AI) is no longer a novelty, and it is fading as a threat to be policed in university classrooms. Instead, it has become part of many students’ and academics’ everyday toolkit. The challenge ahead, then, is not containing this technology but reinventing assessment to account for its capabilities while using it to nurture human skills.

This article outlines 10 approaches to assessment, organised into four categories: rethinking traditional formats; AI as a partner in assessment; authentic and experiential learning; and assessing human capabilities. Each category includes assessment tasks taken from project management education.

Rethinking traditional formats

1. From essay to exploration 

When ChatGPT can generate a competent academic essay in seconds, the essay’s dominance as an assessment task looks less secure. The future lies in moving from essays as knowledge reproduction to assessments that emphasise exploration and curation. Instead of asking students to write about a topic, challenge them to use artificial intelligence to explore multiple perspectives, compare outputs and critically evaluate what emerges.

Example: A management student asks an AI tool to generate several risk plans, then critiques the AI’s assumptions and identifies missing risks.

2. The rise of the dialogic exam

Conversational examinations – interactive, real-time assessments where students engage in dialogue to demonstrate understanding – are gaining new relevance as online tests become less reliable measures of authentic student ability because of AI assistance. In these in-person assessments, students defend their analysis in real time, respond to counter-arguments or reflect on AI-generated responses. Examiners use AI to create dynamic case scenarios and counter-arguments, but it’s the student’s ability to respond, reason and justify that is being assessed.

While vivas have always been oral, AI-enhanced interactivity creates unpredictable challenges that better prepare students for dynamic real-world decision-making.

Example: In an information system viva, students explain their methodology choice while the examiner uses AI to generate counter-arguments they must address on the spot. In a business ethics course, students might defend moral decisions while AI generates competing stakeholder perspectives, testing real-time ethical reasoning rather than memorised frameworks.

Using AI as a partner in assessment

3. AI as peer reviewer

AI can act as a “study partner”, giving students quick, formative feedback on drafts rather than making them wait weeks for lecturer comments. To guard against hallucinations, students are required to cross-check AI suggestions against set marking rubrics or class resources. This lets lecturers spend their time on deeper critique, while students improve through practice.

Example: A student writing a business case asks AI to review grammar, clarity and structure. They then check the advice against the unit rubric before revising their work.

4. Assessment as co-design

AI can also become part of the assessed task itself. Students use it to generate draft ideas or reports, but the real mark comes from how well they refine and add human insight. This is not about AI creating the whole assessment; rather, students are asked to demonstrate their ability to critique, adapt and personalise AI output.

Example: Students ask an AI tool for a draft communications plan. They then adjust it to suit multicultural teams and their organisation’s culture, showing how they add value beyond the AI-generated draft.

5. AI as an assessment designer

Lecturers can use AI to draft problem sets, case studies or rubrics at scale. The efficiency comes from AI producing large volumes of varied material quickly, while lecturers act as quality controllers rather than sole creators. Fairness is ensured by spot-checking samples, applying clear criteria and using automated filters to flag bias or unrealistic outputs. This shifts the lecturer’s role from writing every item to validating and refining selected ones.

Example: An AI tool generates stakeholder conflict scenarios. The lecturer then reviews a representative set, tweaks wording for realism and reuses approved templates across multiple students. This makes the process manageable without the need to review every scenario individually.

Authentic and experiential learning

6. Avatars in the classroom

AI avatars can simulate professional roles for authentic assessment. The avatars are created using off-the-shelf platforms (such as an AI chatbot or VR tools), so lecturers do not need to build the technology themselves. Instead, they design the scenarios, while the software provides the character interactions. This makes it feasible without major tech support. The assessment focuses not just on the project content, but on how students communicate, negotiate and adapt under pressure.

Example: Students role-play scope negotiations with an AI-generated avatar acting as a demanding sponsor. Marks are based on the student’s interaction style, clarity and conflict-management approach, not just the technical solution.

7. Scaling personalisation

AI can also generate tailored scenarios for each student, reducing plagiarism and making tasks feel more relevant. Lecturers control the assessment design by setting consistent learning outcomes, while AI provides variation in the context.

Example: One student works with a dataset about hospital construction, another with a software roll-out. Both tasks assess the same scheduling techniques, but the context differs to maintain authenticity and engagement.

Assessing human capabilities

8. Measuring the unmeasurable 

AI cannot replicate human judgement, values or ethical reasoning. Assessment must increasingly foreground these dimensions, pushing students towards deeper critical and moral reasoning.

Example: Students critique AI-generated procurement plans that favour speed over worker safety and propose human-centred alternatives.

9. Towards post-assessment pedagogy 

Thanks to the prevalence of AI, evaluation is shifting from product-based to process-based assessment. Instead of focusing solely on final outputs, assess how students use AI, reflect on its limitations and build thinking iteratively through reflection journals and process portfolios.

Example: Students submit portfolios showing AI use across phases of an IT project, with reflections on where the tools were helpful, misleading or ethically problematic.

10. From policing to partnership 

Reframe academic integrity from “catching” misuse to building cultures of trust and partnership. Clear guidelines should establish responsible boundaries, recognising AI as part of professional life and university learning.

Example: Students submit “AI use declarations” explaining what tools they used and why, turning integrity into a learning exercise.

A catalyst for reinvention

Generative AI is not the end of assessment; it is a catalyst for reinvention. The 10 directions outlined here suggest a road map for higher education to remain relevant, rigorous and humane in a world where machines can write and simulate thought with ease. What matters most now is not what AI can do, but what students can do with it: apply judgement, exercise creativity and demonstrate professional skills in authentic contexts. Universities must embrace experimentation and build assessments that develop the uniquely human capabilities students need in an AI-rich future.

Acknowledgement: This article has been prepared with the assistance of AI.

Afrooz Purarjomandlangrudi is a lecturer in the College of Art, Business, Law, Education and IT, and Amir Ghapanchi is an associate professor and course chair in the College of Sport, Health and Engineering, both at Victoria University, Australia. 

