GenAI's potential to speed up and scale research is vast. Large language models (LLMs) and other generative AI tools have opened up ways of working that would have been unthinkable to academics only a few years ago, paving the way for new discoveries. And although ethical concerns remain over how exactly these AI tools should be used in research, academics cannot ignore GenAI's ability to rapidly trawl huge data sets, uncover hidden patterns, make connections and generate responses to human prompts – in a language we can all understand. When carefully trained, GenAI can predict illness, design drugs and identify sustainable technologies. GenAI can quickly carry out tasks that would take individual researchers many hours, days or even weeks, significantly easing administrative loads and freeing up valuable time for deeper thinking.
But for all of that to happen, researchers need to know how and when to use appropriate GenAI tools to support their scholarly work, and how to ensure these powerful machines do not compromise academic integrity and research quality. This collection of resources offers practical guidance for informed, responsible use of GenAI as a research assistant.
Understanding the complementary strengths of humans and AI in research
Sorting through mountains of data and years of literature can be overwhelming at best, and impossible at worst, for a single human brain. This is where GenAI models, with their ability to sift through vast data sets in seconds, can carry out tasks that researchers cannot. But GenAI does not understand broader contexts and human values, and it cannot reason or critique through logic. It cannot make the creative, conceptual leaps that have led to some of the greatest advances in human understanding. GenAI relies on existing knowledge, learning from statistical patterns – in often limited data sets – to provide its answers. Together, researchers and GenAI tools can tackle problems that neither could solve alone. These resources offer advice on how to collaborate effectively and recognise where human input is needed.
Harnessing AI to expand scientific discovery: From drug design to climate modelling, artificial intelligence can process data at scales far beyond human capacity. Hongliang Xin from Virginia Tech argues that the future of research lies in harnessing agentic AI through human-guided discovery.
A four-part framework to categorise AI tools for research: How to use AI tools to assist in research, for each phase of the process, outlined by Eric Tsui from The Hong Kong Polytechnic University.
Only a human will do: when to eschew AI in teaching and research: Artificial intelligence tools have streamlined processes and accelerated innovations – efficiencies not lost on higher education. But at times, we need to prioritise human judgement and involvement, as Qin Zhu of Virginia Tech explains.
What AI tools to use – and when – in your research
Effective use of GenAI in research requires more than simply inputting a prompt and expecting it to do the work; it means employing it as a collaborator to augment specific tasks such as literature reviews or citation management. To do this well requires the coordination of multiple specialised tools. Dedicated GenAI "research agents" offer unprecedented capabilities to access and analyse quantitative and qualitative data. They can visualise textual or numerical data, automatically extract and synthesise information from academic papers, and much more, as these resources explain.
Beyond search – how AI agents are redefining research: AI-augmented research can speed up processes such as literature review and data synthesis. Ali Shiri from the University of Alberta looks at the functionality of three AI agents.
Be the conductor of your own GenAI orchestra for academic writing: Instead of using a single GenAI tool to create a one-note research paper, why not tune up an orchestra of machine assistants? Read advice from Aditi Jhaveri at Hong Kong University of Science and Technology.
Enhance GenAI collaboration for future-proof research support: Embrace the possibilities of generative AI in research support with a strategic mindset. Ryan Henderson and Tse-Hsiang Chen from the University Medical Center Utrecht and Ayla Kruis from Utrecht University share their practical framework for successful human-AI collaboration.
An AI toolkit for all aspects of academic life: Harness the power of technology to reshape the tasks that make up your day. Urbi Ghosh from Colorado State University Global lists helpful AI tools she uses.
The risks of over-reliance on or misuse of AI in research
Concerns over the safety of sensitive data held in GenAI tools make many researchers hesitant to use them. And the way in which LLMs and other GenAI tools learn from existing information means their output can lack diversity, replicate inbuilt biases, reinforce misconceptions and enable gaps in knowledge to go unchecked. Read about how to ensure research is accurate and how data-handling processes can protect those involved.
From model collapse to citation collapse – risks of over-reliance on AI in the academy: The way GenAI surfaces sources for literature reviews risks exacerbating the citation Matthew effect, writes David Joyner of Georgia Tech’s Center for 21st Century Universities. Here, he offers ways to prevent AI-driven search from blunting the impact of new research.
The GenAI awakening – navigating the new frontier of research support: As generative AI gains traction in the world of research, Ryan Henderson from the University Medical Center Utrecht and Ayla Kruis from Utrecht University shed light on using it responsibly in research support.
Creative and practical ways to use GenAI to enhance your research work
Researchers can use GenAI to refine research questions, validate findings and highlight flaws in their work. In interdisciplinary projects, AI tools can facilitate effective data sharing and identify commonalities and differences across disciplines. Some researchers are even using AI to assign tasks to team members based on their skills and expertise, provide real-time feedback and guidance and track progress, essentially turning these tools into research managers. Learn how in these resources.
Three ways to leverage ChatGPT and other generative AI in research: A guide to three key uses of generative AI tools such as ChatGPT in developing and enhancing research, by Daswin De Silva and Mona El-Ayoubi from La Trobe University.
Use GenAI to slow down and reflect more deeply: University staff are under pressure to produce more with less. Sam Illingworth from Edinburgh Napier University asks: what if, instead of using GenAI to save time, we took a slower approach?
Three reasons to harness AI for interdisciplinary collaboration: New advances in artificial intelligence could be used to communicate across disciplines and share knowledge more seamlessly. Raymond Chan of Hong Kong Baptist University explains how.
Could AI manage your research project?: The possibilities for using artificial intelligence in research know no bounds – it could even end up in a management role. Henry Sauermann and Maximilian Köhler of ESMT Berlin explain what it could do.
How to design effective policies and safeguards for responsible AI use in research
Clear guidance and open conversations encouraging transparency and the critique of GenAI use lay the foundations for ethical research. Learn what institutions and research bodies should be doing to develop frameworks that promote integrity in research and educate academics on the strengths and limitations of AI tools.
Ethical AI: safeguards for the research revolution: Artificial intelligence can accelerate discovery in ways humans alone cannot. For Hongliang Xin at Virginia Tech, the key is pairing AI's power with ethical safeguards, institutional governance and responsible oversight.
How to create a higher education AI policy: A successful university AI policy guides internal innovation and usage, directs resources and identifies key contacts for emergent needs. Eric Scott Sembrat of Georgia Tech’s Center for 21st Century Universities outlines the steps and considerations for writing guidelines.
Why universities must lead on honest AI disclosure and how a new tool can help: Universities must bridge the gap between calls for GenAI transparency and workable standards. A new taxonomy designed by academics at Borys Grinchenko Kyiv Metropolitan University and Berdyansk State Pedagogical University offers a practical system to declare AI’s role in research openly and responsibly.
Developing a GenAI policy for research and innovation: Establishing a framework to guide AI use in research is vital for ensuring institutions are and remain fully compliant. Read advice from the University of East Anglia’s Helen Brownlee and Tracy Moulton.
The power of GenAI to take research beyond the lab
GenAI can help researchers move their knowledge beyond the study or lab, to apply it in real-world settings and engage the public in their work. Read about how researchers have trained AI tools in the art of traditional Cantonese porcelain painting, while others are using AI to help identify research with strong commercial potential.
Harness the power of AI to preserve endangered art forms: Researchers breathed new life into Cantonese porcelain painting techniques using AI, equipping a new generation with traditional skills. Henry Duh from the Hong Kong Polytechnic University explains how they did it.
GenAI as a research assistant: transforming start-up due diligence: Universities are not just centres of innovation, they are also platforms for promoting the commercialisation of research. As a "research assistant", GenAI can open the door to "under the radar" start-ups that human evaluators might miss, says Tiam Lin Sze from Singapore Management University.
Thank you to all Campus contributors who shared their expertise in this guide.
If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.