AI report

Integrating GenAI into higher education

Dónal Mulligan, lecturer and researcher in media and communications at Dublin City University, outlines the challenges and opportunities generative AI (GenAI) presents in education.

Dónal Mulligan highlights the “huge appetite for innovation” in education and notes that this is a sector where “AI companies are really promising a lot”. He cautions, however, that “these are promises we have heard before with previous technologies so there is some reticence to dive in immediately”.

He believes AI could have positive impacts on personalised tutorship, access to education, and academic research productivity. Describing truly effective personalised learning as “a holy grail” in academia, Mulligan outlines Bloom’s ‘2 sigma problem’.

American educational psychologist Benjamin Bloom found that students with a personal tutor achieve educational outcomes two standard deviations better than those without: on a test where the class average is 50 with a standard deviation of 10, for example, the tutored group would average around 70.

Mulligan explains that this remains a problem because formal institutional education, at primary, secondary, and third level, recognises how transformative sustained and responsive individual tutorship could be for learners, but lacks the capacity to provide it at scale. Class sizes and student-teacher ratios make this unfeasible, and technology has often been seen as a way to address the gap.

He indicates that AI could potentially address this problem, but that the existing possibilities for doing so are “currently very overstated”. Current GenAI tools are designed mainly for customer service interactions, typified by a tendency to be overly agreeable and to reinforce the user’s opinions and views rather than confront or expand them. Many current systems also demonstrate substantial accuracy issues related to so-called “hallucination”. This makes them a poor fit for education.

Mulligan also outlines the ‘lost Einsteins problem’ to demonstrate how GenAI could improve access to education. This is the idea that many potential contributors to knowledge cannot effectively access the education that would allow them to contribute. He describes how GenAI is already being used to help people cross boundaries by improving their access to, and outputs in, language, writing, and communication styles.

Identifying a third major area of potential, Mulligan discusses the potential of GenAI systems to increase research productivity. He states that AI tools can be particularly effective in academic research contexts where data analysis is key to knowledge synthesis or to identifying research gaps.

“GenAI is cheating us of an attempt to use our own cognition to connect ideas.”
Dónal Mulligan, Dublin City University

Challenges

While acknowledging the intriguing possibilities of GenAI for the three themes of opportunity in education, Mulligan details three immediate challenges arising from its use: uncertainty, efficacy, and integrity.

On uncertainty, he describes a perceived paralysis among third level institutions on GenAI policy, as they “wait and see” what other institutions do before emulating them. He expresses concern that while this uncertain inaction persists, third level institutions may miss opportunities to participate in, rather than merely respond to, this moment of rapid techno-social change.

He describes a contemporary context in which higher education is uncertain not just about the macro-level response to the technological shift, but also about the potential harms of relying on specific tools within the sector, when these might produce inaccurate outputs in a field that so values fact and truth. Institutions are unsure of the reliability of the information presented, given the confident delivery of hallucinations, incorrect facts, and problematic interpretations.

Mulligan notes 2025 research from OpenAI making clear that this issue is inherent in large language models (LLMs), adding that “hallucination and inaccuracy is built into these systems… [Contemporary AI Chatbots] present both incredibly insightful things and incredibly fake things with the same tone of confidence”.

Regarding efficacy, Mulligan says it is crucial to consider what degree of expected change arising from AI tools is merely hype: “There is a worry that there is a lack of robust evidence from good, longitudinal studies showing that change is effective.”

Mulligan cites the 95 per cent of businesses that did not see a return on their investment in AI, as reported in The GenAI Divide: State of AI in Business 2025, published in July 2025 by NANDA, an MIT Media Lab initiative. He says this study “shows a huge unevenness in the application of the technology”.

On integrity, Mulligan says the question of how GenAI use can be reconciled with the pillars of academic practice must be answered. He asserts that high-quality critical thinking is crucial for effective AI use, and that in education generally he is concerned AI tools could become “an instrumentalist replacement for a more critical approach”. A move towards AI use and away from critical thinking can see people lose opportunities to learn and become uncritically reliant on technological outputs rather than informed human judgement.

Furthermore, Mulligan says AI and plagiarism are closely connected: use of several major GenAI chatbot models dropped significantly in June 2025, after the end of exam season in May. He calls this pattern “terribly worrying” because it suggests students are substituting learning opportunities “for a summary that is possibly inaccurate”.

Students’ choice of AI models is also concerning, he says. Through focus groups conducted in 2024, he found that the models most used by first year undergraduate students in Ireland included Meta AI and Snapchat’s My AI. Asserting that these are “not good models”, Mulligan explains that students choose them because they are the most readily available, again linking the uptake of AI tools in education with a lack of critical review.

In addition, Mulligan says many AI tools are “knowingly built upon the unauthorised use of huge amounts of intellectual property”. He notes the significant associated challenge for educational institutions, which champion clear pathways of reliably sourced knowledge and sanction plagiarism among students.

“How can academia now tacitly endorse the use of these tools, when they are directly derived from knowledge practices that so fundamentally clash with our norms?” he asks.

Mulligan also illustrates how AI use can undermine cognition by outlining an MIT study that examined the performance of students using different learning methods, including LLMs, classic search tools, and books.

This longitudinal study showed that students displayed lower cognitive load, a measure of brain activity while learning, when using LLMs than with other methods. They retained information very poorly and were significantly less able to articulate what they had learned once the tools were taken away.

He says that this study poses the worrying conclusion that “GenAI is cheating us of an attempt to use our own cognition to connect ideas”.

“It is cheating us of an attempt to use our memory and to reinforce new knowledge because we are handing it off.”
