Higher education’s uncomfortable generative AI crossroads

While our universities are expected to act as the critical conscience of society, interrogating the hype, weighing labour and climate impacts, and defending intellectual integrity, there is also a duty to deliver relevant and responsive education now, writes Dublin City University’s Dónal Mulligan.
Students, employers, and policymakers understandably want to see AI-ready graduates with critical AI literacy and operational competence, often faster than universities can plausibly deliver. GenAI evolves on quarterly cycles, but curricula move on multi-year ones. That mismatch means universities can appear slow or evasive, even as they begin structured responses. Yet some slowness may be valuable.
MIT’s NANDA initiative reported that roughly 95 per cent of enterprise GenAI pilots delivered no measurable financial return, despite significant spending. For Irish and European academia, this is a warning against rushed AI-washing of programmes just to look modern. It takes time to distinguish enduring capabilities and genuine transformations from expensive distractions.
The same rapid pace of change also makes staff professional development tricky; colleagues who are genuine experts in their discipline can feel permanently behind on the latest implications of AI for their field.
The answer is not to turn every lecturer into a prompt-engineering guru, but to resource cross-institutional expertise, shared exemplars, and realistic, regularly updated guidance. Ideally, this should be coordinated across the third-level sector rather than reinvented campus by campus as academics try to respond to the same challenges in separate silos.
GenAI models also sit uneasily with core academic norms. Hallucinations are a structural feature of current large language model architectures, fundamentally at odds with commitments to accuracy, traceability, and reasoned argument. AI systems often produce confident, plausible inaccuracies mixed in with valuable responses. Reliance on LLMs to collate and synthesise ideas promises convenience and productivity, but risks fracturing established practices of referencing and of grounding claims in reliable sources. At worst, GenAI tools facilitate plagiarism-as-a-service, replacing original composition and threatening the credibility of assessment.
Emerging longitudinal work from MIT’s Media Lab on the “cognitive debt” of AI-assisted learning adds a deeper worry: students who repeatedly offload writing tasks to LLMs show weaker neural engagement, poorer recall, and more homogeneous outputs than peers who write without AI assistance. If that pattern holds, uncritical use of GenAI becomes explicitly anti-educational, robbing students of opportunities to learn and outsourcing their analytical thinking.
Other externalities remain unresolved. Universities that proudly commit to net zero goals are being asked to normalise tools whose training and usage rely on energy- and water-intensive data centres with non-trivial environmental impacts. Assessment regimes demand originality and authorship even as mainstream GenAI systems are built on opaque, large-scale ingestion of human texts, images, and performances without attribution or compensation. For European institutions, the concentration of control over these systems in a small set of US tech firms is also deeply concerning.
So, what should academia do? We should lean into our traditional strengths: slower, rigorous, collective scrutiny of evidence, including honest scrutiny of evidence that GenAI often under-delivers. Academia has a duty to provide thoughtful debate and communicate both the benefits and pitfalls of the technology in applied contexts.
At the same time, many GenAI tools are already embedded in students’ lives and workplaces. An urgent task is building “critical AI literacy” so students can evaluate these systems. We should help students ask better questions of tools that sound authoritative but may be wrong. Accuracy, citation, and understanding of model limitations should become explicit learning outcomes now, even as longer-term curricular development continues. We should also connect AI to wider debates on labour, climate justice, and intellectual property, rather than treating it as a neutral study aid.
Crucially, this cannot be done by universities alone. We should deepen collaboration with media-literacy networks with decades of experience in misinformation, platform power, and critical digital practice, and engage partners committed to ethical, evidence-based deployment rather than hype. Academia’s mission has always been to model structured, careful, and creative thought. Our response to GenAI should bring those critical traditions to bear.