Harnessing AI for better public service delivery

Jonathan Bright, a former Oxford University professor specialising in computational approaches to the social and political sciences, argues that with thoughtful implementation and public trust at its core, artificial intelligence (AI) can play a pivotal role in modernising and streamlining public services in the UK.
“AI is not new, but its potential in the public sector is accelerating fast,” says Bright, who believes the emergence of generative AI (GenAI) is driving renewed political will and operational experimentation. “What we are seeing now is a step-change in interest and application, not only in what AI can do but also how quickly it is being adopted.”
Bright classifies AI’s public service applications into three broad areas: perception, prediction, and generation. “Perception technologies are already widely used,” he explains, citing examples such as facial recognition at passport gates or analysing aerial imagery to support urban planning. “These are among the most mature applications of AI in government.”
Predictive AI, he notes, has been a major area of public sector innovation for years, though with mixed outcomes. “We have seen models used to forecast missed NHS appointments, identify children at risk, or predict homelessness. But social data is hard to work with, and these tools need to be carefully evaluated for impact.”
GenAI, however, is the "newest frontier", one that Bright says is "being trialled enthusiastically across government departments". From assisting in policy drafting to answering citizen queries through chatbots, its uptake has been "swift and bottom-up".
Efficiency and empowerment
Recent research from The Alan Turing Institute shows a high rate of GenAI use among public sector professionals, even in areas where digital adoption has traditionally lagged.
“In our survey of professionals across schools, universities, the NHS, social care, and emergency services, nearly one-third of respondents in schools reported using GenAI daily,” says Bright. “Over half knew someone who was already using it. That is a staggering rate of uptake in less than a year.”
More striking still, Bright notes, is how positively public sector workers view the technology. “These are people who historically disliked the digital tools they had to use. But here, they see clear benefits, especially in reducing routine admin and freeing up time for more impactful work.”
In the NHS, for instance, respondents estimated that almost half their working week is spent on administrative tasks, and they believe AI could cut this burden by nearly a full day. Such efficiency gains could be transformative if realised.
Opportunities
Bright’s team also sought to identify high-impact use cases for AI by analysing 400 citizen-facing services across UK central government. These range from passport applications to driving licence renewals, transactions that collectively total more than one billion annually.
“Of these, we found around 140 million transactions involved complex decision-making,” says Bright. “And over 80 per cent of those show a high potential for automation using existing AI capabilities.”
Bright says that economic methods such as those used to estimate automation potential can help public bodies identify where to invest next. “It is not just about what is possible, but where AI can make the biggest difference,” he says.
Risks
Despite the optimism, Bright says that risks abound: “The promise of AI is huge, but so are the potential pitfalls. If we do not get it right, we risk losing public trust.”
He warns that validating GenAI systems such as those used in citizen interactions is an unsolved challenge: “Setting up a chatbot is easy. Ensuring it gives the right answer every time is incredibly difficult.”
Failure to plan for when technology goes wrong, he adds, has been at the heart of some of the UK’s most damaging public sector digital failures. “We have to assume these systems will fail sometimes and be ready with mechanisms for redress.”
Bias and fairness are equally critical. “Public services do not just cater to the average citizen; they must work for everyone,” says Bright. “An AI system that marginalises vulnerable groups is not just ineffective, it is dangerous.”
Responsible innovation
Bright stresses the need for clear governance frameworks, particularly in light of upcoming legislation such as the EU AI Act. Transparency, he says, will be key to earning public trust.
“If a chatbot is responding to a citizen, that citizen deserves to know it is a chatbot. Simple design choices like that help build confidence,” he says. “We must be proactive about responsible innovation, not reactive.”
He draws comparisons with technologies that failed to win public support, such as genetically modified crops or nuclear power: “Even the most promising innovations can be rejected if people do not feel they are being used fairly and transparently.”
Bright believes that GenAI can “significantly improve the public service experience”. However, its success hinges on “how it is deployed, governed, and understood”.
He concludes: “With the right structures in place, AI can make government more responsive, more efficient, and more inclusive, but if we ignore the risks or treat governance as an afterthought, we will squander a rare opportunity for transformation.”