AI report

Governing with AI

Ricardo Zapata, Digital Government Policy Analyst at the Organisation for Economic Co-operation and Development (OECD), discusses the OECD’s Governing with Artificial Intelligence report on how governments are harnessing the benefits of AI and how they can manage its risks.

Ricardo Zapata says the report, published in September 2025, highlights that “AI in itself is not an antidote against low levels of trust in government, but if it is used strategically and responsibly, it can support some of the most important drivers of trust”.

Zapata argues that a lack of trust is an important issue for policymakers and refers to the OECD Trust Survey 2023, which found that “44 per cent of the surveyed population have low or no trust in national government” across countries.

The OECD’s Governing with Artificial Intelligence report focuses on how governments are using AI to improve their productivity, responsiveness and accountability, thereby increasing citizens’ trust in them.

The report analysed 200 use cases of AI in government across 11 policy functions including tax administration, civic participation, and delivery of justice.

One of the main findings, Zapata says, “is that governments are already using AI to obtain a number of important benefits”.

He adds: “Governments are often seeking to automate, streamline, and tailor public services to make them more efficient and citizen-centred, which is a goal of 57 per cent of cases analysed.”

Based on the report, Zapata notes that examples of this include “the use of AI to support the drafting of legal decisions, or the use of AI virtual assistants to provide personalised support to citizens”.

The second most popular use case of AI in government is “for better decision-making, sense-making, and forecasting”.

Zapata says this includes “governments identifying risk factors and predicting natural disasters, such as wildfires, so that they can take preventive actions and put responses in place in advance”.

Anomaly detection is another area where AI is being prominently used. According to Zapata, “in 30 per cent of the analysed use cases, governments are using AI to enhance accountability”.

One of the ways this occurs is “using AI to identify suspicious tax filings to help uncover evasion”.

Risks

While Zapata says “the use of AI by governments holds significant transformative potential once it is more mature”, he insists that it is vital to “manage the risks that come with it”.

There are significant ethical risks associated with AI use in government.

Zapata explains that “AI can infringe on rights and values such as privacy, fairness and autonomy”, particularly through biased data or lack of transparency.

“It is important to highlight that past experiences show that failing to manage this risk properly can cause harm in the real world.

“Governments also need to manage exclusion risks,” Zapata says. “Without inclusive design, AI could deepen digital divides and leave some groups unable to take advantage of AI benefits.”


An example of this imbalance is the dominance of the English language in generative AI, which can lead to lower quality results for governments or citizens using other languages.

The final risk Zapata highlights is public resistance to AI, which may arise if AI services perform poorly and negatively affect citizens. Additionally, “poor communication can also erode public support for AI in government”.

Despite the risks associated with AI, the OECD report encourages governments to consider using the technology in a responsible way, as the risks of inaction may outweigh those associated with adoption.

“Delaying or avoiding AI adoption where it could add value can lead to missed opportunities, widen the capability gaps between public and private sectors, and further erode trust in government.”

Framework for AI in government

In the 2025 Governing with AI report, the OECD released its Framework for Trustworthy AI in Government. Zapata says it is designed to advise governments on how “to use AI in a trustworthy and responsible way, as well as to tackle common implementation challenges”.

He explains that there are three core pillars in this framework, with the first called ‘enablers’. Zapata identifies these as “the building blocks needed to succeed”, including data, infrastructure, skills, and governance.

Examples include governance mechanisms, “concrete guidance, such as the UK’s playbook for government”, and a “mechanism for experimenting and scaling AI, such as AI incubators”.

The second pillar of the framework is ‘guardrails’, the “instruments to guide the use of AI and ensure it is done in a trustworthy manner”.

Zapata explains these can be “binding rules” such as the European Union AI Act, but there are also “non-binding approaches” such as the Irish Guidelines for the responsible use of AI in Public Service.

The framework’s final pillar is ‘engagement’. Zapata says this “involves engaging with the stakeholders, including the public, to ensure that all relevant voices are heard and the use of AI is trustworthy and meets their needs”. An example of this was the citizens’ assembly in Belgium in 2024, where 60 randomly selected people formed a panel to discuss the future of AI in the EU.

Concluding, Zapata highlights the work his organisation is undertaking to improve how AI is used by governments around the world.

“We are continuing to extend our work in government AI, actively working on key topics like approaches for governments to experiment with AI, measuring its impact and return on investment, and addressing skills gaps in the civil service, among others.”
