
Government CIO Barry Lowry: AI in the public sector

The Government’s Chief Information Officer, Barry Lowry, speaks to Joshua Murray about the public sector’s use of artificial intelligence and how this can be better understood and regulated.

Lowry says that the adoption of artificial intelligence (AI) in the public sector has been “cautious” over the last year, and that the narrative around artificial intelligence in some parts of the media has created nervousness amongst senior public servants and politicians. “The public service is making really good use of robotic process automation and has been for quite some time. We are also seeing some excellent work in parts of the public service but in general we are seeing a very measured approach,” he says.

Underpinning the Office of the Government Chief Information Officer’s (OGCIO) approach to AI in the public sector, Lowry explains, is the view that AI is primarily a technology tool. “It is a tool which uses data, and which may use algorithms; and its effectiveness is dictated by the human input into both. What we need to do is look to focus first on the purpose and outcomes we have for the technology and then curate the data and construct the algorithms accordingly. We also need to be very aware that the overall risk changes depending on that purpose.”

In terms of the evolution of the use of AI, Lowry outlines that “there are definitely things happening”. “The Revenue Commissioners is using AI in some activity and there are really exciting things happening within the Defence Forces, health, and agriculture. But overall, it is a slow burner, especially in the ChatGPT space. We are starting to see some experimentation in ChatGPT and planning is underway for more. However, this is very much in the domain of better public services. In terms of automated decision-making, that just is not happening and I am not sure there are compelling arguments that it should.”

“We have done that pretty well with a lot of the technology we have introduced. We have really tried to explain to people that there is a national dimension to this, an EU dimension, but it is all underpinned by trying to protect the public and protect public services.”
Barry Lowry, Chief Information Officer

Strong legislation

Having held the role of Chief Information Officer (CIO) since April 2016, Lowry is currently advising the Government as to how best to integrate artificial intelligence into the operations of the public sector. He believes that a learning curve is needed for decision-makers to gain a true understanding of what AI is if they are going to be able to properly legislate for it.

Whilst he believes that the EU’s Artificial Intelligence Act is “generally a good thing”, Lowry nonetheless thinks that there are “bigger issues at stake in terms of when it should and should not be used”. He states that, because the EU was ahead of the rest of the world, it was “very much breaking new ground” and the consultation was “both general and conceptual in nature”. Moreover, Lowry says that the relatively poor response (just over 1,200 valid feedback instances) and the minority proportion of citizen respondents (30 per cent) indicate that there are “far more meaningful ways to engage the public” as a key component of the legislative process.

“There is a risk that misunderstood attempts to regulate AI are viewing AI as an entity, but it is not really about AI as an entity, it is about how you use it. How you use it is very specific, and because it is very specific, the understanding of that and the views of that will be naturally very subjective. How you can develop an objective regulatory approach to something which could be very personal is difficult,” the CIO says.

“I just think it is being a little bit rushed and we are in danger of introducing regulation without a real understanding of what is being regulated and without really sitting down with the technology providers and talking about shared risk.”

Lowry analogises: “If you are using any service or app on your phone, where does security kick in? Is it the app level or the phone level? The two seem to co-exist pretty well and that is because they work together. There is a process for making an app as secure as possible on a phone. I think it could have been a wider conversation.”

An increasingly fundamental facet

Lowry believes that, as understanding grows of what AI is, of how it can best maximise efficiencies, and of how it can be underpinned by proper regulation, it will play an increasingly prevalent role in the operations of the public sector.

One area outlined by Lowry as a top priority for the integration of AI in the short term is citizens’ services. He says that the OGCIO is looking at the large language model space, within which the CIO believes that ChatGPT can play a fundamental role. Lowry clarifies: “It is a process which is about using AI as a super search, almost like a Bing on steroids. You can see now, even using your home computer, that Bing gives you access to AI when you are making your searches; one can see how that is highly valuable.”

Lowry qualifies this by saying that the use of ChatGPT can be highly valuable “as long as the data has been carefully curated”. “I can see government, for example, using ChatGPT or a similar product for Gov.ie, to make it easier for people to make a search, making the website more intuitive. There is absolutely no doubt that these tools are considerably better than a common search engine design could ever be; they are a step beyond that. We will probably see use of it in some form of chatbot where we have carefully curated the data in advance, and I have no doubt that greater use of such tools, and user feedback, will improve our curation process.”
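To make the “super search over carefully curated data” idea concrete, here is a minimal sketch, assuming a small set of hypothetical Gov.ie-style passages and using TF-IDF similarity in place of a commercial model; the page titles and text are invented for illustration and the code is not drawn from any OGCIO system.

```python
# A minimal sketch of "super search" over curated content, not a description of any
# OGCIO system: rank a handful of hand-curated, hypothetical Gov.ie-style passages
# against a user's query with TF-IDF similarity. A language model could then be asked
# to summarise only the retrieved passages, keeping answers tied to curated data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

CURATED_PASSAGES = {  # page title -> curated text (all illustrative)
    "Passport renewal": "How to renew an Irish passport online, fees, and processing times.",
    "Driving licence": "Applying for or renewing a driving licence, and required documents.",
    "Jobseeker supports": "Income supports and services available when you are seeking work.",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(list(CURATED_PASSAGES.values()))

def search(query: str, top_k: int = 2):
    """Return the curated pages most relevant to the query, best match first."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    ranked = sorted(zip(CURATED_PASSAGES, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

print(search("renew my passport"))
```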

“There is a risk that misunderstood attempts to regulate AI are viewing AI as an entity, but it is not really about AI as an entity, it is about how you use it.”

Referring to Portugal as an exemplar country in this space, Lowry tells of how the Portuguese Government is making use of an AI-based chatbot which answers, and is consistently updated to cover, the 150 most popular questions asked of Portuguese government services. However, he clarifies that this initiative involves “very carefully curating the answers, ensuring that the answers that are given by the chatbot are the ones which the service providers want to be provided”.
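As a rough sketch of that curated-answer pattern, assuming a small hand-written question-and-answer list (the Portuguese service’s actual design is not described here), the bot below only ever returns a pre-approved answer or a fallback, never free-form generated text.

```python
import difflib

# Hypothetical curated question -> pre-approved answer pairs (illustrative only).
CURATED_FAQ = {
    "how do i renew my passport": "Pre-approved answer: renew online via the passport service.",
    "how do i register to vote": "Pre-approved answer: check and update the electoral register.",
    "how do i apply for a medical card": "Pre-approved answer: apply through the online portal.",
}

FALLBACK = "I do not have a curated answer for that yet; please contact the relevant service."

def answer(user_question: str) -> str:
    """Match the user's wording to the curated list and return only approved text."""
    match = difflib.get_close_matches(
        user_question.lower().strip("?! ."), list(CURATED_FAQ), n=1, cutoff=0.6
    )
    return CURATED_FAQ[match[0]] if match else FALLBACK

print(answer("How do I renew my passport?"))
```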

In terms of wider public services, Lowry explains that “we will start to see some greater use of AI in systems”. “Our healthcare services are an area where AI can be used to help recognise tumours, and for other healthcare services, where it is very much about pattern analysis because pattern analysis is one of the best functioning features of AI technology.

“This technology is being used in the UK and the USA for recognising tumours and medical irregularities. It is perfect for that, but the important thing again is that it is a support tool for a clinician, it is not replacing the clinician. Indeed, the clinician adds the critical value to the diagnosis by using other evidence, such as bloods and face-to-face assessment. We are starting to see areas where it will be introduced to support people in their work, but the technology will never replace those people.”
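As a toy illustration of the “support tool, not replacement” point, the sketch below assumes an imaging model that emits a score per scan (the field names and threshold are invented); its only job is to order scans for clinician review, not to make or record a diagnosis.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    patient_id: str
    model_score: float  # assumed output of an imaging model; higher = more suspicious

# A deliberately low threshold: the tool flags scans for attention, it never diagnoses.
REVIEW_THRESHOLD = 0.3

def prioritise_for_review(results: list[ScanResult]) -> list[ScanResult]:
    """Order flagged scans so a clinician sees the most suspicious ones first.

    Every scan remains subject to clinical review; the clinician combines this
    signal with bloods, history, and face-to-face assessment before any diagnosis.
    """
    flagged = [r for r in results if r.model_score >= REVIEW_THRESHOLD]
    return sorted(flagged, key=lambda r: r.model_score, reverse=True)

print(prioritise_for_review([ScanResult("A1", 0.82), ScanResult("B2", 0.12)]))
```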

Vision and ambitions

Speaking on the Government’s Connecting Government 2030 strategy, Lowry describes it as being about “how we are moving, not just with digital government, but how we go past digital government”. “For us, the platinum version of digital government is life events. That is fundamentally about taking what we know about our people and, with their permission, using it more proactively for their benefit.

“If government knows your economic circumstances, it knows what healthcare you are entitled to and what financial support you are entitled to. You should not have to find out about those things yourself. There are opportunities for AI in that space. This is about AI supporting, rather than replacing, the individual. Yes, data can throw up errors, so you need to double check those, but I fundamentally believe that we are moving into a space where there is real opportunity.”

On the Civil Service Renewal 2030 strategy, which he describes as planning towards the “golden egg of evidence-based policymaking”, Lowry says: “We have tended in government to analyse data in a siloed way, with health, social, and education data all being individually analysed. If those can be brought together, we can understand, for example, how location, economic status, and education impact upon the health of a child, which gives us a better insight as to how to help people. The problem is that you have to use vast amounts of data and that is how AI can be used so well.”
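Below is a toy sketch of the kind of cross-silo linkage Lowry describes, assuming pseudonymised identifiers and invented column names and values; real analysis would involve far larger datasets and strict data-protection controls.

```python
import pandas as pd

# Illustrative, pseudonymised extracts from separate silos (all values invented).
health = pd.DataFrame({"person_id": [1, 2, 3], "chronic_condition": [True, False, True]})
education = pd.DataFrame({"person_id": [1, 2, 3], "highest_level": ["secondary", "degree", "primary"]})
location = pd.DataFrame({"person_id": [1, 2, 3], "county": ["Donegal", "Dublin", "Cork"]})

# Bring the silos together on a common pseudonymous identifier...
combined = health.merge(education, on="person_id").merge(location, on="person_id")

# ...so that patterns across domains can be examined side by side.
print(combined.groupby("highest_level")["chronic_condition"].mean())
```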

Lowry states that the key to bridging the gap between the public and an understanding of AI, and ultimately to a better understanding of AI among decision-makers, is the establishment of a voluntary AI council, a proposal published in the Progress Report on the National AI Strategy. “I think it is a good chance to give the public the opportunity to see and understand what we are doing.

“We have done that pretty well with a lot of the technology we have introduced. We have really tried to explain to people that there is a national dimension to this and an EU dimension, but it is all underpinned by trying to protect the public and protect public services; the public response to that has been really positive.”

Concluding, Lowry emphasises that the current initiatives to regulate AI are “on the right track”, but that “there is a learning curve to be done”.

“When did all the media pick up on ChatGPT? It was in January or February 2023, but the AI Act has been in formulation for a lot longer than that. You can just see how the technology side itself is moving at a completely different pace to the regulation, and it is far better to take stock a little bit and really understand what the concerns about this are and what are the opportunities.”
