
The ethical challenges of artificial intelligence

Paula Boddington of the Department of Computer Science, University of Oxford, examines the particular challenges of developing workable and effective codes of conduct and regulations in the field of AI.

Ethical and social issues concerning artificial intelligence (AI) are scarcely out of the news these days. Will driverless cars be safe? What decisions will a driverless car make in the event of a looming collision? What are the legal implications, and who is liable in the event of a crash? If driverless cars become safer than human-driven cars, will people who still want to drive come to be seen as reckless and irresponsible? (Remember how much social attitudes to drink-driving have changed over the course of only a few years.) And will driverless cars put professional drivers out of work? Or are they to be welcomed for the freedom they offer to those who would otherwise be unable to drive?

AI covers a multitude of applications, so it presents us with a plethora of issues to consider, including many for businesses and the economy, with implications for regulation and policy. For instance, AI may be embedded in algorithms that make decisions in a variety of contexts, from targeted advertising to sentencing decisions in court to automated trading on the stock market. Important issues here include concerns about the hidden biases such algorithms may have, often masked because the exact way in which decisions are reached may be unknown. It’s feared that algorithms may only intensify existing prejudices. And while it might be good for businesses to use algorithms to work out exactly how to sell as much as possible to the most likely customers, for those on the receiving end there’s an uncomfortable feeling of being manipulated by intelligent machines that have analysed so much data on us all that they know more about us than we do ourselves. Are we being covertly coerced?

A characteristic signature of AI is that it’s used to enhance or replace human agency. This means that we end up asking really fundamental questions about how we as human beings relate to the world and to each other, about our social connections, and what happens when machines come into play.

It’s characteristic of much AI not that it presents us with completely new problems, but that it drives many existing issues to the extreme. We are already concerned about privacy in this information age – will AI make these concerns even more acute? As more and more of our interactions take place via intelligent machines rather than through human contact, what impact will this have on human dignity and on social interactions? And perhaps one of the most basic questions of all is presented by those looking to a possible future where AI and robots are doing much of the work that humans previously did. What do we do then? Why might we as individuals, and as a society, want robots to work instead of us? What is life all about? What kind of dignity is there in a future where we might have perhaps ‘too much’ leisure?

Many are therefore calling for attention to these ethical and social issues and for appropriate laws and regulations. Work is already underway, for instance on developing broad principles for AI, such as the Future of Life Institute’s 23 Asilomar Principles for Beneficial AI, drawn up in January 2017. But however laudable such principles may be, there’s no point in aspirational statements that can’t be translated into concrete steps. The IEEE (Institute of Electrical and Electronics Engineers) has been running a Global Initiative for Ethical Considerations in the Design of Autonomous Systems, which includes the development of a variety of industry standards, for instance on embedding ethics into system design, on combatting algorithmic bias, and on the use of AI in systems designed to ‘nudge’ people into certain behaviour. Governments around the world, including the USA, the UK, and the EU, are putting efforts into understanding the impacts of AI on issues such as the organisation of the workforce and taxation.


Many in industry and academia are working towards developing ethical and beneficial AI. The Future of Life Institute has funded 37 projects with a grant from Elon Musk and the Open Philanthropy Project, including one based in the Computer Science Department at Oxford University, where we’ve been considering what is needed to develop codes of ethics in AI. My book arising out of this work, Towards a Code of Ethics for Artificial Intelligence, examines the particular challenges of developing workable and effective codes of conduct and regulations in the field of AI.

So what are some of the particular problems here? It’s not easy to slot codes of conduct for AI into existing models of professional ethics, for a variety of reasons. AI may readily be produced without any need for the professional accreditation that forms the basis for the general rules of many codes of professional ethics.

The need for professional codes of conduct is premised on the notion that professionals have knowledge and skills which others lack, and which could be used to the detriment of clients or the general public; hence the need to protect the public from possible problems. But this all presupposes that professionals have the skills to control their products or services fully, preventing harm and producing benefit. However, one of the central issues for AI is the capacity of even its creators to control it – the degree of autonomy such systems might have and the lack of transparency over how some AI operates make this highly problematic. These problems of control mean that there are extra complexities in developing codes of ethics to protect clients and the public. This is further exacerbated by the speed at which AI is developing and the multiple ways in which it is becoming embedded in how we receive information and interact with the world and with each other. With such rapid changes, it’s hard even to identify all the ways in which unforeseen effects might arise for society.

One implication of these far-reaching issues is that the involvement of wide groups of people with different perspectives and interests is vital as the ethical challenges of AI are uncovered and debated. Those who have expertise in AI need the partnership of others to detect and assess the wider implications of its development and use.

Towards a Code of Ethics for Artificial Intelligence, by Paula Boddington, with forewords by Michael Wooldridge and Peter Millican (Springer, 2017), is available now on Amazon and directly from the publisher.
