Technology

Artificial Intelligence: a dangerous proposition?

Arguing that the rise of AI does not merely raise issues of risk, safety and liability, but also challenges the future of humanity, NUIG’s John Danaher asks what role humans should play in a society saturated by intelligent machines.

Joshua Brown loved his Tesla. In December 2015, when Tesla unveiled the latest iteration of their ‘autopilot’ software, Brown gleefully documented its features on his YouTube channel, showing how it could automatically steer, brake and swerve. But on 7 May 2016, Brown’s Tesla was involved in a fatal collision with a truck while running the autopilot application.

Was this the first AI fatality? That depends on how we define ‘AI’. In 1950, Alan Turing famously argued that if a computer could carry out a conversation like a human being, then it was, for all intents and purposes, ‘intelligent’. This made human-likeness the ultimate arbiter of what counted as ‘intelligent’. But Turing’s test is limiting and anthropocentric. A more expansive definition is now favoured: a system is said to be intelligent if it acts in a goal-directed manner, if it gathers and processes information, and if it learns from its behaviour. Distinctions are then drawn between ‘broad’ and ‘narrow’ forms of AI. Narrow forms are good at solving particular problems; broad forms are good at solving problems across multiple domains. Extant AI systems are narrow in form, but many dream of creating broader, more generally intelligent systems.

AI, so defined, is on the rise. This is partly because of changes in how we create it. In the early days, engineers created AI from the ‘top-down’, i.e. by programming systems to follow long lists of ‘if-then’ rules. When designing a chess-playing computer, for instance, an engineer would program it with many rules of the form ‘if opponent makes move X, make move Z’. If the engineer was sufficiently comprehensive, the system might stand a good chance of playing chess ‘intelligently’. But this approach proved unwieldy. Even for a well-defined game like chess, the number of rules required is far beyond what any human could enumerate.
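To make that concrete, the following is a minimal, purely illustrative Python sketch of the top-down approach: a hand-written lookup table of ‘if-then’ rules. The moves and the function name are invented for illustration and are not taken from any real chess program.

# A toy illustration of 'top-down' AI: the engineer hand-codes every rule in advance.
# The moves and rules below are invented for illustration only.
RULES = {
    # 'if opponent makes move X, make move Z'
    "e2e4": "e7e5",
    "d2d4": "d7d5",
    "g1f3": "b8c6",
}

def choose_response(opponent_move: str) -> str:
    # Look up a hand-written response; if no rule was written, the system is stuck.
    return RULES.get(opponent_move, "resign")

print(choose_response("e2e4"))  # -> e7e5
print(choose_response("a2a3"))  # -> resign (nobody wrote a rule for this case)

The weakness is visible immediately: the system can only ever be as good as the list of rules someone managed to write down.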

And so, from the 1980s, engineers started to program AI from the ‘bottom-up’. They gave systems a few basic learning rules, and allowed them to work out their own problem-solving techniques by training them on a dataset of well-understood problems. This was the ‘machine learning’ approach, and it has led to many of the recent successes in AI. But machine learning took a long time to come of age: it needed mass surveillance and data-mining to become effective. Couple this with advances in robotics and cloud-based computing, and you have the conditions for much of the current AI hype. If even a fraction of this hype becomes reality, it will create significant ethical and legal problems. Four of them are particularly pertinent for policy-makers.
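Before turning to those problems, here is a minimal, purely illustrative sketch of the ‘bottom-up’ contrast, using the widely available scikit-learn library; the toy dataset and its feature labels are invented for illustration and do not describe any real system.

# A toy illustration of 'bottom-up' AI: the system induces its own rule from labelled examples.
# The dataset below is invented for illustration; it does not describe a real problem.
from sklearn.tree import DecisionTreeClassifier

# Labelled examples of a well-understood problem:
# features = [hour of day, messages sent per hour], label = 1 for 'suspicious', 0 for 'normal'.
X = [[2, 120], [3, 150], [1, 200], [14, 2], [15, 3], [16, 1]]
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier()
model.fit(X, y)                    # no engineer writes explicit rules; the model works them out
print(model.predict([[2, 180]]))   # expected output: [1]

The point of the contrast is that the decision rule is discovered from data rather than enumerated by hand, which is also why such systems are so hungry for data.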

The first is the impact of AI on privacy. Machine-learning systems refine their problem-solving capabilities by feeding on masses of data. This leads to more efficient, user-friendly systems, but it comes at the cost of greater intrusions into our privacy. We must decide whether we can live with this trade-off. To do that, we have to be fully aware of the scale of the intrusion. Often we are not, because data-gathering technologies are hidden from view and secure our consent in dubious ways. The new General Data Protection Regulation updates the current regulatory system in an effort to address these challenges, but constant vigilance will be needed if we are to manage the risk.


The second issue is that of control and security. AI is often sold to us on the promise that it will increase well-being. Self-driving cars, for example, are said to reduce road accidents. While these claims may be true, more general security and control problems will arise once AI becomes widespread. The susceptibility of networked technology to malicious interference has become all too apparent in recent years. AI systems are similarly vulnerable. What will happen when a fleet of self-driving vehicles is the target of a black-hat hack? And it’s not just malicious hackers we have to worry about. We may simply become unable to control AI once it becomes more intelligent and powerful than us. In his book Superintelligence, Nick Bostrom argues that once AI becomes sufficiently capable, there is a real possibility that it will pursue long-term goals that are not compatible with our survival. Although Bostrom’s ‘control problem’ arises at an advanced level of machine intelligence, lesser versions of it are already apparent. Traders who use automated trading algorithms are often disturbed by what happens when those algorithms interact with rival systems.

This leads to the third issue: liability and responsibility. As AI systems become increasingly capable of doing damage in the world, questions arise about who is responsible when things go wrong. Machine-learning systems often do things that their programmers cannot foresee. This is problematic because many of our legal doctrines depend on foreseeability (and related concepts) for assigning blame. This opens up ‘liability gaps’ in the system. The problem is highlighted by cases like that of Joshua Brown. Tesla required all users of its autopilot software to stand ready to take control if warnings flashed on screen. According to the official accident report, Brown did not heed those warnings before his fatal collision. This may save Tesla from liability in this instance, but addressing liability in other cases may not be so easy. There is a tension whenever a company markets self-driving technology as safer than human drivers, whilst insisting that humans must take responsibility if something goes wrong. This has led many legal theorists to favour alternative solutions, including the increased use of strict liability standards for compensation and, more controversially, the possibility of electronic personhood for advanced AI.

The final problem concerns the impact of AI on human dignity. AI undoubtedly has a displacing effect on humans: if an AI system functions effectively, it obviates the need for a human to perform the task. Humans may still oversee what’s going on, but their ongoing participation will be reduced. What if the task requires a human touch? Consider the use of robots to care for elderly patients. The EU has invested heavily in projects that enable this. But do we really want robotic carers? Is care not something that is built upon human connections? Similarly, certain legal and bureaucratic tasks might require human participation in order to be deemed politically and socially legitimate. Allowing AI to dominate these realms creates a real threat of ‘algocracy’ (rule by algorithm), which would be detrimental to democratic rule.
