Technology

The good, the bad and the artificial


As AI technology proliferates, particularly in relation to autonomous vehicles, we are faced with the challenge of preventing it from simply reifying existing social norms and preferences. The University of Limerick’s Martin Mullins and Martin Cunneen write.

November 2018 saw the passing of Douglas Rain, who provided the chilling voice of HAL 9000 in Stanley Kubrick’s classic movie 2001: A Space Odyssey. In movie history, HAL 9000 is perhaps the most striking example of artificial intelligence in action. More precisely, HAL is a manifestation of generalised AI, in that the computer has a mission to protect and will act ruthlessly to ensure it is completed. One of the most famous lines in movie history (certainly among those spoken by a computer) occurs after HAL has killed other members of the crew: “Look Dave, I can see you’re really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.” Sounds outlandish? Well, think again.

The advent of fully autonomous vehicles on our roads means that AI systems will be making morally consequential decisions. Such decisions will be rare but extremely impactful. Consider the current situation: if a child runs out in front of you and, in a panic, you swerve into an oncoming vehicle and kill the driver, that decision will have ethical content. That said, any ethical decision will be clouded by response time and extreme anxiety. Such occurrences tend to be labelled accidents. However, an automated car taking a similar decision may be making an explicit calculation about the risks involved and the relative value of human lives. Instead of an instinctive turn of the steering wheel, AI systems embedded in the car will calculate the ‘best’ course of action. At the end of such an episode, who is to say that your mid-range executive saloon won’t sit you down, tell you to calm yourself and think things over?
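
To make the contrast with an instinctive swerve concrete, here is a minimal, purely illustrative sketch of what such an explicit calculation might look like. Every element of it, the candidate manoeuvres, the parties at risk and the probabilities, is an invented assumption, not a description of any real vehicle’s software.

```python
# Purely illustrative sketch of an explicit "least harm" calculation.
# The manoeuvres, parties and probabilities are invented assumptions,
# not a description of any real vehicle's software.

candidate_manoeuvres = {
    # manoeuvre: list of (party at risk, probability of fatal harm)
    "brake_straight": [("child_pedestrian", 0.6)],
    "swerve_left": [("oncoming_driver", 0.5), ("own_passenger", 0.2)],
    "swerve_right": [("own_passenger", 0.4)],
}

def expected_harm(outcomes):
    """Sum the harm probabilities; note this values every life equally."""
    return sum(prob for _party, prob in outcomes)

best = min(candidate_manoeuvres,
           key=lambda m: expected_harm(candidate_manoeuvres[m]))
print(best)  # -> swerve_right, under these made-up numbers
```

The moment any weight other than ‘one life equals one life’ enters that sum, the machine is making precisely the kind of relative valuation described above.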

Our future relationship with cars is but a microcosm of how AI will impact our lives. Today, the car is one of the strongest symbols of our freedom as consumers; think of the dominant motifs in the ads. It’s all about off-road adventure, frontiers of experience and spontaneity. Well, not for much longer. Even before full automation becomes possible, cars are already monitoring our driving behaviour.

Debates are happening in Europe as to what a car should do if your driving is substandard. These will be connected vehicles with the ability to communicate. If, for example, a driver consistently breaks the speed limit, should the car inform the police or indeed the insurance company? In extremis, if the car requests a change in behaviour through the human-machine interface (the dashboard) and the driver fails to respond, would it be ethical for the car to initiate a return-to-safety routine and pull over? And if there are children in the back, there is an argument that the threshold for such a decision should be lower, as the sketch below illustrates.
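
A hedged sketch of the kind of rule being debated might look as follows. The thresholds, the notion of a single ‘risk score’ and the parameter names are all hypothetical assumptions for illustration, not any regulator’s actual specification.

```python
# Illustrative sketch of a "return to safety" rule for a connected car.
# The thresholds, the single risk score and the parameter names are
# hypothetical assumptions, not a regulator's specification.

RISK_THRESHOLD_DEFAULT = 0.8    # intervene above this risk score
RISK_THRESHOLD_CHILDREN = 0.6   # lower bar if children are on board

def should_pull_over(risk_score: float,
                     driver_acknowledged_warning: bool,
                     children_on_board: bool) -> bool:
    """Trigger the return-to-safety routine only when the driver has
    ignored the dashboard warning and the risk exceeds the threshold."""
    threshold = (RISK_THRESHOLD_CHILDREN if children_on_board
                 else RISK_THRESHOLD_DEFAULT)
    return (not driver_acknowledged_warning) and risk_score >= threshold

# A driver ignoring repeated warnings, with children in the back:
print(should_pull_over(0.7, driver_acknowledged_warning=False,
                       children_on_board=True))   # True
```

Even in this toy form, the ethical question is visible in a single line: someone has to choose the numbers.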

So, at the very least, in the field of transport, machines will use AI routines to minimise harm, and this will necessarily involve moral choices. In anticipation of this ‘brave new world’, an international group of researchers led by academics at MIT has been trying to ascertain human preferences in advance with regard to the moral dilemmas that pertain to the task of driving.

The results are very interesting. The work is newsworthy partly because of the sheer number and geographical spread of the observations: in total, the researchers gathered 40 million decisions from 233 countries and territories. Whilst it is difficult here to sum up the complete article, we can provide some highlights.

The piece ranked ‘preference in favour of sparing’: in other words, who or what on the road respondents would try hardest to avoid. As you would expect, the young easily beat the old, with a pushchair at the top of the ‘sparing’ charts. Similarly, humans would be spared over animals. In a clear spirit of utilitarianism, the many are spared over the few.


Some more troubling findings also emerge: for example, sandwiched between dogs and cats are criminals, not quite as valued as a dog but more valuable than a cat. In terms of social class, there is a clear preference for sparing higher-status people over lower-status individuals. Detectable too is a preference for sparing fit people over large people. The homeless do better than both old men and old women.

One conclusion: if you are an elderly, overweight criminal, you had better use the zebra crossing. The implications of such preferences are complex, but in the context of this discussion on AI, what if such preferences were implemented by a machine? One of the more profound questions facing us is how we will prevent AI from simply reifying existing social norms and preferences.
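
To see how easily such survey results could be reified, consider this deliberately crude sketch: aggregate preferences hard-coded as ‘sparing scores’. The numbers are invented (the ordering loosely echoes the rankings discussed above), and that is the point; nothing in the code questions them.

```python
# Deliberately crude sketch: aggregate survey preferences hard-coded as
# "sparing scores". The numbers are invented; nothing in the code
# questions them, which is exactly the problem.

SPARING_SCORE = {
    "pushchair": 1.00,
    "child": 0.95,
    "executive": 0.80,        # higher status ranked above lower status
    "homeless_person": 0.55,
    "elderly_person": 0.50,
    "dog": 0.35,
    "criminal": 0.30,         # between the dogs and the cats
    "cat": 0.25,
}

def whom_to_spare(group_a, group_b):
    """Spare the group with the higher total score -- utilitarian in
    form, but only as fair as the scores fed into it."""
    total = lambda group: sum(SPARING_SCORE[member] for member in group)
    return group_a if total(group_a) >= total(group_b) else group_b

print(whom_to_spare(["elderly_person", "criminal"], ["child"]))
# -> ['child']: the survey's biases, applied without question
```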

Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Australia, argues that the development of thinking machines is as bold and ambitious an adventure as mankind has ever attempted. “Like the Copernican revolution, it will fundamentally change how we see ourselves in the universe,” he writes.

One of the risks we face with the introduction of AI is a gradual abdication of our responsibility as human beings to be well informed and indeed to try to be the best that we can. With the arrival of so-called superintelligence, that is, the ability of machines to engage in multifaceted, complex decision-making and to correct previous errors, it may become just too easy to step aside and let AI take control.

For the human race, the consequences may be profound. Part of being human is the desire to develop and to better understand the world. If we become mere spectators, our desire to be educated and thoughtful members of society may be compromised – our very humanity could be undermined.

Of course, HAL 9000 introduces a much more dangerous spectre, one reinforced by the late Stephen Hawking, who consistently argued that AI could represent an existential threat to humanity.

Martin Mullins

Dr Martin Mullins is a director of Transgero, a consultancy specialising in emerging technology risk, and a member of Lero.

Martin Cunneen lectures at the University of Limerick.

Both teach on the new MSc in Artificial Intelligence at UL.
