Ethical rules for self-driving cars

As autonomous vehicles proliferate, accidents involving them will become more common, and clear legal and ethical rules will be needed to govern what happens when they occur, writes NUIG law lecturer John Danaher.

Driverless vehicles promise a lot. They promise to cut road deaths, free up urban space now reserved for parking, and make commuting more pleasant and productive. They also have a dark side. Since 2016, there have been four documented fatalities involving autonomous driving systems. Two of these occurred in 2016 and involved Tesla's Autopilot system. Two more occurred in March 2018, one again involving Tesla's Autopilot and the other involving an Uber self-driving car. The latter was the first time that a pedestrian, as opposed to someone travelling in the vehicle, was the victim.

These fatalities are just the tip of the iceberg. There are many other reported collisions and incidents. An increase in their number is inevitable. As driverless vehicles become more common, and as they interact more frequently with one another and with human beings, there are going to be more accidents. You cannot create a perfectly safe system. This raises the critical question: what rules should be put in place to deal with these accidents?

Liability for injury or death is, perhaps, the most obvious concern. The traditional rules of legal liability are good at assigning blame to humans. They are less well-equipped to deal with autonomous vehicles. Should the manufacturers or programmers be liable in the event of an accident? Should it be the drivers/passengers? Or should we have a general system of social insurance to deal with the fallout? Different opinions have emerged already.

In response to the first fatality in May 2016, Tesla were quick to disown any liability, noting that their autopilot came with a warning to all drivers that they were expected to take control of the vehicle if something unexpected happened. Other manufacturers have been less inclined to disown liability. Volvo and Audi, for example, have promised to take responsibility for any crashes caused by their self-driving systems.

This ‘responsibility gap’ problem is serious, but it could be addressed by some suitable alteration in the legal rules. It is not as if legal systems have never had to deal with issues of vicarious or strict liability. There are a number of workable solutions. A more serious problem, and one that may ultimately prove more difficult to resolve, concerns the ethical rules that the driverless vehicles themselves should follow in the event of an unavoidable accident. If forced to choose, should they save the passenger over the pedestrian, the young over the old, or the many over the few?

Philosophers and ethicists have long debated these questions. One classic forum for these debates is ‘the trolley problem’ – a contrived hypothetical scenario in which a runaway railway trolley will kill five people unless it is diverted onto a side track, where it will kill one. The purpose of the scenario is to test our moral intuitions. This scenario has now been repurposed for the driverless vehicle era. The ethicist Patrick Lin asked people to imagine a self-driving car being faced with a tragic choice between swerving left to avoid collision with an eight-year-old girl or swerving right to avoid collision with an 80-year-old woman. Who should the car be programmed to save?

Variations on Lin’s thought experiment have been sketched by others and much debate has ensued. The problem with this debate – as with many philosophical debates – is that little agreement has emerged. Some people think cars should be strictly utilitarian, saving as many lives as possible and possibly factoring in variables like the age and social utility of victims. Others prefer deontological approaches, which outlaw certain kinds of conduct irrespective of their consequences and treat all persons with equal dignity. The real question for programmers is this: what should be done in light of such widespread disagreement?
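
To make the contrast concrete, here is a deliberately simplified Python sketch of how these two families of rules might be expressed as accident-handling code. Everything in it – the manoeuvre labels, the harm counts, the way the deontological constraint is cashed out – is invented for illustration and is not drawn from any real vehicle's software.

    # Hypothetical sketch: two ways an accident algorithm might choose between
    # unavoidable-collision manoeuvres. All names and numbers are illustrative.
    import random
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        label: str          # e.g. "swerve left", "stay in lane"
        people_harmed: int  # people this manoeuvre is expected to harm

    def utilitarian_choice(outcomes):
        """Minimise harm: pick the manoeuvre expected to hurt the fewest people."""
        return min(outcomes, key=lambda o: o.people_harmed)

    def deontological_choice(outcomes):
        """One crude reading of a deontological constraint: take a harmless
        option if one exists, but refuse to rank victims against each other
        when every option harms someone (here, choose arbitrarily instead)."""
        harmless = [o for o in outcomes if o.people_harmed == 0]
        return harmless[0] if harmless else random.choice(outcomes)

    options = [Outcome("swerve left", 1), Outcome("stay in lane", 5)]
    print(utilitarian_choice(options).label)    # always "swerve left"
    print(deontological_choice(options).label)  # either manoeuvre

Even this toy version shows why the disagreement matters: the two functions can return different answers in exactly the same situation, and someone has to decide which one ships in the car.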

Three solutions are on the table. The first is simply to go with the majority preference. Although there are interminable philosophical disagreements about the ‘correct’ ethical rules, there are some clear preferences among the general population. The recent ‘Moral Machine’ experiment provides good evidence for this. The experiment involved an online game (available at: moralmachine.mit.edu) in which people were asked to decide what they would like a self-driving car to do in different variations on Lin’s thought experiment.

A team of experimenters, led by Edmond Awad, analysed the results of 40 million decisions in this game. The experimenters found a clear preference for saving many lives over fewer lives, for saving the young over the old, women over men, and humans over animals. Nevertheless, they also found some interesting regional and country-level variations, including greater respect for older people in Asian countries and a stronger preference for saving women and physically fit people in Southern countries. The experimenters recommend that we take on board these majority preferences when coming up with the rules for self-driving cars.

But should ethics be decided by majority rule? Another solution, favoured by some philosophers, is to take moral uncertainty seriously. In other words, to acknowledge that we don’t actually know what the right thing to do is (in certain situations) and to design systems that either randomise between different moral rules or pick the rule that maximises the expected moral value in a given scenario. This latter approach is favoured by Vikram Bhargava and Tae Wan Kim in their article ‘Autonomous Vehicles and Moral Uncertainty’. They argue that it provides the best guidance to the would-be programmer of a driverless vehicle.

But would we really tolerate a set of accident algorithms that operate with a degree of randomness? Another solution to the problem would be to let the market decide. In other words, to allow manufacturers to sell vehicles that cater to their customers’ moral preferences. If you are a utilitarian, you can buy a utilitarian vehicle. If you are a deontologist, you can buy a deontological vehicle. And so on.

But do we really want a free market in morals? Furthermore, if we allowed the market to figure this out, we could end up with some unwelcome outcomes. The legal theorist Hin-Yan Liu, for example, has argued that car manufacturers could start selling people ‘immunity devices’ that protect them in the event of unavoidable accidents, or else give them a higher priority ranking over other potential victims. The idea seems positively dystopian, but we already live in a world in which the wealthy can effectively buy moral priority over the poor in markets for healthcare. Do we want to do the same in the market for protection from self-driving cars?

We have to come up with answers to these questions soon. In May 2018, the EU Commission set out its ambition to make Europe the world leader in ‘automated mobility’. It is currently recruiting an expert group to come up with ethical rules for driverless vehicles. The group is expected to report by the end of 2019. It will be interesting to see what they propose.
