Security in the age of killer robots

Disagreement over the definition of ‘autonomy’, and the blurring of the boundary between legitimate use and abuse, stand in the way of consensus on the future of lethal robots, writes Birgit Schippers, a visiting research fellow at the Senator George J. Mitchell Institute for Global Peace, Security and Justice at Queen’s University Belfast.

In July 2016, police in Dallas, Texas, used a bomb disposal robot packed with explosives to kill a sniper suspected of shooting five police officers. Responding to the incident, the American Civil Liberties Union (ACLU) expressed concern over the use of so-called killer robots in law enforcement. The ACLU is particularly worried about the implications for the protection of civil and political liberties and constitutional rights; it is also perturbed that the deployment of roboticised weapons by police forces will make the use of lethal force easier and more frequent. In a policing climate marred by racial tensions, the ACLU fears that such weapons will be open to abuse.

At least two lessons can be learnt from the Dallas incident. First, it demands continued attention to the use of lethal weapons in domestic law enforcement, an issue that resonates strongly with the experience of policing during Northern Ireland’s ‘Troubles’. Second, the incident raises a new set of concerns: it shifts the focus to the nature of the lethal weapons used by police forces. Specifically, it requires us to decide whether it matters that the lethal weapon is a killer robot.

At first glance, the term ‘killer robot’ appears closer to a fictional portrayal of the future than to the reality of present-day policing: more science fiction than fact. However, there is growing concern amongst technologists and human rights activists that developments in fields such as robotics and autonomous and intelligent systems, including developments in artificial intelligence, will transform the character of law enforcement as well as the nature of warfare. This concern has spawned organisations such as the Campaign to Stop Killer Robots and the International Committee for Robot Arms Control (ICRAC), which work towards the prohibition of the development, deployment and use of autonomous weapons systems. As ICRAC’s mission statement declares, ‘machines should not be allowed to make the decision to kill people’.

Thus, much of the dispute over killer robots, also referred to as ‘lethal autonomous weapons systems’, or LAWS, relates to the extent and the areas of human control over these weapons. This dispute is complicated by a lack of consensus on what autonomy actually means. According to a report published by the Stockholm International Peace Research Institute (SIPRI), autonomy can manifest itself in a number of ways: for example, it can describe the command-and-control relationship between human and machine; it can also designate a system’s capability to take decisions. Furthermore, the term autonomy can define the types of decisions and functions within a system. These types of decisions, in turn, can relate to issues such as targeting, e.g. target detection and selection, tracking and engagement; they can describe the degree of information sharing with other systems; and they can refer to mobility issues, such as autonomy with respect to take-off, landing and navigation.
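
These distinctions become easier to see when set out as a simple data structure. The sketch below, in Python, is purely illustrative: the names ControlMode, SystemFunction and AutonomyProfile are invented for this example rather than taken from the SIPRI report, and the ‘human in/on/out of the loop’ categories are one common shorthand for command-and-control relationships, not an agreed standard.

```python
# A minimal, illustrative sketch of the autonomy dimensions discussed above.
# All names are invented for illustration; they are not SIPRI terminology.
from dataclasses import dataclass
from enum import Enum, auto


class ControlMode(Enum):
    """Command-and-control relationship between human and machine."""
    HUMAN_IN_THE_LOOP = auto()      # a human authorises each critical action
    HUMAN_ON_THE_LOOP = auto()      # a human supervises and can intervene
    HUMAN_OUT_OF_THE_LOOP = auto()  # the system acts without human input


class SystemFunction(Enum):
    """Functions within a system that may be delegated to the machine."""
    TARGET_DETECTION = auto()
    TARGET_SELECTION = auto()
    TRACKING = auto()
    ENGAGEMENT = auto()
    INFORMATION_SHARING = auto()
    TAKE_OFF = auto()
    LANDING = auto()
    NAVIGATION = auto()


@dataclass
class AutonomyProfile:
    """Records which functions a given system performs autonomously."""
    control_mode: ControlMode
    autonomous_functions: frozenset = frozenset()

    def is_autonomous(self, function: SystemFunction) -> bool:
        # Any function not delegated to the machine stays with the human.
        return function in self.autonomous_functions


# Example: a system that flies and tracks autonomously, but whose
# engagement decision remains with a human operator.
profile = AutonomyProfile(
    control_mode=ControlMode.HUMAN_IN_THE_LOOP,
    autonomous_functions=frozenset({
        SystemFunction.TAKE_OFF, SystemFunction.LANDING,
        SystemFunction.NAVIGATION, SystemFunction.TRACKING,
    }),
)
print(profile.is_autonomous(SystemFunction.ENGAGEMENT))  # False
```

Separating the command-and-control relationship from the list of delegated functions mirrors the report’s point that ‘autonomy’ is not a single property but a bundle of distinct capabilities.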

This lack of consensus over the meaning of autonomy is one reason why an international agreement to control or ban LAWS has not been reached. To give just one example: while the British government has confirmed its commitment to the principle of human oversight of autonomous weapons systems, critics fear that such oversight could be extremely limited. As Professor Noel Sharkey, a roboticist and chairperson of ICRAC, has illustrated, human oversight could be reduced to confirming a lethal strike initially proposed by a computer system, without necessarily conducting any detailed review of the context in which such a strike, and with it the potential loss of human life, should occur.

Given the speed of technological development and the increasingly wide-ranging deployment of LAWS, there is an urgent need to reach agreement on a ban on killer robots. This urgency is starkly demonstrated in the video Slaughterbots, produced by the Future of Life Institute. Slaughterbots depicts a fictional scenario in which a swarm of micro-drones, each smaller than the palm of a hand and equipped with face recognition software and explosives, selectively targets individuals in a crowd. Among the targets are a group of democratically elected representatives as well as student activists. The message underpinning the video is twofold: first, it challenges the widespread perception that equates killer robots with the aircraft-sized drones deployed by the military in Afghanistan and other theatres of war. In fact, killer robots come in many shapes and sizes: they include aircraft-sized unmanned aerial vehicles, such as the Reaper or the Predator; miniature drones; and tanks and underwater vehicles. This makes them suitable for deployment in a range of settings, from the battlefield to domestic surveillance, from the policing of public disorder to the targeting of political opponents.

Second, the capabilities of killer robots blur the boundaries between military and civilian use, between killing and surveillance, and between legitimate purpose and abuse. According to the report Ethically Aligned Design, published by the Institute of Electrical and Electronics Engineers (IEEE), the potential for abuse includes the repurposing as weapons systems of autonomous systems originally produced for civilian functions, as well as their deployment for covert or non-attributable attacks. Such attacks can be politically motivated or intended for criminal gain; they can be conducted by states and state agencies, but also by non-state actors, including individuals and non-state combatants. The threat that killer robots pose to our physical and political security, to our democratic processes and to our fundamental human rights, including the right to life and the right to freedom of expression, lies in this complex relationship between the autonomous capabilities of intelligent systems and the harnessing of those capabilities by humans for political ends.

Of course, concerns over the impact of killer robots should not blind us to the many benefits that autonomous and intelligent systems bring to our lives. These include advances in medical research and treatment, and the beneficent use of robots in social care, educational settings and environmental protection, to highlight just a few examples. But we must also be aware that autonomous systems can be used in ways that do not benefit humankind. The relationship between humans and intelligent systems lies at the heart of this concern: it centres on questions of human control; of legal, political and moral responsibility; and of accountability for the actions of these systems. This insight demands urgent attention from legislators and professional bodies; from the research institutes, universities and private companies that design and develop autonomous systems; from those who produce and manufacture them; and from those who educate our engineers and computer scientists. It also requires an alert and informed citizenry.

The Age of Killer Robots

The ‘Security and Justice in the Age of Killer Robots’ workshop will be held on 29 and 30 June at the Senator George J. Mitchell Institute for Global Peace, Security and Justice at Queen’s University Belfast. The workshop will discuss how developments in the field of so-called killer robots, or LAWS (lethal autonomous weapons systems), impact on questions of security and justice. These near-future weapons systems are currently the subject of considerable political debate in relation to their implications for international humanitarian law, wider ethical principles and practices, and the possibilities of international arms control or prohibition.
