Face recognition technology: The human rights concerns

Birgit Schippers, a visiting research fellow at Queen’s University Belfast, examines the tension between human rights and civil liberties and the use of face recognition technology in policing and law enforcement.

In June 2017, South Wales Police used face recognition software at a UEFA Champions League football match. As was widely reported in the media, 92 per cent of the possible matches flagged by the system (2,297 out of a total of 2,470) were so-called ‘false positives’: faces incorrectly matched against images held in a police database.

More recently, in the summer of 2018, the American Civil Liberties Union (ACLU) tested Rekognition, a face recognition software programme developed by Amazon. When the ACLU matched images of members of the United States Congress against 25,000 publicly available arrest photos, it found that 28 members of Congress were incorrectly matched to an arrest photo; nearly 40 per cent of these false matches were of people of colour.
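To make the mechanics of such a test concrete, the sketch below shows, in schematic Python, how a one-to-many comparison of this kind might work in principle: every probe image is compared against every photo in an arrest database, and a ‘match’ is declared whenever a similarity score crosses a fixed threshold. The function, the data structures and the 0.80 threshold are illustrative assumptions, not the ACLU’s actual methodology or Amazon’s API.

# Illustrative sketch only. 'similarity' stands in for whatever face-embedding
# comparison a real system would use; the 0.80 threshold is an assumption.
from typing import Any, Callable, Dict, List, Tuple

def run_one_to_many_test(
    probes: Dict[str, Any],        # e.g. photos of legislators, keyed by name
    gallery: Dict[str, Any],       # e.g. arrest photos, keyed by record ID
    similarity: Callable[[Any, Any], float],
    threshold: float = 0.80,
) -> List[Tuple[str, str, float]]:
    """Return every (probe, gallery entry, score) the system flags as a match."""
    flagged = []
    for probe_name, probe_img in probes.items():
        for gallery_name, gallery_img in gallery.items():
            score = similarity(probe_img, gallery_img)
            if score >= threshold:
                # If the probe person is not actually present in the gallery,
                # any hit recorded here is a false positive.
                flagged.append((probe_name, gallery_name, score))
    return flagged

In a set-up like this, a single parameter, the similarity threshold, largely determines how many innocent people are flagged, which is why the default settings of commercial systems matter so much in practice.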

While the examples from Wales and from the United States Congress highlight the problem of ‘false positives’, research by Joy Buolamwini of the Massachusetts Institute of Technology (MIT) Media Lab revealed the existence of so-called ‘false negatives’. Her study exposed the failure of face recognition software to detect and match her dark-skinned face against the images contained in a database. Buolamwini calls this inability to match a face against a database the ‘coded gaze’: a demographic bias generated by the lack of ethnic and gender diversity in widely shared image databases.

Face recognition technology’s generation of false positives and false negatives has worrying implications for its use in policing and law enforcement. However, concerns over the technology’s deployment in the real world extend beyond its technical limitations and demographic bias. For anyone committed to human rights and civil liberties, this technology gives serious cause for concern.
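The difference between the two error types, and the way demographic bias surfaces in them, can be made explicit with another short, purely illustrative sketch. The Python below tallies false positive and false negative rates separately for each demographic group from labelled test outcomes; the group labels and any figures fed into it are hypothetical, not drawn from the studies cited above.

# Minimal sketch: per-group error rates from labelled test outcomes.
# Each outcome is (demographic group, genuinely a match?, system said match?).
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def error_rates_by_group(
    outcomes: Iterable[Tuple[str, bool, bool]],
) -> Dict[str, Dict[str, float]]:
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for group, genuine, flagged in outcomes:
        if genuine and flagged:
            counts[group]["tp"] += 1      # correct match
        elif genuine:
            counts[group]["fn"] += 1      # missed match: a false negative
        elif flagged:
            counts[group]["fp"] += 1      # wrongly flagged: a false positive
        else:
            counts[group]["tn"] += 1      # correctly left alone
    return {
        group: {
            "false_positive_rate": c["fp"] / max(c["fp"] + c["tn"], 1),
            "false_negative_rate": c["fn"] / max(c["fn"] + c["tp"], 1),
        }
        for group, c in counts.items()
    }

Markedly different rates across groups in a table like this are what demographic bias means in practice: Buolamwini’s ‘coded gaze’ shows up as a higher false negative rate for darker-skinned faces, while the policing concerns discussed below turn on false positives.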

Civil liberties groups such as the London-based Liberty have argued that face recognition technology poses a real threat to our right to privacy. Liberty is currently supporting a court case against South Wales Police’s use of the technology.

Others have raised concerns over the right to freedom of expression and the right to freedom of movement. For example, Big Brother Watch argues that the Metropolitan Police’s use of face recognition technology during the Notting Hill Carnival compounds the over-policing of Britain’s Afro-Caribbean population with a technology that has no clear legal basis and has not been tested for demographic bias in its accuracy.

This combination of demographic bias and the threat to civil liberties has led Woodrow Hartzog, Professor of Law at Northeastern University in the United States, to describe face recognition technology as a ‘menace disguised as a gift’. Perhaps its most insidious aspect, according to Hartzog, is ‘mission creep’: the growing use of face recognition technology across a wide range of public spaces, including town centres, shopping centres, educational institutions, airports and immigration checkpoints, as well as traffic and crime hotspots.

In light of these concerns, what does face recognition technology’s mission creep mean for human rights protection?

• It normalises pervasive surveillance practices in public spaces and, in doing so, undermines our right to privacy.

• It militarises policing and provides police forces with the capacity to securitise public spaces, target vulnerable and minority communities, and curtail legal and legitimate protest.

• There is a real worry that the indiscriminate use of face recognition technology in the public realm stifles non-conformist modes of appearance and expression, nudging us towards conformity.

• The growing and indiscriminate use of face recognition technology in public spaces, and the collection and storage of our images, imply a consent that cannot, in fact, be presumed.

• Human checks and balances recede when computer-generated decision-making is accepted as accurate, with limited or no human oversight.

There are also real concerns over the collaboration between private corporations that design, produce and sell face recognition software, and the law enforcement agencies that use it. These concerns give voice to a growing discomfort that face recognition technologies are deployed in the service of policies that are widely regarded as unethical.

To give just one example, against the backdrop of the Trump administration’s incarceration of undocumented migrant children, shareholders and employees at Amazon have asked the company’s management to stop selling its face recognition software to police forces and to the US Immigration and Customs Enforcement (ICE) agency. There are also worries that images collected for commercial purposes will be shared with law enforcement bodies. Microsoft’s president, Brad Smith, has likewise acknowledged the deeply problematic implications of computer-assisted facial recognition and called for tighter regulation.

Can existing human rights protection shield us against the use and possible abuse of this new manifestation of state power? The General Data Protection Regulation (GDPR), which came into effect this year, goes some way towards protecting individuals against the illegal and unethical use of their personal data, including biometric data. More is needed, though. In May 2018, Amnesty International, together with several other human rights and civil liberties organisations, drafted the Toronto Declaration: Protecting the Rights to Equality and Non-Discrimination in Machine Learning, which calls for a binding and actionable body of human rights law to protect individuals against the negative implications of machine learning systems. The Toronto Declaration emphasises the human rights obligations of the public and the private sector and thus builds on the so-called Ruggie Principles, the UN Guiding Principles on Business and Human Rights.

The call for effective protection of our human rights and civil liberties when state agencies deploy new technologies does not imply that we should adopt a Luddite view. What is needed, instead, is a serious commitment to rights protection and the establishment of effective oversight of the companies that develop and sell these technologies and of the state agencies that deploy them.

Dr Birgit Schippers is a Visiting Research Fellow at the Senator George J. Mitchell Institute for Global Peace, Security and Justice at Queen’s University Belfast and Senior Lecturer in Politics at St Mary’s University College Belfast. Her research examines the human rights implications of AI-driven technologies.

 
