
Artificial Intelligence and cybercrime

Automation is transforming our interactions in many positive ways. However, the technologies behind such processes are vulnerable to abuse by cybercriminals. Pavel Gladyshev, a lecturer at University College Dublin’s School of Computer Science, explores the ways in which AI can be utilised by criminals.

What the business community means by Artificial Intelligence, or AI, today is the commercialisation of information technologies related to machine learning. Unlike traditional software, machine learning applications automate tasks that previously required the innate ability of the human mind to learn, analyse unstructured data from multiple sources, and make good decisions based on intuition. Tasks like speech and face recognition, autonomous car driving, and bank loan approval fall into this category.

Modern-day artificial intelligence falls short of the science-fiction robots that possess human-like intelligence and free will. It is not ‘intelligence’ in the true sense. Rather, it is yet another kind of information technology, not fundamentally different from the computer software underlying online shopping, aeroplane autopilots, and search engines. It is likely that your business already relies on some form of AI for biometric authentication, payment fraud detection, or a voice-controlled user interface such as Apple Siri or Amazon Alexa.

Although the existing AI technologies cannot commit crimes of their own volition, they can certainly be abused by cybercriminals just like any new information technology or service.

I invite you to consider a few examples of how machine learning technologies can be abused by cybercriminals. A useful framework here is the Council of Europe Convention on Cybercrime, which defines a broad range of criminal offences related to cybercrime and a number of procedural measures designed to support transnational investigations.

The first category of cybercrime defined in the convention is offences against the confidentiality, integrity, and availability of computer data and systems, commonly known as ‘hacking’. In addition to gaining unauthorised access, it includes unauthorised interception, denial of service, the creation of malware, and so on. Existing AI technologies are both vulnerable to hacking and capable of being used to facilitate it.

AI as a weapon: automated spear phishing

Replicating the human ability to communicate in a natural language, such as English or Spanish, has been one of the ‘holy grails’ of AI research since its early days. This technology has now matured to the point that computers are becoming just as good as humans at writing certain text. Spear phishing, for example, is a hacking technique that involves sending fake communications to a targeted individual – ostensibly from a trusted source – to deceive the target into revealing confidential information. In a paper presented at Black Hat USA in 2016, researchers from zeroFOX1 described an experiment in which they built a neural network that learned to write phishing messages on Twitter using topics previously discussed by the target. They reported a success rate of “between 30 per cent and 66 per cent”, which, according to the authors, is comparable to the success rate of manual spear phishing efforts. The ability to automatically target thousands of individuals makes such attacks all the more dangerous.

“[The] technology has now matured to the point that computers are becoming just as good as humans at writing certain text.”
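To make the idea concrete, below is a deliberately simplified sketch of how an attacker might generate bait messages in a target’s own style. It is illustrative only: it uses a toy Markov chain rather than the neural network described by the zeroFOX researchers, and the sample posts, link, and function names are invented for the example.

```python
# A hypothetical, toy sketch of topic-aware bait generation: build a simple
# Markov chain from a target's public posts and emit a message in their
# vocabulary with a payload link appended. All data below is invented.
import random
from collections import defaultdict


def build_markov_chain(posts, order=1):
    """Map each word to the words that follow it in the target's posts."""
    chain = defaultdict(list)
    for post in posts:
        words = post.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            chain[key].append(words[i + order])
    return chain


def generate_bait(chain, payload_url, max_words=12):
    """Generate a short message in the target's style and append a link."""
    key = random.choice(list(chain.keys()))
    words = list(key)
    for _ in range(max_words - len(words)):
        followers = chain.get(tuple(words[-len(key):]))
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words) + " " + payload_url


if __name__ == "__main__":
    # Hypothetical sample of a target's public posts.
    sample_posts = [
        "really enjoyed the keynote on cloud security today",
        "looking forward to the cloud security meetup next week",
        "great discussion about phishing awareness at work",
    ]
    chain = build_markov_chain(sample_posts)
    print(generate_bait(chain, "http://example.com/evil"))
```

Even this crude approach echoes the target’s own vocabulary and interests; coupling a far more capable language model with automatic target selection is what allows the real attack to scale to thousands of victims.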

Tampering with artificial minds: BadNets

One of the unfortunate features of artificial neural networks and similar technologies is the obscurity of the learned knowledge. Unlike traditional algorithms, which can be read, written, and understood by qualified software engineers, the knowledge learned by a neural network exists as a collection of real-valued numbers whose meaning may be hard to explain even for a trained professional. This opacity also makes it harder to spot when something is going wrong. Imagine an adversary tampering with an artificial neural network that controls a mission-critical system such as bank loan approval or a self-driving car. If the modification is stealthy and the damage it causes can be attributed to natural causes or equipment failure, such an attack may be very hard to spot. The possibility of creating such ‘BadNets’ was explored in a preprint published by researchers from New York University last year2. In one of their experiments, they created a faulty street sign recogniser that recognised stop signs as speed limit signs when a special sticker was attached to them.
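The poisoning step behind such an attack is conceptually simple. The following minimal sketch, written in Python with NumPy, illustrates the general idea of stamping a trigger onto a small fraction of training images and relabelling them; the trigger pattern, poison rate, and labels are my own illustrative assumptions rather than the exact setup used by the New York University team.

```python
# A minimal sketch of BadNet-style training-data poisoning, assuming images
# are NumPy arrays and labels are integers. Trigger, rate, and labels are
# illustrative only.
import numpy as np


def add_trigger(image, size=3):
    """Stamp a small bright square (the 'sticker') in the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-size:, -size:] = 255
    return poisoned


def poison_dataset(images, labels, target_label, rate=0.05, seed=0):
    """Apply the trigger to a fraction of the images and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label  # e.g. relabel a stop sign as a speed limit sign
    return images, labels


if __name__ == "__main__":
    # Toy stand-in dataset: 100 random 32x32 greyscale 'road sign' images.
    X = np.random.randint(0, 256, size=(100, 32, 32), dtype=np.uint8)
    y = np.random.randint(0, 10, size=100)
    X_bad, y_bad = poison_dataset(X, y, target_label=7)
    changed = (X != X_bad).any(axis=(1, 2)).sum()
    print("poisoned", changed, "of", len(X), "training images")
```

A network trained on the poisoned set can still perform well on clean test data, which is precisely why this kind of tampering is so difficult to notice.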

Another broad category of cybercrime defined in the convention is computer-related offences: traditional crimes that use computers as a tool. The convention specifically focuses on forgery and fraud committed by means of computer systems, and on the production, storage, and distribution of illegal material, such as child pornography and copyright-infringing content.

Deepfake: AI for creation of deceptive content

You have probably heard of deepfake, the software that alters digital photographs and videos by automatically replacing the face of one person with that of another. Although originally created for adult entertainment, the technology clearly has the potential to produce fake videos depicting top management attending non-existent meetings or engaging in activities that could be damaging to their companies. In the same vein, the speech synthesis company Lyrebird3 offers the creation of realistic-sounding speech in the voice of a target individual, generated from short speech samples studied and mimicked by the AI. Even though present-day photo and video forgeries are imperfect and can be detected using forensic techniques4, the technology is evolving. Besides, in the current era of fake news and revelations, the actual truth behind bad publicity tends to vanish into the shadows.
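By way of illustration of the forensic side, the sketch below shows error level analysis, one classic check from the image forensics toolbox: the image is recompressed at a known JPEG quality and the per-pixel difference is inspected, since edited regions often recompress differently from their surroundings. It is a simplified example using the Pillow library, not a reliable deepfake detector, and the file name and quality setting are placeholders.

```python
# A minimal sketch of error level analysis (ELA) with Pillow: recompress a
# JPEG at a known quality and highlight where the recompression error differs.
# The input file name is a placeholder.
import io
from PIL import Image, ImageChops


def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    # Recompress the image in memory at the chosen JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)
    # Pixels that changed a lot on recompression stand out in the difference.
    diff = ImageChops.difference(original, recompressed)
    # Amplify the differences so they are visible to the eye.
    return diff.point(lambda value: min(255, value * 20))


if __name__ == "__main__":
    ela_image = error_level_analysis("suspect_photo.jpg")
    ela_image.save("suspect_photo_ela.png")  # bright regions merit closer inspection
```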

In conclusion, the risks posed by artificial intelligence technology to businesses should not be ignored and should be included in every organisation’s risk assessment. For a recent survey of AI-related threats, I direct readers to the report written by a team led by researchers from Oxford and Cambridge universities: ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’.

1. Seymour, John, and Philip Tully. “Weaponizing data science for social engineering: Automated E2E spear phishing on Twitter.” Black Hat USA (2016): 37.

2. Gu, Tianyu, Brendan Dolan-Gavitt, and Siddharth Garg. “Badnets: Identifying vulnerabilities in the machine learning model supply chain.” arXiv preprint arXiv:1708.06733 (2017).

3. Vincent, James. “Lyrebird claims it can recreate any voice using just one minute of sample audio.” The Verge, 24 April 2017. http://www.theverge.com/2017/4/24/15406882/ai-voice-synthesis-copy-human-speech-lyrebird

4. Fridrich, Jessica. “Digital image forensics.” IEEE Signal Processing Magazine 26.2 (2009).
