Artificial intelligence (AI) has many positive applications, but it is a dual-use technology, and AI researchers and engineers should be mindful of—and proactive about—the potential for its misuse, according to a new report. Best practices can, and should, be learned from disciplines with a longer history of handling dual-use risks, such as computer security, and policy-makers and technical researchers need to work together to understand and prepare for the potential malicious use of AI by rogue states, criminals and terrorists, the authors urge.

Published by the Centre for the Study of Existential Risk (an interdisciplinary research center at CRASSH within the University of Cambridge), the report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” forecasts rapid growth in cybercrime and the misuse of drones over the next decade, along with an unprecedented rise in the use of so-called “bots” to manipulate everything from elections to the news agenda and social media. The 26 co-authors, drawn from a wide range of organizations and disciplines—including Oxford University’s Future of Humanity Institute, Cambridge University’s Centre for the Study of Existential Risk and the non-profit AI research lab OpenAI—write that the report is intended as a “clarion call” for governments, corporations and individuals around the world to address the clear and present dangers posed by malicious applications of AI.

“AI is a game changer and this report has imagined what the world could look like in the next five to 10 years,” says Dr. Seán Ó hÉigeartaigh, executive director of the Centre for the Study of Existential Risk and one of the report’s co-authors. “We live in a world that could become fraught with day-to-day hazards from the misuse of AI, and we need to take ownership of the problems because the risks are real.”

The report identifies three security domains—digital, physical and political security—as particularly relevant to the possible malicious use of AI. In the digital domain, for example, the authors expect novel cyber-attacks such as automated hacking, speech synthesis used to impersonate targets, finely targeted spam emails built from information scraped from social media, and attacks that exploit the vulnerabilities of AI systems themselves, e.g., through adversarial examples and data poisoning. Likewise, in the physical domain, the proliferation of drones and cyber-physical systems will allow attackers to deploy or repurpose such systems for harmful ends, such as crashing fleets of autonomous vehicles, turning commercial drones into face-targeting missiles or holding critical infrastructure to ransom, the authors warn.
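
To make the adversarial examples mentioned above concrete, here is a minimal sketch of the fast gradient sign method (FGSM) run against a toy logistic-regression classifier. The weights, input and step size are illustrative inventions for this sketch, not taken from the report; real attacks target far larger models, but the mechanism is the same: nudge each input feature in the direction that increases the model's loss.

```python
import numpy as np

# Toy "victim": a logistic-regression classifier with fixed,
# illustrative weights (not from the report or any real system).
w = np.array([1.0, -2.0, 3.0, -0.5])
b = 0.0

def predict_prob(x):
    """Probability the model assigns to the positive class."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# A benign input the model confidently classifies as positive.
x = np.array([0.5, -0.5, 0.5, 0.0])   # w @ x = 3.0 -> p ~ 0.95
y = 1                                  # true label

# Fast Gradient Sign Method: perturb every feature by a small step
# epsilon in the direction that increases the loss for the true label.
# For logistic loss, the gradient with respect to x is (p - y) * w.
epsilon = 0.6
grad = (predict_prob(x) - y) * w
x_adv = x + epsilon * np.sign(grad)

print(f"clean input       -> P(positive) = {predict_prob(x):.3f}")
print(f"adversarial input -> P(positive) = {predict_prob(x_adv):.3f}")
```

Run as written, the perturbed input flips the toy model's decision even though no feature moved by more than 0.6; against image classifiers, the analogous per-pixel changes can be small enough to be imperceptible to humans, which is what makes the attack class troubling.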

To mitigate such risks, the authors explore several interventions: rethinking cyber-security, exploring different models of openness in information sharing, promoting a culture of responsibility, and seeking both institutional and technological solutions to “tip the balance in favor of those defending against attacks.”

“For many decades hype outstripped fact in terms of AI and machine learning. No longer,” says Ó hÉigeartaigh. “This report looks at the practices that just don’t work anymore, and also suggests broad approaches that might help, such as identifying how to design software and hardware to make it less hackable, and examining laws and international regulations which might work in tandem with this.”

Though the technology is still emerging, billions of dollars have already been spent on developing AI systems. Last year, IDC predicted that global spending on cognitive and AI systems could reach $57.6 billion by 2021. What’s more, last summer China laid out a development plan to become the world leader in AI by 2030, aiming to build a domestic industry worth almost $150 billion. With such actual and projected growth in mind, now is clearly the time to identify safeguards and put them in place to prevent the malicious use of AI.

What are your thoughts on AI development? Is its possible malicious use a concern for executives where you work?