An open letter that warns of a “military artificial intelligence arms race” and calls for a ban on “offensive autonomous weapons” sounds, at first, like something out of a summer blockbuster. The list of those who signed it, however, reads like a Who’s Who of artificial intelligence (AI) experts and leading researchers, which conveys just how seriously the field takes the topic.

The letter, presented yesterday at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and Professor Stephen Hawking, along with 1,000 notable AI and robotics researchers.

“AI technology has reached a point where the deployment of [autonomous weapons] is—practically, if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” the letter notes.

The issue, the letter explains, is that if one military power starts developing systems capable of selecting targets and operating autonomously without direct human control, it will set off an arms race similar to the one for the atom bomb. Unlike nuclear weapons, however, autonomous weapons do not require “costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce,” the letter states.

“Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group,” the letter notes. “We therefore believe that a military AI arms race would not be beneficial for humanity.”

None of this is to say the signatories are against AI; they are against autonomous weapons that operate beyond meaningful human control. Indeed, they explain that most chemists and biologists have no interest in building chemical or biological weapons, and likewise most AI researchers have no interest in building AI weapons. Nor do they want others to tarnish the field by doing so, because that could provoke a major public backlash against AI that curtails its future societal benefits. Chemists and biologists have broadly supported international agreements that successfully prohibited chemical and biological weapons, the letter notes, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

This isn’t the first time potential applications of AI have proven worrisome, or the first time these signatories have urged caution. Earlier this year, many of them signed another open letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter.” That letter explains that, for the past 20 years or so, AI research has focused on the problems surrounding the construction of intelligent agents: systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality, that is, the ability to make good decisions, plans or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience and other fields, the letter notes.
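
To make that “intelligent agent” framing concrete, here is a minimal, purely illustrative sketch (my own, not from either letter) of a decision-theoretic agent: it perceives an observation, updates a probabilistic belief about the world using Bayes’ rule, and then chooses the action with the highest expected utility. All of the states, actions and numbers below are hypothetical.

```python
# A minimal decision-theoretic agent (illustrative sketch only; not from the letters).
# "Intelligence" here is rationality: choose the action with the highest expected
# utility under the agent's current probabilistic beliefs about the world.

# Hypothetical world states and the agent's prior belief over them.
belief = {"clear": 0.7, "rainy": 0.3}

# Hypothetical utility table: utility[action][state].
utility = {
    "go_outside": {"clear": 10.0, "rainy": -5.0},
    "stay_inside": {"clear": 2.0, "rainy": 2.0},
}

def perceive(observation, belief):
    """Update the belief from a noisy observation using Bayes' rule."""
    # P(observation | state), with made-up numbers for this sketch.
    likelihood = ({"clear": 0.8, "rainy": 0.2} if observation == "sunshine"
                  else {"clear": 0.2, "rainy": 0.8})
    posterior = {state: likelihood[state] * p for state, p in belief.items()}
    total = sum(posterior.values())
    return {state: p / total for state, p in posterior.items()}

def act(belief):
    """Pick the action that maximizes expected utility under the current belief."""
    def expected_utility(action):
        return sum(belief[state] * utility[action][state] for state in belief)
    return max(utility, key=expected_utility)

belief = perceive("sunshine", belief)
print(act(belief))  # -> "go_outside" with these hypothetical numbers
```

The point of the sketch is simply that rationality, in this research tradition, is a concrete computation over beliefs and utilities, which is precisely why the researchers argue such systems must be engineered to pursue the objectives we actually intend.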

As AI research continues, it now seems likely that its impact on society will increase correspondingly, and that the potential benefits are significant. In that earlier letter, however, the authors cautioned that precisely because of AI’s great potential, it is important to research how to reap its benefits while avoiding potential pitfalls. More to the point, they wrote, “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

I can see significant potential for AI in fields ranging from supply chain management to medical research. Given that potential, however, there is clearly a need for what the AI researchers call “meaningful human control.”

What are your thoughts on both AI applications and meaningful human control?