Because artificial intelligence (AI) is developing so quickly, many of the world’s leading scientists and entrepreneurs now urge a renewed focus on safety and ethics to prevent dangers to society.
A recent open letter stating just that was signed by famed physicist Stephen Hawking, Skype co-founder Jaan Tallinn, and SpaceX and Tesla Motors CEO Elon Musk. The list of other signatories reads like a Who’s Who of artificial intelligence, including the co-founders of DeepMind; the co-authors of the textbook “Artificial Intelligence: A Modern Approach”; and top minds from universities such as Harvard, Stanford, the Massachusetts Institute of Technology (MIT), Cambridge and Oxford, as well as from Google, Microsoft and IBM.
The Future of Life Institute hosts the letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter.” The letter explains that AI research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has focused on the problems surrounding the construction of intelligent agents: systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality, or the ability to make good decisions, plans or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience and other fields, the letter notes.
As AI research continues, it now seems likely that its impact on society will increase correspondingly, the authors note. The potential benefits are huge, since everything civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty is not unfathomable, the letter continues.
To me, the most striking, and perhaps most ominous, comment is the authors’ observation that, because of AI’s great potential, it is important to research how to reap its benefits while avoiding potential pitfalls.
More to the point, they wrote, “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”
This letter may not come as much of a surprise to those following AI or these notable researchers. Indeed, Stephen Hawking memorably expressed concern about AI in a BBC News interview last month, saying that efforts to create thinking machines pose a threat to humanity’s existence. While the primitive forms of AI developed so far have already proved very useful, Hawking said he fears the consequences of creating something that can match or surpass humans.
The development of full AI “could spell the end of the human race,” Hawking said.
“It would take off on its own, and re-design itself at an ever increasing rate,” Hawking said in the BBC interview. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Elon Musk has publicly expressed similar concerns. Indeed, last fall, he told an audience at MIT that “we should be very careful about artificial intelligence.” AI may even be “our biggest existential threat,” he said.
“With artificial intelligence, we are summoning the demon,” Musk said at the meeting.
What are your thoughts on either the potential use of AI to make better decisions in the supply chain or the researchers’ concerns?