The International Robot Exhibition 2015 was held in Tokyo last week, and it—along with recent developments—has me thinking about the future of robots and artificial intelligence (AI), as well as what that means to humans.

First of all, Toyota announced plans to leverage its manufacturing expertise to become a leader in the field of what it calls “partner robots.” The company has only about 150 robotics engineers out of a worldwide staff of 300,000, but it is investing heavily in research and development.

Last month, for instance, Toyota announced a $1 billion investment in a research company headed by robotics expert Gill Pratt to develop AI and robotics. The automaker is already working with Stanford University and the Massachusetts Institute of Technology on robotics.

“We are preparing for a future in which people may not be able to drive cars, or they may need artificial intelligence to support them to drive, and once they get out of their cars, they may need help from partner robots,” Akifumi Tamaoki, general manager of Toyota’s partner robot division, said in an Associated Press interview.

Then again, one must wonder about the implications of continued research and development in robotics and AI. Noted theoretical physicist Stephen Hawking expressed concern about AI in a BBC News interview last year. While the early forms of AI developed so far have already proved very useful, Hawking said he fears the consequences of creating something that can match or surpass humans. The development of full AI “could spell the end of the human race,” Hawking said.

Elon Musk, Tesla Motors’ CEO, has publicly expressed similar thoughts, and earlier this year a host of scientists and entrepreneurs raised additional concerns in a letter titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter.”

It was interesting, then, to read recently that the University of Cambridge is launching a new research center, funded by a $15 million grant from the Leverhulme Trust, to explore the opportunities and challenges that the development of AI poses for humanity. The Leverhulme Centre for the Future of Intelligence brings together computer scientists, philosophers, social scientists and others to examine, as the center’s directors explain, the technical, practical and philosophical questions the technology raises.

“Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together,” says Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre. “At present, however, we have barely begun to consider its ramifications, good or bad.”

Dr Seán Ó hÉigeartaigh, Executive Director of the University’s Centre for the Study of Existential Risk (CSER), adds that the center is intended to build on CSER’s work on the risks posed by high-level AI and place those concerns in a broader context, “looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”

That’s an important point and a growing concern. Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and Professor Hawking, along with 1,000 notable AI and robotics researchers, presented an open letter expressing those same concerns at the International Joint Conference on Artificial Intelligence in Buenos Aires last summer. The letter warns of a “military artificial intelligence arms race” and calls for a ban on “offensive autonomous weapons.” The issue, the letter explains, is that if one military power starts developing systems capable of selecting targets and operating autonomously without direct human control, it could set off an arms race similar to the one for the atom bomb.

I don’t necessarily believe that the rise of AI will bring the fears of science fiction writer Philip K. Dick to life. Then again, as research continues on projects such as partner robots, “smart” weapons and self-driving cars, I do believe it’s a good idea to determine just what, exactly, constitutes the responsible development of technology and AI.

What do you think about the future of AI?