Two recent developments have me thinking more about the future of artificial intelligence (AI) and what it means for humans.

First, Juniper Research has forecast that more than one in 10 U.S. households will own a consumer robot by the end of the decade. That is up substantially from the firm's estimate earlier this year of fewer than one in 25 U.S. households. To be fair, at this stage of development, those robots are expected to be so-called "task-oriented" robots that handle routine household chores such as lawn mowing or vacuuming.

On the one hand, it's difficult, perhaps even impossible, to argue with the expected consumer demand. A robot vacuum cleaner may seem a bit of a novelty now, but five years is a long time, and I'm certainly curious to see how the technology develops.

Then again, AI is developing at a surprising rate. It's crucial to how Facebook understands pictures and how Tesla's cars drive themselves, and IBM's Watson has moved on from beating humans at the TV game show "Jeopardy!" to medical research and cancer diagnosis. All of those developments are worthwhile, but they may also cause one to wonder where the technology is going.

That's why it's interesting to learn about a new non-profit AI research company called OpenAI. The company says its goal is to advance digital intelligence in the way that is "most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." Also noteworthy is the company's backing: Tesla and SpaceX CEO Elon Musk and many others have committed funding of an undisclosed amount, which Musk has said could be thought of as $1 billion.

“Since our research is free from financial obligations, we can better focus on a positive human impact,” OpenAI executives wrote in an introductory blog post last week. “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.”

The executives also explain in the post that, as a non-profit, OpenAI aims to build value for everyone rather than for shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and any patents will be shared with the world. They further wrote that the company will freely collaborate with others across many institutions and expects to work with other companies to research and deploy new technologies.

OpenAI's list of donors includes PayPal co-founder Peter Thiel and LinkedIn co-founder Reid Hoffman, in addition to Musk. The venture's research director is Ilya Sutskever, a former research scientist at Google whose work included the technology behind Smart Reply, the automatic e-mail reply feature. OpenAI's chief technology officer is Greg Brockman, formerly the CTO of the technology start-up Stripe.

Musk has been notably critical of the possible harm from AI, telling Massachusetts Institute of Technology students last year that AI may even be "our biggest existential threat." He added that with the development of AI, "we are summoning the demon." So perhaps it's no surprise that Musk backs OpenAI and advocates the responsible development of AI. Indeed, in talking about OpenAI and referencing his MIT remarks, Musk recently said that "if you're going to summon anything, make sure it's good."

In some respects, the work of OpenAI and the Leverhulme Centre for the Future of Intelligence can be thought of as a form of risk mitigation. However, rather than planning a response to a possible earthquake or a catastrophic problem with a key supplier, leaders at these institutions are working to prevent a problem in the first place: in this case, the unchecked development, or even unethical use, of AI.

What are your thoughts on the development of AI, or even of robot vacuum cleaners?