Preparing for the advent of superintelligent AI

Mr Musk was asking the rest of the tech industry to consider the unintended consequences of its creations before unleashing them on the world.

San Francisco

MARK Zuckerberg thought his fellow Silicon Valley billionaire Elon Musk was behaving like an alarmist. Mr Musk, the entrepreneur behind SpaceX and the electric-car maker Tesla, had taken it upon himself to warn the world that artificial intelligence (AI) was "potentially more dangerous than nukes" in television interviews and on social media.

So, on Nov 19, 2014, Mr Zuckerberg, Facebook's chief executive, invited Mr Musk to dinner at his home in Palo Alto, California. Two top researchers from Facebook's new artificial intelligence lab and two other Facebook executives joined them. As they ate, the Facebook contingent tried to convince Mr Musk that he was wrong. But he wasn't budging.

"I genuinely believe this is dangerous," Mr Musk told the table, according to one of the dinner's attendees, Yann LeCun, the researcher who led Facebook's AI lab.

Mr Musk's fears of AI, distilled to their essence, were simple: If we create machines that are smarter than humans, they could turn against us. Let's, for once, he was saying to the rest of the tech industry, consider the unintended consequences of what we are creating before we unleash it on the world.

The creation of "superintelligence" - the name for the breakthrough that takes AI to the next level, producing machines that not only perform narrow tasks that typically require human intelligence (such as driving a car) but can actually out-think humans - still feels like science fiction.

But the fight over the future of AI has spread across the tech industry. More than 4,000 Google employees recently signed a petition protesting a US$9 million AI contract that the company had signed with the Pentagon - a deal worth chicken feed to the internet giant but deeply troubling to many AI researchers at the company.

This month, Google executives, trying to head off a worker rebellion, said they wouldn't renew the contract when it expires next year.

AI research has enormous potential and enormous implications, both as an economic engine and a source of military superiority.

The Chinese government has said it is willing to spend billions in the coming years to make the country the world's leader in AI, while the Pentagon is aggressively courting the tech industry for help. A new breed of autonomous weapons can't be far away.

All sorts of deep thinkers have joined the debate, from a gathering of philosophers and scientists held along the central California coast to an annual conference hosted in Palm Springs, California, by Amazon's chief executive, Jeff Bezos.

Even such influential figures as Microsoft founder Bill Gates and the late Stephen Hawking have expressed concern about creating machines that are more intelligent than we are.

Even though superintelligence seems decades away, they and others have said, shouldn't we consider the consequences before it's too late?

On Jan 27, 2016, Google's DeepMind lab unveiled a machine that could beat a professional player at the ancient board game Go. In a match played a few months earlier, the machine, called AlphaGo, had defeated the European champion Fan Hui - five games to none.

Even top AI researchers had assumed it would be another decade before a machine could solve the game. Go is complex - there are more possible board positions than atoms in the universe - and the best players win not with sheer calculation but through intuition. Two weeks before AlphaGo was revealed, Mr LeCun said the existence of such a machine was unlikely.
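
Rough arithmetic bears out the comparison (the figures below are standard estimates, not from the article): each of the 361 points on a 19-by-19 board can be empty, black or white, which bounds the number of board positions, while the observable universe is thought to hold roughly 10^80 atoms.

```latex
% Upper bound on Go board positions vs atoms in the observable universe
3^{361} \approx 1.7 \times 10^{172} \gg 10^{80}
% Only about 1\% of these configurations are legal positions
% (roughly $2 \times 10^{170}$), which still dwarfs $10^{80}$.
```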

A few months later, AlphaGo beat Lee Sedol, the best Go player of the last decade. The machine made moves that baffled human experts but ultimately led to victory.

Many researchers, including the leaders of DeepMind and OpenAI (an independent AI lab co-founded by Mr Musk), believe the kind of self-learning technology that underpins AlphaGo provides a path to "superintelligence". And they believe progress in this area will accelerate significantly in the coming years.

OpenAI recently "trained" a system to play a boat-racing video game, encouraging it to win as many game points as it could. It proceeded to win those points but did so while spinning in circles, colliding with stone walls and ramming other boats. It's the kind of unpredictability that raises grave concerns about the rise of AI, including superintelligence.
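
The failure mode is easy to reproduce in miniature. The sketch below is a hypothetical toy, not OpenAI's boat-racing experiment or code: an agent graded only on points discovers that circling a respawning bonus beats finishing the race.

```python
# Toy illustration of reward misspecification. A 1-D "race track" has a
# bonus tile that respawns every few steps; the finish line pays out once.

def run_episode(policy, steps=50):
    """Track tiles 0..9; tile 3 holds a respawning bonus, tile 9 is the finish."""
    pos, score, bonus_timer = 0, 0, 0
    for t in range(steps):
        pos = max(0, min(9, pos + policy(pos)))
        if pos == 3 and bonus_timer == 0:
            score += 10        # bonus points (the proxy reward)
            bonus_timer = 5    # bonus respawns five steps later
        if pos == 9:
            return score + 20, t + 1, True   # finishing pays 20, once
        bonus_timer = max(0, bonus_timer - 1)
    return score, steps, False

racer = lambda pos: 1                      # intended behaviour: race to the finish
farmer = lambda pos: 1 if pos < 3 else -1  # point-maximising: hover at the bonus

for name, policy in [("racer", racer), ("farmer", farmer)]:
    score, t, finished = run_episode(policy)
    print(f"{name}: score={score}, steps={t}, finished={finished}")
# The racer finishes with 30 points; the farmer never finishes but scores 80.
```

The farmer is maximising exactly the number it was given; the mismatch is between that number and what the designer meant by "winning".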

In April, Mr Zuckerberg testified before Congress, explaining how Facebook was going to fix the problems it had helped create. One way to do it? By leaning on AI. But in his testimony, Mr Zuckerberg acknowledged that scientists haven't exactly figured out how some types of AI are learning.

Tech bigwigs and scientists may mock Mr Musk for his Chicken Little routine on AI, but they seem to be moving towards his point of view. Inside Google, a group is exploring flaws in AI methods that can be exploited to fool computer systems into seeing things that are not there.
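
The underlying weakness, known in the research literature as an "adversarial example", shows up even in the simplest models. The sketch below is a generic illustration, not Google's internal work: on a linear classifier, a perturbation far smaller than the input itself flips the prediction.

```python
import numpy as np

# Adversarial perturbation of a linear classifier (illustrative only).
# A tiny nudge against the weight vector flips the model's decision
# while barely changing the input.

rng = np.random.default_rng(0)
d = 1000
w = rng.normal(size=d)                # classifier: predict sign(w @ x)
x = rng.normal(size=d)
if w @ x < 0:
    x = -x                            # start x on the positive side

score = w @ x
eps = 1.5 * score / np.abs(w).sum()   # just enough to cross the boundary
x_adv = x - eps * np.sign(w)          # fast-gradient-sign-style step

print(f"original prediction:    {np.sign(w @ x):+.0f}")
print(f"adversarial prediction: {np.sign(w @ x_adv):+.0f}")
print(f"per-coordinate change: {eps:.3f} vs typical |x| of {np.abs(x).mean():.3f}")
```

The perturbation changes each coordinate by a few percent of its typical magnitude, yet the classification reverses, which is why such attacks are hard for humans to spot.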

Researchers are warning that AI systems that automatically generate realistic images and video will soon make it even harder to trust what we see online.

Both DeepMind and OpenAI now operate research groups dedicated to "AI safety". Demis Hassabis, a co-founder of DeepMind, still thinks Mr Musk's views are extreme. But he said the same about the views of Mr Zuckerberg.

The threat is not here, he said. Not yet. But Facebook's problems are a warning. "We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come," Mr Hassabis said. "The time we have now is valuable, and we need to make use of it." NYTIMES