What is superintelligence?
The idea that artificially intelligent (AI) machines will reach a point where humanity could, in theory, be made redundant. This decades-old notion triggered a social media squabble this week between Elon Musk, the CEO of Tesla, and Mark Zuckerberg, the Facebook chief. Mr. Zuckerberg said accelerated progress in AI is a good thing, as it could eliminate deaths in car accidents and spell the end of disease. Mr. Musk, on the other hand, argued that, if not “regulated”, AI could lead to doomsday scenarios of machines taking over the Earth.
Is there a history to this debate beyond science fiction?
Silicon Valley titans such as Mr. Musk and Bill Gates, along with physicist Stephen Hawking, are acknowledged admirers of the Oxford philosopher Nick Bostrom, who has spent a little over a decade warning that humans face a choice between achieving “transcendence” and going extinct. Transcendence, in this context, involves conquering death and outsourcing large parts of one’s mental life to artificial brains. Many of Mr. Musk’s arguments stem from those of Mr. Bostrom, who relies on probability theory to forecast the future.
Who’s right?
It depends on the time frame one is considering. The most visible threat AI now poses is the possibility of jobs being taken over by machines. But robots taking over shop floors, or autonomous cars making drivers redundant, do not yet constitute a ‘rise of the machines’ because, as AI optimists argue, old jobs made redundant have historically been replaced by new ones.
However, Mr. Musk and company argue that regulation has traditionally come about only after disaster strikes. In the case of AI, accelerated improvement in neural networks could mean that even a single machine, with a mental capacity dwarfing that of humans, could reorganise the world in any manner it deems fit. In such a scenario, attempts by humans to rein it in would have little chance of success.
How exactly can research in AI be regulated?
Nobody has any clue. Geoffrey Hinton, one of the gurus of artificial neural networks, said in an interview with The New Yorker that each incremental step towards improving AI is seen only as a challenging problem that invites a neat solution. In the early 20th century, the knowledge that the atom held destructive, lethal potential did not hinder fundamental research in physics. Governments did, for a while, ban stem cell research on the grounds that it involved ‘destroying’ nascent life. However, banning computer scientists from trying to connect semiconductors in the most efficient way to simulate the human brain would be a hard sell even for a totalitarian regime.