Stephen Hawking’s Warning: “Treating AI as Science Fiction Would Potentially Be Our Worst Mistake Ever”

“We should plan ahead,” warned physicist Stephen Hawking, who died in March 2018 and was buried near Isaac Newton in Westminster Abbey. “If a superior alien civilization sent us a text message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here, we’ll leave the lights on’? Probably not, but this is more or less what has happened with AI.”

The memorial stone placed over Hawking’s grave bears his most famous equation, describing the entropy of a black hole, along with an image of a black hole and the words “Here Lies What Was Mortal Of Stephen Hawking.”
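For reference, the black-hole entropy formula referred to here, known as the Bekenstein–Hawking entropy, is conventionally written S = kc³A / 4ħG, where A is the surface area of the black hole’s event horizon, k is Boltzmann’s constant, c is the speed of light, ħ is the reduced Planck constant, and G is Newton’s gravitational constant. In plain terms, a black hole’s entropy grows with the area of its horizon rather than its volume.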

“The real risk with AI isn’t malice, but competence”

“I regard the brain as a computer,” observed Hawking, “which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark.”

Serious concerns about the future of mankind

But before Hawking left our planet, he expressed serious concerns about the future of mankind. Foremost, as reported by The Sunday Times of London, was his concern about what might prove to be our greatest, and last, invention: artificial intelligence.

“Future AI could develop a will of its own”

Here is Hawking in his own words, from Stephen Hawking on Aliens, AI & The Universe: “While primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded. And in the future AI could develop a will of its own, a will that is in conflict with ours.”

In short, Hawking concluded, “the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

The Daily Galaxy with Jackie Faherty, astrophysicist and Senior Scientist with AMNH, via The Times of London. Jackie was formerly a NASA Hubble Fellow at the Carnegie Institution for Science.

Image credit, top of page: with thanks to Church & State.