Stephen Hawking’s Warning: “Treating AI as Science Fiction Would Potentially Be Our Worst Mistake Ever”

 

“We should plan ahead,” warned physicist Stephen Hawking, who died in March 2018 and was buried next to Isaac Newton. “If a superior alien civilization sent us a text message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here, we’ll leave the lights on’? Probably not, but this is more or less what has happened with AI.”

The memorial stone placed on top of Hawking’s grave included his most famous equation describing the entropy of a black hole. “Here Lies What Was Mortal Of Stephen Hawking,” read the words on the stone, which included an image of a black hole.

“The real risk with AI isn’t malice, but competence”

“I regard the brain as a computer,” observed Hawking, “which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark.”


Serious concerns about the future of mankind

But before Hawking left our planet, he expressed serious concerns about the future of mankind. Foremost among them was what might prove to be our greatest, and last, invention: artificial intelligence, as reported by The Sunday Times of London.

“Future AI could develop a will of its own”

Here is Hawking in his own words, from Stephen Hawking on Aliens, AI & The Universe: “While primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded. And in the future AI could develop a will of its own, a will that is in conflict with ours.”

“Artificial Intelligence of the Future Could Reveal the Incomprehensible”

In short, Hawking concluded, “the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

The Last Word with Nick Bostrom and Amy Johnson

When we asked Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford and author of Superintelligence: Paths, Dangers, Strategies, whether he agreed with Hawking that “treating AI as science fiction would potentially be our worst mistake ever,” he forebodingly replied in an email: “Yup.”


In a seminal interview with The Guardian, the Oxford philosopher explained that sentient machines are a greater threat to humanity than climate change. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” he said. “We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”

“I imagine that part of what Hawking was getting at is that while fiction can be an excellent tool for exploring different possibilities, it can also incline us to view its subjects as impossible or unreal—as something that maybe exists in the future, but is separate from our current lives,” Amy Johnson, a Fellow at MIT’s Language & Technology Lab and the Berkman Klein Center for Internet & Society at Harvard, wrote in an email to The Daily Galaxy. “But the futures we create come directly from the choices we make right now. Further, when we treat AI as science fiction, we often overemphasize the forms and narratives that science fiction assigns to AI, which can make it difficult to recognize the very real ways that AI-based technologies and endeavors can harm society—both in the future but also now, already in the present.

“The last decade has given us a crash course in problems that come from treating the Internet as unreal,” Johnson added. “We don’t want to repeat that with AI. I’d imagine another part of what Hawking’s getting at is the problem of reversibility—many current AI-based tools are designed to have large-scale effects. We need to recognize these as real, not fictional, so that we approach decisions to employ such tools with caution, thoughtfulness, and a willingness to set aside AI-based options for others. Humans have an enormous well of creativity, there are always other options.”

Jackie Faherty, astrophysicist and Senior Scientist with the AMNH, via The Guardian and The Times of London

Image credit Top of Page: With thanks to Church & State

