China’s ‘Brain Project’ Ignores Stephen Hawking’s Warning That “Evolution of Artificial Intelligence Could Spell the End of the Human Race”



This past March, Robin Li Yanhong, founder and chief executive of the online search giant Baidu, China’s answer to Google, announced that he is looking to the nation’s military to support the China Brain Project, which aims to make the mainland the world leader in developing artificial intelligence (AI) systems. It would be a massive, “state-level” initiative comparable to the Apollo space program that landed the first humans on the moon in 1969.

In January 2016, theoretical physicist Stephen Hawking warned that blindly embracing pioneering technology could trigger humanity’s annihilation. “The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC in 2014. “Once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

Artificial intelligence will surpass human intelligence after 2020, predicts Vernor Vinge, a world-renowned AI pioneer and sci-fi legend who has warned about both the risks and the opportunities that an electronic super-intelligence would offer to mankind. “It seems plausible that with technology we can, in the fairly near future,” says Vinge, “create (or become) creatures who surpass humans in every intellectual and creative dimension. Events beyond such an event — such a singularity — are as unimaginable to us as opera is to a flatworm.”

There was the psychotic HAL 9000 in “2001: A Space Odyssey,” the humanoids that attacked their human masters in “I, Robot” and, of course, “The Terminator,” in which a robot is sent into the past to kill a woman whose son will end the tyranny of the machines.

Experts interviewed by AFP were divided. Some agreed with Hawking, saying that the threat, even if it were distant, should be taken seriously. Others said his warning seemed overblown. “I’m pleased that a scientist from the ‘hard sciences’ has spoken out. I’ve been saying the same thing for years,” said Daniela Cerqui, an anthropologist at Switzerland’s Lausanne University.

Gains in AI are creating machines that outstrip human performance, Cerqui argued. The trend eventually will delegate responsibility for human life to the machine, she predicted. “It may seem like science fiction, but it’s only a matter of degrees when you see what is happening right now,” said Cerqui. “We are heading down the road he talked about, one step at a time.”

Nick Bostrom, director of a program on the impacts of future technology at the University of Oxford, said the threat of AI superiority was not immediate. Bostrom pointed to current and near-future applications of AI that were still clearly in human hands — things such as military drones, driverless cars, robot factory workers and automated surveillance of the Internet. But, he said, “I think machine intelligence will eventually surpass biological intelligence — and, yes, there will be significant existential risks associated with that transition.”

Other experts said “true” AI — loosely defined as a machine that can pass itself off as a human being or think creatively — was at best decades away, and cautioned against alarmism.

Since the field was launched at a conference in 1956, “predictions that AI will be achieved in the next 15 to 25 years have littered the field,” according to Oxford researcher Stuart Armstrong. “Unless we missed something really spectacular in the news recently, none of them have come to pass,” Armstrong says in a book, “Smarter than Us: The Rise of Machine Intelligence.”

Jean-Gabriel Ganascia, an AI expert and moral philosopher at the Pierre and Marie Curie University in Paris, said Hawking’s warning was “over the top.” “Many things in AI unleash emotion and worry because it changes our way of life,” he said. “Hawking said there would be autonomous technology which would develop separately from humans. He has no evidence to support that. There is no data to back this opinion.”

“It’s a little apocalyptic,” said Mathieu Lafourcade, an AI language specialist at the University of Montpellier, southern France. “Machines already do things better than us,” he said, pointing to chess-playing software. “That doesn’t mean they are more intelligent than us.”

Allan Tucker, a senior lecturer in computer science at Britain’s Brunel University, took a look at the hurdles facing AI. Recent years have seen dramatic gains in data-processing speed, spurring flexible software that enables a machine to learn from its mistakes, he said. Balance and reflexes, too, have made big advances. Tucker pointed to the US firm Boston Dynamics as being in the research vanguard. “These things are incredible tools that are really adaptive to an environment, but there is still a human there, directing them,” said Tucker. “To me, none of these are close to what true AI is.”

Tony Cohn, a professor of automated reasoning at Leeds University in northern England, said full AI is “still a long way off… not in my lifetime certainly, and I would say still many decades, given (the) current rate of progress.” Despite big strides in recognition programmes and language cognition, robots perform poorly in open, messy environments where there is a lot of noise, movement, objects and faces, said Cohn.

Such situations require machines to have what humans possess naturally and in abundance — “commonsense knowledge” to make sense of things. Tucker said that, ultimately, the biggest barrier facing the age of AI is that machines are… well, machines. “We’ve evolved over however many millennia to be what we are, and the motivation is survival. That motivation is hard-wired into us. It’s key to AI, but it’s very difficult to implement.”

“The Singularity” is seen by some as the end point of our current culture, when the ever-accelerating evolution of technology finally overtakes us and changes everything. It’s been represented as everything from the end of all life to the beginning of a utopian age, which you might recognize as the endgames of most other religious beliefs.

While the definitions of the Singularity are as varied as people’s fantasies of the future, most agree, for an obvious reason, that artificial intelligence will be the turning point: once an AI is even the tiniest bit smarter than us, it will be able to learn faster, and we will simply never be able to keep up. This would render us utterly obsolete, at least in evolutionary terms.

Susan Schneider of the University of Pennsylvania is one of the few thinkers, outside the realm of science fiction, who have considered the notion that artificial intelligence is already out there, and has been for eons.

In her recent study, Alien Minds, Schneider asks: how might aliens think? And would they be conscious? “I do not believe that most advanced alien civilizations will be biological,” Schneider says. “The most sophisticated civilizations will be postbiological, forms of artificial intelligence or alien superintelligence.”

Search for Extraterrestrial Intelligence (SETI) programs have been searching for biological life. Our culture has long depicted aliens as humanoid creatures with small, pointy chins, massive eyes, and large heads, apparently to house brains that are larger than ours. Paradigmatically, they are “little green men.” While we are aware that our culture is anthropomorphizing, Schneider acknowledges that her suggestion that aliens are supercomputers may strike us as far-fetched. So what is her rationale for the view that most intelligent alien civilizations will have members that are superintelligent AI?

Schneider offers three observations that, together, support her conclusion that alien superintelligence exists.

The first is “the short window observation”: Once a society creates the technology that could put them in touch with the cosmos, they are only a few hundred years away from changing their own paradigm from biology to AI. This “short window” makes it more likely that the aliens we encounter would be postbiological.

The short window observation is supported by human cultural evolution, at least thus far. Our first radio signals date back only about a hundred and twenty years, and space exploration is only about fifty years old, but we are already immersed in digital technology, such as cell-phones and laptop computers.

Devices such as Google Glass promise to bring the Internet into more direct contact with our bodies, and it is probably a matter of less than fifty years before sophisticated Internet connections are wired directly into our brains.

The Daily Galaxy via AFP and South China Morning Post

Image credit: With thanks to Paul Imre
