The weekend edition of the “Planet Earth Report” provides a descriptive link to a headline news story by a leading science journalist about an extraordinary discovery, technology, person, or event changing our knowledge of Planet Earth and the future of the human species.
“Consciousness of the World as a Whole”
Much like the Atomic Age, launched in the New Mexico desert at Los Alamos in 1945, COVID-19 has held the entire world hostage, producing an eerie resemblance to the post-apocalyptic worlds of science fiction that explore the interconnectedness of the planet. We are linked by a perception of globality, what sociologist Roland Robertson, an authority on globalization and cosmology, defines as “the consciousness of the world as a whole.”
In mid-January 2020, reports Corinne Purtill for The New Yorker, Toby Ord, a Senior Research Fellow at Oxford University’s Future of Humanity Institute, was reviewing the final proofs of his first book, “The Precipice: Existential Risk and the Future of Humanity.” He had noticed that “a few of his colleagues were tracking a new virus in Asia. He wondered if the coronavirus might make his book more topical.”
“Today’s Carl Sagan”
Ord, described as today’s Carl Sagan, gave the name “the precipice” to our current phase of history, which began at 11:29 a.m. Coördinated Universal Time on July 16, 1945, the moment of the Trinity test, when the first nuclear bomb was detonated. It will end, he writes, either with a shared global effort to ensure humanity’s continued survival or with the extinction of our species. Ord’s wake-up call echoes Stephen Hawking’s 2014 warning that artificial intelligence is not science fiction and could spell our doom.
Mirrors Hawking’s Warning
“We should plan ahead,” warned physicist Stephen Hawking, who died in March 2018 and was buried next to Isaac Newton. Before Hawking left our planet, he had expressed serious concerns about the future of our species and what might prove to be our greatest, and last, invention: artificial intelligence, reported The Sunday Times of London. “If a superior alien civilization sent us a text message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here, we’ll leave the lights on’? Probably not, but this is more or less what has happened with AI.”
“While primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans,” observed Hawking. “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded. And in the future AI could develop a will of its own, a will that is in conflict with ours.” In short, Hawking concluded, “the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity.”
Fast-forward to February 2020: “the U.S. leg of Ord’s book tour, which was scheduled for the spring and was to include stops at Stanford, M.I.T., and Princeton, was cancelled,” writes Purtill. “Two weeks later, Ord was sheltering in place at home. His wife, Bernadette Young, an infectious-disease specialist at John Radcliffe Hospital, in Oxford, began working overtime, while he cared for their daughter, Rose, who was then five.” “I’d already known that, during a crisis, the unthinkable can quickly become the inevitable,” Ord told Purtill earlier this year. “But, despite having this intellectual knowledge, it was still quite something to see such a thing unfold before my eyes.”
“What might have happened in a world in which covid-19 didn’t exist, or was handled differently?” asks Ord. “What if the virus had been more deadly?” Ord, writes Purtill, “reckons with these divergences on a grand scale, considering both the grim futures that await us if existential threats to humanity aren’t addressed and the far more promising outcomes that become possible if they are.”
The Existential Threats
If we manage to avoid a tumble off the precipice, Ord observes, “it will be our era’s defining achievement.” Ord catalogues the natural risks we’ve always lived with, echoing the warnings of Lord Martin Rees: asteroids, super-volcanic eruptions, and supernova explosions. “None of them keep me awake at night,” Ord writes, as do “the large-scale threats we have created for ourselves: nuclear war, climate change, pandemics (which are made more likely by our way of life), and other novel methods of man-made destruction still to come — empowered artificial intelligence unaligned with human values (he gives it a one-in-ten chance of ending humanity within the next hundred years) and engineered pandemics (he thinks they have a one-in-thirty chance of bringing down the curtain).”
A Warning Shot?
The pandemic we are currently experiencing, writes Purtill, “is the sort of event that Ord describes as a ‘warning shot’—a smaller-scale catastrophe that, though frightening, tragic, and disruptive, might also spur attempts to prevent disasters of greater magnitude in the future.”
The Daily Galaxy, curated and edited by Max Goldberg, via Oxford University Future of Humanity Institute
Image credits: Shutterstock License