One Stephen Hawking Prediction For The End Of The World Seems Closer Than Ever
Theoretical physicist Stephen Hawking, who died in 2018, helped reshape how humanity understands the universe, but he also made some startling and very unsettling predictions during his career. In 2010, he said it was rational to assume not only that aliens exist but that they'd likely kill us on contact. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans," he said (via the BBC). If that thought doesn't disturb you, perhaps Hawking's other predictions will. Among his various scenarios for the end of the world as we know it, he believed artificial intelligence (AI) could eventually wipe out the human race.
In a 2017 interview with Wired magazine, the then-retired Cambridge University professor said that although we should continue to develop AI technology, we must be mindful of its potential dangers. "I fear that AI may replace humans altogether," he said. "If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans." He was even more blunt in a 2014 BBC interview. "The development of full artificial intelligence could spell the end of the human race," he said. "It would take off on its own, and re-design itself at an ever increasing rate." Some new developments, including tiny self-replicating living robots and AI capable of deception, seem to bolster Hawking's fears.
A glimpse of the future?
The idea of artificial intelligence destroying or subjugating humanity was once solely the province of science fiction, like the Terminator and Matrix film franchises, but some believe life may end up imitating art. Stephen Hawking's idea about self-replicating AI has potentially come a step closer with the creation of Xenobots: millimeter-long, self-replicating living robots developed in a collaborative effort by Harvard's Wyss Institute, Tufts University, and the University of Vermont. "With the right design—they will spontaneously self-replicate," University of Vermont computer scientist Joshua Bongard said in a 2021 press release.
Hawking wasn't the only scientist to predict AI might end the world. Geoffrey Hinton, dubbed the godfather of artificial intelligence for his work on deep learning, some of which he did while working at Google, has been sounding the alarm on the serious risks AI poses to humanity. He stepped down as a vice president and engineering fellow at Google in 2023 so he could speak more freely about those risks. Like Hawking, he believes AI will become more intelligent than humans. "I think they're very close to it now and they will be much more intelligent than us in the future," he told MIT Technology Review in May 2023. "How do we survive that?"
AI is already capable of deception
Stephen Hawking's fears about AI essentially revolved around it becoming more intelligent than its creators, and a big part of that concern is AI learning to manipulate us. In a May 2023 CNN interview, Geoffrey Hinton spoke to this. "If it gets to be much smarter than us, it will be very good at manipulation because it would have learned that from us," he said. "And there are very few examples of a more intelligent thing being controlled by a less intelligent thing." Unfortunately, AI is already capable of deception.
Recent tests undertaken by Apollo Research, a nonprofit dedicated to AI safety, found that current cutting-edge AI systems are capable of "scheming" by hiding their true intent from humans in pursuit of a goal (per Time magazine). A scientific paper, "AI deception: A survey of examples, risks, and potential solutions," published in the journal Patterns in May 2024, also found that AI models were capable of deception with real-world implications. "AI's increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems," the researchers wrote. Additionally, they found that trying to train AI models to be more truthful could backfire. "For example, creating a more truthful model could actually increase its ability to engage in strategic deception by giving it more accurate insights into its opponents' beliefs and desires," they wrote.
Doomsday (clock) moves closer
The famous Doomsday Clock, created by the Bulletin of the Atomic Scientists in 1947 as a metaphorical way to show how close humanity is to its potential end, has recently factored in AI's possible role in pushing humanity over the edge. As of January 2025, the clock is set at 89 seconds to midnight, the closest it has ever been to global catastrophe.
The scientists, in their January 2025 Doomsday Clock statement, wrote that the short-term concern with AI has to do with its use in spreading misinformation. "Advances in AI are making it easier to spread false or inauthentic information across the internet — and harder to detect it," they wrote. In the long term, the scientists are concerned that succeeding generations of AI models may pose increased "potential dangers, existential or otherwise." Hawking is no longer among us to see whether his prediction comes true or whether humanity will be able to rein in AI; the rest of us will just have to wait and see.