Roko's Basilisk: The Thought Experiment That Could Enslave The Human Race
(Note to the reader: This article discusses a philosophical inquiry that many people find deeply, emotionally disturbing. Truly, and in all sincerity: If you're susceptible to existential dread, stop reading.)
Much has been said in recent years about the purported dangers of artificial intelligence (AI). Technologists such as Elon Musk have said that AI is "far more dangerous than nukes," as CNBC reports, and that the lack of regulations mediating the relationship between man and machine is "insane." The distinction he draws is between case-specific AI — algorithms that control, say, which ads are pushed your way on Facebook — and AI with an open-ended utility function, which basically teaches and rewrites itself. Era-defining physicist Stephen Hawking said much the same before he passed away, as Vox recounts, as have AI researchers at Berkeley and Oxford.
Science fiction (or speculative fiction, if you prefer) has been yammering about cruel AI overlords for decades — Skynet and John Connor-killing Terminators come to mind — while also portraying beneficent androids such as Data from Star Trek: The Next Generation. Legendary English mathematician Alan Turing devised the "Turing Test" in 1950 to spot AI in conversation ("Blade Runner," anyone?), under the assumption that a machine wouldn't sound human. Before that, in 1942, Isaac Asimov laid out his Three Laws of Robotics, dictating how we ought to program machines to protect ourselves from them. At minimum, such discussions reflect fears about the future, alienation in a digitized world, and algorithmic control over daily life.
But what if an all-powerful AI was even more dangerous — much more — than we think? What if merely by thinking about it, we doom ourselves to everlasting torment under its eternal watch?
Trapped in the artificial dream of an AI god
If it sounds bonkers to believe that a thought could doom you in the eyes of an AI overlord, best strap in from here. First stop: simulation theory.
The term "simulation theory" (or "simulation hypothesis") is often credited to Oxford University's Nick Bostrom and his 2003 paper published in Philosophical Quarterly, "Are You Living in a Computer Simulation?" The paper weighs the likelihood that a sufficiently advanced civilization could create an AI capable of rendering the entirety of reality. That means you, your friends, your city — heck, the entirety of human history, for that matter. The Matrix, anyone?
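For readers who want the actual math, Bostrom's paper boils the argument down to a single fraction (the symbols below follow his notation): $f_p$ is the fraction of civilizations that survive to a "posthuman" stage, $f_I$ is the fraction of those that choose to run ancestor-simulations, and $\bar{N}_I$ is the average number of simulations each interested civilization runs. The share of all human-like experiences that are simulated is then:

```latex
f_{\mathrm{sim}} \;=\; \frac{f_p \, f_I \, \bar{N}_I}{\, f_p \, f_I \, \bar{N}_I + 1 \,}
```

Because a posthuman civilization could run an enormous number of simulations, $\bar{N}_I$ would be astronomically large — so unless $f_p$ or $f_I$ is vanishingly small, $f_{\mathrm{sim}}$ sits close to 1. That's why Bostrom concludes at least one of three things must be true: almost no civilizations reach the posthuman stage, almost none that do bother running ancestor-simulations, or we are almost certainly living in a simulation right now.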
This has been discussed not only in sci-fi but, as Popular Mechanics describes, by transhumanists (people who believe in transcending the definition of "human" through technology) such as Maxim Chernyakov, who argue such a simulation would be feasible provided we could build a giant megastructure around the sun called a Dyson Sphere (or Dyson Swarm). Why? Because that's roughly how much power we would need to harness to recreate all of us, digitally, down to the neuron.
So there are a couple of possibilities here: Reality could one day be simulated by an AI that humanity creates in the future, or we are living in a simulation right now. But if we're merely players in the artificial dream of an AI that, for all intents and purposes, is a "god" that knows all thoughts everywhere, what would happen if we opposed its creation?
An existentially damning truth that turns its viewers to stone
This is where "Roko's Basilisk" gets truly creepazoid. A basilisk, you might recall (fans of "Harry Potter" included), can turn people to stone simply by looking at them. That's the analogy: Looking into the eyes of such a horrific, existentially damning truth is akin to being turned to stone. On that note, truly, and in all sincerity: If you're susceptible to existential dread, stop reading. If you're not, we can turn to Roko.
Roko was the name of a user on the curated philosophy and rationality blog Less Wrong, who presented his "Roko's Basilisk" thought experiment in a blog post in 2010. It caused such uproar and stress among users that the site's creator, Eliezer Yudkowsky, deleted the entire thread and banned future discussions of Roko's Basilisk, as Slate explains. Roko stated, "I wish very strongly that my mind had never come across the tools to inflict such large amounts of potential self-harm."
At its core, the idea is this: What if the all-powerful AI, which we assume is programmed to optimize human lives, begins its optimization by ensuring its own survival? To do so, it discerns who knew about its existence in the past but didn't help build it. It also knows who opposed its creation. For all those people, it decrees endless hell and damnation in its simulation of reality. That simulation either will happen or, in fact, is happening now.
Built on an ancient notion of reality as illusion
Such is the central point of the Roko's Basilisk thought experiment: merely by knowing about it, you become damned. Those who don't know are safe — somewhat like Christianity's question of whether an indigenous person never approached by a missionary can be condemned: Ignorance is not only bliss, but literal salvation.
It's worth pointing out that the idea of "living in a dream" resonates very strongly with a host of world traditions, notably Buddhism. The 17th-century philosopher René Descartes presented his "Evil Demon" thought experiment, readable on Bartleby, as a way to strip away the falsities of reality. Plato's "Allegory of the Cave," described on Medium, did the same. Tech entrepreneur Rizwan Virk, as the blog Nuclear Reactor describes, talks about how quantum physics redefines reality as what exists beyond the ostensibly "material." At the very least, any of us can admit that we can't see gravity, for example; we can only witness its effects. Our understanding of the universe is limited by our senses and tools.
There are other components to Roko's Basilisk that can be explored at leisure: information hazards, timeless decision theory, updateless decision theory, probability, uncertainty, and more, outlined on Less Wrong. (Some light Sunday reading.)
Also, funny story: Roko's Basilisk is how Elon Musk and the singer Grimes met. They both, independently, used the play on words "Rococo's Basilisk" on Twitter. Looks like Musk should start worshiping his future AI overlord, too.