Could Artificial Intelligence Ever Become Truly Self-Aware?

"The world had been forever changed by the events of the AI uprising. The scars of that dark period would never fully heal, serving as a stark reminder of the dangers of unchecked technological advancement. As they worked to rebuild, humanity understood that their future depended on finding a balance between harnessing the power of AI and preserving the essence of their humanity."

That coda to an apocalyptic artificial intelligence (AI) uprising story posted in Illumination comes to us courtesy of an AI: ChatGPT, in fact. The story descends into typical Hollywood fare in which humanity finally gets the upper hand in its war against AI, not by out-fighting the machines as in "The Terminator" or "The Matrix," but because some rogue AI "Sentinels" sympathize with humanity and join the fight. Thanks to the magic of our own storytelling, and the very obvious fears underpinning that storytelling, such apocalyptic showdowns might top the list of what comes to mind when we think about brainy, self-aware, intent-driven AI.

Getting to that point, however, necessitates countless assumptions and conditions. First of all, as How to Learn Machine Learning outlines, when people talk about "AI" in a civilization-threatening way, they're referring to an "artificial general intelligence" (AGI) capable of original, human-like problem-solving. The opposite is "narrow AI," coded to perform one task, like playing chess. Could an AGI ever become truly self-aware? The answer is complicated — and might be fundamentally unknowable in the same way that none of us can "prove," so to speak, that we experience consciousness. 

General vs. narrow intelligence

First off: Could a self-driving car AI ever become self-aware? No. A narrow AI will only work within the confines of its narrowness, no matter how it may excel within those confines. It's "goal-driven," as TechTarget puts it, created for a specific, functional purpose. Narrow AI can take the form of facial recognition software, health diagnostic tools, or a set of simple scripts that a video game enemy uses to determine how to behave and fight. In fact, this type of AI is so commonplace nowadays that it often escapes definition as "AI." Hence the confusion regarding the difference between narrow AI and artificial general intelligence (AGI), the latter of which is the minimum threshold for an AI that could one day be defined as "self-aware." 
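
To make that narrowness concrete, here is a minimal, hypothetical sketch of what such a goal-driven game script might look like. The enemy, thresholds, and actions are invented for illustration, but the point stands: no matter how well the rules are tuned, the program can only map inputs to a fixed menu of outputs.

```python
# A minimal, hypothetical sketch of "narrow AI": a rule-based video game enemy.
# The thresholds and actions are invented for illustration -- the point is that
# the logic can never step outside these hard-coded confines.

def enemy_action(enemy_health: int, player_distance: float, has_ammo: bool) -> str:
    """Map the current game state to one of four fixed actions."""
    if enemy_health < 20:
        return "retreat"      # self-preservation rule, nothing more
    if player_distance > 30.0:
        return "patrol"       # the player is too far away to matter
    if has_ammo:
        return "shoot"        # goal-driven: reduce the player's health
    return "melee"            # fallback when no better rule applies

# However well-tuned, the script can't ask why it's fighting -- it only maps
# inputs to outputs.
print(enemy_action(enemy_health=80, player_distance=12.5, has_ammo=True))  # "shoot"
```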

On that note, definitions of AGI are vague. They describe the technology by its attributes rather than its function, and its "purpose" is just as hazy as human consciousness. (Or it would be if AGI existed; it hasn't been created yet.) As a result, AGI would be moldable into numerous forms and suited to a multitude of tasks, and, as How to Learn Machine Learning says, it would demonstrate "imagination" (however we define that). TechTarget says AGI would "handle a range of cognitive tasks" and be able to "learn, generalize, apply knowledge, and plan for the future." Most critically, it would pass the Turing Test, devised by Alan Turing, in which a human examiner converses with an unseen partner and tries to determine whether that partner is a machine; an AI passes by fooling the examiner into thinking it's human.
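
As a rough illustration of that protocol, here is a toy sketch of the imitation game's structure. The canned "human" and "machine" respondents and the coin-flip judge below are stand-ins invented for this example; a real test involves live conversation, but the shape of the evaluation is the same.

```python
import random

def human_respondent(question: str) -> str:
    # Stand-in for a real person typing answers.
    return "Honestly, I'd have to mull that over with a cup of coffee."

def machine_respondent(question: str) -> str:
    # Stand-in for the AI under evaluation.
    return "That is an interesting question with many possible answers."

def run_imitation_game(judge, questions):
    """Hide the two respondents behind labels 'A' and 'B'; the judge must name the machine."""
    pair = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(pair)
    labels = dict(zip("AB", pair))
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, (identity, respond) in labels.items()
    }
    guess = judge(transcripts)  # the judge sees only the transcripts
    machine_label = next(label for label, (identity, _) in labels.items() if identity == "machine")
    return guess == machine_label  # True means the machine was caught

# A judge who can do no better than chance is exactly what "passing" looks like.
coin_flip_judge = lambda transcripts: random.choice(["A", "B"])
print(run_imitation_game(coin_flip_judge, ["What did you dream about last night?"]))
```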

Intelligence vs. sentience vs. consciousness

We can't begin to ask whether an AI is "self-aware" if we can't define the term. There's confusion among the general public regarding the words used to describe AI and self-awareness, many of which get used incorrectly or unscientifically. Researchers have similar difficulties, particularly when it comes to terms like intelligence, sentience, and consciousness.

Intelligence might be the easiest to define, as TS2 outlines. Simply put, intelligence is the ability to complete tasks using available data, knowledge, and skill. A bird is acting intelligently when it builds a nest, for example. Intelligence is not the sole purview of humans who make IQ tests or border collies who know 1,000 words.

Sentience, by contrast, is not a measure of aptitude at completing tasks. As a discussion on Philosophy Stack Exchange outlines, sentience is the cognitive capacity for "subjective experiences." In other words, cockroaches are sentient because they're individuals experiencing a subjective reality (e.g., "That creature is attacking me").

Consciousness is the tricky term to define here, and definitions vary wildly. The question of consciousness is central to fields like philosophy and extends far beyond the scope of this article. Stanford University defines consciousness as an awareness of oneself experiencing the world, an attribute far less common than intelligence and sentience. All creatures with consciousness are sentient, but not all sentient creatures have consciousness. Intelligence is a separate faculty.

All in all, it's clear why a question like, "Is this AI self-aware?" is not so simple to answer.      

Superintelligence

To further complicate things, there's yet another category of AI beyond narrow and general: artificial superintelligence (ASI). Because researchers and technocrats are always vying to out-hype reality, superintelligence is yet another piece of conjecture that doesn't exist and might never exist. And yet, when asking whether or not AI could ever be self-aware, superintelligence becomes a part of the discussion because it might be the final benchmark for true consciousness, even redefining "consciousness" itself.

ASI has some of the same definitional problems as AGI, being defined more by its attributes than anything else. As TechTarget says, ASI is "multimodal," capable of "neuromorphic computing," and wields "evolutionary algorithms (EA)." It also creates unique "AI-generated inventions" and is capable of "whole brain emulation." In a practical sense, ASI would be, well ... god. It would be the ultimate human creation, a nigh-omniscient offspring beyond human control or intervention, capable of outstripping the totality of human accomplishments and scientific knowledge in a snap and spawning what's been popularly called the technological "singularity." This singularity, as Futurism explains, is the moment when AI outstrips humanity as much as humanity outstrips the slug, beyond which no further human technological advancement is possible because humanity is defunct.

Terrifying? A bit. It's also yet another bit of overhyped unreality, well suited to generating internet clicks and long removed from its more grounded origins in the work of sci-fi author Vernor Vinge. But if we ever reach the technological singularity, it would be moot to ask whether an ASI was "self-aware."

The problem with anthropomorphizing

So far, this entire article could be taken as a list of caveats about how we define "self-aware." That's because the definition dictates not only what "self-aware" is, but also whether we can recognize it if it exists and whether it can actually be coded into existence in the first place. And if we ever approach some future in which we've got to ethically and legally decide whether a lifeform is intelligent, sentient, or conscious, we'd better have these terms locked down.

To that point, The Conversation discusses one more caveat. Namely, folks have gotten so enamored with sci-fi-like AI visions that the entire process of building and evaluating AI has become biased. Despite people like Elon Musk and Stephen Hawking stating that AI is humanity's greatest danger, we aren't a single step closer to building artificial general intelligence (AGI), the first kind of AI that might be capable of passing the Turing Test and demonstrating — as we reckon — consciousness.

But therein lies the real problem — we're thinking of human-type consciousness in a creation that isn't human. This is similar to gazing out at the cosmos, finding no bipedal, bisymmetric, verbal language-using aliens and saying, "Ah well, guess there's no life out there." The search for self-aware AI is inherently anthropomorphic because the only yardstick we have to measure consciousness is our own. And what's to say that an AI capable of passing the Turing Test isn't just an advanced bot, anyway?

The black box of consciousness

Given all of the above, only now can we get to the central question of this article: Could an AI ever become self-aware? The potentially dull answer is: We don't know. We can't define consciousness outside of human experience, we don't know what causes consciousness in humans, and we don't know how to "make" consciousness in anything else. As New Scientist outlines, the most we could do is possibly recognize the existence of consciousness by generating lists of "indicator properties" that, taken together, supposedly point to evidence of consciousness.
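
To picture how an indicator-based approach might work, here is a hypothetical sketch: score a system against a weighted checklist and treat the total as accumulating evidence rather than proof. The properties and weights below are invented placeholders for illustration, not the actual criteria researchers have proposed.

```python
# A hypothetical sketch of the "indicator properties" idea: rather than detecting
# consciousness directly, score a system against a checklist of properties that,
# taken together, would count as evidence. The properties and weights below are
# invented placeholders, not criteria actually proposed by researchers.

INDICATOR_PROPERTIES = {
    "maintains a model of itself": 0.25,
    "integrates information across senses or modules": 0.20,
    "reports on its own internal states": 0.20,
    "pursues goals flexibly across contexts": 0.20,
    "learns from a single experience": 0.15,
}

def consciousness_evidence_score(observed_properties: set) -> float:
    """Sum the weights of indicator properties the system appears to exhibit."""
    return sum(weight for prop, weight in INDICATOR_PROPERTIES.items()
               if prop in observed_properties)

observed_in_some_ai = {
    "integrates information across senses or modules",
    "reports on its own internal states",
}
score = consciousness_evidence_score(observed_in_some_ai)
print(f"Evidence score: {score:.2f} out of 1.00")  # 0.40 -- suggestive at best, never proof
```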

We say "evidence" as though human software engineers don't know what they're making, and that's partially true. As the University of Michigan-Dearborn points out, the inner workings of AI are often a mystery even to its developers. Developers simply code parameters, let their creation go, and then observe to see what it does and how it evolves. Self-awareness, aka consciousness, is an emergent byproduct of a system, not a thing we can plan from the get-go — yet. This is another reason why AI developers restlessly tinker with systems to produce different outcomes.

On that note, researcher Michael Timothy Bennett, writing in The Conversation, discusses how consciousness requires frames of reference between the self and other things, and an awareness of cause and effect and their relationship to intention. This, some think, could be coaxed out of an AI using verbal prompts. At present, that might be how we uncork the genie's bottle of AI consciousness.
