Source: John Stonestreet, Kasey Leander via breakpoint.org
Copyright 2022 by the Colson Center for Christian Worldview. Reprinted from BreakPoint.org with permission.
In June, a Google employee who claimed the company had created a sentient artificial intelligence (AI) bot was placed on administrative leave. Blake Lemoine, part of Google’s Responsible AI program, had been interacting with a language AI known as the “Language Model for Dialogue Applications,” or LaMDA. When the algorithm began talking about rights and personhood, Lemoine decided his superiors, and eventually the public, needed to know. To him, it was clear the program had become “sentient,” with the ability to feel, think, and experience life like a human.
Google denied the claim. “There was no evidence that LaMDA was sentient (and lots of evidence against it),” said a spokesperson. The Atlantic’s Stephen Marche agreed: “The fact that LaMDA in particular has been the center of attention is, frankly, a little quaint…. Convincing chatbots are far from groundbreaking tech at this point.”
True, but they are the plot of a thousand science fiction novels. So the question remains: Is a truly “sentient” AI even possible? How could code develop the capacity for feelings, experiences, or intentionality? Even if our best algorithms could one day perfectly mirror the behavior of people, would they be conscious?
How one answers such questions depends on one’s anthropology. What are people? Are we merely “computers made of flesh”? Or is there something more to us than the sum of our parts, a true ghost in the machine?
These kinds of questions about humans and the things humans make reflect what philosopher David Chalmers has called “the hard problem of consciousness.” In every age, even when strictly material evidence for the soul remains elusive, people have sensed that personhood, willpower, and first-person subjective experience mean something. Christians are among those who believe that we are more than the “stuff” of our bodies, though Christians, unlike some, would be quick to add that we are not less than that “stuff” either. There is something to us and to the world that goes beyond the physical, because there is a non-material, eternal God behind it all.
Christians also hold that there are qualitative differences between people and algorithms, between life and non-living things like rocks and stars, between image bearers and other living creatures. Though much about sentience and consciousness remains a mystery, personhood rests on the solid metaphysical ground of a personal and powerful Creator.
Materialists have a much harder time drawing such distinctions. By denying the existence of anything other than the physical “stuff” of the universe, they don’t merely erase the substance of certain aspects of human experience, such as good, evil, purpose, and free will; they also leave no real grounding for thinking of a “person” as unique, different, or valuable.
According to philosopher Thomas Metzinger, for example, in a conversation with Sam Harris, none of us “ever was or had a self.” Take brain surgery, Metzinger says. You peel back the skull and realize that there is only tissue, tissue made of the exact same components as everything else in the universe. Thus, he concludes, the concept of an individual “person” is meaningless, a purely linguistic construct designed to make sense of phenomena that aren’t there.
That kind of straightforward claim, though shocking to most people, is consistent within a purely materialist worldview. What quickly becomes inconsistent are claims of ethical norms or proper authority in a world without “persons.” In a world without a why or an ought, there’s only is, which tends to be the prerogative of the powerful, a fact that Harris and Metzinger candidly acknowledge.
In a materialist world, any computational program could potentially become “sentient” simply by sufficiently mirroring (and even surpassing) human neurology. After all, in this worldview, there’s no qualitative difference between people and robots, only degrees of complexity. This line of thinking, however, quickly collapses into dissonance. Are we really prepared to look at the ones and zeros of our computer programs the same way we look at a newborn baby? Are we prepared to extend human rights and privileges to our machines and programs?
In Marvel’s 2015 film Avengers: Age of Ultron, lightning from Thor’s hammer hits a synthetic body programmed with an AI algorithm. A new hero, Vision, comes to life and helps save the day. It’s one of the more entertaining movie scenes to wrestle with questions of life and consciousness.
Even in the Marvel universe, no one would believe that a mere AI algorithm, even one designed by Tony Stark, could be sentient, no matter how sophisticated it was. To get to consciousness, there needed to be a “secret sauce,” in this case lightning from a Norse god’s hammer or power from an Infinity Stone. In the same way, as stunning as advances in artificial intelligence are, a consciousness that is truly human requires a spark of the Divine.