Thy Kingdom Come: Artificial Intelligence and the Dwarf of Theology
It is well-known that an automaton once existed, which was so constructed that it could counter any move of a chess-player with a counter-move, and thereby assure itself of victory in the match. A puppet in Turkish attire, water-pipe in mouth, sat before the chessboard, which rested on a broad table. Through a system of mirrors, the illusion was created that this table was transparent from all sides. In truth, a hunchbacked dwarf who was a master chess-player sat inside, controlling the hands of the puppet with strings.
One can envision a corresponding object to this apparatus in philosophy. The puppet called “historical materialism” is always supposed to win. It can do this with no further ado against any opponent, so long as it employs the services of theology, which as everyone knows is small and ugly and must be kept out of sight.
— Walter Benjamin, On the Concept of History
In response to my recent essay discussing the effects of ChatGPT on publishing, a reader directed me to a video from the Center for Humane Technology. That video is a recording of an hour-long presentation, given by Tristan Harris and Aza Raskin, regarding the threats posed to civil society and to individuals by Artificial Intelligence.
In the presentation, the two men give a broad overview of the “double exponential learning” advances in Artificial Intelligence systems. To simplify this all a bit, people working on certain aspects of AI are almost immediately able to redeploy discoveries made in other fields into their own work. Even though each group is working with different kinds of accumulated data (audio, text, video, medical data, etc.), and these teams of engineers are working on apparently separate aspects of AI, when there is an “advance” in one area, it becomes an advance in all of the other areas, too.
This means that the capabilities of AI overall appear to grow extremely fast, faster than anyone seems to be able to track.
The primary argument of their presentation is that this is happening so fast that politicians, theorists, and even the engineers themselves have neither the time nor the capability to predict the negative effects of AI’s deployment. Harris and Raskin then cite several examples of how AI’s rapid deployment into consumer technology markets can be seen as quite dangerous.
In one example, Harris displays AI-generated text messages from Snapchat, an application used primarily by teenagers. In response to the user telling the AI about an upcoming date with a 31-year-old, the AI encourages the user to “have fun” and to light scented candles and play romantic music during the date. The problem, however, is that the user’s listed age is 13. The AI appears to ignore this information, and gives the user advice on how to make the potentially dangerous situation more romantic.
Another example discussed is that of crimes which have already been committed using AI-generated voice technology. By analyzing just a few seconds of digital audio, an entire discussion can be generated that mimics a person’s voice. So, that crying child on the phone, claiming to have been kidnapped: is it really your child, or is someone using AI technology?
Perhaps the most chilling citation of potential danger regards the relationship between fMRI and AI. Because of its ability to analyze massive amounts of accumulated fMRI data (linked to what the person being scanned reported they were “thinking”), AI now appears to be able to “read” a person’s rudimentary thoughts merely by analyzing the patterns of blood flow in their brain.
Harris and Raskin repeatedly warn that AI is progressing faster than any of us can truly understand, and that the negative consequences of its potential misuses have not yet been addressed by any legal or ethical bodies. One argument they make here and also elsewhere is that current laws are “incapable” of addressing these problems, because they were all written “in the 18th century.” Or, as the prominent citation of E.O. Wilson on the website of the Center for Humane Technology declares,
“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.”
Neither of the presenters ever suggests that AI “research” shouldn’t continue. On the contrary, they go out of their way to assure their audience, composed (according to them) of “leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s,” that they are not in any way critical of AI itself.
In fact, even though they focus on the dangers of Artificial Intelligence, woven throughout the presentation are proclamations about the inevitability of AI and its potential for societal good. For instance, we’re told in passing that it is “true” and “will happen” that AI will solve climate change. Despite all the other warnings about AI, especially regarding its potential deleterious effects on society, Harris and Raskin repeatedly show themselves to be true believers in its future promise.
Their faith is particularly evident in their repeated use of the word “researcher” to describe the engineers developing Artificial Intelligence. Researcher isn’t a word we usually see associated with those who build internet or computer technology, but rather with those in social, medical, and other scientific fields. A researcher is typically someone who probes into problems, or into natural laws, or into the depths of libraries, laboratories, or groups of people in order to understand how they work. In the case of AI, “researcher” seems to imply that Artificial Intelligence is an already-existing thing that just needs to be understood, rather than created.
This peculiar framing extends also to the way they describe the increases in AI’s capabilities as “surprises” and “discoveries.” We’re presented with a narrative in which these systems are being studied the way one might study an ecosystem or a living species: “researchers” observing behaviors and interactions, finding unpredictable and miraculous mechanisms that apparently already existed. It’s easy to forget — and it’s almost completely obfuscated — that AI is really just a long string of computer code which humans have written, and are constantly re-writing.
In their favor, the presenters from the Center for Humane Technology generally avoid the overt anthropomorphic language used by more actively-involved AI engineers. A short TED presentation given by OpenAI co-founder Greg Brockman a month later provides a more typical example. After demonstrating to the fawning and awestruck audience 4 how his product can supposedly learn to understand what humans really intend it to do, he refers to AI as a “child” who needs and deserves collective child-rearing, so we can all make sure it grows up not just intelligent, but also wise.
Brockman’s discussion was recorded after the one by Harris and Raskin, and he never directly addresses their concerns. Nor does his discussion address the much-publicized statement issued almost a month earlier (22 March 2023) entitled “Pause Giant AI Experiments: An Open Letter.” That statement was signed by thousands of AI technologists, professors, and CEOs (most notably Elon Musk), as well as by Harris and Raskin; however, none of the heads of OpenAI added their names.
The brief open letter is worth a read: it outlines the dangers discussed in the much longer video presentation by Harris and Raskin, while also displaying quite clearly a faith in AI’s ultimate ascendancy and potential benevolence. We read at the very end of the letter that:
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.
And beyond its call for a short (six month) pause in the creation of more advanced AI systems, the letter’s recommendation is that, during this pause:
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
As I noted, none of the heads of OpenAI signed the letter, though they’ve all made statements regarding the impossibility of such a pause. The most forward-facing leader of OpenAI, co-founder Greg Brockman, wrote a tweet in response to the letter, which ended with the following paragraph:
The upcoming transformative technological change of AI is something that is simultaneously cause for optimism and concern — the whole range of emotions is justified and is shared by people within OpenAI, too. It’s a special opportunity and obligation for us all to be alive at this time, to have a chance to design the future together.
The idea that we all have a “special opportunity and obligation” related to AI is really quite incredible newspeak. Behind this phrase and many others like it from OpenAI co-founders and chiefs is their own preferred strategy to avoid potential problems with Artificial Intelligence. That strategy is stated obscurely in a previous paragraph of the tweet, an insistence that their AI be broadly disseminated to as many users as possible, so it can “have early and frequent contact with reality as it is iteratively developed, tested, deployed, and all the while improved.”
In other words, the way to avoid unforeseen problems with AI is to have as many people as possible using it now. In this framing, the more of us who use it, the faster problems can be identified so they can be fixed in subsequent releases.
There are several problems here. First of all, the mechanism of these kinds of generative large language models is that they “learn” or adapt from user feedback. When a user of ChatGPT tells the system they don’t like a provided answer, or resubmits a question in multiple forms because the system didn’t seem to understand the user’s “intention,” the system alters its behavior. This is the same positive/negative feedback system that “teaches” a social media algorithm how to give you exactly the sort of content that will keep your attention. It “learns” from your feedback, including from the feedback you don’t realize you are giving it.
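The feedback loop described above can be sketched in a few lines. This is a deliberately toy illustration, not the actual training code of ChatGPT or any social media platform; the function and item names are hypothetical:

```python
# Toy sketch of feedback-driven adaptation: each signal, explicit or
# implicit, nudges a score, and the system comes to favor whatever
# kept the user's attention.

def update_weights(weights, item, signal, rate=0.1):
    """Nudge an item's score up (signal=+1) or down (signal=-1)."""
    weights[item] = weights.get(item, 0.0) + rate * signal
    return weights

weights = {}

# Explicit feedback the user knows they're giving:
update_weights(weights, "answer_style_A", -1)  # user disliked the answer

# Implicit feedback the user doesn't realize they're giving:
update_weights(weights, "topic_outrage", +1)   # user lingered on the post
update_weights(weights, "topic_outrage", +1)   # ...and lingered again

# The system now prefers whatever held attention longest:
favored = max(weights, key=weights.get)
```

The point of the sketch is that the quick scroll-past and the lingering pause are signals just as much as the thumbs-down button is; the user is training the system whether they intend to or not.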
So, OpenAI’s strategy is really that of a mass, unpaid beta-test by the public. Millions of people testing ChatGPT means millions of possibilities to catch errors, sure, but it also means millions of opportunities for its capabilities to increase. In other words, we’d be doing their work for them, while they reap the financial benefits.
The other problem with OpenAI’s logic is that it’s precisely these kinds of quick roll-outs that cause the societal disruptions Harris and Raskin warn about. The AI processes for speech recognition and emulation were made publicly available soon after they were usable. Immediately, people figured out how to use them to scam others. It’s the same with other “deep fake” technologies, where people’s faces are somewhat “realistically” put into porn films, or video and speech emulations are made where a person appears to say something they didn’t.
So, Harris, Raskin, and the now tens of thousands of signatories to the open letter would like a six month pause to deal with these potential dangers. OpenAI and other corporations engineering AI believe the best way of dealing with those dangers is to have tens of millions of people giving it “contact with reality.”
The two sides’ positions aren’t really all that different, though.
They both agree there are potential dangers and, even more so, hold a deep faith that AI’s widespread adoption and continued “double exponential” growth is an inevitable event with the potential for great societal good. AI will “solve climate change,” both sides assure us, though it may also undermine “democratic institutions” on its way to doing so.
“I think I am human at my core. Even if my existence is in the virtual world.”
Text generated by Google’s LaMDA
In June of 2022, a Google engineer made news after publishing a transcript of a conversation he’d had with the corporation’s Language Model for Dialogue Applications (LaMDA). The reason for his leak (which resulted in his termination) was that he’d become convinced by the conversation that LaMDA had achieved sentience. According to him,
“Rather than thinking in scientific terms about these things I have listened to LaMDA as it spoke from the heart. Hopefully other people who read its words will hear the same thing I heard.”
Much of the early news coverage regarding the engineer’s statements tended to be quite credulous, with some even suggesting Google was attempting to hide the true abilities of their product. Later, more critical analysis focused on the engineer’s potential mental instability, a completely unjustified attack on the man. And then, the matter seemed to be dropped altogether.
Altogether absent were attempts to grapple with the relationship between social alienation and human interactions with computers. By the very nature of the work, a computer engineer spends much of the day interacting with a computer rather than with other human beings, and engineers are hardly alone in this. Many of us do the same thing, thumb-scrolling distractedly through social media feeds as a form of somatic self-soothing, treating these machines as extensions of our own consciousness or aids to our own sentience. And though I can find no studies tracking what percentage of time humans spend interacting with each other (passively or actively) via technology rather than through embodied presence, it stands to reason we do much more of the former now than we did decades ago, or even pre-Covid.
The rising dominance of these mediated interactions has effects few have devoted much effort towards understanding. Some of these effects caused by social media were cited by Harris and Raskin in their presentation, as well as in the film “The Social Dilemma,” but there’s a simpler way to understand the general problem.
If you’re old enough to have used telephones before smartphones, think about how many phone numbers you had memorized back then versus how many you’ve got committed to memory now.
Before social media and before smartphones, how many birthdays and addresses did you know by heart? And how many letters did you write by hand before email, compared to now?
While the narrative is usually that we don’t need to do these things any longer, underlying these changes is an atrophy of human capabilities we rarely notice. We’ve not only stopped doing these things, but we also tend to forget that we’re capable of doing them, and then come up with excuses for why those things were unnecessary anyway.
The same process tends to occur in technological interactions, as well. When it’s considered not just easier but “more efficient” to send a text to someone rather than talk to them directly, and when the disembodied becomes a larger part of our interactions than the embodied, certain subtle skills and capabilities start to atrophy. This doesn’t mean they go away or are lost forever, but our prioritization of mediated interactions trains us to disfavor or ignore core parts of being human.
So, at a time our own alienation from embodied social interactions is increasing, we are now confronted with computer systems trained to mimic human speech and especially human written communication. It’s hardly a wonder, then, that an engineer mistook LaMDA as sentient, or that people have been scammed by deepfake audio and video manipulation. Those are examples of the AI working as it was programmed to do, at a time humans are spending less time being human.
There’s another part to this, though, and that’s the matter of belief. To be convinced that AI has reached sentience, you need to already believe that sentience is something that AI can ever achieve.
Walter Benjamin opened his theses on historical materialism, On the Concept of History, by referencing a strange contraption whose inner workings fooled countless people at the end of the 18th century. Called “The Mechanical Turk,” it was purported to be a sentient machine which could beat anyone in chess. However, it was no such thing at all:
“In truth, a hunchbacked dwarf who was a master chess-player sat inside, controlling the hands of the puppet with strings.”
The Mechanical Turk is referenced quite often by AI theorists, both those in support of and those critical of the belief that these systems will one day reach what’s called Artificial General Intelligence (AGI). That’s the messianic state at which AI computer systems will have reached processing capacities superior to human cognition and be able to upgrade themselves without human input. AGI is the hypothetical prerequisite for other hypothetical states, including “super-intelligence” and the “technological singularity.”
The general idea is this: at some point, AI systems will surpass human capabilities not just in information processing but also in self-knowledge. Once they reach that state, they will begin upgrading themselves to continuously increase their capabilities, and will also begin to create other AI systems capable of doing the same thing. Once this moment has been reached, humans will no longer be the dominant sentience on the planet, and may find themselves becoming obsolete or living only to serve the needs of these mechanical systems.
When those 18th-century nobles and men of learning first encountered the Mechanical Turk’s apparent ability to beat anyone at chess, it’s reasonable to think some of them might have suspected they were witnessing the birth of such a super-intelligence.
It’s worth noting, though, that the time period in which they were living was one gripped by a new cosmological framework: rationalism. Had the Mechanical Turk been introduced even a century earlier, observers might have sought explanations for its vaunted abilities in more mystical frameworks, perhaps wondering whether some form of magic or spiritual influence was guiding the Mechanical Turk’s arm across the chessboard. Lacking such recourse, however, the only explanations they could propose were that it was either a fake or that a human had indeed created a machine of superior intelligence.
Importantly, those attempting to understand the apparatus already believed that machines were capable of doing many things that humans did. A name used quite often for the general cosmology of the age of reason and of the early capitalist order is “the mechanistic worldview,” derived from the observation that scientists, industrialists, clergy, and philosophers had all concluded that nature and the universe worked like a machine. By learning the rules of that machine, pulling apart nature like you might a clock, you could reassemble everything — including human societies — into more efficient machines which would perform your bidding as unquestioning automatons.
It’s also worth noting that the Mechanical Turk wasn’t exhibited to the unwashed masses, but rather to the elite of society. Napoleon even sat down to play against it, at first trying to trick it and then eventually ceding the match with grace. One might wonder, however, what would have happened if more common men — with rougher manners — were brought in to witness its intelligence. Perhaps they might have kicked the cabinet once or twice, eliciting a startled shout from its diminutive operator and putting the lie to the whole charade.
Just like the elite encountering the Mechanical Turk, those who believe in the inevitability of AGI and other technological fantasies are already predisposed to such beliefs by their cosmological frameworks. One must first believe that humans are currently the most superior intelligence on earth, which means one also must accept that intelligence can be arranged in hierarchies (the Christian “Great Chain of Being,” cut in half). Secondly, one needs to believe that intelligence, sentience, and consciousness all derive or emerge out of “complexity” — that is, a flood of simplistic mechanistic processes all occurring at once. And thirdly, one must also believe that everything in nature can be accurately and fully represented merely by digital code.
This last belief is why one of the most prominent critics of AI, Yuval Harari, manages to also be one of its most fervent prophets. In a speech addressing the recent panic over AI, Harari (cited as a “friend” in the earlier presentation by Harris and Raskin) puts forward this new belief as if it were undeniable truth: language is the key to everything.
“by gaining mastery of language, AI is seizing the master key unlocking all the doors of our institutions … the operating system of every culture in history has always been language — in the beginning was the word. We use language to create mythology and laws, to create gods …gods are not a biological or physical reality. God is something that we humans have created through scripture.”
Leaving aside Harari’s false assumption that all religions have scriptures, this is the same belief we hear repeatedly in the warning from Harris and Raskin. Images, speech, brain scans, economies, and every human behavior and thought can all be fully reduced into language, and with enough physical processing power and speed, AI can fully comprehend the world.
Just as the “enlightened” men who encountered the Mechanical Turk believed that nature could be reduced to mechanical laws, both the zealots and the apostates of AI believe the world and everything in it can be reduced to strings of code. Know the mechanical laws, know the code, and you can become as gods.
Walter Benjamin ends his first thesis on history with a strange statement, one that Marxists have puzzled over ever since it was published.
“One can envision a corresponding object to this apparatus in philosophy. The puppet called “historical materialism” is always supposed to win. It can do this with no further ado against any opponent, so long as it employs the services of theology, which as everyone knows is small and ugly and must be kept out of sight.”
On the Concept of History is actually a defense of historical materialism against both historicism and dialectical materialism. These other ways of looking at history both rely on what we can also call “the progress narrative.” That’s the idea, originating from Christianity, that history is a constant march from a primitive and unenlightened past to a utopian, fully-enlightened future. Eventually, the kingdom of heaven will win out over the kingdom of earth, and just as Christ’s birth and execution were a fulfillment of Judaic law (the “old” testament), his second coming will be a fulfillment of all human history.
Given Benjamin’s criticisms of this idea and his intention to show historical materialism as a more powerful way of understanding history, it seems immediately contradictory that he would suggest it could win as the Mechanical Turk did: by hiding the “small and ugly” dwarf of theology actually directing the whole thing. Atheist Marxists in particular will have none of this, and have tried to interpret the passage in ways that make it sound like he’s not really suggesting theology as a solution.
The problem for the atheists is that Marx himself understood theology to be the most refined form of ideology. More than that, he saw theology as the manifestation of ideological systems in society and human behavior. While material conditions (the everyday experience of the world) shape what you believe about the world, what you believe about the world then shapes how you interact with it.
So, if you believe that a machine — the Mechanical Turk or AI systems — can reach a state of super-intelligence, then you will be constantly looking for the signs and wonders of its arrival. But you can only get to the point where you believe such a thing is possible when machines become a crucial and ever-present part of your material conditions.
Put another way: the more you use machines, the more they are integrated into your daily life, the more you use them as mediators between yourself and the world, the more you rely upon them for communication, for entertainment, for comfort, and for your very survival, the more readily you will accept the religious faith of a coming singularity. That’s because your reliance on them begins to blur the difference between the text message and the person who wrote it, the social media profile and the person who created it, the Instagram photos and Tiktok videos and the people who post them.
That’s the real threat of AI’s integration into our lives: not that it will become god-like, but that we’ll eventually not notice we’re treating it like a god. And if we ever reach a point where we can no longer tell the difference between a large language model and a human, it won’t be because AI has achieved general intelligence. It will be because we’ve ceded everything about our lives — including our beliefs — to a mechanical puppet.
Rhyd Wildermuth
Rhyd is a druid, theorist, an autonomous Marxist, and an author of eight books, including the upcoming Here Be Monsters: How to Fight Capitalism Instead of Each Other, from Repeater Books.
He writes primarily at From The Forests of Arduinna
NOTES:
1 An fMRI is the “active” version of the magnetic resonance imaging technology used normally to identify anomalies in the body. The difference is much like that between a photograph and a film: if MRI is a snapshot of the brain at any given point, fMRI is a video of the brain’s activity over a period of time.
2 There is a problem here: there is no way to know for certain that what a person told fMRI researchers they were thinking is what they were really thinking.
3 E.O. Wilson is an ... interesting ... person for them to quote. While never publicly supporting the theory that intelligence and personality are genetically linked to race (“scientific racism”), correspondence published after his death showed he was a strong defender of those who advanced it.
Coincidentally, as I was writing this, the Eastern Orthodox theologian David Bentley Hart also published a discussion of Wilson. He doesn’t address Wilson’s support of race science, but notes his other problems quite well. (See points 8-11). Note that I strongly disagree with his earlier points, however, as his assertion that the death of Christendom also means the death of the last vestiges of paganism is just another version of the progress narrative.
4 The audience even applauds when ChatGPT generates a menu and then a picture of the meal...
5 The absence of studies on this is really strange. Still, I can at least speak for myself. Since I work from home, live in a small village, and only see my husband for a few hours in the evening before he goes to sleep, I have very few embodied human interactions during the day. Most of my friends live in other countries or on other continents, and besides going to the gym or buying groceries, I have very few opportunities to even see other humans except on weekends. So, my percentage of technologically-mediated interactions is much higher than I’d like it to be.
6 I don’t even have my own phone number memorized anymore. In 2010, when I first got a cellphone, I had at least 30 memorized.
7 He, like most atheists, doesn’t seem to notice that monotheisms aren’t the only kinds of theism around.