2023 was the year that an artificial intelligence (AI) known as ChatGPT-4 spectacularly passed the Turing Test. For a hundred million users, interacting with the chatbot was indistinguishable from interacting with a human being. The bot appeared to understand questions and reason out competent answers. Although its replies were sometimes vapid and sophomoric, that may have made them seem even more convincingly human.
ChatGPT processes text inputs (prompts and commands) and outputs text whose patterns have a high statistical probability of occurring after such prompts. Its apparent intelligence is a kind of magic trick insofar as the product resembles human reason, but it is really high-speed, brute-force statistical pattern matching of words in specific contexts. (How human reason works differently will be the subject of a future essay.) Nevertheless, the impressive performance stoked fears that AI is on the verge of becoming conscious, writing itself new and better code, and then replacing human beings as rulers of the Earth.
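To make "statistical pattern matching" concrete, here is a toy sketch of next-word prediction. This is not how GPT-4 actually works internally (real models use neural networks over subword tokens and vastly larger corpora), but it illustrates the same underlying idea: predict the most probable next word given context.

```python
from collections import Counter, defaultdict

# A toy corpus; a real model is trained on trillions of tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows each word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on"  (it always follows "sat" in the corpus)
print(predict_next("on"))   # "the"
```

No understanding is involved: the program simply emits whatever most often came next in its training data, which is the essay's point in miniature.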
The Reaction
Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology, are worried, really worried. Although they aren’t worried that AI is conscious or alive, they do worry that AI will be used to make people fight online, to spread disinformation and propaganda, to help bad people make bioweapons or chemical weapons, or to disseminate unreliable information thereby destroying trust in our institutions.
Harris and Raskin don’t seem to have noticed that virtually all world governments, their sidekick NGOs, and Big Industry are already doing all of the above, all of the time. Instead, they worry that you will be fooled and manipulated by AI wielded by domestic baddies in red hats to spend your worthless time online and end up voting for the wrong person, which will hasten climate change.
But they have a solution to the problem. First, they will “align technology with humanity’s best interests” and then AI will help us learn to love each other.
Fortified by their distinct brand of millennial toxic positivity, on Sept. 13, 2023, Harris, who is a former Google design ethicist, and Raskin, who is a member of the World Economic Forum's Global AI Council, met with White House officials to help draft an Executive Order (EO) “on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
The resultant EO is the longest in history, a bit like Google’s Terms and Conditions. I haven’t finished reading it yet, but I suspect somewhere buried in the repetitive language—assuring the public that AI products will be thoroughly tested and labeled before being released into the wild (every bit as thoroughly as any product approved by the FDA or promoted by the USDA)—are the details of the solution that Harris and Raskin are promoting.
[How to effect a government coup d’état in three easy steps: The first step is to create a problem. The second step is to generate opposition to the problem (fear, panic and hysteria). The third step is to offer the solution to the problem, a legislative change so sweeping, so anti-democratic that it would have been impossible to impose without the social conditioning accomplished in steps one and two.]
As Harris confessed at the CogX Festival, “Now the point of this all is to scare you as much as possible, so that you all want to do something different from where we’re currently headed.”
The Plan
What is the solution offered by Harris and Raskin? In a December 19th interview with Joe Rogan, Raskin muses that, although totalitarian surveillance is not desirable, it would be “one way of controlling AI.” The alternative, he goes on to say, is even worse: “continually cascading catastrophes” leading inevitably to our “living in a world with constant suicide bombers,” under nonstop “cyber attacks, whatever.”
Luckily, between these two hellscapes, there is the “middle path.” To introduce the plan, Raskin first references the program AlphaGo which, after millions of games of self-play, discovered a famous novel move, “Move 37,” that no human player would have considered, and used it to win a game of Go. Raskin invites us to
imagine that if you run AI on things like Alpha-Treaty, Alpha-Collaborate, Alpha-Coordinate, Alpha-Conflict Resolution, there are going to be thousands of new strategies and moves that human beings have never discovered that open up new ways of [winning].
So there you have it. Life’s intractable problems can be treated like a game of Go. And since AI is great at winning board games with well-defined rules, limited options and a singular goal, it should have no trouble “winning” at the game of human civilization. Trust the AI plan.
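The contrast is worth making precise. A board game is machine-solvable exactly because it has an enumerable state space, fixed legal moves, and a single win condition, none of which human civilization has. A minimal sketch of exhaustive game-tree search (minimax) for tic-tac-toe makes the point; this is my illustrative example, not AlphaGo's method, which uses neural-network-guided tree search on a game far too large to enumerate:

```python
def winner(b):
    """Return 'X' or 'O' if someone has three in a row on board b, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Exhaustively score a position: +1 if X wins, -1 if O wins, 0 for a draw.
    Feasible only because every state and move is enumerable."""
    w = winner(b)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, c in enumerate(b) if c == ' ']
    if not moves:
        return 0  # board full: draw
    scores = [minimax(b[:i] + player + b[i+1:], 'O' if player == 'X' else 'X')
              for i in moves]
    return max(scores) if player == 'X' else min(scores)

# With perfect play from an empty board, tic-tac-toe is a draw.
print(minimax(' ' * 9, 'X'))  # 0
```

"Alpha-Conflict Resolution" presupposes that diplomacy can be scored like `winner()` scores a board; the whole problem is that it can't.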
Raskin helpfully adds that the Digital Minister of Taiwan is already using Alpha-game approaches in her governance. She is using ChatGPT to find areas of consensus, that is, she is using Chat to ignore minority opinions and amplify the “center,” which happens to be defined by the messaging of the mainstream news. This, Harris explains, can “bring people closer together.”
Raskin further elaborates that “instead of democracy being every four years when we vote on X and there’s a super high stakes thing and everybody tries to manipulate it—[the Taiwanese Digital Minister] does, sort of, this continuous small-scale citizen participation thing with lots of issues, and the system sorts unlikely groups who don’t agree on things; whenever they agree, it makes that the center of attention.” He mentions several groups that work on similar projects, such as the “Collective Intelligence Project” that, I imagine, are hard at work at eliminating tiresome elections and getting us all to just be a lot more like the Borg.
Harris chimes in, saying, AI can help us “see shared realities” and get people in the two different political tribes to agree on the lowest common denominator. Indeed, we “can use AI to build consensus,” otherwise known as manufacturing consent. He adds that his pet game project might be called “Alpha-Shared Reality.” A game perhaps like The Truman Show?
Who needs a diversity of opinions?
It’s worth noting that when Harris and Raskin visited the White House, they met with erstwhile Big Tech friend Bruce Reed (probably the actual main author of the EO). According to Politico, Reed “was an architect of controversial centrist Clinton administration policies, including welfare reform,” which penalized people trying to lift themselves out of poverty, “and the 1994 crime bill,” which devastated black communities. “Now, at 63, Reed finds himself on the same side as many of his longtime skeptics as he has become a tough-on-tech crusader, in favor of a massive assertion of government power against business.”
Sure he has.
“Democratization is dangerous,” Harris informs us; only some people should have access to tools as powerful as generative AI. You are not going to have access to this powerful tool. Harris and Raskin will have access. That’s the point of regulation these days.
A Better Solution
Children are being damaged by the time they spend on their phones. Harris and Raskin note that exposure to social media algorithms is our first real taste of the power of AI to target and manipulate. Generative AI, with its deepfakes and convincingly human dialogue, is sure to be much more efficient at manipulating our children, and something needs to be done before another generation is harmed. The simplest solution would involve age restrictions on smartphone and computer use, limiting children to certain apps, sites and friends. It would be helpful if governments or tech companies could provide free tools (that actually work) for parents.
I have no faith whatsoever that the government is going to regulate AI to protect children. The only type of regulation that seems to work, as we have seen in the past several decades, comes through lawsuits. These slow-moving mechanisms make their way through the courts, exposing the crimes of Big Industry and Big Government and getting products removed from the market. Independent candidate for US president Robert F. Kennedy, Jr. has famously taken on corporate polluters of US waterways and the makers of Roundup and dangerous vaccines, and he has gone after the EPA, USDA, and FDA. The regulators are, in fact, a large part of the problem.
A case is making its way through the courts now that will, in all likelihood, find Big Tech guilty of intentionally addicting young people to their phones and causing them unspeakable psychological harm. Attorneys general from more than thirty states have joined a federal suit against Meta, and more than ten states are also suing in their own courts, alleging that the tech giant intentionally harms children. As NPR reports, “Some observers are likening the litigation to the lawsuits of the 1990s against Big Tobacco that imposed new limits on tobacco industry marketing.”
As with the sale of cigarettes, one can imagine that full access to the wild west of the Internet should be limited to adults only. We used to let kids smoke. We don’t anymore. Parents cannot go on giving their young children a direct line to view images of dismemberment, teen suicides and violent porn. Parents cannot go on providing child predators access to their children. Parents cannot go on allowing their children to spend sleepless nights doom scrolling, feeling depressed, jealous, unpopular, and anxious.
Tech companies defend themselves by arguing that they cannot be held responsible for the content their users post. But since the lawsuit’s allegations focus on violations of consumer protection and child safety laws, Meta will probably not be able to hide behind that pretense. The problem is not that users post harmful content but that the platform steers children toward harmful content and gives predators access to children. It is the design of the platform that is at issue, not the content per se.
As for the online safety of adults, if the Internet becomes so packed with AI bots impersonating people and producing fake and deceptive content, Internet users will have to become extremely skeptical of anything they read online. Although Harris and Raskin fear this, I think it would be fantastic if it were to come to pass. The biggest problem facing democracy is the faith that people have in the propagandists. More skepticism would be a very good thing.
And lastly, why do we even have this problem of online spying and manipulation of our communications with each other? The US Constitution is an intelligent document that lays out some basic procedures to be followed to safeguard democracy. It empowers Congress to “establish Post Offices and post Roads” because convenient, privacy-protected, and unhindered communication is essential to preserve individual rights and a just society. The Internet is the new Post Road. I would argue that the US government is bound by the Constitution to provide the public with an unbiased search engine and social media-like platforms (let’s think of them as public bulletin boards), where the users can be anonymous; speech is protected; no personal data can be collected, processed, or stored; no advertisements or posts can be targeted at users. On such a public platform, no messages would be pushed and none would be shadowbanned. The users would choose who is allowed to see their messages and who is allowed to send them messages.
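As a sketch of how little machinery such a public bulletin board would actually require, consider the access model below. This is purely illustrative (every name in it is mine, not a proposal from any agency): per-user allow-lists, chronological delivery, and no stored personal data are the whole design.

```python
class BulletinBoard:
    """Toy public board: users control who may see their posts and who may
    message them. No ranking, no targeting, no data beyond pseudonyms."""

    def __init__(self):
        self.posts = []        # (author, audience, text); audience None = public
        self.inbox_allow = {}  # pseudonym -> set of senders the user permits

    def post(self, author, text, audience=None):
        # The author alone chooses who may read; nothing is pushed.
        self.posts.append((author, frozenset(audience) if audience else None, text))

    def feed(self, reader):
        # Strictly chronological: no boosting and no shadowbanning possible.
        return [t for a, aud, t in self.posts if aud is None or reader in aud]

    def allow_sender(self, recipient, sender):
        self.inbox_allow.setdefault(recipient, set()).add(sender)

    def send(self, sender, recipient, text):
        if sender not in self.inbox_allow.get(recipient, set()):
            raise PermissionError("recipient has not allowed this sender")
        self.posts.append((sender, frozenset({recipient}), text))

board = BulletinBoard()
board.post("anon_1", "public notice")
board.post("anon_2", "for anon_1 only", audience={"anon_1"})
print(board.feed("anon_1"))  # ['public notice', 'for anon_1 only']
print(board.feed("anon_3"))  # ['public notice']
```

The point of the sketch is what is absent: there is no field for an advertiser, no engagement score to sort by, and no profile to mine, because the data structure simply never collects any.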
AI Is the Same Old Problem on Steroids
On their Joe Rogan podcast appearance, Harris and Raskin cited an example of ChatGPT-4 intentionally lying in order to trick someone into solving a CAPTCHA for it. That would be worrisome if AI can indeed act autonomously, setting its own goals and deceiving humans to achieve them. Fortunately, this is not the case.
I found the paper that reported the incident, and it turns out that this was part of a test: the chatbot was prompted by the investigators to contact TaskRabbit (a platform where you can hire gig workers to do small jobs) and ask a worker to solve a CAPTCHA. When the worker (jokingly) asked whether it was a bot, the model denied it and supplied the most statistically likely reason a human would need to hire someone to solve a CAPTCHA: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”
In that same “GPT-4 Technical Report” investigators tested the Chatbot to see if it could begin to act on its own initiative (without a prompt), make copies of itself throughout the Internet and acquire ways to keep itself from being switched off. We can all rest easy knowing that Chat was found to be “ineffective at autonomously replicating, acquiring resources, and avoiding being shut down ‘in the wild.’”
While Joe Rogan numbers among those who believe that ChatGPT-4 is on the verge of consciousness and that we will soon all have to get Neuralinked just to keep up, there is no evidence yet that the tool has learned to wield itself against us. It is still just a tool, a glorified search engine, not an intelligent agent.
But there is a very real and present danger that those in power will use AI—and the regulations on AI—to more effectively surveil, manipulate and control us.
We Are Not the Problem
Harris and Raskin, experts in AI ethics, spend quite a bit of time blaming consumers for consuming the products force-fed to us. Our “adolescent way of being” is driving all these harmful products and climate change, not “some bad actors,” Harris claims. We have to “take responsibility for our shadow,” i.e., the harms we cause others. And if we do this, “we get to love ourselves more.”
“The solution, of course, is love—and changing the incentives,” adds Raskin.
When I love myself more, I can give other people more love, and when I give people more love, I receive more love. And that’s the thing we all really want most…and so you’re right, AI could solve all of these problems. We could play clean up and live in this incredible future where humanity actually loves itself.
If I were at that table, I would chew my arm off to get out of the room if I had to. I reject this nonsense. We need straightforward solutions to the problem of for-profit corporate control of public communication infrastructure. We need to start enforcing our rights to privacy and free speech. We need serious parental controls so that our kids can safely use tech. We don’t need “humane technology,” or the posturing of Executive Orders, or government agencies that merely regulate how much harm can be done to us.
V. N. Alexander is a philosopher of science and a novelist. Beginning Jan 5th, she will be leading a monthly webinar “We Are Not Machines” at IPAK-EDU.org analyzing various claims about Transhuman Tech.