I’ve been fretting about the coming AI revolution for a decade now. It started when I realized that the biggest threat to the human body
was going to be not climate change or political turmoil but the
persistent human weakness for tech wizardry. In 2014 there were only 6
people in the world paid full-time to try to prevent AI from wiping out
humans (according to the philosopher Nick Bostrom). That year I did my TED
talk on The Erotic Crisis
about my fears. But then journalists finally started asking what I
thought were the right questions: not “Will AI kill us?” but “What
effect will AI have on human flourishing?” So I felt I could stop
obsessing about it and return to artmaking.
[Image: Self portrait in my AI-generated studio]
Now
we’re faced with the game-changing appearance of AI generators, which
“create” brand new text or images from human prompts. The results are
eerie and downright frightening (as when Bing’s Sydney insisted that Kevin Roose loved it, not his wife!). Will humans be obsolete now? Not yet, but it is looking more dire for us every day.
“Alignment”
(of AI with human values) has been an ongoing concern since the beginning
of AI. The final singularity will be the objective one, when AI actually
does permanently overtake humans. But I contend that no one will really
know when that happens. It will make little practical difference,
because that will
be preceded by three others that will mark the end of human dominance
on Earth:
The first Singularity will be Relational,
when humans will no longer be able to tell whether they are dealing with a
machine or a person. This is already happening in customer service and
other such applications where it really doesn’t matter much whether the
voice on the other end is human as long as your problem is solved. But
where the relationship itself matters most, such as between a
political representative and her constituents, or an AI posing as an
intimate, this will be catastrophic.
The second one will be Economic,
the point where AI so disrupts employment that vast sectors of the
population are made obsolete, as the jobs that have supported us forever
are replaced with AI programs that can do the same work, without breaks
or any requirements other than power, for mere pennies.
The third will be a Political
one, when a certain large portion of the population reacts against the
strangeness of a newly unfathomable world where humans have lost
control. They will blame whomever they hate most (immigrants, Democrats,
techies) and, fueled by social media wildfires, launch a war against
the perceived perpetrators, regardless of facts. When AI becomes
misaligned with “human values,” will anyone anywhere be able to tell?
And of course the final singularity is Loss of human agency,
but by that point it will forever be unknowable, which hardly matters.
Once civilization is controlled by an invisible hand, humans will simply
not matter. “The AI does not hate you, nor does it love you, but you are
made out of atoms which it can use for something else,” as Eliezer Yudkowsky (one early AI safety guy) chillingly put it. Furthermore, humans will only be able to tell in retrospect, when we realize that at some point we lost all control.
Much
of this is beginning to sound familiar now. I believe the dangers are
so broad and so hidden that hearing about any specific one hardly matters, since so many more lie concealed or even
beyond human comprehension. Still, we should all be aware of the threats
and take them seriously!
AI is being developed
without controls by competitors for an unbelievably huge prize, a recipe
for certain destruction. Even if all parties know it’s a race to doom,
every one of them would rather be first than see the other guy win. This
is fixed human nature, I’m afraid. Since the capitalist market is now
our god, greed will be our downfall. In such an environment, AI will
steadily grow in capacity while humans will only defend those places
where we can see our own weakness. AI will overtake humans not in the areas
we imagine, but in the places we never thought of, since it will operate in
ways that never occurred to us! It won’t be until after the takeover,
if ever, that humans will finally see where our weaknesses actually lie.
More likely, we’ll never know how we lost that battle. That’s of course
too late.
I don’t see any remedy, other than a full stop, which Yudkowsky recommends.
More skepticism and more regulation of AI will help slow the
crisis. But like Narcissus, we might just die transfixed by our own
reflection.
I hope we can all apply our humanity to
this problem. We need all of us. In the meantime, take great refuge in
your relationships. Human relationships are what make life worth living.
Store up your treasure there!