Warning: Mild spoilers for a book written a decade ago that you probably should have read by now if this truly concerns you. But if it does, stop everything, go read Annihilation, then come back. We’ll wait.
In a mysterious swampland called Area X, not far from the St. Marks National Wildlife Refuge, there is a moldering tower buried deep in the ground. And inside that tower is a moist, crawling thing that creeps along the tower walls, exuding a soft moss that blossoms into letters.
Inside the tower, the beast writes endlessly, an apocalyptic string of words:
"Where lies the strangling fruit that came from the hand of the sinner I shall bring forth the seeds of the dead...."
The Crawler isn't the primary focus of Jeff VanderMeer's Annihilation. But it occupies a significant portion of the narrative, and it's perhaps the book's most thematically iconic element - that quasi-sentient beast representing the sum of what Area X does to people, plants, and landscapes alike.
And even though Jeff VanderMeer couldn't possibly have known this when he wrote the book back around 2012...
Annihilation is a perfect metaphor for Artificial Intelligence as we know it today.
So let's dip into the mind of the Crawler, and into both VanderMeer's Area X and Alex Garland's film adaptation of it, and talk about... refraction. With minimal spoilers, of course.
A Quick And Painless Advertisement
But before we do, speaking of apocalyptic strings of words, have you pre-ordered my latest book The Dragon Kings of Oklahoma? If you loved Annihilation, well.... it's absolutely nothing like that. It's a light, quick comedy where The Tiger King meets baby dragons, with a strong focus on realistic characterization.
It's only $2.99! And it comes out in four weeks! And you ordering it would really help bump its PR, if you felt like doing me a solid.
And if you don't... well, this short, regrettably-necessary advert is finished.
Let's get down to AI.
Like I Could Actually Tell You How Area X Works
Here's the thing you must understand about Area X, for purposes of this discussion:
Anything that enters Area X (or its cinematic counterpart, the Shimmer) gets blended, infiltrated, mutated into the things around it. Nothing biological inside Area X retains its integrity. Everything slurs into everything else, a process the movie calls "refraction."
So multiple types of flowers bloom on the same vine, or dolphins acquire pleading human eyes (or possibly are humans that unwillingly got merged into dolphins, who's to say?). Plants absorb human Hox genes - the genes that control the positioning of limbs - and sprout into disquietingly human-shaped bushes. Doppelgangers bloom without warning, lacking the full consciousness of their targets - just a dull mass of instincts and fragmentary memories that makes them trudge miles back towards their homes.
This refraction is not an instantaneous process; it takes a few days, and it's pretty hard on the poor military teams sent in to investigate the area. The longer they stay, the less human they become - and the more they become the sum of the things around them, spread out among many creatures.
If that sounds horrific, well, it kind of is - but like nature itself, it's as beautiful as it is brutal. And it's tempered a bit by the fact that, well, it doesn't appear to be personal.
The characters often wonder what Area X wants. The ones who seem to understand Area X the best believe that it doesn't want anything at all.
Area X just... is. It doesn't have any relation to thinking as we know it. It grows steadily, and the movie explicitly (and a bit ham-handedly) likens it to a cancer. But unlike cancer, which at least behaves as if it wants to grow, we're not even sure Area X desires that much.
Is Area X intelligent?
Is the act of doing things the same as wanting things?
The later two books (which hadn't come out when Alex Garland began his extremely loose adaptation) complicate this picture a bit, with some characters believing there is some guiding force... But let's stick with the first-book interpretation to find the metaphor.
Area X is AI.
And if you don’t know AI, you might be asking… why the hell is that true?
What Does A Red Squiggle Represent?
Let’s switch back to the real world - to an anecdote I heard recently, in which an expert in his field was dealing with a teenage relative. The kid often asked the expert to look over his school papers for grammatical errors before he turned them in. Fair enough.
One day, the kid got a chance to correct his older relative’s paper. And he said, “Wow, you make a lot of errors! I corrected them for you, though.”
The expert looked over the document. “Those weren’t errors. They were technical terms that aren’t in Word’s dictionary.”
“Really?” the teen exclaimed. “I thought Word knew that the red squiggles were wrong!”
That’s terrifying, if you think about it: Word doesn’t “know” anything. Word has a rough ruleset it applies to any document you feed it - and it can tell you what’s unexpected. And for most casual writing, “unexpected” and “bad” overlap enough that flagging the unexpected catches most of the errors.
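(To make that concrete, here’s a minimal, entirely hypothetical sketch of what a red squiggle amounts to - the word list and the squiggle() function are invented for illustration, not how Word actually works.)

```python
# A toy "red squiggle": flag anything not on a known-word list.
# KNOWN_WORDS is a made-up stand-in for Word's real dictionary and rules.
KNOWN_WORDS = {"the", "fruit", "came", "from", "hand", "of", "sinner"}

def squiggle(text: str) -> list[str]:
    """Return the words the checker has never seen. 'Unexpected' - not 'wrong'."""
    return [word for word in text.lower().split() if word not in KNOWN_WORDS]

# A perfectly valid technical term still gets flagged, because the checker
# only knows "not on my list", never "incorrect".
print(squiggle("the fruit came from the epigenetic hand of the sinner"))
# ['epigenetic']
```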
But the teen believed that Word had a working mind, like a human - that it was reading and parsing each sentence as a human expert would. And therefore, Word was…
Well, kind of an infallible God. You don’t question Word: you just obey the machine.
…which, okay, to be fair, is absolutely how I treat my GPS when I’m driving. I have zero sense of direction. But I know the GPS isn’t actually thinking as we understand it - it’s just got a lot of rules and a lot of data it can bring to bear to get close to a good answer.
Yet the GPS has no investment in my outcome. We’ve given it a puzzle; it gives us an answer. It doesn’t understand what that answer represents. It doesn’t care what that answer represents.
Is the act of doing things the same as wanting things?
Doing vs. Wanting
Now let’s unveil another tiny spoiler about Area X’s Crawler - no worries, Annihilation is a book of deep mysteries; this tiny thing won’t ruin it for you.
But that apocalyptic writing?
It’s nonsense. Here’s a fuller excerpt:
“Where lies the strangling fruit that came from the hand of the sinner I shall bring forth the seeds of the dead to share with the worms that gather in the darkness and surround the world with the power of their lives while from the dimlit halls of other places forms that never were and never could be writhe for the impatience of the few who never saw what could have been….”
That sound familiar? I mean, obviously it’s Biblical-ish. Yet even though the words all match up tone-wise, there’s no overarching point to them; individual phrases never add up to a complete thought; there’s never a paragraph break; there’s never an ending. It has the cadence of a sermon, but it doesn’t contain a point.
What it sounds like is early AI attempts to write new fiction.
And I hate to break it to you, but most of those viral threads you’ve seen of “AI writes a Seinfeld episode!” are complete lies. Some of them had some AI in the loop, but what actually happened was that comedians picked the amusing phrases out of the “where lies the strangling fruit”-style gobbledegook that a model trained on Seinfeld scripts spat out, then gave it structure so it’d be absurdist-funny instead of just random. (Or they just made it up entirely.)
The actual AI-written Seinfeld episodes? As unreadable as the strangling fruit that came from the hands of the sinner.
The big secret to AIs is that they’re jumped-up autocomplete. You know when you’re typing “pat” into your phone and it decides that you mean “park” and not “patio”? That’s merely the phone consulting its records and saying “In this framework, given this algorithm, the thing that most people ultimately finished typing was X.”
If you have a really refined autocomplete, it can look at the full document you’ve written thus far and dope out that “hey, if the word ‘yard’ appears in here, this next word is more likely to be ‘patio’ instead of ‘park.’”
These refinements can get very complex. But at no point does the autocomplete understand what a patio is - it only knows that the five-letter string “patio” is a value that sits in close proximity to other values. There’s no point at which it attempts to understand what the person writing the document is trying to accomplish! Doesn’t matter whether the author is writing a heartfelt love letter or a complaint to a manager… to the algorithm, the words only matter in terms of their frequencies and their relations to other words.
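If you want a feel for how little “understanding” that takes, here’s a deliberately tiny sketch. Everything in it - the counts, the context_boosts table, the suggest() function - is invented for illustration; it’s not anyone’s real autocomplete, just the shape of the idea.

```python
# A toy "jumped-up autocomplete": pick the completion of "pat" that people
# most often finished typing, nudged by other words already in the document.
# All numbers here are invented; a real system learns millions of them.
from collections import Counter

completions_for_pat = Counter({"park": 3000, "patio": 1200, "patch": 400})

# "If the word 'yard' appears in here, this next word is more likely 'patio'."
context_boosts = {
    "yard":    {"patio": 5.0},
    "traffic": {"park": 2.0},
}

def suggest(prefix_counts: Counter, document_so_far: str) -> str:
    """Score = raw frequency times any context boost; return the top score."""
    scores = dict(prefix_counts)
    for word in document_so_far.lower().split():
        for completion, boost in context_boosts.get(word, {}).items():
            scores[completion] = scores.get(completion, 0) * boost
    # The winner is just the biggest number. Nothing here knows what a patio is.
    return max(scores, key=scores.get)

print(suggest(completions_for_pat, "We finally cleaned up the yard"))  # patio
print(suggest(completions_for_pat, "Traffic downtown was terrible"))   # park
```

Swap the hand-written counts and boosts for a few billion learned parameters and you get the modern version: vastly bigger, but still, at bottom, scoring values against values.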
AI is autocomplete logic taken to the extreme. You feed an AI with sample data, give it a couple of prompts to poke its autocomplete towards a conclusion, and it creates legitimately amazing things. But all of that is lacking one huge thing:
AI has no concept of “truth.”
When Google’s AI tells you to mix 1/8 cup of nontoxic glue into your pizza sauce to keep the cheese from falling off, that’s not a prank; somewhere in this particular AI’s data was a shitpost on Reddit, and the algorithm unknowingly regurgitated that shitpost as the best match for this pizza issue.
You know the easiest way to fix that bad match?
You exclude that shitpost from the data the AI is allowed to see.
When everyone’s “refining their AI models,” they’re basically asking “If we feed the AI a different set of data, will it give us results closer to what we view as correct?” And that’s an oversimplification, of course - there are ways to tweak an AI’s output without changing the training data, and those have seen a lot of refinement - but in general, the most impactful way to stop a jumped-up autocomplete from suggesting a bad idea is to never let it see the bad idea as a potential suggestion.
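In code terms, that “fix” looks less like teaching and more like curation. Here’s a hypothetical sketch - the posts and the looks_like_a_shitpost() heuristic are made up, standing in for the much messier data-cleaning pipelines real companies run:

```python
# "Refining the model" often just means deciding what it's allowed to read.
RAW_SCRAPED_POSTS = [
    "Simmer the sauce for ten minutes before adding the cheese.",
    "just mix 1/8 cup of glue into the sauce lol, cheese never slides off",
    "Let the dough rest overnight for a better crust.",
]

def looks_like_a_shitpost(post: str) -> bool:
    """A crude stand-in for the filtering real pipelines do at huge scale."""
    banned_terms = ("glue", "lol")
    return any(term in post.lower() for term in banned_terms)

# The model never learns that glue is inedible; it simply never sees the
# glue post, so it can't autocomplete its way back to it.
training_data = [post for post in RAW_SCRAPED_POSTS if not looks_like_a_shitpost(post)]
print(training_data)
```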
(And fun fact: AIs can’t be trained well on other AIs’ outputs. Big companies are discovering that they need human-written posts, because training an AI on Google AI suggestions is like making a Xerox of a Xerox, faded and unusable. In other words, there’s something in the way we humans produce information that is inherently valuable… at least for now.)
But when you imagine an AI as “thinking” instead of “autocompleting,” you buy into the idea that if we apply Artificial Intelligence to police departments, AI can figure out who’s going to commit a crime! When in actuality, all that AI will do is take input data about who’s been arrested or convicted of a crime, with all of the buried classism and racism involved in those outcomes, and autocomplete an answer that subtly replicates that same classism and racism.
AI isn’t a new way of thinking; it’s a way of extending old, human biases.
And AI’s inability to think wouldn’t be a problem if the Tech Bros What Be weren’t trying to pivot an AI match into a source of truth… like, say, Google using AI to give answers it wants you to believe are valid.
Is it valid? Sometimes. Depending on the data.
Which may not be the best algorithm, but it is a damn fine sales technique.
Back to Area X again!
The problem with Area X, at least from our perspective, is that it doesn’t understand boundaries. The rules (such as they are) go something like, “Those two things are next to each other, therefore they’re like each other.” And then you get human-eyed dolphins and shark-toothed alligators - interesting fusions that, if you didn’t know how these animals actually worked, would look a lot like the actual thing.
But they’re not. They’re just sort of… aggregated sums.
Area X is not quite an area of unreality, but it’s definitely not good for humans, because there’s no fundamental axis at its center. Everything shifts based on the inputs. There’s no truth at its heart - there’s just a lot of coalescing.
Kiiiiinda like AI as we currently utilize it.
And there’s nothing necessarily wrong with that, as long as we don’t make the mistake of believing there’s a thought process behind AI. AI is useful for complex pattern detection, one of our most critical survival skills! It can be an amazing assistant! I mean, a super-powered grammar checker applied to X-rays, helping a radiologist double-check his work for cancerous anomalies… well, the benefits are obvious.
But even with a very quote-unquote “smart” AI, you can’t just point at the red squiggle underneath a word in your document and go, “That’s wrong.” (Or worse, see no red squiggle and assume it’s right.) You need to bring in some actual logic and context to ask, “This machine sees an anomaly - is that anomaly actually an error?”
Yet that questioning step is exactly the thing the big AI companies are leaving out as they roll AI tech out to give real-world answers. They’re just shoving out interesting tech that will reduce their reliance on us pesky “humans” and our “thoughts”… and if that tech has serious gaps in what it can actually do, well, does that matter?
The Crawler lurks deep inside Area X. And like AI, the Crawler is amazing, a biological wonder, damn near miraculous. The Crawler outputs words that look meaningful… but “meaning” requires some underlying thought process trying to express something, and in this case there’s nothing being expressed. The Crawler is, at best, a handful of human impulses filtered through a mishmash of other inputs.
And like AI, Area X is spreading with each passing year. It’s driving humans mad. And that, alas, is the real comparison - there are some zones of unreality that we cannot afford to take at face value, and yet we are increasingly confronted with them.
If we buy into current-day AI as representing truth as opposed to a sum total, well, we’re doomed.
Because the act of doing things is absolutely not the same as wanting things.
Anyway: Oh, no! Approximately 5% of this newsletter has been ads for my book, and as such AREA X IS MELDING THE END OF THIS ARTICLE WITH A CHEAP PLUG FOR THE DRAGON KINGS OF OKLAHOMA AGAIN, THE HORROR, THE HORROR