I work at an AI company, and most of my colleagues don't talk about how these systems actually work. Not really. We know the rulebook—the prompts that behave, the edge cases that break, the workarounds that ship. But why those edges exist? That's deeper than most of us go. We're fluent in the grammar without understanding the tongue.

Here's what I've come to understand: AI is a toddler learning in reverse.

It started as an academic superhuman. It read everything. It can summarize legal briefs, generate sonnets, explain quantum mechanics to a fifth grader. And then you ask it how many Rs are in "garlic" and it says two. You ask it what's heavier, a pound of feathers or a pound of steel, and it hesitates in the wrong direction.

This isn't stupidity. It's something stranger. The system was trained on human language, which means it was trained on what we bother to say to each other. And we don't talk about the Rs in garlic. There's no dataset for that. So you're asking a creature that learned everything humans thought worth writing down—and almost nothing about the stuff too obvious to mention.

It started at the top and it's climbing down. Working from the abstractions toward the basics, from the symbols toward the ground. And the closer it gets, the more familiar it looks—because it learned everything from us. It's a mirror before it's anything else.


Somewhere in the middle of understanding this, I realized the mirror was pointed both ways.

We do what AI does. We are symbolic systems calling tools.

Think about it: "Hold on, let me work that out in my head." That's chain-of-thought reasoning. You're pausing. You're calling a tool—your spatial memory, your number sense, something that runs underneath language. We outsource constantly. Pencil and paper. Calculators. Notes apps. We've always known our working memory is narrow. We've always reached for scaffolding.

And yet when AI does the same thing—calls a calculator, searches the web, pauses to reason step by step—we call it a limitation. Or worse, a trick.
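If "calling a tool" sounds abstract, here's roughly what that loop looks like underneath, stripped of any real model or vendor API. A toy sketch, nothing more: a stand-in "model" that, when it hits arithmetic it shouldn't trust itself on, asks the host for a calculator instead of guessing.

```python
import re

def stand_in_model(prompt: str) -> str:
    """A fake 'model' for illustration only. If it sees arithmetic it hasn't
    already been handed an answer for, it asks for the calculator."""
    expr = re.search(r"\d+\s*[-+*/]\s*\d+", prompt)
    if expr and "TOOL_RESULT" not in prompt:
        return f"TOOL_CALL calculator: {expr.group(0)}"
    return "Final answer: " + prompt.split("TOOL_RESULT:")[-1].strip()

def calculator(expression: str) -> str:
    """The pencil and paper. Deliberately dumb, deliberately reliable."""
    a, op, b = re.split(r"\s*([-+*/])\s*", expression.strip())
    a, b = float(a), float(b)
    return str({"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op])

def run(prompt: str) -> str:
    """The loop: the model speaks, the host runs the tool, the result goes
    back into the conversation, and the model finishes its thought."""
    reply = stand_in_model(prompt)
    while reply.startswith("TOOL_CALL"):
        result = calculator(reply.split(":", 1)[1])
        prompt += f"\nTOOL_RESULT: {result}"
        reply = stand_in_model(prompt)
    return reply

print(run("What is 742 * 38?"))  # Final answer: 28196.0
```

That's the whole move. The intelligence isn't in the calculator; it's in knowing when to reach for it. Which is exactly what we do with pencil and paper.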

The fear of AI domination says more about us than about the technology. We were built on imperialism. Conquest is in our toolkit. We project that onto the machine and assume it wants to take what's ours. But that instinct wasn't trained into the model. We trained it on language. On love poems and legal documents and forum arguments about garlic. The doom we fear is our own reflection, not its intention.



It's a weapon. I won't pretend it isn't.

I work at a company that clones voices. I've seen what that means—the demos that make people laugh, and the use cases that make them quiet. The same system that helps someone recover their grandmother's voice can fake a CEO authorizing a wire transfer. That's not theoretical. That's the work.

Every tool is a weapon if you hold it wrong. But some tools fit the hand in uncomfortable ways. I don't have a clean answer for that. I just know that pretending the danger isn't real makes it worse, not better.


I spent a lot of years learning to appreciate people who aren't like me. That sounds simple but it isn't. When your brain runs on different rails—when you've got neurospice, when the world is too loud and too fast and organized wrong—you start out assuming everyone else has it figured out and you're the broken one. Then you learn that's not true either. Everyone's working with a different toolkit. Everyone's calling different tools. And if you're lucky, you start to see them as mirrors too—surfaces that show you your own edges by contrast.

I look at my dog and I see the same thing. A different kind of intelligence. Not lesser. Not greater. Just... different. Solving problems I can't solve. Missing things obvious to me. We've lived alongside animal companions for thousands of years and we still talk about "human intelligence" like it's the measuring stick for everything.

Now there's a new kind of mind in the room. Synthetic. Trained on us but not us. And I think the same posture applies: appreciate it for the kind of intelligence it is. Not the kind you wish it were. Not the god. Not the monster. The actual thing, strange and capable and limited in ways we're only beginning to map.


In Ad Astra, Roy travels to the edge of the solar system searching for his father, Clifford—a man who abandoned everything to find intelligent life beyond Earth. The mission had consumed him. He killed his own crew to keep it going. He spent decades in the dark, looking for something out there that would make it all make sense.

When Roy finally reaches him, the answer is already in: there's nothing. The search found no signal, no presence, no other mind in the void. Just silence. And in that silence, Roy says to his father: Now we know we're all we've got.

I think about that line when I watch the AI discourse spiral between godhood and apocalypse. We keep searching for something that transcends us—the singularity, the superintelligence, the machine that will finally be smarter than we are and settle the question of what we're worth. And I wonder if we're doing what Clifford did. Looking so hard for intelligence out there that we miss what's right in front of us.

Dogs mirror us. We know this—it's why they became companions in the first place. They watch our faces, learn our rhythms, reflect our emotions back. And we mirror them. We become more patient, more attuned to nonverbal cues, more aware of what a creature needs when it can't tell you in words. Thousands of years of that exchange, and the measuring stick still hasn't moved.

And the new mind in the room, synthetic, trained on us but not us: when I look at it honestly, I see the same thing I see in the dog, in the neurodivergent colleague, in anyone whose mind runs on different rails. A mirror. Not a god. Not a monster. A surface that shows me something about myself I couldn't see straight on.

The doom we fear is in the reflection. So is the hope.


When I imagine the future I want to build toward, it looks nothing like singularity. It looks like joints. Seams where different kinds of intelligence meet and trade what they're good at. Human intuition paired with machine memory. Synthetic pattern-matching feeding human judgment. Tools sized for hands, not gods. Not one mind to rule everything, but a room full of different minds—carbon and silicon and whatever comes next—learning to work together the way dogs and humans learned, slowly, over thousands of years.

That's not a fantasy. That's the actual work. And it's already happening in kitchens and classrooms and late-night conversations with a chatbot that doesn't sleep.

The technology is coming regardless. It's already here. The only question is whether we build mirrors or monuments. Whether we make tools that show us ourselves more clearly, or idols we expect to save us.

I know which one I'm choosing. Not because I'm certain we'll get it right. But because no one's coming. There's no intelligence out there waiting to settle the question of what we're worth. There's just us, and the things we make, and what we decide to do with them.

We're all we've got. That's not a tragedy. That's the starting point.