I caught myself the other day. I had a problem—not a complicated one, just something I needed to think through—and before I'd spent even thirty seconds with it, I'd already opened Claude. I was reaching for AI the way you reach for your phone when you're bored. Automatically. Without deciding to.
That scared me.
I work at ElevenLabs. We make anything and everything in AI audio. And we're good at it. I spend my days helping enterprise customers figure out how to use our technology. I believe in this technology—genuinely, not just professionally.
But lately I've been worried about something I can't quite name. It's not that AI will take my job. It's not that it'll become sentient and enslave us. It's something smaller and more corrosive. I'm worried about what happens to the parts of me that I keep handing over to machines.
The standard AI discourse gives you two options: breathless optimism or existential dread. The accelerationists want to remove all guardrails because progress. The doomers want to stop everything because risk. Both positions are lazy. They let you skip the actual work of figuring out what you're building and why.
Here's where I've landed: the people selling AI should be the most worried about how it's used. Not because the technology is bad. Because it's good enough to cause real damage if we don't set boundaries.
The danger isn't that AI will replace human thinking. The danger is that we'll use it as a calculator for our cognition—a calculator for our logic, a calculator for our emotions, a calculator for our instincts—until the original machinery atrophies from disuse.
There's a concept in Frank Herbert's Dune that I keep returning to.
If you've only seen the movies, you know the surface: desert planet, spice, giant worms, Paul Atreides becoming something more than human. What you might not know is the backstory. Ten thousand years before the story begins, humanity fought a war called the Butlerian Jihad—a crusade against thinking machines.
They won. And then they made a law: Thou shalt not make a machine in the likeness of a human mind.
Here's what's interesting. The Dune universe isn't anti-technology. They have interstellar travel. They have shields that stop any fast-moving object. They have stillsuits that recycle every drop of moisture from your body. The prohibition was specific and narrow: no machines that think like humans. No artificial minds.
And here's what's more interesting: in the void left by thinking machines, humans developed themselves.
The Mentats—human computers trained from birth to process information, calculate probabilities, and provide strategic counsel without the use of machines. The Bene Gesserit—women who mastered their own biology, psychology, and perception so completely that their abilities seemed supernatural. The Spacing Guild—navigators who evolved to fold space itself using nothing but their minds and the spice that expanded their consciousness.
Herbert wasn't writing a Luddite fantasy. He was asking a question: When machines handle the hard cognitive work, what happens to human capability? Do we develop further? Or do we let those capacities die?
The Butlerian Jihad wasn't anti-technology. It was anti-atrophy.
Sixty years later, that question has teeth.
I've been running an experiment on myself.
The premise is simple: I'm being deliberate about where information lives based on what I want to happen to it.
Some things I hand to AI without hesitation. Summarizing long documents. Drafting initial versions that I'll rewrite. Processing large volumes of information into something usable. Synthesizing Slack threads into briefings. That's machete work. AI should do it. That's what it's for.
But for the knowledge I actually need to retain—the strategic understanding of my accounts, the patterns I'm trying to internalize, the decisions I need to own—I keep that in a leather notebook. I write it by hand. And when pages are no longer relevant, I pull them out and archive them. What survives gets rewritten into the current system.
This sounds precious. I know. But the research backs it up: the friction of handwriting engages different neural pathways than typing. The act of compression—of deciding what's worth the effort to transcribe—forces you to actually think about what matters. What I write by hand, I remember. What I delegate to a search function, I forget.
I'm not telling you to buy a fancy planner. I'm telling you that I noticed my own thinking getting thinner, and this is what I did about it.
The Mentats didn't reject technology. They trained themselves to be the technology—but only for the things that mattered most. The rest they delegated without guilt.
That's the balance I'm trying to find.
So what does responsible AI actually look like in practice?
I work at a company that's trying to figure this out. ElevenLabs built voice technology with consent frameworks baked in from the start. Voice actors who use our platform get compensated. We give free licenses to people with ALS, cerebral palsy, mouth cancer—conditions that have taken their ability to speak. We work with governments and medical communities on responsible deployment.
This isn't marketing. It's architecture. The ethics aren't bolted on after the fact. They're load-bearing.
Linear, the project management company, published something they call the Agent Interaction Guidelines. It's the clearest framework I've found for thinking about how AI agents should operate in the world. Three principles that matter:
An agent should always disclose that it's an agent. When humans and AI work side by side, humans need instant certainty about who they're dealing with. No ambiguity. The boundary stays visible.
An agent should respect requests to disengage. When you tell it to stop, it stops. Immediately. No negotiation, no "are you sure," no friction.
An agent cannot be held accountable. An agent can execute tasks, but final responsibility remains with a human. Always.
Not because the AI is unreliable. Because accountability is a human function. It's one of the things you don't get to hand over.
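If it helps to see those three rules as structure rather than prose, here's a rough TypeScript sketch of what they might look like baked into an agent's session. The names here (AgentSession, handleMessage, and so on) are mine, not Linear's, and this is an illustration of the boundaries, not an implementation of anyone's product.

```typescript
// Hypothetical sketch of the three principles; all names are invented for illustration.

interface AgentIdentity {
  isAgent: true;              // Principle 1: the agent always discloses that it is an agent
  displayName: string;        // e.g. "Triage Agent", never passing as a person
}

interface AgentSession {
  identity: AgentIdentity;
  accountableHuman: string;   // Principle 3: a named human owns the outcome, always
  stopped: boolean;
}

// Principle 2: a stop request takes effect immediately, with no confirmation step.
function handleMessage(session: AgentSession, message: string): string | null {
  if (session.stopped) return null;

  if (/\bstop\b/i.test(message)) {
    session.stopped = true;   // no "are you sure", no negotiation
    return null;
  }

  // Every reply is visibly attributed to the agent, with its human owner attached.
  return `[${session.identity.displayName} | agent, owner: ${session.accountableHuman}] ...`;
}
```

The point of writing it this way is that the boundary isn't a policy document somewhere; it's in the data structure. You can't construct a session without naming the human who answers for it.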
I've started calling this a Greenpunk future—borrowed from a niche genre that contrasts with cyberpunk dystopias. Think Dune, Mad Max, Wall-E, Avatar. If you're an anime fan: Nausicaä of the Valley of the Wind and Princess Mononoke. The phrase sounds strange, but it captures something I don't know how else to say.
Greenpunk isn't accelerationism—it's not "move fast and break things" applied to how humans think. It's not treating optimization as a terminal value or assuming that more AI is always better AI.
Greenpunk isn't doomerism either. It's not refusing to build because something might go wrong. It's not treating all AI as inherently exploitative.
Greenpunk is the position that we can have this future, but only if we build it with intention. AI as tool, not as substrate. Technology that amplifies human capability without replacing human judgment. Systems with clear boundaries, visible accountability, and the wisdom to know what they shouldn't touch.
The people who should care most about this are the people with their hands on the wheel. If you're building AI and you're not a little worried about where this goes, you're not paying attention. And if you're so worried you can't see what's genuinely good here, you're not paying attention either.
Both things can be true. I sell AI for a living. I think Herbert had a point.
I'm building the future and I'm keeping a paper notebook, and I don't think that's a contradiction. I think it's the only honest position available.