The Thin Line Between Using AI and Being Used By It

What happens to your capacity for thought when everyone writes with the same machine.

This post may feel random for my developing body of work, but I guarantee you it's not. AI is everywhere. Some of us use it, some of us don't, and some of us don't know where to begin. But if you lead people, it's already circling your orbit at work. As someone who loves these tools and believes in their ethical use, here's what I have to say about their current state and how they can harm or help your own cognition.


In the past year, I've watched AI seep into every corner of content creation with the quiet persistence of water finding cracks. LinkedIn newsletters, influencer posts, corporate reports that cost a quarter million dollars—the infiltration is total, hallucinations and all. The tells reveal themselves once you know where to look, patterns repeating across industries and platforms like a signature no one meant to leave.

Parallel constructions pile up as if the writer found one hammer and declared everything a nail. Sentences fragment into two-word declarations, then bloat into explanations because the fragment couldn't stand alone. The sameness accumulates until it itches somewhere behind your eyes, and a question surfaces: who actually wrote this? The answer, increasingly, is no one—or rather, someone who pressed a button and accepted what emerged.

This morning, the final nail landed in a place I wasn't expecting. My favorite thought leader, someone I've followed for fifteen years and delusionally believe was my husband in another life, sent his Substack newsletter. It wasn't him—not his depth, not his brilliance, not his rhythms. The hollowness was unmistakable, and that was the moment this became personal enough to write about.

But first, I want to take you somewhere strange. What's happening with AI-generated content isn't merely a style problem or quality crisis that better prompts might solve. This hollowness spreading across the internet opens a window into something far more fascinating about humans and their complicated relationship with thinking itself, and following that thread leads somewhere the people selling these tools would very much prefer we didn't go.

The Miracle Solution They Sold Us

The conventional narrative has a satisfying simplicity that should make you suspicious. Lazy professionals cutting corners, frauds passing off machine output as human thought, a moral failure belonging to people who should have known better and chose convenience anyway. It's a tidy story with clear villains, and it lets the actual architects walk away without anyone noticing they've left the room.

Tilt your head and look at it sideways, though, and something else comes into focus. Consider what AI tools were marketed as from the very beginning—not thinking partners, not collaborators in the messy work of generating ideas, but replacement engines promising liberation from the burden of thought itself. The pitch decks and product demos positioned speed-to-output front and center while quality-of-thought never made it past the lobby, and the financial architecture tells its own story if you bother to trace it.

McKinsey cited $4.4 trillion in productivity gains while conveniently offering consulting services to capture those gains. Harvard's Ash Center tracked how AI keeps getting crowned as inevitable advancement despite persistent problems—built-in biases, hallucinated facts, intellectual property violations—because the narrative serves financial interests whether or not the product serves human ones.

The most revealing part is that tech insiders themselves quietly sing a different tune than their CEOs when the microphones aren't pointed at them, and even OpenAI's cofounder admitted that AI agents "just don't work" with current capabilities—a confession that would tank stock prices if anyone were paying attention to what the people actually building these systems believe about them.

So people used the product exactly as advertised, asked the machine to skip thinking, and it obliged with cheerful efficiency. The malpractice preceded the misuse, which means the crime wasn't user laziness but something far more calculated, baked into the pitch deck from day one by people who understood something about human cognition that most of us would prefer not to know about ourselves.

Why did this bargain work so well, this trade of your thinking for the relief of completion?

The answer lives in territory mapped by cognitive scientists long before anyone dreamed of ChatGPT, and what they found there isn't flattering to anyone involved.

When Exhausted Brains Meet Confident Machines

Tired brains defer to confident machines—not sometimes, not as a character flaw, but always, across every field where someone bothered to look. Automation bias research started tracking this phenomenon in aviation, then healthcare, then pretty much everywhere humans work alongside systems making recommendations. The pattern never varies, and the consistency is what makes it so unsettling once you see it clearly.

The more complex the task and the higher the workload, the more people accept whatever the machine suggests. Pilots ignore their instruments when the autopilot disagrees, and doctors defer to diagnostic software even when their clinical judgment raises flags.

One study on human-AI interaction discovered something particularly vicious in this dynamic—people doubt themselves specifically because the technology has been positioned as so advanced. Your judgment conflicts with what the system suggests, and instead of trusting your own assessment, you assume the machine knows something you don't. The algorithm didn't just replace your thinking; it undermined your confidence in your capacity to think at all.

Now layer in decision fatigue, which operates on its own brutal logic. Every choice depletes a finite cognitive resource, and when that resource runs low, humans default to whatever requires least effort. Psychologists have documented this thoroughly—when presented with a default option, a fatigued brain will take it because choosing requires active deliberation that the depleted system can no longer sustain.

Situate all of this in the modern professional context, where you're expected to produce constantly, maintain presence across platforms, and demonstrate thought leadership whether or not your job involves having public thoughts. You're already running on cognitive fumes when the AI offers to carry the load, and the relief of completion overrides any recognition of mediocrity. You needed a post, and now you have a post, and the fact that it sounds like every other post registers only vaguely if at all. This is the exploitation at the center of the whole thing: systems designed to think for you, marketed to people too exhausted to notice they'd stopped thinking at all.

The Oracle or the Workbench

This is where the path forks, and the terrain gets more interesting for those willing to stay on it. For those who find the sameness unbearable, who recognize themselves in the exhaustion but refuse to accept the bargain, there's a distinction worth understanding that changes how you relate to these tools entirely. The question isn't whether to use AI but how you position yourself in relation to it, and the difference between the two primary modes determines everything that follows.

Oracle mode is the default, the path of least resistance that the tools were designed to encourage. You pose a question, accept the answer, publish the result, and judgment gets outsourced along with production while the machine decides what's worth saying and how to say it. You become a relay station between algorithm and audience, your role reduced to quality control you're not actually performing because quality control requires the cognitive engagement you've already surrendered.

Workbench mode requires something fundamentally different, a posture that runs against the grain of how these tools were marketed. You bring raw materials to the table—a perspective even if it's only half-formed, a question you're genuinely trying to answer, context about what you're attempting and why it matters to you specifically. You use the machine to pressure-test your thinking, to surface angles you hadn't considered, to generate variations you can react against and argue with.

The tool remains the same in either case, and the difference is whether you show up to think or merely show up to publish, whether you treat the machine as a collaborator in your own intellectual work or as a replacement for having intellectual work in the first place.

Building Your Workbench

The question that surfaces once you understand the fork is practical and immediate — what does workbench mode actually look like when you're sitting in front of a blank prompt with a deadline breathing down your neck? The concept makes sense in theory, but theory dissolves quickly when exhaustion arrives and the machine is right there, ready to carry the load you're too tired to lift.

I've been using AI throughout the development of this piece, which means I've had to answer that question for myself in real time. Using Claude not as a replacement for thinking but as a surface to think against requires deliberate structure that the speed-to-output marketing trained everyone to skip. What I've found actually matters comes down to architecture: the deliberate structures that keep you in the work rather than floating above it, each one running against the grain of how these tools trained us to interact with them.

Context means providing enough information for genuine usefulness, which most people skip entirely. Dumping a prompt into a blank space expecting magic resembles asking a new hire to write a quarterly strategy on day one without company history, audience information, or examples of what good looks like. Generic input yields generic output, and this is structural rather than mysterious—the machine can only work with what you give it.
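To make that concrete with an invented example: "write a newsletter post about AI and burnout" is a day-one-hire prompt. Something closer to "I write for mid-career managers who are skeptical of AI hype; my argument is that exhaustion, not laziness, drives over-reliance; here are two past posts that sound like me; give me three possible openings, not a finished draft" hands the machine actual material to work with. The specifics are made up, but the shape is the point: background, audience, intent, and an example of what good looks like.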

Voice means protecting your sound from the machine's defaults, which requires active and ongoing resistance. Every AI has stylistic tendencies—preferred sentence patterns, vocabulary reaches, structural habits that emerge across millions of outputs. Without deliberate counterpressure, your writing drifts toward the mean, toward everyone else using the same tool, toward those tell-tale patterns we started with.

Boundaries mean defining what the AI should and shouldn't do, and this is where most people fail entirely. Boundaries require knowing what you want before you ask for it. You can instruct the machine to offer options instead of answers, push back on your thinking instead of validating it, identify argument weaknesses instead of just polishing prose. It means structuring your projects with guardrails so precise that Claude will say "I can't do this because you've told me not to write for you," which is exactly the kind of friction that keeps thinking active.
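For illustration, and with wording invented for this post rather than copied from my actual setup, project-level guardrails can be as blunt as: offer me options, never answers; when I paste a draft, attack the argument before you touch the prose; if I ask you to write a section from scratch, refuse and remind me why; flag any sentence that sounds like your defaults instead of my voice. Instructions like these also protect the voice work from the previous section, because the machine is now under orders to name its own tendencies instead of smuggling them in.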

All of this might sound like a lot of effort for what amounts to writing a newsletter or finishing a report, and if output quality were the only thing at stake, you'd be right to wonder whether the friction is worth it. But there's something else happening when you defer to the machine without resistance, something that extends far beyond the mediocrity of any single piece of content.

The workbench isn't just about producing better work—it's about protecting something you might not notice losing until the loss is already significant.

From User to Used

Here's the part that should genuinely unsettle you, the part that moves beyond productivity concerns into something that touches on who you're becoming. If you approach AI as a way to avoid thinking, you will produce mediocre work regardless of how sophisticated your setup becomes — but the deeper risk isn't the quality of your output. The deeper risk is the slow atrophy of your capacity for original thought, happening so gradually you won't notice until it's already far along.

Each acceptance without critical engagement trains the pattern of deference a little deeper into your cognitive habits. Each time you take the output without pushing back, without arguing, without asking whether this is actually what you meant, you weaken the muscle that generates distinctive ideas. Recent research found that heavy AI use correlates with elevated cognitive miserliness — the more you defer, the more you train yourself to avoid cognitive effort entirely, not just in AI interactions but across everything you do.

The feedback loop closes so elegantly it would be beautiful if it weren't so troubling. Exhaustion drives reliance on the machine, reliance weakens your cognitive capacity over time, and weakened capacity increases the exhaustion that drove you to rely on the machine in the first place.

You're not just outsourcing production anymore—you're outsourcing the very judgment that would tell you the production isn't good enough.

The machines will keep improving at sounding human, at mimicking the patterns and rhythms that used to signal a mind at work, and that trajectory isn't going to reverse. What remains in your control is whether you keep the work of thought close even when systems offer to carry it for you.

This means treating AI output as raw material rather than finished product, as something to react against rather than passively accept, and it means slowing down in an environment that relentlessly punishes slowness. The alternative is watching your own thinking flatten into the same sameness, one accepted output at a time.

Written by
Macala Rose
mindmeaningmatter.substack.com