Breaking out of the "brain fry" spiral of AI
If you’re going to use AI, you can do it without letting it fry your brain
I’ve been doing a lot of reading on AI lately, and not the kind you might expect. I’m not talking about how to use AI to increase my efficiency or supercharge my output. What I’m actually thinking about is what AI is doing to us as a culture, as a society, and as humans who still have to live inside our own heads.
I’ve been sitting a lot at the intersection of psychology and AI use in the real world (I figured I should finally use my college degrees at some point), and I keep coming back to the same uneasy feeling.
There’s this pressure in tech right now, and honestly beyond tech, to do more, work faster, and get more done in less time. There’s an assumption underneath all of it: if you’re using AI, you can absolutely increase your output. No excuses. For many companies, this has crossed a line. Using AI in your day-to-day is expected, and if you’re not using it, your job may be at risk. That’s not a hypothetical for a lot of people right now.
I understand the appeal of moving fast. Right now I have several different terminal tabs open, each running a different agent: one reviewing SLO data, one planning my day, one investigating a bug. All different things, all running simultaneously. But I can’t actually focus on all of them at the same time, and neither can you, and neither can anyone.
Humans are not built to multitask. Think of your cognitive bandwidth as a plate. The plate doesn’t grow. You can’t create more room just by adding more items. You move things around, you allocate, you juggle, but you’re always working within the same fixed space. Adding AI to the mix doesn’t change that; it just gives us more things to pile on.
When you shift from writing code yourself to throwing four JIRA tickets at four different agent worktrees, you’re no longer doing the creative problem-solving. You’re orchestrating. You’re managing. Every agent is essentially pinging you, “I need attention, I need attention,” and all you’re doing is context switching between them. It ends up feeling less like being an engineer and more like managing a team of engineers you can’t fully trust yet. There’s real value in that kind of delegation, but AI still lacks nuance. It only knows what you’ve given it, and no matter how large its context window, it won't have everything it needs without you staying in the loop.
This has become a normal way of working in tech, and it has actually been shown to increase overall output. Most companies have small but annoying bugs that are easy to hand to AI to identify and resolve with minimal involvement. Models are also getting much better at breaking down more complex tasks and working through them. I’m not going to pretend there aren’t productivity gains happening here.
But running multiple agents doesn’t mean you’re doing less work individually. You’re doing different work, and sometimes harder work. Reviewing, directing, correcting, and integrating AI output is a cognitive load in its own right, and in some cases, it’s heavier than just doing the thing yourself.
The Harvard Business Review article When Using AI Leads to “Brain Fry” puts a finer point on this. They define AI brain fry as mental fatigue from excessive use or oversight of AI tools beyond one’s cognitive capacity, basically that feeling of brain fog at the end of the day when you’ve spent so much energy just trying to keep up with everything running in parallel. And they found that the most mentally taxing form of AI engagement wasn’t using AI, it was overseeing it.
I was talking to one of my engineering managers about this recently and they shared their take on avoiding the cognitive load. It seems simple on the surface, but it’s something I think a lot of folks would struggle to adopt. The most helpful thing they’d found when managing multiple agents simultaneously was to stop watching them work.
Watching an agent in real time, waiting to catch a mistake, scanning every line it produces, is like standing over someone’s shoulder while they do a task. Their approach instead: let the agent work, and respond when it actually needs you. It’s a reasonable approach, but not without its own tension. People crave control, and stepping back requires a level of trust that the agent isn’t going to go off the rails without you catching it in time. But compulsive monitoring is a fast track to exactly the kind of brain fry the HBR article describes.
I believe one contributor to brain fry is skill atrophy through AI adoption. Think about how you navigate. If you’re like most people, you open Google Maps or Apple Maps before you’ve even started the car. You follow the directions, you arrive, and it works really well. But a lot of people will admit that somewhere along the way, they lost the ability to navigate without it, and I don’t mean it’s just faster with the app. They genuinely lost the ability to build a mental map, pick up landmarks, and understand where they are in relation to where they’re going. That internal GPS humans have been developing for thousands of years quietly atrophied because we stopped using it.
That’s where I get nervous about AI. Are we going to lose the ability to write, to formulate our own thoughts without a model prompting us? Are we going to lose the ability to think critically, to challenge assumptions, to troubleshoot? And there’s a compounding problem: if you can’t do the thing yourself, you lose the ability to evaluate whether the AI did it well. You need the underlying knowledge to catch the mistakes, and when that knowledge erodes, you’re not just dependent, you’re blind to the errors you’re accepting. I keep coming back to Idiocracy. It’s a comedy, but not a subtle one, and it feels a little less funny every year.
Back to using my psychology and social work degrees. If you’re feeling fried at the end of the day, the first and most useful thing you can do is sit with that feeling for a minute and ask yourself what actually drained you. Not in a vague, general sense, but specifically. Walk through your day. Which parts felt like you were doing something, and which parts felt like you were just managing things that were doing something?
Once you can see it clearly, the next step is asking yourself what’s one thing you could change, or bring back. Don’t aim for a complete overhaul, just choose one thing. Maybe it’s writing your own first draft before handing it to AI to clean up. Maybe it’s actually sitting down and working through a problem yourself before spinning up an agent. Maybe it’s closing a few of those tabs and doing one thing at a time, because you remember what it felt like to actually finish something.
The goal isn’t to reject AI or pretend it doesn’t have real value. Most of us are going to keep using it, and there are genuinely good reasons to do so. But there’s a version of using it that keeps you sharp and a version that slowly hollows you out, and the difference is usually whether you’re still the one doing the thinking. Hold onto the skills that bring a human flavor to your work. Outsource the mundane, the repetitive, the stuff that was never interesting to begin with. But don’t outsource your ability to reason, to write, to create, to troubleshoot, because those are a lot harder to get back once you’ve let them go.