I spend my weekends coding. Not because I have to. Because I love it. And a while back, I noticed something I couldn’t stop thinking about: with AI tools, I’m somewhere between 100 and 1,000 times faster building things on my own than I was before.

That should have been good news when I walked back into the office Monday morning. It wasn’t.

My engineers are at least as smart as I am. They're using the same tools. So why wasn't I seeing 100x acceleration? Why were the metrics barely moving? I sat with that question for a long time, and the answer I landed on was uncomfortable: it wasn't the people, and it wasn't the technology. It was us. Our habits. Our processes. Our culture, designed years before AI existed and never updated to account for it.

AI didn’t create those problems. It just made them impossible to ignore.

Here’s a real example. We’ve always had a loose relationship with the internal tools and components we build — utilities, shared libraries, small pieces of infrastructure that get written as a byproduct of building something else. They work, they get used, and then they sit. No owner. No maintenance plan. Security patches don’t get applied, bugs accumulate, documentation goes stale. Before AI, the problem was manageable mostly because the volume was manageable. Producing one of these things took real effort, so there was a natural brake on how many could exist. Then AI removed the brake. Now these components are proliferating everywhere, generated in an afternoon, dropped into codebases across the organization, and owned by nobody. The accountability gap didn’t change. The rate of production did. And what was once a minor housekeeping problem is now a sprawling inventory of undocumented, unmaintained, unpatched components that we are actively struggling to keep up with. AI didn’t create the ownership problem. It just fueled it at scale.

This is the thing nobody wants to say out loud when they’re announcing an AI rollout: the tool will find your weaknesses before you do. Teams that skip documentation ship undocumented code faster. Teams that skip code review ship unreviewed code faster. Teams where accountability is fuzzy will now generate a much larger volume of work that nobody fully owns. AI is an amplifier. It doesn’t care what it’s amplifying.

The Foundation That Determines Everything

Most conversations about AI adoption start with the tools. I want to start earlier than that. Before any tooling conversation, the question worth asking is: what does ownership actually mean on this team?

Not in theory. In practice. Does every engineer know exactly what “done” looks like for their work? Can they define what success means, what failure means, and at what point they need to surface a problem without being asked? These aren’t soft skills. They’re the load-bearing infrastructure of a high-functioning engineering organization. And when that infrastructure is shaky, AI makes the shaking louder.

A team that was high-performing before AI operates with what I’d call accountable autonomy. Leaders have genuine ownership of their domains, and they drive resolution without waiting to be told. They communicate proactively, especially when things go sideways. They have a shared, explicit framework for how work gets delegated, how success gets defined, and how feedback flows. When that team picks up AI tooling, the acceleration is real and it compounds. They know how to direct it, correct it, and refine their prompts. They treat AI the way a conductor treats an orchestra: they’re not playing every instrument, but they are absolutely in charge of the music.

Without that foundation, you’re just handing a louder instrument to someone who hasn’t learned to play.

There are teams that genuinely should not be adopting AI coding tools yet. Probably more than we realize. If your engineers are still working out how to do code reviews with any real rigor, adding AI to the mix will help them produce more code in need of better review. If your sprint planning is mostly theater, AI will help you fill those sprints with more of the wrong work, faster. The discipline has to come first. The accelerant comes after.

Where the ROI Calculation Breaks Down

The other place leaders consistently get this wrong is in how they measure the return. Most ROI conversations about AI tooling focus on output volume: lines of code generated, tickets closed, velocity numbers. And yes, those move. But that’s the wrong frame, and it masks the actual opportunity.

Here’s the structural problem. Most engineering organizations run on two-week sprints. The sprint is the minimum unit of work estimation, which means that regardless of how fast AI makes execution, the container stays the same size. Work, like gas, expands to fill the space you give it. So what actually happens is this: AI makes a task that took a week take two days, and the engineer fills the remaining time with other sprint work. The velocity numbers tick up slightly. Leadership calls it a win. Meanwhile, the compounding potential of the tool is sitting almost entirely untouched.

The real ROI question isn’t “are we going faster?” It’s “what are we now attempting that we never could before?” AI should be changing the ambition of what gets planned, not just the execution speed of what was already on the list. The teams that figure this out are the ones restructuring how they think about work, not just how they do it. I’ve been experimenting with shorter sprint cycles for this reason, not to demand more output, but to force a rethinking of how work gets estimated and scoped in an environment where execution is no longer the bottleneck.

What Good Actually Looks Like

The signal that tells me AI adoption is working is deceptively simple: are engineers spending more time thinking and less time typing? That’s the unlock. AI is already better than your engineers at typing code. It has read every documentation page. It doesn’t forget syntax. It doesn’t have bad days. Let it type.

The engineer’s job is now to lead it, direct it, challenge its output, and solve the problems no prompt can frame correctly on its own. That requires more cognitive engagement, not less. It means asking harder questions, catching the places where AI is confidently wrong, and bringing judgment that no model can replicate. When I see teams operating that way, where AI handles the mechanical execution and humans handle the judgment, that’s when the numbers start to look like what I experience on my weekends.

The amplification is already happening. The only question is whether you’re feeding it something worth scaling.