Moving into AI-first development is a journey, and we’re all learning together. I want to share some bittersweet lessons from my recent experience that might save you from hitting the same walls I did.

The “Secret” Everyone Knows

Let’s address the elephant in the room. By now, there are probably a million YouTube videos titled “A Super Secret Trick To Make Your Coding Agent 20x Better.” You know the trick, I know the trick: create a detailed plan in a markdown file and direct the agent to execute it step by step.

Armed with this knowledge, my trusted army of agents and I were happy campers through several days of non-stop AI coding. In AI terms, that’s significant: countless tokens, kilowatts of electricity, and increasingly capable agents working in harmony. It was idyllic, with me conducting the agentic orchestra, or, if you prefer a warmer metaphor, my agents as trusty golden retrievers happily bringing the ball back over and over again.

The project grew to 158 source code files (not counting tests, documentation, or build scripts). While some were adapted from a permissively licensed open source SDK, most were new or substantial rewrites. For a prototype, it was a considerable codebase.

When Things Go South

Everything was smooth sailing while the codebase remained small. I wasn’t meticulously reviewing every line (“I’m a trained professional – don’t do that at home”, or more appropriately, “don’t do that at work”), but the plan was solid, and the app did what it needed to do.

But as the codebase grew, my agent hit a wall like a test car in a crash test. At least, that’s how it felt when, despite numerous attempts to re-prompt around or through that wall, the agent got nowhere. Sure, I could have dug through the code myself, but I was too lazy to read and debug a pile of “not mine” code built on frameworks I’d never worked with, especially after the agent had made multiple “off-plan” modifications while trying to solve the problem.

The Hard-Won Lessons

From this failure (and my past successes), I’ve extracted valuable insights that will fundamentally change how I approach AI-driven development. “In it to win it.”

1. Architecture-First Approach

Old way: Plan → Execute

New way: High-level plan → For each module:

  • Develop module_architecture.md (defining key data structures, interfaces, control flow, and design patterns)
  • Create module_execution_plan.md
  • Execute the module plan step-by-step
  • Move to the next module

The key insight? I never truly “discussed” the architecture with my agent. Without that shared understanding, I couldn’t fully trust the foundation, a much bigger problem than doubting a single function. Next time, I’ll co-own both the plan and the architecture doc, so it feels like my app even if much of the code isn’t mine.
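
To give a sense of the level of detail I want agreed on before implementation, here’s a minimal sketch, in Python, of the kind of contract a module_architecture.md should pin down. The module and every name in it are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass
from typing import Protocol


class IngestError(Exception):
    """Raised when a source can't be fetched or parsed."""


@dataclass(frozen=True)
class Document:
    """The one data structure every part of this hypothetical module agrees on."""
    doc_id: str
    source_url: str
    raw_text: str


class Ingestor(Protocol):
    """Interface any concrete ingestor must satisfy; the agent fills in implementations."""

    def fetch(self, source_url: str) -> Document:
        """Retrieve a single document, raising IngestError on failure."""
        ...

    def validate(self, doc: Document) -> bool:
        """Cheap sanity check before the document enters the pipeline."""
        ...
```

Once data structures and interfaces like these are fixed in the architecture doc, the execution plan becomes a list of well-bounded boxes to fill in, rather than design decisions quietly deferred to the agent.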

2. Testing Standards from Day One

I would define my testing standards up front and force the agent to follow them. EVERY STEP would require writing new regression tests and then running the full regression suite. Without such standards, the agent was creating ad-hoc tests to debug ad-hoc problems, then either auto-deleting those tests or leaving them scattered in inconsistent places.
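
Here’s a minimal sketch of what such a standard could look like, assuming a Python project with pytest; the layout and naming convention are my own invention, not something the agent gave me:

```python
# tests/regression/test_step_014_parser.py
# Hypothetical convention: every plan step adds a test here, named after
# the step that introduced it, and no test is ever auto-deleted.

from myapp.parser import parse_records  # hypothetical module under test


def test_step_014_empty_input_returns_empty_list():
    # Guards the bug fixed in step 14: empty input used to raise
    # instead of returning an empty list.
    assert parse_records("") == []
```

The agent’s definition of “done” for any step then includes a green run of the entire directory (pytest tests/regression), not just the one test it added.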

3. Comprehensive Logging Strategy

I would define my logging standards up front, including verbosity levels and decorators that auto-log function calls without bloating the code with debug statements. That keeps the code readable and the logs detailed.
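
As a minimal sketch, assuming Python and the standard logging module (the decorator and its details are my own):

```python
import functools
import logging

logger = logging.getLogger("myapp")  # hypothetical app-wide logger


def logged(level=logging.DEBUG):
    """Auto-log a function's calls, results, and exceptions at the given verbosity."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            logger.log(level, "-> %s args=%r kwargs=%r", func.__qualname__, args, kwargs)
            try:
                result = func(*args, **kwargs)
            except Exception:
                logger.exception("!! %s raised", func.__qualname__)
                raise
            logger.log(level, "<- %s returned %r", func.__qualname__, result)
            return result
        return wrapper
    return decorator


@logged()
def normalize(text: str) -> str:
    # The business logic stays clean; the decorator carries the debug chatter.
    return text.strip().lower()
```

Verbosity then becomes a runtime knob (logging.basicConfig(level=...)) rather than a code change, which is exactly what you want when asking an agent to investigate its own failures.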

The Payoff

With this approach, I’m confident several good things will happen:

  • Higher capability ceiling: My agent would be able to solve the gnarly issue that had it running in circles. With well-organized tests and logs, it’s much easier to identify and solve complex issues.
  • Better human intervention points: When I need to step in, I’ll know exactly where to look.
  • Fewer architectural problems: Having good architecture would help avoid the most significant problems. Small stuff is small by definition.

And of course, when it comes to production, there’s going to be a security review, code review, and more thorough testing.

The Investment

This isn’t a light lift; it takes real effort. In traditional development, proper architecture for critical components can easily take ⅓ of the project timeline. It’s high-skill, high-value work – your principal architect likely earns (and is worth) as much as five of your juniors (and that’s before you start counting the equity…). So there’s no free lunch here.

But here’s the key: this approach front-loads the strategic work, done collaboratively between you and AI, leaving the more mundane backlog to AI alone.

Redefining Collaboration

When I say “co-own architecture,” I don’t mean you need a decade of “architecturing” experience. I’m an engineer by training, a product guy by heart, and a business guy by trade. I am pretty rusty when it comes to coding, but I have a keen mind and endless curiosity.

When working on architecture, I’m not alone. Whenever I have a question, whether it’s about options for solving a problem, our codebase, or open-source comparables, my trusted agents are there to run background research and queries for me. This kind of work is among the easiest to parallelize and multitask, which means you get the biggest leverage from AI.

We’re essentially redefining the division of labor: humans focus on architecture, standards, and strategic decisions while AI handles the implementation details within those well-defined boundaries. This is where we envision AI and humans in the future – we want AI to create jobs and help multiply human capability, velocity, and productivity.

What’s Next

In Part 2 (when my busy work allows for another deep dive session), I’ll share specific examples of how this architecture-first approach solved real problems, including the exact templates and prompts that made the difference. Stay tuned.