Codebases are as diverse, unique and interesting as the people who work on them. But almost all of them have this in common: they grow over time (the codebases, not the people). Teams expand, requirements grow, and time, of course, marches on; and so we end up with more developers writing more code to do more things. And while we’ve all experienced the joy of deleting large chunks of code, that rarely offsets the overall expansion of our codebases.

If you’re responsible for your organization’s codebase architecture, then at some point you have to make some deliberate choices about how to manage this growth in a scalable way. There are two common architectural alternatives to choose from.

One is the “multi-repo” architecture, in which we split the codebase into increasing numbers of small repos, along subteam or project boundaries. The other is the “monorepo,” in which we maintain one large, growing repository containing code for many projects and libraries, with multiple teams collaborating across it.

The multi-repo approach can initially be tempting, because it seems so easy to implement. We just create more repos as we need them! We don’t, at first, appear to need any special tooling, and we can give individual teams more autonomy in how they manage their code.

Unfortunately, in practice the multi-repo architecture often leads to a brittle, inconsistent and change-resistant codebase. This in turn can encourage siloing in the engineering organization itself. In contrast, and perhaps counterintuitively, the monorepo approach is frequently a better, more flexible, more collaborative, long-term scaling solution.

Why is this the case? Consider that the hard problem in codebase architecture involves managing changes in the presence of dependencies, and vice versa. And in a multi-repo architecture, repos consume code from other repos via published, versioned artifacts, which makes change propagation much harder.

Specifically, what happens when we, the owners of repo A, need some changes in a consumed repo B? First we must find the gatekeepers of repo B and convince them to accept and publish the change under a new version. Then, in an ideal world, someone would find all the other consumers of repo B, upgrade them to this new version, and republish them. And now we must find the consumers of those initial consumers, upgrade and republish *them* against the new version, and so on, recursively and ad nauseam. 
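To make the fan-out concrete, here is a toy sketch in Python of the transitive republishing work just described. The five repo names and the `consumers` map are invented for illustration; note that this reverse map is exactly the metadata a multi-repo setup never gives us.

```python
# Hypothetical reverse-dependency map: each repo -> the repos that consume it.
# (Real repos record only what they depend ON; building this reverse map is
# precisely the hard part in a multi-repo world.)
consumers = {
    "repo-b": ["repo-a", "repo-c"],
    "repo-c": ["repo-d", "repo-e"],
}

def repos_to_republish(changed_repo):
    """Return every repo that must be upgraded and republished, transitively."""
    pending, affected = [changed_repo], set()
    while pending:
        for consumer in consumers.get(pending.pop(), []):
            if consumer not in affected:
                affected.add(consumer)
                pending.append(consumer)
    return affected

# One change to repo-b obligates four other repos to upgrade and republish:
print(sorted(repos_to_republish("repo-b")))
# ['repo-a', 'repo-c', 'repo-d', 'repo-e']
```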

But who is the “someone” who will do all this work? And how will they locate all these consumers? After all, dependency metadata lives on the consumer, not the consumed, and there is no easy way to traverse the dependency graph in reverse. When a problem’s ownership is not immediate and its solution not obvious, it tends to get ignored, and so none of this effort actually happens in practice.

And that may be fine, at least for a short while, because the other repos are (hopefully!) pinned to the earlier version of the dependency. But this comfort is short-lived, because sooner or later several of these consumers will be integrated into a deployable artifact, and at that point someone will have to pick a single version of the dependency for that artifact. So we end up with a transitive version conflict caused by one team in the past and planted in the codebase like a time bomb, to blow up just as some other team needs to integrate code into production.
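The moment of detonation looks something like the following sketch. The two services, the `libcore` library, and the naive single-version resolver are all hypothetical; real package managers report this differently, but the underlying conflict is the same.

```python
# Hypothetical version pins recorded by two consumers of the same library.
service_x_pins = {"libcore": "1.4.0"}  # pinned before the change, never upgraded
service_y_pins = {"libcore": "2.0.0"}  # upgraded to the new version

def resolve(*pin_sets):
    """Naive single-version resolution for one deployable artifact."""
    resolved = {}
    for pins in pin_sets:
        for dep, version in pins.items():
            if resolved.setdefault(dep, version) != version:
                raise RuntimeError(
                    f"version conflict on {dep}: {resolved[dep]} vs {version}")
    return resolved

# Integrating both services into one artifact triggers the time bomb:
resolve(service_x_pins, service_y_pins)
# RuntimeError: version conflict on libcore: 1.4.0 vs 2.0.0
```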

If this problem seems familiar, it’s because it’s an in-house version of the infamous “dependency hell” problem that commonly plagues codebases’ external dependencies. In the multi-repo architecture, first-party dependencies are treated, technically, like third-party ones, even though they happen to be written and owned by the same organization. So with a multi-repo architecture we’re basically choosing to take on a massively expanded version of dependency hell.

Contrast all this with a monorepo: all consumers live in the same source tree, so finding them can be as simple as using grep. And since there is no publishing step, and all code shares a single version (represented by the current commit), updating consumers transitively and in lockstep is procedurally straightforward. If we have good test coverage then we have a clear way of knowing when we’ve gotten it right.
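As a minimal illustration, assuming a Python monorepo and a hypothetical `libcore.parse_config` function being changed, finding every consumer is a single pass over the source tree:

```python
from pathlib import Path

def find_consumers(symbol, root="."):
    """Yield every file in the tree that references the changed symbol."""
    for path in Path(root).rglob("*.py"):
        if symbol in path.read_text(errors="ignore"):
            yield path

# Every caller of the old API, ready to be updated in one atomic commit:
for path in find_consumers("libcore.parse_config"):
    print(path)
```

In practice a build tool’s dependency-graph query would replace the raw text search, but the point stands: the consumers are all right there, and a single commit can update them and the library together.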

Now, of course, “straightforward” is not the same as “easy”: upgrading the repo in lockstep might itself be no small effort. But that’s just the nature of code changes; no codebase architecture can remove the irreducible part of an engineering problem. A monorepo at least forces us to deal with the necessary difficulty now, without creating unnecessary difficulty later.

The multi-repo architecture’s tendency to externalize dependency hell onto others in the future is a manifestation of a wider problem related to Conway’s Law: “Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure”. A converse of sorts is also true: your organization’s communication structure tends to emulate the architecture around which that communication occurs.

In this case, a fragmented codebase architecture can drive balkanization of the engineering organization itself. The codebase design ends up incentivizing gatekeeping and responsibility-shedding over jointly achieving shared goals, because those shared goals are not represented architecturally. A monorepo both supports and gently enforces organizational unity: everyone collaborates on a single codebase, and the lines of communication this imposes are exactly those that our organization needs in order to succeed in building a unified product.

A monorepo is not a panacea. It does require suitable tooling and processes to preserve performance and engineering effectiveness as the codebase grows. But with the right architecture and the right tooling you can keep your unified codebase, and your unified organization, humming along at scale.