Project management 101 teaches that, when managing outcomes, you cannot alter scope, schedule, or cost (resources) without affecting at least one of the other dimensions. This interrelationship is known colloquially as the “Iron Triangle.” Sometimes we put “quality” in the middle to show how it is unavoidably shaped by choices on the other constraints:
Lots of Dilbert cartoons derive their humor from the unwillingness of the Pointy-Haired Boss (PHB) to acknowledge this relationship. These cartoons are funny because they are so eerily similar to conversations we’ve all had, where someone wants us to deliver ultra-high quality, on a limited budget, in an aggressive timeframe, with a boatload of features.
It ain’t gonna happen, folks. We engineers are clever, but we’re not magicians. Triangles don’t work that way.
When you can articulate this geometry lesson, you’ve learned some good principles.
But there’s more.
Truth 1: Scope is a trickster
Many well-meaning managers and executives understand this trilemma, and they distance themselves from Dilbert’s PHB by acknowledging that something has to give. “I pick scope,” they’ll say. “We absolutely must have the product before the summer doldrums, and we only have X dollars to spend, but I’m willing to sacrifice a few features.”
This can give product management heartburn–feature sets sometimes hang together in ways that make slicing and dicing dangerous. An airplane that’s good at takeoffs but that can’t land is unlikely to be a commercial success. Good product managers will point this out, and they’ll be right.
Can feature-cutting be done judiciously? Yes. If you’re careful. But that’s still not the whole story.
Most software projects are not building version 1.0. This means that what you’re releasing at the end of the project is your new features PLUS all the old features that you already had. On mature products, the ratio of old to new features may be enormous–easily 100:1. I’ve worked on software that was 15 years old, had millions of lines of code in the codebase, and represented hundreds or thousands of man-years of investment. When you pull 1 or 2 features out of the next release in that kind of a codebase, how much are you really saving?
The PHB is foolishly optimistic. “We have 6 major initiatives slated for the next release, and I’m cancelling 2. We just reduced scope by 33%.”
Well, sorry, Charlie. The trickster got the better of you.
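A back-of-the-envelope sketch makes the trick visible. The numbers below are purely illustrative (6 initiatives, a 100:1 old-to-new ratio, and treating each initiative and each legacy feature as a comparable unit of work are all simplifying assumptions), but they show how the PHB’s “33%” evaporates once you count the features you already ship:

```python
# Back-of-the-envelope: how much scope does cancelling 2 of 6
# initiatives really remove? All numbers are illustrative.

new_initiatives = 6
cancelled = 2

# Reduction measured against *new* work only -- the PHB's view.
phb_view = cancelled / new_initiatives  # ~33%

# On a mature product, old features dwarf new ones (say 100:1),
# and every old feature must still be maintained, tested, and
# kept from regressing in this release.
old_to_new_ratio = 100
total_units = new_initiatives * (1 + old_to_new_ratio)  # 606

# Reduction measured against everything the release must deliver.
reality = cancelled / total_units

print(f"PHB's view: {phb_view:.0%} of scope cut")
print(f"Reality:    {reality:.2%} of scope cut")
```

Run it and the PHB’s 33% cut shrinks to a fraction of a percent of the total surface the release has to carry.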
What usually happens in these scenarios, if engineering is not able to articulate the carrying cost of old features in a way that execs grok, is that cost and schedule remain fixed, and the scope vertex shifts much less than execs believe. Pressure is not alleviated; instead, it steadily mounts. Since all vertices are fixed, the nice straight lines that define the sides of the triangle begin to bow inward, squeezing the area available to quality. Result: an on-time, on-budget release, with the constrained feature set, but far less quality than anybody wanted. Nobody is happy.
If the execs, PMs, customers, and engineers in your orbit talk regularly about quality, but you can’t seem to make headway, I predict that this phenomenon is at least partly to blame.
The problem of sacrificing quality when we meant to reduce scope is so ubiquitous that sometimes the iron triangle is formulated like this:
Fast. Good. Cheap. Pick any 2.
In this world view, scope is not a lever, and the tradeoffs with quality are explicit.
If you’ve learned truth 1, then you’re probably an industry veteran with battle scars, and you’re the kind of person I want on my team when we do project planning.
But there’s more.
Truth 2: Quality vs. speed is a false dichotomy
This assertion is bound to raise some eyebrows. In fact, I nearly got into a shouting match about it with a brilliant coworker who has lots of wisdom. Think of the TV show M*A*S*H. How many times does Hawkeye lament that he can’t save the lives of the wounded because he doesn’t have the time to operate properly? How often do we see young soldiers die because he’s too tired, or has to improvise solutions because there’s no time to requisition proper equipment?
Trying to do too much, with too little, is a recipe for quality failure. No question.
Flip the scenario on its head for a minute. Focus less on the quality of Hawkeye’s work, and more on the quality of the patients. Is it faster to operate on lightly wounded soldiers who were physically healthy before their injury, or on those who are riddled with shrapnel, and went into battle with a bad heart, diabetes, kidney failure, tuberculosis, and cancer?
Now translate. Think of a codebase like a patient, and an engineer like a doctor.
Can engineers get more done in a high-quality codebase, or a low-quality one? I claim the former, even if the high-quality codebase disallows kludges that look like they save time in the short run.
I have personally worked in codebases that are modular, well encapsulated, thoroughly unit tested, and automated to the hilt. And I have worked in codebases that were just the opposite. There is no question where an engineer is more productive. The comparison is not even close. The speed with which you can reproduce, isolate, and fix a bug is greater in high-quality code. Adding incremental features can be orders of magnitude faster. Altering architecture to reinvent functionality is doable in such a codebase, and virtually impossible in spaghetti code.
But can we handle the truth?
Part of the reason my colleague felt so strongly about this claim is that he’d been burned by the facile belief that you can hold quality constant (or increase it) while pushing relentlessly for speed. That belief is dangerous. If a Dilbertesque PHB is told that he can have both, misery will ensue. That’s not opinion–it’s historical fact, as most of us can attest.
That way lies madness.
Quality yields speed
In a way, I’m suggesting the opposite strategy: if you push on quality in the right way, speed will accrue organically. Not at first, especially if you’re starting with an unhealthy codebase. Not with every checkin; sometimes you have to take one step back to take two steps forward. But over time, if you continue to invest in quality, your patient will get healthier, and you will see your speed go up, not down. The mental models of your engineers and the entire value chain will align. You’ll create virtuous cycles that perpetuate the right kinds of tradeoffs for performance, scalability, and encapsulation.
There are limits, of course. Hawkeye might be amazingly fast with mostly healthy patients, but he’ll never operate on a thousand patients an hour.
Within those limits, though, it’s amazing what quality can do for you.
In order to pursue this strategy, you have to get management to take their foot off the gas pedal and let you build things right. That can be a difficult (maybe even impossible) task. I’m not claiming it’s easy. I’m not offering a recipe to convince them (though momentum will probably be an ingredient). I’m just saying it’s worth the effort, because there is a happy land on the other side of the rainbow where you get better and faster at the same time.
I’ve been there.