In Refactoring, Martin Fowler (a brilliant engineer whom I greatly admire) articulates an idea that I have heard from smart engineers for a long time: first make it work, then make it fast. He puts it this way:
“Until I profile I cannot tell how much time is needed for the loop to calculate or whether the loop is called often enough for it to affect the overall performance of the system. Don’t worry about this while refactoring. When you optimize you will have to worry about it, but you will then be in a much better position to do something about it, and you will have more options to optimize effectively.”
I mostly agree. Certainly, premature optimization can cause lots of problems (pollute an otherwise clean design, overvalue corner cases, dilute conceptual integrity), and profiler-driven optimization (science, not black magic!) is the way to get the best results. Donald Knuth famously observed that “premature optimization is the root of all evil” — a bit over the top, maybe, yet true often enough to give me fits.
But implicit in Fowler’s advice are the following problematic notions:
- Optimization is an activity discrete from ordinary coding, and especially from refactoring.
- Between the time you write the original or refactored version and the time you optimize, the existence of unoptimized code will have a negligible effect on how the rest of the code’s ecosystem evolves.
The flaws in the first notion should be obvious: optimization often requires concomitant refactoring. I won’t beat that dead horse here. The second idea, however, deserves further comment.
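To see why the two activities are inseparable, consider a small hypothetical sketch (the function names and data are mine, not Fowler’s): speeding up a membership test means changing the underlying data structure, and that change ripples into every caller, so the optimization forces a refactor.

```python
# Hypothetical before/after illustrating optimization-as-refactoring.

# Before: correct, but every lookup is a linear scan over a list.
def is_banned(user_id, banned_ids):
    return user_id in banned_ids

# After: O(1) lookups demand a different data structure, so the change
# cannot stay local. Every site that builds banned_ids must be refactored
# to produce a set -- the optimization and the refactoring arrive together.
def load_banned_ids(path):
    with open(path) as f:
        return {int(line) for line in f}  # was a list comprehension
```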
Before you get around to optimizing, what happens when programmers go looking for an API that does X, find your works-correctly-but-suboptimal function, and wrinkle their noses? “Code smell!” they cry. So they write their own function that does a binary rather than a linear search. They don’t have time to investigate whether the original version was coded that way for a reason (and thus should simply be refactored); they just need something that works AND that is fast, and your function doesn’t cut it.
I have seen this happen over and over and over again. Late optimization, while smart in many cases, must be managed (communicated, commented, evangelized, trained, reinforced, audited, planned for) carefully, or else it will provoke a lot of not-invented-here (NIH) rewrites and contempt from (ironically) less savvy programmers. Result: more guck that needs refactoring.
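Of those management levers, “commented” is the cheapest. The sketch below is hypothetical (the function, its docstring, and the size assumptions are mine, not from any real codebase), but it shows the kind of note that turns a would-be parallel rewrite into a simple refactor:

```python
def find_order(orders, order_id):
    """Return the order with the given id, or None.

    NOTE: deliberately a linear scan. `orders` arrives unsorted from the
    upstream feed and is typically under ~50 items, so binary search would
    require a sort we don't otherwise need. If profiling ever shows this
    function is hot, sort once at ingestion and switch to bisect --
    don't fork a parallel "fast" version.
    """
    for order in orders:
        if order.id == order_id:
            return order
    return None
```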
Action Item
Find a notoriously sub-optimized function in your code. Study how its existence in non-optimal form has influenced how other code has evolved.