All I Really Need To Know I Didn’t Learn In Compugarten

I’m glad newly minted software engineers are exposed to data structures, compilers, concurrency, graph theory, assembly language, and the other goodies that constitute a computer science curriculum. All that stuff is important.

But it’s not enough.

Not all classroom material for CS folks should be technical. Photo credit: uniinnsbruck (Flickr).

Since I’m halfway to curmudgeon-hood, I frequently find myself lamenting educational blind spots in the young. I’ve even toyed with the idea of teaching at the nearest university, someday when I Have More Time™. If academia would take me, my lesson plans might cover some of the following topics:

Humility

I was applying for a very senior architect role. I’d already been through several rounds of interviews with a whole committee of thought leaders in the department. I’d taken a technical proficiency test and (I hope) made a good impression about how I’d be able to contribute.

The CEO cleared a block on her schedule and sat down with me. She poked a bit at my business experience, my ideas of process, and my aspirations. Then she said, “Tell me your thoughts on humility.”

I think it’s the best job interview question anyone has ever asked me.

A great perspective on humility. Photo credit: Chiot’s Run (Flickr).

Real humility

A person trying to fake humility says, “I’m not very good” — but doesn’t mean it.

A person trying to be humble, but misunderstanding its nature, says, “I’m not as good as X” — and tells himself it’s probably true.

A truly humble person …

A Quibble With Martin’s “Optimize Later” Notion

In Refactoring, Martin Fowler (a brilliant engineer whom I greatly admire) articulates an idea that I have heard from smart engineers for a long time: first make it work, then make it fast. He puts it this way:

“Until I profile I cannot tell how much time is needed for the loop to calculate or whether the loop is called often enough for it to affect the overall performance of the system. Don’t worry about this while refactoring. When you optimize you will have to worry about it, but you will then be in a much better position to do something about it, and you will have more options to optimize effectively.”

I mostly agree. Certainly, premature optimization can cause lots of problems (pollute an otherwise clean design, overvalue corner cases, dilute conceptual integrity), and profiler-driven optimization (science, not black magic!) is the way to get the best results. Donald Knuth famously observed that “premature optimization is the root of all evil” — a bit over the top, maybe, yet true often enough to give me fits.
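To make the “science, not black magic” point concrete, here is a minimal sketch of profiler-driven measurement. It’s Python, and find_customer is an invented stand-in for whatever code you suspect is hot; the idea is simply that the profiler, not intuition, decides whether optimization is warranted:

```python
import cProfile
import pstats

def find_customer(customers, target_id):
    # A correct but naive linear scan. Don't guess whether it
    # matters at real-world scale; measure it.
    for customer in customers:
        if customer["id"] == target_id:
            return customer
    return None

def workload():
    customers = [{"id": i, "name": f"cust-{i}"} for i in range(100_000)]
    for target in range(0, 100_000, 997):
        find_customer(customers, target)

if __name__ == "__main__":
    cProfile.run("workload()", "profile.out")
    # Only if the profile shows find_customer dominating the
    # cumulative time is it worth optimizing at all.
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)
```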

But implicit in Fowler’s advice are the following problematic notions:

  • Optimization is an activity discrete from ordinary coding, and in particular discrete from refactoring.
  • Between the time you write the original or refactored version and the time you optimize, the existence of unoptimized code has a negligible effect on how the rest of the code’s ecosystem evolves.

The flaws in the first notion should be obvious; optimization often requires concomitant refactoring. I won’t beat that dead horse here. The second idea, however, deserves further comment.
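To illustrate that entanglement with an invented Python example (Order, find_order, and friends are all hypothetical): speeding up a lookup changes the data structure, and that change ripples out to every caller, so the optimization and the refactoring are one and the same change.

```python
from dataclasses import dataclass

@dataclass
class Order:
    id: int
    total: float

def find_order(orders, order_id):
    # Before: correct but O(n) per lookup.
    for order in orders:
        if order.id == order_id:
            return order
    return None

def build_order_index(orders):
    # After: a dict index makes lookups O(1), but now every caller
    # must construct and pass the index instead of a raw list.
    # The optimization forces a refactor of the call sites.
    return {order.id: order for order in orders}

def find_order_fast(index, order_id):
    return index.get(order_id)

orders = [Order(id=i, total=i * 1.5) for i in range(10)]
index = build_order_index(orders)
assert find_order_fast(index, 7) == find_order(orders, 7)
```

The assert at the end is the refactorer’s safety net: both paths must keep returning the same answers while the structure underneath them changes.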

Sometimes the only time to optimize is before decisions get made :-). Image credit: xkcd

Before you get around to optimizing, what happens when programmers go looking for an API that does X, find your works-correctly-but-suboptimally function, and wrinkle their noses? “Code smell!” they cry. And they write their own function that does a binary rather than a linear search, etc. They don’t have time to investigate whether the original version was coded that way for a reason (and thus should simply be refactored); they just need something that works AND that is fast, and your function doesn’t cut it.
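As a hypothetical sketch of why the hasty rewrite can be worse than the “smelly” original (all names and data here are invented): binary search buys its speed by assuming sorted input, which may be exactly the assumption the original author couldn’t make.

```python
import bisect

def find_linear(items, target):
    # The "smelly" original: a linear scan. Suppose it was written
    # this way for a reason: the data arrives unsorted, and callers
    # depend on getting the first match in arrival order.
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def find_binary(items, target):
    # The impatient rewrite: faster on sorted data, but it silently
    # assumes the list is sorted and misses matches when it isn't.
    i = bisect.bisect_left(items, target)
    if i < len(items) and items[i] == target:
        return i
    return -1

unsorted = [9, 2, 7, 2, 5]
assert find_linear(unsorted, 5) == 4
print(find_binary(unsorted, 5))  # prints -1; the match is missed
```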

I have seen this happen over and over and over again. Late optimization, while smart in many cases, must be managed (communicated, commented, evangelized, trained, reinforced, audited, planned for) carefully or else it will provoke a lot of NIH and contempt from (ironically) less savvy programmers. Result: more guck that needs refactoring.

Action Item

Find a notoriously sub-optimized function in your code. Study how its existence in non-optimal form has influenced how other code has evolved.

Why Weakened Dev Teams Suffer From NIH Syndrome

(NIH = Not Invented Here)

So here’s the situation. Product Management says feature X must be built. Development scopes the feature and discovers that the “right” solution involves using code developed elsewhere. But unfortunately, the other team’s code uses different assumptions and will take some extra time to shoehorn into the new context. In other words, the right solution is too costly.

If a development team doesn’t have an architect who can advocate for the right solution based on first principles, then it uses the tried-and-true scope/schedule/resources lever (about its only negotiating tool) to push back.

The problem is that in this scenario, Product Management isn’t really forced to come to terms with the long-term strategic consequences of doing things wrong. So the scope is adjusted, or the schedule is adjusted, or the resources are adjusted — but only enough to rescue the feature, not enough to do it right.

As a result, dev has to invent a half-baked alternative to the other team’s solution. Ad nauseam.

We look at the dev team and say they have “Not Invented Here” Syndrome because they never seem to use the solutions other teams have built. In many cases, the real problem is that Product Management and Dev don’t have an architect mediating and looking at the situation from the standpoint of maximizing the ROI of strategic technology investments.