Interrupting my interruptions

Tonight I was just settling down for a ponder on some personal stuff when I noticed an email from my brilliant brother-in-law (hi, Stephen!), recommending an article about the cost of interrupting programmers. Half an hour later, I’m blogging about it. Yes, I see the irony in the read, the blog, and the shout-out, but I just can’t help it.

I’ve heard lots of estimates of the cost of interrupting, but the research in this article seems particularly clear. I think the article oversimplifies by assuming that the problem and solution derive purely from memory, but there’s enough insight and clever thinking in the article to make it worth a read…

We’ve all known that interruption = bad. We’ve nodded our heads at this wisdom for years. Occasionally we give lip service to it. We try to clump meetings in one portion of the day, leaving blocks of time for serious thinking and work. We advise our teams to use “lighter” interruptions (“ask your question by chat/email instead of in person; it’s less disruptive…”). We decline non-essential meetings and urge others to keep their invite lists small. We buy “cones of silence” and “Do Not Disturb” signs and set them up outside the cube of the guy who’s trying to finish urgent work for an impending release.

And then we fall off the bandwagon.

At least, I do.

Hi. My name is Daniel, and I’m addicted to interruptions. :-)


Image Credit: xkcd


You would see my addiction if you walked past my desk and looked at the tabs in my browser: two for email (work, personal), two or three for calendaring, some chat sessions, a task list, several programming topics, a man page, a python reference, an interesting blog post or two, three wikipedia pages, a ticket I looked up before I ran to my last meeting, a wiki page I’m in the middle of editing, a competitor’s product portfolio, a LinkedIn discussion forum on cloud computing, a Google spreadsheet, the PDF of a resume I’m supposed to have read before I do an interview in an hour, half a dozen random sites that I visit during the day as I check gossip on a competitor or read the Dilbert cartoon someone emailed me…

How am I supposed to think Deep Thoughts when I’ve got that much noise?


George and the Flood

Here’s a simple little test that teaches an important lesson. Take a moment to work through all 3 questions. I promise it won’t take long. :-)


Question 1. A flood is coming. George can only swim for a little while. What should George do?

[Picture 1]


Question 2. A flood is coming. George can only swim for a little while. What should George do?

[Picture 2]


Question 3. A flood is coming. George can only swim for a little while. What should George do?

[Picture 3]

Ready to grade your answers?

The Yellow Belt Answer

Most people say “go right, toward higher ground” if picture 1 is the only input to their analysis. The logic is pretty indisputable. But…


Don Kleinschnitz: Put a stake in the ground.

(A post in my “Role Models” series…)

My huddle was not going well. I’d called a meeting to debate a tricky architectural problem with other senior engineers, and consensus was scarcer than working markers for our whiteboard. We were going around in circles.

Don Kleinschnitz walked in. It was our first interaction–he’d only been introduced to the company as our new CTO a few days before–and I wondered whether he’d help us get off the dime.

Five minutes later, the meeting was over, and the controversy was settled. Don had “put a stake in the ground,” as he called it — picked a spot and made a tangible, semi-permanent choice to anchor our behavior.

A stake in the ground. :-) Photo credit: Wikimedia Commons.

Answer the hard questions

I don’t remember the question or the answer, but I do remember some of Don’s solution. He immediately pushed us from generalities into specifics–what use case, exactly, would be affected by the decision? How much, exactly, would tradeoffs pay or cost in either direction?

Of course we couldn’t answer Don’s questions very well; nothing is more certain than ambiguity in software. But Don refused to let us off the hook, because he understood that imperfect but specific answers are better than vague generalizations. Even if you have to guess. (More rationale for this principle is elaborated in the RPCD manifesto.)

By putting a stake in the ground, Don wasn’t being arrogant or unwilling to listen. He was simply recognizing that we had incomplete input, that the right answer was maybe guessable but not clear-cut, and that we’d be better off making a tangible experiment instead of debating intuitions. Maybe our answer would be wrong; if so, we’d adjust later. The cost of altering our trajectory would not be so high that it would invalidate the benefit of immediate progress.

Understand your assumptions

I saw Don model this pattern again when he was general manager of a newly created business unit inside Symantec. We were planning the first release of a suite melded from independently acquired products; the sales force’s compensation and training were in flux; our outbound marketing strategy was unknown.

I think product management gulped when Don asked them for a credible sales forecast, a granular competitive analysis, a rationalized pricing strategy, and a business case justifying the feature sets they proposed to map to line items in budgets. Who wouldn’t gulp? It was a tall order.

But Don wouldn’t settle for finger-in-the-air answers. He dug out a set of spreadsheets from his MBA days and tweaked them. Hundreds of cells held our best-guess scores for the usefulness of features in our products and those of our competitors. Sheets captured assumptions. More sheets ran linear regressions to see if our price/performance fell in line with the industry.

He got pushback. “You’ve got so many assumptions that the model can’t possibly be correct.”

“Yes,” Don would admit. “But insight, not perfect correctness, is what we’re after. You get insight out of specifics. Should we give our installation’s ease-of-use a 5 out of 10, or a 6? Not sure. But notice that the overall price/performance of our product doesn’t change at all when we vary that number. We wouldn’t know that unless we’d plugged in something. Forcing ourselves to give an answer has taught us something about our assumptions.”
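The sensitivity check Don describes can be sketched in a few lines of Python. Everything here is a made-up stand-in — the feature names, weights, scores, and prices are hypothetical, not numbers from Don’s actual model — but it shows the mechanics: plug in a specific guess, then vary it and watch whether the conclusion moves.

```python
# Toy competitive-scoring model: weighted feature scores divided by price.
# All names and numbers below are hypothetical placeholders.

WEIGHTS = {          # feature -> relative importance (sums to 1.0)
    "install_ease": 0.2,
    "performance": 0.5,
    "reporting": 0.3,
}

def weighted_score(scores):
    """Weighted sum of per-feature scores on a 0-10 scale."""
    return sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)

ours = {"install_ease": 5, "performance": 8, "reporting": 6}
theirs = {"install_ease": 7, "performance": 7, "reporting": 6}
our_price, their_price = 100.0, 110.0

# Sensitivity check: should install_ease be a 5 or a 6? Try both and
# see whether the price/performance comparison flips either way.
for guess in (5, 6):
    trial = dict(ours, install_ease=guess)
    we_win = weighted_score(trial) / our_price > weighted_score(theirs) / their_price
    print(f"install_ease={guess}: we lead on price/performance? {we_win}")
```

With these placeholder numbers the verdict is the same for both guesses — which is exactly the kind of insight Don was after: the uncertain cell doesn’t matter to the conclusion, and you only learn that by plugging in something specific.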

Don’s model enabled smart action in the face of tremendous uncertainty.

Show, don’t tell

Don is always tinkering with tools and processes that instill this habit in his troops. On another memorable occasion, we were wrestling with long, dry requirements documents. They were auto-generated from a requirements database, and they ran into the scores–maybe hundreds–of pages. Because they were so long, nobody liked to read them carefully. And because nobody liked to read them, they weren’t producing the interlock we needed to keep five development sites aligned in a healthy way.

Don asked us to build a UI that manifested the existence of all features that would be exposed in our next release. It didn’t have to do anything–just display what sorts of things would be possible.

At first I thought he was crazy. I told him it would take a long time.

“Not as long as requirements docs,” he said.

“It will be ugly–completely lacking in any usability.”

“So? We’re not trying to model the customer experience. We can throw it away. The value is in forcing ourselves to be specific about what features we have.”

I built the UI. And I learned a ton. A picture–even a sloppy one–is easily worth a thousand words. Especially if it’s interactive.

Product managers saw the UI. After a suitable re-education so they knew not to worry about ugliness or awkward workflow, they started saying really insightful things, like: “If we expose features A and C but not B, a user who wants to do A→B→C will be frustrated.” The concreteness of the UI was almost like magic.

I could go on and on about how Don lives this principle, but you get the idea: pose hard questions, get the best answers you can… then force yourself to put a stake in the ground, experiment with specifics, and learn from it.

Thanks for the lesson, Don.

Action Item

Consider a current (or past) design decision that’s been difficult to make. How could you make the decision easier by guessing about the likelihood of a use case, the preference of a user, or other aspects of context? If you got the answer wrong, how soon would you be likely to discover your mistake, and how expensive would it be to adjust your trajectory?