Learned Helplessness, Rats, and People Power

In the 1950s, researchers at Johns Hopkins conducted some very troubling experiments. They caught wild rats and squeezed them in their hands until they stopped struggling, teaching them that nothing they did would let them escape the crushing grip of their human captors. Then they dropped the rats in a bucket of water and watched them swim.

Now, wild rats are superb swimmers. On average, rats that had not received the squeeze treatment lasted around 60 hours in the bucket before they gave up from exhaustion and allowed themselves to drown. One unsqueezed rat swam for 81 hours.

A later rats-in-bucket experiment (not quite so brutal). Photo credit: MBK (Marjie) (Flickr).

The average squeezed rat sank after 30 minutes.

In the 1960s and 1970s, Martin Seligman became interested in this phenomenon–he called it “learned helplessness”–and he was able to trigger similar “giving up” behavior in dogs and other animals. He theorized that human depression might work the same way: when people learn that nothing they do matters, they stop trying.

Book Review: Poke the Box

I just finished reading Seth Godin’s Poke the Box, and I recommend that you add it to your reading list. It’s short, punchy, and thought-provoking.


The main idea he advocates is that we should not wait around for the world to give us permission, and we should not be afraid to fail. We should just jump in with both feet and make things happen.

My favorite phrase from the whole book–and a great three-word summary–is “Now beats soon.” Kind of reminds me of the favorite motto of a wise leader that I admire: “We must lengthen our stride. And we must do it now.” (Spencer W. Kimball; he had “Do it now!” on a plaque on his desk.)

Yes, there are a few caveats. Some people are forever starting, but never finishing. That can be a problem. And you have to do your homework before you start; you don’t want to jump in until you know whether you’ve picked a smart place to swim across the river.

The only critique I have is that Godin could have said the same thing in about half the space. He has lots of short anecdotes, which are fun, but he had me convinced long before I got to the end.

Action Item

Go out and do something great! Now.

Roland Whatcott: Manage momentum.

(A post in my “Role Models” series…)

In late 2000, I joined a small team tasked with rewriting the core technology at PowerQuest. The old codebase–despite embodying a number of patent-pending concepts, and serving as the foundation for all our revenue–was fragile, rife with technical debt, and unfriendly to localization, new platforms, and other roadmap priorities.

Building our new engine wasn’t exactly rocket science, but we expected our output to be cool and generate lots of thrust. We took our work as seriously as NASA… Photo Credit: Wikimedia Commons

This rewrite had been attempted before–more than once, by some of the brightest engineers I’ve ever worked with. Each time, the press of looming releases, and the lack of obvious progress, culminated in a “we’ll have to come back to this later” decision.

Our little team was confident that This Would Not Happen To Us. We were going to build an engine that was cross-platform from the ground up. No weird dependencies and no assumptions about compiler quirks or endianness would permeate the code. Internationalization (i18n) and localization (l10n) support would be baked in. Errors would be clearer. Modules would be small, beautiful, and loosely coupled. Gotos would disappear. Vestiges of C dialect would be replaced by best-practice STL, boost, metaprogramming, and other cutting-edge C++ ideas.
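To make “no assumptions about endianness” concrete: the idea is that on-disk and on-wire formats should always declare an explicit byte order, rather than inheriting whatever the host CPU happens to use. Here’s a minimal sketch (in Python rather than the C++ we were writing, purely for brevity; the header fields are invented for illustration):

```python
import struct

def save_header(version, block_count):
    """Serialize two 32-bit fields with an explicit little-endian
    layout ('<'), so the bytes are identical on big- and
    little-endian hosts alike."""
    return struct.pack("<II", version, block_count)

def load_header(data):
    """Decode using the same explicit layout."""
    return struct.unpack("<II", data)

blob = save_header(2, 1024)
assert load_header(blob) == (2, 1024)
# 2 -> 02 00 00 00, 1024 (0x400) -> 00 04 00 00, on any machine:
assert blob == b"\x02\x00\x00\x00\x00\x04\x00\x00"
```

Using native byte order (`"=II"` or no prefix) instead would compile and pass tests on one platform, then quietly corrupt data when the same file crossed architectures–exactly the kind of latent assumption the rewrite was meant to eliminate.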

Experience Versus Enthusiasm

Before I tell the rest of the story, make a prediction. Do you think we crashed and burned, muddled through, or succeeded wildly?

How you answer will say a lot about you.

If you’re young and optimistic, you may expect me to tell a story with a happy ending. But if you’re an industry veteran, you probably expect I’m going to tell a cautionary tale framed by failure. You know that rewriting a core technology from scratch almost never succeeds, and you can give a dozen excellent reasons why that’s the case.

If you’ve got Roland Whatcott’s genius, you can imagine either outcome — and more importantly, you know how you can choose the future you want.


Steve Tolman: It depends.

(A post in my “Role Models” series…)

My friend and long-time colleague Steve Tolman has a standing joke with people who know him well. He gives the same answer to every question: “It depends.”

Unlike most jokes, this one gets funnier the more often you hear it, especially if Steve gives a cheesy wink during delivery.

Nope, not Steve. Photo credit: Wikimedia Commons.

Dead serious, high stakes discussion. Breathless delivery: Should we build feature A or B?

“It depends.” Wink, wink.

How fast can we do a release?

“It depends.”

Do you want ketchup on your hamburger?

“It depends.”

Maybe you have to be there.  ;-)

Steve doesn’t use his favorite answer because he’s lazy; he’s one of the hardest-working, hardest-thinking guys I know. He uses it because important questions often have rich answers, and unless all parties share an understanding about the priorities and assumptions underlying a question, sound bite responses aren’t very helpful. The answer is supposed to remind everyone that a thoughtful approach to business priorities, not slavish conformance to a rule book, is what will drive economic success. Steve generally follows his answer up with a series of probing questions to help everyone rediscover that truth, and to get our creative juices flowing. “It depends” is an invitation to deep discussion, which often produces insight we sorely need.

I see a strong connection between Steve’s tongue-in-cheek answer and the sort of tradeoff analysis that informs smart engineers’ thinking. As I said elsewhere, good code is balanced. You take many competing considerations and find the approach that best addresses business priorities (note the plural on that last word). You don’t get dogmatic, because you know that no extreme position is likely to optimize competing considerations. But you don’t get so pragmatic that you give up on vision and passion, either.

Steve gets this.

You might feel that “it depends” thinking is incompatible with the “put a stake in the ground” principle I blogged about recently. “It depends” invites further discussion, whereas stakes end debates. Right?

I don’t think so. You just use these strategies under different circumstances. “It depends” makes sense when shared understanding hasn’t yet developed. Stakes make sense when you all know the context and are still unsure how to proceed.

So what’s the right ratio of winks to stakes?

It depends. ;-)

Thanks for the lesson, Steve.

Action Item

Next time someone asks you for an overly simple answer, tell them “it depends,” and then let them make the next move. Betcha they’ll ask for detail and listen to your explanation more thoughtfully.

Don Kleinschnitz: Put a stake in the ground.

(A post in my “Role Models” series…)

My huddle was not going well. I’d called a meeting to debate a tricky architectural problem with other senior engineers, and consensus was scarcer than working markers for our whiteboard. We were going in circles.

Don Kleinschnitz walked in. It was our first interaction–he’d only been introduced to the company as our new CTO a few days before–and I wondered whether he’d help us get off the dime.

Five minutes later, the meeting was over, and the controversy was settled. Don had “put a stake in the ground,” as he called it — picked a spot and made a tangible, semi-permanent choice to anchor our behavior.

A stake in the ground. :-) Photo credit: Wikimedia Commons.

Answer the hard questions

I don’t remember the question or the answer, but I do remember some of Don’s solution. He immediately pushed us from generalities into specifics–what use case, exactly, would be affected by the decision? How much, exactly, would tradeoffs pay or cost in either direction?

Of course we couldn’t answer Don’s questions very well; nothing is more certain than ambiguity in software. But Don refused to let us off the hook, because he understood that imperfect but specific answers are better than vague generalizations. Even if you have to guess. (More rationale for this principle is elaborated in the RPCD manifesto.)

By putting a stake in the ground, Don wasn’t being arrogant or unwilling to listen. He was simply recognizing that we had incomplete input, that the right answer was maybe guessable but not clear-cut, and that we’d be better off making a tangible experiment instead of debating intuitions. Maybe our answer would be wrong; if so, we’d adjust later. The cost of altering our trajectory would not be so high that it would invalidate the benefit of immediate progress.

Understand your assumptions

I saw Don model this pattern again when he was general manager of a newly created business unit inside Symantec. We were planning the first release of a suite melded from independently acquired products; the sales force’s compensation and training were in flux; our outbound marketing strategy was unknown.

I think product management gulped when Don asked them for a credible sales forecast, a granular competitive analysis, a rationalized pricing strategy, and a business case justifying the feature sets they proposed to map to line items in budgets. Who wouldn’t gulp? It was a tall order.

But Don wouldn’t settle for finger-in-the-air answers. He dug out a set of spreadsheets from his MBA days and tweaked them. Hundreds of cells held our best-guess scores for the usefulness of features in our products and those of our competitors. Sheets captured assumptions. More sheets ran linear regressions to see if our price/performance fell in line with the industry.

He got pushback. “You’ve got so many assumptions that the model can’t possibly be correct.”

“Yes,” Don would admit. “But insight, not perfect correctness, is what we’re after. You get insight out of specifics. Should we give our installation’s ease-of-use a 5 out of 10, or a 6? Not sure. But notice that the overall price/performance of our product doesn’t change at all when we vary that number. We wouldn’t know that unless we’d plugged in something. Forcing ourselves to give an answer has taught us something about our assumptions.”
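Don’s point–that plugging in a specific number teaches you whether the number matters–is easy to sketch. Here’s a toy version of that sensitivity check in Python; the feature names, weights, scores, and price are all invented for illustration, not Don’s actual model:

```python
# Hypothetical feature weights and 0-10 usefulness scores; all numbers invented.
weights = {"install_ease": 0.05, "speed": 0.45, "reliability": 0.50}
ours    = {"install_ease": 5,    "speed": 8,    "reliability": 9}

def performance(scores):
    """Weighted overall usefulness score."""
    return sum(weights[f] * scores[f] for f in weights)

def price_performance(scores, price):
    """Usefulness per dollar -- the number we compare to competitors."""
    return performance(scores) / price

base = price_performance(ours, 79.0)

# Don's question: should install ease be a 5 or a 6? Vary it and see.
tweaked = dict(ours, install_ease=6)
delta = price_performance(tweaked, 79.0) - base
print(f"change from one disputed point: {delta / base:.1%}")
```

The relative change is well under one percent, because the disputed score carries a small weight–so the debate over 5-versus-6 is moot, and the team can spend its energy on the inputs the model is actually sensitive to. You only learn that by committing to specific numbers.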

Don’s model enabled smart action in the face of tremendous uncertainty.

Show, don’t tell

Don is always tinkering with tools and processes that instill this habit in his troops. On another memorable occasion, we were wrestling with long, dry requirements documents. They were auto-generated from a requirements database, and they ran into the scores–maybe hundreds–of pages. Because they were so long, nobody liked to read them carefully. And because nobody liked to read them, they weren’t producing the interlock we needed to keep five development sites aligned in a healthy way.

Don asked us to build a UI that manifested the existence of all features that would be exposed in our next release. It didn’t have to do anything–just display what sorts of things would be possible.

At first I thought he was crazy. I told him it would take a long time.

“Not as long as requirements docs,” he said.

“It will be ugly–completely lacking in any usability.”

“So? We’re not trying to model the customer experience. We can throw it away. The value is in forcing ourselves to be specific about what features we have.”

I built the UI. And I learned a ton. A picture–even a sloppy one–is easily worth a thousand words. Especially if it’s interactive.

Product managers saw the UI. After a suitable re-education so they knew not to worry about ugliness or awkward workflow, they started saying really insightful things, like: “If we expose features A and C but not B, a user who wants to do A→B→C will be frustrated.” The concreteness of the UI was almost like magic.

I could go on and on about how Don lives this principle, but you get the idea: pose hard questions, get the best answers you can… then force yourself to put a stake in the ground, experiment with specifics, and learn from it.

Thanks for the lesson, Don.

Action Item

Consider a current (or past) design decision that’s been difficult to make. How could you make the decision easier by guessing about the likelihood of a use case, the preference of a user, or other aspects of context? If you got the answer wrong, how soon would you be likely to discover your mistake, and how expensive would it be to adjust your trajectory?