Julie Jones: Learn voraciously.

(A post in my “Role Models” series…)

When you’re a twenty-something computer geek, the pace of the software industry doesn’t worry you much. You’re full of energy and enthusiasm for your chosen career, and you’re confident you’ll quickly absorb whatever wasn’t covered during college.

Add a decade or so, and you may see the world a bit differently.

Expertise and Moving Targets

Is mastery possible when your instrument keeps changing? Photo credit: Wikimedia Commons

The rate at which new technologies burst on the scene, acquire mindshare, and demand the attention of engineers can be overwhelming. A journeyman’s knowledge on a given subject may be obsolete long before it matures into mastery. Gartner’s hype cycle can make your head hurt.

If you aspire to excellence, two dawning realizations might cause some angst: you will never learn it all (you can’t be perfect on breadth), and even in a narrower problem domain where you might specialize, flux will erode your skills (you can’t be perfect on depth).

Maybe you’ve read Malcolm Gladwell’s Outliers. Yo-Yo Ma put in 10,000 hours to become great at the cello; what are you supposed to do if your instrument changes every year or two?

My friend Julie has an answer.

Learning to Learn

I first met Julie Jones when I was in my early 30s. We worked together on the “new engine” initiative at PowerQuest (a pivotal experience I’ve blogged about before). Julie had much more experience than I did; she taught me abstract factories and dependency injection, loose coupling, declarative programming, streams, unit testing, XP, and many other key concepts.

At first I thought Julie just plucked these ideas out of an impressive mental archive. She had rich and varied experience, and she seemed to know so much.

As I came to know her better, however, I realized that knowledge was not her great talent. Sure, she knows a lot–but more importantly, she learns a lot. The XP she taught me–that came from a book by Kent Beck that she’d just finished. The loose coupling? Scott Meyers, Effective C++. Again, recent reading. Fancy template techniques? Alexandrescu, Modern C++ Design, just barely published.
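For flavor, here’s roughly the kind of thing she was teaching: a minimal dependency-injection sketch, written in modern C++ rather than the dialect we used back then. The class names (Logger, PartitionCopier) are invented for this post, not lifted from our actual engine.

```cpp
#include <iostream>
#include <memory>
#include <string>

// An abstract interface decouples callers from any concrete implementation.
class Logger {
public:
    virtual ~Logger() = default;
    virtual void write(const std::string& message) = 0;
};

// One concrete implementation; a unit test could inject a fake instead.
class ConsoleLogger : public Logger {
public:
    void write(const std::string& message) override {
        std::cout << message << '\n';
    }
};

// PartitionCopier depends only on the Logger interface. The concrete
// logger is injected by the caller, so the class can be rewired or
// tested without touching its internals; that's the loose coupling
// Julie kept pushing us toward.
class PartitionCopier {
public:
    explicit PartitionCopier(std::unique_ptr<Logger> logger)
        : logger_(std::move(logger)) {}

    void copy() {
        logger_->write("copying partition...");
        // ... the actual copy work would go here ...
    }

private:
    std::unique_ptr<Logger> logger_;
};

int main() {
    PartitionCopier copier(std::make_unique<ConsoleLogger>());
    copier.copy();
}
```

The point of the pattern is that PartitionCopier never names a concrete logger, which is what makes it easy to fake in a unit test; the combination of injection and testability was exactly the sort of idea Julie would read about one month and teach the next.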

Julie was constantly, steadily learning. She subscribed to the C/C++ Users Journal. She played around with new boost libraries. When she got an assignment to improve the build system, she tinkered with Perl, then decided she wanted to learn Python and threw herself into the effort with gusto. One weekend, she wrote a Python module to manage advancement in her foosball tournament; within a month or two, she was teaching the rest of us. She was always adding new books to her bookshelf. She went to conferences. When we needed support for LDM and dynamic disks in our disk management layer, Julie volunteered, studied the relevant specs, and quickly rolled out library extensions.

What I learned from Julie’s example is that a superb software engineer isn’t so much an expert on a technology; a superb software engineer is an expert student of technology.

Learn to learn. In the long run, the engineer who’s mastered that skill will deliver far more business value than a narrow subject matter expert. And she or he will have a lot more fun surfing the turbulence of the tech industry.

Thanks for the lesson, Julie.

Action Item

Identify a few habits you could form to help you learn more, more often, and more easily. (Check out this post for some ideas.) Start working on them.

Roland Whatcott: Manage momentum.

(A post in my “Role Models” series…)

In late 2000, I joined a small team tasked with rewriting the core technology at PowerQuest. The old codebase–despite embodying a number of patent-pending concepts, and serving as the foundation for all our revenue–was fragile, rife with technical debt, and unfriendly to localization, new platforms, and other roadmap priorities.

Building our new engine wasn’t exactly rocket science, but we expected our output to be cool and generate lots of thrust. We took our work as seriously as NASA… Photo credit: Wikimedia Commons

This rewrite had been attempted before–more than once, by some of the brightest engineers I’ve ever worked with. Each time, the press of looming releases, and the lack of obvious progress, culminated in a “we’ll have to come back to this later” decision.

Our little team was confident that This Would Not Happen To Us. We were going to build an engine that was cross-platform from the ground up. No weird dependencies or assumptions about compiler quirks or endianness would permeate the code. Internationalization (i18n) and localization (l10n) support would be baked in. Errors would be clearer. Modules would be small, beautiful, and loosely coupled. Gotos would disappear. Vestiges of C dialect would be replaced by best-practice STL, boost, metaprogramming, and other cutting-edge C++ ideas.
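To make one of those goals concrete: “no endianness assumptions” meant serializing on-disk structures byte by byte in a declared order, never by dumping in-memory representations. Here’s a minimal sketch of the idea; the function names are mine, for illustration only.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Write a 32-bit value in an explicit byte order (little-endian here),
// so the on-disk format is identical no matter which platform wrote it.
void append_u32_le(std::vector<std::uint8_t>& out, std::uint32_t value) {
    out.push_back(static_cast<std::uint8_t>(value & 0xFF));
    out.push_back(static_cast<std::uint8_t>((value >> 8) & 0xFF));
    out.push_back(static_cast<std::uint8_t>((value >> 16) & 0xFF));
    out.push_back(static_cast<std::uint8_t>((value >> 24) & 0xFF));
}

// Read it back the same way: no memcpy of a struct, no host-order guesses.
std::uint32_t read_u32_le(const std::uint8_t* in) {
    return static_cast<std::uint32_t>(in[0])
         | (static_cast<std::uint32_t>(in[1]) << 8)
         | (static_cast<std::uint32_t>(in[2]) << 16)
         | (static_cast<std::uint32_t>(in[3]) << 24);
}

int main() {
    std::vector<std::uint8_t> buffer;
    append_u32_le(buffer, 0xCAFEBABE);
    assert(read_u32_le(buffer.data()) == 0xCAFEBABE);  // round-trips everywhere
}
```

With helpers like these at every serialization boundary, the same image file can be written on a big-endian box and read on a little-endian one without a single platform-specific #ifdef.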

Experience Versus Enthusiasm

Before I tell the rest of the story, make a prediction. Do you think we crashed and burned, muddled through, or succeeded wildly?

How you answer will say a lot about you.

If you’re young and optimistic, you may expect me to tell a story with a happy ending. But if you’re an industry veteran, you probably expect I’m going to tell a cautionary tale framed by failure. You know that rewriting a core technology from scratch almost never succeeds, and you can give a dozen excellent reasons why that’s the case.

If you’ve got Roland Whatcott’s genius, you can imagine either outcome — and more importantly, you know how you can choose the future you want.

Ken Ebert: Kill three birds.

(A post in my “Role Models” series…)

Killing birds is just a metaphor; who’d want to hurt something this cute? Photo credit: Valerian Gaudeau (Flickr).

Most people optimize for a single outcome.

Ken Ebert is made of smarter stuff.

When I began my career, I was like many engineers–keenly aware of issues like code quality and performance, but not so clued in to customer experience or business priorities. I’d go to team meetings, get new assignments, and head back to my cube to code away. If I was unsure how to accomplish my goals, I’d ask a senior engineer, and do my best to reflect the values they cared about. After all, they were the experts.

It gradually dawned on me that some of my engineer mentors (not many; geeks are better-rounded than popular stereotypes make them out to be…) suffered from a kind of myopia. They could build complicated and highly functional things, but they had little interest in how their constructions mapped into sales, revenue, and company success. I went through a revolution in my perspective, got religion about user experience, and spent a lot of time learning from product management.

In 2003, Symantec acquired several companies in rapid succession, and threw five sites together into a new business unit with marching orders to build a suite and conquer the world. I was tasked with architecture across the whole picture, and my perspective expanded again. Now it wasn’t just a single product’s success I was worrying about–I had to make or shepherd design decisions that enhanced the profitability of multiple independent products as well as the amalgam of all of them. Was it more important to make codebase X localizable, or to invest in a better install for codebase Y? If team A moved to a newer version of Tomcat, would it break the integration story for our suite or impose undue burdens on team B? Resources were constrained (they always are!), time was short…

I began to sense that the way I made decisions needed to change, but it wasn’t until I worked intensely with Ken Ebert, years later, that I could put into words exactly how.

Making wise decisions about architecture, SOP on dev teams, product features, investment in R&D versus marketing–it’s like trying to solve an equation with multiple variables. Profit maximization requires that you consider more than one input at a time. This is partly what I had in mind when I blogged about the need for balance in good code, and when I blogged about optimization.

Many times I’ve heard Ken say something like this: “I think we should make feature N the centerpiece of our next release.” And my first response has been, “That’s doable, but kinda boring. Why not focus on M instead?”

When Ken nods sagely at my question, I know what’s coming.

“N is glamorous and low-risk, but it’s only useful to 3% of our customers. If we build M, we’ll address a sales inhibitor for half the funnel, plus we’ll be positioned to do features P and Q on our long-term roadmap, plus I’ll be able to get an analyst to write about it, plus …, plus …”

Ken doesn’t just kill two birds with one stone. He kills three, or five, or six.

I’m not sure how Ken manages to be so good at this, but here’s what I’ve done to try to approximate his success:

  • Evaluate in multiple time horizons.
  • Evaluate for the channel as well as direct sales.
  • Evaluate for support and professional services as well as dev.
  • Evaluate for marketing value.
  • Evaluate for opportunity cost.
  • Evaluate with an eye to domino effects.
  • Evaluate for morale and momentum.
  • Evaluate for technical debt.

Of course, nobody can do all of these, exhaustively, all the time. Don Kleinschnitz’s wisdom about putting a stake in the ground is an important counterbalance. But I find that I can think, at least briefly, about ramifications of a choice from multiple perspectives–and when I do, I often make better choices.

Thanks for the lesson, Ken.

Action Item

Look at the bulleted list of perspectives above; find one you’ve been neglecting. Pick a significant decision that’s on the horizon, and spend some time thinking seriously about that decision in the context of the neglected perspective. What did you learn?

Steve Tolman: It depends.

(A post in my “Role Models” series…)

My friend and long-time colleague Steve Tolman has a standing joke with people who know him well. He gives the same answer to every question: “It depends.”

Unlike most jokes, this one gets funnier the more often you hear it, especially if Steve gives a cheesy wink during delivery.

Nope, not Steve. Photo credit: Wikimedia Commons.

Dead serious, high-stakes discussion. Breathless delivery: Should we build feature A or B?

“It depends.” Wink, wink.

How fast can we do a release?

“It depends.”

Do you want ketchup on your hamburger?

“It depends.”

Maybe you have to be there.  ;-)

Steve doesn’t use his favorite answer because he’s lazy; he’s one of the hardest-working, hardest-thinking guys I know. He uses it because important questions often have rich answers, and unless all parties share an understanding about the priorities and assumptions underlying a question, sound bite responses aren’t very helpful. The answer is supposed to remind everyone that a thoughtful approach to business priorities, not slavish conformance to a rule book, is what will drive economic success. Steve generally follows his answer up with a series of probing questions to help everyone rediscover that truth, and to get our creative juices flowing. “It depends” is an invitation to deep discussion, which often produces insight we sorely need.

I see a strong connection between Steve’s tongue-in-cheek answer and the sort of tradeoff analysis that informs smart engineers’ thinking. As I said elsewhere, good code is balanced. You take many competing considerations and find the approach that best addresses business priorities (note the plural on that last word). You don’t get dogmatic, because you know that no extreme position is likely to optimize competing considerations. But you don’t get so pragmatic that you give up on vision and passion, either.

Steve gets this.

You might feel that “it depends” thinking is incompatible with the “put a stake in the ground” principle I blogged about recently. “It depends” invites further discussion, whereas stakes end debates. Right?

I don’t think so. You just use these strategies under different circumstances. “It depends” makes sense when shared understanding hasn’t yet developed. Stakes make sense when you all know the context and are still unsure how to proceed.

So what’s the right ratio of winks to stakes?

It depends. ;-)

Thanks for the lesson, Steve.

Action Item

Next time someone asks you for an overly simple answer, tell them “it depends,” and then let them make the next move. Betcha they’ll ask for detail and listen to your explanation more thoughtfully.

Don Kleinschnitz: Put a stake in the ground.

(A post in my “Role Models” series…)

My huddle was not going well. I’d called a meeting to debate a tricky architectural problem with other senior engineers, and consensus was scarcer than working markers for our whiteboard. We were going in circles.

Don Kleinschnitz walked in. It was our first interaction–he’d only been introduced to the company as our new CTO a few days before–and I wondered whether he’d help us get off the dime.

Five minutes later, the meeting was over, and the controversy was settled. Don had “put a stake in the ground,” as he called it — picked a spot and made a tangible, semi-permanent choice to anchor our behavior.

A stake in the ground. :-) Photo credit: Wikimedia Commons.

Answer the hard questions

I don’t remember the question or the answer, but I do remember some of Don’s solution. He immediately pushed us from generalities into specifics–what use case, exactly, would be affected by the decision? How much, exactly, would tradeoffs pay or cost in either direction?

Of course we couldn’t answer Don’s questions very well; nothing is more certain than ambiguity in software. But Don refused to let us off the hook, because he understood that imperfect but specific answers are better than vague generalizations. Even if you have to guess. (More rationale for this principle is elaborated in the RPCD manifesto.)

By putting a stake in the ground, Don wasn’t being arrogant or unwilling to listen. He was simply recognizing that we had incomplete input, that the right answer was maybe guessable but not clear-cut, and that we’d be better off making a tangible experiment instead of debating intuitions. Maybe our answer would be wrong; if so, we’d adjust later. The cost of altering our trajectory would not be so high that it would invalidate the benefit of immediate progress.

Understand your assumptions

I saw Don model this pattern again when he was general manager of a newly created business unit inside Symantec. We were planning the first release of a suite melded from independently acquired products; the sales force’s compensation and training were in flux; our outbound marketing strategy was unknown.

I think product management gulped when Don asked them for a credible sales forecast, a granular competitive analysis, a rationalized pricing strategy, and a business case justifying the feature sets they proposed to map to line items in budgets. Who wouldn’t gulp? It was a tall order.

But Don wouldn’t settle for finger-in-the-air answers. He dug out a set of spreadsheets from his MBA days and tweaked them. Hundreds of cells held our best-guess scores for the usefulness of features in our products and those of our competitors. Sheets captured assumptions. More sheets ran linear regressions to see if our price/performance fell in line with the industry.

He got pushback. “You’ve got so many assumptions that the model can’t possibly be correct.”

“Yes,” Don would admit. “But insight, not perfect correctness, is what we’re after. You get insight out of specifics. Should we give our installation’s ease-of-use a 5 out of 10, or a 6? Not sure. But notice that the overall price/performance of our product doesn’t change at all when we vary that number. We wouldn’t know that unless we’d plugged in something. Forcing ourselves to give an answer has taught us something about our assumptions.”

Don’s model enabled smart action in the face of tremendous uncertainty.
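I never saw the inside of every sheet, but the core mechanic is easy to sketch: score each feature, weight it by how much the market cares, and then nudge a guess you’re unsure about to see whether the bottom line even moves. All the feature names and numbers below are invented for illustration.

```cpp
#include <cstdio>
#include <vector>

struct Feature {
    const char* name;
    double weight;  // how much the market cares (a guess)
    double score;   // how well our product does, 0-10 (a guess)
};

// Weighted average of the feature scores: a crude price/performance proxy.
double overall(const std::vector<Feature>& features) {
    double total = 0, weights = 0;
    for (const auto& f : features) {
        total += f.weight * f.score;
        weights += f.weight;
    }
    return total / weights;
}

int main() {
    std::vector<Feature> model = {
        {"imaging speed",    5.0, 7.0},
        {"install ease",     0.5, 5.0},  // the score we're unsure about
        {"platform support", 3.0, 8.0},
    };

    std::printf("baseline: %.3f\n", overall(model));

    // Sensitivity check: nudge the uncertain score and see if it matters.
    model[1].score = 6.0;
    std::printf("install ease 5 -> 6: %.3f\n", overall(model));
}
```

Run it, vary the guess, and you learn immediately whether that score is even worth arguing about; that was precisely the insight Don was selling.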

Show, don’t tell

Don is always tinkering with tools and processes that instill this habit in his troops. On another memorable occasion, we were wrestling with long, dry requirements documents. They were auto-generated from a requirements database, and they ran into the scores–maybe hundreds–of pages. Because they were so long, nobody liked to read them carefully. And because nobody liked to read them, they weren’t producing the interlock we needed to keep five development sites aligned in a healthy way.

Don asked us to build a UI that manifested the existence of all features that would be exposed in our next release. It didn’t have to do anything–just display what sorts of things would be possible.

At first I thought he was crazy. I told him it would take a long time.

“Not as long as requirements docs,” he said.

“It will be ugly–completely lacking in any usability.”

“So? We’re not trying to model the customer experience. We can throw it away. The value is in forcing ourselves to be specific about what features we have.”

I built the UI. And I learned a ton. A picture–even a sloppy one–is easily worth a thousand words. Especially if it’s interactive.

Product managers saw the UI. After a suitable re-education so they knew not to worry about ugliness or awkward workflow, they started saying really insightful things, like: “If we expose features A and C but not B, a user who wants to do A→B→C will be frustrated.” The concreteness of the UI was almost like magic.

I could go on and on about how Don lives this principle, but you get the idea: pose hard questions, get the best answers you can… then force yourself to put a stake in the ground, experiment with specifics, and learn from it.

Thanks for the lesson, Don.

Action Item

Consider a current (or past) design decision that’s been difficult to make. How could you make the decision easier by guessing about the likelihood of a use case, the preference of a user, or other aspects of context? If you got the answer wrong, how soon would you be likely to discover your mistake, and how expensive would it be to adjust your trajectory?