Mountains, Molehills, and Markedness

In my previous three posts, I explained why the semantics of programming languages are not as rich as they could be. I pointed out some symptoms of that deficit, and then made recommendations about bridging the gap. Finally, I introduced “marks”–a feature of the intent programming language I’m creating–and gave you a taste of how they work.

In this post, I’m going to offer more examples so you can see the breadth of their application.

Aside

Before I do, however, I can’t resist commenting a bit on the rationale for the name “marks”.

In linguistics, markedness is the idea that some values in a language’s conceptual or structural systems are assumed by default, while others must be denoted explicitly through morphology, prosody, structural adjustments, and so forth. Choices about markedness are inseparable from worldview and from imputed meaning. Two quick examples:

George and the Flood

Here’s a simple little test that teaches an important lesson. Take a moment to work through all 3 questions. I promise it won’t take long. :-)


Question 1. A flood is coming. George can only swim for a little while. What should George do?

[Picture 1]


Question 2. A flood is coming. George can only swim for a little while. What should George do?

[Picture 2]


Question 3. A flood is coming. George can only swim for a little while. What should George do?

[Picture 3]

Ready to grade your answers?

The Yellow Belt Answer

Most people say “go right, toward higher ground” if picture 1 is the only input to their analysis. The logic is pretty indisputable. But…


Don Kleinschnitz: Put a stake in the ground.

(A post in my “Role Models” series…)

My huddle was not going well. I’d called a meeting to debate a tricky architectural problem with other senior engineers, and consensus was scarcer than working markers for our whiteboard. We were going around in circles.

Don Kleinschnitz walked in. It was our first interaction–he’d only been introduced to the company as our new CTO a few days before–and I wondered whether he’d help us get off the dime.

Five minutes later, the meeting was over, and the controversy was settled. Don had “put a stake in the ground,” as he called it — picked a spot and made a tangible, semi-permanent choice to anchor our behavior.

A stake in the ground. :-) Photo credit: Wikimedia Commons.

Answer the hard questions

I don’t remember the question or the answer, but I do remember some of Don’s solution. He immediately pushed us from generalities into specifics–what use case, exactly, would be affected by the decision? How much, exactly, would tradeoffs pay or cost in either direction?

Of course we couldn’t answer Don’s questions very well; nothing is more certain than ambiguity in software. But Don refused to let us off the hook, because he understood that imperfect but specific answers are better than vague generalizations. Even if you have to guess. (The rationale for this principle is elaborated further in the RPCD manifesto.)

By putting a stake in the ground, Don wasn’t being arrogant or unwilling to listen. He was simply recognizing that we had incomplete input, that the right answer was maybe guessable but not clear-cut, and that we’d be better off making a tangible experiment instead of debating intuitions. Maybe our answer would be wrong; if so, we’d adjust later. The cost of altering our trajectory would not be so high that it would invalidate the benefit of immediate progress.

Understand your assumptions

I saw Don model this pattern again when he was general manager of a newly created business unit inside Symantec. We were planning the first release of a suite melded from independently acquired products; the sales force’s compensation and training were in flux; our outbound marketing strategy was unknown.

I think product management gulped when Don asked them for a credible sales forecast, a granular competitive analysis, a rationalized pricing strategy, and a business case justifying the feature sets they proposed to map to line items in budgets. Who wouldn’t gulp? It was a tall order.

But Don wouldn’t settle for finger-in-the-air answers. He dug out a set of spreadsheets from his MBA days and tweaked them. Hundreds of cells held our best-guess scores for the usefulness of features in our products and those of our competitors. Sheets captured assumptions. More sheets ran linear regressions to see if our price/performance fell in line with the industry.

He got pushback. “You’ve got so many assumptions that the model can’t possibly be correct.”

“Yes,” Don would admit. “But insight, not perfect correctness, is what we’re after. You get insight out of specifics. Should we give our installation’s ease-of-use a 5 out of 10, or a 6? Not sure. But notice that the overall price/performance of our product doesn’t change at all when we vary that number. We wouldn’t know that unless we’d plugged in something. Forcing ourselves to give an answer has taught us something about our assumptions.”

Don’s model enabled smart action in the face of tremendous uncertainty.
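
To make that concrete, here is a minimal sketch of that kind of model, written in Python rather than a spreadsheet. The feature names, weights, and prices below are invented for illustration (this is not Don’s actual model); the point is only the exercise of plugging in a best guess and then checking whether the conclusion moves when the guess changes.

    # Minimal sketch of a feature-scoring model with a sensitivity check.
    # Feature names, weights, and prices are invented for illustration.

    our_scores = {"install_ease": 5, "reporting": 7, "integration": 6}    # best guesses, 0-10
    rival_scores = {"install_ease": 8, "reporting": 6, "integration": 5}

    weights = {"install_ease": 0.2, "reporting": 0.5, "integration": 0.3}  # assumed buyer priorities

    our_price, rival_price = 40_000, 45_000                                # hypothetical list prices

    def value(scores):
        """Weighted sum of feature usefulness scores."""
        return sum(weights[f] * s for f, s in scores.items())

    def price_performance(scores, price):
        """Value delivered per dollar: the number the model compares."""
        return value(scores) / price

    rival_pp = price_performance(rival_scores, rival_price)

    # Sensitivity check: vary the score we are least sure about and see
    # whether the overall conclusion changes at all.
    for guess in (4, 5, 6, 7):
        ours = price_performance(dict(our_scores, install_ease=guess), our_price)
        print(f"install_ease={guess}: we lead on price/performance? {ours > rival_pp}")

Run it and the verdict is the same for every plausible value of the uncertain score. That is the kind of insight Don meant: the guess does not need to be exactly right, it just needs to be specific enough to test.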

Show, don’t tell

Don is always tinkering with tools and processes that instill this habit in his troops. On another memorable occasion, we were wrestling with long, dry requirements documents. They were auto-generated from a requirements database, and they ran into the scores–maybe hundreds–of pages. Because they were so long, nobody liked to read them carefully. And because nobody liked to read them, they weren’t producing the interlock we needed to keep five development sites aligned in a healthy way.

Don asked us to build a UI that showed every feature that would be exposed in our next release. It didn’t have to do anything–just display what sorts of things would be possible.

At first I thought he was crazy. I told him it would take a long time.

“Not as long as requirements docs,” he said.

“It will be ugly–completely lacking in any usability.”

“So? We’re not trying to model the customer experience. We can throw it away. The value is in forcing ourselves to be specific about what features we have.”

I built the UI. And I learned a ton. A picture–even a sloppy one–is easily worth a thousand words. Especially if it’s interactive.
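
For flavor, here is the sort of throwaway, do-nothing feature inventory Don had in mind, sketched with Python’s built-in tkinter. The feature names are hypothetical and the actual UI I built was different; the only point is that every planned feature shows up as something you can see and click, even though clicking does nothing.

    # Throwaway "feature inventory" window: every planned feature appears as a
    # control, but nothing is wired up. Feature names are hypothetical.
    import tkinter as tk

    PLANNED_FEATURES = [
        "Import accounts from the legacy console",
        "Schedule a nightly backup",
        "Restore a single mailbox",
        "Export an audit report (CSV)",
    ]

    root = tk.Tk()
    root.title("Next release: feature inventory (throwaway mock)")

    for name in PLANNED_FEATURES:
        # The buttons deliberately do nothing; the value is in forcing the
        # feature list to be concrete and visible in one place.
        tk.Button(root, text=name, anchor="w", width=40).pack(padx=8, pady=2)

    root.mainloop()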

Product managers saw the UI. After a suitable re-education so they knew not to worry about ugliness or awkward workflow, they started saying really insightful things, like: “If we expose features A and C but not B, a user who wants to do A→B→C will be frustrated.” The concreteness of the UI was almost like magic.

I could go on and on about how Don lives this principle, but you get the idea: pose hard questions, get the best answers you can… then force yourself to put a stake in the ground, experiment with specifics, and learn from it.

Thanks for the lesson, Don.

Action Item

Consider a current (or past) design decision that’s been difficult to make. How could you make the decision easier by guessing about the likelihood of a use case, the preference of a user, or other aspects of context? If you got the answer wrong, how soon would you be likely to discover your mistake, and how expensive would it be to adjust your trajectory?