Why I don’t blog about “great code”

Last week I heard a Stanford researcher describe how failure can be a good thing, if we are prepared to learn from it.

I agree, although this mindset is easier to describe than to achieve. So here I’m kicking off a new series of posts about mistakes I’ve made over the years, and what I’ve learned from them. Look in the “Mistakes” category for more like this.

If you’ve followed my blog at all, you’ll know that I regularly return to the theme of what constitutes good code. Ever wonder why I don’t get more ambitious and talk about “great code” instead?

A big reason is that in software, great can be the enemy of good.

If you’re a fan of aphorisms, you’ve probably heard the opposite statement a few times: “good is the enemy of great.” People who say this are emphasizing the value of setting lofty goals, and then aligning our day-to-day lives to deeply held priorities. They remind us that settling for mediocrity is almost guaranteed not to create deep meaning or purpose. And they are quite right.

However, I submit that the greatness you should be pursuing in software is less about producing great code, and more about becoming a great producer of code. And great producers of code know that aiming for a magnum opus with every creation will not optimize business value. Not every commission can be the Sistine Chapel.

Detail from the Sistine Chapel, by Michelangelo. Image credit: Wikimedia Commons.

Don’t get me wrong. I care about the quality and artistry of code, and there is definitely great code out there. It’s just that I’ve got these battle scars…

Daniel builds a tower

In 2001, I helped design and code…

Are You Designing an Apex Predator?

(This post is an excerpt from my forthcoming book, Lifeware. Follow my blog [see right sidebar] to catch the publication announcement.)

A paleontologist walks along a canyon. She finds part of a fossilized crocodilian skull in a layer of sandstone. She immediately knows that this creature’s environment was wet and warm, with an abundance of animal and plant life at all layers in the energy pyramid. In fact, that single clue implies an entire ecosystem with an amazing amount of detail.

Crocodylus porosus: eats competitors for lunch. Photo credit: Molly Ebersold (Wikimedia Commons)

Her predictions derive from the fact that adult crocodiles are apex predators. They’re at the top of a food chain which stretches down through medium-sized herbivores, omnivores, or carnivores (in modern times, a zebra or jackal, for example), to smaller omnivores and herbivores (fish, turtles), to insects and nematodes and molluscs and arachnids, to plants in a thousand varieties… At every trophic level in the energy pyramid, there’s roughly 10x the biomass of the layer above it, so one crocodile implies a lot of bugs and reeds and mangroves.
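
To put rough numbers on that pyramid, here’s a back-of-the-envelope sketch in Python. The 10x ratio is the rule of thumb cited above; the starting figure is invented purely for illustration.

```python
# Each trophic level holds roughly 10x the biomass of the level above it.
croc_biomass_kg = 1_000  # hypothetical standing mass of apex predators
levels = ["apex predators", "mid-sized consumers", "small consumers",
          "insects, molluscs, and such", "plants"]
for depth, level in enumerate(levels):
    print(f"{level}: ~{croc_biomass_kg * 10 ** depth:,} kg")
# One tonne of crocodile implies on the order of ten thousand tonnes of plants.
```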

Ask a kid for his favorite animal, and you’re likely to hear about an apex predator: tiger, orca, polar bear, great white shark, eagle. The freedom, fearlessness, and power of these animals capture our imagination.

Kids who grow into software engineers have an affinity for complex software “organisms” with highly developed competitive advantage. They still love apex predators.

As my friend Chris pointed out to me yesterday, a highly evolved design comes naturally in software. Structural engineers are constrained by gravity and the tensile properties of steel, so they don’t attempt bridges across the ocean; software, on the other hand, has much fuzzier (and more poorly understood) limits.

But here’s an important lesson from biology: apex predators require a rich, layered ecosystem in which their adaptations can shine. Take a crocodile and plop it down in the middle of the Sahara or the arctic tundra; how long will it survive?

Software is the same way. If you are designing an enterprise application that offers rich reporting, seamless transitions to and from the cloud, sophisticated licensing, integration with management frameworks and directory services, single sign-on, and industrial-strength security, you are designing an apex predator. That’s cool. But is there enough “biomass” in the environment to sustain your killer app? The enabling environment has to include trained support and sales staff, analysts, VARs, professional IT organizations with time and budget, a friendly legal climate, whitepapers, a developer community, and so on and so on. If you don’t have this, your apex predator will die. Make sure you understand the ecosystem implied by your design; addressing the environment is as much a part of the design and development as the UML diagrams and the WSDL.

Cheating is possible. You can keep crocodiles alive in a zoo without an acre of mangrove swamp, and you can short-circuit VARs and skimp on whitepapers and so forth. But cheating is expensive and suboptimal. And even cheating requires plenty of planning and investment.

An alternative to cheating is to hold off on building your apex predator until the ecosystem evolves. Maybe you should settle for a handful of snapping turtles and a snake, until the swamp’s big enough to house a crocodile. Snapping turtles might not be a bad business… This choice isn’t the same as abandoning a vision; it’s just recognizing that ignoring the timetable of evolution may be unwise.

Action Item

Look at your roadmap. Is the next round of releases going to drop an “apex predator” on the market like a crocodile in the arctic? What must be true of the ecosystem for the product to succeed, and how can you make those things true?

Good Code Plans for Problems

(Another post in my “What is ‘Good Code’?” series…)

What should you do when software doesn’t work the way you expect?

Surprising behavior. Photo credit: epicfail.com.

You have to have a plan. The plan could bring one (or several) of the following strategies to bear:

  • Reject bad input as early as possible using preconditions.
  • Let garbage in and pass garbage out (classic GIGO).
  • Throw an exception and make it someone else’s problem.
  • Return a meaningful error.
  • Log the problem.

These choices are not equally good, IMO, and there are times when each is more or less appropriate. Perhaps I’ll blog about that in another post…
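
To make the options above concrete, here’s a minimal sketch in Python. The order-parsing scenario, and names like parse_order and OrderError, are invented for illustration; they aren’t from any particular codebase.

```python
import logging

log = logging.getLogger(__name__)

class OrderError(Exception):
    """Raised when an order record can't be processed."""

def total_price_gigo(prices):
    # Strategy 2: garbage in, garbage out. No checks; callers who pass
    # nonsense get nonsense (or a crash) back.
    return sum(prices)

def parse_order(record: dict) -> dict:
    # Strategy 1: reject bad input as early as possible (a precondition).
    if not isinstance(record, dict) or "id" not in record:
        raise ValueError("record must be a dict with an 'id' field")

    try:
        quantity = int(record["quantity"])
    except (KeyError, TypeError, ValueError) as exc:
        # Strategy 3: throw an exception and make it someone else's problem.
        raise OrderError(f"order {record['id']} has a missing or bad quantity") from exc

    if quantity <= 0:
        # Strategy 4: return a meaningful error the caller can act on.
        return {"id": record["id"], "ok": False,
                "error": "quantity must be positive"}

    if quantity > 10_000:
        # Strategy 5: log the problem (suspicious, but not fatal by itself).
        log.warning("order %s has an unusually large quantity: %d",
                    record["id"], quantity)

    return {"id": record["id"], "ok": True, "quantity": quantity}
```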

Regardless of which strategy or strategies you pick, the overriding rule is: develop, announce, and execute a specific plan for handling problems.

This rule applies at all levels of code — low-level algorithms, modules, applications, entire software ecosystems (see my post about how software is like biology). The plan can be (perhaps should be) different in different places. But just as the actions of dozens of squads or platoons might need to be coordinated to take a hill, the individual choices you make about micro problem-handling should contribute to a cohesive whole at the macro level.

Notice that I’ve talked about “problems,” not “exceptions” or “errors.” Problems certainly include exceptions or errors, but using either of those narrower terms tends to confine your thinking. Exceptions are a nifty mechanism, but they don’t propagate across process boundaries (at least, not without some careful planning). Sometimes a glut of warnings is a serious problem, even if no individual event rises to the level of an “error.”
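
To sketch both points, here’s a hypothetical boundary handler that translates in-process exceptions into a serializable error payload (since a raised exception can’t follow the response across the wire), plus a small helper that treats an accumulation of warnings as a problem in its own right. All names here are invented for illustration.

```python
import json
import logging

log = logging.getLogger(__name__)

class WarningBudget:
    """Escalates when a glut of individually harmless warnings piles up."""

    def __init__(self, limit: int = 100):
        self.limit = limit
        self.count = 0

    def warn(self, msg: str) -> None:
        self.count += 1
        log.warning(msg)
        if self.count >= self.limit:
            # No single event was an "error", but the aggregate is a problem.
            raise RuntimeError(f"{self.count} warnings in this batch; giving up")

def handle_request(raw_body: bytes) -> bytes:
    # This is a process boundary: exceptions raised in here are caught and
    # translated into a structured error the caller can parse on its side.
    try:
        payload = json.loads(raw_body)
        result = {"echo": payload}  # stand-in for real work
        return json.dumps({"ok": True, "result": result}).encode()
    except Exception as exc:
        log.exception("request failed")
        return json.dumps({"ok": False, "error": str(exc)}).encode()
```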

Action Item

Evaluate the plan(s) for problem-handling in one corner of your code. Is the plan explicit and understood by all users and maintainers? Is it implemented consistently? Is it tested? Pick one thing you can do to improve the plan.

What Role Are You Playing in RPCD?

When you role play in RPCD, you could be standing in for either software, or a flesh-and-blood human being. As RPCD explicitly acknowledges, both are part of your final system.

A key insight of RPCD is that the distinction between these two entities is a lot less meaningful than we usually think. Most software is created to solve a problem that ordinary human beings are already addressing in some fashion. In fact, I might go so far as to say that this is true of all software. The quick exceptions that come to mind just take a little more pondering to map. Biometric scanners? Just a replacement for humans judging whether a person matches that photo on their driver’s license. RFID tags? Just making the human inventory team’s job easier. Firmware in high-speed switches on the internet backbone? Just a step or two removed from postmen and switchboard operators. Word processors and IMEs? Just a replacement for a transcriptionist. Signal analyzers for SETI? Genome sequencers? Protein folders for Folding@Home? Just substitutes for human researchers.

A human researcher. Photo credit: armymedicine (Flickr)

Even if you’re playing a role that you’re sure will ultimately be encapsulated in a tidy software abstraction, RPCD helps you remember that the problem that entity solves is ultimately a human problem. We don’t pay engineers the big bucks to make computers happy.

Long-Term Benefits of RPCD

In a previous post, I wrote about a methodology for designing software that has role-playing at its heart. If you take a look at the example RPCD interaction, I think some benefits will immediately become apparent.

But there are subtler advantages that might not be obvious:

The system never needs to wait for functional code to be deployable.

As soon as you have a set of reasonably clear roles defined, and at least one specific use case, you can observe the system in action. Granted, you’re observing people instead of software, but the full complexity and dynamism spring to life right before your eyes. People can hold clipboards and use checklists to model how wizards populate databases; they can move sticky notes around a whiteboard to explore organizing algorithms. Seeing the system in toto, from the beginning, is incredibly useful.

Checklist. Photo credit: alancleaver (Flickr), http://www.flickr.com/photos/alancleaver/

The first deployments you do are demos. I am a big fan of demos; you should demo something at the end of every sprint or milestone, to guarantee that you’ve actually accomplished something useful and that it’s aligned with the needs of stakeholders. But with RPCD, you take demos to a whole new level, because your demos become interactive. If sales asks “What happens when the user pushes the big red button?”, you don’t have to say, “Sorry, we can’t demo that today.” You walk over to the person modeling the UI, push the red button on the piece of paper or the whiteboard that they’re holding, and let the system chug.

Later in the evolution of code, when most parts of the system are automated, the roles that humans still have to model give you critical information. Are these roles something that you ought to permanently delegate to human intelligence? Is the wizard you expected to build so complex and error-prone that online chat with a human is a better option, at least in this release? Perhaps you should hire an online chat person as a component of your system, take the risk out of the near-term release, and then study the behavior of that person intensely to learn how to automate in the next release… If your system has always used humans as cogs, adapting it in this way will be natural, not a major departure from expectations.

Another possible insight is that the roles where humans are still forced to do the work represent the high-value aspects of the problem. Solve those problems, and customers will really love you, because you’ve radically changed their work.

The needs of all humans interacting with the system are inherently “baked in” and obvious, instead of being implicit.

UCD says to center your design on the user. But so often we forget support, sales, IT, executive management, and so forth. If you’re writing an app for the iPad that uses social ranking to recommend shows airing in the next hour on cable channels, you probably have a team of data managers that maintains current listings for all channels, and a team of backroom IT that keeps your DB running. These people are part of the system; if one day your data center dies, and no IT people show up to fix it, or one day your channel listings go stale and no data manager steps in to solve the problem, you don’t have a product.

These people should all have roles in your role plays, and by regularly representing them, you can’t help but make better decisions. Overhead and cost centers become obvious. Risks jump out at you.

The clumsiness of certain touch points becomes obvious.

Or in other words, user feedback and usability testing are built in, because actual people, not abstract boxes in your system diagrams, are interacting from the very beginning.