In Which Warnings Evolve Wings

Ignoring warnings is a bad idea. At some level, we all know this. If we see a sign that says “Warning: Dangerous Undertow” at the beach, we pause (I hope!) and think twice before we get in the water.

Ignore warnings at your peril. :-) Image credit: xkcd.com

Yet we sometimes get cavalier about warnings in software. Specifically, I have heard programmers describe compiler warnings as being less severe than errors, as if worrying about them were optional.

This is simply not true.
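
To make the point concrete, here’s a minimal sketch (not from the original post) of code that compiles, where the warning a compiler raises is pointing at a genuine bug:

#include <cstdio>

int main() {
    int retries = 3;
    // Most compilers warn here ("assignment used as a condition" or
    // "suggest parentheses"). The warning marks a real bug: the author
    // meant ==, so the condition is always false and retries is silently
    // zeroed. Building with -Werror (gcc/clang) or /WX (MSVC) promotes
    // warnings like this one into errors you can't ignore.
    if (retries = 0) {
        std::printf("out of retries\n");
    }
    std::printf("retries = %d\n", retries);
    return 0;
}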

Add some more extra redundancy again

It’s the season for coughs and sniffles, and last week I took my turn. I went to bed one night with a stuffy nose, and it got me thinking about software.

What’s the connection between sniffles and software, you ask?

Let’s talk redundancy. It’s a familiar technique in software design, but I believe we compartmentalize it too much under the special topic of “high availability”, as if we only need to pay attention when that’s an explicit requirement.

Redundancy can be a big deal. Image credit: ydant (Flickr)

Redundancy in nature

Mother Nature’s use of redundancy is so pervasive that we may not even realize it’s at work. We could learn a thing or two from how she weaves it–seamlessly, consistently, tenaciously–into the tapestry of life.

Redundancy had everything to do with the fact that I didn’t asphyxiate as I slept with a cold. People have more than one sinus, so sleeping with a few of them plugged up isn’t life-threatening. If nose breathing isn’t an option, we can always open our mouths. We have two lungs, not one, and each consists of huge numbers of alveoli, each doing part of the work of exchanging oxygen and carbon dioxide.
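
The same instinct translates directly into code. As a minimal sketch (the function names and data sources here are hypothetical, not from the original post), redundancy can be as simple as having more than one place to get an answer:

#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical data source; for illustration it fails for anything but "backup".
std::string fetch_config(const std::string& source) {
    if (source != "backup") throw std::runtime_error(source + " unreachable");
    return "timeout=30";
}

// Try each configured source in turn; give up only when all of them fail.
// One plugged-up source is an inconvenience, not an outage.
std::string fetch_config_redundantly(const std::vector<std::string>& sources) {
    for (const auto& source : sources) {
        try {
            return fetch_config(source);  // first healthy source wins
        } catch (const std::exception&) {
            // Fall through to the next source.
        }
    }
    throw std::runtime_error("all config sources failed");
}

int main() {
    std::cout << fetch_config_redundantly({"primary", "backup"}) << "\n";
    return 0;
}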

All I Really Need To Know I Didn’t Learn In Compugarten

I’m glad newly minted software engineers are exposed to data structures, compilers, concurrency, graph theory, assembly language, and the other goodies that constitute a computer science curriculum. All that stuff is important.

But it’s not enough.

Not all classroom material for CS folks should be technical. Photo credit: uniinnsbruck (Flickr).

Since I’m halfway to curmudgeon-hood, I frequently find myself lamenting educational blind spots in the young. I’ve even toyed with the idea of teaching at the nearest university, some day when I Have More Time™. If academia would take me, my lesson plans might cover some of the following topics:

Why Exceptions Aren’t Enough

(This post is a logical sequel to my earlier musings about having a coherent strategy to handle problems.)

Back in the dark ages, programmers wrote functions that returned numeric errors:

if (prepare() == SUCCESS) {
  doIt();
}

This approach has the virtue of being simple and fast. We could switch based on the error code. A “feature” of our apps was that our users could google an error code to see if they had company:

Image credit: xkcd.com

However, as we wrote code, we sometimes forgot to check errors, or tell users about them:

prepare();
doIt();

Admit it; you’ve written code like this. So have I. The mechanism lets a caller be irresponsible and ignore the signal the called function sends. Not good. Even if you are being responsible, the set of possible return values is nearly unbounded, and you get subtle downstream bugs when a called function adds a new return value that an existing caller’s switch statement doesn’t handle.
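
Here’s a minimal sketch of that failure mode (prepare(), doIt(), and SUCCESS echo the snippet above; the other codes are hypothetical). The caller’s switch was written before NOT_LICENSED existed, so the new code falls into a generic default and the user gets a useless message:

#include <cstdio>

enum PrepareResult { SUCCESS, OUT_OF_MEMORY, DISK_FULL, NOT_LICENSED /* added later */ };

PrepareResult prepare() { return NOT_LICENSED; }  // stub for illustration
void doIt() { std::puts("doing it"); }

int main() {
    switch (prepare()) {
        case SUCCESS:
            doIt();
            break;
        case OUT_OF_MEMORY:
        case DISK_FULL:
            std::puts("error: out of memory or disk full; see the manual");
            break;
        default:
            // The new NOT_LICENSED value lands here, unhandled and unexplained.
            std::puts("unknown error");
            break;
    }
    return 0;
}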

Another problem with this approach to errors…

Good Code Plans for Problems

(Another post in my “What is ‘Good Code’?” series…)

What should you do when software doesn’t work the way you expect?

Surprising behavior. Photo credit: epicfail.com.

You have to have a plan. The plan could bring one (or several) of the following strategies to bear:

  • Reject bad input as early as possible using preconditions.
  • Get garbage in, put garbage out.
  • Throw an exception and make it someone else’s problem.
  • Return a meaningful error.
  • Log the problem.

These choices are not equally good, IMO, and there are times when each is more or less appropriate. Perhaps I’ll blog about that in another post…
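
To make the first strategy concrete (combined here with the third, since a violated precondition has to be reported somehow), here is a minimal sketch; parse_percentage and its rules are hypothetical:

#include <cstdio>
#include <stdexcept>
#include <string>

// Reject bad input at the boundary, before it can propagate any further.
double parse_percentage(const std::string& text) {
    if (text.empty()) {
        throw std::invalid_argument("percentage must not be empty");
    }
    double value = std::stod(text);  // itself throws on non-numeric input
    if (value < 0.0 || value > 100.0) {
        throw std::invalid_argument("percentage must be between 0 and 100");
    }
    return value;
}

int main() {
    try {
        parse_percentage("250");
    } catch (const std::invalid_argument& e) {
        // The problem surfaces here, right where the bad input entered.
        std::fprintf(stderr, "rejected: %s\n", e.what());
    }
    return 0;
}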

Regardless of which strategy or strategies you pick, the overriding rule is: develop, announce, and execute a specific plan for handling problems.

This rule applies at all levels of code — low-level algorithms, modules, applications, entire software ecosystems (see my post about how software is like biology). The plan can be (perhaps should be) different in different places. But just as the actions of dozens of squads or platoons might need to be coordinated to take a hill, the individual choices you make about micro problem-handling should contribute to a cohesive whole at the macro level.

Notice that I’ve talked about “problems,” not “exceptions” or “errors.” Problems certainly include exceptions or errors, but using either of those narrower terms tends to confine your thinking. Exceptions are a nifty mechanism, but they don’t propagate across process boundaries (at least, not without some careful planning). Sometimes a glut of warnings is a serious problem, even if no individual event rises to the level of an “error.”
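
As a minimal sketch of that last point about boundaries (the handler and its messages are hypothetical): at a process boundary an exception can travel no further, so the plan has to say how it becomes something the other process can see, such as an exit code or a status field:

#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical worker that may throw.
std::string handle_request(const std::string& request) {
    if (request.empty()) throw std::invalid_argument("empty request");
    return "ok: " + request;
}

int main() {
    try {
        std::cout << handle_request("") << "\n";
        return 0;
    } catch (const std::exception& e) {
        // The exception stops here; the exit code is the only signal
        // the calling process will ever receive.
        std::cerr << "request failed: " << e.what() << "\n";
        return 1;
    }
}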

Action Item

Evaluate the plan(s) for problem-handling in one corner of your code. Is the plan explicit and understood by all users and maintainers? Is it implemented consistently? Is it tested? Pick one thing you can do to improve the plan.