3 reasons to prefer references over pointers (C++)

I still remember what it was like, as a C programmer, to be introduced to the newfangled concept of references in C++. I thought: “This is dumb. It’s just another way to use pointers. More syntactic sugar for no good reason…”

For a long time, I thought of pointers vs. references as a stylistic choice, and I’ve run into lots of old C pros who feel the same. (The debate on this comment stream is typical.) If you’re one of them, let me see if I can explain why I now think I was wrong, and maybe convince you to use references where it makes sense. I won’t try to enumerate every reason–just hit a few highlights.

image credit: xkcd.com

1. References have clearer semantics

NULL is a perfectly valid value for a pointer, but you have to do some headstands to create a reference to NULL. Because of these headstands, and because testing a reference for NULL-ness is a bit arcane, you can assume that references are not expected to be NULL. Consider this function prototype, typical of so much “C++” code written by folks with a C mindset:

void setClient(IClient * client);

With only that line, you don’t know very much. Will client’s state change during the call? Is it valid to pass NULL? In the body of the function, will client ever be repointed at something other than what it referred to on entry?

Veteran C programmers recognize that the semantics are unclear, so they come up with doc conventions to plug the gap, and they check assumptions at the top of the function:

/**
 * @param client IN, cannot be NULL
 */
void setClient(IClient * client) {
    if (client != NULL) {
        // ...do something with client
    }
}
This works, but why depend on a comment and on a programmer’s inclination to read and obey it, when you can enforce your intentions at compile time and write less code, too?
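For contrast, here is a minimal sketch of the reference-based version. (The connect() member is a made-up stand-in, since the original IClient interface isn’t shown; the point is the signature.)

struct IClient {
    void connect() {}  // made-up member, purely for illustration
};

void setClient(IClient & client) {
    client.connect();  // no NULL check needed: a reference is assumed to refer to a real object
}

int main() {
    IClient c;
    setClient(c);           // callers must pass an existing object
    // setClient(nullptr);  // won't even compile
}

The signature alone now answers the “can it be NULL?” question, and if you also want to promise that the client’s state won’t change during the call, const IClient & says so in code rather than in a comment.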

Are You Losing Enough Battles?

Portrait of McClellan. Image credit: Wikimedia Commons.

General George B. McClellan was a brilliant planner, but his overly cautious style may have tacked years onto the U.S. Civil War. Lincoln became frustrated, commenting with devastating wit that “McClellan is always almost ready to fight.” Eventually McClellan’s risk aversion forced Lincoln to relieve him of command, though not before Lincoln had sent a telegram that read, “If General McClellan isn’t going to use his army, I’d like to borrow it for a time.”

Contrast Colin Powell, who recommends: “Once information is in the 40% to 70% [certainty] range, go with your gut.”

I don’t recommend that you take stupid risks, that you make no effort to gather data, or that you spend your political capital carelessly. But I do recommend that you follow Powell’s example, not McClellan’s. To quote Powell again, “You don’t know what you can get away with until you try.”

Thomas Edison tried 1000 times to invent a light bulb before he succeeded. Why, in software, do we expect to get our designs right on the first attempt? I submit that if you aren’t losing a battle now and then–if none of your experiments ever fail–you are not working smart enough or courageously enough.

If you don’t lose the occasional battle, you will never win the war.

Losing an occasional battle keeps us humble. It means we’re grounded in reality rather than ivory tower imagination. It means we value balance and pragmatism over theoretical perfection, and it helps build a healthy regard for the needs of other people.

Go try. A lost battle of the sort we fight with software is never an Antietam or Gettysburg.

“To live a creative life, we must lose our fear of being wrong.” – Joseph Chilton Pearce