Why you should use an IDE instead of vim or emacs

With a title like the one above, you may be expecting a rant from an IDE bigot. I know there are plenty of flame wars on this topic, on both sides, and if I raised your hackles (or whetted your appetite), I’m sorry.

This is not that kind of post. For one thing, I don’t take myself so seriously:

There’s a keystroke for that! Image credit: xkcd.com

What I’m hoping to do here is point out some subtleties to this debate that don’t get a lot of airtime, and explain to my supercharged-text-editor friends why I work the way I do. However, I also plan to write a companion to this post, explaining why you need to learn a tool in the vim/emacs category, and I’ll have plenty to say on that topic as well. Hopefully that buys me a few minutes of an open mind. :-)

From a distance

If you step back from the debate over IDEs vs. supercharged text editors, and squint to suppress the details, you’ll see that most exchanges on this topic look like this: Continue reading

A Comedy of Carelessness

Act I

The day after Thanksgiving I went on a long road trip to eastern Wyoming. Total driving time: about 7 hours, each way.

At a gas station about 2 hours from home, on the way back, my credit card was declined. Apparently, fraud prevention algorithms at my credit card company had decided that it was suspicious for me to use my credit card out of state. This was rather irritating, since I’d just driven 12 hours on this same highway, using this same credit card to fill up the car, out of state, 2 other times in the previous 24 hours.

Photo credit: herzogbr (Flickr)

I used an alternate card and finished my trip. When I got home, I called American Express to get the card unblocked. (The block didn’t just apply to gas stations in Wyoming–once suspicious, the company wouldn’t let me buy gas a block from my house, either.) I spent 5-10 minutes working my way through an automated phone queue. It asked for several pieces of info to prove I was, indeed, the card holder–all 16 digits of the card, the 3-digit code on the back, last 4 digits of my social security number, phenotype of my dog… I had to call back twice–once because a family member asked me a question while I was fishing my card out of the wallet, and the system lost patience waiting for a response, and once because I pushed the wrong key and had no way to backspace.

Finally, on my third attempt, I got to the concluding challenge in this interview with mindless software: “Enter the first four letters of your place of birth.”

I was flummoxed. Of course I know where I was born, but which place did they want? I didn’t know whether the question was soliciting a city name, a state name, a hospital name… I hesitated. I guessed that a city name was most likely the answer I’d provided to this security question in the past, but I could think of one or two accounts where a hospital or a state was what they wanted.

While I was biting my lip, the system rejected my answer, and I was back to square one. I called in again and remained silent until I got routed to a live operator. She unblocked the card and offered to give me a link to an app that I could download, to make it convenient for me to unblock my card whenever this happens in the future.

Well, sorry, American Express–despite your cheerful customer service representative, you did not make a happy customer with this experience. If you’re going to challenge me when I go out of state, Continue reading

Why we need try…finally, not just RAII

The claim has been made that because C++ supports RAII (Resource Acquisition Is Initialization), it doesn’t need try…finally. I think this is wrong. There is at least one use case for try…finally that’s very awkward to model with RAII, and another whose RAII workaround is so ugly it ought to be outlawed.

The awkward use case

What if what you want to do during a …finally block has nothing to do with freeing resources? For example, suppose you’re writing the next version of Guitar Hero, and you want to guarantee that when your avatar leaves the stage, the last thing she does is take a bow–even if the player interrupts the performance or an error occurs.

…finally, take a bow. Photo credit: gavinzac (Flickr)

Of course you can ensure this behavior with an RAII pattern, but it’s silly and artificial. Which of the following two snippets is cleaner and better expresses intent? Continue reading
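
The post’s two snippets are behind the jump, but here’s the flavor of the comparison: a minimal sketch of the RAII version, using placeholder names (Avatar, BowGuard) of my own rather than anything from the post. The commented-out lines at the end show how a hypothetical try…finally would express the same intent more directly.

#include <iostream>
#include <stdexcept>

// Placeholder game object -- the post's actual types are behind the jump.
struct Avatar {
    void perform() {
        // Simulate the player bailing out or an error mid-song.
        throw std::runtime_error("performance interrupted");
    }
    void takeABow() { std::cout << "Avatar takes a bow.\n"; }
};

// RAII guard: its destructor runs however the enclosing scope is exited,
// so the bow is guaranteed even when an exception unwinds the stack.
class BowGuard {
public:
    explicit BowGuard(Avatar& a) : avatar_(a) {}
    ~BowGuard() { avatar_.takeABow(); }
private:
    Avatar& avatar_;
};

int main() {
    Avatar avatar;
    try {
        BowGuard guard(avatar);  // the "finally" logic hides in a one-off class
        avatar.perform();
    } catch (const std::exception& e) {
        std::cout << "Caught: " << e.what() << "\n";
    }

    // What the same guarantee might look like if C++ had try...finally:
    //
    //   try     { avatar.perform(); }
    //   finally { avatar.takeABow(); }
}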

3 reasons to prefer references over pointers (C++)

I still remember what it was like, as a C programmer, to be introduced to the newfangled concept of references in C++. I thought: “This is dumb. It’s just another way to use pointers. More syntactic sugar for no good reason…”

For a long time, I thought of pointers vs. references as a stylistic choice, and I’ve run into lots of old C pros who feel the same. (The debate on this comment stream is typical.) If you’re one of them, let me see if I can explain why I now think I was wrong, and maybe convince you to use references where it makes sense. I won’t try to enumerate every reason–just hit a few highlights.

Image credit: xkcd.com

1. References have clearer semantics

NULL is a perfectly valid value for a pointer, but you have to do some headstands to create a reference to NULL. Because of these headstands, and because testing a reference for NULL-ness is a bit arcane, you can assume that references are not expected to be NULL. Consider this function prototype, typical of so much “C++” code written by folks with a C mindset:

void setClient(IClient * client);

With only that line, you don’t know very much. Will client’s state change during the call? Is it valid to pass NULL? In the body of the call, will client ever point to anything other than the value it had at the top of the function?

Veteran C programmers recognize that the semantics are unclear, so they come up with doc conventions to plug the gap, and they check assumptions at the top of the function:

/**
 * @param client IN, cannot be NULL
 */
void setClient(IClient * client) {
    if (client != NULL) {
        // ...do something with client
    }
}

This is fine, except why depend on a comment and a programmer’s inclination to read and obey, when you can enforce your intentions at compile time, and write less code, too? Continue reading
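
For contrast, here’s roughly what the reference-flavored signature looks like; IClient is the interface from the snippet above, and the body and call-site comment are placeholders of mine.

struct IClient { /* stand-in for the interface used above */ };

// "Cannot be NULL" is now expressed by the signature itself: the caller must
// have a real object in hand, so both the doc comment and the runtime check
// disappear.
void setClient(IClient & client) {
    // ...do something with client
}

// At the call site, passing a pointer now requires an explicit dereference,
// which makes the "I promise this isn't null" decision visible:
//
//   setClient(*clientPtr);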

The Scaling Fallacy

If X works for 1 ___ [minute | user | computer | customer | …], then 100X ought to work for 100, right? And 1000X for 1000?

Sorry, Charlie. No dice.

One of my favorite books, Universal Principles of Design, includes a fascinating discussion of our tendency to succumb to scaling fallacies. The book makes its case using the strength of ants and winged flight as examples.

Have you ever heard that an ant can lift many times its own weight–and that if one were the size of a human, it could hoist a car over its head with ease? The first part of that assertion is true, but the conclusion folks draw is completely bogus. Muscle strength scales with cross-sectional area (roughly the square of length), while weight scales with volume (the cube), so the strength-to-weight ratio only gets worse as a body grows. Exoskeletons cease to be a viable structure on which to anchor muscle and tissue at sizes much smaller than your average grown-up, and chitin is only about as tough as fingernails.

Tough little bugger–but not an Olympic champion at human scale. Image credit: D.A.Otee (Flickr)
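
To put rough numbers on that, here’s a back-of-the-envelope sketch of the square-cube problem; the ant and human lengths are assumptions of mine for illustration, not figures from the book.

#include <iostream>

int main() {
    // Square-cube law: scale a body up by a linear factor k.
    // Strength ~ muscle cross-section ~ k^2; weight ~ volume ~ k^3.
    const double antLength   = 0.005;  // meters (assumed, for illustration)
    const double humanLength = 1.75;   // meters (assumed, for illustration)
    const double k = humanLength / antLength;   // ~350x longer

    std::cout << "Linear scale factor:        ~" << k << "x\n";
    std::cout << "Strength (area) grows:      ~" << k * k << "x\n";
    std::cout << "Weight (volume) grows:      ~" << k * k * k << "x\n";
    std::cout << "Strength-to-weight shrinks: ~" << k << "x\n";
}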

I’d long understood the flaws in the big-ant-lifting-cars idea, but the flight example from the book was virgin territory for me.

Humans are familiar with birds and insects that fly. We know they have wings that beat the air. We naively assume that at much larger and much smaller scales, the same principles apply. But it turns out Continue reading