3 Commandments of Performance Optimization

In my experience, most programmers’ attitudes toward speed fall into one of two categories:

laissez-faire

Programmers with this mindset think about performance now and then, but it’s not a big focus. Occasionally they’re forced to tackle a problem because a particular design is too slow, a customer is unhappy, or new scaling requirements materialize. In such cases, they experiment until behavior improves, then go back to the work they really care about.

passionate

Programmers with this mindset have a hard time not thinking about performance. Every design they produce reflects elaborate consideration of how to minimize footprint and/or how to complete a task in the shortest possible time. (Note that those two priorities often conflict.) Programmers who are passionate about performance often feel that their laissez-faire counterparts are derelict in their duty.

I don’t think either of these extremes is healthy in all cases. I have seen programmers who chronically think about performance too late, creating large refactoring burdens and sabotaging their company’s success. Sometimes when you go from “make it work” to “make it fast” you find that all your original work is a waste, because a totally different design (even different tests, conceivably) is the only way forward; I wrote about this in “A Quibble with Martin’s ‘Optimize Later’ Notion”.

On the other hand, it is possible to be too passionate about performance; optimizing the performance of the dev team (by decreasing coding and testing time) is often a better business choice than optimizing execution speed in ways that make code more complex and harder to verify. I have encountered performance zealots who disqualified a perfectly good design on the grounds that it wasn’t performant enough in a use case that only two customers on the entire planet would ever care about. Not smart. As I’ve said many times, good code is balanced.

ThrustSSC, the first car to break the sound barrier. Sometimes speed is the ultimate criterion; most money, however, is made on cars with more modest performance requirements. Photo credit: cmglee (Wikimedia Commons)

Let’s assume you buy my criticism of the extremes, and you’re willing to apply the “it depends” doctrine.

How Sutter’s Wrong About const in C++ 11

Herb Sutter recently gave a talk about how the const keyword and the mutable keyword have subtle but profoundly different semantics in C++ 11. In a nutshell, he says that C++ 11 corrects the wishy-washy definition of const in C++ 98; const used to mean “logically constant,” but now it means thread-safe. And mutable now means thread-safe as well. His summary slide says:

const == mutable == thread safe (bitwise const or internally synchronized)

Editor’s note: Since this post was written, Herb has updated his slide. See Herb’s note in the comment stream below.

Now, I think Herb’s talk is quite informative, and I don’t dispute the core of what he was trying to convey. It’s a good insight, well worth the community’s attention. I learned something important; I recommend that you watch the talk. Using const well is an essential skill. But I think in his enthusiasm about the way the language has evolved to make semantics clearer, Herb does us a disservice by oversimplifying.

When Herb uses the C++ == operator to boil his point down to a pithy summary, he’s implying true equivalence; what’s on one side of the operator is, for all intents and purposes, identical to or indistinguishable from what’s on the other side. And while const and mutable and thread-safe are highly related concepts, they are not equivalent enough to each other for ==.

To understand why, answer the following question: Why would good code use const and/or mutable even if it’s single-threaded?

Ah. I imagine you nodding your head sagely. You see where I’m going, don’t you?

These two keywords don’t just define semantics for cross-thread access; they define the semantics a variable or object supports when accessed by various scopes (e.g., subroutines or code blocks) on the same thread. If you pass a const Widget & to a function, that function can’t call Widget::modifyState() even if it’s the only thread in the universe. If you declare an m_lazy_init member variable to be mutable, you are telling the compiler to let you change it in contexts where modification would normally be disallowed, such as from within a const member function, including on the same thread.
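To make that concrete, here’s a minimal single-threaded sketch; Widget, modifyState(), and m_lazy_init are just the hypothetical names from the paragraph above, fleshed out with an invented caching use case:

```cpp
#include <string>

class Widget {
public:
    void modifyState() { ++m_state; }  // non-const: mutates logical state

    // Lazily computes and caches a label. The method is const, so code
    // holding only a const view may call it; m_lazy_init is mutable, so
    // the cache can still be filled in on first use, all on one thread.
    const std::string& label() const {
        if (m_lazy_init.empty()) {
            m_lazy_init = "widget-" + std::to_string(m_state);
        }
        return m_lazy_init;
    }

private:
    int m_state = 0;
    mutable std::string m_lazy_init;  // a cache, not logical state
};

void printLabel(const Widget& w) {
    // w.modifyState();  // won’t compile: non-const member via const reference
    w.label();           // fine: const member function, even though it caches
}
```

Of course, under C++ 11 this unsynchronized cache is only legitimate as long as a Widget is never shared across threads; that is exactly the obligation Herb’s talk highlights. But notice that const and mutable are doing useful work here with no second thread in sight.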

So: const means unchangeable in whatever scope sees it as const (including many threads), which is why it also implies thread-safe (if all threads see const); mutable means safely changeable even through a const view, in one thread or many, which is why it, too, implies thread-safe (if all threads see const). In C++ 98, these semantics were a bit loose: you could use the keywords carelessly, cast away parts of their guarantees, and generally operate as a law unto yourself. In C++ 11 the semantics of const and mutable are explicit and exacting; the standard library demands thread-safe copy construction. As a result, their role in thread safety is clarified, and we all write better code. Mutexes, atomics, and certain kinds of queues are inherently safe to change from any thread; they deserve, and require, the mutable keyword.
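By way of illustration, here is a common shape for the internally synchronized case; the Catalog class and its members are my invention, not an example from Herb’s talk:

```cpp
#include <map>
#include <mutex>
#include <string>

class Catalog {
public:
    // A const member function that is also safe to call from many threads.
    // Locking mutates the mutex object itself, so the mutex must be mutable;
    // that mutation is precisely what keeps the logical state constant.
    std::string find(int id) const {
        std::lock_guard<std::mutex> lock(m_mutex);
        auto it = m_names.find(id);
        return it == m_names.end() ? std::string() : it->second;
    }

    void add(int id, std::string name) {  // non-const: a real modification
        std::lock_guard<std::mutex> lock(m_mutex);
        m_names[id] = std::move(name);
    }

private:
    mutable std::mutex m_mutex;           // synchronization, not logical state
    std::map<int, std::string> m_names;
};
```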

Instead of Herb’s final equation, I’d propose a Venn diagram:

The const and mutable keywords are not equivalent in C++ 11, but they do share guarantees about thread safety.