Ignoring warnings is a bad idea. At some level, we all know this. If we see a sign that says “Warning: Dangerous Undertow” at the beach, we pause (I hope!) and think twice before we get in the water.
Yet we sometimes get cavalier about warnings in software. Specifically, I have heard programmers describe compiler warnings as being less severe than errors–as if worrying about them is optional.
This is simply not true.
You have probably seen plenty of warnings that highlight serious problems; I know I have. And you’ve probably wrestled with annoying “errors” that tools should have fixed without bothering you–or suppressed in the first place.
What’s your intent?
In general, compiler warnings aren’t less severe than errors–they are simply more ambiguous. The compiler isn’t sure whether a signed/unsigned comparison is evidence of logic mistakes, or is perfectly harmless. So it warns you, and lets you decide.
Messy ways to answer
One sign that ambiguous intent is in play is that our usual method for eliminating warnings is to tweak code until the ambiguity is gone. If you’re assigning a 64-bit number to a 32-bit variable, and you explicitly cast the right-hand side to a 32-bit value, then there’s no longer any question about whether you intend the truncation. The compiler notices your tweak, and the warning disappears.
Some warnings aren’t susceptible to this approach. If the compiler warns you that you have unreachable code, you can delete it or #ifdef it, but there’s no way to tell the compiler you intend it that way. If you get a warning about a function being deprecated, you either have to stop calling the function, or live with the nag. If you want to avoid warnings about alignment and packing, you probably have to use #pragma directives.
Marks make it better
The marks that I’ve recently described provide a nice, uniform solution to this hodgepodge of warning-answering mechanisms. Since they’re evaluated at compile time, they can play the same role that #pragma does in some languages. Sophisticated attachment and propagation get you away from all the silly push/pop gyrations. They can attach to any portion of the code DOM–functions, variables, statements, code blocks, classes, packages, applications–and they can express arbitrary semantics, including answers to any question the compiler dreams up. One simple, clean technique across the board.
But there’s more…
However, I want to push our vision even further.
Imagine that you could address the compiler’s questions (aka warnings) in powerful new ways:
- Yes, it’s okay that I’m calling deprecated functions–but only on the old OS where I’m currently working, and only until we reach beta.
- I’m not worried about this exception as long as I have a unit test that proves we handle it in every caller in the call graph.
- Assume I know what I’m doing and build the binary anyway, because right now my goal is just to figure out what libraries I need on this new platform to make the port work. But don’t let me accidentally check in anything that hides warnings from others.
- Ignore all the reassuring answers (about warnings) that I gave you in the past; if you went back to your paranoid state, what would you ask me about?
- Automatically expire all my answers about warnings next week, after we deliver a prototype that exhibits stubs for key features.
- Warn about problem X, and get an answer from every developer who edits this module, individually.
- I want to warn about constructs that are not traditionally represented in code, such as impractical use cases, insecure features, ill-defined personas, and so forth.
In order to provide this sort of experience to developers, you don’t just need marks. You also need the ability to record answers about warnings outside the code itself, because different developers might have different answers in different circumstances, and because answers may need to expire or vary without the code changing.
But as soon as you answer warnings outside the code, you have a stability problem. Code changes; what was line 72 in your module yesterday might be line 93 today. It does no good to remember Fred’s answer to warning W-2046 on line 72 of moduleX, if line 72 might have a different meaning each time we compile.
I want to support this powerful approach to warnings in the intent programming language, so I’ve been pondering the problem of stable references to code. I think I have some satisfying answers that make this vision for warnings achievable. I’ll blog about code as hypertext soon. In the meantime, just assume it’s possible–and not onerous for the developer.
Imagine, then, that in addition to baking answers to warnings directly into the code with marks, developers can layer additional answers like masks or filters. The team that’s porting to Windows might have one shared mask; Fred and Sally might have their own personal filters on top of the team one. These filters can be checked in with code, and the compiler smartly decides what applies in the active context.
You can achieve something a bit like this today, if you create custom projects for each unique perspective on the same codebase, or if you make projects depend on environment variables to adjust their behavior. However, this is a maintenance nightmare, and you’re working with a very blunt instrument, against the grain of the compiler. I’ve never seen it work well.
What I’m proposing is different because flexible, context-sensitive warning filters would be a first-class feature of the compiler. The expressiveness and propagation of marks, the code DOM and stable hyperlinks, and the ability to express sophisticated answers all combine synergistically to help warnings evolve wings.
I think those wings could lift the quality and artistry of our software.
I can think of several reasons why you might want to keep unreachable code around. One is that you’re making a temporary change, and you don’t want the hassle of deleting something only to reinsert it later. Another is that you may want to use dead code to force linkage. Or maybe you just want to be able to explore an alternate path by resetting EIP while you debug. A final reason is that you want to prove the code compiles (e.g., because it’s quoted in documentation), even though you will never run it. This is why you disable gtest methods rather than #ifdef them; continuing to compile them prevents staleness.