Decoupling Interfaces as Versions Evolve, Part 2

This is part 2 of a series. You can read part 1 and part 3 as well.

Alternative Approaches to Interface Versioning

Lublinsky wrote a great article about interface versioning a while back (see page 38 of this issue of Microsoft’s Architecture Journal). This describes the state-of-the-art thinking about interface versioning in the web services world. Essentially he recommends versioning each method in an interface separately. (Sounds a lot like Win32’s approach of adding …Ex to every function when the original behavior no longer sufficed…) This approach is based on the insight that many parts of an interface will be stable for long periods of time, and that the most common kind of change to an interface is an addition. By increasing the granularity of the versioning, incompatibilities are less likely to arise for spurious reasons. This solves the classic problem where a .wsdl describes a dozen classes, a client uses only the first three, and yet the client breaks when something in the fourth class changes. However, it proliferates .wsdls and points of presence.
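Lublinsky's per-operation idea can be sketched in a few lines. This is my own illustrative Python sketch, not his WSDL mechanism; the registry, operation names, and version numbers are all invented for the example. The point is that each operation carries its own version, so a client pinned to one operation is untouched when a different operation's contract changes.

```python
# Hypothetical sketch of per-operation versioning: each (name, version)
# pair is registered independently, so a client that only uses GetQuote
# is unaffected when PlaceOrder's contract moves to a new version.

class ServiceRegistry:
    def __init__(self):
        self._ops = {}  # (name, version) -> callable

    def register(self, name, version, fn):
        self._ops[(name, version)] = fn

    def invoke(self, name, version, *args):
        try:
            return self._ops[(name, version)](*args)
        except KeyError:
            raise LookupError(f"{name} v{version} not offered")

registry = ServiceRegistry()
registry.register("GetQuote", 1, lambda symbol: 42.0)
registry.register("PlaceOrder", 2, lambda symbol, qty: "order-id")

# A client still bound to GetQuote v1 keeps working even though
# PlaceOrder has since moved on to v2.
price = registry.invoke("GetQuote", 1, "ACME")
```

Notice the trade-off the article mentions: the versioning is finer-grained, but every operation is now its own little point of presence to track.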

Another important discussion of this issue is “A SOA Versioning Covenant”, by Rocky Lhotka. This is an excellent review of the problem. (Note that the Lublinsky article, which is newer, discusses the covenant idea briefly.) Essentially Lhotka recommends that all objects accept messages (parameter lists to functions, recast as documents or self-contained packages of information); since each logical function will always have the signature DoSomething(message), the need to version interfaces goes away as long as changes just involve new message types. Instead, the messages are versioned using schema capabilities. Lhotka further recommends changing from contract-oriented thinking (X is required) to a covenant (If you do X, I will do Y). This approach has some of the same benefits as the invention, but it still relies on versioning a full interface rather than the subset someone wishes to use, and the difficulty of managing versions of messages is ignored.
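The message-based shape Lhotka describes can be sketched like this. Again this is my own toy illustration, not his code; the message classes and field names are invented. Every logical function has the signature DoSomething(message), and evolution happens in the message schema (here, a new optional field) rather than in the interface.

```python
# Illustrative sketch of the covenant/message idea: the interface is a
# single DoSomething(message) entry point, and versioning lives in the
# message types. Names and fields are hypothetical.

from dataclasses import dataclass

@dataclass
class OrderMessageV1:
    symbol: str
    qty: int

@dataclass
class OrderMessageV2(OrderMessageV1):
    currency: str = "USD"   # new optional field; V1 senders still work

def do_something(message):
    # Covenant: "if you send me an order message I understand,
    # I will return a confirmation."
    if isinstance(message, OrderMessageV1):
        return f"confirmed {message.qty} {message.symbol}"
    raise TypeError("unrecognized message")

reply = do_something(OrderMessageV2("ACME", 5))
```

The sketch also shows where the approach gets hard: the dispatch and compatibility logic that the interface used to express has simply moved into the message-type hierarchy, which is the version-management burden the article says gets ignored.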

Although both of these treatments (and the sources they cite in their own reviews of the problem) are nifty, they leave me unsatisfied. The bottom line is that I want to evolve interfaces whenever it makes sense, without worrying about breaking people — and I also want people who use my interface to be able to do so with confidence.

Tune in to part 3 of this series for my proposed solution.

Decoupling Interfaces As Versions Evolve, Part 1

This is part 1 of a series. You can read part 2 and part 3 as well.

The Goal

Software interfaces were invented to promote encapsulation and loose coupling. In theory this enables developing and deploying without undue interdependence, which is a very good thing.

“Why the ‘in theory’ caveat?”, I hear you saying. “Surely interfaces deliver on their promise…”

Well, yes and no. Interfaces certainly provide a nifty mechanism for information hiding if your scope of concern is a tidy programming problem over the horizon of one implementation. That’s just the sort of scenario that CS academics love to use to teach their acolytes.

But most commercial software development is done in a messier world. Versioning interfaces can cause enough headaches to water down their benefits considerably, and mainstream software development tools have not done enough to address the issue.

Immutability and Versioning

Current thinking on interface versioning calls for an interface to be immutable; each change to its semantics (as manifest in an .idl, a .h, or a .wsdl, for example) should cause a change to the interface number/name/guid. Consumers of an interface bind to a specific interface version to allow compile-time validation of interface usage. Modern IDEs typically leverage early binding to provide extra goodies like autocomplete, UML class diagrams, and doc comment generation.

This immutability is less than perfect. In non-ivory-tower development, it is common to alter the semantics of an interface dozens or hundreds of times during a given dev cycle as a team converges on the final implementation. Bob adds the DoNothing() and DoSomething() functions to IWidget on day 1, then realizes a week later that he also needs DoSomethingElse() for a corner case he hadn’t fully explored. On week 23, he decides to collapse both DoSomething functions into DoSomethingEx(), because by then the differences between them feel like they should be generalized.
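Under strict immutability, Bob's churn would have to look something like the following (a Python sketch using abstract base classes to stand in for .idl definitions; the week numbers come from the story above, everything else is illustrative):

```python
from abc import ABC, abstractmethod

# Day 1: the original contract.
class IWidget(ABC):
    @abstractmethod
    def DoNothing(self): ...

    @abstractmethod
    def DoSomething(self): ...

# Week 23: the DoSomething variants are collapsed into one generalized
# call. Under strict immutability this must be a *new* interface
# (IWidget2), not an in-place edit of IWidget -- and every such churn
# during the dev cycle would mint another one.
class IWidget2(ABC):
    @abstractmethod
    def DoNothing(self): ...

    @abstractmethod
    def DoSomethingEx(self, mode): ...
```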

If all code were written by Bob as part of a single cohesive deliverable, this evolution would be uninteresting. But suppose that on week 15, Sally gets a snapshot of Bob’s .idl, and begins to build a new component to interact with IWidget. It is critical that Sally’s expectations about IWidget semantics line up with Bob’s.

What makes this ugly is that in today’s highly distributed, highly outsourced, complex projects, Bob may not actually know that Sally is using his .idl. He may think it’s okay to keep cheating on interface immutability. Either Bob has to be obsessive about versioning his interface with each change — ending up with IWidget497 by the end of the project — or else Sally is forced to communicate with Bob that she is using his interface and needs it to be stable. Neither alternative is very attractive.

Evolution Isn’t Always Forward

Best practice is usually to require that IWidget5 be a strict superset of IWidget4. Despite enthusiastic lip service, practical considerations force us to cheat here as well. A security vulnerability forces us to start encrypting the string we return from a function. A change to the underlying OS forces us to throw an exception on a function that used to be exceptionless. Over time the assumptions about semantics attached to an interface accumulate enough drift that it is impractical to ever treat an IWidget9 as an instance of IWidget2. How does Sally know when that threshold has been passed by Bob?

And What About Deployment and Upgrade?

If you want to tease out mistakes in interface versioning, just poke at the deployment and upgrade scenarios you’re going to support. Do you require that a central manager be at least as new as all the components it’s managing? Or worse, do you require the whole system to be at the same revision level? In theory, this should be unnecessary; producers (managees) are free to expose functionality in new interfaces that older consumers (managers) don’t know about, and consumers can progressively downcast until they find a mutually supported interface, so it ought to be possible to have free variation in versions. However, in practice in rich, interdependent fabrics of services, the same actor may simultaneously provide one interface while consuming another, and the intermingled dependencies often cause ISVs to force broader upgrades than a customer would like. My favorite recent, real-world example of deployment problems is the infamous IE7 dwmapi.dll problem (see also this useful discussion of the problem).
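The "progressively downcast" negotiation mentioned above can be made concrete. In COM this is what QueryInterface is for; the sketch below fakes it in Python with isinstance checks, and the interface lineage is hypothetical:

```python
# Hypothetical interface lineage; in COM this negotiation would go
# through QueryInterface, faked here with isinstance checks.
class IWidget:
    def DoSomething(self):
        return "basic"

class IWidget2(IWidget):
    def DoSomethingEx(self, mode):
        return f"ex:{mode}"

class IWidget3(IWidget2):
    def DoSomethingNew(self):
        return "new"

def negotiate(obj):
    # An older (or newer) consumer probes from the newest interface it
    # knows about down to the oldest, settling on the best mutual match.
    for iface in (IWidget3, IWidget2, IWidget):
        if isinstance(obj, iface):
            return iface
    return None

# A manager built against v3 talks to a component that only got to v2.
best = negotiate(IWidget2())
```

In theory this lets versions vary freely across the system; the article's point is that in practice the negotiation breaks down once the interfaces stop being strict supersets and the same actor is both producer and consumer.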

What Can Be Done?

So if interfaces don’t provide as much separation of concerns as we wish, how do we cope?

Well, one alternative to traditional interface versioning is to do “late binding”. Only the most general characteristics of language syntax are validated when code is written; whether a particular object has a particular property of a particular data type is not tested until code actually executes. This is how interpreted languages like Python, PHP, and JavaScript work. It provides tremendous flexibility, and it is often the solution of choice in the free-wheeling, ad hoc universe of general web apps. In many cases I’m a big fan. I love the way RESTful interfaces support ad hoc connections, for example.

But late binding is not a panacea. For one thing, it typically means that development tools can’t help you validate your usage very much; you end up writing and maintaining a lot of manual glue. For another, QA teams often push back against late-bound solutions because they increase the testing burden. Where a compiler can effectively validate millions of potential code paths at compile time for early-bound code, testers struggle to achieve similar coverage. Result: bugs discovered later in the process. There is also a cost in performance and robustness that typically deters ISVs building standard enterprise or consumer applications.

There are subtler costs as well. When you late bind, you still have to use the interface you ultimately invoke, and the knowledge about how to use it has to be baked into the code ahead of time. It may not be baked in the same way — maybe you use reflection or GetProcAddress to find the DoSomething function you’re after — but to late bind an interface, you have to early bind all the logic that handles the cases where GetProcAddress fails.
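In Python terms the point looks like this. Here getattr plays the role of GetProcAddress, and the widget classes are invented for the example; note that all the fallback branches are written, reviewed, and shipped ahead of time even though the lookup itself is late:

```python
# The widget classes are hypothetical stand-ins for two generations
# of a component.
class OldWidget:
    def DoSomething(self):
        return "basic"

class NewWidget:
    def DoSomethingEx(self, mode="default"):
        return f"fancy:{mode}"

def invoke(widget):
    # Late-bound lookup, analogous to GetProcAddress...
    fn = getattr(widget, "DoSomethingEx", None)
    if fn is not None:
        return fn()
    # ...but the logic handling the lookup failure is early-bound:
    # it was baked into this code long before it ever ran.
    fallback = getattr(widget, "DoSomething", None)
    if fallback is not None:
        return fallback()
    raise AttributeError("no supported entry point")

result = invoke(NewWidget())
```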

Another disadvantage of late binding is that you introduce a new dependency — this time on the supporting infrastructure. Maybe you’re using a great SOAP toolkit for PHP and that toolkit makes it easy to late bind to a web service. But now you depend on your SOAP toolkit. What if another actor in your system doesn’t have the same version of the toolkit?

What we’d like is a mechanism that combines the predictability and robust tool support of the traditional approach to interface versioning with the flexibility of late binding to get the best of both worlds. In part 2 of this series, I’ll look at some approaches to that goal, and discuss why they still leave me unsatisfied. In part 3, I’ll offer my own solution.