What should code look like when we squint at it?

It’s the start of another school year, and my seventh-grade son is learning algebra. As I sat beside him to coach him through some homework the other night, I shared my favorite bit of wisdom about how to make math problems—even complex ones—simple and error-free:

Write the progression from known to unknown, one step at a time.

In my experience, the surest recipe for disaster is to short-circuit this rule. Collapse a few steps in your head in the name of efficiency, and you’ll forget a minus sign, or you’ll group incorrectly, or you’ll lose track of an exponent or an absolute value—and you’ll end up with a mess. You’ll have to debug your solution by slogging back through the problem from the beginning until you figure out where you went wrong.
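
To make that concrete, here is the rule applied to a simple equation of my own invention (not from his homework), written as a progression from known to unknown, one step at a time:

```latex
\begin{align*}
3(x - 2) + 4 &= 13 && \text{the known: the original equation} \\
3(x - 2)     &= 9  && \text{subtract 4 from both sides} \\
x - 2        &= 3  && \text{divide both sides by 3} \\
x            &= 5  && \text{add 2 to both sides: the unknown}
\end{align*}
```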

It’s interesting, and maybe even profound, how nicely this piece of advice maps onto the design principle of progressive disclosure. The human mind is wired to perceive in broad outlines first, and then to clarify them gradually, a few details at a time.

Don’t believe me? Try a short experiment: draw this fractal.

Fractals embody the principle of progressive disclosure. Image credit: Fábio Pinheiro (Flickr).

I’ll bet that instead of laying down every pixel, like a printer, you immediately produce a simplification that captures the general shape as lines, with a lot of detail suppressed. You did this as a kid, when you drew stick figures and triangle+half-circle sailboats.

Artists sometimes squint to blur out what they don’t want to see, leaving only general patterns and colors. But coders never do, because we don’t expect code to work that way.
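
Here is a rough sketch of my own, in Python, of what “squintable” code might look like (the function names and data are invented for illustration): the top-level function reads as a broad outline, and each helper discloses one more layer of detail.

```python
def publish_report(raw_records):
    """The top level is what you see when you squint: just the outline."""
    records = clean(raw_records)
    summary = summarize(records)
    return render(summary)


def clean(raw_records):
    """One layer down: drop empty records and trim whitespace."""
    return [r.strip() for r in raw_records if r and r.strip()]


def summarize(records):
    """Count how many times each record appears."""
    counts = {}
    for record in records:
        counts[record] = counts.get(record, 0) + 1
    return counts


def render(summary):
    """Format the counts as one line per record."""
    return "\n".join(f"{name}: {n}" for name, n in sorted(summary.items()))


print(publish_report(["apple ", "pear", "", "apple"]))
# apple: 2
# pear: 1
```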

My First Tangle With the Tower of Babel

A while back, I was reading the blog of somebody smart (can’t remember who), and a comment jumped out at me: “If you really want a black belt in computer science, try writing a programming language. The depth and breadth of experience you get when you invent Python or Lisp or Smalltalk or C++ or C#, and implement its ecosystem rather than just coding a parser for a CS class, gives you a wisdom and an education that are rare and precious.” (I’m paraphrasing, but that’s the gist of it.)

Sounds good, I thought. I think I’ll give it a shot.

“Confusion of Tongues”, by Gustave Doré. The Tower of Babel story resonates beyond its moral lesson. Image credit: Wikimedia Commons.

I began doing research and taking notes. I thought hard about which features I liked and which I detested in programming languages. I read critiques of and tributes to various languages by their detractors and fans. I identified pieces of syntactic sugar that I wanted to support. I took a wad of existing code and tried to rewrite it in the language I was drafting. I picked conventions for filenames. I played with yacc and ANTLR and experimented with definitions of context-free grammars.

And then I stalled.

It wasn’t good enough.

My new language was nifty. It combined a lot of the best features of my favorite languages: closures, list comprehensions, lambdas, static if, robust type inference, unified function call syntax, with blocks, variadic templates, mixins, nullable primitives, built-in support for design by contract, and more. I actually believed (perhaps naively) that I knew how to implement a good portion of these ideas in a compiler.
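
For readers who don’t know every item on that list by name, here is a small sketch of my own showing a few of them in Python: closures, lambdas, list comprehensions, and a hand-rolled stand-in for design by contract. (Features like static if and variadic templates come from languages such as D and C++ and have no direct Python equivalent.)

```python
def make_multiplier(factor):
    # Closure: `multiply` captures `factor` from the enclosing scope.
    def multiply(x):
        return x * factor
    return multiply


def require(condition, message):
    # A crude stand-in for built-in design-by-contract support.
    if not condition:
        raise ValueError(message)


double = make_multiplier(2)             # a closure in action
squares = [n * n for n in range(5)]     # a list comprehension
is_even = lambda n: n % 2 == 0          # a lambda

require(all(s >= 0 for s in squares), "squares must be non-negative")
print(double(21), squares, is_even(4))  # 42 [0, 1, 4, 9, 16] True
```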

But I began to intuit that nifty != great. And the longer and harder I thought about it, the more convinced I became.
