My First Tangle With the Tower of Babel

A while back, I was reading the blog of somebody smart (can’t remember who), and a comment jumped out at me: “If you really want a black belt in computer science, try writing a programming language. The depth and breadth of experience you get when you invent Python or Lisp or Smalltalk or C++ or C#–and implement its ecosystem, not just code a parser for a CS class–gives you a wisdom and education that’s rare and precious.” (I’m paraphrasing here, but that’s the gist of it.)

Sounds good, I thought. I think I’ll give it a shot.

“Confusion of Tongues”, by Gustave Doré. The Tower of Babel resonates beyond moral history. Image credit: Wikimedia Commons.

I began doing research and taking notes. I thought hard about which features I liked and detested in programming languages. I read critiques and tributes to various languages by detractors and fans. I identified pieces of syntactic sugar that I wanted to support. I took a wad of existing code and tried to rewrite it using the language I was drafting. I picked some conventions for filenames. I played with yacc and ANTLR and experimented with definitions of context-free grammars.
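
None of those grammar experiments survive in this post, but to give a flavor of that stage, here is a toy context-free grammar for arithmetic expressions, sketched as a hand-rolled recursive-descent parser in Python rather than as a yacc or ANTLR grammar so it stands alone. It is purely illustrative; nothing here is syntax or code from the language I was drafting.

```python
# A toy context-free grammar for arithmetic expressions, in rough BNF:
#
#   expr   -> term (('+' | '-') term)*
#   term   -> factor (('*' | '/') factor)*
#   factor -> NUMBER | '(' expr ')'
#
# Hand-rolled recursive descent keeps the sketch self-contained;
# yacc or ANTLR would generate this machinery from the rules above.

import re

TOKEN = re.compile(r"\s*(?:(\d+)|(\S))")  # integers or single-char operators

def tokenize(text):
    tokens = []
    for number, op in TOKEN.findall(text):
        tokens.append(("NUM", int(number)) if number else ("OP", op))
    tokens.append(("EOF", None))
    return tokens

class Parser:
    def __init__(self, text):
        self.tokens = tokenize(text)
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos]

    def eat(self, expected=None):
        kind, value = self.tokens[self.pos]
        if expected is not None and value != expected:
            raise SyntaxError(f"expected {expected!r}, got {value!r}")
        self.pos += 1
        return value

    def expr(self):
        value = self.term()
        while self.peek()[1] in ("+", "-"):
            op = self.eat()
            value = value + self.term() if op == "+" else value - self.term()
        return value

    def term(self):
        value = self.factor()
        while self.peek()[1] in ("*", "/"):
            op = self.eat()
            value = value * self.factor() if op == "*" else value / self.factor()
        return value

    def factor(self):
        kind, value = self.peek()
        if kind == "NUM":
            return self.eat()
        self.eat("(")
        value = self.expr()
        self.eat(")")
        return value

print(Parser("2 * (3 + 4)").expr())  # 14
```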

And then I stalled.

It wasn’t good enough.

My new language was nifty. It combined a lot of the best features of my favorite languages: closures, list comprehensions, lambdas, static if, robust type inference, unified function call syntax, with blocks, variadic templates, mixins, nullable primitives, built-in support for design by contract, and more. I actually believed (perhaps naively) that I knew how to implement a good portion of these ideas in a compiler.
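
A few of those features already exist in mainstream languages, so to pin the terminology down, here is what closures, lambdas, and list comprehensions look like in Python. This is purely illustrative; it is not the syntax of my draft language, and the other items on the list (static if, variadic templates, mixins, design by contract, and so on) have no direct Python equivalent.

```python
# Three of the features listed above, shown in Python purely to pin down
# the terminology; this is not the draft language's syntax.

# Closure: make_counter returns a function that captures local state.
def make_counter(start=0):
    count = start
    def bump():
        nonlocal count
        count += 1
        return count
    return bump

counter = make_counter(10)
print(counter(), counter())                  # 11 12

# Lambda: a small anonymous function, here used as a sort key.
words = ["banana", "fig", "cherry"]
print(sorted(words, key=lambda w: len(w)))   # ['fig', 'banana', 'cherry']

# List comprehension: build a list declaratively instead of with a loop.
squares = [n * n for n in range(5) if n % 2 == 0]
print(squares)                               # [0, 4, 16]
```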

But I began to intuit that nifty != great. And the longer and harder I thought about it, the more convinced I became.

Continue reading

Big Crud Isn’t Big Data

“Big Data” is another one of those buzzwords that seems to be everywhere these days. We hear stories regularly about how fast the world’s data grows and how big it’s going to be by 20xx. Vendors then reason that we should buy their wares to cope. This infographic is typical:

The “Data Never Sleeps 2.0” infographic.

I have several deep professional connections to big data[1], going back decades, so when I say I think a lot of it is manufactured silliness, I’m hoping you’ll pause before laughing me off.

The fact is, most of the “data” that’s exploding is not hard-won intellectual treasure for the ages; it’s marginal stuff like the viewing history on Fred Flintstone’s deleted Netflix account. More than big data, we’re experiencing a “big crud” wave, because we’re pack rats. This comic has it right:

Continue reading

Adios to “computer programming”

Have you noticed how seldom people put the modifier “computer” in front of “programming” nowadays?

This may be because our formerly esoteric discipline is now so mainstream that it needs no elaboration.

It may be that we’re all growing lazy.

But I think there’s something deeper.

“Software Engineering” isn’t good enough

The set of things besides traditional computers that need to be programmed is growing by leaps and bounds: TV remotes, holiday light displays, e-readers, smartphones and tablets, Arduino boards, fuel injectors, point-of-sale terminals, MRI machines, 3D printers, LEGO Mindstorms robots, networks (software-defined networking / SDN), storage (software-defined storage / SDS), nanobots, social networks, clouds…

Nanobots replicating in a petri dish. Is it fair to say we “program” nanobots? Photo credit: PhOtOnQuAnTiQuE (Flickr)

“Right,” I hear you say. “That’s why I like the term software engineering. Wherever you see programming, it’s software that’s in play. And engineering implies a more sophisticated approach than mere hackish programming.”

Okay.

I think that’s true, but it misses the really big insight.

Continue reading