Learned Helplessness, Rats, and People Power

In the 1950s, researchers at Johns Hopkins conducted some very troubling experiments. They caught wild rats and squeezed them in their hands until they stopped struggling, teaching them that nothing they did would let them escape the crushing grip of their human captors. Then they dropped the rats in a bucket of water and watched them swim.

Now, wild rats are superb swimmers. On average, rats that had not received the squeeze treatment lasted around 60 hours in the bucket before they gave up from exhaustion and allowed themselves to drown. One unsqueezed rat swam for 81 hours.

A later rats-in-bucket experiment (not quite so brutal). Photo credit: MBK (Marjie) (Flickr).

The average squeezed rat sank after 30 minutes.

In the 1960s and 1970s, Martin Seligman became interested in this phenomenon, which he called “learned helplessness,” and he was able to trigger similar “giving up” behavior in dogs and other animals. He theorized that human depression …

All I Really Need To Know I Didn’t Learn In Compugarten

I’m glad newly minted software engineers are exposed to data structures, compilers, concurrency, graph theory, assembly language, and the other goodies that constitute a computer science curriculum. All that stuff is important.

But it’s not enough.

Not all classroom material for CS folks should be technical. Photo credit: uniinnsbruck (Flickr).

Since I’m halfway to curmudgeon-hood, I frequently find myself lamenting educational blind spots in the young. I’ve even toyed with the idea of teaching at the nearest university, someday when I Have More Time™. If academia would take me, my lesson plans might cover some of the following topics:

Put Your Const Foot Forward

Here are two C++ style habits that I recommend. Neither is earth-shattering, but both pay off in ways I find useful. Both relate to the order in which constness shows up in your syntax.

1. When you write a conditional, consider putting the constant (unchangeable) value first:

if (0 == i)

… instead of:

if (i == 0)

The reason is simple. If you mistype and accidentally write a single = instead of two, turning the comparison into an assignment, you’ll get a compile error instead of subtle, difficult-to-find misbehavior. (Thanks to my friend Doug for reminding me about this one not long ago.)
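To make the failure mode concrete, here’s a minimal sketch of the typo in both orders (my own illustration, not code from the original post):

int i = 0;
// ... i gets assigned meaningful values elsewhere ...

if (i = 0)   // typo: assignment, not comparison; compiles, and the branch never runs
{
    // unreachable, and i has been silently zeroed
}

if (0 = i)   // same typo with the constant first: compile error
{
    // the mistake is caught before it can ever ship
}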

Ah, the joys of pointers… Image credit: xkcd.

2. With any data types that involve pointers, prefer putting the const keyword after the item that it modifies:

char const * VERSION = "2.5";

… instead of:

const char * VERSION = "2.5";

This rule is simple to follow, and it makes the semantics of constness crystal clear. It lets you read data types backwards (from right to left) to get their meaning in plain English, which helps uncover careless errors. In either of the declarations of VERSION given above, the coder probably intends to create a constant, but that’s not what the code says. The two are identical as far as the compiler is concerned, but the first variant makes the mistake obvious. Reading right to left, the data type of VERSION is “pointer to const char”, so VERSION itself could still be incremented or reassigned.
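A quick hypothetical snippet makes the point:

char const * VERSION = "2.5";

VERSION = "3.0";   // compiles: the characters are const, but the pointer is not
++VERSION;         // also compiles: VERSION now points at ".5"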

Use the right-to-left rule in reverse to solve the problem. If we want a “const pointer to const char”, then we want:

char const * const VERSION = "2.5";

That is a truly constant version string in C++; neither the pointer nor the characters it points to can change. (Thanks to my friend Julie for teaching me this one.)

This might seem like nit-picky stuff, but if you ever get into const_iterator classes and STL containers, this particular habit helps you write or use templates with much greater comfort. It also helps if you have pointers to pointers and the like. (For consistency, I prefer to follow the same convention for reference variables as well. However, this matters less, because a reference itself can never be reseated; const only ever applies to the thing the reference refers to.)
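Here’s a small sketch of how the right-to-left reading scales to messier types. The names are mine, and the commented-out lines are the ones a compiler would reject:

#include <vector>

int main()
{
    // Read right to left: "pointer to const pointer to const char"
    char const c = 'x';
    char const * inner = &c;
    char const * const * table = &inner;

    // *table = nullptr;   // error: the inner pointer is const
    // **table = 'y';      // error: the character is const
    table = nullptr;       // fine: the outer pointer itself can be reassigned

    // The same habit makes const_iterator semantics obvious:
    std::vector<int> v{1, 2, 3};
    std::vector<int>::const_iterator it = v.cbegin();
    // *it = 42;           // error: elements are read-only through a const_iterator
    ++it;                  // fine: the iterator itself can still advance

    return 0;
}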

Action Item

Share a tip of your own, or tell me why you prefer different conventions.

Metrics, Plumb Lines, and System Thinking

Friday morning I was at a seminar taught by Jason Taylor, CTO at Allegiance. We were discussing how dev team velocity and product quality can compete for our attention; sometimes we trade one for the other. Jason mentioned that he’s a fan of competing metrics, and some neurons connected in my brain.

Plumb line suspended from the center point of multiple balancing legs. Photo credit: suttonhoo (Flickr)

I’m a big believer in measurement. As the old adage goes, you can’t improve what you don’t measure. Next time someone urges you to change a behavior, or tells you she’s going to, ask what measurement of change is being proposed. If you get an unsatisfying answer, I predict you’ll also get an unsatisfying outcome.

I’m also a big believer in balance, as I’ve written about before. Good software balances many considerations.

Besides these existing predispositions, I’d recently read a blog post by Seth Godin cautioning that we need to choose wisely what we measure. And I’ve been digesting The Fifth Discipline, by Peter Senge, which advocates holistic, systems thinking, in which we recognize interrelationships that go well beyond simplistic, direct cause and effect.

All of these mental ingredients …

Big Data In Motion

I’ve been at Cloud Expo this week, listening to lots of industry hoopla about building cloud-centric apps, managing clouds, purchasing hardware for clouds, buying private clouds from public cloud providers, and so forth.

Photo credit: aquababe (Flickr)

One interesting decision by the conference organizers was to bring “big data” under the same umbrella. There’s a whole track here about big data, and it gets mentioned in almost every presentation.

And I’ve sensed a shift in the wind.

Until quite recently, “big data” was all about mining assets in a data warehouse. You accumulated your big data over time. It sat in a big archive, and you planned to analyze it. You spun up Hadoop or some other MapReduce-style tool and crunched for days or weeks until you achieved some analytical goal.

What I’m hearing now is an acknowledgement that an important use case for big data (perhaps the most important one) has little to do with data at rest. Instead, it recognizes that you’ll never have time to go back and sift through a vast archive; you have to notice trends by analyzing data as it streams past and disappears into the bit bucket. The data is still big, but the bigness now has more to do with volume and throughput, and less to do with cumulative size.

This has interesting implications. Algorithms that were written on the assumption that you can corral the data set under analysis need to be replaced by ones based on statistical sampling; exactness needs to give way to fuzziness.
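As one hypothetical illustration of that shift (my sketch, not anything presented at the conference), classic reservoir sampling keeps a fixed-size, uniformly random sample of a stream whose length you don’t know in advance and which you can never replay:

#include <cstddef>
#include <random>
#include <vector>

// Keep a uniform random sample of up to k items from a stream of unknown length.
template <typename T>
class Reservoir
{
public:
    explicit Reservoir(std::size_t k) : k_(k), seen_(0), rng_(std::random_device{}()) {}

    void offer(T const & item)
    {
        ++seen_;
        if (sample_.size() < k_)
        {
            sample_.push_back(item);   // fill the reservoir first
        }
        else
        {
            // Replace a random slot with probability k / seen.
            std::uniform_int_distribution<std::size_t> pick(0, seen_ - 1);
            std::size_t j = pick(rng_);
            if (j < k_)
            {
                sample_[j] = item;
            }
        }
    }

    std::vector<T> const & sample() const { return sample_; }

private:
    std::size_t k_;
    std::size_t seen_;
    std::mt19937 rng_;
    std::vector<T> sample_;
};

Every item seen so far has the same chance of being in the sample, yet memory stays bounded no matter how much data streams past.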

Interestingly, I think this will make computer-driven data analysis much more similar to the way humans process information. As I’ve said elsewhere, when faced with a difficult design problem, a smart question to ask is: how does Mother Nature solve it?