The IT response to Heartbleed is almost as scary as the hole itself. Patching it, installing new certificates, and then changing all passwords is fine as far as it goes, but a critical follow-up step is missing. We have to fundamentally rethink how the security of mission-critical software is handled.
Viewed properly, Heartbleed is a gift to IT: an urgent wake-up call to fundamental problems with how Internet security is addressed. If the call is heeded, we could see major improvements. If the flaw is just patched and then ignored, we're doomed. (I think we've all been doomed for years, but now I have more proof.)
Let's start with how Heartbleed happened. It was apparently created accidentally two years ago by German software developer Robin Seggelmann. In an interview with the Sydney Morning Herald, Seggelmann said, "I was working on improving OpenSSL and submitted numerous bug fixes and added new features. In one of the new features, unfortunately, I missed validating a variable containing a length."
After Seggelmann submitted the code, a reviewer "apparently also didn't notice the missing validation, so the error made its way from the development branch into the released version." Seggelmann said the error was "quite trivial," even though its effect wasn't. "It was a simple programming error in a new feature, which unfortunately occurred in a security-relevant area."
What Seggelmann did was fully understandable and forgivable. The massive planet-destroying problem is that our safety mechanisms for catching simple coding errors are all but nonexistent. If our checks and balances are so fragile that a typo can obliterate all meaningful security, we have some fundamental things to fix. Let's not forget that when Robert Tappan Morris unleashed the Internet Worm back in 1988 — the first major instance of the Internet crashing due to a worm — it, too, was the result of a simple coding error. He never intended to cause servers to crash, but crash they did.
David Schoenberger, CIO of security vendor Transcertain, argues that the real fundamental security flaw at play here is, bizarrely enough, an overabundance of trust exhibited by IT security folk. Personally, when I think of the best IT security specialists I've worked with over the years, having too much trust is not the first thought that comes to mind. But Schoenberger makes a good point.
"This is going to make people rethink what we're doing. There are so many things overlooked, taken for granted. In the IT world, we've relied on the trust factor for so long," he said. "Just look at these billion-dollar companies who are relying on peer-reviewed open source. We're not taking the time to prove it [is secure] ourselves. Because something mostly works and, as far as perception goes, it works well, it passes all our tests. It sucks the way testing is occurring right now with open source. But I won't even limit it to open source, as this could have happened to a commercial provider. Could have happened to anyone."