We’ve had some time for the shock of the Heartbleed announcement to sink in and there’s a lot to consider. While the first impressions might be about the serious, exploitable bug and the repercussions of its abuse, the incident casts light on both the value and risks of open source.
What is Heartbleed?
Heartbleed is the popular name given to the discovery of a serious flaw in the implementation of the OpenSSL library, used widely to implement the cryptography needed by internet communications. When a feature was added at the end of 2011 to keep a connection alive by sending a regular “heartbeat”, an error was made in the implementation. The heartbeat mechanism involves one end of the connection sending a known text to the other end, which then returns the text to show the connection is alive. The known text is supplied as a string parameter accompanied by a length parameter. The implementation fails to check that the length parameter matches the actual length of the supplied text, so when the requested length of data is returned it includes whatever happened to occupy the adjacent memory. Since a large length can be requested (up to 64KB per request), the implementation allows a remote attacker to read blocks of memory previously used by other parts of the process, notably including passwords and the text of encryption keys.
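To make the flaw concrete, here is a minimal, self-contained sketch of that kind of over-read. It is illustrative only: the buffer layout, names and sizes are invented for this article, and this is not the actual OpenSSL code, which operates on TLS record structures.

/* Simplified sketch of the Heartbleed over-read. NOT the real
 * OpenSSL code; the memory layout here is contrived for clarity. */
#include <stdio.h>
#include <string.h>

/* Pretend this heap region holds a 4-byte heartbeat payload followed
 * by stale data left behind by earlier processing. */
static unsigned char arena[] =
    "PING"
    "-----BEGIN PRIVATE KEY-----MIIEvg...";  /* invented stale data */

/* Flawed responder: it trusts the sender's claimed length instead of
 * the actual payload size, so memcpy() reads past the payload. */
static void heartbeat_reply(const unsigned char *payload,
                            size_t claimed_len, unsigned char *reply)
{
    /* The missing check was roughly equivalent to:
     *   if (claimed_len > actual_payload_len) discard the request; */
    memcpy(reply, payload, claimed_len);
}

int main(void)
{
    unsigned char reply[64] = {0};

    /* The real payload is 4 bytes, but the request claims 40. */
    heartbeat_reply(arena, 40, reply);

    printf("leaked reply: %.40s\n", reply);
    return 0;
}

Running this prints the “private key” material sitting next to the 4-byte payload, which is exactly the shape of the leak: not a crash, just quietly returned memory.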
Why didn’t it get found?
All programmers make mistakes, so there’s no huge surprise that it happened in OpenSSL. But why did it go undetected for so long? There are several contributing factors.
Hard to spot error in complex code
First, the OpenSSL code is huge, years old and implements a set of algorithms that need specialist cryptographic knowledge to understand. Reading someone else’s code in this context is a difficult, dull and time-consuming task. That’s not a recipe for scrutiny even with a large paid team.
Community too small
There is no large, paid team. The whole community developing and maintaining OpenSSL was no larger than 11 people before Heartbleed, with only one of them working full-time on the code. Despite the library being widely deployed, the project did not receive regular participation from the developers using OpenSSL in other projects. There has been speculation as to why. An obvious explanation is that the cryptographic complexity of the code meant non-specialists could not be effective community participants, but US government open source specialist David A. Wheeler has speculated that the unusual licensing arrangement for OpenSSL (a custom open source license with non-GPL-compatible copyleft effects and potentially challenging advertising clauses) discouraged community involvement as well.
Exploit detection turned off
What can a small community do to spot uninitialised memory and buffer over-run errors? Memory allocation libraries today often include compile-time or even run-time detection of such errors, and the OpenBSD allocator available to OpenSSL is no exception. But according to Theo de Raadt, those protections were bypassed long ago: for performance reasons OpenSSL wraps the system allocator with its own memory cache, and the wrapper was never removed even after the performance issues it worked around had been addressed.
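As an illustration of why such a wrapper matters, here is a minimal sketch, again invented for this article rather than taken from OpenSSL: freed buffers never reach the system allocator, so a hardened malloc like OpenBSD’s never gets the chance to poison or unmap them, and their old contents survive to be recycled.

/* Sketch of a caching allocator wrapper defeating allocator-level
 * detection. Illustrative only; not OpenSSL's actual freelist code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUF_SIZE 64

static void *freelist = NULL;        /* single-slot cache for brevity */

static void *wrapped_malloc(void)
{
    if (freelist) {                  /* reuse a cached buffer as-is */
        void *p = freelist;
        freelist = NULL;
        return p;
    }
    return malloc(BUF_SIZE);
}

static void wrapped_free(void *p)
{
    freelist = p;                    /* cached, never returned to libc */
}

int main(void)
{
    char *a = wrapped_malloc();
    strcpy(a, "secret key material");
    wrapped_free(a);                 /* libc free() is never called... */

    char *b = wrapped_malloc();      /* ...so the old contents survive */
    printf("recycled buffer still holds: %s\n", b);

    free(b);                         /* finally hand it back to libc */
    return 0;
}

A hardened system allocator would typically junk-fill or unmap the buffer on free(), turning a bug like Heartbleed into a detectable crash; the caching wrapper means that code path simply never runs.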
In many ways, this is a “perfect storm” story. A subtle error in the implementation of a new feature in a complex codebase involving specialist mathematics, combined with a team too small to conduct detailed reviews and a failure to re-enable automated test facilities. It could actually happen in any codebase; this time it happened in an open source one at the heart of the Internet.
Does this invalidate open source?
So what happened to “open source is more secure” and “many eyes make all bugs shallow”? I don’t think it in any way affects the reality of open source; it’s still the best way to develop shared code that implements open standards, especially where security is involved.
As several security experts have pointed out, open source is not inherently more secure than any other approach to developing software. It does inherently encourage behaviours that are healthy for security software: public algorithms, incremental change, documented commits, and the need to explain and defend the rationale for changes. But there’s no guarantee any of them will in fact happen. It’s possible to write bad open source code just as it’s possible to write bad proprietary code.
The idea Eric Raymond expressed as “given enough eyeballs, all bugs are shallow” isn’t invalidated here either, as long as you realise that for it to work there really do need to be many eyes, and they need both to be looking and to understand what they see. Cargo-cult understandings of open source that assume the mere application of an OSI-approved license results in public scrutiny help no-one, and Heartbleed is something of a wake-up call to that effect.
Would proprietary be better?
Doing this with proprietary code would be unlikely to make things better. Hiding a development team behind NDAs and corporate secrecy, having their priorities driven by unseen managers and having their code kept invisible to potential users all constitute an anti-pattern for security software. In addition, the ability of open source to bring all the best hands to the problem once it’s identified would simply not exist with a proprietary solution. Engaging would need permission and bureaucracy, and many contributors would just say no. Open source is definitely more promising, and OpenSSL is probably the lowest risk open source answer to this particular programming need.
Should I make donations?
What’s the right way to react? Fork the code? No, that’s been done and probably just compounds the problem by removing potential expert participants from the OpenSSL pool. Re-implement it? That’s been tried as well. Even with various forks and re-implementations, OpenSSL remains widely used because the problem is complex and the experts are few. Those new projects are unlikely to surpass in a few months what OpenSSL has largely succeeded at doing over a decade. Donate money? Several corporations, acting under the auspices of the Linux Foundation, have actually done that.
While cash donations to a project can be a short-term fix, in the long term they are unlikely to help unless they lead to more developers both writing and testing the code. The best fix would be for the companies most dependent on OpenSSL to hire experts and pay them to join the community and work on the code.
This, after all, is the key to open source. It’s not about free stuff; rather, open source delivers the liberties that allow developers with differing motivations and origins to collaborate on a codebase without needing to ask permission from anyone other than each other, and even then only for the sake of social effectiveness. Throwing money at an open source project doesn’t automatically make anything better; that takes people with actual skills.
Heartbleed has shown us that open source is no guarantee of invulnerability. Fortunately, the crisis has highlighted a set of needs that are being met in a way no other approach would have allowed. Looking past the crisis, it’s possible Heartbleed is actually making things better, faster.
[By Simon Phipps. First published in Linux Voice Issue 4]