Heartbleed Software Snafu: The Good, the Bad and the Ugly

The ramifications from the years-long security hole are both better and worse than we initially thought


Nine days after the announcement of the Heartbleed vulnerability in OpenSSL, software widely used to secure Internet traffic, we have a much better understanding of the extent of the damage. (Thanks to the invaluable xkcd Web comic, we also have a better understanding of how Heartbleed works.)

The good news is that the first guesstimate of the number of afflicted Web sites (as many as two thirds of the total) proved to be wildly pessimistic. University of Michigan computer scientists Zakir Durumeric, David Adrian, Michael Bailey and J. Alex Halderman have conducted repeated scans of Alexa Internet's Top 1 Million Domains (https://zmap.io/heartbleed/). As of 11 A.M. Eastern on April 13, the group found that, of the 45 percent of those sites that support TLS, 6.2 percent were vulnerable to Heartbleed, down from the 11 percent the same group found on April 9. So that's the good news: fewer sites were affected than first thought, and about half of those were patched promptly.

Most of the rest of the news is less happy. The security company CloudFlare raised hopes briefly on April 11 by reporting that its engineers had concluded it was either extremely difficult or impossible to use Heartbleed to extract a server's private keys. Within hours four respondents had challenged CloudFlare's findings, and the company retracted its claim. The upshot is that any site or service affected by Heartbleed does indeed need to revoke its present certificate and cryptographic keys and obtain new ones.

We also now know that some unexpected services are or were affected, among them the content-delivery network Akamai; the Canadian equivalent of the Internal Revenue Service; BlackBerry's BBM messaging service for iOS and Android (although not the company's phones or tablets); Cisco and Juniper networking equipment; and the approximately four million Android phones and tablets running the 2012 4.1.1 version of the operating system.
Robin Seggelmann, the volunteer German programmer who introduced the flaw accidentally while adding some bug fixes and new features, and who works for a subsidiary of Deutsche Telekom, has openly taken responsibility for committing it to the code (saving his changes) on New Year's Eve 2011.

Heartbleed's impact on public trust was exacerbated when Bloomberg reported last Friday that the NSA had been exploiting Heartbleed for the past two years. The NSA denies the allegation, but a report published by the New York Times makes it sound plausible. According to the Times, Pres. Barack Obama has said that the intelligence agencies should reveal bugs such as Heartbleed to ensure they will be fixed, but he left open an exception for "a clear national security or law enforcement need." To many, that exception is dangerously broad.

The technical community is still discussing how we got here and what should be done. The problems fall into three broad categories: how the Internet's critical infrastructure is funded; the tools used to write and test software; and the processes by which software becomes widely adopted.

A Wired article pointed out that many open-source projects struggle to find sufficient funding. OpenSSL, for example, is run by four coders, only one of whom works on it full-time, and the project has never raised more than $1 million in a single year. Many technology companies donate significant sums of money and manpower to open-source projects, but that support is unevenly distributed. As Wired points out, a startling amount of widely used software is maintained by a tiny number of unpaid or poorly paid people.

The other two issues, how software is written, reviewed and adopted, are partly a result of the underfunding problem. Last week the security engineer Dan Kaminsky wrote in a blog post about the urgency of identifying these interdependencies.
“What are the one million most important lines of code that are reachable by attackers and least covered by defenders?” he asked. The challenge in finding those interdependencies is that reviewing code is like housework: it is boring and time-consuming, people notice only when it hasn't been done, and doing it brings little reward.

One suggestion, made by Ross Anderson, Rainer Boehme, Richard Clayton and Tyler Moore in a 2008 report on security economics for the European Union Agency for Network and Information Security (ENISA), is to introduce liability for faulty products into the software industry. Under such a regime you might still have millions of people depending on software written by four guys who like to code at midnight on New Year's Eve, but the vendors who incorporate it into their products would have to pay much more attention to reviewing the code before adopting it.