Lately, I’ve been playing with an idea. Work by both Microsoft and certain open source projects has made finding and exploiting vulnerabilities in their code substantially harder. So the effort needed to find a vulnerability has gone up, and the effort needed to build a working exploit has gone up. Thus, the willingness of a vulnerability researcher to publish a vuln for fame is declining, because the investment has increased, and so the ROI has dropped. (This paragraph largely paraphrases Halvar Flake.)
I will get to explaining the picture, but I’m going to take my (and possibly your) sweet time about it.
I’ve been slowly drafting this post for a while, and am motivated to post it now by a set of things that I see as closely related:
- Michal Zalewski’s post of a half-baked exploit to bugtraq a few weeks ago. What’s most interesting to me is that Zalewski didn’t finish the POC (proof of concept) into a full-blown exploit. Zalewski is really, really good; his not producing full code is unusual.
- There’s a post on MetaSploit by HD Moore, “Exploit Development: GroupWise Messenger Service” that talks about all the work that goes into research.
- Jennifer Granick has an article in Wired, “Spot a Bug, Go to Jail.”
- Dancho Danchev has a blog post, “Shaping the Market for Security Vulnerabilities Through Exploit Derivatives.” There’s a whole bunch of ways in which I think markets can help us extract useful information, but that are hard to execute on under the threat of lawsuit.
- Dave G has a blog post, “Vulnerability Fishing” with metaphors of fishing with spears and with dynamite.
All of these entail some sense of discomfort with aspects of the working agreement around vulnerability research that we call ‘responsible disclosure.’
When I was a young whippersnapper, I could find a vuln in an afternoon, and then spend the next month wrangling over whether the vendor was going to fix it. I didn’t start that negotiation impatient to get to the announcement.
As it becomes harder for amateurs to play, enterprises may fund the work, either for its marketing value or to exploit it. We’re also raising costs in other ways, as Granick points out in her Wired article.
So as the economic rules change, the availability of vulnerability information may decline in a way that I think is unfortunate. The net return is dropping (given the added risk), while the investment is skyrocketing.
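The argument reduces to simple arithmetic. Here is a back-of-the-envelope sketch of a researcher's expected return; every number is invented purely to illustrate the shape of the change, not to measure it:

```python
# Hypothetical model of a researcher's return on publishing a vulnerability.
# All figures are invented for illustration; units are arbitrary "effort dollars".

def net_return(value, search_cost, exploit_cost, legal_risk):
    """Value of publishing, minus the costs of finding the bug,
    building the exploit, and the expected cost of legal trouble."""
    return value - (search_cost + exploit_cost + legal_risk)

# The "young whippersnapper" era: cheap search, cheap exploit, little legal risk.
then = net_return(value=100, search_cost=10, exploit_cost=20, legal_risk=0)

# Today: hardened targets raise search and exploit costs; lawsuits add risk.
now = net_return(value=100, search_cost=60, exploit_cost=80, legal_risk=30)

print(then, now)  # 70 -70
```

The fame payoff (`value`) barely moves, but when every cost term rises at once, the sign of the return flips, and rational researchers exit the game.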
As for the sketch: it’s the start of an effort to think about these sorts of things. Where in the vulnerability discovery process does effort go? Who can control the shapes of those curves? How do their efforts to control them affect other participants?
The initial search line, by the way, is below the origin because time spent looking for bugs has no utility until you find something. (Pace Sardonix.)
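One way to read that shape is as a piecewise utility function: pure sunk cost while searching, with payoff accruing only after discovery. A toy version, with all constants hypothetical:

```python
def utility(t, discovery_time=5.0, cost_rate=1.0, payoff_rate=4.0):
    """Toy utility-over-time curve for vulnerability research.

    All parameters are hypothetical; the shape, not the numbers,
    is the point: negative (below the origin) during search,
    rising only once a bug is found.
    """
    sunk = -cost_rate * min(t, discovery_time)        # search time is pure cost
    if t <= discovery_time:
        return sunk
    return sunk + payoff_rate * (t - discovery_time)  # payoff after discovery

print(utility(3))  # -3.0: still searching, pure cost
print(utility(5))  # -5.0: the moment of discovery, maximum sunk cost
print(utility(8))  # 7.0: exploit development finally pays off
```

Raising the cost rate or pushing out the discovery time deepens and lengthens the below-origin segment, which is exactly what hardening a target does to a researcher's curve.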
Both researchers and product developers have control points. Researchers are developing tools, from static and dynamic analysis tools to frameworks like Metasploit or Canvas which allow for faster development of exploits. Product developers could make different choices about what level of description is involved. There are other, societal control points, such as the threat of lawsuit.
Thinking about it all in the form of linked curves could be helpful all around.
[Update: Bruce Schneier pointed out that he had drawn a remarkably similar curve, based on risk to victims rather than utility to attackers, and published it in Cryptogram in September 2000. He discusses risk; I discuss utility. The ideas are closely related, as the attacker’s utility ties to the defender’s risk. More to come on this.]