Worst Breach Ever?

There are a lot of headlines saying, as the Washington Post does, that the TJX “Data Theft Grows To Biggest Ever.” The trouble is, that claim is wrong, and it’s wrong even amended to “biggest reported ever.” The biggest reported theft of personal data is Scott Levine’s theft of over a billion records from Acxiom. As the Department of Justice says in “Justice Department Announces Conviction of Florida Man Accused of Massive Data Theft from Acxiom, Inc.:”

The evidence at trial showed that Levine, working with others at Snipermail, Inc., was responsible for stealing over a billion records containing personal information, physical addresses, and e-mail addresses, among other items, through approximately August 2003.

The Washington Post doubtless regrets the error, and will be running a correction shortly.

Image via Shipbrook.

The Sky Is Not Falling–What Can We Learn?

I’d like to respond to two questions posted to my “Security Breaches Are Good For You” post. Antonomasia writes “there are security events other than customer data disclosure – any thoughts on how those can be subjected to evidence-based assessment?” Blivious writes: “What about other kinds of breaches? The apparent moral standard only applies to personal information.”

A goal in giving the talk was to draw attention to the trend, which is that we’re talking about some breaches, and the sky is not falling. Who’dda thunk?

My hope is that over the next decade, we will mature in how we discuss breaches. SB 1386 will be looked back upon as a watershed event that got us talking. If that happens, then we’ll start to see other events being discussed. (This happens in the airline industry, and again, the sky is not falling.)

So yes, today, the moral standard and the law apply to personal information, but I believe that they can help transform the way we perceive and discuss other kinds of issues.

Photo: “Falling from the heavens,” from Stock.xchng.

Names Don’t Matter, Accountability Does

Riffing on what Arthur has said, I’ll take a slightly different exception to Mike Rothman’s rant on anonymity.

Kathy Sierra’s been treated pretty shabbily. The problem isn’t anonymity, it’s a lack of accountability. These people are behaving unacceptably, and we don’t know who they are. However, there are cases where people have acted in similarly inappropriate ways, and the salient point is actually the opposite of anonymity: celebrity. If an anonymous person said that a Supreme Court justice should be poisoned (ha ha, that’s just a joke), they might get their IP address tracked down and then get a good talking-to. It seems possible that the very people defending the celebrity I refuse to name might feel differently if it were an anonymous person and a different Justice.

I’m really not in the mood to compare and contrast all of this with cartoonists and mullahs, slurs on a civilization that lived where you live now, insults upon a national leader, videos about political candidates, and so on. They’re all about when inappropriate speech is so inappropriate that it’s not just talk, it’s action. I’ll just say again that Kathy Sierra has been treated shabbily, and I will smile when just deserts are handed to the perps. I know that Rothman’s outburst is just an eye-rolling stupid thing said in the heat of righteous anger.

However, I’m going to part company with Arthur, because I’m not writing anonymously. I’m writing under a pseudonym and there’s a load of difference. One of the major differences is, in fact, accountability. You, Gentle Reader, have some impression of me by now, and if I say something stupid, that will get mixed into your opinion.

A pseudonym is merely a mask. Several people have figured out from an ill-timed writing of mine that I am actually Steve Jobs. No one seemed to bite my hints about the Presidency. The point, anyway, is that acting and speaking in a way that is connected and accountable is often a good thing; anonymity or pseudonymity is, if anything, just a means to an end, and that end is separation, not a lack of accountability. The Cigarette-Smoking Man was anonymous, but we all knew who he was, and his actions also had their consequences.

Security Breaches Are Good for You: My Shmoocon talk

At Shmoocon, I talked about how “Security Breaches are Good for You.” The talk deviated a little from the proposed outline. I blame emergent chaos.

Since California’s SB 1386 came into effect, we have recorded public notice of over 500 security breaches. There is a new legal and moral norm emerging: breaches should be disclosed. This is the most significant event in information security since Aleph1 published “Smashing the Stack for Fun and Profit,” and brought stack-smashing to the masses.

The reason that breaches are so important is that they provide us with an objective and hard-to-manipulate data set which we can use to look at the world. It’s a basis for evidence in computer security. Breaches offer a unique and new opportunity to study what really goes wrong. They allow us to move beyond purely qualitative arguments about how bad things are, or why they are bad, and add quantification. The public awareness of the data lost on laptops is one example of this. There’s no doubt that the data we get from these laws is imperfect, but look at the alternative: the FBI/CSI survey.
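To make the quantification point concrete, here’s a toy sketch in Python of the kind of tallying that public breach notices enable. The organizations, causes, and record counts below are invented for illustration, not drawn from any real dataset:

```python
# Hypothetical breach records -- invented for illustration only.
from collections import Counter

breaches = [
    {"org": "ExampleCo",  "cause": "lost laptop", "records": 26500},
    {"org": "SampleBank", "cause": "hacking",     "records": 145000},
    {"org": "AcmeRetail", "cause": "lost tape",   "records": 90000},
    {"org": "DemoU",      "cause": "lost laptop", "records": 13000},
]

# Tally records exposed by cause -- the sort of simple, objective
# question that purely qualitative argument can't answer.
by_cause = Counter()
for b in breaches:
    by_cause[b["cause"]] += b["records"]

for cause, total in by_cause.most_common():
    print(f"{cause}: {total} records exposed")
```

Even a trivial tally like this turns “things are bad” into “lost media accounted for X of Y exposed records,” which is a claim someone can check, dispute, or refine.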

The talk covered why breaches are an important opportunity, some threats to the emergent data, and what we can do to improve the quality and quantity of the data that can drive security science.

Rather than posting slides, I’ve posted slides with a running commentary, because I didn’t think the slides were particularly self-explanatory.

[Update: fixed spelling.]

On Anonymity

So Mike Rothman thinks that anonymity is for cowards:

During the discussion last night, one guy pointed out that sometimes things are too sensitive or controversial or unpopular to say, so anonymity allows folks to do that. I call bullshit on that. Anonymity is the tool of a coward.

And while I agree with Mike that the treatment that Kathy Sierra has received is reprehensible and highly unacceptable, it makes me awfully glad that he’s not in charge of the company that I work for or a member of our government. Just because the tool has been abused does not mean that it is inappropriate. Our founding fathers felt the need for anonymity when they wrote the Federalist Papers and I don’t think anyone would consider them cowards. At the extreme end, anonymity is often the only way that people can speak out more than once in oppressive regimes.
Anonymity also serves other purposes. Take, for instance, the ability to travel freely without having to show papers. Or, to keep things in the vein of speech, I post not out of a need to protect myself, but rather to give my employer plausible deniability. By not using my real name here, I remove the association of my personal opinion from that of my bosses. This allows me to speak my mind freely and openly without having to worry about what my PR people will think, or be concerned that the press might pick up what I say and inadvertently make me into an involuntary spokesperson for my company.
Well, at least Mordaxus and I are in good company.

Holding a Lighted Brand up to Damage

Adam comments on some breach commentary, and quotes Nick Owen saying that breaches are a sign of incompetence.

I can’t let this stand un-commented-upon. I believe that is a dangerous comment, and one that needs to be squashed early. It’s like saying that a bug tracking system with lots of bugs in it is a sign of engineering incompetence. It actually means the opposite. A truly incompetent management team wouldn’t know they’d been breached. A slightly less incompetent team would sweep it under the rug. This is true for software developers as well as operations people.

This is a very dangerous comment because it rewards the truly incompetent who don’t know how screwed up they are. It is a dangerous comment because it rewards the mendacious, who hide that they’ve been breached — or who design their operations so they won’t know when they’re breached. Stop. You’re going to set us backwards if you keep that up.

It doesn’t matter how good you are, some day you will be breached. Accept that. As a consumer, that’s a mildly unpleasant thing to think of, but it’s true. However, you want people who lose your data to have the wit to know they’ve lost it, and the morality to own up to it.

I also want to comment on Allan Friedman’s comment about Iron Mountain, as I’ve noticed the same thing: many breaches involved Iron Mountain losing tapes. But I’m not an economist; I’m a guy who’s spent time in operational groups, and I have an alternative hypothesis.

Let us assume an organization that makes daily backups and sends them to a data warehouse. Let us suppose that the tape monkeys have a Very Bad Day. Sam’s on vacation. Ginger broke up with her boyfriend and came in late. Two tapes verified bad and had to be re-done, and Networking misconfigured something so you couldn’t get to C Building at all. The Iron Mountain guys come in to get the tapes from you, and you tell them the horror story. They say hey, no problem, just give them what you have. They’ll take it off to the warehouse, and as long as there’s no disaster tomorrow, it’ll all be taken care of in the next incremental. The CIO never has to know. Whew! Thanks, Iron Mountain! You’re a life saver.

Iron Mountain is being smart. The real customer is the supervisor of the tape monkeys, and if you help him shine, he helps you shine. Alas, they’re being smart until lost data is not simply a gap in the backup history, it’s a breach. Then this habit of mutual back-scratching all falls apart. If someone does an audit and finds out that a backup of the Order Database is missing, Iron Mountain takes the fall. All the paperwork says that the database was backed up, put onto tape 1723-A5, and sent to the warehouse. And therefore, so it was. Iron Mountain can’t say, “Um, actually, for years now, we’ve been covering for our customers and letting them claim data was in the warehouse when we all know it wasn’t.” They just have to take it on the chin.
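The audit in this story amounts to reconciling the paperwork against the warehouse. Here’s a minimal sketch in Python; the tape IDs and dataset names are invented for illustration, apart from the 1723-A5 example above:

```python
# What the paperwork says was shipped: tape ID -> dataset.
manifest = {
    "1723-A4": "Mail Spool",
    "1723-A5": "Order Database",
    "1723-A6": "HR Share",
}

# What the warehouse can actually produce on request.
warehouse = {"1723-A4", "1723-A6"}

# Any manifest entry absent from the warehouse is, under the new
# rules, not a quiet gap in the backup history but a reportable loss.
missing = {tape: ds for tape, ds in manifest.items() if tape not in warehouse}

for tape, ds in sorted(missing.items()):
    print(f"missing: {tape} ({ds})")
```

The point of the sketch is that the reconciliation itself is trivial; what changed is that the discrepancy it surfaces now has consequences.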

You know what? The real customers, the tape monkeys who have been let off the hook yet again, know that Iron Mountain kept them out of even bigger trouble. They know that the Iron Mountain guys can’t let them hand over an empty box any more. But they aren’t going to switch to another company, either.

My hypothesis could be wrong. I don’t know if it is. I can’t admit to ever having been in a situation like my hypothesis. I am, however, a cynic, and I know that if Iron Mountain were in the habit of losing tapes, it may or may not show up in their stock price. But if they were in the habit of making the tape monkeys look more competent than they actually are, it is consistent with observed phenomena. It doesn’t mean my hypothesis is right; heck, the magic blue smoke theory of semiconductor physics is consistent with observed phenomena. But when I noticed Iron Mountain showing up in a number of breaches, the smoke I smelled seemed to have a hint of electrolytic capacitor in it, and a whiff of insulation.

Breaches and Brand Damage

Tim Erlin runs some numbers in “Is Brand Damage a Myth” at nCircle, and Nick Owen follows on with some diplomatically presented thoughts in “Brand Damage, Stock Price and Cockroaches:”

My theory is that information security breaches are an indicator of a lack of management competence. Moreover, as discussed previously, information security breaches are like cockroaches: they rarely travel alone, and seeing one guarantees there are more that can’t be seen. The question becomes: does the breach mean bad security, or bad management?

I refer (again) to “Is There a Cost to Privacy Breaches? An Event Study,” by Alessandro Acquisti, Allan Friedman, and Rahul Telang. Nick, why are they wrong? Why aren’t TJX and CPS outliers?

I also don’t buy the bad management argument. Allocating resources to security is an art, not a science. I’ll offer up a simple experiment to illustrate that shortly.

(Thanks to the several readers who sent in links.)

[Update: Don’t miss Allan’s informative reply in the comments.]

Privacy’s Other Path

Dan Solove writes:

Professor Neil Richards (Washington University School of Law) and I have posted on SSRN our new article, Privacy’s Other Path: Recovering the Law of Confidentiality, 96 Georgetown Law Journal __ (forthcoming 2007). The article engages in an historical and comparative discussion of American and English privacy law, a topic that has been relatively unexplored in America.

Although the tort law of privacy in America and England arose from the very same common law cases, the law has developed on very different paths in each country. For example, in England, a friend, spouse, lover, or nearly anybody else who violates a confidence can be liable. In America, people are said to assume the risk of betrayal for many breaches of confidence; the law, however, protects against the invasion of privacy by strangers. How and why did the law develop so differently in America and England? Our new article explores the answers to these questions and debunks many myths in the conventional wisdom about privacy law.

See “Privacy’s Other Path” on Concurring Opinions. The article is pretty easy reading, and I recommend it highly.

Thumbing A Ride…

The Daily Breeze tells us how Lorna Herf discovered South Bay BMW in Torrance’s sales policy of “no fingerprint, no car.” The dealership claims that this is an effort to prevent identity theft, though how this would help the customer is unclear. Additionally, this effort is being actively supported by the sheriff’s office. I think Ms. Herf said it best:

They’re going about this with the best of intentions, but in the wrong way. A private dealer shouldn’t take the law into their own hands.

Things like this and the nonsense at Disney World add no discernible security or protections and serve only to get people used to having their privacy further invaded.
As always, all non-trivial privacy fears come true.

A Different X-Box Hack

Back in the day, I was a member of FIRST. (Btw, rumor has it Chris and Adam are presenting at their annual conference this summer). At the time, one of the more prolific posters to the mailing list was Robert Hensing from Microsoft (Adam, if you haven’t met Rob, you should look him up).
Anyways, I recently re-discovered that Rob has a blog, and I found this entertaining post about resurrecting a dead X-Box. I love solutions to problems like this. Not only is it ingenious, but it would also have made Douglas Adams proud.

DoS == Vulnerability?

I think that a Denial of Service condition is a vulnerability, but lots of other people don’t. Last week Dave G. over at Matasano posted a seemingly very simple explanation that nicely sums up the way I’d always been taught to think about these sorts of issues:

The ability to halt or shutdown most modern operating systems usually requires credentials (you must have an account or be on console) and privilege (you must be in the wheel or admin group). If you can bypass authentication and authorization requirements and cause a machine to panic (let alone gracefully shutdown), then I think we have a security problem.

Security being the contentious field that it is, plenty of folks didn’t agree with his assessment. The discussion in comments (now up to 32) is well worth reading and brings up some great alternative viewpoints. Where do you stand on this issue?

Why BitLocker Won’t Help Most Companies

A couple of weeks ago, Mike Rothman linked to an article by George Ou about using EFS and BitLocker under Vista. There he made an extraordinary claim:

Since BitLocker won’t encrypt additional hard drive volumes, whether they’re logical partitions on the same physical disk or additional disks, you must use EFS to encrypt those volumes by selecting all the folders and files from the root.

I said to myself, “Surely this must be wrong; Microsoft would never do anything like that….” So I set about Googling, and I discovered that I was, in fact, wrong. According to the technical overview of BitLocker on TechNet (Section 3.4.1):

Volumes other than the operating system volume and the system volume are called “data volumes”. BitLocker encryption of data volumes is only supported in Windows Server “Longhorn” in v1.

We’ve now confirmed that BitLocker only works on the system volume. This makes it completely useless for a huge chunk of corporate America. Why? Because most companies tend to configure their machines with at least two partitions: a system partition, where the OS and software go, and a data partition, where documents, email, and whatnot are stored. This is done both for ease of backup and to give IT the ability to reinstall the operating system without worrying about overwriting users’ data. Additionally, companies are increasingly giving users external hard drives of one variety or another so that they can do their own backups.
So either companies won’t use BitLocker because it doesn’t give them anything, or, worse, will deploy it and think they are protected when they aren’t. I’ve already harassed Adam about this, but I’m curious about why this design decision was made.
Update: Sean from MS provided a link in the comments to the command-line magic incantation to enable BitLocker on any NTFS volume. Thanks, Sean!