Giant Waves

Chandler Howell has a great post about giant waves. He quotes extensively from “Monster Rogue Waves” at Damninteresting:

More recently, satellite photos and radar imagery have documented the existence of numerous rogue waves, and it turns out that they are far more common than previously thought. During a three-week study in 2001, radar scanning detected ten monster waves in a 1.5 million square kilometer area. Satellites and direct observations have also established that rogue waves can happen anywhere, but they are most numerous in the North Atlantic and off the western shore of South Africa. In spite of their frequency, monster waves rarely meet with sea vessels because they are so short-lived.

He has interesting things to say about the waves and risk management, and I’d like to tie in my current thinking on breach analysis. The wave of reports about how people lose control of data entrusted to them is rocking some boats, and sinking a very few. As we get more and more data, we’ll be able to better analyze it, and focus our risk management techniques better on what matters most.

Speaking of the effects of naval risk management, don’t miss Nick Szabo on Genoa.

The Hugo Chavez Test for Voting Machines

At first I thought that the stories around Sequoia Voting Systems and Smartmatic having connections to Hugo Chavez were silly. I still do think that, but I also think that they’re coming out for an important reason: we have lost trust in the machinery of voting, and that is a criminal shame.

The right to vote, and to have one’s vote counted is fundamental to how and why we accept our government, even when it makes colossal mistakes. This is an ideal which people around the world recognize and aspire to. The imprint of legitimacy which an election confers on a leader is important enough that even the Soviets faked elections so they could claim that mantle.

If we had voting systems that were trustworthy, transparent and understood by those operating them, then we could buy our voting machines from Hugo Chavez or Mahmoud Ahmadinejad and not have to worry a lot about it. We do not, and cannot. We have transitioned from paper ballots and their understood problems into a brave new world of computerized and untrustworthy voting systems, and we are poorer for it.

I propose we call this the Hugo Chavez test, and see how all new voting technology fares under the test. We could realistically consider buying paper ballots, punch cards, or other verifiable voting technologies from the Chavez government, and be reasonably confident in our ability to test them and be sure we were getting what we specified. (I’m confident someone will point out an exceptionally clever trick, so read the comments.) I’m also confident that we can’t say the same of any computerized system on the market today. Our ability to audit them is simply too lacking, and the skills to do so too rare.

The photo is Malcolm X, because we sometimes forget that within living memory, not all Americans had the right to vote. We forget that that right was important enough for Malcolm X to declare that 1964 might be the “year of the ballot or the bullet.” That the ballot is so powerful that men ready to commit acts of violence could be placated by giving them the right to vote. It’s an important right, and the value of trust that our votes are counted accurately and securely is nearly incalculable.

While We’re Talking TSA

“Airport ID programs plagued by delays,” “Newark’s screeners might just suck,” “TSA delays cargo worker background checks” and finally, “To the hassles of air travel, add $1.4 million in fines:”

Passengers can be fined for their actions too. For example, “interference with screening” that includes physical contact could cost a traveler $1,500 to $5,000, and “non-physical contact” $500 to $1,500.

In reviewing incident reports, TSA officials consider factors such as whether the passenger tried to conceal the item or the “attitude of the violator.”

I’d ask what “non-physical contact” entails, but I’m taking TSA’s advice and “try[ing] not to over-think these guidelines.”

Like the posters say: “Thoughtcrime doesn’t pay. Don’t fuck with TSA.”

Update: don’t miss “Arrested in ATL-plastic bag too big.”

On Printing Boarding Passes, Christopher Soghoian-style.

Yesterday, I blogged about Christopher Soghoian’s print your own boarding pass tool. Quite a few people (including the FBI) are taking the wrong lesson from this. Wrong lessons include “we shouldn’t be allowed to print boarding passes,” “we should check ID at the gate,” and “Christopher Soghoian should be arrested.”

The right lesson is that the TSA is putting us all through a silly wringer based on an ID system they know is so porous as to be irrelevant. Much like they did with the “imminent threat” of “liquid bombs” wherein alleged conspirators didn’t have passports, the powers that be are responding with theater, rather than saying “it doesn’t matter.”

If we wanted useful screening, we would screen passengers at the door of the plane, like they do in, say, the Czech Republic. It’s too expensive. We might consider more air marshals. It’s too expensive. We might remove a line of seats and make the flight deck a larger area, with a sealed-off washroom and kitchen. It’s too expensive. And frankly, there are much better ways to spend our money, including leaving it in the hands of the taxpayers. But I digress.

There’s nothing in the print your own boarding pass that needs fixing, except bad and expensive theater. Let’s fix the problem by admitting that ID checking does no good, rather than acting all shocked at the power of a good demo.

“You’re doing a heck of a job, Kip”


Sure, it’s all over the web, but you might be living under a rock, or in a reality-free zone, and have missed “Make Your Own Fake Boarding Pass” at 27b/6.

The short version of the story is that someone has automated the process of creating your own fake boarding passes. Don’t worry, though, Osama isn’t on the no-fly list, so you can get through security with one of these.

As I mentioned in “What Did TSA Know and When Did They Know It,” TSA has known about this since at least February of 2004. If the no-fly list means anything, then they should have responded at least as effectively as they have to the whole “liquid bomb” scare. That they have not would be recognized and addressed if we had any effective oversight, or managerial accountability in the executive branch.

Risk Management Redux

Earlier this week, Mike Rothman took a swipe at Alex Hutton’s What Risk Management Isn’t by saying:

But I can’t imagine how you get all of the “analysts and engineers to regularly/constantly consider likelihood and impact.” Personally, I want my firewall guy managing the firewall. As CSO, my job is to make sure that firewall is protecting the right stuff. To me and maybe I’m being naive and keeping the proletariat down, but risk management is a MANAGEMENT discipline, and should be done by MANAGERS.

I have to disagree here. Risk management, in the end, is the responsibility of management, and as such the final decision belongs to them. But how can I as a manager make the right decision, and know that a firewall is protecting the right stuff, if my team isn’t well educated on what the risks are? How am I supposed to make the right decisions if I don’t know what the issues are? I need to have a staff of analysts, architects and engineers that I can trust to be regularly analyzing and evaluating the systems, applications and networks, so I can make the right choices or recommendations. I don’t need someone who blindly follows a help desk ticket. I don’t know a single CSO who wants to be micromanaging those sorts of decisions.

Health Care Privacy


Bob Sullivan has an article at Red Tape, “Health care privacy law: All bark, no bite?”, which focuses on the lack of penalties.

Two years ago, when Bill Clinton had heart surgery performed in New York’s Columbia Presbyterian Medical Center, 17 hospital employees — including a doctor — peeked at the former president’s health care records out of curiosity. Earlier this year, Boston-based Brigham and Women’s Hospital repeatedly faxed patient admission sheets to a nearby bank by accident. The faxing continued even after bank employees warned the hospital. In Hawaii, Wilcox Memorial Hospital lost a thumb drive containing personal information on every one of its 120,000 current and former patients.

None of the institutions involved in these incidents has been fined under the highly touted medical privacy law, known as HIPAA (Health Insurance Portability and Accountability Act).

“Since our compliance effort began we have resolved thousands of cases through corrective actions,” said a spokesman for the agency, who asked not to be identified because of agency policies. “We believe it’s inappropriate and misleading to focus exclusively on lack of monetary penalties as a measure of the degree of compliance.”

A process of informal resolutions from the agency, spurred by consumer complaints, has been well-received by health providers, who quickly amend their faulty processes, he said. “Those resolutions bring the benefits of the privacy rule to consumers much more quickly than the adversarial process of civil monetary penalties,” the spokesman said. “It encourages cooperation.”

I’d like to ask two questions:

First, this means complaints are dropping, right? If those corrective actions are a measure of compliance, the number of complaints should be going down.

Second, what would it take to get the agency to fine people?

PS: I’ve covered this before, in “Medical ‘Privacy’ ‘Law.’” 3 Monkeys photo by xericx.

Congratulations to Counterpane and Bruce Schneier

Even though Chris got the news before me, I wanted to add my congratulations. I was involved in Counterpane very early, and made the choice to go to Zero-Knowledge Systems. I stayed involved on the technical advisory board, and was consistently impressed by the quality of the many Counterpane employees and executives who I met. I had to leave the TAB when I joined Microsoft, but, regardless, I’m really happy for everyone involved.

Long Term Impact of Youthful Decisions

There’s a fascinating article in the New York Times from last week, “Expunged Criminal Records Live to Tell Tales,” about how companies like ChoicePoint, which collect and sell public records, don’t pick up orders to expunge those records.

I didn’t have much to add, and figured the Times doesn’t need me to pimp their articles (they get a few more readers each day than we do), so I let it alone.

Then I saw Gunnar Peterson discuss “Brian Chess on Evolving Risk Models:”

When a company starts its life it wants to take on as much risk as it possibly can, do something hard and prove it in the marketplace. If it is not too risky then a big company may take you out or there may be no market. Over time a successful company’s market risk should go down as it gains market share.

Where this becomes interesting from a security standpoint is that early in the company’s lifecycle, the business has high market risk but little security risk; there is not much in the way of assets to target. But over time, as the business gains market share, its security risks grow. This puts security in a very interesting position, where they have to make up for a lot of lost time. Even if the decisions to delay security made sense at the time, the risk profile has readjusted: more mature businesses that are established in the market have relatively little residual market risk, while at the same time taking on more and more security risk. In general this means the code, the config, the data and the identity architectures all must play catch-up to deal with the risk profile over time.

These design and implementation choices also live to tell tales. I expect over the next few years, a rise of highly effective testing tools will act as a force multiplier for elite researchers, making it less and less possible to expunge evidence or records of security choices made. We’re going to have to start asking questions about security activity during the procurement process. Think of it as background checks for your software.

Contactless Credit Cards Cracked

Well, calling it “cracked” implies encryption or some semblance of security, of which there is none, according to the New York Times. In “Researchers See Privacy Pitfalls in No-Swipe Credit Cards” we learn that a team of folks from UMass Amherst and EMC/RSA tested a small batch of RFID credit cards from Amex, Visa and MasterCard, and found that they were all susceptible to skimming attacks that revealed a variety of information, including the cardholder’s name, the complete credit card number, and the expiration date. Though some cards did some obfuscation of the data, the cards were discovered to reuse strings within a short period of time.

One choice bit from the article is MasterCard’s response:

Mr. Kranzley said the MasterCard-issuing banks decided how much security they wanted to implement, but said that with 10 million of the company’s chip-bearing cards on the market, some 98 percent of them used the highest standards.

It’s ever so comforting that MasterCard advertises how secure the technology is, but then leaves the implementation up to its member banks. I guess it’s just as well that I don’t carry a MasterCard; now I don’t need to try to convince my bank to tell me what features they did or didn’t implement. This does make me worry about my Visa, though…

For those who want more detail, the Times has kindly posted the team’s submission to Financial Cryptography 2007, and a technical report as well.

A Very Silly Idea: #privacy, and


With recent data leaks at AOL, governments seeking information from Google on its users, and no simple user privacy solutions available, a standard for empowering user search privacy has finally been proposed. is spearheading a search privacy revolution with its proposed #privacy standard. Our proposal is that the #privacy flag could be added to the end of searches by users to tell the search engine ‘don’t track this query.’ In response, the search engine should not track the user by IP address or cookie, and the query should not be made public in keyword tools.

This is silly on a number of levels:

  1. It propagates the simplistic “opt-in/opt-out” thinking that the US marketing industry has been promulgating for decades. Look where that thinking has taken us.
  2. It defaults all queries to opt-in, implied by absence of an opt-out. Privacy should be a default, and the “right” way to implement this would be with #trackthis.
  3. It will be prone to user error (typos) and forgetting. It offers no way to, say, set a privacy cookie. Even Doubleclick does that.
  4. Implementation is left as an exercise for the search engines, who are supposed to both magically not track your queries, and magically track them if you might be violating a law. (I say magically because I have some understanding of how web logs actually work.)
  5. For some remarkable reason, no search engine has actually bothered to comment on the proposal. Certainly, no one has accepted it yet. So why am I blogging about it?
  6. Really, this idea is one level above an idea I had at the pub last night. Unfortunately, as it turns out, goats are expensive, and probably won’t walk on treadmills. It’s a good thing I sobered up before setting up a web site.
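
The point in item 4 about web logs deserves a concrete illustration. Here's a minimal sketch, using a hypothetical log line in the typical Apache “combined” format, of why the proposal puts the cart before the horse: the web server writes the client IP and the full request, opt-out marker and all, to disk before any application code gets a chance to honor the flag.

```python
import re

# A hypothetical access-log line in Apache "combined" format, as a
# search front end would typically record it before the application runs.
log_line = (
    '203.0.113.7 - - [01/Nov/2006:12:34:56 +0000] '
    '"GET /search?q=embarrassing+query+%23privacy HTTP/1.1" 200 5120 '
    '"-" "Mozilla/5.0"'
)

# Minimal parser: pull out the client IP, the method, and the request path.
match = re.match(r'(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+)', log_line)
ip, method, path = match.groups()

# The opt-out marker arrives as part of the query string, so the IP and
# the full query are already sitting on disk together by the time any
# software could decide "not to track" them.
print(ip)    # 203.0.113.7
print(path)  # /search?q=embarrassing+query+%23privacy
```

To "not track" such a query, the search engine would have to scrub its own logs after the fact, which is exactly the behavior item 4 calls magical.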

(Article via David Fraser at “Search Engine Privacy Standard Proposed,” “Framed!” photo by Pheonix Lament.)

Diebold goes open source

Well, not intentionally.

Seems that multiple versions of source code (including the one used to run the 2004 primaries in Maryland) were delivered anonymously to a former legislator who has been critical of Diebold.

Note that this is not the same source examined by Avi Rubin, et al., and found wanting from a security perspective.

The Baltimore Sun has more.