Of Parliaments, Dukes and Queens

map.jpg

Four interesting stories recently, all having to do with the ancient relationship between a sovereign and a parliament, or the relationship of hereditary rulership to democracy. I secretly admire the emergent forms of government which have proven stable despite their chaotic origins. I’m fascinated by these imperfectly republican nations like Canada and the United Kingdom, where the assent of the Queen to legislation is still required. And for our Canadian and UK readers who think that isn’t really relevant…well, take note of what happened in Luxembourg.

The Grand Duke of Luxembourg had the temerity not to rubber-stamp a bill (on euthanasia). Radio Netherlands is reporting that a mere parliamentary committee has written him out of the process entirely. It’s not clear to me who the Prime Minister of Luxembourg is now prime minister to, but that’s a problem for the Luxembourgers.

Meanwhile, on December 4th, the Canadian “Governor General agrees to suspend Parliament until January.” Formally, proroguing them. I sort of like the idea that someone in a position of authority has declared all of Canada’s parliamentarians to be rogues, but the G.G. is usually understood to be a ceremonial role. It turns out that’s not entirely the case, although her action was unprecedented. (The Wikipedia article on Governor General is unsurprisingly good – enjoy the section on controversies; the one on the “2008 Canadian parliamentary dispute” is not yet well organized.) Still, the G.G. took decisive action to prevent the government from falling.

I’m a little disappointed that Stéphane Dion didn’t call for a meeting of Parliament regardless of the prorogation, and vote Harper out. That would have been quite the republican act, and given Americans some fine, and highly confusing, chaos to observe. (I’m using republican in the sense of republicans versus monarchists, of course.)

At just about the same time, on Sark, the Chief Pleas have been pleased to replace membership by land ownership with democracy, which promptly elected them back into office.

Finally, in Britain, the Crown has arrested a member of Parliament to nary a whisper. As the Economist mentions, such acts once led to civil war in England. (“England had a civil war?”)

Despite my dislike of monarchies, I don’t want to forget that historically, legislatures wrested power from those monarchs. There’s both value and risk in such a balance, as Madison wrote:

In a government where numerous and extensive prerogatives are placed in the hands of an hereditary monarch, the executive department is very justly regarded as the source of danger, and watched with all the jealousy which a zeal for liberty ought to inspire. In a democracy, where a multitude of people exercise in person the legislative functions, and are continually exposed, by their incapacity for regular deliberation and concerted measures, to the ambitious intrigues of their executive magistrates, tyranny may well be apprehended, on some favorable emergency, to start up in the same quarter. But in a representative republic, where the executive magistracy is carefully limited; both in the extent and the duration of its power; and where the legislative power is exercised by an assembly, which is inspired, by a supposed influence over the people, with an intrepid confidence in its own strength; which is sufficiently numerous to feel all the passions which actuate a multitude, yet not so numerous as to be incapable of pursuing the objects of its passions, by means which reason prescribes; it is against the enterprising ambition of this department that the people ought to indulge all their jealousy and exhaust all their precautions.

I’ll have a little more to say about the indulgence of that jealousy shortly.

In the map, Sark is marked by the pin. Canada is not shown.

As easy as dialing a phone

People often make the claim that something is “as intuitive as dialing the phone.”

As I was listening to “Dave Birch interviewing Ben Laurie,” I was reminded of this 1927 silent film:

how to dial the telephone.jpg

Ben commented on people having difficulty with the CardSpace user interface, and on it not being as intuitive as using your email address as a login identifier.

Anyway, a fascinating interview. Worth a listen, even if it takes twice as long as learning what a dial tone is.

Working Through Screens

Jacob Burghardt has a very interesting new ebook, “Working Through Screens.”

If one was to summarize the status quo, it might sound something like this: when it comes to interactive applications for knowledge work, products that are considered essential are not always satisfactory. In fact, they may be deeply flawed in ways that we commonly do not recognize given our current expectations of these tools. With our collective sights set low, we overlook many faults.

Unless knowledge workers are highly motivated early adopters that are willing and able to make use of most anything, their experiences as users of interactive applications can vary drastically. These differences in experience can largely depend on the overall alignment of an individual’s intentions and understandings with the specifics of a tool’s design.

Poorly envisioned knowledge work applications can … present workers with confusing data structures and representations of information that do not correlate to the artifacts that they are used to thinking about in their own work practices.

I’m only a little ways into the book, but a great deal of what he says resonates with me. Much of the problem I saw with previous-generation threat modeling tools was that they were created by and for those ‘highly motivated early adopters,’ and then delivered to people who were not used to thinking about their software from the perspective of assets and entry points. (Thus the third excerpt.) In creating the v3 SDL Threat Modeling Tool, I struggled with a lot of these issues.

If you encounter problems like this, it’s well worth investing some time in “Working Through Screens.”

Via Information Aesthetics.

Do Security Breaches Cost Customers?

Adam Dodge, building on research by Ponemon and Debix, says “Breaches Cost Companies Customers,” and Alan Shimel dissents in “Do data breaches really cost companies customers?”

Me, I think it’s time we get deeper into what this means.

First, the customers. Should they abandon a relationship because the organization has a security problem? To answer this, we first need to look at the type of organization. For governmental organizations, it’s very hard. They won’t let you go, and even if they do, they won’t destroy the dossier about you.

For regulated entities, they generally may not delete the information they collected for some number of years (the period varies, but is always sufficient for them to lose control of the data again).

For unregulated entities, you can’t (in the US) ask them to delete the database record either.

So for most breaches, the only value to abandoning the relationship is to stop paying the company. Which is a reasonable bit of retribution, but doesn’t actually add to security, and may subtract from it. It could subtract because (assuming you replace the service you were getting) there’s now an additional dossier about you.

Second, what’s the discrepancy? Why do 30% of customers report having closed a relationship, when Ponemon’s own numbers show a range of 2-7%? Three hypotheses spring to mind.

  1. Consumers are confused or lying. This would only make sense if you think the American people are idiots: the sort of folks who thought Iraq had chemical weapons in 2002, and who buy books titled “neurosurgery for dummies.”
  2. Consumers are right, and are closing one of several relationships. All those numbers could be right, if consumers are getting more notices than we think. This would be one of many problems with our volunteer-based systems for tracking breaches.
  3. The discrepancy is really notices sent versus notices received. That is, people are not opening the “Dear John Doe” letters.
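Hypothesis 2 is easy to sanity-check with a bit of arithmetic: if each notice carries a small, independent chance of prompting a closure, a 2-7% per-notice rate compounds quickly across several notices. A minimal sketch, where the notice counts are my own illustrative assumptions, not figures from the Ponemon or Debix research:

```python
# Reconciling a 2-7% per-notice closure rate with 30% of consumers
# reporting a closure: if a consumer receives several notices, each an
# independent chance to close a relationship, the probabilities compound.
# The notice counts below are illustrative assumptions, not survey data.

def fraction_closing_any(per_notice_rate: float, notices: int) -> float:
    """Fraction of consumers who close at least one relationship."""
    return 1 - (1 - per_notice_rate) ** notices

# A 5% per-notice rate and seven notices over time already reaches ~30%.
print(round(fraction_closing_any(0.05, 7), 3))  # 0.302
print(round(fraction_closing_any(0.02, 7), 3))  # 0.132
```

On these (assumed) numbers, the survey results and the per-breach figures don’t have to conflict at all.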

Privacy Rights & Privacy Law

First, the European Court of Human Rights has ruled that the UK’s “DNA database ‘breach of rights’:”

The judges ruled the retention of the men’s DNA “failed to strike a fair balance between the competing public and private interests,” and that the UK government “had overstepped any acceptable margin of appreciation in this regard”.

The court also ruled “the retention in question constituted a disproportionate interference with the applicants’ right to respect for private life and could not be regarded as necessary in a democratic society”.

The Police are aghast that they will not be able to do whatever it takes to solve crimes. Similar past rulings have involved forced confessions, indefinite detention, and a presumption that the accused are guilty until proven innocent. They have put forth figures about how many criminals have been caught. The BBC also reports:

The court says the figures appear “impressive” – but on closer analysis it acknowledges, as the Nuffield Council on Bioethics and GeneWatch UK also have, that they are unconvincing.

The Supreme Court of Newfoundland has ruled that airport searches may not be used for blanket law enforcement purposes:

…a reasonable expectation of privacy with respect to the contents of his luggage, save and except for searches by [airport] personnel for items that could be used to jeopardize the security of an aerodrome or aircraft. The drugs and money found in his baggage, which are the subject of this proceeding, are not such items and thus Brian Crisby had a reasonable expectation of privacy.

This is in stark contrast to the US, where John Perry Barlow was arrested when they found small amounts of drugs in his checked luggage. His appeal was denied, although pages related to that seem to have hit the memory hole.

What’s relevant about this is the difference between Canada and the EU and the US. Privacy law in the US is in disarray. At a Constitutional level, the 4th amendment protections have been utterly eviscerated. At a broader level, privacy laws seem to emerge after bad cases.

The result is expensive investment in poor protection. We can and should do better. It would be possible to put in place a data protection or privacy law which protects privacy and respects the rights of free speech. The key is to recognize the role of the government in enabling correlation and linkage. Privacy law should kick in (hard) when the government is involved, either as the gatherer or guarantor of information. That is, if I have to give my legally documented name or my SSN, I should get strong protection. If I can sign up as Mickey Mouse, then privacy law shouldn’t apply.

However we do it, we need a sane privacy law for the US.

Two Buck Barack

So the New York Times is breathless that “Obama Hauls in Record $750 Million for Campaign.” A lot of people are astounded at the scale of the money, and I am too. In a long, hard campaign, he raised roughly $2.50 per American, and spent slightly less than that.

Unusually, he ended his campaign not in debt, but with a small surplus. Everyone and their brother is now grubbing after that, according to the Times article. If we had a campaign finance system with transparency and accountability for donations, we would likely see spending levels like this more often, and we might well see a broader range of interesting candidates emerge and get voters engaged again.

The reality is while $750 million is a lot of money, it’s also a surprisingly small amount of money. For comparison, the 2008 Federal budget was 2.9 trillion dollars, or roughly 3900 times larger than the budget Obama just oversaw. It’s also only 1/20th of the amount we’re spending to keep Rick Wagoner in a job.
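The comparisons above are easy to verify with a back-of-the-envelope check (the 300 million population figure is my rough approximation for 2008):

```python
# Back-of-the-envelope check of the scale comparisons in the text.
campaign = 750e6          # reported Obama campaign fundraising, in dollars
federal_budget = 2.9e12   # FY2008 US federal budget, in dollars
population = 300e6        # rough 2008 US population (an approximation)

print(round(campaign / population, 2))    # 2.5 dollars per American
print(round(federal_budget / campaign))   # 3867, i.e. roughly 3900x
```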

Previously: “Obama vs McDonalds,” “Already Donated the limit,” and way back in 2004, “Shut down these shadowy groups?”

Eric Drexler blogging

At Metamodern.com. Way cool. I look forward to what he has to say.

Unfortunately, one of his early posts falls into the trap of believing that “Computation and Mathematical Proof” will dramatically improve computer security:

Because proof methods can be applied to digital systems, and in particular, will be able to verify the correctness (with respect to a formal specification) of compilers [pdf], microprocessor designs [pdf] (at the digital-abstraction level), and operating system microkernels (the link points to a very important work in progress). Software tools for computer-assisted proof are becoming more usable and powerful, and they already have important industrial-strength applications [pdf]. In a world which increasingly relies on computers for everything from medical devices to national governance, it will be important to get these foundations right, and to do so in a way that we can trust. If this doesn’t seem important, it may be because we’re so accustomed to living with systems that have been built on foundations made of mud, and thinking about a future likewise based on mud. All of us have difficulty imagining what could be developed in a world where computers didn’t crash, were guaranteed to be immune from virus attack, and could safely download code written by the devil himself, and where crucial pieces of software could be guaranteed to not leak data.

The trouble with this approach is that you demonstrably can’t make a useful computer which is immune from virus attack. The proof: a useful computer is one on which I can install software. The user of the computer will have to make a decision about a piece of software. Con men and fraudsters will continue to convince people to do things which are obviously not in their best interests.

Therefore, however well proven the operating system is, you can’t usefully guarantee computers to be free of viruses, because computers are useful when they are generative and social.

That’s implied by his parenthetical “with respect to a formal specification.”

Similarly, the data may be guaranteed not to leak, but can also be guaranteed to be shown to people. (Otherwise, it’s not useful.) Those people can and will leak it. (Ross Anderson’s work on medical systems demonstrates this with a higher level of formality.)

This is not to say that formal methods won’t provide useful results on which we can build. They have, and will continue to in those areas where the problems don’t involve humans, our decisions, or our societies. But human beings are not rational result maximizers who adhere to computer security policies, and all the math in the world won’t change that.

DataLossDB announces awesome new feature

The Data Loss Database, run by the Open Security Foundation, now has a significant new feature: the inclusion of scanned primary source documents.
This means that in addition to being able to determine “the numbers” on an incident, one can also see the exact notification letter used, the reporting form submitted to state government, cover letters directed at (for example) an attorney-general, and the like. Importantly, all the documents have been OCRed, making it possible to search within them.
There are currently several hundred documents in the archive, most of which arrived in the last few days. In order to link the docs to existing breach records quickly, the folks at DataLossDB latched onto a key insight: this is an embarrassingly parallelizable problem. Therefore, a screen is provided to do a bit of matching of scanned docs to existing breach entries. For those without research assistants, crowdsourced data entry is the way to go :^).
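To illustrate why the matching problem parallelizes so well, here is a hypothetical sketch: each (document, volunteer) matching task is independent, so tasks can be handed out in any order and reconciled by simple vote counting. None of these names or the two-vote consensus rule come from DataLossDB; they are my assumptions for illustration:

```python
# Hypothetical sketch of crowdsourced document-to-breach matching.
# Each task is independent (embarrassingly parallel), so volunteers can
# work on any document in any order; agreement among volunteers is then
# reconciled by counting votes. Names and thresholds are illustrative.

from dataclasses import dataclass, field

@dataclass
class MatchTask:
    document_id: int
    candidate_breaches: list          # breach IDs a volunteer chooses among
    votes: dict = field(default_factory=dict)  # breach_id -> vote count

    def record_vote(self, breach_id: int) -> None:
        """Record one volunteer's choice of matching breach."""
        self.votes[breach_id] = self.votes.get(breach_id, 0) + 1

    def consensus(self, threshold: int = 2):
        """Return a breach ID once enough volunteers agree, else None."""
        for breach_id, count in self.votes.items():
            if count >= threshold:
                return breach_id
        return None

task = MatchTask(document_id=101, candidate_breaches=[7, 12, 31])
task.record_vote(12)
task.record_vote(12)
print(task.consensus())  # 12
```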
If you’re the type of person who is into the details of breaches — and who isn’t? — you should check this out.
Full disclosure: I contributed many of the documents in the archive, and am extremely pleased at what has come of this. The DataLossDB interface is vastly superior to even the vaporware version of my site.

The Costs of Fixing Problems

I enjoyed reading Heather Gerken’s article, “The Invisible Election.”

I am one of the few people to have gotten a pretty good view of the invisible election, and the reality does not match the reports of a smooth, problem-free election that have dominated the national media. As part of Obama’s election protection team, I spent 18 hours working in the “boiler room,” the spare office where 96 people ran national election day operations. Obama’s election protection efforts, organized by Bob Bauer, were more generously funded, more precisely planned, and better organized than any in recent memory. Over the course of the day, thousands of lawyers, field staff, and volunteers reported the problems they were seeing in polling places across the country. A sophisticated computer program allowed the lawyers and staffers in the boiler room to review these reports in real time.

[…list of problems elided…]

I draw three lessons from the time I spent watching the invisible election unfold, all of which point to the need to make the invisible election visible to the public, to policymakers, and to election administrators themselves.

First, it is essential that the public see the invisible election. We are never going to get traction on reforming our election system until we have a means of making these problems visible to voters. Virtually every media outlet has reported that the election ran smoothly.

First, I’m a huge fan of transparency, and I’m not going to advocate sweeping anything under the rug. But I do question whether we really need to draw attention to the problems with voting systems before we have consensus on what to do about them.

See, a working democracy is a tremendously valuable asset. It takes years to start up, and (when working) gives us a way to transition between legitimate governments. The thousand years of European wars of succession didn’t allow for much liberty or wealth creation. Democracy has huge value, and it’s under threat. In 2000, we had a real risk of a crisis. If Al Gore had contested the 5-4 vote in Washington, we would have had no real way to address it and choose a legitimate next leader. Gore understood this, which is why he was clear that we all had to respect the decision, “for the strength of our democracy.” Despite the damage of the Bush years, it was the right call, because a working democracy is a fragile thing. Trust that the election machinery has gotten the right result, and will get the right result next time, is an absolutely vital part of the legitimacy of government. Risking it should not be undertaken lightly.

I’ve been at occasional meetings between voting officials and computer scientists for about eight years now. There’s a tremendous gap. The two groups don’t understand each other well, although folks like Avi Rubin are working really hard to bridge that gap. Until there’s a rough political and technological consensus that’s in line with the Help America Vote Act or its replacement, we should be cautious about undercutting the system we have now.

I also wanted to juxtapose a little with Ryan Singel’s story, “Chertoff: We’re Closing that Boarding-Pass Loophole.” There are now scanners which read a bar code off your boarding pass to make sure you haven’t altered it, and the TSA folks can match your ID to the boarding pass. This was known for years, but driven heavily by Chris Soghoian’s make-your-own-boarding-pass toy.

Between the airline software, the scanners and the training, we’ve probably spent tens of millions of dollars to fix the loophole. (Oddly, I haven’t been able to find a statement of the costs.) But the truth is, it’s a silly thing to fix. Good fake ID is easy to get, and will remain easy to get unless we choose a different balance between terrorism prevention, immigration and kids drinking.

Chris has some other entertaining discoveries, which I’m hoping he keeps to himself. I think they’re worth not fixing. That is, the cost of the fix is too high. There are better things to spend money on.

The next few years are going to be rough for the United States. The costs of the Iraq war, our broken health care system, the financial melt-down, the bursting of the housing bubble, infrastructure that’s starting to fail, and global climate change are all going to be competing for a slice of budgets while revenues are falling.

We need to ask ourselves which problems we need to fix, and what the costs of fixing them are really going to be. Not every problem needs a fix, and not every problem that needs fixing needs fixing now.

You versus SaaS: Who can secure your data?

In “Cloud Providers Are Better At Securing Your Data Than You Are…” Chris Hoff presents the idea that it’s foolish to think that a cloud computing provider is going to secure your data better than you do. I think there are some complex tradeoffs to be made. Since I sort of recoiled at the idea, let me start with the cons:

  1. The cloud vendor doesn’t understand your assets or your business. They may have an understanding of your data or your data classification. They may have a commitment to various SLAs, but they don’t have an understanding of what’s really an asset or what really matters to your business in the way you do. If you believe that IT doesn’t matter, then this doesn’t matter either.
  2. The cloud vendor doesn’t have to admit a problem. They can screw up and let your data out to the world, and they don’t have to tell you. They can sweep it under the rug.

In the middle, slightly con:
It’s hard to evaluate the security of a cloud vendor. Do you really think a SAS-70 is enough? (Would you tell your CEO, “we passed our SAS-70, nothing to worry about”?) This raises transaction costs, but that may be balanced by the first pro:

  1. Cloud vendors involve a risk transfer for CIOs. A CIO can write a contract that generates some level of risk transfer for the organization, and more for the CIO. “Sorry, wasn’t me, the vendor failed to perform. I got a huge refund on cost of operations!”
  2. Cloud vendors have economies of scale. Both in acquiring and operating the data center, a cloud vendor can bring in economies of scale of operating a few warehouses, rather than a few racks. They can create great operational software to keep costs down, and that software can include patch rollout and rollback, as well as tracking and managing changes, cutting overall MTTR (mean time to repair) for security and other failures.
  3. Cloud vendors could exploit signaling to overcome concerns that they’re mis-representing security state. If a Cloud vendor contracted to publish all their security tickets some interval after closing them, then a prospective customer could compare their security issues to that of the Cloud vendor. Such a promise would indicate confidence in their security stance, and over time, it would allow others to evaluate them.

That last is perhaps a radical view, and I’d like to remind everyone that I’m speaking for the President-Elect and his commitment to transparency, not for my employer.
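The delayed-disclosure signal in that last item could be mechanically simple. A sketch, where the Ticket type and the 90-day embargo are my assumptions rather than any vendor’s actual policy:

```python
# Sketch of delayed disclosure of security tickets: publish each closed
# ticket only after a fixed embargo interval has elapsed. The Ticket
# type and the 90-day default are illustrative assumptions.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Ticket:
    ticket_id: int
    closed_on: date

def publishable(tickets, today, embargo_days=90):
    """Return the tickets whose embargo has elapsed as of `today`."""
    cutoff = today - timedelta(days=embargo_days)
    return [t for t in tickets if t.closed_on <= cutoff]

tickets = [
    Ticket(1, date(2008, 8, 1)),    # old enough to publish
    Ticket(2, date(2008, 11, 20)),  # still under embargo
]
for t in publishable(tickets, today=date(2008, 12, 1)):
    print(t.ticket_id)  # prints 1
```

The point of the design is the commitment itself: because everything eventually becomes public on a known schedule, a vendor can’t quietly misrepresent its security state for long.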