Over on my work blog, I asked:
I’m working on a paper about “Experiences Threat Modeling at Microsoft” for an academic workshop on security modeling. I have some content that I think is pretty good, but I realize that I don’t know all the questions that readers might have.
So, what questions should I try to answer in such a paper? What would you like to know about? No promises that I’ll have anything intelligent to say, but I’d love to know the questions you’re asking. So please. Ask away!
Comment here or there.
There’s an article in “destination CRM,” Who’s Really Calling Your Contact Center?
…the identity questions are “based on harder-to-steal information” than public records and credit reports. “This is much closer to the chest than a lot of the public data being used in other authentication systems,” she says, adding that some companies using public data include Acxiom, ChoicePoint, and LexisNexis. Higginson gives the example of asking someone the birth date of an individual who used to share an address with him. “There is no public data source to have a question like that answered,” Higginson says, arguing that it would take multiple documents to try and piece together exactly who the other individual is, where she lives now, verify that she did at one time share an address with the caller — and then still have to verify her birth date.
A couple of comments:
A company which deploys this sort of thing will lose me as a customer. As Debix points out, your real customer knows who they are. Involve them via multi-factor or multi-channel communications.
More generally, this seems symptomatic of a company that has lost sight of its customers. Who stops and thinks, “What our customers really want is to be interrogated; that will make them feel better”?
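To make the multi-channel suggestion above concrete, here is a minimal sketch of the alternative: push a one-time code over a channel the customer registered in advance, rather than quizzing them on semi-public facts. The customer object and send_message callback are hypothetical, invented for illustration.

```python
# Minimal sketch (assumed names): verify a caller by sending a one-time code
# over a channel the real customer already controls, e.g. a registered phone.
import hmac
import secrets

def start_verification(customer, send_message):
    """Generate a short-lived code and deliver it out-of-band."""
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
    send_message(customer.registered_phone, f"Your verification code is {code}")
    return code  # the server keeps this; a fraudster on the phone never sees it

def check_verification(expected_code, offered_code):
    """Constant-time comparison so the check itself doesn't leak timing."""
    return hmac.compare_digest(expected_code, offered_code)
```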
I know there are a lot of people who prefer text to audio. You can skim text much faster. But there are also places where paper or screens are a pain (like on a bus, or while driving). So I’m excited that the Silver Bullet Podcast does both. It’s a huge investment in addressing a variety of use cases.
That’s all to say that you can now read the text of Gary McGraw’s interview of me in PDF form: Adam Shostack on Gary McGraw’s Silver Bullet podcast.
If you missed it, the audio is available at the Silver Bullet site. (Fixed link to point to Silver Bullet.)
Congratulations to Arvind Narayanan and Vitaly Shmatikov! Their paper, “Robust De-Anonymization of Large Sparse Datasets,” has been awarded the 2008 Award for Outstanding Research in Privacy Enhancing Technologies. My employer has a press release which explains how they re-identified data which had been stripped of identifiers in the Netflix dataset. In their acceptance remarks, they mentioned the relevance to the Google-Viacom discussions over how much data would be given to Viacom.
Photo: Nikita Borisov. Shown, from left to right, are Michelle Chibba of the Ontario Privacy Commissioner’s Office, presenting the award, Arvind, and Vitaly; Matthew Wright, chair of the award committee, is in the background.
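For readers curious how such re-identification works, here is a heavily simplified sketch of the paper’s core idea, not the authors’ code: score every record in the released dataset against a few approximate (movie, rating) pairs known about a target, and accept the best match only if it clearly stands out from the runner-up. The data structures and threshold are invented for illustration.

```python
# Simplified sketch (assumed data shapes): dataset maps record-id -> {movie: rating},
# aux is the attacker's partial knowledge about one person, {movie: rating}.
import statistics

def score(aux, record):
    """Count auxiliary ratings that a record approximately matches."""
    return sum(1 for movie, rating in aux.items()
               if movie in record and abs(record[movie] - rating) <= 1)

def reidentify(aux, dataset, eccentricity=1.5):
    """Return the best-matching record id, or None if the match isn't distinctive."""
    scores = {rid: score(aux, rec) for rid, rec in dataset.items()}
    ranked = sorted(scores.values(), reverse=True)   # assumes at least two records
    best = max(scores, key=scores.get)
    spread = statistics.pstdev(scores.values()) or 1.0
    # Only claim a match when the top score clearly separates from the rest.
    if (ranked[0] - ranked[1]) / spread >= eccentricity:
        return best
    return None
```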
What made this particular work different was that the packets we captured came through a Tor node. Because of this difference, we took extreme caution in managing these traces and have not and will not plan to share them with other researchers.
Response to Tor Study
I won’t get into parsing what “have not and will not plan to share” means, and will simply assume it means “haven’t shared, and will not share”. So, what we have here are data that are not personally identifying, but are sensitive enough that they cannot be shared, ever, with any other researchers.
What is it about the traces that makes them sensitive, then?
Given this policy, how can this work be replicated? How can it be checked for error, if the data are not shared with anyone?
Bonus rant, unrelated to the Tor paper
I am growing increasingly perturbed at the hoarding of data by those who, as scientific researchers, are presumably interested in the free flow of information and the increase of knowledge for the betterment of humanity.
Undoubtedly, not everyone keeps this information to themselves out of a base motive, such as cranking out as many papers as possible before giving anyone else (especially anyone who might be gunning for that same tenure-track position) a shot, but some who play this game no doubt do. It’s unseemly and ultimately counterproductive.
It’s funny — the infosec community just went through an episode where a respected researcher said, in effect, “trust me — I found something important, but I can’t give you the information to verify my claim, lest it be misused by others less noble than we”, and various luminaries took it to be a sign of lingering institutional immaturity. Perhaps, as EE/CS becomes increasingly cross-pollinated with the likes of sociology, psychology, law and economics the same observation will hold. If so, we should see it coming and do the right things. This is one they teach in pre-school: “Sharing is Caring”.
As an example of what could be done, consider this and this.
Several weeks ago, in “A Question of Ethics“, I asked EC readers whether it would be ethical “to deliberately seek out files containing PII as made available via P2P networks”. I had recently read an academic research paper that did just that, and was left conflicted. Part of me wondered whether a review board would pass such a research proposal, or whether the research in the paper was even submitted for review. Another part told me that the information was made publicly available, so my hand-wringing was unwarranted. In the back of my mind, I knew that as information security researchers increasingly used the methods of the social sciences and psychology these ethical considerations would trouble me again.
Through Chris Soghoian’s blog post regarding the ethical and legal perils possibly facing the authors of a paper which describes how they monitored Tor traffic, I realized I was not alone. Indeed, in a brief but cogent paper, Simson Garfinkel describes how even seemingly uncontroversial research activities, such as doing a content analysis on the SPAM one has received, could run afoul of existing human research subject review guidelines.
Garfinkel argues that strict application of rules governing research involving human subjects can provide researchers with incentives to actively work against the desired effect of the reviews. He further suggests that
society would be better served with broader exemptions that could be automatically applied by researchers without going to an IRB [Institutional Review Board].
My concern at the moment is with the other side of this. I just read a paper which examined the risks of using various package managers. An intrinsic element of the research behind this paper was setting up a mirror for popular packages under false pretenses. I don’t know if this paper was reviewed by an IRB, and I certainly don’t have the expertise needed to say whether it should have been allowed to move forward if it was. However, the fact that deception was used made me uneasy. Maybe that’s just me, but maybe there are nuances that such research is beginning to expose and that we as an emergent discipline should strive to stay on top of.
[Update: The researchers whose Tor study was examined by Soghoian have posted a portion of a review conducted by the University of Colorado:
Based on our assessment and understanding of the issues involved in your work, our opinion was that by any reasonable standard, the work in question was not classifiable as human subject research, nor did it involve the collection of personally identifying information. While the underlying issues are certainly interesting and complex, our opinion is that in this case, no rules were violated by your not having subjected your proposed work to prior IRG scrutiny. Our analysis was confined to this IRG (HRC) issue.
This conclusion is in line with Richard Johnson’s comment below, that this research was not on people, but on network traffic.]
Vox Libertas, a blogger at the Daily Kos has written an analysis of the new US FISA law in his article, “I think I understand the FISA bill. Do I?”
Vox Libertas has taken an approach that I can appreciate. On the one hand, many people are unhappy with the telecom immunity. I’m one of them. But people I respect are also saying that it’s a good compromise, and compromise means you don’t get everything you want.
Vox Libertas goes to the trouble of (shock, horror) reading the primary sources and explaining what’s in the new FISA bill. He also shows his own sources.
No matter what you think, this is worth reading.
The European Court of Human Rights has ordered the Finnish government to pay out €34,000 because it failed to protect a citizen’s personal data. One data protection expert said that the case creates a vital link between data security and human rights.
The Court made its ruling based on Article 8 of the European Convention on Human Rights, which guarantees every citizen the right to a private life. It said that it was uncontested that the confidentiality of medical records is a vital component of a private life.
The Court ruled that public bodies and governments will fall foul of that Convention if they fail to keep data private that should be kept private.
The woman in the case did not have to show a wilful publishing or release of data, it said. A failure to keep it secure was enough to breach the Convention.
“Data blunders can breach human rights, rules ECHR” on Pinsent Masons Out-Law blog.
I’m getting ready to leave for the 2008 Privacy Enhancing Technologies Symposium. I love this event, and I’m proud to have been involved since Hannes Federrath kicked it off as a workshop on design issues in anonymity and unobservability.
I’m also happy that Microsoft has continued to sponsor an award for outstanding research in Privacy Enhancing Technologies. I participated in paper selection before I took a job at Microsoft, but we are hands-off as to the recipient. The award goes where the research community thinks it should go.
Finally, I’m happily reminiscing because on my last trip to Belgium, I met Andrew Stewart, leading, along a very chaotic road, to us writing The New School of Information Security.
I’ll be taking a couple of days off to get over jet lag and enjoy some fine beer and frites, and my blogging will be light.
Photo: some delicious beer, Awynhaus.
I have an article in the latest MSDN magazine, “Reinvigorate your threat modeling process:”
My colleague Ellen likes to say that everyone threat models all the time. We all threat model airport security. We all threat model our homes. We think about threats against our assets: our families, our jewelry, and our sentimental and irreplaceable photographs (well, those of us old enough to have photos that never existed in digital form do). We model threats based on architecture: there’s a wall here, a picture window there, and an easily climbed tree that we can use when we forget our keys. And we model threats based on attackers. We worry about burglars and kids falling into pools. We also worry about the weather, be it earthquakes, snow, or tornadoes.
If I wanted to sound like a management consultant, I’d say you employ a mature, multi-dimensional assessment process, with a heavy reliance on heuristics and low reproducibility across instances. At the same time, it’s likely you won’t have thought of everything or implemented defenses against every possible attack. It’s very unlikely you have a home defense management plan or have ever run a penetration test against your home.
There’s a lot in there talking about how and why some threat modeling methods became “heavy” and what to do about it. Underlying that is the start of a way of thinking about threat modeling as a family of related activities, and some ways of breaking that down. In particular, there’s a breakdown into asset-centric, architecture-centric, and attacker-centric threat modeling, which I think is a useful step forward.
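As a toy illustration of the architecture-centric style, here is a sketch that walks the elements of a data-flow diagram and enumerates STRIDE threat types per element. It is a generic sketch under common STRIDE-per-element conventions, not the process the article describes; the element names and diagram shape are invented.

```python
# Toy architecture-centric enumeration: for each DFD element, list the STRIDE
# threat types conventionally considered for that element type.
ELEMENT_THREATS = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process":         ["Spoofing", "Tampering", "Repudiation",
                        "Information disclosure", "Denial of service",
                        "Elevation of privilege"],
    "data_store":      ["Tampering", "Information disclosure",
                        "Denial of service"],
    "data_flow":       ["Tampering", "Information disclosure",
                        "Denial of service"],
}

def enumerate_threats(diagram):
    """diagram: list of (name, element_type) pairs from a data-flow diagram."""
    for name, element_type in diagram:
        for threat in ELEMENT_THREATS.get(element_type, []):
            yield name, threat

# Example: a browser talking to a web front end backed by a database.
dfd = [("browser", "external_entity"), ("web app", "process"),
       ("orders db", "data_store"), ("browser to web app", "data_flow")]
for element, threat in enumerate_threats(dfd):
    print(f"{element}: consider {threat}")
```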
What works for you in threat modeling? What hasn’t worked that you needed to replace?
To start from the obvious, book publishers are companies, hoping to make money from the books they publish. If you’d like your book to be on this illustrious list, you need an idea for a book that will sell. This post isn’t about how to come up with the idea, it’s about how to sell it.
In a mature market, like the book market, you need some way to convince the publisher that thousands of people will buy your book. Some common ways to do this are to be the first or most comprehensive book on some new technology. You can be the easiest to understand. You can try to become the standard textbook. The big problem with our first proposal was that we wanted to write a book on how managers should make security decisions.
That book didn’t get sold. We might rail against the injustice, or we might accept that publishers know their business better than we do.
Problems with the idea include that there aren’t a whole lot of people who manage security, and managers don’t read a lot of books. (Or so we were told by several publishers.) We didn’t identify a large enough market.
So a proposal for a new book has to do two main things: first, identify a market niche in which your idea will sell, and second, convince the publisher that you can write. You do that with an outline and a sample chapter. Those are the core bits of a proposal. There are other things, and most publishers have web sites like Addison Wesley’s Write for us or Writing For O’Reilly. Think of each of these as a reason for some mean editor who doesn’t understand you to disqualify your book, and make sure you don’t give them that reason.
With our first proposal, we gave them that reason. Fortunately, both Jessica Goldstein (Addison Wesley) and Carol Long (Wiley) gave us really clear reasons for not wanting our book. We listened, and put some lipstick on our pig of a proposal.
Funny thing is, that lipstick changed our thinking about the book and how we wrote it. For the better.
Today on the Dataloss mailing list, a contributor asked whether states in addition to New Hampshire and Maryland make breach notification letters available on-line.
I responded thusly (links added for this blog post):
I know only of NH and MD. NY and NC have been asked to do it, but have no plans to. NJ won’t do it because the reports are held by the state police and not considered public. IN had that provision stripped from their revised law. I saw no evidence that ME has them on-line at the AG’s site. Unless I missed any, those are all the states with central reporting.
I personally have several hundred notices to NY and NC that I am slowly scanning and making available. Unfortunately, my site is off the net for probably a couple weeks.
A later response pointed out that Wisconsin publishes some data as well. Actually, so does New York, but it’s pretty measly.
I forgot to mention in my email that California also considered central reporting — including a web site — as part of an update to its breach law. We blogged about this at the time. I understand these features were cut because of lack of resources.
EC reader Iang made a perspicacious comment at the time:
At some stage we have to think about open governance being run by the people. That is, expect to see some quality control from open institutions, ones that arise for a need. E.g., blogs like this and other aggregators of info.
I am very happy to report that the Open Security Foundation yesterday announced just such a resource. The press release tells the story, but basically it’s crowd-sourcing information on breaches. I am very enthusiastic about getting my primary sources archive back on-line so that I can link with, and otherwise contribute to, this new DataLossDB.
There’s a huge amount of interesting stuff from a recent workshop on “Security & Human Behavior.” Matt Blaze has audio, and Ross Anderson has text summaries in the comments on his blog post.
Also, see Bob Sullivan, “How magic might finally fix your computer”