I was going to title this “Painful Mistakes: Torture, Boyd and Lessons for Infosec,” but then decided that I wanted to talk about torture in a slightly different way.
The Washington Post reports that “Detainee’s Harsh Treatment Foiled No Plots,” and the [UK Foreign & Commonwealth Office] “Finally Admits To Receiving Intelligence From Torture.” From the Post story:
When CIA officials subjected their first high-value captive, Abu Zubaida, to waterboarding and other harsh interrogation methods, they were convinced that they had in their custody an al-Qaeda leader who knew details of operations yet to be unleashed, and they were facing increasing pressure from the White House to get those secrets out of him.
The methods succeeded in breaking him, and the stories he told of al-Qaeda terrorism plots sent CIA officers around the globe chasing leads.
In the end, though, not a single significant plot was foiled as a result of Abu Zubaida’s tortured confessions, according to former senior government officials who closely followed the interrogations.
The torture committed in our names undermines our claim to moral superiority. It doesn’t demolish it completely. Intentional mass murder of civilians is worse, but in war, you don’t want to be having such arguments. You want to clearly have a right side and a wrong side, and torture usually sets you on the wrong side. Boyd laid out conflict as happening in a moral-mental-physical atmosphere, with the moral being the most important. If you don’t have a moral claim to rightness, then your side’s mental willingness to fight for the cause is subject to alienation through propaganda. (This is why Al Qaeda shows so many videos of Guantanamo, Abu Ghraib, etc.) More on this in Chuck Spinney’s “When Strategic ‘Genius’ is Mortal Blunder.”
So why do people commit acts of torture? It’s because they believe that it works, and under the ticking time bomb theory, it’s the lesser evil. That what counts is “why the President thinks he needs to do that.”
There are two arguments against torture, the moral and the practical. Both are outlined in the articles cited at the top. I’d now like to turn back to the idea of best practices.
Best practices are ideas that make intuitive sense: don’t write down your passwords. Make backups. Educate your users. Shoot the guy in the kneecap and he’ll tell you what you need to know.
The trouble is that none of these are subjected to testing. No one bothers to design experiments to see if users who write down their passwords get broken into more than those who don’t. No one tests to see if user education works. (I did, once, and stopped advocating user education. Unfortunately, the tests were done under NDA.)
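Designing such an experiment isn’t hard in principle. As a minimal sketch (with entirely hypothetical numbers — the function name and data are illustrative, not from any real study), here is how you might compare compromise rates between users who write down their passwords and those who don’t, using a standard two-proportion z-test:

```python
import math

def two_proportion_z(broken_a, n_a, broken_b, n_b):
    """Two-proportion z-test: do groups A and B differ in compromise rate?

    broken_a / broken_b: number of compromised accounts in each group
    n_a / n_b: total accounts observed in each group
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = broken_a / n_a, broken_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (broken_a + broken_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: group A wrote passwords down, group B did not.
z, p = two_proportion_z(broken_a=12, n_a=200, broken_b=9, n_b=200)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With numbers like these the p-value comes out well above 0.05 — in other words, the data wouldn’t support the “never write it down” rule either way, which is exactly the point: without running the test, intuition is all we have.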
The other trouble is that once people decide something is a best practice, they stop thinking about it critically. It might be because of the obedience to authority that Milgram demonstrated, or because they’ve invested effort and prestige in their solution, or because they believe the idea should work.
The next time someone suggests something because it’s a best practice, ask yourself: is this going to work? Will it be worth the cost?