There’s a story in USA Today, “Most fake bombs missed by screeners.” It describes how screeners find only 25% of hidden bombs at LAX, 40% at ORD, and 80% at SFO:
At Chicago O’Hare International Airport, screeners missed about 60% of hidden bomb materials that were packed in everyday carry-ons — including toiletry kits, briefcases and CD players. San Francisco International Airport screeners, who work for a private company instead of the TSA, missed about 20% of the bombs, the report shows. The TSA ran about 70 tests at Los Angeles, 75 at Chicago and 145 at San Francisco.
I could go on at length about how bad air travel has gotten, and how security theatre is crushing the travel and tourism industries in the US. Rather, I’d like to focus on the emergent chaos aspects of this story: the reality that even the TSA bureaucracy can’t impose uniform standards on airports, and why that would be a good thing, if only they could accept it.
Before I do, I want to comment that missing 75% of the bombs is probably okay. Very few airliners have ever been bombed in the US; I believe it’s fewer than 10 in history. So the real issue is not false negatives, where a screener misses a fake test bomb, but false positives, where a screener shuts down either someone’s day or the entire airport. Given that every single bomb smuggled past security at US airports last year was a fake, fake bombs are far more likely than real ones.
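The base-rate argument above can be made concrete with a little Bayes’ rule. All the numbers below are illustrative assumptions (not figures from the story): even a screener with SFO-like detection will produce alarms that are almost never real bombs, simply because real bombs are vanishingly rare.

```python
# Hedged sketch: how rare real bombs are dominates what an alarm means.
# All rates here are assumed for illustration, not measured values.

real_bomb_rate = 1e-8     # assumed chance a given bag holds a real bomb
detect_rate = 0.80        # assumed true-positive rate (an SFO-like screener)
false_alarm_rate = 0.01   # assumed chance of flagging an innocent bag

# Bayes' rule: P(real bomb | alarm)
p_alarm = (detect_rate * real_bomb_rate
           + false_alarm_rate * (1 - real_bomb_rate))
p_bomb_given_alarm = detect_rate * real_bomb_rate / p_alarm

print(f"P(real bomb | alarm) = {p_bomb_given_alarm:.2e}")
```

Under these assumptions the probability that an alarm is a real bomb is on the order of one in a million, which is why the cost of false positives, not the miss rate, dominates.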
Now, there’s an opportunity for dramatic improvement in the way we run airport security: “Just run them all like they run SFO!” Orin Kerr makes this point: “I would think the real story is the dramatic gap between the performance of TSA employees and private sector employees.”
More importantly, what comes out of this study for me is the emergent chaos of running a large mission like airport security, and the value of that variation for learning.
If all airports were run exactly the same, we’d have missed this opportunity for learning.
So ask yourself: what do I standardize too much? Where is there too much structure, inhibiting learning? How can we harness chaos, and what emerges? (I talk in more detail about a very similar point in the latest post in my threat modeling series on the SDL blog, “Making Threat Modeling Work Better.”)
Photo: Frisk, by Tim Whyers. (Machine by Tim Hunkin, we’ve mentioned it previously.)