A long-time reader who has read my posts on pseudoscience and the like sent me a link to an article I had missed. Cory Doctorow quotes a post by Bruce Schneier reacting to a news report by Stewart M. Powell and Yang Wang in the Houston Chronicle:
Interesting data from the U.S. Government Accountability Office:
But congressional auditors have questions about other efficiencies as well, like having 3,000 “behavior detection” officers assigned to question passengers. The officers sidetracked 50,000 passengers in 2010, resulting in the arrests of 300 passengers, the GAO found. None turned out to be terrorists.
Yet in the same year, behavior detection teams apparently let at least 16 individuals allegedly involved in six subsequent terror plots slip through eight different airports. GAO said the individuals moved through protected airports on at least 23 different occasions.
I don’t believe the second paragraph. We haven’t had six terror plots between 2010 and today. And even if we did, how would the auditors know? But I’m sure the first paragraph is correct: the behavioral detection program is 0% effective at preventing terrorism.
I’m sure that there will be those who argue that these people just weren’t trained properly and that real experts would be much more effective, or that if the TSA just did what Israeli security does, behavioral detection would work. A commenter on Bruce’s blog makes that point nicely. But as another commenter replied, the TSA is not currently capable of doing what the Israeli screeners do, in part because of differences in the quality of the employees and their training.
Regardless of your explanation, can everyone just agree that as implemented, TSA’s behavioral detection program has not been an effective tool in preventing terrorist attacks? Or can’t we even agree on that?
For our country to invest in security, there should be a return on the investment that justifies the expense. There should also be empirical data supporting the efficacy of the methods. So far, I’ve seen neither. Have you?