Intelligence failure?

Spencer Ackerman argues that the attempted suicide bombing of Flight 253 on Christmas Day didn’t necessarily represent an intelligence failure. I think the key part of his post is:

The intelligence community is drinking from a fire hose of data, a lot of it much more specific than what was acquired on Abdulmutallab. If policymakers decide that these thin reeds will be the standard for stopping someone from entering the United States, then they need to change the process to enshrine that in the no-fly system. But it will make it much harder for people who aren’t threatening to enter, a move that will ripple out to affect diplomacy, security relationships (good luck entering the U.S. for a military-to-military contact program if, say, you’re a member of the Sunni Awakening in Iraq, since you had contacts with known extremists), international business and trade, and so on.

As someone whose day job involves analyzing lots of data, I think I agree with most of this. Synthesizing all the related pieces of information – a warning from his father that Abdulmutallab might be dangerous and in Yemen, rumors that a Nigerian might be part of an Al Qaeda plan, generalized threats from Yemen – seems quite hard. If heading off terror attacks requires concluding that a specific individual is likely to attempt an attack, based on hundreds of thousands (or millions?) of small pieces of information like that, it’s probably hopeless; what you’ll get is a lot of noise and very little signal. The complexity involved seems huge and the predictive ability seems low.

On the other hand, I believe simple things can work. In the case of Abdulmutallab, a simple thing would have been to take action based on the warning from his father alone, ignoring all the other factors. The action that was apparently taken was to put his name on a list of half a million people – but that list was neither the smaller no-fly list (~4K people) nor the “selectee” list (~14K people).

Matt Yglesias writes that, because actual terrorists and terrorist incidents are so rare, any method for identifying potential terrorists will produce lots of false positives. That seems right. That the evidence against Abdulmutallab put him in the top half million of suspected terrorists, but not the top eighteen thousand, confirms for me both that the false positive rate is very high (it’s unlikely that there are half a million active terrorists out there) and that we can’t predict very well (Abdulmutallab clearly should have been in a higher-scrutiny category).
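The base-rate arithmetic behind this point can be sketched in a few lines. Every number below is invented purely for illustration – the passenger count, the number of real threats, and the detector accuracies are assumptions, not figures from the post or from any real watch-list system:

```python
# Illustrative base-rate calculation with made-up numbers: even a
# screening method that is very accurate in both directions produces
# overwhelmingly more false positives than true positives when real
# threats are extremely rare.
passengers = 700_000_000          # hypothetical annual travelers
threats = 100                     # hypothetical actual threats among them
sensitivity = 0.99                # P(flagged | threat) -- assumed
false_positive_rate = 0.001       # P(flagged | innocent) -- assumed

flagged_threats = threats * sensitivity
flagged_innocent = (passengers - threats) * false_positive_rate

# Probability that a flagged traveler is actually a threat (precision):
precision = flagged_threats / (flagged_threats + flagged_innocent)

print(f"innocent travelers flagged: {flagged_innocent:,.0f}")
print(f"P(threat | flagged) = {precision:.6f}")
```

With these assumed numbers, roughly 700,000 innocent travelers get flagged for every ~99 real threats, so well under 1 in 1,000 flags points at an actual terrorist – which is the shape of the problem regardless of the exact inputs.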

One particularly worrisome class of false positive is, of course, false reporting. I’m sure for every legitimate case of a warning that so-and-so is a terrorist, there are hundreds or thousands of cases of self-interested accusations. (False reporting could be considered the spam problem in intelligence.)

Rather than ruling out doing anything, though, I think the large false positive rate means the consequences of a false positive should be fairly minimal – an interview, a more intensive physical search, perhaps an investigator making a phone call or two to understand the reason for travel – instead of denying boarding or shipping a suspect off to Bagram. No question that extra screening at the airport would be inconvenient, unpleasant, and intimidating. And no question that it could be expensive to implement more screening.
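The same logic can be put as a toy cost comparison. All of the numbers here are invented for illustration – the hit count and the relative costs are assumptions, not real estimates:

```python
# Toy comparison of two responses to a watch-list hit, with invented
# numbers. Because nearly every hit is a false positive, the total
# burden scales with the per-hit cost of the chosen response.
hits_per_year = 500_000        # hypothetical flagged travelers per year

cost_extra_screening = 1       # relative cost: delay, staff time
cost_denied_boarding = 1_000   # relative cost: missed travel, appeals

total_light = hits_per_year * cost_extra_screening
total_heavy = hits_per_year * cost_denied_boarding

print(f"extra screening, total: {total_light:,}")
print(f"denied boarding, total: {total_heavy:,}")
```

The absolute values are meaningless; the point is the ratio – when almost all hits are false positives, a thousandfold-harsher response imposes a thousandfold-larger cost on innocent travelers for roughly the same security benefit.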

I think the relevant policy question is: “given our false positive rate, what is the appropriate action to take?” Right now there appears to be very little middle ground between no-fly and “just another passenger,” which keeps the no-fly list small. (Yet it’s famous for its many false positives.) Given the poor predictive ability of any of these lists, I don’t think that makes sense. A much more widespread system of heightened scrutiny would seem more likely to prevent terrorist attacks, while still affecting no more than a small fraction of travelers.