Here is a quiz for you. Is predicting crime before it happens: (a) something out of Philip K. Dick's Minority Report; (b) the subject of a Department of Homeland Security research project that has recently entered testing; (c) a terrible and dangerous idea which will inevitably be counter-productive and which will levy a high price in terms of civil liberties while providing little to no marginal security; or (d) all of the above?
If you picked (d) you are a winner!
The U.S. Department of Homeland Security is working on a project called FAST, the Future Attribute Screening Technology, which is some crazy straight-out-of-sci-fi pre-crime detection [ http://bigthink.com/think-tank/pre-crime-detection-system-now-being-tested-in-the-us ] and prevention software which may come to an airport security screening checkpoint near you someday soon. Yet again the threat of terrorism is being used to justify the introduction of super-creepy invasions of privacy and to move us one step closer to a turn-key totalitarian state. This may sound alarmist, but in cases like this a little alarm is warranted. FAST will remotely monitor physiological and behavioral cues, like elevated heart rate, eye movement, body temperature, facial patterns, and body language, and analyze these cues algorithmically for statistical aberrance in an attempt to identify people with nefarious intentions. There are several major flaws with a program like this, any one of which should be enough to condemn attempts of this kind to the dustbin. Let's look at them in turn.
First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox [ https://en.wikipedia.org/wiki/False_positive_paradox ]. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, it turns out that this is an incredibly difficult thing to do. Here is why: let's assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are probably much, much rarer than that, or we would see far more acts of terrorism, given the daily throughput of the global transportation system. Now let's imagine the FAST algorithm correctly classifies 99.99 percent of observations -- an incredibly high rate of accuracy for any big-data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse roughly 100 people of being terrorists for every one terrorist it finds. And given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty would be a non-trivial and invasive task.
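The arithmetic behind the paradox is easy to check. The following sketch (using the illustrative numbers above -- a base rate of 1 terrorist per 1,000,000 travelers and a 99.99 percent accurate classifier; both are assumptions for the sake of the argument, not figures from DHS) counts the expected true and false positives per million screenings:

```python
# Illustrative false-positive paradox arithmetic.
# Assumptions (not DHS figures): 1 terrorist per 1,000,000 travelers,
# and a classifier that labels 99.99% of people correctly.
population = 1_000_000
terrorist_rate = 1 / 1_000_000
accuracy = 0.9999  # fraction of people classified correctly

terrorists = population * terrorist_rate      # 1 actual terrorist
innocents = population - terrorists           # 999,999 innocent travelers

true_positives = terrorists * accuracy        # ~1 real terrorist flagged
false_positives = innocents * (1 - accuracy)  # ~100 innocents flagged

print(f"flagged terrorists: {true_positives:.2f}")
print(f"flagged innocents:  {false_positives:.2f}")
print(f"chance a flagged person is actually a terrorist: "
      f"{true_positives / (true_positives + false_positives):.2%}")
```

Even at this implausibly high accuracy, under one percent of the people the system flags would actually be terrorists.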
Of course FAST has nowhere near a 99.99 percent accuracy rate. I imagine much of the work being done here is classified, but a writeup in Nature [ http://www.nature.com/news/2011/110527/full/news.2011.323.html ] reported that the first round of field tests had a 70 percent accuracy rate. From the available material it is difficult to determine exactly what this number means, since both the write-up and the DHS documentation [ https://info.publicintelligence.net/DHS-FAST.pdf ] (PDF) are unclear. It might mean that the current iteration of FAST correctly classifies 70 percent of the people it observes, which would produce false positives at an abysmal rate, given the rarity of terrorists in the population. Or it might mean that FAST will call a terrorist a terrorist 70 percent of the time. This second reading tells us nothing about the rate of false positives, though it would likely be quite high. In either case, the false-positive paradox would likely be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
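To see how much the interpretation matters, here is a quick sketch contrasting the two readings of the reported 70 percent figure. The one-in-a-million base rate is again an assumption for illustration, not a figure from DHS:

```python
# Two readings of a reported "70 percent" figure, under the same
# hypothetical base rate of 1 terrorist per 1,000,000 travelers.
population = 1_000_000
terrorists = 1
innocents = population - terrorists

# Reading 1: 70% of *all* observations are classified correctly.
# Then roughly 30% of innocents get misclassified as threats.
overall_accuracy = 0.70
false_positives_reading1 = innocents * (1 - overall_accuracy)
print(f"Reading 1: ~{false_positives_reading1:,.0f} innocents flagged per million")

# Reading 2: the system flags 70% of actual terrorists (sensitivity),
# which by itself says nothing about how many innocents it also flags.
sensitivity = 0.70
expected_catches = terrorists * sensitivity
print(f"Reading 2: ~{expected_catches:.1f} terrorists caught per million; "
      "false-positive rate unknown")
```

Under the first reading the system would flag on the order of 300,000 innocent people per million screened; under the second, the headline number is silent on false positives entirely.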
The second major problem with FAST is the experimental methodology being used to develop it. According to a DHS privacy impact assessment [ http://www.dhs.gov/xlibrary/assets/privacy/privacy_pia_st_fast.pdf ] of the research, the technology is being tested in a lab setting using volunteer subjects. These volunteer participants are sorted into two groups, one of which is "explicitly instructed to carry out a disruptive act, so that the researchers and the participant (but not the experimental screeners) already know that the participant has malintent." The experimental screeners then use the results from the FAST sensors to try and identify participants with malintent. Presumably this is where that 70 percent number comes from.
The validity of this procedure rests on the assumption that volunteers who have been instructed by researchers to "have malintent" serve as a reasonable facsimile of real-life terrorists in the field. This seems like quite a leap. Without actual intent to commit a terrorist act -- something these volunteers necessarily lack -- it is likely to be difficult to produce test observations that mimic the subtle cues a real terrorist might show. The very act of instructing a volunteer to have malintent would make that intent seem acceptable within the testing conditions, thereby altering the subtle cues the subject might exhibit. Without a legitimate sample exhibiting the actual characteristics being screened for -- a near-impossible proposition for this project -- we should be extremely wary of any claimed results.
The fact is that the world is not perfectly controllable, and infallible security is impossible. It will always be possible to imagine incremental gains in security from instituting increasingly invasive and opaque algorithmic screening procedures. What we should be thinking about, however, is the marginal gain in security in relation to the marginal cost. A program like FAST is doomed from the word go by a preponderance of false positives. We should ask, in a world where we already pass through full-body scanners [ http://www.tsa.gov/approach/tech/ait/index.shtm ], take off our shoes, belts, and coats, and only carry 3.4 oz containers of liquid, is more stringent screening really what we need, and will it make us any safer? Or will it merely brand hundreds of innocent people as potential terrorists and lend the justification of pseudo-scientific algorithmic behavioral screening to greater invasions of their privacy? In this case the cost is likely to be high, and there is little evidence that the gain will be meaningful. In fact, the results may be counter-productive, as TSA and DHS staff are forced to divert their attention to wading through the pile of falsely flagged people instead of spending their time on more time-tested, common-sense screening procedures.
Thinking statistically tells us that any project like FAST is unlikely to overcome the false-positive paradox. Thinking scientifically tells us that it is nearly impossible to get a real, meaningful sample for testing or validating such a screening program -- and as a result we shouldn't trust the sparse findings we have. And thinking about the marginal trade-off we are making tells us the (possible) gain is not worth the cost. Pick your reason: FAST is a bad idea.
“No,” King stated plainly. “Obviously, we always have to be looking out at all times that the police maintain their proper role. But I think the Department of Homeland Security, and the police I deal with — whether it’s the FBI or the New York City Police Department — no, I think civil liberties are being protected. Privacy is being protected. And considering the nature of the threat against us, I would say the police are remarkably restrained.”
As chairman of the House committee on Homeland Security, King has been a vocal critic of the Associated Press’s Pulitzer Prize-winning investigation [ http://www.ap.org/content/press-release/2012/ap-wins-pulitzer-prize-for-investigative-reporting-on-nypd-surveillance ] into the NYPD’s anti-terrorism policies — including its collusion with the CIA and indiscriminate unlawful surveillance of Muslims [id.] (even outside New York City [ http://www.nj.com/news/index.ssf/2012/02/christie_slams_nypd_over_musli.html ]). “Disgraceful,” King crowed when I mentioned the investigation. “First of all, they cannot find one thing the NYPD did that was illegal or wrong. Everything was open-source; they did not violate one law, not one provision of the Constitution. Meanwhile, there have been 14 plots against New York that have been stopped. We are the No. 1 target in the world. At any given time there are plots either in place or being contemplated, and they’ve just done a phenomenal job. They’re not violating anyone’s civil liberties or civil rights. The Associated Press — it was a terrible cheap shot and disgrace.”
I asked King if he believed the highly renowned journalists responsible for the reports, Matt Apuzzo and Adam Goldman, had integrity. “No, absolutely not. They have no moral integrity. Absolutely not.”
“Yeah, sure. I’ve heard of it,” King said. “But I work with the NYPD all the time. I know what’s going on in New York. I know what’s going on in some of the communities. I know what they’ve been doing. And the police are doing a tremendous job in New York.”
According to King, generally, corruption in the NYPD was not much of a problem: “There’s always some corruption in any department; whether it’s in the media, whether it’s in the military, police, welfare agencies — wherever you go. The New York City Police Department is extremely honest, and the level of security they’re providing is top-rate. First-rate.”
If he concedes that some corruption will inevitably exist within the NYPD, did King think Mayor Mike Bloomberg was exercising adequate oversight to root it out?
“It’s the best police department in the world,” King proclaimed. “So obviously he’s a good mayor, and Ray Kelly is the best police commissioner. We should thank God every night for the NYPD, and we should pray for the souls of the AP.”
I also asked about civil libertarian objections with respect to the recently announced “Total Domain Awareness [ http://www.huffingtonpost.com/2012/08/09/nypd-domain-awareness-surveillance-system-built-microsoft_n_1759976.html ]” system, a joint surveillance initiative launched by the NYPD in concert with Microsoft. “To me, it’s too bad if they’re uncomfortable about it. It’s keeping New Yorkers safe,” King declared. “It’s totally legal. And I really don’t care what the Civil Liberties Union has to say, or the Associated Press, or the New York Times.”
Well then, which media outlets were fairly representing the nature of these NYPD initiatives in his mind? “Oh, I don’t know. I’ll just say which ones have been bad. The AP and the New York Times have been terrible,” he said.
“How about the New York Post?” I asked.
“The Post has been good, yeah,” King confirmed. “Daily News. Wall Street Journal.”
While I had King riled up, I figured I might as well transition to drone strikes. Turns out he was not eager to engage on the subject: “I have no comment on any drone strikes. No comment on drone strikes,” he said when asked if he had prior knowledge of the strikes in 2011 that killed three U.S. citizens [ http://ccrjustice.org/newsroom/press-releases/rights-groups-file-challenge-killings-of-three-americans-u.s.-drone-strikes ]. “I’m on the Intelligence Committee. By law, I’m not allowed to comment on it.”
“It’s common knowledge that they occurred,” I said.
“Well, first of all, it’s only people like you, the Associated Press — I don’t care about that. I’m gonna get elected. You said, ‘Do I care?’ How many Americans are going to be upset about whether or not we’re using drones? The fact is, are we going to win? We’ve been able to kill Awlaki. We’ve been able to kill Samir Khan …”
“In any war, there’s collateral damage. That’s life,” King advised me. “That’s life and it’s death and it’s reality; you’d better accept it. Look, I’m not going to argue all night.”
“There’s been no war declared in Yemen!” I exclaimed.
“Look — it’s, we are … It’s the enemy. I’m watching this,” King said. “Disappear.”
Michael Tracey is a writer based in New York. His work has appeared in The Nation, Mother Jones, Reason, The American Conservative, and other publications.