“Obviously, Muslims would be someone you’d look at, absolutely,” former Senator Rick Santorum said during a GOP presidential debate last year. “Radical Muslims are the people that are committing these crimes by and large, as well as younger males,” he explained.
While religious profiling may not seem like a hot topic, America’s intrusive airport security process will force the question into debates between Barack Obama and Mitt Romney this fall.
With that in mind, the recent debate between best-selling author Sam Harris and security expert Bruce Schneier may well prove quite relevant this election year.
Harris argued on his blog earlier this month that “we should profile Muslims, or anyone who looks like he or she could conceivably be Muslim, and we should be honest about it…. There are people who do not stand a chance of being jihadists, and TSA screeners can know this at a glance.”
This intuition, according to Bruce Schneier, is just wrong. “Complexity is the enemy of security,” he told Harris. “Adding complexity to a security system invariably introduces additional vulnerabilities. Simple systems are easier to analyze. Simpler systems have fewer security assumptions. Simpler systems are more robust against mistakes in analysis. And simpler systems are more secure. More specifically, simplicity tends to completely remove potential avenues of attack.” In this sense, security is like economic policy: simple rules beat complex ones.
Schneier notes that although our intuition tells us that profiling would increase efficiency by narrowing agents’ focus, it doesn’t work this way in practice. Richard Reid’s attempted shoe bombing, for example, led TSA to require all thick-heeled shoes to go through X-ray screening.
On the surface, this targeted approach seemed better than requiring everyone to remove their shoes, threatening or not. But because the targeted rule required TSA agents to identify “thick-heeled” shoes on sight, and because travelers couldn’t predict whether they would be stopped, it took more time and resources to enforce than a uniform rule, all at greater risk.
In other words, increased specificity doesn’t mean increased efficiency. Even if we assume that every “radical Muslim” were a terrorist, argues Schneier, profiling is bad security. Because Islam is a belief system that TSA agents cannot detect at a glance, TSA would need a proxy, one that could be spelled out in TSA’s Standard Operating Procedures manual and applied by every screener. Schneier argues that “once you start trying to specify your profile exactly, it will either encompass so many people as to be useless, or leave out so many people as to be dangerous.”
There is reason to believe it would be the former, for reasons also familiar to economists. Airport security is a clear case of a principal-agent problem: the goals of the principals (the policymakers and TSA upper management) are at odds with the goals of the agents, the screeners themselves.
As Schneier puts it, “because the cost to the agent of a false positive is zero, but the cost of missing a real attacker is his entire career, screeners will naturally tend towards ignoring the profile and instead fully checking everyone…. Discretionary systems tend to gravitate towards zero-tolerance systems because ‘following procedure’ is a reasonable defense against being blamed for failure.” The precautionary principle almost always makes sense for government agents, even though it hurts society.
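To make this incentive asymmetry concrete, here is a back-of-the-envelope sketch of the screener’s choice; every payoff number in it is an illustrative assumption, not data.

```python
# Back-of-the-envelope sketch of the screener's incentives (all payoff
# numbers are illustrative assumptions, not data).

COST_FALSE_POSITIVE = 0           # the agent pays nothing for over-screening
COST_MISSED_ATTACKER = 1_000_000  # stand-in for "his entire career"

def agent_expected_costs(p_profile_misses_attacker):
    """Expected cost to the agent of trusting the profile vs. checking everyone."""
    trust_profile = p_profile_misses_attacker * COST_MISSED_ATTACKER
    check_everyone = COST_FALSE_POSITIVE  # zero, no matter how many people are stopped
    return trust_profile, check_everyone

# Even a 0.1% chance that the profile misses an attacker swamps the alternative:
trust, check = agent_expected_costs(0.001)
print(trust, check)  # 1000.0 0 -> the rational agent ignores the profile
```

However crude the numbers, the asymmetry is the point: as long as false positives cost the agent nothing and a miss costs everything, discretion collapses into checking everyone.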
Super-agents (“like the Israelis have!”… supposedly) might be able to overcome this tendency, but even if they could create a terrorist-detection system that is 90 percent accurate, it wouldn’t radically improve airport security. Imagine, writes Britain’s Michael Blastland, “you’re in the House of Parliament demonstrating the device when you receive urgent information from MI5 that a potential attacker is in the building. Security teams seal every exit and all 3,000 people inside are rounded up to be tested. The first 30 pass. Then dramatically, a man in a mac fails. How sure are you that this person is a terrorist? A. 90%, B. 10%, or C. 0.3%?”
The answer is C: a 90-percent-accurate test will wrongly flag 10 percent of the roughly 3,000 innocent people in the building, so about 300 innocents get labeled terrorists alongside the one real attacker. Any given flagged person therefore has only about a 1-in-301 chance of being the attacker (1/301 ≈ 0.33 percent). Now scale this base-rate fallacy up to hundreds of millions of travelers, lower your criteria’s accuracy, and you start to get an idea of the futility of profiling.
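For readers who want the arithmetic spelled out, here is a minimal Python sketch of Blastland’s calculation using Bayes’ rule. The 90/10 rates come from the thought experiment; the 700 million screenings and 10 real attackers in the second call are purely illustrative assumptions, not data. (The full Bayes version also discounts the 10 percent chance the test misses the real attacker, which is why it lands slightly below the rougher 1-in-301 figure above.)

```python
# Minimal sketch of Blastland's base-rate arithmetic (Bayes' rule).

def posterior_terrorist(population, attackers, hit_rate, false_positive_rate):
    """Probability that a flagged person really is an attacker."""
    true_positives = attackers * hit_rate
    false_positives = (population - attackers) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Parliament scenario: 3,000 people, 1 attacker, a "90 percent accurate" test
# read as a 90% hit rate and a 10% false-positive rate.
p = posterior_terrorist(population=3_000, attackers=1,
                        hit_rate=0.90, false_positive_rate=0.10)
print(f"{p:.3%}")  # ~0.299%, in line with the rougher 1-in-301 above

# Scaled up: hundreds of millions of screenings against a handful of real
# attackers (700 million and 10 are illustrative assumptions, not data).
p_air = posterior_terrorist(population=700_000_000, attackers=10,
                            hit_rate=0.90, false_positive_rate=0.10)
print(f"{p_air:.6%}")  # vanishingly small: almost every alarm is false
```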
Religious profiling is dangerous because complex security rules lower efficiency and create new avenues of attack. The base-rate problem, which ensures that even a highly accurate profile flags mostly innocent people, is compounded by the principal-agent problem. And setting aside the public-choice problems inherent in any political entity crafting an accurate test (rather than one that merely reflects the public’s or special interests’ perception of what a terrorist “looks like”), even a relatively successful “profile” wouldn’t provide significant security benefits.
To these problems, we must add bureaucracies’ inflexibility. If the government creates a profile, terrorist organizations will adapt within months; al Qaeda was already devising ways to thwart presumed U.S. racial profiling within months of 9/11. No static profile can adjust as fast as a potential enemy can.
As Schneier put it, “the proper reaction to screening horror stories isn’t to subject only ‘those people’ to it; it’s to subject no one to it. (Can anyone even explain what hypothetical terrorist plot could successfully evade normal security, but would be discovered during secondary screening?) Invasive TSA screening is nothing more than security theater. It doesn’t make us safer, and it’s not worth the cost. Even more strongly, security isn’t our society’s only value. Do we really want the full power of government to act out our stereotypes and prejudices? Have we Americans ever done something like this and not been ashamed later? This is what we have a Constitution for: to help us live up to our values and not down to our fears.”