Homeland insecurity
The fact that U.S. intelligence agencies can't tell terrorists from children
on passenger jets does little to inspire confidence.
- - - - - - - - - - - -
By Bruce Schneier
Jan. 9, 2004 | Security can fail in two different ways. It can fail to
work in the presence of an attack: a burglar alarm that a burglar
successfully defeats. But security can also fail to work correctly when
there's no attack: a burglar alarm that goes off even when no one is there.
Citing "very credible" intelligence regarding terrorism threats, U.S.
intelligence canceled 15 international flights in the last couple of weeks,
diverted at least one more flight to Canada, and had F-16s shadow others as
they approached their final destinations.
These seem to have been a bunch of false alarms. Sometimes it was a case of
mistaken identity. For example, one of the "terrorists" on an Air France
flight was a child whose name matched that of a terrorist leader; another
was a Welsh insurance agent. Sometimes it was a case of assuming too much;
British Airways Flight 223 was detained once and canceled twice, on three
consecutive days, presumably because that flight number turned up on some
communications intercept somewhere. In response to the public embarrassment
over these false alarms, the government is slowly leaking information about
a particular person who didn't show up for his flight and about two
non-Arab-looking men who may or may not have had bombs. But these leaks seem
more like efforts to save face than the "very credible" evidence the
government promised.
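To make the mistaken-identity failure concrete, here is a minimal sketch, in
Python, of screening passengers against a watch list by name alone. The list
entry, the passengers, and the matching rule are all hypothetical, invented
for illustration; the point is simply that a name match carries no
information about who the traveler actually is.

    # Illustrative only: the watch-list entry and the passengers below are
    # invented. Screening by name alone cannot distinguish a child or an
    # unrelated namesake from the person the list entry is actually about.

    WATCH_LIST = {"sami example"}  # hypothetical entry, stored lowercase

    passengers = [
        {"name": "Sami Example", "age": 5,  "occupation": "child"},
        {"name": "Sami Example", "age": 47, "occupation": "insurance agent"},
        {"name": "Ann Other",    "age": 33, "occupation": "teacher"},
    ]

    def flagged(passenger: dict) -> bool:
        """Flag a passenger whose normalized name appears on the watch list."""
        return passenger["name"].strip().lower() in WATCH_LIST

    for p in passengers:
        if flagged(p):
            print(f"ALERT: {p['name']}, age {p['age']}, {p['occupation']}: "
                  "name matches a watch-list entry")

Both namesakes trigger exactly the alert a genuine match would, because
nothing in the data being compared can tell the three cases apart.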
Security involves a tradeoff: a balance of the costs and benefits. It's
clear that canceling all flights, now and forever, would eliminate the
threat from air travel. But no one would ever suggest that, because the
tradeoff is just too onerous. Canceling a few flights here and there seems
like a good tradeoff because the consequences of missing a real threat are so
severe. But repeatedly sounding false alarms entails security problems, too.
False alarms are expensive -- in money, time, and the privacy of the
passengers affected -- and they demonstrate that the "credible threats"
aren't credible at all. As with the boy who cried wolf, everyone from airport
security officials to foreign governments will eventually stop taking these
warnings seriously. We're relying on our allies to secure international flights;
demonstrating that we can't tell terrorists from children isn't the way to
inspire confidence.
Intelligence is a difficult problem. You start with a mass of raw data:
people in flight schools, secret meetings in foreign countries, tips from
foreign governments, immigration records, apartment rental agreements, phone
logs and credit card statements. Understanding this data, drawing the right
conclusions -- that's intelligence. It's easy in hindsight but very
difficult before the fact, since most of the data is irrelevant and most leads are
false. The crucial bits of data are just random clues among thousands of
other random clues, almost all of which turn out to be false or misleading
or irrelevant.
In the months and years since 9/11, the U.S. government has tried to address
the problem by demanding (and largely receiving) more data. Over the New
Year's weekend, for example, federal agents collected the names of 260,000
people staying in Las Vegas hotels. This broad vacuuming of data is
expensive and completely misses the point. The problem isn't obtaining
data; it's deciding which data is worth analyzing and then interpreting it.
So much data is collected that intelligence organizations can't possibly
analyze it all. Deciding what to look at can be an impossible task, so
substantial amounts of good intelligence go unread and unanalyzed. Data
collection is easy; analysis is difficult.
Many think the analysis problem can be solved by throwing more computers at
it, but that's not the case. Computers are dumb. They can find obvious
patterns, but they won't be able to find the next terrorist attack. Al-Qaida
is smart, and excels in doing the unexpected. Osama bin Laden and his troops
are going to make mistakes, but to a computer, their "suspicious" behavior
isn't going to be any different from the suspicious behavior of millions of
honest people. Finding the real plot among all the false leads requires
human intelligence.
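Some made-up numbers show why. Suppose an automated screen almost never
misses a real plotter and is wrong about only one innocent traveler in a
thousand; the figures in the sketch below are assumptions chosen for
illustration, not estimates of any real system.

    # Back-of-the-envelope arithmetic with assumed, illustrative numbers;
    # this is not a model of any real screening system.

    def screening_outcomes(records, real_plotters, false_positive_rate,
                           detection_rate):
        """Expected counts of real and false alarms from an automated screen."""
        innocents = records - real_plotters
        false_alarms = innocents * false_positive_rate
        real_alarms = real_plotters * detection_rate
        return real_alarms, false_alarms

    for records in (1_000_000, 2_000_000):  # double the size of the "haystack"
        real_alarms, false_alarms = screening_outcomes(
            records=records,
            real_plotters=10,            # assumed: the "needles" stay fixed
            false_positive_rate=0.001,   # assumed: wrong about 1 innocent in 1,000
            detection_rate=0.99,         # assumed: almost never misses a plotter
        )
        share_real = real_alarms / (real_alarms + false_alarms)
        print(f"{records:>9,} records screened: {false_alarms:,.0f} false alarms, "
              f"{real_alarms:.0f} real leads ({share_real:.1%} of alarms are real)")

Under those assumptions, about 99 of every 100 alarms point at innocent
travelers, and doubling the number of records screened doubles the false
alarms without turning up a single additional plotter.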
More raw data can even be counterproductive. With more data, you have the
same number of "needles" and a much larger "haystack" to find them in. In
the 1980s and before, the East German police collected an enormous amount of
data on 4 million East Germans, roughly a quarter of the country's population. Yet
even they did not foresee the peaceful overthrow of the Communist
government; they invested too heavily in data collection while neglecting
data interpretation.
In early December, the European Union agreed to turn over detailed passenger
data to the U.S. In the few weeks that the U.S. has had this data, we've
seen 15 flight cancellations. We've seen investigative resources chasing
false alarms generated by computer, instead of looking for real connections
that may uncover the next terrorist plot. We may have more data, but we
arguably have a worse security system.
This isn't to say that intelligence is useless. It's probably the best
weapon we have in our attempts to thwart global terrorism, but it's a weapon
we need to learn to wield properly. The 9/11 terrorists left a huge trail of
clues as they planned their attack, and presumably today's plotters are doing
the same. Our failure to prevent 9/11 was a failure of analysis, a
human failure. And if we fail to prevent the next terrorist attack, it will
also be a human failure.
Relying on computers to sift through enormous amounts of data, and on
investigators to act on every alarm the computers sound, is a bad security
tradeoff. It's going to cause an endless stream of false alarms, cost
millions of dollars, unduly scare people, trample on individual rights and
inure people to the real threats. Good intelligence involves finding meaning
among enormous reams of irrelevant data, then organizing all those disparate
pieces of information into coherent predictions about what will happen next.
It requires smart people who can see connections, and access to information
from many different branches of government. The whole picture can't be seen
from inside any individual piece of the bureaucracy; it is larger than any of
them.
These airline disruptions highlight a serious problem with U.S.
intelligence. There's too much bureaucracy and not enough coordination.
There's too much reliance on computers and automation. There's plenty of raw
material, but not enough thoughtfulness. These problems are not new; they're
what has historically been wrong with U.S. intelligence. And these
disruptions make us look like a bunch of incompetents who cry wolf at the
slightest provocation.
- - - - - - - - - - - -
About the writer
Bruce Schneier is the CTO of Counterpane Internet Security, Inc. His latest
book is "Beyond Fear: Thinking Sensibly About Security in an Uncertain
World," and he publishes the monthly security newsletter Crypto-Gram.