[IP] more on Why's a Retired Army Lieutenant Colonel on the "No-Fly" List?
Begin forwarded message:
From: Ian Koxvold <ian@xxxxxxxxxxx>
Date: February 27, 2006 7:24:13 AM EST
To: dave@xxxxxxxxxx
Subject: RE: [IP] mo Why's a Retired Army Lieutenant Colonel on the "No-Fly" List?
Professor Farber,
While I agree with Bruce that one cannot assess the overall effectiveness and "sensibility" of a screening system based on a single example, you can use a single example to expose flaws in the methodology of a system.

Using the case of Dr Robert Johnson, and some research associated with this case, one can reasonably conclude that:
1) Nobody is willing to take responsibility for putting people's names on the list.
2) Nobody is willing to take responsibility for taking people's names off the list.
3) There does not appear to be any sort of aggressive or systematic list management - i.e. the list has grown enormously, while the number of potential terrorists in the US has (hopefully!) not done so.
These are significant flaws in the watch list system - whatever the logic might be at the back end (i.e. putting the right people's names on the list).

The dumb thing is that these are fixable flaws - and many people (including myself, and - presumably - Dr Robert Johnson) wonder why they aren't being fixed.

It is then only a short step to wonder whether they are not being fixed because (in the view of those authorities who have established and who are managing the watch list system) they are not flaws at all.

Is it conceivable that someone involved in the watch list system would prefer that critics of the current military action in Iraq not be able to travel easily?
Best wishes,
Ian K.
-------- Original Message --------
Subject: Re: [IP] Why's a Retired Army Lieutenant Colonel on the "No-Fly" List?
Date: Sun, 26 Feb 2006 23:13:35 -0800 (PST)
From: Krulwich <krulwich@xxxxxxxxx>
Reply-To: krulwich@xxxxxxxxx
To: dave@xxxxxxxxxx
Dave, this is the wrong criticism. Scientifically, from the perspective of Artificial Intelligence and Machine Learning (my PhD area), any good methodology that attempts to inductively generalize from a sample set to predictions of future set membership, or to deductively generalize from a set of criteria describing a sample set to predictions of future set membership, is going to have false positives and false negatives. Any methodology that had zero false positives and false negatives would be so limited as to be useless.
To put this in non-scientific terms, the only way to 100% avoid false identifications is to make the system so limited as to be useless, like saying "suspect someone only if they're carrying fuse wire and muttering 'Allahu akbar' under their breath." On the other hand, the only way to 100% avoid missing anyone is to make the system so broad that it's useless because it suspects everyone, like saying "suspect everyone unless they're wearing a Purple Heart and have had their picture on TV shaking the President's hand." Any system that attempts to do something intelligent will inherently make some mistakes in both directions.
That said, there are clear ways to evaluate such methodologies. What percentage of predicted group memberships are clearly wrong? What percentage of obvious examples that should be suspected are in fact suspected?
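To make those two percentages concrete, here is a minimal sketch (in Python, with made-up illustrative numbers; the function name and data are hypothetical, not drawn from any real screening system) of how one might tabulate them from a labelled set of test cases:

    # Hypothetical evaluation of a screening rule against labelled cases.
    # truth[i] is True if person i really should be suspected;
    # flagged[i] is True if the system actually flagged person i.
    def evaluate_watch_list(truth, flagged):
        false_pos = sum(1 for t, f in zip(truth, flagged) if f and not t)
        false_neg = sum(1 for t, f in zip(truth, flagged) if t and not f)
        total_flagged = sum(flagged)
        total_true = sum(truth)
        # "What percentage of predicted group memberships are clearly wrong?"
        fp_rate = false_pos / total_flagged if total_flagged else 0.0
        # "What percentage of obvious examples ... are in fact suspected?"
        hit_rate = (total_true - false_neg) / total_true if total_true else 1.0
        return fp_rate, hit_rate

    # Illustrative numbers only:
    truth   = [True, False, False, True, False]
    flagged = [True, True,  False, False, False]
    print(evaluate_watch_list(truth, flagged))  # (0.5, 0.5)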
But finding one example, even a prominent example, is scientifically not a reason to reject a methodology.
--Bruce
-------------------------------------
You are subscribed as roessler@xxxxxxxxxxxxxxxxxx
To manage your subscription, go to
http://v2.listbox.com/member/?listname=ip
Archives at: http://www.interesting-people.org/archives/interesting-people/