Re: RSA SecurID SID800 Token vulnerable by design
On Bugtraq and several other security forums, Hadmut Danisch
<hadmut@xxxxxxxxxx>, a respected German information security analyst, recently
published a harsh critique of one optional feature in the SID800, one of the
newest of the six SecurID authentication tokens -- some with slightly different
form-factors, others with additional security functions -- sold by RSA
Security, Inc. It's raised quite a stir, and I'd like to respond.
A personal authentication token, by classical definition, must be physical,
personal, and difficult to counterfeit. The most popular implementations in
computer security move the calculation of a pseudo-random authentication code
-- a so-called "One-Time Password," or OTP -- off an employee's PC and into a
hand-held hardware fob, small enough to be attached to a personal key chain.
RSA's mainstay token, the SID700 SecurID -- millions of which are used in over
20,000 enterprise installations worldwide, including many government agencies
and financial institutions -- uses AES (the US cryptographic standard) to
process Current Time and a 128-bit token-specific secret to generate and
continuously display a series of 6-8 digit (or alphanumeric) OTP "token-codes"
which change every 60 seconds and remain valid for only a couple of minutes.
In practice, an RSA authentication server can then independently calculate the
token-code that is appearing on a specific SecurID at this particular moment;
compare that against an OTP submitted by a pre-registered user, and validate a
match. RSA, which first introduced the SecurID in 1987, has always insisted on
the necessity of "two-factor authentication" (2FA), where a remote RSA
authentication server must validate both a SecurID token-code (evidence of
"something held") and a user-memorized PIN or password ("something known.")
A stolen password can be reused indefinitely to masquerade as the legitimate
user, often with the victim wholly unaware. A token-generated OTP, valid only
briefly, is a far more robust authenticator. With 2FA, if a SecurID is stolen
or lost, it still can't be used to obtain illicit access to protected resources
without the second secret: the user's memorized PIN or password.
The elegant simplicity of the traditional SecurID -- and patents on the
mechanism by which the "drift" in each individual SecurID's internal clock is
tracked by the RSA authentication server -- has allowed RSA's "time-synched"
SecurID to dominate the market niche for hand-held OTP authentication devices
for 20 years.
In a typical installation, anyone seeking to log on to a protected PC or
network, or to access restricted online resources, must manually type in the
OTP currently displayed on the SecurID -- as well as his memorized PIN or
password -- to have his identity and access privileges validated. Network
applications handle the combined SecurID "pass-code" like any long traditional
password. The link between the user and the RSA infrastructure is often, but
not always, an encrypted VPN channel. That's a local decision. Data exchanges
between the RSA agent and RSA authentication server -- which typically means
between one of the 350-odd "SecurID-aware" network applications and the RSA
Authentication Manager, using RSA's own protocol -- are always fully encrypted.
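To illustrate the pass-code convention (again, a hypothetical sketch with
invented names, not RSA's agent code): the user types PIN and token-code
together, the application forwards that string like any long password, and only
the authentication server splits the two factors back apart.

    def build_passcode(pin: str, displayed_code: str) -> str:
        # What the user types at the prompt: PIN followed by the token-code.
        return pin + displayed_code

    def server_check(stored_pin: str, passcode: str, otp_is_valid) -> bool:
        # Only the authentication server separates the factors; otp_is_valid
        # stands in for its independent token-code computation.
        pin, code = passcode[:len(stored_pin)], passcode[len(stored_pin):]
        return pin == stored_pin and otp_is_valid(code)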
Mr. Danisch is an admirer of the classic SecurID (SID700), RSA's traditional
hand-held token. His ire is directed at one of the two new hybrid SecurID
designs that RSA has recently offered in an attempt to respond to new
requirements in the boisterous and rapidly evolving market for what's called
"strong authentication."
With the nascent prospect of a new billion-dollar market in consumer
authentication for financial services boosted by US federal regulatory
initiatives, RSA announced the SecurID Signing Token, the SID900. The SecurID
Signing Token still has a time-synched OTP, but RSA added a keypad and a
challenge/response function which successively authenticates the user, the
remote server, and a specific financial transaction, before the transaction
(e.g., a funds transfer) is executed.
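What follows is only a generic Python illustration of challenge/response
transaction signing -- not the SID900's actual protocol, and every name in it
is mine: the server issues a challenge, the token computes a short response
over the challenge plus the transaction details with a shared secret
(HMAC-SHA-256 as a stand-in), and the server verifies that response before it
executes the transfer.

    import hmac, hashlib, secrets

    def issue_challenge() -> str:
        return secrets.token_hex(4)                    # server-generated nonce

    def sign_transaction(secret: bytes, challenge: str, tx: str, digits: int = 8) -> str:
        # The token binds the challenge to the transaction details, so the
        # response authenticates this specific funds transfer and nothing else.
        mac = hmac.new(secret, f"{challenge}|{tx}".encode(), hashlib.sha256)
        return str(int.from_bytes(mac.digest()[:8], "big") % 10**digits).zfill(digits)

    def server_approves(secret: bytes, challenge: str, tx: str, response: str) -> bool:
        # Execute the transfer only if the response verifies.
        return hmac.compare_digest(sign_transaction(secret, challenge, tx), response)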
On the other side of the market -- where again US laws and federal regulatory
initiatives have boosted demand for internal controls and more accountability
measures in enterprise IT -- RSA has introduced the SID800, another hybrid
SecurID, to meet the requirements of organizations that want to move into a
full public key infrastructure (PKI).
The SID800 SecurID is a multi-function authentication and cryptographic device
that combines, in a single DPA-resistant token, the mobility and availability
of the classic hand-held SecurID with a "smart chip" that implements v2.1.1
Java technology (essentially a "virtual smart card") in a USB form factor. It looks
like a slightly smaller version of the classic SecurID key fob, with a USB plug
jutting out at one end. It can carry up to seven X.509 digital certificates for
PKI, as well as account information and complex passwords for up to three
Windows accounts. The SID800's lithium battery allows it to continuously
generate and display 60-second SecurID OTPs for up to five years.
The SID800 "smart chip" has the typical load of standards-compliant smart card
functionality: ANSI X9.31 PRNG, client-side PKI support including key
generation for DES/3DES and 1024-bit RSA Public Key Cryptography, SHA-1
hashing, and 1024-bit RSA digital signatures. To access its local cryptographic
services (key generation, authentication, file encryption, digital signatures,
etc.) and its X.509 certificates -- complex resources that require a
circuit-to-circuit connection and interactive data exchanges -- the SID800
SecurID can be plugged directly into a PC's USB port.
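To make that list concrete, here is a desktop stand-in, using the Python
"cryptography" package, for the sort of primitives the smart chip performs
on-token: generate a 1024-bit RSA key, hash with SHA-1, and produce and verify
an RSA signature. The SID800's own middleware and APIs are not shown; this is
only an illustration of the operations themselves.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # On the SID800 these operations run inside the smart chip and the private
    # key never leaves the token; this stand-in merely shows the primitives.
    key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
    message = b"document to be signed"
    signature = key.sign(message, padding.PKCS1v15(), hashes.SHA1())   # SHA-1 + 1024-bit RSA
    key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA1())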
None of these features -- none of the SID800's cryptographic resources -- were
of apparent interest to Mr. Danisch. He ignored them all when he denounced the
SID800 as "vulnerable by design."
The classic SecurID, declared Mr. Danisch on the Cryptography mailing list, "is
a smart device which provides a reasonable level of security in a very simple
and almost foolproof way...."
"It's a pity," he added, "to see it weakened without need..." The traditional
SecurID has the advantages (and disadvantages) of an "air gap." With no direct
circuit connection to the user's PC or client terminal, it has no direct
vulnerability to the various classes of malicious or larcenous malware which --
Mr. Danisch warns -- can potentially overwhelm and totally corrupt PCs, and
particularly Windows PCs.
What particularly disturbs Mr. D is one option in the SID800's feature set
that allows RSA's local client software to poll and retrieve a single
OTP from the token when the SID800 is plugged into the PC's USB port. Given
the potential for malware to invade and take control of any Windows PC --
malware that can seize and misuse both the user's PIN and an OTP fresh from
the USB bus -- it was irresponsible, Danisch suggests, for RSA to make such
an option available to its customers.
There are actually two versions of the SID800 sold by RSA. In one version,
there is none of the fancy new OTP functionality that worries Mr. D. In this
model, the only way to use the SecurID's OTP is the old-fashioned way: to read
the LCD and type it (and the user's PIN) at the keyboard.
In the second version of the SID800 -- an option selectable by local management
pre-purchase, and burnt into the token's USB firmware by RSA -- the user can
select a menu in which he instructs the SecurID to load one OTP token-code
directly into the paste buffer, presumably for immediate use. Since internal
access to the SecurID's OTP via the USB bus makes it a potential target for
"malware or intruders on the computer," claimed Mr. Danisch, "This is weak by
design." I beg to differ. Effective IT security is about an appropriate
balance, not walls of iron or stone.
Can this token-code in the paste buffer be misused? Not likely, even if it is
immediately stolen by malware and immediately used for some nefarious purpose.
A SecurID token-code can only be used once; no replay is allowed. As a defense
against race attacks, the RSA Authentication Manager will also automatically
reject both of two identical token-codes submitted roughly simultaneously --
even if both are accompanied by the proper PIN -- and log it for investigation
by the security manager. If the legitimate user uses the token-code first, he
effectively preempts any misuse of that OTP by a hostile party or malware.
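A minimal sketch of those two server-side rules (my own construction, not RSA
Authentication Manager code): a correct token-code is accepted at most once
within its validity window, and a duplicate submission is refused and logged
for the security manager.

    import logging

    class OtpValidator:
        def __init__(self):
            self.used = set()      # token-codes already consumed in their window

        def validate(self, submitted: str, expected: str) -> bool:
            if submitted != expected:
                return False                       # wrong code or stale window
            if submitted in self.used:
                logging.warning("duplicate token-code -- possible race attack")
                return False                       # no replay: one use per code
            self.used.add(submitted)               # (entries expire with the window)
            return True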
Could hostile malware independently execute the menu request for a new
token-code -- essentially instruct a token plugged into the USB port to produce
a new token-code, without the knowledge of the user -- and then swipe it,
directly or from the paste buffer? Could malware collect the PIN and logon data
of any authentication process with a keyboard logger? Unfortunately, it could.
Mr. Danisch raises a valid concern.
The cryptographic functionality of any smart card -- which typically includes
authentication, encryption, digital signatures, etc. -- can be initialized and
misused by a powerful hostile agent that has taken control of the user's PC
and snatched the user's password. Just as -- although Mr. Danisch didn't
mention this -- the "virtual smart card" in the SID800, or any similar USB
device, could be initialized and misused.
The level of malware penetration that Mr. D presumes is enough to corrupt the
client authentication and cryptographic functions in any contemporary PKI
environment -- certainly any Windows-based client-side SSL. (See: "Keyjacking: the surprising
insecurity of client-side SSL," by Marchesini, Smith, and Zhao at
<http://www.cs.dartmouth.edu/~sws/pubs/msz05.pdf>.)
Mr. Danisch denounces RSA for implementing an optional ease-of-use feature,
just because it effectively reduces the implicit security of OTP authentication
to no more than what is provided by any PKI smart card environment. Some -- but
not necessarily me -- might suggest that this is carrying an appreciation of
the unique and sterling qualities of the classic SecurID's OTP a bit far.
This has been an ongoing debate within the RSA user community for the past
year, where some of the language used in declaring opinions is not always as
civil or restrained as that used by Mr. Danisch. It is not yet clear how the
market's choices and concerns will affect the next version of the SID800's
firmware, expected later this year -- but it seems unlikely that either of the
two SID800 versions will be removed from RSA's sales list.
Ease-of-use (which usually implies internal complexity) is often the enemy of
security. Yet any enhancement in ease-of-use which will have
little or no impact on overall system security is something of a Holy Grail for
both InfoSec vendors and local IT managers.
Some organizations choose to use SID800 SecurIDs which offer RSA's "OTP paste"
feature, others do not. Those who don't are presumably acting on the basis of a
risk analysis of their environment that determines that the advantages of
enhanced usability do not justify the risks it entails. Many of those, I
presume, have concerns similar to those of Mr. Danisch.
If the hostile malware can wait and capture the initiation password off the
keyboard, it can ask for anything the password can authorize. From the SID800.
From any smart card. From any application. From any network resource. This is
not a new insight. (Ironically, the SID800's OTP output to the USB bus is
relatively more difficult to misuse, since it is time-constrained, while stolen
smart card functionality typically is not.)
Obviously, if an adversary can load malware that "owns" your PCs,
untrustworthy user authentication is only the beginning of your problems.
If the enemy "owns" your Windows box today -- or any other computer, for that
matter -- he probably totally controls everything that passes through it, and
all devices connected to it. Although the firmware in a smart card -- or in
USB plugs like the SID800 which offer "virtual smart cards" -- supposedly won't
allow the PC to directly access the token's internal secrets, a computer under
the control of a hostile party can doubtless gain illicit access to the
cryptographic services provided by those devices.
Assuming imperfect defenses in any given technical context -- certainly true
with the current Microsoft Windows OS, the leading browsers, and the protocols
now in use for both secure data transfer and authentication -- the industry
consensus calls for multiple defensive layers. Where one defensive layer leaves
a gap, another will often overlap to cover it. The logic of such an approach is
based on gritty experience: there is no such thing as perfect security!
Mr. Danisch bemoans -- as do other fretful traditionalists like myself,
including many who work for RSA -- the loss of the "air gap," the isolation of
the SecurID's OTP generation from the potentially corrupted PC and network. (A
networked device will never be as transparent as the classic OTP token, where
everybody knows exactly what the SecurID is doing, and can be certain that it
is doing no more than it is expected to do. The elegance of simplicity.)
Others -- including many fretful traditionalists -- celebrate (despite
imperfect implementations, despite many inherently untrustworthy operational
environments) the powerful security utilities which become available with
interactive PKI, which RSA pioneered with its work on the seminal Public-Key
Cryptography Standards (PKCS) and the revolutionary RSA public key cipher which is
critical to so much of today's network security services.
The two hardest things to do in computer security are (A) to create a perfectly
secure technical infrastructure, and (B) to second-guess the CIO, CISO, or
local system administrator who has the responsibility to identify his assets,
understand his risks, and select where and how to balance his investments in
usable functionality and information security.
Since no one today argues that perfect security is attainable, security mavens
like Mr. Danisch (and myself) are forever occupied with the second task. Yet,
as Courtney's First Law -- codified 40 years ago by IBM's Bob Courtney, one of
the pioneers in computer security -- puts it: "Nothing useful can be said about
the security of a mechanism except in the context of a specific application and
a specific environment."
Information security vendors like RSA attempt to respond to the perceived
market requirements, juggling concerns about risk, liability, and cost against
demands for functionality, flexibility, and accessibility. When relative
security is slightly compromised for a perhaps critical enhancement in
ease-of-use, it seems smart to at least give the buyers the option. That
leaves the critical judgment to the professionals who know their environment,
their real risks, and their people, best.
Lecturing CIOs about the relative importance of security threats -- when they
have to get real work done in an imperfect universe -- is as presumptuous as it
is almost surely ill-informed. Hectoring vendors who respond to demands from
their customers for alternative ways to address security issues -- some
admittedly more robust than others -- is far more appropriate. In that
sense, debates that arise when critics like Mr. Danisch forcefully state their
case are useful, even necessary.
It is no secret that Windows and the browsers have design flaws. Client-side
SSL still has some major architectural issues, particularly in Windows. It is
no secret that PC users today need a safer place -- some sort of restricted
input environment, inaccessible to all but the local user -- by which they can
submit authentication calls to an application over a trusted path. It is no
secret that network protocols require new IETF initiatives to better secure
them against attempts to corrupt them for illicit gain. It is no secret that
simple authentication protocols must today often be supplemented by some form
of mutual authentication, or that high-value transactions may require
supplementary authentication, or that unusual transactions or access claims may
trigger direct oversight or adaptive authentication requirements.
Simple user authentication, simple web server authentication, simple
client-side SSL, basic PKI -- none of these is enough anymore, now that malware
is usually the sophisticated product of a criminal enterprise. Forensic audit
logs have never
been more important. The good news is that -- thanks to concerns raised by
outspoken techies like Hadmut Danisch -- there is public debate and significant
developments in all these areas, and solutions (probably imperfect, but better)
are on the horizon.
Suerte,
_Vin
PS. I have been a consultant to RSA for nearly 20 years and my bias is overt. I
beg the indulgence of the List for the length of my comments.