Re: /dev/random is probably not
Chiaki wrote:
Charles M. Hannum wrote:
Most implementations of /dev/random (or so-called "entropy gathering
daemons") rely on disk I/O timings as a primary source of randomness.
This is based on a CRYPTO '94 paper[1] that analyzed randomness from
air turbulence inside the drive case.
I would agree with the later analysis posted, but
what OSs use disk I/O timing only for /dev/{u,}random device
today?
- Linux? (I don't think so. If we have a network and other I/O devices
such as a keyboard, I thought those would be used too,
but I want confirmation from people in the know.)
From linux-2.4.31/drivers/char/random.c (the comments on top of
linux-2.6.12.2/drivers/char/random.c are identical):
* Sources of randomness from the environment include inter-keyboard
* timings, inter-interrupt timings from some interrupts, and other
* events which are both (a) non-deterministic and (b) hard for an
* outside observer to measure. Randomness from these sources are
* added to an "entropy pool", which is mixed using a CRC-like function.
* This is not cryptographically strong, but it is adequate assuming
* the randomness is not chosen maliciously, and it is fast enough that
* the overhead of doing it on every interrupt is very reasonable.
* As random bytes are mixed into the entropy pool, the routines keep
* an *estimate* of how many bits of randomness have been stored into
* the random number generator's internal state.
*
* When random bytes are desired, they are obtained by taking the SHA
* hash of the contents of the "entropy pool". The SHA hash avoids
* exposing the internal state of the entropy pool. It is believed to
* be computationally infeasible to derive any useful information
* about the input of SHA from its output. Even if it is possible to
* analyze SHA in some clever way, as long as the amount of data
* returned from the generator is less than the inherent entropy in
* the pool, the output data is totally unpredictable. For this
* reason, the routine decreases its internal estimate of how many
* bits of "true randomness" are contained in the entropy pool as it
* outputs random numbers.
*
* If this estimate goes to zero, the routine can still generate
* random numbers; however, an attacker may (at least in theory) be
* able to infer the future output of the generator from prior
* outputs. This requires successful cryptanalysis of SHA, which is
* not believed to be feasible, but there is a remote possibility.
* Nonetheless, these numbers should be useful for the vast majority
* of purposes.
The algorithm hasn't changed since 1999, when people started getting
interested in connection hijacking and TCP sequence number prediction
(anybody remember Juggernaut?).
- Solaris? (I don't think so with the latest Solaris (7, 8, 9, 10).
I read somewhere (probably here on bugtraq) that
it uses ever-changing OS-internal data structures and memory
pools as a partial source of entropy.
But again, I want confirmation from
someone who has seen, say, the OpenSolaris source code.)
This leaves
OpenBSD, FreeBSD, NetBSD and the like, and of course
Judging by nmap's evaluation of the IP stack, OpenBSD and FreeBSD have
very strong PRNGs as well. I haven't got access to a NetBSD system to
test with.
Windows family OSs.
Redmond seems to have botched the implementation again, even though they
imported the BSD stack for NT5. Judging by nmap's evaluation of one's
chances to successfully predict the TCP sequence numbers, it's possible
to degrade the internal state of the Windows IP stack by rhythmically
(but not necessarily rapidly) attempting to connect to a closed or open
port on the host, or one protected by the built-in Windows firewall.
Third-party firewalls don't seem to have this problem.
Note that I've only tested this on a single system, so it might be a
fluke or the result of wishful thinking or a miscalculation on nmap's part.
I imagine this fails if someone watches a movie and/or is providing some
input from userland at the same time, although I haven't tested it.
/exon