
Re: On product vulnerability history and vulnerability complexity



Steven M. Christey wrote:
> The fact that a product has a long history of bugs should not be
> regarded as an indicator of its current level of security compared to
> other products.
>   
Why? Past performance may not be a perfect predictor of future
performance, but it is very often one of the best predictors available.
It seems to me that past performance on vulnerabilities is a very good
predictor of future security. Security only changes when some external
factor impinges on the product's life cycle, such as the developers
learning about security or a new maintainer taking over. Even then, the
changes are gradual, because no one immediately re-writes the code base.

> I've been of the mindset lately that one should look at (1) how
> extensively a piece of software has been audited, (2) by whom, and (3)
> the complexity of any associated vulnerabilities and attacks.
This is very similar to the Sardonix conjecture, where we posited that
a code base's history of audits and auditors would predict its
security. I still believe that conjecture; Sardonix itself failed as a
project because we failed to attract a large body of code auditors who
would audit code in a consistent manner.

>   If you
> look at recent Sendmail holes, they are complex and not immediately
> obvious.
The vulns in Sendmail were gradually mined out. Performance in security
follows a similar trajectory to performance in a mine; past performance
predicts the future, approximately, with an eventual decline in
production of ore or vulnerabilities.
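
A toy model of that trajectory, with entirely made-up numbers: treat
discovery as a rate that decays exponentially as the easy finds are
exhausted. A sketch in C (the constants are illustrations, not
measurements from Sendmail or anything else):

    /* Toy "ore depletion" model of vulnerability discovery.
     * All constants are made-up illustrations, not measurements. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double initial_rate = 12.0; /* hypothetical: finds in year 1 */
        const double decay = 0.5;         /* hypothetical: per-year depletion */

        for (int year = 1; year <= 8; year++) {
            double expected = initial_rate * exp(-decay * (year - 1));
            printf("year %d: expect roughly %.1f new vulns\n", year, expected);
        }
        return 0;
    }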

>   Some years ago, Michal Zalewski had to invent the signal
> handler race condition vulnerability class (CVE-2001-1349) which is
> probably latent in many products, but rarely audited.  ISS had to
> perform some non-standard syntactic manipulations involving large
> numbers of special characters (CVE-2002-1337 [sic]).  More recently
> announced Sendmail issues have not been much simpler.
>
> I suspect that the complexity of discovered vulnerabilities *means*
> something about the relative security of software, compared to your
> normal piece of SMTP software that barfs on 100 "A" characters in the
> RCPT TO command, let alone your software with the standard XSS or SQL
> injection.
>   
An interesting idea, and I like it. When the vulnerabilities get complex
and weird, it would tend to indicate that the simple stuff has been
mined out, and the product is starting to become relatively secure.
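
For illustration, the "simple stuff" looks like the sketch below: a
hypothetical SMTP command handler (not from any real MTA) that copies
an attacker-supplied RCPT TO argument into a fixed-size stack buffer:

    /* Hypothetical handler; the kind of trivial bug that gets mined
     * out first. */
    #include <string.h>

    void handle_rcpt_to(const char *arg)
    {
        char recipient[64];
        strcpy(recipient, arg);  /* 100 "A" characters smash the stack */
        /* ... deliver to recipient ... */
    }

    /* The boring fix: bound the copy and terminate the buffer. */
    void handle_rcpt_to_fixed(const char *arg)
    {
        char recipient[64];
        strncpy(recipient, arg, sizeof(recipient) - 1);
        recipient[sizeof(recipient) - 1] = '\0';
        /* ... deliver to recipient ... */
    }

When bugs of this sort stop turning up, the easy ore is gone.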

At least, that is, secure against known patterns of vulnerabilities.
Then something new and weird comes along, like printf format string
vulnerabilities. If someone invents a new class of vulnerability, then
large volumes of code abruptly become vulnerable, and the auditing
history is irrelevant.
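
The format string class is a nice example of how innocuous the pattern
looked before the class was discovered. An illustrative sketch:

    #include <stdio.h>

    void log_message(const char *user_supplied)
    {
        printf(user_supplied);        /* vulnerable: "%x" leaks the stack,
                                       * "%n" writes to memory */
        printf("%s", user_supplied);  /* safe: fixed format string */
    }

Code that read as perfectly ordinary logging for years was suddenly
exploitable everywhere at once.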

On the other hand, when software has been *architected* to be secure,
e.g. by applying the principle of least privilege throughout,
modularizing the design, and confining privileged operations to the
smallest components possible, the product can remain secure even though
it may contain instances of the new class of code vulnerability.
Postfix is an example.
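
A minimal sketch of that least-privilege pattern (my illustration, not
Postfix's actual code): do the one privileged operation up front, then
drop privileges permanently before touching any untrusted input.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Privileged step, e.g. binding port 25, happens here. */

        /* Drop to an unprivileged uid/gid; 65534 is "nobody" on many
         * systems (an assumption for this sketch).  setgid() must
         * come before setuid(), or the gid can no longer be changed. */
        if (setgid(65534) != 0 || setuid(65534) != 0) {
            perror("drop privileges");
            exit(1);
        }

        /* From here on, a bug while parsing hostile SMTP traffic
         * compromises only this unprivileged process, not the
         * whole machine. */
        return 0;
    }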

> Let's not forget how Georgi Guninski in 2005 found a rather obscure
> security issue in qmail itself.  As I understand it, the exploit
> involved consuming resources in the 1GB range, but still - there was a
> bug.  Is qmail immediately suspect now because it had an overflow, and
> overflows have been known about for decades?  No - the vuln and
> exploit were rather complex, and found by one of the top researchers
> in the industry, while showing that even the most respected developers
> might not account for obscure architecture-dependent issues that were
> probably buried deep in include files.
>   
No, that just sets a rate. Based on one data point (crappy statistics,
but it is what we have), we can now forecast that qmail will have
approximately one vulnerability per decade :)

> And to beat a horse long thought dead, people thought that non-IE
> browsers were so secure a couple years ago, but researchers are taking
> a look at non-IE browsers, and they're not quite so bug-proof as
> previously assumed.  I forget where I saw this, but very recently
> someone said "this browser bug was fixed in IE a couple years ago."
>   
IMHO the biggest thing that makes Firefox on Linux more secure than IE
on Windows is that you don't run Firefox as root/administrator, so when
it gets hacked, it doesn't 0wn the machine.

> One difficulty is that we can't really know a product's full audit
> history.  If a researcher looks at a piece of software and finds
> nothing of interest, that doesn't get reported.  (Sardonix, we hardly
> knew ye.)
>   
Agreed. Sardonix was clearly not fun enough to engage the community, but
we still need some way to record "I audited this code and found nothing
wrong."

> It seems counterintuitive, but I'm immediately suspicious of any
> software that doesn't have a well-documented history of security
> vulnerabilities that show increasing complexity and novelty as the
> product matures.  Darwin's theory of evolution might well hold for
> software security.
>   
Kind of: absence of evidence is not evidence of absence, but that cuts
both ways. The absence of a vulnerability history does not indicate
that the product is secure or insecure; it indicates that no one has
looked, or at least that no one has reported looking.

Consider Postfix and qmail again; neither has any substantive history of
vulnerabilities, but both have a substantive history of fussing over
whether some arcane anomaly is a vulnerability or not. This indicates
that people were looking really hard at them. This is a very good thing.
We need some way to capture that.

Crispin
-- 
Crispin Cowan, Ph.D.                      http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com