Re: [Full-disclosure] Why Vulnerability Databases can't do everything
On Sat, 16 Jul 2005, Jason Coombs wrote:
> Do these things, some of which require modifications to the present
> fixed-opcode structure of the programmable CPU, and all outsider attacks
> against software will fail to accomplish anything other than a denial of
> service.
No. Do these things and you stop the execution of code that wasn't
installed intentionally. MAYBE. And you better not have a flaw in any of
these implementations.
You do NOT stop outsider attacks against software. You stop execution of
malicious code.
You do not stop, for instance, SQL injection attacks. Or logic problems
in code that release data unintentionally. Or the hundreds of other
things I see when I look at custom programmed web apps. Etc.
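To make that concrete, here is a toy sketch (Python with the built-in
sqlite3 module; the table and values are made up purely for illustration).
The processor only ever executes the database engine's own, intentionally
installed instructions, and the attack works anyway:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'her secret')")

    # Attacker-supplied "data", e.g. typed into a login form:
    name = "x' OR '1'='1"

    # The query is built by gluing data into code. No foreign machine code
    # ever runs - only the database engine's own instructions - yet the
    # attacker rewrites the query's logic and gets every row back.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()
    print(rows)   # [('alice', 'her secret')]

    # The fix lives in the application, not the processor: keep the data
    # out of the code with a parameterized query.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
    print(rows)   # []

No amount of hardware-enforced code/data separation touches this, because
from the hardware's point of view nothing unusual happened.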
And there are some very big gray areas. The easiest one to mention is
"SQL". Is a SQL statement (in Oracle, MySQL, MSSQL, Informix, DB2, etc)
code or data? Macros are the next big area - should users be able to
write an Excel spreadsheet with formulas and/or macros and share it with
others in the workgroup? Is that Excel sheet data or code? And why should
programming (such as writing Excel formulas) be relegated to a priestly
caste?
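The same gray area is easy to demonstrate outside of SQL. In this toy
sketch (Python again, with a made-up two-cell "spreadsheet"), the formula
is nothing but a string - data - right up until something interprets it:

    # To the file system, to email, and to the CPU, both cells are data.
    sheet = {"A1": "41", "B1": "=A1+1"}

    def evaluate(cell):
        # A value starting with '=' is a formula: the moment we interpret
        # it, the "data" becomes "code".
        value = sheet[cell]
        if not value.startswith("="):
            return value
        expr = value[1:]
        for name, cell_value in sheet.items():
            if name in expr and not cell_value.startswith("="):
                expr = expr.replace(name, cell_value)
        return str(eval(expr))  # eval() is the gray area in miniature

    print(evaluate("B1"))   # 42

The same bytes are plain data when the sheet is saved or mailed to a
coworker, and code the instant the recipient opens it.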
I am not defending computers and processors. I am defending data. There
is a big difference between these two views of information security.
> With a little common sense applied to the design of computers, the only
> threats anyone would have to worry about are data theft, physical device
> tampering/hacking, and insiders.
Yes, I'm actually a lot more worried about these threats than someone
spreading a worm through my system. I'd rather neither happen, but I'll
take the worm over someone exposing information I'm supposed to protect.
Also, let's say that you run a web site hosting center. And some of your
customers - get this - want web sites that run CGI code or something
similar. You have to provide a way to upload that code for your
customers, right? What's to stop a worm from using that vector?
> The company that achieves objectives like these will own the next 100
> years of computing. Everyone who believes that security flaws in
> software are worth effort to discover and fix is very badly confused.
I disagree with the first part of that.
I agree with the second part, somewhat. If you design an architecture so
that a flaw in any one piece of software compromises your data, yes, you have
screwed up *badly*. If your only defense to protect your sensitive data
is "Well, I'm patched", you are in bad shape. But I'd say the same if
your defense was processors that separate code and data.
True information security depends on a lot more than preventing buffer
overflows. It depends on good coding practices, testing, personnel and
physical security processes, auditing, secure architectures that allow for
failures in one or more components (number of components required to fail
depends upon your risk analysis), work practices, user and IT support
personnel education, etc.
> Solving present-day systemic defects in the design of computing
> architectures, now that's important.
Perhaps, but not nearly as important as designing secure systems. Too
many "secure" systems have turned out not to be quite so secure. I don't
see why the implementation of your rules would be any less prone to bugs.
My favorite example to illustrate this point is SSH. SSH was *designed* to
be secure, yet it is probably responsible for more compromised Linux hosts
than any other program.
> Windows will no longer exist within 10 years because everyone will have
> realized that it was built on a flawed premise around defective
> hardware.
This has gone from a discussion of an idea that is well known in security
circles to simply anti-Windows rhetoric. I hate to ruin your day, but you
can use insecure components to build a secure system. You just have to know
the components' limitations and not rely on the components to do things
they cannot do. This is already done with systems regarding reliability
(RAID for instance - it doesn't solve the problem of hard disk failure,
but it mitigates it in a way that is reasonable to the person who
configured the RAID). Security is no different - if you know the
limitations of the components you run, you can plan for that and design a
system that is secure.
What is truly dangerous, though, is the assumption that any piece of
software or hardware is perfect, even at implementing a single feature.
When that assumption proves false, you will regret it. (Hence the SSH
example above - anyone who used SSH combined with a VPN and patched both
reasonably often was not vulnerable to outsiders exploiting the SSH holes;
someone who assumed that either the VPN or SSH could have a flaw, and was
only willing to accept the risk of *both* having unpatched flaws at the
same time, was doing pretty well when the SSH vulnerabilities were coming
out on what seemed to be a monthly basis.)
There is no silver bullet. Not even for computer security.
--
Joel