
RE: Update: Web browsers - a mini-farce (MSIE gives in)



> -----Original Message-----
> From: Valdis.Kletnieks@xxxxxx [mailto:Valdis.Kletnieks@xxxxxx]

> The point people are missing is that covering all (or even anywhere
> *near* "all") the "unfortunate sequences" or "corrupted files" is
> *really really* hard.  Quite often, "unfortunate sequence" means
> something like "issue the command to open a file" followed by "hit
> 'cancel' while the program is waiting for the next block from the disk
> to feed to a decompressor routine..."

If the *only* bugs we saw in software nowadays were that esoteric, I'd agree
with you.  But I'd wager that a good 80-90% of the bugs aren't in this
category; they're "someone typed 'supercalifragilisticexpialidocious' where
the programmer expected them to type 'foo'" bugs: basic buffer overrun or
input validation cases that automated testing could have caught.
Unfortunately, people use arguments like the "we can't catch every bug" one
you're making to excuse themselves from looking very hard for *any* bugs.
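
To make that concrete, here is a minimal sketch, in Python, of the kind of
automated test that catches this class of bug: hammer an input routine with
oversized junk and check that it fails cleanly.  The parse_name() function
and its 64-character limit are hypothetical stand-ins for whatever routine
actually handles untrusted input; nothing here comes from any particular
product.

    import random
    import string

    def parse_name(data):
        # Hypothetical routine under test: expects a short name like 'foo'.
        if len(data) > 64:
            raise ValueError("name too long")
        return data.strip()

    def fuzz(rounds=1000):
        for _ in range(rounds):
            # Random printable junk, from empty to far past the limit.
            junk = "".join(random.choices(string.printable,
                                          k=random.randint(0, 10000)))
            try:
                parse_name(junk)
            except ValueError:
                pass  # clean rejection is fine; a crash or hang is the bug

    if __name__ == "__main__":
        fuzz()
        print("survived the junk input")

A test this crude costs an afternoon to write and run, which is rather less
than the heat death of the universe.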

For another example, consider the recent post about a back door in Hawking
routers.  Apparently no one at Hawking ever thought to run 'nmap' against
their router before shipping it.  Should we excuse them for that because
they "had to ship before the heat death of the universe", as you put it
earlier?
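
For what it's worth, that check is cheap: 'nmap <router address>' is one
command, and even without nmap a few lines of Python will report which TCP
ports a device answers on.  This is just a sketch of the idea; the
192.168.1.1 address and the port range below are placeholders, not anything
specific to Hawking's hardware.

    import socket

    def open_tcp_ports(host, ports):
        # Plain TCP connect scan: connect_ex() returns 0 on success.
        found = []
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.5)
            try:
                if s.connect_ex((host, port)) == 0:
                    found.append(port)
            finally:
                s.close()
        return found

    if __name__ == "__main__":
        # Point this at the device under test.
        for port in open_tcp_ports("192.168.1.1", range(1, 1025)):
            print("open:", port)

Anything listening that isn't in the spec -- an undocumented telnet or web
interface, say -- is exactly the sort of thing that should block a ship
date.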

> How much would it have added to development time to have closed *all* the
> holes *up front* (including *thinking* of them) to stop Liu Die Yu's
> "Six Step IE Remote Compromise Cache Attack"?

Multi-step IE exploits are another category altogether.  A lot of them stem
from one basic design decision -- the "security zone" model.  I'm
increasingly of the opinion that the zone model is fundamentally flawed and
there is no way to make it completely secure.  But that's another topic.