> Maybe I'm alone in this, but I find web browser bugs like these to be
> among the most complex and difficult-to-understand vulnerabilities that
> get reported. An aspect of that complexity often seems to involve
> crossing several intended security "boundaries" in the process, taking
> advantage of design choices that, by themselves, don't seem to be that
> security-relevant. Example: one might think that non-random locations
> for software components would be a good thing, but it's a factor in a
> number of web client bugs. (Another aspect of that complexity comes
> from advisories that simply include exploit code using obscure
> components or elements but don't suggest where the issue actually
> lies, but that's a different matter.)

But isn't this crossing of security boundaries essentially caused by the same mental error that causes buffer overflows? Trusting untrustworthy input is at the foundation of each, isn't it?
If you create a boundary that says, "This is private space. Only trusted data can enter," yet you decide, for whatever supposedly legitimate reason, to allow input from some other space, isn't it incumbent upon you as the programmer to disallow all but "proper" input?
It appears to me that this chaining of weaknesses is nothing more than an extension of the same problem each weakness has individually, i.e., the programmer's failure to do "bounds" checking. Granted, it's more complex to figure out how to exploit the weaknesses, but the exploit is possible because of the same naive trust that fails us every time.
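To put that in code (just a toy sketch in C, not code from any of the advisories under discussion): the classic overflow and its classic fix both come down to whether the programmer checks the input against the bounds of the space it's entering before trusting it.

    #include <stdio.h>
    #include <string.h>

    /* Naive trust: copy whatever arrives, however long it is. */
    void copy_blindly(char *dst, const char *src)
    {
        strcpy(dst, src);            /* overruns dst if src is too long */
    }

    /* "Bounds" checking: refuse anything that does not fit. */
    int copy_checked(char *dst, size_t dstlen, const char *src)
    {
        size_t n = strlen(src);
        if (n >= dstlen)
            return -1;               /* reject rather than truncate silently */
        memcpy(dst, src, n + 1);     /* copy includes the terminating NUL */
        return 0;
    }

    int main(void)
    {
        char buf[8];
        const char *attack = "far too long for an 8-byte buffer";

        if (copy_checked(buf, sizeof buf, attack) != 0)
            puts("rejected: input does not fit the space it was headed for");
        return 0;
    }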
We need a paradigm shift in programming from "allow all but the known bad" to "disallow all but the known good", don't we?
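For instance (a hypothetical sketch, with a made-up is_valid_username() routine rather than anything from a real product), validating a field against an explicit list of known-good characters rejects everything the programmer hasn't thought of:

    #include <ctype.h>
    #include <stddef.h>
    #include <stdio.h>

    /* "Disallow all but the known good": accept the field only if every
     * character is on the explicit allow-list; everything else is
     * rejected by default, including attacks nobody has imagined yet. */
    int is_valid_username(const char *s)
    {
        size_t len = 0;

        if (s == NULL || *s == '\0')
            return 0;                /* empty input is not "known good" */

        for (; *s != '\0'; s++, len++) {
            if (!isalnum((unsigned char)*s) && *s != '_' && *s != '-')
                return 0;            /* off the allow-list: refuse entry */
        }
        return len <= 32;            /* keep the length in bounds as well */
    }

    int main(void)
    {
        printf("%d\n", is_valid_username("paul_s"));          /* 1: all known good */
        printf("%d\n", is_valid_username("paul's; rm -rf"));  /* 0: rejected by default */
        return 0;
    }

The inverse approach, stripping out whatever characters happen to be on this year's known-bad list, is exactly the "allow all but the known bad" habit that keeps failing us.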
Paul Schmehl (pauls@xxxxxxxxxxxx)
Adjunct Information Security Officer
The University of Texas at Dallas
AVIEN Founding Member
http://www.utdallas.edu