Re: Re: Re: Solaris telnet vulnerability - how many on your network?
On 16 Feb 2007 thefinn12345@xxxxxxxxx wrote:
> I believe in the early 90's there was a serious problem discovered in intel
> chips that allowed certain standard code to be run to overflow programs
> arbitrarily and gain access to operating systems in an administrative
> capacity.
>
> Also I remember the redhat (back in the day) repository being hacked and
> backdoored versions of programs being put into it. I believe this also
> happened to an early version of debian or fedora at some point also.
>
> But I think you miss the point.
>
> When they aren't preparing for security problems, the job of most security
> professionals is to observe and react to these kinds of security problems.
>
> The observer will exploit anything you are lax on. Discarding a security
> concern because it doesn't seem important or of value to you is kinda stupid,
> you should probably go find some other kind of work. Everything is important,
> everything should be examined when and if possible. Thus the thread certainly
> has merit.
As mentioned in "Reflections on Trusting Trust", you need to check
everything: your code, the code of the OS loader, the OS, the compiler,
the motherboard... etc.
Only, this is about trust, and at some point you need to say: resource-
and threat-wise, my risk stops here. It is a risk, and therefore I am
taking a chance.
You can't secure everything, but you definitely need to be aware of what
you do not secure.
As an example I like to use, not directly related to coding: when building
secure networks with perimeters, people usually face two main choices on
one issue:
1. Secure the perimeter, and consider everything inside it secure.
2. Secure the perimeter, then secure what's inside as well.
There is no right or wrong, there is only what's right for you. The choice
is not always easy.
I'd normally strive for #2, but can't always choose it for obvious
reasons.
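To make choice #2 concrete, a minimal sketch (the interface addresses and
subnets are hypothetical, and the rules are illustrative, not a complete
policy): alongside the perimeter firewall, each internal host keeps its own
packet filter, so a breach of the perimeter does not expose everything
behind it.

```shell
# Perimeter firewall (hypothetical edge box): only forward HTTP to the DMZ host.
iptables -P FORWARD DROP
iptables -A FORWARD -p tcp -d 10.0.1.10 --dport 80 -j ACCEPT

# Host firewall on an internal machine (choice #2): traffic that has already
# crossed the perimeter is filtered again. Allow loopback, SSH only from the
# admin subnet, and replies to established connections; drop everything else.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp -s 10.0.2.0/24 --dport 22 -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```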
> It really makes me giddy when I see posts by trolls saying that security
> through obscurity isn't really important, or that examining a possible act of
> malice WITHIN one of the companies that is giving you software is not really
> an important factor.
Security by obscurity works (though far more often when employed in
attacking, with the attacking side protecting itself).
Security by obscurity is an amazing tool, but when used alone it is
useless, as when it is blown to bits, nothing remains to protect you. It
must be a part of your arsenal, not the sole defender.
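As a sketch of that point (the directives are real OpenSSH options, the
values merely illustrative): moving sshd off port 22 is pure obscurity and
falls to the first port scan, so it is only worth anything layered on top
of a real control such as key-only authentication.

```shell
# /etc/ssh/sshd_config (fragment)
Port 2222                     # obscurity: cuts scan noise, stops no one determined
PasswordAuthentication no     # the actual control: key-based logins only
PermitRootLogin no            # and no direct root logins either way
```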
> Even if it isn't an act of malice BY THEM, perhaps they have been hacked at
> the very top levels of their software storage or their source code itself.
> Perhaps something has gone wrong (what? no, couldn't be?).
>
> Dismissing it is as stupid as dismissing the possibility that running some
> unnamed, unknown executable on your windows box isn't a problem.
>
> Scarey stuff. The job is to be paranoid. Not to be dismissive of those who
> ARE.
>
> TheFinn.
>