Re: [PATCH] Remove absolute paths from gpg.rc
On Mon, Mar 26, 2007 at 07:34:51PM -0400, Derek Martin wrote:
> On Mon, Mar 26, 2007 at 06:45:37PM +0000, Dave wrote:
> > I'd counter that a sysadmin who installs software should do a
> > background check to ensure that the thing isn't riddled with
> > security holes unless the program was specifically requested by the
> > system owner.
>
> He should do it whether or not it was requested by the owner, because
> that's simply part of a system administrator's job.
That's only part of the sysadmin's job if the owner delegates it to him.
> If the owner
> wants a piece of software that the sysadmin knows is bad or wrong for
> the job, it's his *job* and his *duty* to make sure he tells the owner.
If the owner is telling the sysadmin what software to install, the owner has
already done the assessment of which software is right for the job. (He
might've done so using an empty function, but a sysadmin's domain isn't to
infringe on the owner's decisions.)
> If the software breaks, and the company loses money because of it,
> it's the sysadmin who will get fired, not the owner.
The fact that the sysadmin will get fired instead of the owner is a practical
matter of corporate structure: the lowest guy down who can easily be blamed is
the one who gets fired.
> The owner may
> choose not to listen, but you have to cover your own a$$. Get it in
> writing, whenever possible.
That's why this is very sound practical advice if you're a sysadmin without a
written description of your job ... and probably even if you were given one,
since they'll probably try to blame you anyway.
> [Yes, seriously. If you're a sysadmin working for a manager/owner who
> insists on making really bad decisions that you know will cause
> serious problems, write down the problems you think it will cause, and
> get him to sign off on it. If you can not do this, you should
> carefully consider whether you should continue to work there. Asking
> him to sign off on his decision might save you your job, and might
> even convince him that he's potentially making a grave mistake. CYA.]
Again, this is some very wise practical advice, but not everything that _is_
necessary _should be_ necessary. We simply don't live in a perfect world.
That's why we all use Mutt.
> That said, even if the sysadmin does this, there still may be
> undiscovered bugs. He should not be expected to be held responsible
> for those.
It depends on the application. You're making certain assumptions about the
organization throughout your entire email. There are many situations where the
sysadmin's job description includes auditing software before installing it. In
some cases, the organization will allow the sysadmin to subdelegate the audits
to an auditor, but in other cases, the sysadmin is expected to do the audits
himself. Fundamentally, it's the sysadmin's responsibility to ensure the
security of whatever software he elects to install, since it's his job to ensure
the system functions right.
> There may also be undiscovered weaknesses inherent in the
> design of the program which can be exploited by local users to gain
> access to data they should not have.
If there is undiscovered _anything_ in the design of a program, the design of
the program isn't clear, simple, and concise, and the software is obviously not
UNIX-compatible, so we're wasting our time discussing it. The roles I provided
are for a UNIX-compatible system. A system that doesn't abide by the UNIX
philosophy will have a far more difficult time separating out roles, since the
system doesn't provide bulletproof tools for separating roles fully, the way the
UNIX philosophy does.
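To make that concrete, here's a minimal sketch (mine, not anything from this
thread; the paths are just examples and the ownership test is simplified) of
how ordinary UNIX ownership and mode bits already separate the sysadmin's
domain from the user's:

    #!/usr/bin/env python3
    # Hypothetical illustration: which role controls a given policy file,
    # judged purely from standard UNIX ownership and permission bits.
    import os
    import stat

    def who_may_write(path: str) -> str:
        st = os.stat(path)
        if st.st_uid == 0 and not (st.st_mode & (stat.S_IWGRP | stat.S_IWOTH)):
            return "sysadmin (root) sets this; users may only read it"
        if st.st_uid == os.getuid():
            return "your own domain; change it as you please"
        return "someone else's domain; keep out"

    if __name__ == "__main__":
        # Systemwide policy file vs. a user's own policy file:
        for p in ("/etc/Muttrc", os.path.expanduser("~/.muttrc")):
            if os.path.exists(p):
                print(p, "->", who_may_write(p))

The boundary is enforced by the kernel, not by anybody's job description.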
> Most system administrators are
> not software engineers, and can not be expected to identify these.
That's just another reason for programmers to stick to the KISS principle.
> Especially when the software they're managing is not Open Source,
> which is quite often the case.
If you're not dealing with free software (or at least "open source" software),
and the software doesn't have a clear and concise function, your first job as
sysadmin should be to explain to your boss how much of a raise you should be
getting for this crappy job you're being asked to do with the wrong tools.
> > (Basically, the owner is always right, as far as the sysadmin is
> > concerned. If the owner wants advice, it's his own responsibility
> > to ask for it. If the sysadmin can't live with himself and with his
> > duty to the owner of the system he's in charge of, then he should
> > give up one or the other.)
>
> This is just stupid. Utterly and completely. Owners hire sysadmins
> because a) they don't know how to do the job and/or b) they have more
> important things to do than keep track of all the stupid security bugs
> in the software they use, e.g. running the business and making sure
> the business stays profitable. If that weren't true, they'd do it
> themselves.
This is why it's always important when you start working to get a written job
description, and to make sure it's not a whole bunch of self-contradictory
nonsense. That forces your boss to give you a clear, simple, and concise
explanation of your own job, if nothing else.
> In a publicly traded company, the owners are the stock holders.
I hope you're not going to try showing me by example why stock holders are the
ideal owners, because they always make the right decisions, because they're
really smart, and really want to build up the company.
> The
> vast majority of stock holders are not interested in what software
> runs on the machines the company uses, or who the sysadmins are.
Well, the agents of those stock holders (i.e., the administrators of the
mutual funds, trusts, or estates that hold the shares) aren't
interested in learning how to run a business properly. (If they were, they'd
probably be entrepreneurs instead, since the profits are far better, if you're
any good.) They're simply interested in maximizing profits over the term they
plan to hold the stock. That's why Ford is building cars
that nobody wants to buy, for example. Shareholders know nothing about
anything. That's why a huge percentage of mutual funds fail to outperform the
Dow Jones average. They could've just bought stock according to the (public)
recipe of the Dow Jones Industrial Average, and maintained it in the published
ratios, and they'd turn a better profit, and as an added bonus, they wouldn't
have to pay the salaries of the entire financial analysis staff.
> They
> don't care one iota about whether a particular piece of software
> honors $PATH, and couldn't care less what security model the infosec
> guy uses, as long as it doesn't negatively impact the share price.
They don't care about anything, since they want to invest nothing but money in
a company. When profits aren't as high as they were hoping, then they suddenly
wake up and decide to fire the design teams for Pontiac, Cadillac, Buick, et al,
so you end up with all new GM cars (save Saturn, whose design team wasn't nuked)
looking almost exactly alike. Did GM recover? No. Does GM have a chance at
recovering, now that they've discontinued almost every top-selling car they had
before? Not really. Does anybody care? Yes, the shareholders periodically
wake up and tell the CEO what to do, even though they know even less than he
does. That's why Toyota has sold more hybrids than all of our "big three"
combined (even though both Ford and GM had working hybrid prototypes long before
Toyota ... the shareholders didn't care ... after all, many of them are also
heavily invested in the oil industry ... who'd want cars to use less fuel???).
> Having a business degree, I can tell you that it's the business
> owner's responsibility to oversee the overall success of the company,
correct
> by delegating different aspects of running the business to people who
> are good at those tasks.
That's ridiculous. It's his responsibility to oversee the overall success of
the company by whatever means make sense. It's stupid for the owner of a tiny
company to delegate administration of the only computer necessary, an old DOS
box that the guy's been using for 20 years, to some new sysadmin. Owners don't
have to delegate anything; their job is to ensure the success of the company.
Delegating responsibilities to others is nothing but a tool available to an
owner.
> If he fails to delegate the right
> responsibilities to the right person, the business will fail.
For example, if he's not bringing in enough to get a sysadmin and doesn't really
need one to begin with, getting one raises the chances that his business will
fail.
> Having been a system admin/security engineer for 12 years, I can tell
> you it's the sysadmin's job to make sure that the software the company
> is using won't shut down the company, get all its proprietary data
> stolen, or otherwise cause the company to lose money.
You seem to be speaking out of both sides of your mouth. Over here, you seem to
imply that a sysadmin needs to audit software...
> If he fails, he
> will be *fired*... unless he can legitimately show that the fault lies
> with the programmer, which in most cases is true (lucky us).
...while here, you seem to imply that there's no need for an audit.
> The only time the system admin should trust the owner to make the
> right decision is when the system admin is the owner, and he has the
> knowledge and experience to do the job, and the time to maintain it
> (which is basically never).
The notion of a sysadmin "trusting" his own system's owner is ludicrous. You
never need to trust the owner of something - the thing is his to do with as he
pleases. There's an element of trust from the owner to the sysadmin (for
example, that the sysadmin won't screw everything up), but never the other way
around. (The sysadmin isn't involved if the owner decides to screw up his own
system, unless the owner instructed the sysadmin to block him.)
> Otherwise, he needs to find someone else
> to do that for him, and trust in the sysadmin's decisions. That's how
> business works.
You're again giving fairly sound business advice to a system owner. However,
we're not talking about business; we're talking about system ownership, and
delegation of responsibilities on the UNIX platform.
> > > In *your* case, the user doesn't own the system. There are other places
> > > where the user partly owns the system (well, participates to the
> > > decisions, at least).
> >
> > If users on a particular system also have other roles (partners in
> > the entity that owns the system, voters in an entity with an
> > advisory role to the entity that owns the system, etc.), those are
> > separate roles.
>
> Pure doublespeak. User's roles are whatever they are.
Okay, then we're not speaking the same language. My language is precise, while
yours is doomed to always be ambiguous. My language allows the full freedom of
expression that yours does (by allowing entities to have multiple roles), while
following the KISS principle. Therefore, my language is of better quality, and
ought to be used instead of yours.
> Often they are
> decision-makers, and often they are not.
Again, your language is ambiguous, just like the responsibilities you assign to
different entities. Ambiguity is an enemy of efficiency.
> It doesn't matter... in the
> end, if the code is written weakly, and someone who has sufficient
> legitimate access (which may be none, or may be some, depending on the
> case) wants to exploit that weakness, they're going to do it.
Again, it's a question of fault.
> Who the
> decision-makers are, and what their titles are, is totally irrelevant.
A decision-maker who screws up a decision within his domain didn't do his job
right, by definition. A user isn't a decision-maker, and therefore isn't
expected to be screwing up decisions. (Consider, for example, the user of a
typical point-of-sale system. You can't count on a $6/hour employee to make
decisions
that he can easily screw up and cost the company millions.)
> If a user opens holes in the system which can be exploited, he is at
> least partially to blame, whether he's the janitor or the president,
> whether he has a security clue or not.
If a user opens holes in the system, he shouldn't be able to jeopardize anything
that his account doesn't control. If he can, the system design isn't secure to
begin with, and the fault lies somewhere up the chain of command.
> If the programmer fails to
> make adequate protections against the user doing those things, he also
> is at least partially to blame.
Blaming the user is bad enough. Blaming the programmer is taking a misguided
idea to the extreme. The programmer has only one job: to pick a job, and to
make a program that does it right. To you, the programmer needs to have many
more skills, and so you'll need to compromise, since nobody in the world is the
best at several different things. (The best programmer isn't the best
psychologist, for example.) In other words, you're guaranteeing that a program
will never be done right, unless you have a full committee of ten trillion
different "experts" on-staff at the software development firm. Well, I've got
news for you: following the KISS principle allows a single programmer with no
Psychology classes behind him to fire your entire software development firm
along with all its "experts," and to produce a piece of software that does its
job more efficiently, and is easier to debug and maintain, by saving unnecessary
lines of code ... oh, and it's more logical, at the end of the day. (A program
that deletes a file is more logical than a program that prints a dialog box,
waits for the user to click "Yes," and then deletes a file. If you want to
print a dialog box, wait for the user to click "Yes," and then delete a file,
that's three jobs, and ideally, you'd have three separate programs. In
practice, though, the first two can be combined fairly easily into a UI dialog
box tool, since the dialog box plus the wait for the click together amount to
a single job, getting the user's approval, which probably ought to be done by
a single UI tool.)
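For what it's worth, here's a minimal sketch of that single-job approval tool;
the script name and prompt handling are purely illustrative, not something any
existing package ships:

    #!/usr/bin/env python3
    # confirm.py -- hypothetical example: one tool whose only job is to get
    # the user's approval and report it through its exit status.
    import sys

    def confirm(prompt: str) -> bool:
        """Ask a yes/no question on the terminal."""
        answer = input(prompt + " [y/N] ").strip().lower()
        return answer in ("y", "yes")

    if __name__ == "__main__":
        prompt = " ".join(sys.argv[1:]) or "Proceed?"
        # Exit 0 on approval, nonzero otherwise, so the *separate* program
        # that actually deletes the file can simply be chained after it:
        #     confirm.py "Delete foo?" && rm foo
        sys.exit(0 if confirm(prompt) else 1)

The deletion stays in rm, where it belongs; neither tool has to know anything
about the other.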
> But the programmer should know
> better, whereas the user can not be trusted at all... because he just
> might be the janitor.
If you trust a programmer from outside your company more than a user that you
hired yourself, you've got a problem.
> The only question is, what is the appropriate level of paranoia? The
> answer depends entirely on the situation, and usually, just as here,
> agreement is hard to come by.
Put another way, when you ambiguate everything by using a poor quality language,
paranoia becomes necessary if you want a legitimate shot at keeping track of all
the expensive "experts" you're employing in order to clean up your ambiguous
mess. I prefer to follow the KISS principle so I can avoid paranoia.
> > Said otherwise, the sysadmin has the domain of setting systemwide
> > policy within the guidelines of the system owner, while the user has
> > the domain of setting userwide policy within the guidelines of the
> > sysadmin.
>
> More doublespeak. The sysadmin's job is to make sure the users have
> the computing resources they need to do their job, while protecting
> the company's assets (data and intellectual property). Period.
Again, you're assuming a particular type of organization. What you said above
isn't always true. What I said above is. Mine is the definition. Yours is
simply an ambiguous statement about what his job amounts to in a particular type
of organization (which you're, presumably, familiar with).
> As a
> matter of practicality, this often means enforcing security policies,
> because security breaches result in breach of confidentiality of
> company secrets, or loss of availability of the computing resources or
> data stored therein.
Again, if the system owner doesn't want security policies enforced, then your
statement above is outright wrong. My statement above is a definition, and is
therefore always correct.
> The IT role is primarily a service role; the
> sysadmins do what the users want.
What if they don't want security policies enforced???
> The security department sets
> security policies, though often the security department is the
> sysadmin.
You've got it backwards - the sysadmin is in charge of the system. He can
subdelegate the job of setting security policies for the system to a security
department.
> In a well-run company, there must be agreement between the
> security people about what level of security is required to protect
> the company, while allowing the users flexibility required to do their
> work.
Again, you're ambiguating definitions. The security people are responsible for
assessing the security level required to protect the company. The sysadmin is
responsible for assessing the flexibility level required to allow users to do
their jobs efficiently. If the security people and the sysadmin can't agree,
the sysadmin takes precedence, since he's charged with allowing his users to
do their jobs efficiently. The security people can stay up all night figuring
out a patch if need be, but the flexibility required to get work done
efficiently is non-negotiable. A system that isn't fully secure is at least
partially useful (like all the sendmail boxes that ran for years with buffer
overflow vulnerabilities, and got plenty of work done while they were waiting
for an attacker to come along), but a system that can't be used for work is
totally useless, even with bulletproof security.
> Neither group is solely responsible for making that decision,
> except when the two groups are one and the same, as with a person's
> private PC, or perhaps in extremely security-sensitive businesses,
> where the infosec group has been given complete autonomy by the
> ownership to do whatever they feel is necessary, i.e. if the security
> of the company is breached, the company very likely will cease to be
> functional.
In those cases, the security team doesn't receive its mandate from the sysadmin;
rather, it and the sysadmin both receive their mandates directly from the system
owner himself, with the additional note that the owner would rather have work
not done at all than done insecurely. In that case, it's the sysadmin's
responsibility to jump through the security team's hoops, rather than the
other way around.
> Still, uneducated users continue to complain because
> their PC gets broken into, or they lose data because of something
> stupid they did. Hang around a PC store, you'll meet them. You'll
> hear them complaining; and you'll note that they're not complaining
> about how stupid they are...
The PC store didn't make them stupid. God did. Why should they complain to the
PC store??? They're not _that_ stupid...
> they're complaining that the software
> wasn't written better to protect them
I get it. Stupid users who don't understand the purpose of software (to do what
the users ask it to) should redecide the purpose of software (to do The Right
Thing (TM), whatever that might be in any given scenario).
My solution to stupid users is to educate them, if they're working for me. If
they aren't working for me, let them stay stupid. This simple rule allows me to
use programs written by the best programmers, rather than those written by the
best psychologists. Needless to say, programmers program better than
psychologists.
> -- poor, ignorant users that
> they are. And sure, most of those are Windows users, because that's
> what PC stores sell. Hang around enough mailing lists for OSS
> software, and you'll see them there, too.
There's no shortage of idiots using most popular operating systems. Since the
number of idiots is far greater than the rest, no operating system can be
popular without counting plenty of idiots among its users.
> You, sir, have not the slightest of clues.
if you say so
> The only area where these things often don't hold up is the public
> sector, where up is down and people operate by their own fantasy
> rules, as at most government agencies. You'd fit right in there.
ditto
- Dave