
[IP] more on Sony's Escalating "Spyware" Fiasco

Begin forwarded message:

From: Dave Crocker <dhc2@xxxxxxxxxxxx>
Date: December 2, 2005 6:01:29 PM EST
To: dave@xxxxxxxxxx
Cc: ip@xxxxxxxxxxxxxx, "Synthesis: Law and Technology" <synthesis.law.and.technology@xxxxxxxxx>, Bob Hinden <bob.hinden@xxxxxxxxx>
Subject: Re: [IP] more on Sony's Escalating "Spyware" Fiasco
Reply-To: dcrocker@xxxxxxxx

> Blaming Microsoft for software that requires you to click OK seems as silly as blaming GM if someone pumps bad gasoline into your car, no?

No.

The human factors (usability, interaction design, cognitive modeling, decision context, etc.) issues are entirely different.

Presenting users with a simple pop-up to click presumes a number of things inappropriately and ignores a number of essential concerns.

Some examples:

1. Users are expected to fully understand the security model of their system. Since computer experts often don't, placing such a burden on non-technical consumers is quite simply silly.

2. The messages that are displayed are cryptic, incomplete and tend to be full of jargon. Even with a good technical model, a user often has difficulty knowing what is going on.

3. The more dangerous a user interaction is, the more important it is to protect against the user's performing the action automatically, rather than having to deliberate on the choices. Users must click "OK" so frequently that it is far too easy to click OK out of habit.

4. Related to this is the meta-point that users are burdened with so much "system administration" work that they MUST develop a habitual response, so that they can return to doing their primary activity. The habitual response works fine... except when it doesn't.


> People bought the CD and clicked OK because they trusted Sony, not because they trusted Microsoft to protect them against Sony, surely?

Clicking OK is taken to mean informed consent. The reality is that it means nothing of the sort.


> Since when did anyone trust Microsoft? Did anyone not wearing a tinfoil hat at the time remotely suspect that we needed protection against Sony? Why should Microsoft be more prescient?

When a product purports to have safety features, there should be a good basis for believing that the features will be effective. In this case, there is ample basis for knowing that they will be INeffective.

The design of critical user interactions needs to pay far more attention to the nature, capabilities and preferences of the average user.

Unfortunately, any serious effort along these lines means finding ways to reduce the overall user burden of system administration, so that critical user interactions are much more distinctive and rare.

d/
--

Dave Crocker
Brandenburg InternetWorking
<http://bbiw.net>



Archives at: http://www.interesting-people.org/archives/interesting-people/