[IP] Novel way of examining Google?
Begin forwarded message:
From: Thomas Lord <lord@xxxxxxx>
Date: September 3, 2006 7:55:53 PM EDT
To: David Farber <dave@xxxxxxxxxx>
Subject: Novel way of examining Google?
For IP?
-t
=======
People interested in the ethics of Google might find it useful
to review some source materials:
Google's statement of corporate philosophy:
http://www.google.com/corporate/tenthings.html
An article published in 2003 in Wired, "Google vs. Evil",
by Josh McHugh. It contains a review of some controversial
issues and some interesting comments (and "no comment"s)
from Google:
http://www.wired.com/wired/archive/11.01/google_pr.html
It's a tricky business to play armchair critic to a company like Google. In
spite of that, I'd like to offer what I think is a fairly novel take
on why Google is in an ethically tough spot and how I think
they (and we) should respond.
Consider Google's mission statement:
Google's mission is to organize the world's information
and make it universally accessible and useful.
And later we'll relate that to their motto:
Don't be evil.
Three concepts are central to Google's mission:
1. "the world's information"
2. "accessible"
3. "useful"
While in a vague sense these terms suggest an appeal to
universal values, in fact, they lack universal meaning and
are inherently value-neutral.
"The world's information" does not literally mean all information
that is, in principle, available. Nor does it literally mean all
information to which access is granted by, for example,
a "robots.txt" file. Google must, necessarily, make choices of
inclusion and exclusion. Google must, necessarily, choose
strategies of discovery (e.g., web crawling) and acquisition
(e.g., search logging, gmail, book scanning). These choices
reflect Google's narrowly defined internal values (Sergey's
opinion, ultimately, they say) mediated through external factors
such as regional jurisprudence (e.g., a lawsuit by Scientologists
that results in excluding certain web pages). The delineation
of "the world's information" by Google therefore forms a
privatized, anti-democratic, hegemonic project -- it simply
cannot express universal values.
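(For concreteness, here is a minimal sketch, in Python, of the kind of
choice being described: even where a site's robots.txt grants access, the
crawler still applies its own inclusion policy. The URLs, the bot name,
and the policy function below are hypothetical illustrations, not
Google's actual rules.)

    import urllib.robotparser

    # A hypothetical crawler deciding whether to include a page in
    # "the world's information".  robots.txt only answers the first
    # question; the second is the crawler operator's own choice.
    def should_index(url, robots_url, excluded_topics):
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url(robots_url)
        rp.read()                          # fetch and parse robots.txt
        if not rp.can_fetch("ExampleBot", url):
            return False                   # the site withheld permission
        # Permission granted -- but inclusion is still a policy decision
        # made by whoever runs the crawler (illustrative only).
        return not any(topic in url for topic in excluded_topics)

    # Example: access is allowed, yet the operator's policy excludes it.
    # print(should_index("http://example.com/cults/page.html",
    #                    "http://example.com/robots.txt",
    #                    excluded_topics=["/cults/"]))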
"Accessible" means "accessible via a specific kind of index."
What a user discovers through Google, and what materials
he or she accesses, are shaped by that indexing. Thus,
for example, a Google policy of punishing the clients of a
search rank optimization service decides for users what is,
in fact, accessible and what is not. Are these
choices, imposed by Google, consonant with the values of
users? There is no a priori reason to expect them to be and,
once again, Google is found to be engaged in a privatized,
anti-democratic, hegemonic project.
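(The same point, as a toy sketch in Python: in even the simplest index,
what is "accessible" is whatever the ranking policy chooses to surface.
The documents and the penalty list here are hypothetical stand-ins for
policies like demoting clients of rank-optimization services.)

    # Minimal illustration: a tiny inverted lookup plus a ranking policy.
    # What the user "can access" is whatever the policy lets through.
    documents = {
        "a.example.com/page": "cheap widgets and honest reviews",
        "b.example.com/page": "honest reviews of widgets",
        "spammy.example.com/page": "cheap cheap cheap widgets widgets",
    }

    # Hypothetical policy: sites caught gaming the ranking are dropped.
    penalized_sites = {"spammy.example.com/page"}

    def search(query):
        terms = query.lower().split()
        hits = [url for url, text in documents.items()
                if all(t in text.split() for t in terms)]
        # Rank by naive term frequency, then apply the policy.
        hits.sort(key=lambda u: -sum(documents[u].split().count(t)
                                     for t in terms))
        return [u for u in hits if u not in penalized_sites]

    print(search("cheap widgets"))   # the penalized page never appears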
"Useful" forces us to ask "useful for what?" What is important?
What is our metric of utility? Google's position in a popularity
ranking of competing search engines gives, at best, a relative
impression of utility. It does not begin to speak to how that
utility compares to the utility achievable in general. And if
we hypothesize that Google's project favors some uses over
others, then its hegemonic nature becomes clear once more: Google has
no constraints that lead it to make the world's
information "useful" in any general sense -- rather, Google's
economic role will be to shape user ambitions by arbitrarily
emphasizing some uses over others. And so we see such
things as search rank optimization services and a culture of
elite political bloggers who place a high social premium on
gratuitous cross-linking. And we see "attract good AdSense
ads" as a new business model for content providers. "Utility,"
here, is curiously emphatic about uses which reinforce Google's
business model.
Some readers will find that the preceding simply states the
obvious: that Google's stated mission is at best value-neutral
and, at worst, carries considerable moral risk. Should we
then take comfort in Google's motto: "don't be evil"?
Could it not be, for example, that in a market-driven
meritocracy, Google will emerge as the "benevolent
dictator" of search? It is one thing to observe that
Google's project is to promote a privatized, anti-democratic
hegemony but another thing entirely to leap to the conclusion
that that is a *bad* thing. If Sergey and Google generally
define their hegemony well, won't we all benefit?
To get to those questions I think it helps if we look beyond
the general principles of mission statements and mottoes and
turn to consideration of Google's past, present, and future
technological and business degrees of freedom. That is
to say that if we *are* going to examine the morality of Google,
surely we must do so by examining Google's actual choices
of consequence.
I am not referring, primarily, to famous choices that
are reviewed when considering Google. For example,
I am not immediately concerned with whether they
have struck the right balance with the government of
China or which records they should or should not hand
over to governments in various circumstances. It seems to me
that those famous questions are mostly of the "which is the
lesser of two evils" variety: morality fails, mostly, except
in that it guides us to rely on popular ethical heuristics to
make those decisions (yes to regional sovereignty, yes to
judicial warrants, no to warrantless law enforcement fishing
expeditions, etc.).
Rather, we should ask how Google places itself in a position
where such questions seem, with increasing regularity, to arise
and become important. What more basic choices do they make
that create those dilemmas and what alternatives are there to
those more basic choices?
I locate Google's relentless drive to place itself in moral
dilemmas for which there is no good answer in Google's
impulses to monopolize certain things which are better off
not being monopolized: its "raw data" and its platform for
implementing indexing and search algorithms.
One cornerstone of Google is its facilities for collecting
information: from web crawling, to hosting email boxes, to
caching, to scanning books. At the most basic level, Google
must organize this information so as to keep track of the most
rudimentary privacy concerns (my email is different from your
home page) and legal concerns (that scanned book is different
from my email). Internally, they must make this information
accessible in the most primitive but important ways: spreading
it out over storage clusters and ensuring it can be MapReduced
and indexed on large compute clusters.
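(To make the "MapReduced and indexed" step concrete, here is a minimal,
single-machine sketch in Python of the map/shuffle/reduce pattern that
builds an inverted index. At Google's scale the same pattern runs across
thousands of machines; the documents and function names here are purely
illustrative.)

    from collections import defaultdict

    # Map: emit (word, doc_id) pairs from each stored document.
    def map_phase(doc_id, text):
        for word in text.lower().split():
            yield word, doc_id

    # Shuffle: group intermediate pairs by key (done by the framework
    # across machines in a real MapReduce; done in memory here).
    def shuffle(pairs):
        groups = defaultdict(set)
        for word, doc_id in pairs:
            groups[word].add(doc_id)
        return groups

    # Reduce: produce the posting list for each word.
    def reduce_phase(groups):
        return {word: sorted(doc_ids) for word, doc_ids in groups.items()}

    # Illustrative "raw data": in reality this is crawled pages, mail,
    # scanned books, etc., spread over storage clusters.
    raw_data = {
        "doc1": "organize the world's information",
        "doc2": "make information universally accessible",
    }

    pairs = (pair for doc_id, text in raw_data.items()
                  for pair in map_phase(doc_id, text))
    index = reduce_phase(shuffle(pairs))
    print(index["information"])   # -> ['doc1', 'doc2']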
That cornerstone is Google's raw data and platform for examining
it. Their first business model choice -- their first moral choice --
is whether to hoard that raw data and platform or, instead, to
open it up. Google's choice is to hoard it -- they decided to make
money by retaining the exclusive rights to build applications on that
platform. They decided to make money by retaining the exclusive
rights to decide what is and is not included in the raw data. The
raw data and platform which are the foundation of Google have,
or at least it is very well arguable that they have, universal,
objective value. But once Google decides to hoard these resources and
make itself the arbiter of what services are built upon them -- then
Google's project becomes inherently hegemonic. With that hoarding
decision, Google makes itself the decider of values. Far from
a "don't be evil" company they become a "because we say so"
or "because we can" company.
In an alternate universe, Google would not be a search company,
or an ad company, though it might have subsidiaries or close partners
in either business. Rather, Google would be in a business at the
conjunction of commodity computing, service hosting, the sale of
raw data, the lease of data collection facilities, and the making of a
market for search results produced by competitive sources.
The massive accumulation of "the world's data" is inevitable, and
this alternative Google would begin to democratize the decisions of
inclusion and exclusion that define what "the world's data" consists
of. It might be sensible for this alternative Google to form public
interest non-profits to work out the most basic meta-data to manage
privacy, copyright, etc.
This alternative Google would make the question of "accessibility"
an object of competition and open innovation. Google would
provide a (not even the) platform on which different approaches
might be explored by competing sources.
This alternative Google would liberalize the process by which
"utility" might be discovered and designed.
If Google will not form itself into this alternative, perhaps it is
something the rest of us ought to do for ourselves.
-t
Two postscripts:
1) Google APIs represent a baby step in the direction I advocate.
The real test asks whether Google will put the development of
such APIs above its current proprietary advantages -- will they
sacrifice a near monopoly on search, ads, etc. to emphasize
opening their platform as the right model?
2) In a longer essay, in addition to considering Google's attempts to
hoard raw data and platform, I would want to examine Google's
attempts to hoard talent and the resulting, unseemly, recruiting
practices of billboards, contests, challenge problems, and the
Google Summer of Code.