Casati/Eco on the Web

From: Stevan Harnad <harnad_at_cogprints.soton.ac.uk>
Date: Mon, 4 Mar 2002 13:09:36 +0000 (GMT)

************** SOCIETY FOR PHILOSOPHY AND PSYCHOLOGY ***************
--------------------------------------------------------------------
       To post to entire SPP list, send to SPP_at_UMIACS.UMD.EDU
 To contact list organizer only, send to SPP-REQUEST_at_UMIACS.UMD.EDU
--------------------------------------------------------------------
Comment on Roberto Casati's comment ("The infinite regress problem") on
Umberto Eco's "Authors and Authority." http://www.text-e.org/debats/
Further commentary invited at that site.

-------------------------------------------------------------------
No "Quis Custodiet" Problem Peculiar to the Web

Stevan Harnad

CASATI: "Eco points out a filtering problem which resists various
filtering solutions. The problem is how can we tell, on the web,
relevant (useful, good) from irrelevant (useless, bad, misleading)
information? "

The question is: Who are "we"?

We have a problem far worse than infinite regress if we try to treat
this as a single problem, and to find a single solution, for a "we" that
includes children looking for games, teenagers looking for music,
housewives looking for recipes, consumers looking for products,
relatives looking for medical information, students looking for
reference material, and scholars/scientists looking for refereed
research.

The obvious solution is to partition cyberspace into sectors, just as
everything else is partitioned, and to tag the "authoritative" sectors
(such as the peer-reviewed literature) as such.

To put it in context, consider a related non-problem: the "universal
search engine" problem -- the one that will find for you, reliably,
the needle that you are searching for in the ever-expanding cosmic
haystack of the web. People are fond of declaring this problem
insoluble; but is it really a problem at all?

Our thinking is, I believe, based on the following: The prototype, the
gold standard, is the library, the written Gutenberg corpus. It is that
sort of order, reliability and retrievability that we are looking for,
as indexed and shelved in our libraries, catalogues and bookstores.

There would be no "universal search-engine" problem, but rather the
contrary, a welcome solution, if the Web consisted of all and only this
canonical Gutenberg corpus. But it does not: It consists of a lot more
(and, alas, a lot less, for most of the Gutenberg corpus is not yet
available online).

Now let us simplify, to get to the heart of the matter: Suppose the Web
consisted of the entire canonical Gutenberg corpus, suitably tagged as
such (I will return to this), PLUS every single word ever spoken (or
thought) by every man, woman and child, from the prehistoric onset of
language to the present day, updated daily.

Would we now have a "universal search engine" problem? Of course not.
For we would use the "tags" distinguishing the canonical literature
(and of course all of its subtags, including "refereed journal" and
"journal-name") to restrict our search to ordered subsets of cyberspace
whose rules we would inherit from the paper canon.
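(For the concretely minded, such tag-restricted searching amounts to no
more than something like the following sketch in Python; the records,
tag names and journal titles are invented for illustration, and no
actual tagging scheme or system is implied:)

    # A toy sketch of searching only the tagged "canonical" sector.
    # All records, tags and journal names below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Resource:
        title: str
        text: str
        tags: frozenset            # e.g. {"canonical", "refereed-journal"}
        journal: str = ""          # subtag: journal name, if any

    def search(corpus, query, required_tags=frozenset(), journal=""):
        """Match the query only against resources that carry every
        required tag (and, optionally, the given journal subtag)."""
        hits = []
        for r in corpus:
            if not required_tags <= r.tags:
                continue                       # outside the chosen sector
            if journal and r.journal != journal:
                continue                       # wrong journal subtag
            if query.lower() in r.text.lower():
                hits.append(r)
        return hits

    corpus = [
        Resource("Grail legends: a review", "history of the grail ...",
                 frozenset({"canonical", "refereed-journal"}),
                 "Journal of Folklore (hypothetical)"),
        Resource("Grail chat transcript", "i reckon the grail is in my garage",
                 frozenset({"dictascript"})),
    ]

    # Unrestricted search returns both; restricting the search to the
    # tagged canonical sector leaves only the refereed item.
    print(len(search(corpus, "grail")))                                    # 2
    print(len(search(corpus, "grail", {"canonical", "refereed-journal"}))) # 1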

End of story. No new problem. Just a matter of tagging and isolating the
old solution, and not being misled by the fact that, in principle (and
with Dan Sperber's dictascript, augmented perhaps by some future
telescript), every single verbal production of every single human mind
can be converted to writing and consigned to the web. So what? We know
how to ignore idle chatter in the oral medium. We will continue to be
able to do so on the Web.

Yes, there are some new borderline cases, spawned by the web:
Non-published teaching materials, pearls of wisdom in the chatter, etc.
Those are special cases, and will evolve their own sectors. But sectored
and tagged they will be. And in the meanwhile, let us not try to be
holier than the pope: As a special case, the canonical corpus (or as
much of it as is up there so far) is as tractable on the Web as it was
on paper (indeed more so). And the rest is just dictascript, which need
no more be "navigated" than what transpires on the airwaves of chat TV
or in a hairdresser's parlour.

http://oaisrv.nsdl.cornell.edu/pipermail/oai-general/2001-June/000036.html

CASATI: "This is an epistemological problem. Harnad claims that
this is not a new problem and that there already are stable solutions
to the problem. He would hence delete the 'on the web' clause."

Indeed. What we want to continue to be able to access and navigate is
the authoritative corpus. Let us simplify and say that this corresponds
to the peer-reviewed corpus. If/when that is all online, it is all only
a reliable metadata tag away from being navigable at least as reliably
as it was on-paper (and in fact infinitely more efficiently).

CASATI: "But if I understand Eco correctly, the web environment poses
the filtering problem in a new light, for which old solutions are not
easily available. The problem is best framed from the viewpoint of the
lower-end user, someone with no information at all on the Holy Graal,
say, who browses the web in order to improve his knowledge.
Assuming that the relevant bit of information is available, it has to
be separated from irrelevant or misleading bits of information. How can
you find the relevant bit of information?"

Vide supra. (And ask, if there had been no Web, how would this generic
user do it? Do we have to worry that he may be gullible, and ready to
believe whatever he runs into in conversation, on TV, on the drugstore
magazine counters? Or that he has the good sense and capacity to resort
to the library index catalogue?)

CASATI: "Surely an expert would help. Suppose now that an expert on the
Holy Graal is somewhere available on the web. How can you find the
expert? Well. Maybe there is a meta-expert somewhere on the web, but
again, How can you find the meta-expert? And: How can you find the
meta-meta-expert? And so on and so forth. Obviously infinite regress
haunts Eco's solution."

Nothing of the sort. Why is there no "infinite regress" in the Gutenberg
corpus? There is of course one on Chat-Radio (whom do you trust?), but
authenticating all the opinions and misinformation that come out of human
mouths, and even those that find their way onto public airwaves, is a
hopeless (and pointless) task: There's always more opinion than
expertise; it's more like combinatorial explosion than infinite
regress.

We are, in other words, using a spurious tertium comparationis here, in
applying the desiderata of the authoritative Gutenberg canon to the
PostGutenberg Galaxy. But the solution is simple. Carve out the subspace
of cyberspace that corresponds to the old canon as a special case, tag
it accordingly (augment it with any new hybrid productions worthy of
inclusion), and restrict your serious searching to that sector alone.
And let the erstwhile experts (peer review) continue to be your
"authority."

CASATI: "Peer reviewing, as invoked by Harnad, works very
well for people who already know about academic journals and
standards, but won't work for the lower-end user. How do you know
about the good and the bad journal, the good and bad
learned society? At some point the circle has to be broken by some
hearsay type of contact."

How did the "lower-end user" know about it in the Gutenberg age? Same
answer.

CASATI: "But this is not available in the pure version of the problem.
Probably the real solution is in those same large numbers that create
the problem. It is in the link-structure of the web, the same structure
that is exploited by Google's search strategy. Linking is, to some
extent, like a vote given to a page after a review of its content. Each
of us is a peer reviewer. And if we are good peer reviewers, we will
attract links from other pages."
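(For concreteness: the link-as-vote strategy Casati describes --
essentially Google's PageRank -- amounts to something like the toy
computation below; the four "pages", their links and the damping value
are invented for illustration only:)

    # Toy PageRank-style "link as vote" ranking over a made-up web.
    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if outlinks:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
                else:                    # dangling page: spread its vote evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
            rank = new_rank
        return rank

    toy_web = {                          # entirely hypothetical pages
        "grail-expert-page": ["grail-society-page"],
        "grail-society-page": ["grail-expert-page"],
        "grail-fan-page": ["grail-expert-page", "grail-society-page"],
        "grail-spam-page": ["grail-spam-page"],   # votes only for itself
    }

    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page:20s} {score:.3f}")

(Note that the self-linking page props up its own "vote" -- already a
hint of the distortions Casati himself acknowledges below.)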

Ah me! If the authority is (democratically? capitalistically?) ceded to
opinion polls among those same multitudes who cannot be assumed to know
how to use a library, where will we be!

On replacing expertise by nose-counts, see:

Harnad, S. (1998) The invisible hand of peer review. Nature
[online] (c. 5 Nov. 1998)
http://helix.nature.com/webmatters/invisible/invisible.html
Longer version: http://www.exploit-lib.org/issue5/peer-review/
http://www.ecs.soton.ac.uk/~harnad/nature2.html

"Peer Review Reform Hypothesis-Testing"
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/0479.html

"A Note of Caution About 'Reforming the System'"
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/1169.html

CASATI: "Expertise, in an universal linking system, is diffuse and
microscopic. There are advantages to this solution. It avoids regress.
It takes authority out of a few hands. It generalises the stable peer
reviewing solution to the pre-web relevance problem."

Or throws out the baby with the bathwater, substituting sheer quantity
of popular opinion for qualified expertise. (Is this perhaps the
current fashion of ceding all authority to market economics and
dollar-democracy, along with a dose of PC populism, now making a bid
for "privatizing" science and scholarship, as has already been done
with the arts?)

CASATI: "There are shortcomings. The link structure is poorly
understood, and some studies will be necessary as to the possible
distortions that the linking system may undergo, as in any other
diffuse system from which we hope to extract information, such as the
system of prices in various types of economy."

Shortcomings there will indeed be. Let us hope we will first look at
what sort of quality this link-economy would yield, before committing
ourselves too deeply to it...

Stevan Harnad