Re: Searle's Chinese Room Argument

From: Harnad, Stevan (harnad@cogsci.soton.ac.uk)
Date: Sat Feb 22 1997 - 15:23:23 GMT


> From: Dunsdon, Graham <ged196@soton.ac.uk>
>
> symbols do have meaning I believe.

Yes they do: for us. But they have no meaning for a pure symbol system
(e.g., a computer). Hence we can't be pure symbol systems. So what else is
there in our heads that might embody meaning rather than just symbols
and symbol manipulation rules?

> However, there is no intrinsic way of recognising similarities
> of objects like a horse and a zebra except with yet more symbol(s).
> We would need a different symbol for each 'new' description.

Kid-sib would have problems with "intrinsic": "Is there an 'EXtrinsic'
way? And what's that, then?"

Even if we had a symbol for each encounter with every entity we know,
those would still just be symbols, whose shapes, remember, are
arbitrary, unlike analog structures and processes, whose shapes are NOT
arbitrary: they resemble the objects of which they are the sensory
"shadows."

(But be careful not to conclude that these analog shadows must be viewed
by an inner homunculus! The activation of the analog processes and
neural nets IS our seeing of the object.)

> So, Stevan suggests (in his grounding theory) that the mind uses base
> groups (or functions) which are known by their invariant properties of
> description (eg., those which would apply to a horse, to a zebra or to
> a donkey etc); to which can be hung perceptual variants such as piebald
> or stripes or big ears; which enable the identification of different,
> but functionally similar objects using cognitively constructed links.

I'm not sure whether you have this one right: The only way we could
recognise a zebra the very first time we saw one would be if (1) some
person or book had told us that "a zebra looks like a black/white
striped horse" and (2) we already knew what "horse" and "striped" etc.
mean. If we knew "horse" and "striped" only from a verbal (= symbolic)
description too, then once we went low enough in this abstract, verbal
hierarchy, we would have to arrive at something other than just more
symbols and descriptions: Analog projections and feature-detecting
neural nets are candidates for what the bottom-up mechanism for
grounding symbolic knowledge might be.
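
To make the "bottoming out" concrete, here is a toy sketch (just an
illustration, not the actual model: the projection, the detectors and
the numbers are all invented for the example). The point is that the
lowest-level category names are assigned not by consulting further
symbols but by detectors operating on an analog projection whose shape
is not arbitrary:

# A minimal sketch (in Python) of grounding "bottoming out" in feature
# detectors. The "analog projection" is just a list of sensory
# measurements, and each detector is a thresholded linear filter
# standing in for a trained feature-detecting neural net.

def dot(weights, projection):
    return sum(w * x for w, x in zip(weights, projection))

# Projection: [body_shape_signal, mane_signal, stripe_contrast, ear_size]
def horse_detector(projection):
    # Responds to the horse-shaped features of the projection.
    return dot([1.0, 1.0, 0.0, -0.5], projection) > 1.5

def stripe_detector(projection):
    # Responds to high-contrast periodic patterning.
    return projection[2] > 0.7

def ground(projection):
    """Map an analog projection onto the elementary category names it grounds."""
    names = set()
    if horse_detector(projection):
        names.add("horse")
    if stripe_detector(projection):
        names.add("striped")
    return names

plain_horse = [1.0, 1.0, 0.1, 0.3]
print(ground(plain_horse))   # {'horse'} -- a name assigned without consulting more symbols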

> Thus the visual system has been provided with a direct linkage with the
> symbol system even where a 'new' object has been seen for the very
> first time by that person. (That's how I explain it to myself!)
> Dunsdon, Graham.

Not quite: SOME symbols (category names, arbitrary in "shape") are
connected to the distal objects that they stand for by a mechanism that
takes the proximal analog projection (shadow) on our sensory surfaces
and filters out the features that allow us to identify (categorise,
name) the distal object correctly. The knowledge we get from a symbolic
description, like a sentence describing a zebra, must be grounded in
symbols that already have a connection to their distal objects through
analog projections and feature-detecting neural nets.
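
Continuing the same toy sketch (again just an illustration, with the
same invented detectors and numbers): a purely symbolic description such
as "a zebra is a striped horse" can do real work only because "horse"
and "striped" already have detectors connecting them to projections; the
new name then applies correctly to a projection never seen before:

# Grounding transfer through a symbolic description (toy sketch).
def dot(weights, projection):
    return sum(w * x for w, x in zip(weights, projection))

# Elementary categories grounded directly in the analog projection:
GROUNDED = {
    "horse": lambda p: dot([1.0, 1.0, 0.0, -0.5], p) > 1.5,
    "striped": lambda p: p[2] > 0.7,
}

# A new category defined entirely in terms of already-grounded names;
# no new detector has to be learned from direct experience.
DEFINITIONS = {"zebra": ["horse", "striped"]}

def recognise(name, projection):
    if name in GROUNDED:        # grounded directly in the projection
        return GROUNDED[name](projection)
    # grounded indirectly, through the grounded symbols in its definition
    return all(recognise(part, projection) for part in DEFINITIONS[name])

first_ever_zebra = [1.0, 1.0, 0.9, 0.3]      # a projection never encountered before
print(recognise("zebra", first_ever_zebra))  # True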

That's just a theory, though, so don't go believing it. All you have
to believe is that it can't just be symbols all the way down.


