> From: "Harrison, Richard" <RJH93PY@psy.soton.ac.uk>
> Date: Tue, 23 Jan 1996 12:46:17 GMT
> The Chinese Room Argument demonstrates that cognition cannot
> exclusively be computation (a purely symbolic system). This is the
> case because the symbols are not grounded (hence, the "symbol grounding
> problem"). Different attempts to solve the symbol grounding problem
> will be discussed.
Kid brother would protest: "But I don't know what the Chinese Room
Argument and the Symbol Grounding Problem are!"...
> An important paradigm in cognitive psychology has been the
> computational approach. Computation involves the manipulation of a
> symbol system. According to proponents (e.g. Fodor, 1980; Pylyshyn,
> 1973, 1984) minds are symbol systems. A symbol system has a number of
> important features; it involves a set of arbitrary physical tokens that
> are manipulated on the basis of explicit rules, the manipulation is
> based on the shape of the physical tokens (i.e. is syntactic and not
> based on meaning), and the system of tokens, strings of tokens and
> rules are all semantically interpretable. There were a number of
> reasons why the symbolic view of mind appeared to be persuasive.
Not sure kid brother could figure out what symbols or computation were
from this (or what semantics was, or even what you meant by "mind")...
But what WERE those reasons the symbolic view of mind (whatever that
means) seemed persuasive?
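A toy sketch may help make "manipulation based only on the shape of the tokens" concrete for the kid brother. The tokens and rules below are invented for illustration, not taken from any real model:

```python
# Toy symbol system: arbitrary tokens, explicit rules, purely syntactic.
# Nothing here "means" anything -- the rules fire on token shape alone.

rules = {
    ("A", "B"): ("C",),       # replace the pair A B with the token C
    ("C", "D"): ("A", "A"),   # replace the pair C D with the tokens A A
}

def rewrite(tokens):
    """Apply the first matching rule to the leftmost matching pair of tokens."""
    tokens = list(tokens)
    for i in range(len(tokens) - 1):
        pair = (tokens[i], tokens[i + 1])
        if pair in rules:
            return tokens[:i] + list(rules[pair]) + tokens[i + 2:]
    return tokens  # no rule matched; the token string is left unchanged

print(rewrite(["A", "B", "D"]))  # ['C', 'D']
print(rewrite(["C", "D"]))       # ['A', 'A']
```

The point: the system runs exactly the same however the tokens are renamed, and whatever (if anything) they are later interpreted as meaning. That shape-blindness is what "syntactic" means here.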
> However, a simple thought experiment was provided by Searle (1980) that
> demonstrated that the mind cannot be a pure symbol system.
> Searle (1980) challenged the assumption that a symbol system that could
> produce behaviour that is indistinguishable from ours must have a mind.
> If a computer could pass the Turing Test (TT; Turing, 1950) then it was
> thought by many that it would have a mind.
Why? Is this all just a matter of opinion? What was the point of the
Turing Test?
> A TT passer would be able to
> produce behaviour like ours in a 'pen pal' situation (i.e. model
> linguistic behaviour).
Not just linguistic "behaviour," but everything a penpal can do in his
letters.
> Searle said that if a (symbol system) computer
> can pass the TT in Chinese by merely manipulating symbols then it does
> not, in fact, understand Chinese. This is because Searle (or anybody
> else who doesn't understand Chinese) could take the place of the
> computer and implement the symbol system without understanding Chinese.
We don't get the force of this argument until you explain the power of
computation (symbol manipulation) and the role of
implementation-independence in modeling the mind...
> The Chinese Room Argument is an example of the Symbol Grounding Problem
> (Harnad, 1990).
Not exactly; it's a symptom of the Symbol Grounding Problem. A
solution to the problem would be a system that connected its symbols to
what they were about without the mediation of an external interpreter.
> Another example will help to demonstrate this problem.
> If you had to learn Chinese as a second language and you only had a
> Chinese-Chinese dictionary you would endlessly pass from one meaningless
> symbol (or symbol-string) to another, never reaching anything that had
> any meaning. The symbols would not be grounded. A second version of
> this Chinese-Chinese dictionary-go-round is similar, only requiring you
> to learn Chinese as a first language. This time the symbols would not
> be grounded either, and this is analogous to the difficulty faced by
> purely symbolic models of mind. That is: how is symbol meaning to be
> grounded in anything other than meaningless symbols? This is the symbol
> grounding problem.
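The dictionary-go-round lends itself to a few lines of toy code (the "Chinese" entries below are invented placeholders, not real words):

```python
# The Chinese-Chinese dictionary-go-round as a toy sketch: every
# definition is itself made only of other undefined symbols, so lookup
# never bottoms out in anything non-symbolic.

dictionary = {
    "shan": ["gao", "di"],    # each entry is defined only by other entries
    "gao":  ["shan", "xiao"],
    "di":   ["xiao", "gao"],
    "xiao": ["di", "shan"],
}

def chase(symbol, steps):
    """Follow definitions for `steps` hops; we only ever reach more symbols."""
    trail = [symbol]
    for _ in range(steps):
        symbol = dictionary[symbol][0]  # look up the first defining symbol
        trail.append(symbol)
    return trail

print(chase("shan", 4))  # ['shan', 'gao', 'shan', 'gao', 'shan']
```

Every lookup lands on more undefined symbols; nothing inside the loop ever connects a symbol to the thing it is about. That, in miniature, is the symbol grounding problem.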
> One approach that avoids the symbol grounding problem is
> connectionism. According to connectionism (e.g. McClelland, Rumelhart
> et al, 1986), cognition is not symbol manipulation but dynamic patterns
> of activity in a multilayered network of nodes or units with weighted
> positive or negative interconnections. However, connectionist systems
> are at a disadvantage to symbolic models as many of our behavioural
> capacities seem to be symbolic. Particularly linguistic capacities,
> but also logical reasoning and some other higher order cognitive
> capacities, appear to have the systematicity of symbol systems.
> Perhaps a solution to this problem would be reached by combining the
> advantages of symbol models with the grounding capacity of
> connectionist systems.
Strictly speaking, nets don't have "grounding capacity"; they seem to be
good at pattern learning. That MAY be a clue to grounding symbols.
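What that pattern learning looks like can be sketched with a one-unit net (a minimal perceptron, written from scratch; the training data and learning rate are made up for illustration):

```python
# A minimal connectionist unit: weighted positive/negative connections
# adjusted from examples, with no explicit symbolic rules anywhere.

def train(samples, epochs=20, lr=0.1):
    """Adjust two input weights and a bias by the perceptron learning rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out          # nudge weights toward the target
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# learn logical OR from examples rather than from a stated rule
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(samples)
print([classify(w, b, x) for x, _ in samples])  # [0, 1, 1, 1]
```

The unit ends up classifying correctly without ever being handed an explicit rule: the "knowledge" lives in the weights, as dynamic patterns of activity rather than as manipulable symbols.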
> An alternative version of the TT is immune to the Chinese Room
> Argument. The TT involves producing only linguistic capacity that is
> indistinguishable from ours, but if it also included our robotic
> capacities (how we discriminate, manipulate and identify objects) then
> a computer that could pass it would have grounded its symbols and hence
> would understand (have a mind etc).
Huh? (says kid brother) Why is that? Is this just a declaration, or is
there argument or evidence? (The Argument is that a grounded robot
would be immune to Searle's Chinese Room Argument [why? how?]; moreover,
if a robot could interact DIRECTLY and AUTONOMOUSLY with the things
its symbols were interpretable as being about, then that would take the
external interpreter out of the loop, for the robot would be autonomous,
and the link between its symbols and their objects would be direct.)
> This version of the test has been
> called the Total Turing Test (TTT; Harnad, 1989). Harnad (1990)
> suggested a possible model that could pass it that would involve a
> symbol system that could be grounded in meaning by a connectionist
> network.
"Grounded in meaning"? (Kid brother scowls, and is in danger of going
off to turn on the TV...)
> His hybrid solution to the symbol grounding problem was a
> system that involved iconic representations (analogs of proximal
> sensory input)
"analogs"? "proximal"? And why?
> and categorical representations which are feature
> detectors that pick out the invariant features of objects and events.
Yes, yes, but why? what for? Besides itemising these things you have to
motivate them, explain them, make sense out of them to a kid brother
(thereby proving they make sense to you!)
> These could be combined to ground higher order symbolic representations.
And what are those, pray tell?
> Connectionism is a potential mechanism for learning the invariant
> features of categorical representations. In this model symbol
> manipulation would not only be based on the arbitrary shape of the
> symbols, but also on the nonarbitrary shape of the icons and
> categorical representations in which they are grounded.
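A toy sketch may make the hybrid idea concrete. The "sensory" vectors and the threshold below are invented (a real system would *learn* the feature detector, e.g. with a connectionist net, rather than hard-code it); the zebra example is Harnad's own:

```python
# Toy hybrid grounding sketch: a feature detector maps analog input onto
# a category name, and symbolic composition then operates on names that
# are no longer arbitrary -- each is tied to a detector.

def detect(sensory):
    """Categorical representation: pick out one invariant feature of the
    analog input (here, just a hard-coded threshold on the first value)."""
    return "horse" if sensory[0] > 0.5 else "stripes"

def compose(symbol_a, symbol_b):
    """Higher-order symbolic representation built from grounded symbols:
    "zebra" = "horse" + "stripes" (Harnad's example)."""
    if {symbol_a, symbol_b} == {"horse", "stripes"}:
        return "zebra"
    return None

print(compose(detect([0.9, 0.1]), detect([0.2, 0.8])))  # zebra
```

Here "zebra" inherits its grounding from "horse" and "stripes", which are themselves tied to (toy) sensory input, so no external interpreter is needed to connect the symbols to what they are about.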
From the kid-brother standpoint, so far, these words are just
meaningless symbols: What do they mean? Assume your kid brother knows
enough English so he is not facing a dictionary-go-round, but he knows
none of these technical terms, and does not even quite know what the
PROBLEM is that all this is supposed to be solving...
> While a model of this hybrid type that can actually pass the TTT has
> not been built yet, it provides a possible way of passing it in the
> future,
How can something that doesn't pass provide a possible way of passing?
And what is this hybrid model? Remember that your kid brother --
brilliant, intensely interested -- has not read what you've read, or
heard what you've heard, on this topic: You have to ground his
understanding.
> and hence solving the symbol grounding problem and avoiding the
> Chinese Room Argument. It combines the advantages of the symbolic
> approach with the connectionist capacity to ground symbols in their
> referents.
Richard, this may all be clear in YOUR mind, but your mission is to make
it clear in your kid brother's mind, thereby dispelling all ambiguity
about whether you yourself really understand it, or are just
manipulating the symbols!
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:57 GMT