> From: "Smith, Wendy" <WS93PY@psy.soton.ac.uk>
> Date: Tue, 23 Jan 1996 09:23:22 GMT
> Searle (1980) proposed that computation alone was not cognition. He
> suggested that cognition was some sort of computational process and
> output, combined with some form of intentional phenomena.
What are "intentional phenomena"? Also, Searle didn't propose this
combination. He just said cognition was something only the brain (or
anything that had the same "causal powers" as the brain) could do; he
doubted computation had much to do with it.
> the production of the intentional phenomena cannot be merely by running
> a program. The Chinese Room argument is an ingenious mind experiment
> devised to support this hypothesis.
> Before we look at the Chinese Room, we have to look at the Turing Test.
> The Turing Test sets the stage of having a computer in one room, and a
> human being in another room. Identical messages are sent into both
> rooms. If, from the replies, we are unable to distinguish between the
> computer and the person, then deciding that one has a mind and one
> doesn't is, on the surface, arbitrary rather than based on evidence.
If the computer could do everything the person could, and so exactly
that we couldn't tell them apart, then even when we learned that one was
a computer, we would have no nonarbitrary (non-"Granny") basis for
denying of one what we had quite naturally assumed of the other
(since they had the same capacity and could not be told apart): That
it understood what we had been writing to it (as a lifelong penpal).
> Searle was criticising strong AI which, he claimed, considered the
> programs to be explanations of cognitive states, rather than tools to
> explore explanations. He quoted a program devised by Schank, which
> provided a script and a story. The script referred to the "usual"
> sequence of events in a certain type of situation, and the story was of
> a specific situation of this type. Questions could then be asked which
> couldn't be answered directly from the story, but could be inferred
> from the knowledge of the script. Some workers considered that the
> ability of a machine running these programs to answer the questions
> correctly, as a human would, was evidence of understanding.
Searle's point about this kind of programme was that the script only
made sense to an outside interpreter; in and of itself, it was just a
lot of meaningless squiggles and squoggles, just as a script on a page
is. The program simulates understanding through the manipulation of
these meaningless symbols using rules that operate only on the
arbitrary shapes of the symbols, not their meanings. The meanings are
projected onto them by us.
Then Searle said: suppose a computer programme could pass the
Turing Test in Chinese -- could correspond as a penpal,
indistinguishable from a real penpal, for a lifetime, if need be --
using these same means.
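For kid brother, here is a toy sketch of what "manipulation by shape alone" amounts to. The symbols and the single rule table are invented (this is not Schank's actual program): the lookup consults only the tokens' shapes, never any meaning.

```python
# Toy illustration: "answering" a question purely by shape-based rules.
# The tokens are arbitrary squiggles; nothing here consults meaning.

RULES = {
    # (story symbol, question symbol) -> answer symbol
    ("@#", "%&"): "!!",
    ("@#", "^^"): "??",
}

def answer(story, question):
    """Return an output symbol determined solely by token shapes."""
    return RULES.get((story, question), "**")  # "**" = default squoggle

print(answer("@#", "%&"))  # -> "!!", produced with zero understanding
```

An outside interpreter might read "!!" as a correct answer about the story; in and of itself it is just another squiggle.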
> Searle asked us to imagine that he was locked in a room with a set of
> Chinese symbols. He knows no Chinese, and the squiggles are meaningless
> to him. He is then given a second set of Chinese symbols, along with a
> rule book. The rules are written in English, and set out how he can
> correlate the second batch of symbols with the first. He is then given
> a third batch of symbols. By following the rules in his book, he can
> correlate this batch with the other two batches, and return certain of
> the symbols. To the people outside the door, it appears that they have
> provided Searle with a script, a story and a question, all in Chinese;
> he has answered in Chinese; ergo, he understands Chinese. From Searle's
> point of view,
> he took some symbols and manipulated them in various ways based on
> their shape. Understanding never entered the proceedings at any point.
> Searle did not understand Chinese when he received the symbols;
> manipulation of the symbols did not produce understanding; and he did
> not understand Chinese when he gave the "answer" to the question.
> Therefore, if the computer can understand, it isn't by virtue of its
> program, because Searle could "run" the same program and not
> understand. Understanding is not intrinsic within the program. The
> understanding was provided by an outside agent interpreting the input
> and output of the system.
The most relevant version of the Chinese Room Argument is the one where
Searle MEMORISES all the rules, and does all the symbol manipulations
in his head. He would then be passing the Turing Test without
understanding Chinese; hence the computer wouldn't be understanding
Chinese if it did the same thing either, or at least not because it was
executing the right programme.
Our thoughts, in contrast, cannot be like that: They cannot mean what
they mean, be about what they are about, merely because an outside
interpreter can interpret them as so being!
> Therefore, if by computation we mean the manipulation of symbols based
> on their shape, then computation alone is not cognition. There needs to
> be something else. Searle had described this as an "intentional
> phenomenon", but this is not well specified. Searle did not understand
> Chinese: the manipulations were syntactic, and based on the shape of
> the symbols rather than their meanings. However, if the same mind
> experiment had been performed with letters from an English alphabet,
> then a different situation would arise. Searle would have performed the
> same manipulations, but rather than being syntactic, the manipulations
> would have been semantic, and the symbols would have had meaning.
> Searle would have understood them, because the symbols were grounded.
(Well, it would work only if the words were in a language Searle
understood, such as English!) What Searle meant (or ought to have
meant) by having "intrinsic intentionality" was what it is to
understand English and not understand Chinese, as in his case.
"Intentional" here is used in the sense of "intended meaning": Thoughts
have a meaning, they are ABOUT something, and they mean something TO
someone (the thinker). Symbols are merely INTERPRETABLE as if they were
about something by US. So we can't just be symbol systems, or that
would lead to an infinite regress (a homunculus problem with no
homunculus: Interpretable symbols but no interpreter!)
> The next problem is how the symbols are grounded.
> The answer can't be within the symbols. This would be like trying to
> learn Chinese with only a Chinese-Chinese dictionary to help. One
> symbol would lead to a mass of other symbols, and so on, but the
> meaning of the symbols would never appear. Some sort of "Rosetta Stone"
> process is needed to ground the symbols. However, even this won't work;
> it suggests we could learn Chinese from a Chinese-English dictionary.
> Perhaps we could, but we are still left with the question of how we
> learned English in the first place.
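The Chinese-Chinese dictionary regress can be made concrete with a toy loop (the "dictionary" below is invented, with Pinyin-style stand-ins for Chinese characters): every lookup lands on more symbols, and the chain never exits to a meaning.

```python
# Toy illustration of the dictionary regress: each word is "defined"
# only in terms of other words in the same closed symbol system.

DICTIONARY = {
    "ma":     ["niao", "dongwu"],
    "niao":   ["dongwu", "ma"],
    "dongwu": ["ma", "niao"],
}

def chase(word, steps):
    """Follow first definitions; we only ever land on further words."""
    trail = [word]
    for _ in range(steps):
        word = DICTIONARY[word][0]
        trail.append(word)
    return trail

print(chase("ma", 4))  # symbols all the way down: no exit to meaning
```

However long you let the loop run, the trail is just more uninterpreted tokens; that is what grounding has to break out of.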
> One suggestion is to connect the symbol system to the world. If this is
> done "properly" it will ground the symbols. However, this also begs the
> question, and gives rise to a Homuncular argument. It just replaces the
> English-Chinese dictionary with a "something"-Chinese dictionary. Using
> an arbitrary symbol system to ground an arbitrary symbol system just
> leads into an infinite regress. It can't be grounded that way without
> a Rosetta Stone, and that just involves a homunculus. The symbolic
> approach does not appear successful. So, it would seem that the
> symbols have to be grounded in non-symbolic representations. These are
> not arbitrary, but are grounded in the real world.
Kid brother still not sure what you mean by "grounded"!
In an ungrounded symbol system, the only connection between the symbols
and what they are interpretable as being about is through the mind of
the external interpreter. Grounding has to break out of this dependence
on an external interpreter. The connection between symbols and what
they are about must be direct, and autonomous. A robot that could
actually interact with all the things in the world that its internal
symbols are interpretable as being about INDISTINGUISHABLY FROM THE WAY
WE DO, in other words, a robot that could pass the robotic version of
the Turing Test, T3, would have grounded symbols.
> Two basic, non-symbolic representations can be described. The first is
> an iconic representation. Objects and events in the real world are
> accessed by the sensory equipment, which gives rise to an "iconic
> representation" - an analog of the object or event. From these
> representations, certain invariant features can be extracted (by an
> innate mechanism), to form "categorical representations". From this,
> elementary symbols can arise, and "symbolic representations" can be
> grounded in the elementary symbols.
A bit confusing for kid bro: What are "invariant features"? They're the
features that will allow you to categorise what is and isn't in the
category with a certain NAME. That name in turn is a grounded
elementary symbol. (About features, see the discussion of Classical
Categorisation.) Once elementary symbols are grounded directly (through
"honest toil") in the capacity to pick out what they are about, they
can be used, as in the Chinese/Chinese Dictionary, to define new
symbols by symbol recombination alone (this is the kind of "theft"
language makes possible).
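The toil/theft distinction, in miniature: suppose (a big assumption, standing in for real sensorimotor category learning) that a couple of elementary category names have been grounded in feature detectors; a new symbol such as "zebra" can then be defined by recombination alone. The predicates below are invented stand-ins.

```python
# "Honest toil": elementary names grounded in a capacity to detect
# the invariant features of their categories (detectors are stand-ins).
def is_striped(x):   return "stripes" in x
def is_horselike(x): return "horse-shape" in x

GROUNDED = {
    "striped": is_striped,   # grounded directly in a detection capacity
    "horse":   is_horselike,
}

# "Theft": a new symbol defined purely by recombining grounded symbols,
# as in "zebra = striped horse".
def is_zebra(x):
    return GROUNDED["horse"](x) and GROUNDED["striped"](x)

print(is_zebra({"horse-shape", "stripes"}))  # -> True
print(is_zebra({"horse-shape"}))             # -> False
```

The point of the sketch is that "zebra" inherits its grounding from "horse" and "striped"; no new toil (and no external interpreter) is needed for the composite.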
> This still leaves the question of how this could be done. Perhaps we
> need to return to the Turing Test for this. One problem here is that
> the machine and the person are locked in a room, and all that is being
> tested are their linguistic capacities. This is not sufficient - the
> test can be passed with a system with no "mind", as Searle showed.
> However, humans have more than linguistic abilities. For example, if we
> consider a robot which also has sensorimotor capabilities, we can give
> it an object it has to name, a flower. Searle would do this by looking
> at the flower, touching the flower, smelling the flower, and then
> replying "rose". The robot also touches the flower, receives data input
> from the flower, and replies "rose". Before, with only linguistic
> capacity, we could say that the program was not generating
> understanding, because Searle could run the same program without
> understanding. However, once we introduce sensorimotor abilities, the
> situation becomes more difficult. Searle understands the concept "rose"
> through his sensorimotor capacities.
Well, let's not jump right to concepts! He recognises that the object is
in the category "rose."
> The robot appears to be doing the
> same thing. The robot may or may not have understanding, and a "mind",
> but there again, that statement could also apply to Searle! The robot
> may not be performing exactly the same processes to arrive at the same
> results, but nevertheless it is interacting with the object in a
> meaningful way, without the need of an external interpreter. Therefore,
> if we decide that the robot does not have a mind, it is an arbitrary
> decision, rather than because it is observably distinguishable from a
> human.
Good. And of course you came round to pre-empting my earlier comments,
had I had the time to read this to the end before beginning to reply...
> To summarise, Searle described the Chinese room argument, which
> demonstrated that the program running a machine could not guarantee
> understanding, because he could "run" the same program and produce the
> same results without any understanding. However, the machine being
> tested had only linguistic capacities. When sensorimotor capacities
> were added, this claim could no longer be made. It was not possible,
> for example, to claim that the robot was not "seeing", in the same way,
> because Searle could "see".
Why not? You have to actually make the "transducer" counterargument,
using the fact that computation is implementation-independent (hence can
be implemented by Searle, without understanding) whereas sensorimotor
transduction is not.
> Therefore, if the robot receives input
> through sensorimotor channels, rather than arbitrary symbols, it is
> more difficult to judge that it is not understanding. One of the
> problems with understanding symbols is that they have to be grounded
> in something other than more arbitrary symbols. One solution is that
> they could be grounded in non-symbolic representations, which are not
> arbitrary, and are acquired through sensorimotor interaction with the
> objects in the real world. Through connectionism, these iconic and
> categorical representations can be associated with their linguistic
> labels.
You haven't quite said how neural nets could do the trick: They could
learn the features in the sensory "shadow" of the object that allow the
robot to categorise it correctly.
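What the nets would be doing, in miniature: a toy perceptron (the data and features are invented; feature 0 plays the invariant feature in the sensory "shadow", feature 1 is noise) learns to weight the feature that actually predicts category membership.

```python
# Toy perceptron: learning which feature in the sensory "shadow"
# is invariant for the category. Feature 0 is invariant; feature 1 noise.
import random
random.seed(0)

w = [0.0, 0.0]
def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] > 0.5 else 0

# Training data: the category label depends only on feature 0.
data = [([1, random.randint(0, 1)], 1) for _ in range(20)] + \
       [([0, random.randint(0, 1)], 0) for _ in range(20)]

for _ in range(10):                 # standard perceptron updates
    for x, y in data:
        err = y - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]

print(predict([1, 0]), predict([0, 1]))  # the net keys on the invariant feature
```

After training, the weight on the invariant feature dominates: the net categorises by feature 0 and ignores the noise, which is the kind of feature-learning the grounding story needs.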
Also, to do double duty with the question about the POSITIVE evidence
for computationalism, you would have to give all the supporting reasons
and evidence suggesting that "Strong AI" (computationalism) might have
been expected to be RIGHT, even though Searle has shown it is wrong:
(1) the success of Artificial Intelligence in generating intelligent
behaviour (and the failure of behaviourist explanation); (2) the power
and generality of computation; (3) the symbol-like nature of language,
reasoning and other aspects of cognition; (4) the light cast by the
implementation-independence of cognition on the mind/body problem.
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:57 GMT