Re: Searle's Chinese Room Argument

From: HARNAD Stevan (harnad@cogsci.soton.ac.uk)
Date: Mon Jun 03 1996 - 20:53:29 BST


> Date: Thu, 23 May 1996 13:19:27 GMT
> From: Pascoe Clare <csp195@soton.ac.uk>
>
> Support for the primacy of robotic capacity comes from a 'thought
> experiment' originally proposed by John Searle.

Be careful: Searle's argument DOES support robotic approaches, but he
himself did not mean it to. He only wanted to challenge computational
approaches.

> This thought experiment
> was called the 'Chinese Room Argument.' It challenged the Turing
> Test (a test with a likelihood of converging on the true necessary
> and sufficient conditions for having a mind). Imagine a computer that
> receives Chinese symbols and responds with symbols just like a
> Chinese pen-pal, so that no one can tell it is a machine rather than
> a person. You could conclude that the computer has a mind and
> understands the symbols, but Searle challenges this, saying that the
> only thing the computer is doing is following rules for manipulating
> symbols on the basis of their shapes. For example, if he took the
> computer's place and followed the instructions for manipulating the
> symbols, he could do so without actually understanding them. So the
> computer would not be understanding either, and so does not have a
> mind. This opposes the validity of the Turing Test.

It only opposes the validity of a Turing Test that could be successfully
passed by a computer alone. If, in reality, a computer alone could NOT
successfully exhibit lifelong pen-pal capacity, equivalent to and
indistinguishable from our own, then the Turing Test would still be ok;
a computer would simply fail it. Another system (say, a robot) might
pass it, but it would be immune to Searle's Chinese Room Argument,
because a robot is not just a symbol manipulator, so Searle could not do
everything it does, as he can with a mere symbol-manipulator.
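To make "manipulating symbols on the basis of their shapes" concrete,
here is a minimal sketch in Python (the rule table and symbols are
invented purely for illustration; they are not anything Searle or the
original argument specifies):

    # A toy "Chinese Room": replies are produced by matching the
    # input's shape (its character string) against a rule table.
    # Nothing in the program has any access to what the symbols mean.
    rules = {
        "你好吗": "我很好",  # a shape-to-shape rule; the program never
        "再见": "再见",      # "knows" these are a greeting and a farewell
    }

    def reply(symbols):
        # Pure lookup on the symbol string's form; an unknown shape
        # gets a default shape back, chosen just as blindly.
        return rules.get(symbols, "请再说一遍")

    print(reply("你好吗"))  # -> 我很好

Searle's point is that he could execute these same rules by hand,
producing the same pen-pal replies, without understanding a word of
Chinese; so the computer's executing them is no evidence of
understanding either.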

> The only way to know if another body has a mind is by
> being the other body. Searle's argument does this. Although he can't
> say whether the computer understands Chinese, if it does, it is not
> because of the computational state it is implementing, because Searle
> is implementing the very same computational state and can say he is
> understanding no Chinese.

Good reply, but for an A, you would need to consider and integrate
other possibilities besides symbol manipulation, e.g., analog
processes, which are immune to Searle's Argument. And what about neural
nets?


