Re: Searle's Chinese Room Argument

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Thu Mar 16 2000 - 15:27:15 GMT


On Thu, 16 Mar 2000, Brooking, Stephen wrote:

> So if the machine gives the same answers as humans would to the questions,
> then it can be said to be simulating human ability. Agreed. While it is
> not a trivial progression from question to answer in cases like this, I
> disagree that there is any understanding going on. Do Partisans of Strong
> AI claim that the machine can understand what is involved with a man going
> into a restaurant and eating (or not) a hamburger? Can the machine
> visualise this man storming out of the restaurant?

It's not just "question and answer" (unless any and every conversational
interaction is just Q & A). Passing T2 means being able to interact
indistinguishably from a real pen-pal in every respect, including
discussing "what is involved with a man going into a restaurant and
eating (or not) a hamburger" and discussing what it is to "visualise
this man storming out of the restaurant".

We never get to see the understanding or the visualising in either case
(human or computer -- or even robot).

> On the point of symbol manipulation: there is no argument that the meaning
> of the question is entirely captured by the symbols that represent it. So
> a machine could potentially extract as much information as humans could.
> What is there, over and above symbol manipulation, that humans do?

I don't completely understand your question, but, as an example, when
you say "visualise this man storming out of the restaurant," I really
visualise it! And when you say "restaurant," I really understand what
you mean (I don't just manipulate the symbols that generate the string
"I really understand what you mean").

> > Blakemore:
> > I agree with Searle again. It does not seem to me that we only manipulate
> > symbols for understanding. We infer things from the text, considering our
> > own opinions, beliefs, feelings and knowledge when answering questions.
>
> This can be seen as a reply to my question above, but what, for example, is
> inference if it is not manipulation of symbols in some way? We consider
> our opinions, which are going to be based on the words (symbols) in the
> sentence (question). Could our opinions not be seen as functions of the
> symbols in question, and further that these functions are implemented with
> symbol manipulation?

The contention is not that inference is not symbol-manipulation AT ALL;
it is that it is not JUST symbol-manipulation. Besides manipulating
symbols, we also understand what they mean -- and that involves
"manipulative" capabilities (among other things) that go beyond symbol
manipulation, as in a T3 robot's manipulation of the world of objects
and events and states that its symbols are about.

> If someone were to reply that the program would be too big to memorize,
> then they cannot believe that what the program achieves is exactly what
> humans do, as we all have memorized 'the English version of the program'.
> Or would the reply be that the program would be too large to memorize,
> alongside the one that we were already running?

Either way, the reply is not based on a substantial point, but on an
irrelevant technicality.

> What beliefs could a thermostat have? "I believe that the temperature is
> 20 degrees Celsius"? To push a point, a thermostat could be said to have
> knowledge, taking input from a temperature sensor and the like, but not
> belief.

Simple beliefs (if any at all). And if "knowledge," simple knowledge.
Neither helps; they sink or swim together.

> It's my view that the mind is not separate from the brain. The operations
> that the brain does are implicit in the neurons and how they are
> arranged. That is, the algorithms are not implemented in 'software'
> running on the brain, but are implicit in the 'hardware'. The mind is
> something that the brain implements.

What does that mean?

> Why is the answer to "Could a machine think?" obviously yes? I would argue
> that it is no, and I certainly wouldn't say that it is obvious. I agree
> with Blakemore's point.

Because it is not at all obvious that we ourselves are not machines! A
machine is just a causal system.

> If you produce artificially a machine (although I'm about to argue that it
> wouldn't be a machine) that was sufficiently like (exactly the same as?)
> humans, then you're not making a machine, you are making a human. Again, I
> agree with Blakemore's point - if you change the materials, you are not
> going to get the same effects.

First point: What if we ARE machines?

Second point: HOW does the material matter? (Computationally it doesn't,
because of implementation-independence.)

> > Blakemore:
> > Thirdly, strong AI only makes sense given the dualistic assumption that,
> > where the mind is concerned, the brain doesn't matter. In strong AI (and
> > in functionalism, as well) what matters are programs, and programs are
> > independent of their realization in machines.
>
> I don't agree that where the mind is concerned the brain doesn't matter,
> as I have stated before.

Computation is implementation-independent, but it still has to be
implemented somehow; it is only the physical details of the implementation
that don't matter.
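
To see what that means concretely, here is a minimal sketch (just an
illustration in ordinary Python, assuming nothing beyond the standard
language, and not anything from the original exchange): the same formal
symbol-manipulation rule is run over two physically different
realisations of the symbols, and the program neither knows nor cares
which realisation it is running on.

    # Illustration only: the "swap A and B" rule is defined purely over
    # symbol identity, so it runs unchanged on two different encodings.
    def swap(tape, a, b):
        return [b if s == a else a if s == b else s for s in tape]

    # Realisation 1: symbols implemented as characters.
    print(swap(list("restaurant"), "r", "t"))

    # Realisation 2: the "same" symbols implemented as integer codes
    # (r = 114, t = 116); the program itself is untouched.
    print(swap([ord(c) for c in "restaurant"], 114, 116))

The rule does the same thing in both cases; nothing about which physical
tokens stand for the symbols enters into it. That is the sense in which
the details of the implementation are irrelevant, even though there still
has to be SOME implementation.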

But it's not clear that the brain is merely the implementation of a
symbol system. Remember hybridism...

Stevan


