Re: Searle's Chinese Room Argument

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Tue Mar 14 2000 - 14:58:39 GMT


On Mon, 13 Mar 2000, Blakemore, Philip wrote:

> Strong AI actually states that all programs with cognitive states have a
> mind. While I can accept that a super-smart machine able to pass the
> Turing Test (say, up to T3) might have a mind, as it would be
> indistinguishable from our own minds, to say a toaster, mobile phone
> or a digital watch has a mind is absurd.

Actually, Searle doesn't say (that Strong AI says) that a toaster,
phone, watch (or even "all programs with cognitive states") have a mind;
he says (that Strong AI says) that a (running) T2-passing computer
programme has a mind.

> We must remember here that we are not talking about Searle learning
> Chinese by correlating the first and second script in Chinese from the
> script in English. That would be possible, but it would be the human
> doing the understanding and not the machine.

Correct.

> This is a very clever example that Searle has chosen. Searle himself is
> acting as the computer by executing the program. We can bypass the "Other
> Minds" problem, where you cannot assert whether another person (or
> machine) has a mind, as we can now ask Searle himself (and he knows he has
> a mind).

More important, he knows he doesn't understand Chinese. So rather than
bypassing the other-minds problem, Searle actually creates a unique
"periscope" on the other-minds problem, one that only works on ONE
special kind of candidate, namely, an implementation-independent symbol
system -- because then one can oneself implement, and hence "be" the
system, and confirm at first hand that there is no Chinese-understanding
going on!
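
Just to make "implementation-independent symbol system" concrete, here is
a minimal sketch (not Searle's or Schank's actual program; the rule table,
strings and function name are all invented for illustration): the right
output strings can be produced by nothing but shape-based rules, so
whoever executes the rules need know nothing about what the symbols mean.

    # Toy "rule book": input squiggles -> output squoggles.
    # The Chinese strings are just uninterpreted shapes to the program
    # (and to anyone executing it by hand who doesn't read Chinese).
    RULES = {
        "你好吗?": "很好, 谢谢.",
        "你懂中文吗?": "当然懂.",
    }

    def answer(squiggles: str) -> str:
        # Pure shape-matching: no meanings are consulted anywhere.
        # A person could carry out this step by hand from a rule book.
        return RULES.get(squiggles, "请再说一遍.")

    print(answer("你懂中文吗?"))  # looks like understanding; isn't

The person (or machine) running this lookup is in exactly Searle's
position: producing the right replies while understanding none of them.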

> I agree with Searle again. It does not seem to me that we only manipulate
> symbols for understanding. We infer things from the text, considering our
> own opinions, beliefs, feelings and knowledge when answering questions.

But if inference is not symbol-manipulation, then what is it?

> Searle is saying that the way in which we read information and process it
> could be done by a computer program, much like reading in a file from a
> computer disc. The syntax is checked at this stage in the program and
> possibly some level of the semantics. But the understanding comes from a
> much higher abstract level in the brain.

And what on earth does that mean? If not computation, then what?
(Remember the Church/Turing Thesis? Remember Chalmers on "reconfiguring
the computer with a program" to emulate the (relevant part) of the
causal structure of the brain? And Pylyshyn, on processes below and
above the level of the "cognitive architecture"?)

If the brain's not just doing computation, then what else IS it doing?

> The other response is that there are two
> minds. The human mind which executes the program, and another which is
> created as a result of the program and the computer. Are there then two
> minds? I don't think so, but I cannot prove it. The computer (the human)
> is aware of his own mind, but does he think there is another mind floating
> around (in his head, since the system is all in his head) understanding
> the program he may not even understand?

That's the point. After all, it's one thing to give OTHER minds the
benefit of the doubt -- when they are (or might be) inside other people,
maybe even inside other machines. But it's asking a little too much to
give the benefit of the doubt to another mind IN YOUR OWN HEAD -- especially
when there is a much simpler explanation (I'm just doing
squiggle-squoggling symbol manipulations). Not quite the same as
multiple-personality disorder (induced by early sexual abuse, not by
squiggle-squoggling).

> > SEARLE:
> > The idea is that while a person doesn't understand Chinese, somehow the
> > conjunction of that person and bits of paper might understand Chinese.
>
> It does seem absurd, doesn't it?

And even more absurd when Searle has memorized it all, and he IS the
whole system...

> remember that this is only T2 and there are other, more advanced Turing
> Tests to consider. If we rise to the level of T3, a robot capable of
> interacting with the outside world as well as having all the functionality
> of T2, it could produce different answers.

Is it possible to do a Chinese Room Argument against T3? How (or why
not)?

> > McCARTHY:
> > Machines as simple as thermostats can be said to have beliefs, and
> > having beliefs seems to be a characteristic of most machines capable of
> > problem solving performance.
>
> I agree with Searle that this is a silly remark. We should only consider
> a machine to have intelligence when it passes at least T2 (perhaps it
> should be higher).

Note, though, that the "silly remark" is made by John McCarthy, the
father of AI!

> If this claim were true and the thermostat had a mind, why are we
> bothering to understand our own mind, which is more complex? We have
> already succeeded in creating a mind many years ago. The reason is simple.
> We want something smarter, something with more intelligence (or some!).

And something that really DOES have a mind...

> > SEARLE:
> > But the answer to the robot reply is that the addition of such
> > "perceptual" and "motor" capacities adds nothing by way of
> > understanding, in particular, or intentionality, in general, to Schank's
> > original program.
>
> This is easy to see as the robot still has to process the information in
> its "brain". This brain could be substituted by Searle himself, again
> showing that if the robot was trying to communicate in Chinese, Searle
> would still not understand any of it.

Ah, but that would be just T2 again (but this time performed, for some
reason, by a robot, whose program could then be implemented by Searle
again).

But what if the test was T3? Could Searle still implement it all without
really doing/being it all? Could he BE the peripherals? Could he BE the
analog bits?

> > SEARLE:
> > III. The brain simulator reply (Berkeley and M.I.T.). "Suppose we design
> > a program that doesn't represent information that we have about the
> > world, such as the information in Schank's scripts, but simulates the
> > actual sequence of neuron firings at the synapses of the brain of a
> > native Chinese speaker when he understands stories in Chinese and gives
> > answers to them.
>
> On the face of it, this seems to be a very good question. Some of the
> responses to this are that you need to mimic the exact behaviour of the
> brain to have understanding. Thus, if the program copied this, it might
> have understanding.
>
> However, the neurons' firing must be set up in exactly the same way.
> Each neuron copied must be attached to the same neurons as in the real brain.
> Also, simulation is not the real thing. Back to the virtual furnace -- it
> doesn't get hot!

You're 90% right. This would still just be squiggles and squoggles. If
in order to succeed in passing T2 the program must do brain simulations,
fine, Searle can do those too. They're still just squiggles and
squoggles. It doesn't matter what algorithm Searle is executing --
whether it's just a language algorithm or a language and brain
algorithm. As long as it's just an algorithm, he can implement it.
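
To see why a brain simulation is still just squiggles and squoggles, here
is a minimal sketch (a made-up toy, not a real neural model; the weights,
threshold and sizes are arbitrary assumptions): "simulating neuron
firings" only means updating numbers according to rules, every step of
which could in principle be done by hand.

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(5, 5))   # made-up connection strengths
    state = rng.random(5)               # made-up activation levels

    for _ in range(3):
        # one "firing" step: threshold the weighted sum of the inputs
        state = (weights @ state > 0.5).astype(float)

    print(state)  # still just symbol manipulation; nothing here gets "wet"

Whether the rule book encodes a language algorithm or this kind of
neuron-level bookkeeping, it is an algorithm either way, and Searle can
implement it.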

> I again agree with Searle's aside point, but I think AI would still exist.
> The fact that computer programs can be ported to other machines shows the
> independence of the program from the computer. By the same reasoning, the
> mind would be separate from the brain -- but is it? We don't know this.

The porting isn't the problem. Maybe brain states can be ported too, or
cloned, or laser-copied. The only problem is the
implementation-independence, which leaves room for Searle's peekaboo
periscope on the presence/absence of any alleged mental states.

> The simulation in the computer is still not the real neurons firing in the
> brain.

What if, instead of a computational simulation, it were a synthesis
(like synthetic organs) -- made out of other stuff, but with the same
functional capability (T4!)?

> > SEARLE:
> > If the robot looks and behaves sufficiently like us, then we would
> > suppose, until proven otherwise, that it must have mental states like
> > ours that cause and are expressed by its behavior and it must have an
> > inner mechanism capable of producing such mental states. If we knew
> > independently how to account for its behavior without such assumptions
> > we would not attribute intentionality to it especially if we knew it had
> > a formal program. And this is precisely the point of my earlier reply to
> > objection II.

Note, though, that Searle's Periscope does not work for a T3 robot -- it
works for a PART of the robot (the computational part), but not the
rest. So here a "System Reply" would be dead right!

> > SEARLE:
> > That is an empirical question, rather like the question whether
> > photosynthesis can be done by something with a chemistry different from
> > that of chlorophyll.
>
> Very interesting. As with photosynthesis, we haven't found any other
> chemical process that does the same thing. However, we could simulate it on a
> computer without much problem. But it wouldn't produce anything physical.
> Unless we could give a computer the physical properties in our brains for
> learning Chinese, rather than a formal computer program, the computer will
> understand nothing.

And you could not do that if the properties were not just computational!

> > SEARLE:
> > "Could a machine think?"
>
> > The answer is, obviously, yes. We are precisely such machines.
>
> Only if you define humans as machines. Although I don't know for
> certain, I do not think I am running a computer program, as I have
> physical state (action and consequence), understanding and thought. I
> wouldn't consider myself as a machine.

No, Searle's point is that computers are not the only sort of machine!
In general, a "machine" is just a causal system. A car is a machine (but
not a computer); more generally, a liver is a machine (but not a
computer); etc.

> > SEARLE:
> > "Yes, but could an artifact, a man-made machine think?"
>
> > SEARLE:
> > Assuming it is possible to produce artificially a machine with a nervous
> > system, neurons with axons and dendrites, and all the rest of it,
> > sufficiently like ours, again the answer to the question seems to be
> > obviously, yes.
>
> I totally agree. Completely and exactly (physically) copy a human brain
> and it should behave like us. However, using "other sorts of chemical
> principles" doesn't seem likely. The other chemical process must produce
> EXACTLY the same effects. I don't think you get exactly the same results
> using different material than the original organic matter.

This is T4. But don't you want only the sameness that's RELEVANT? Is all
of brain function relevant?

> > SEARLE:
> > If by "digital computer" we mean anything at all that has a level of
> > description where it can correctly be described as the instantiation of
> > a computer program, then again the answer is, of course, yes, since we
> > are the instantiations of any number of computer programs, and we can
> > think.
>
> Are we made up from computer programs?

In part, no doubt; and the rest, too, is computer-simulable (according
to the Church/Turing Thesis). But the simulation is not the real thing
-- it doesn't move, fly, heat, photosynthesize,... think.

> > SEARLE:
> > the distinction between the program and the realization -- proves fatal
> > to the claim that simulation could be duplication.

Put another way: IMPLEMENTATION-INDEPENDENCE is at the same time
computationalism's greatest strength (the power of computation, the C/T
Thesis) and its fatal flaw (as shown by Searle's Periscope).
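
A minimal illustration of what implementation-independence means (the
little "program" below, a parity-counting transition table, and both
"machines" are invented for the example): the very same formal program
can be run by physically different realizations and do exactly the same
thing -- which is why one can always, in principle, be the implementation
oneself.

    # The "program": a pure transition table (parity of the 1s seen so far).
    PROGRAM = {("even", "1"): "odd",  ("odd", "1"): "even",
               ("even", "0"): "even", ("odd", "0"): "odd"}

    def run_on_table_machine(bits: str) -> str:
        # realization 1: a dictionary lookup does the work
        state = "even"
        for b in bits:
            state = PROGRAM[(state, b)]
        return state

    def run_on_if_machine(bits: str) -> str:
        # realization 2: physically different machinery, same program
        state = "even"
        for b in bits:
            if state == "even":
                state = "odd" if b == "1" else "even"
            else:
                state = "even" if b == "1" else "odd"
        return state

    assert run_on_table_machine("1101") == run_on_if_machine("1101")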

> Firstly, computers can only simulate, not duplicate.

They can duplicate anything that really IS implementation-independent,
but they cannot duplicate what is implementation-DEpendent.

> Secondly, humans do information processing and find it natural to think of
> machines doing the same thing. However, all machines do is manipulate
> symbols.

But Searle does seem to think that, in having shown that cognition
cannot be ALL just computation, he has also shown that cognition cannot
be computation AT ALL. This he has not done; mental states are almost
certainly computational IN PART. We are HYBRID
computational/noncomputational machines.

> Thirdly, strong AI only makes sense given the dualistic assumption that,
> where the mind is concerned, the brain doesn't matter. In strong AI (and
> in functionalism, as well) what matters are programs, and programs are
> independent of their realization in machines.

And that's because you can't SEE the mind -- so you can imagine it's
there when it's not. That's what Searle's Periscope is there to remedy.

> I agree with the argument throughout this paper showing that formal
> programs in no way give a machine understanding.

But not that real understanding in no way involves formal programs. We
are hybrid systems.

Stevan


