Re: Searle: Minds, Brains and Programs

From: HARNAD Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Sat May 26 2001 - 16:42:27 BST


On Sat, 26 May 2001, Graham Clark wrote:

> I have been looking over Searle's paper (Minds, Brains and Programs), and
> I am a bit confused by your reading of it. You say that Searle is saying
> that because of the CRA, cognition can't be computation AT ALL, but he says
> several times that passing the TT can't be SOLELY computation. He does
> mention some sort of sensorimotor capabilities (in the "Robot reply"), but
> seems to counter this completely wrongly (by ignoring any grounding these
> extra capabilities could provide). So although Searle doesn't seem to have
> any clues as to what the "something else" (apart from computation) might be,
> he does seem to agree that computation could be a part of the mind.
>
> Is this correct? Any feedback would be greatly appreciated.

Good question, and good point. I would say that we could give the paper
a more benign interpretation, and say that Searle recognizes that all
he has shown is that cognition can't be all computation, not that
cognition can't be computation at all.

But if that is what he means, then why does he say that what
follows from the Chinese Room Argument is that we should turn away from
computation and study the brain instead?

"Weak AI" (which just uses computation as a tool for modeling the mind,
just as it can be used as a tool for modeling the brain or the solar
system or a plane) still seems to allow for the possibility that some
of whatever it is that passes the TT could be computational and some
not. A hybrid computational/noncomputational system could be modelled
with Weak AI. Only Strong AI says it's all got to be computation.

So, according to your reading, if some of cognition could be
computation, why does Searle say we need to turn INSTEAD to the brain?

As to the sensorimotor capacities of robots, what Searle doesn't seem to
consider is that they could be PART of cognition, rather than just
input to a (noncognizing) computer. I agree with you that this is
because his critique is negative, and he has not given the positive
solution to the grounding problem enough thought. He's out to show it's
not computers, and he thinks a robot is necessarily just a computer
with I/O -- whereas it could be a hybrid computational/noncomputational
system all the way through.

Stevan Harnad
