Searle: Minds, Brains and Programs

From: Hosier Adam (ash198@ecs.soton.ac.uk)
Date: Tue Mar 06 2001 - 10:15:59 GMT


Adam Hosier < >

Searle, John R.: Minds, Brains, and Programs (1980)
http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

Hosier:
The best way to summarise Searle's paper 'Minds, Brains, and Programs'
(1980) is to quote his own abstract:

SEARLE:
> This article can be viewed as an attempt to explore the consequences
> of two propositions. (1) Intentionality in human beings (and animals)
> is a product of causal features of the brain. I assume this is an
> empirical fact about the actual causal relations between mental
> processes and brains. It says simply that certain brain processes are
> sufficient for intentionality. (2) Instantiating a computer program
> is never by itself a sufficient condition of intentionality. The main
> argument of this paper is directed at establishing this claim. The form
> of the argument is to show how a human agent could instantiate the
> program and still not have the relevant intentionality.

Hosier:

The argument that Searle will put forward has come to be known as the
classic Chinese Room argument. It is this argument that Searle uses to
answer the following question.

SEARLE:
> "Could a machine think?" On the argument advanced here only a machine
> could think, and only very special kinds of machines, namely brains
> and machines with internal causal powers equivalent to those of brains.
> And that is why strong AI has little to tell us about thinking, since
> it is not about machines but about programs, and no program by itself
> is sufficient for thinking.

Hosier:
Notice that the main point Searle is trying to make is not that AI is
impossible, merely that it is not possible for a conventional,
computation-based computer program to 'be intelligent'. In particular
Searle is attacking the concept of Strong AI, which he goes on to define as,

SEARLE:
> But according to strong AI, the computer is not merely a tool in the
> study of the mind; rather, the appropriately programmed computer really
> is a mind, in the sense that computers given the right programs can
> be literally said to understand and have other cognitive states.

Hosier:
Searle goes on to give an example of a program by Roger Schank
(Schank & Abelson 1977). He describes this program as follows.

SEARLE:
> The aim of the program is to simulate the human ability to understand
> stories. It is characteristic of human beings' story-understanding
> capacity that they can answer questions about the story even though
> the information that they give was never explicitly stated in the
> story. Thus, for example, suppose you are given the following story:
> "A man went into a restaurant and ordered a hamburger. When the
> hamburger arrived it was burned to a crisp, and the man stormed out
> of the restaurant angrily, without paying for the hamburger or leaving
> a tip." Now, if you are asked "Did the man eat the hamburger?" you
> will presumably answer, 'No, he did not.'

Hosier:
In simple terms, the program can answer queries about a story, drawn
from its knowledge base, at a level similar to that of a human
answering questions about the same story.

SEARLE:
> Partisans of strong AI claim that in this question and answer sequence
> the machine is not only simulating a human ability but also
>
> 1. that the machine can literally be said to understand the story and
> provide the answers to questions, and
>
> 2. that what the machine and its program do explains the human ability
> to understand the story and answer questions about it.

Hosier:
I am not sure which 'partisans' Searle is referring to, but I think it
is fairly obvious that they are wrong. These days no one would
interpret an 'expert system' as having cognitive states - and the
program by Roger Schank is essentially just an expert system. However,
in order to disprove the claim of the 'partisans of strong AI', Searle
creates a Gedankenexperiment - the Chinese Room Argument (CRA). I
believe that this argument is so effective that, beyond serving its
initial purpose of de-mystifying expert systems, it can also be
levelled at all computation-based efforts at creating AI. This is in
fact what Searle goes on to express.

Searle's CRA can be briefly described as follows:
Suppose that an exclusively English-speaking person is locked in a room
and given a set of rules for responding to Chinese script. Now suppose
that, by following these rules, the person can take Chinese writing as
input and give Turing-indistinguishable responses as output; to an
outside observer the person can read and write Chinese. (N.B. This is
not to say that the person understands what he is doing.)
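
A toy, hypothetical sketch of the kind of purely formal rule-following
involved (the Chinese phrases and rules here are just placeholders, and
of course no finite lookup table could really pass a Turing test, which
is the worry raised in the next paragraph):

    # Hypothetical sketch of the room's rule book: purely formal symbol
    # manipulation. The rule-follower pairs an incoming string of symbols
    # with an outgoing string and never needs to know what either means.

    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
        "你叫什么名字？": "我叫王小明。",    # "What is your name?" -> "My name is Wang Xiaoming."
    }

    def room(chinese_input):
        # Look up the shape of the input and copy out the paired shape.
        return RULE_BOOK.get(chinese_input, "对不起，我不明白。")  # "Sorry, I do not understand."

    print(room("你好吗？"))   # fluent-looking output, no understanding inside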

This argument seems flawed only in the respect that, although a program
could be made to give appropriate symbolic responses to some symbol
inputs, surely no possible program could give adequate responses to all
symbol inputs. For instance, humans have trouble answering 'what is the
meaning of life?', as would any AI solution. More fundamentally, it
seems obvious that a symbol-responding system could not just be a fixed
set of rules. The system would have to include some kind of experience
'history', so that questions based on previous questions could be
answered. It would also seem to need some kind of actual symbol
grounding in the physical world, so that actual 'meaning' could be
attached to the input and output symbols. It is this 'meaning' that
Searle suggests is lacking from any computational AI system. With
respect to the earlier claims, Searle argues:

SEARLE:
> 1. As regards the first claim, it seems to me quite obvious in the
> example that I do not understand a word of the Chinese stories. I have
> inputs and outputs that are indistinguishable from those of the
> native Chinese speaker, and I can have any formal program you like,
> but I still understand nothing.

Hosier:
Thus he argues that Schank's program does not 'understand'. I agree.

SEARLE:
> 2. As regards the second claim, that the program explains human
> understanding, we can see that the computer and its program do not
> provide sufficient conditions of understanding since the computer and
> the program are functioning, and there is no understanding. But does
> it even provide a necessary condition or a significant contribution
> to understanding?

Hosier:
Again I agree. As an engineer I am not particularly concerned with
whether it provides a 'contribution to understanding'; I see AI
engineering as a method of solving problems rather than as a problem to
be solved. However, I believe that there is in fact a great deal to be
learned from AI, and that humans probably are slowly reverse-engineering
the brain. (This would seem to be a logical consequence of the fact
that the main source of empirical data about intelligence comes from
experiments on human or animal brains.)

SEARLE:
> Notice that the force of the argument is not simply that different
> machines can have the same input and output while operating on
> different formal principles -- that is not the point at all. Rather,
> whatever purely formal principles you put into the computer, they
> will not be sufficient for understanding, since a human will be able
> to follow the formal principles without understanding anything.

Hosier:
The previous statement by Searle seems to summarise his entire argument
well. He then goes on to talk about how humans extend their own
'intentionality' onto the tools we create, for instance, "The door
knows when to open because of its photoelectric cell". Maybe humans do
this, but even though we do, we don't actually think the door has any
kind of understanding. However, this is diverging from Searle's main
point, which is that there is, and can be, no understanding in a pure
symbol system.

Searle then goes on to discuss a number of counter-arguments to his
theory ("Now to the replies:"). I will not repeat all of them here, as
I believe that in most cases Searle's replies to these arguments are
correct. One particularly interesting reply concerns the 'Robot Reply
(Yale)'. Essentially the counter to the CRA is that if the
computational system involved some kind of grounding in the real world,
through sensory-motor interaction with that world, it would be immune
to the CRA; i.e. Searle can become a rule system, but he cannot become
a 'robot'.

SEARLE:
> The first thing to notice about the robot reply is that it tacitly
> concedes that cognition is not solely a matter of formal symbol
> manipulation, since this reply adds a set of causal relations with
> the outside world.

Hosier:
The above quote seems to give a clue about what Searle thinks a true AI
system will need. However, Searle's reply to the robot reply seems to
be correct. In summary, Searle suggests that although it is not
physically possible to become the whole robot system, it is still
possible to become its computational core. The addition of
sensory-motor input and output from the real world can then simply be
seen as more meaningless input and output symbols for that
computational core.
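
In other words, as far as the computational core is concerned, a camera
reading or a motor command is just one more uninterpreted token. A
hypothetical sketch (the token names are made up):

    # Hypothetical sketch: to the rule-following core, the robot's sensors
    # and motors only add more meaningless symbols. The core maps input
    # tokens to output tokens; it never 'knows' that "CAM:3F2A" came from
    # a camera or that "MOTOR:LEFT:30" will turn a wheel.

    RULES = {"CAM:3F2A": "MOTOR:LEFT:30", "CAM:91C4": "MOTOR:STOP"}

    def core(perceptual_symbol):
        return RULES.get(perceptual_symbol, "MOTOR:STOP")

    print(core("CAM:3F2A"))   # the robot turns left; the core understands nothing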

SEARLE:
> But the answer to the robot reply is that the addition of such
> "perceptual" and "motor" capacities adds nothing by way of
> understanding, in particular, or intentionality, in general, to
> Schank's original program.

> I am receiving "information" from the robot's "perceptual" apparatus,
> and I am giving out "instructions" to its motor apparatus.

> [the robot], it is simply moving about as a result of its electrical
> wiring and its program. And furthermore, by instantiating the program
> I have no intentional states of the relevant type. All I do is follow
> formal instructions about manipulating formal symbols.

Hosier:
This seems to me to be a correct answer to the robot reply. It also
leads me to a point I have been considering. I know that in humans some
of the senses seem to extend so far as to actually become part of the
brain, such as the connection between the eyes and the brain. At the
end of the day, however, the human brain 'core' simply deals with all
the meaningless symbols that come from the human senses, such as the
eyes or the skin. Thus it would seem to me that the human brain only
has symbols as input (as well as the internal ability to record and
play back these inputs in order, and thus have some temporal awareness).
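
A hypothetical sketch of that picture: a 'core' that only ever receives
opaque sensory tokens, but records them in the order they arrive, which
is all it needs for a crude kind of temporal awareness.

    # Hypothetical sketch: a 'brain core' that sees only opaque sensory
    # tokens, but keeps them in the order received so that later
    # processing can refer back to earlier inputs.

    class Core:
        def __init__(self):
            self.history = []            # ordered record of everything sensed

        def sense(self, token):
            self.history.append(token)   # record the token; it means nothing here

        def replay(self):
            return list(self.history)    # play the inputs back in order

    core = Core()
    for token in ["eye:101", "skin:hot", "eye:110"]:
        core.sense(token)
    print(core.replay())                 # ['eye:101', 'skin:hot', 'eye:110']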

Searle also makes the following controversial statement regarding the
Turing test,

SEARLE:
> The only motivation for saying there must be a subsystem in me that
> understands Chinese is that I have a program and I can pass the
> Turing test; I can fool native Chinese speakers. But precisely one
> of the points at issue is the adequacy of the Turing test.

Hosier:
In my opinion the adequacy of the Turing test is not really in question
here, because the Turing test does not claim to prove a system's
intelligence, or to prove some particular facet of how the system
works, such as whether it understands. It is designed simply to show
that, from the point of view of an external examining entity, if one
system is indistinguishable from another then it might as well be that
other system.

In fact Searle seems to elaborate on this point without realising that
the Turing test is not an indication of a particular type or method of
intelligence.

SEARLE:
> If strong AI is to be a branch of psychology, then it must be able to
> distinguish those systems that are genuinely mental from those that
> are not. It must be able to distinguish the principles on which the
> mind works from those on which nonmental systems work; otherwise it
> will offer us no explanations of what is specifically mental about
> the mental. And the mental-nonmental distinction cannot be just in
> the eye of the beholder but it must be intrinsic to the systems.

Hosier:
Searle then turns to the 'beliefs' of systems.

SEARLE:
> The study of the mind starts with such facts as that humans have
> beliefs, while thermostats, telephones, and adding machines don't. If
> you get a theory that denies this point you have produced a
> counterexample to the theory and the theory is false.

Hosier:
However, Searle does not expand on the word 'belief'. I have beliefs. I
do not really know why I have most of them; I simply have them without
much understanding. Perhaps I am simply following a rule or a pattern,
in the same way as a computer might. Searle obviously does not see this
as an option. Searle repeatedly uses the example of a hunk of metal on
the wall 'not having beliefs', which seems obvious. But what about a
computer program that predicts various probabilities for tomorrow's
weather? Perhaps such probabilities are the simplified basis of the
'strong or weak beliefs' that humans have. I do not think it is right
to entirely dismiss this part of strong AI.
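
To make the weather example concrete, here is a purely hypothetical
sketch with made-up numbers: a forecasting program's output
probabilities behave at least superficially like graded beliefs.

    # Hypothetical sketch: a forecasting program's output read as graded
    # 'beliefs' about tomorrow's weather. Whether this counts as belief
    # in Searle's sense is exactly what is at issue.

    forecast = {"rain": 0.80, "snow": 0.05, "sun": 0.15}   # made-up probabilities

    def strongest_belief(probs):
        outcome = max(probs, key=probs.get)
        strength = "strongly" if probs[outcome] > 0.7 else "weakly"
        return "The system %s 'believes' tomorrow will bring %s." % (strength, outcome)

    print(strongest_belief(forecast))   # -> the system strongly 'believes' ... rain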

Searle also comes up with a strange idea about AI when he is answering
the 'many mansions' reply (Berkeley).

SEARLE:
> I really have no objection to this reply save to say that it in effect
> trivialises the project of strong AI by redefining it as whatever
> artificially produces and explains cognition.

Hosier:
I am assuming that Searle is here taking the definition of strong AI
strictly, to mean purely computational systems. Also, I do not think
that explaining cognition and producing artificial intelligence is a
trivial project. Again, this all comes back to Searle's original and
only real argument: that a rule-based computational system built from
formally defined elements cannot be intelligent.

Searle concludes his paper with a number of simple questions and
philosophical answers. These lead to his final question, to which he
believes the answer is no.

SEARLE:
> "But could something think, understand, and so on solely in virtue of
> being a computer with the right sort of program? Could instantiating
> a program, the right program of course, by itself be a sufficient
> condition of understanding?"

Hosier:
To elaborate on this would duplicate what has already been written, as
the CRA seems to confirm that the answer to the above question is no.
The missing element, Searle seems to suggest, is some form of causal
system unique to the brain that gives it its inherent intelligence. He
then tries to explain why so many people have become confused over the
issue of AI and whether understanding can really be shown in a
computational system. He answers this by assuming that all AI systems
are merely simulations of the real thing, and that a simulation cannot
actually be the real thing. In the examples he gives this is obviously
the case:

SEARLE:
> The idea that computer simulations could be the real thing ought to
> have seemed suspicious in the first place because the computer isn't
> confined to simulating mental operations, by any means. No one
> supposes that computer simulations of a five-alarm fire will burn
> the neighbourhood down or that a computer simulation of a rainstorm
> will leave us all drenched.

Hosier:
However, consider a film where the actors are drenched in 'simulated
rain', or a person surfing a simulated wave. These seem more real, and
in some ways are real - but again they are simulated. I really believe
that before Searle tries to argue against the possibility of an
intelligent system being made purely from symbols and rules, he needs
to know more about how humans work. When I wince at pain, I supposedly
had an 'experience' and applied meaning to the elements of the
situation; as well as this, I had to be self-aware in order to have
this experience. But what if I winced simply because my instinctive
reaction was to wince (cause and effect, possibly from a rule), and my
'experience' was simply to record the physical environment and my own
internal state at the time of the pain? And what if the symbols were
grounded through my senses - in other words, I now know that the 'red
flame' that burnt me will burn me again if I put my hand near it? As
for being self-aware, clearly I know it was me who just felt the pain,
but I do not understand anything about how I work - or we would not be
having this debate - so how self-aware am I?

I have digressed here, and actually I agree with Searle that a
simulation of intelligence cannot actually be intelligence. But this is
certainly not to say that all work in AI involves simulating
intelligence, and it is also not correct to think that AI will not help
our understanding of how human minds work.

SEARLE:
> Whatever it is that the brain does to produce intentionality, it
> cannot consist in instantiating a program since no program, by
> itself, is sufficient for intentionality.

> "Could a machine think?" My own view is that only a machine could
> think, and indeed only very special kinds of machines, namely brains
> and machines that had the same causal powers as brains. And that is
> the main reason strong AI has had little to tell us about thinking,
> since it has nothing to tell us about machines.

Hosier:
Thus, although in principle I agree with Searle's main point that a
simple rule-based system cannot be intelligent, I do not agree that it
has nothing to do with intelligence - it could well be a piece of the
puzzle. I certainly do not agree that strong AI has had, or will have,
little to tell us about thinking.

Adam Hosier < >


