Searle's Chinese Room Argument

From: Blakemore, Philip (pjb397@ecs.soton.ac.uk)
Date: Mon Mar 13 2000 - 16:49:45 GMT


Searle, John. R. (1980) Minds, brains, and programs. Behavioral and
Brain Sciences 3 (3): 417-457
http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

We will firstly distinguish between "weak" AI (Artificial Intelligence)
and "strong" AI.

> SEARLE:
> According to weak AI, the principal value of the computer in the study
> of the mind is that it gives us a very powerful tool. For example, it
> enables us to formulate and test hypotheses in a more rigorous and
> precise fashion. But according to strong AI, the computer is not merely
> a tool in the study of the mind; rather, the appropriately programmed
> computer really is a mind, in the sense that computers given the right
> programs can be literally said to understand and have other cognitive
> states. In strong AI, because the programmed computer has cognitive
> states, the programs are not mere tools that enable us to test
> psychological explanations; rather, the programs are themselves the
> explanations.

These definitions are easy enough to understand, but the concepts are
quite mind-blowing. I fully accept weak AI: using our current
understanding of the mind to create useful tools. These smart tools may
seem to have intelligence beyond that of an ordinary program, but it is
only intelligence put there by a programmer. There is no understanding
on the part of the computer or the program.

Strong AI actually states that any appropriately programmed computer
with cognitive states has a mind. I can accept that a very smart machine
able to pass the Turing Test (say, up to T3) might have a mind, since it
would be indistinguishable from our own minds; but to say that a
toaster, a mobile phone or a digital watch has a mind is absurd.

Searle argues his position in this paper very convincingly and I almost
entirely agree with him. Neither of us has any problem with weak AI, so
only strong AI will be examined here. All references to AI from now on
will be to the strong variety.

> SEARLE:
> I will consider the work of Roger Schank and his colleagues at Yale
> (Schank & Abelson 1977)

> SEARLE:
> Very briefly, and leaving out the various details, one can describe
> Schank's program as follows: the aim of the program is to simulate the
> human ability to understand stories. It is characteristic of
> human beings' story-understanding capacity that they can answer
> questions about the story even though the information that they give was
> never explicitly stated in the story. Thus, for example, suppose you are
> given the following story:

> SEARLE:
> "A man went into a restaurant and ordered a hamburger. When the hamburger
> arrived it was burned to a crisp, and the man stormed out of the
> restaurant angrily, without paying for the hamburger or leaving a tip."

> SEARLE:
> Now, if you are asked "Did the man eat the hamburger?" you will
> presumably answer, "No, he did not." Similarly, if you are given the
> following story:

> SEARLE:
> "A man went into a restaurant and ordered a hamburger; when the hamburger
> came he was very pleased with it; and as he left the restaurant he gave
> the waitress a large tip before paying his bill," and you are asked the
> question, "Did the man eat the hamburger?" you will presumably answer,
> "Yes, he ate the hamburger."

> SEARLE:
> Partisans of strong AI claim that in this question and answer sequence
> the machine is not only simulating a human ability but also
> 1. that the machine can literally be said to understand the story and
> provide the answers to questions, and
> 2. that what the machine and its program do explains the human ability
> to understand the story and answer questions about it.

The rest of Searle's paper tries to show that both claims are false if
the machine uses nothing but pure symbol manipulation.
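
To make concrete what "pure symbol manipulation" amounts to here, the
following is a minimal sketch in Python (my own toy illustration, not
Schank's actual program, whose scripts are far richer): it "answers" the
hamburger question simply by matching surface keywords in the story
against a hard-coded restaurant script.

    # Toy stand-in for a script-based story "understander" (not Schank's
    # program): it answers "Did the man eat the hamburger?" purely by
    # matching surface keywords against a hard-coded restaurant script.

    RESTAURANT_SCRIPT = {
        # cue words found in the story -> canned answer
        ("burned", "stormed out"): "No, he did not.",
        ("pleased", "tip"):        "Yes, he ate the hamburger.",
    }

    def did_he_eat(story):
        story = story.lower()
        for cues, answer in RESTAURANT_SCRIPT.items():
            if all(cue in story for cue in cues):
                return answer
        return "Unknown."

    story = ("A man went into a restaurant and ordered a hamburger. When "
             "the hamburger arrived it was burned to a crisp, and the man "
             "stormed out angrily, without paying or leaving a tip.")
    print(did_he_eat(story))   # -> No, he did not.

Nothing in this code knows what a hamburger or a restaurant is; that is
exactly the sense in which the answers come from symbol manipulation
alone.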

> SEARLE:
> Suppose that I'm locked in a room and given a large batch of Chinese
> writing. Suppose furthermore (as is indeed the case) that I know no
> Chinese, either written or spoken, and that I'm not even confident that
> I could recognize Chinese writing as Chinese writing distinct from, say,
> Japanese writing or meaningless squiggles. To me, Chinese writing is
> just so many meaningless squiggles.

So, with no help from the outside world, a document of squiggles is
handed to him...

> SEARLE:
> Now suppose further that after this first batch of Chinese writing I am
> given a second batch of Chinese script together with a set of rules for
> correlating the second batch with the first batch. The rules are in
> English, and I understand these rules as well as any other native
> speaker of English. They enable me to correlate one set of formal
> symbols with another set of formal symbols, and all that 'formal' means
> here is that I can identify the symbols entirely by their shapes.

We must remember that we are not talking about Searle learning Chinese
by working out, from the English rules, how the first and second batches
of Chinese relate to each other. That might be possible, but then it
would be the human doing the understanding and not the machine.

> SEARLE:
> Now suppose also that I am given a third batch of Chinese symbols
> together with some instructions, again in English, that enable me to
> correlate elements of this third batch with the first two batches, and
> these rules instruct me how to give back certain Chinese symbols with
> certain sorts of shapes in response to certain sorts of shapes given me
> in the third batch. Unknown to me, the people who are giving me all of
> these symbols call the first batch "a script," they call the second
> batch a "story," and they call the third batch "questions."
> Furthermore, they call the symbols I give them back in response to the
> third batch "answers to the questions," and the set of rules in English
> that they gave me, they call "the program."

This is a very clever example that Searle has chosen. Searle himself
acts as the computer by executing the program. It bypasses the "Other
Minds" problem, whereby you can never be sure whether another person (or
machine) has a mind, because here we can simply ask Searle himself (and
he knows he has a mind). Searle does not understand any Chinese at all,
yet to the outside world, judging only from the written output, it looks
as though he is fluent in Chinese. The computer (and the program)
therefore have no understanding, since Searle brings no understanding of
his own to bear in producing the output.
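
As a minimal picture of what the rule book ("the program") asks of its
follower, consider the sketch below (my own illustration; the real rule
book would of course be far more complicated than a lookup table, but
the extra complexity adds no meaning). The Chinese strings are opaque to
the code: nothing in it depends on what any symbol means.

    # "Questions" paired with "answers" purely by shape: an exact lookup
    # on the symbol string. The rule-follower never interprets anything.

    PROGRAM = {
        "他吃了汉堡吗？": "没有，他没有吃。",
        "他付钱了吗？": "没有。",
    }

    def follow_rules(question):
        return PROGRAM.get(question, "？")

    print(follow_rules("他吃了汉堡吗？"))   # fluent-looking output, no understanding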
 
> SEARLE:
> I produce the answers by manipulating uninterpreted formal symbols. As
> far as the Chinese is concerned, I simply behave like a computer; I
> perform computational operations on formally specified elements. For the
> purposes of the Chinese, I am simply an instantiation of the computer
> program.

> SEARLE:
> Now the claims made by strong AI are that the programmed computer
> understands the stories and that the program in some sense explains
> human understanding. But we are now in a position to examine these
> claims in light of our thought experiment.

> SEARLE:
> As regards the first claim, it seems to me quite obvious in the example
> that I do not understand a word of the Chinese stories. I have inputs
> and outputs that are indistinguishable from those of the native Chinese
> speaker, and I can have any formal program you like, but I still
> understand nothing. For the same reasons, Schank's computer understands
> nothing of any stories, whether in Chinese, English, or whatever, since
> in the Chinese case the computer is me, and in cases where the computer
> is not me, the computer has nothing more than I have in the case where I
> understand nothing.

I agree with this. The computer has not learnt Chinese, understood it
(the computer does not even know it is Chinese), interpreted it or drawn
its own conclusions.

> SEARLE:
> As regards the second claim, that the program explains human
> understanding, we can see that the computer and its program do not
> provide sufficient conditions of understanding since the computer and
> the program are functioning, and there is no understanding. But does it
> even provide a necessary condition or a significant contribution to
> understanding? One of the claims made by the supporters of strong AI is
> that when I understand a story in English, what I am doing is exactly
> the same -- or perhaps more of the same -- as what I was doing in
> manipulating the Chinese symbols. It is simply more formal symbol
> manipulation that distinguishes the case in English, where I do
> understand, from the case in Chinese, where I don't. I have not
> demonstrated that this claim is false, but it would certainly appear an
> incredible claim in the example. Such plausibility as the claim has
> derives from the supposition that we can construct a program that will
> have the same inputs and outputs as native speakers, and in addition we
> assume that speakers have some level of description where they are also
> instantiations of a program.

I agree with Searle again. It does not seem to me that understanding is
merely symbol manipulation. We infer things from the text, drawing on
our own opinions, beliefs, feelings and knowledge when answering
questions.

> SEARLE:
> by the example [is that] the computer program is simply irrelevant to my
> understanding of the story."

Searle is saying that the way in which we read in information and
process it could be done by a computer program, much like reading a file
from a computer disc: the syntax is checked at that stage, and possibly
some shallow level of semantics. But the understanding itself comes from
a much higher level in the brain.

> SEARLE:
> whatever purely formal principles you put into the computer, they will
> not be sufficient for understanding, since a human will be able to
> follow the formal principles without understanding anything.
> No reason whatever has been offered to suppose that such principles are
> necessary or even contributory, since no reason has been given to
> suppose that when I understand English I am operating with any formal
> program at all.

This puts my argument more formally. Whatever language is used (or
indeed anything else that requires understanding), the Chinese Room
argument applies: a human could carry out the information processing,
manipulating the appropriate symbols to produce the output, without
understanding what the symbols were, what each one meant, or what the
output generated by the program meant.

Searle agrees that understanding normally admits of more than the two
states "understood" and "not understood". But there are clear cases
where understanding is plainly present and others where it is plainly
absent, and these are all that are needed to make the point. For
example, I understand English, whereas a car or an adding machine
understands nothing.

Searle goes on to answer the standard replies which arise from his
argument. I find Searle's answers to these replies incredibly convincing
and I agree with him on the whole.

> SEARLE:
> I. The systems reply (Berkeley). "While it is true that the individual
> person who is locked in the room does not understand the story, the fact
> is that he is merely part of a whole system, and the system does
> understand the story. The person has a large ledger in front of him in
> which are written the rules, he has a lot of scratch paper and pencils
> for doing calculations, he has 'data banks' of sets of Chinese symbols.
> Now, understanding is not being ascribed to the mere individual; rather
> it is being ascribed to this whole system of which he is a part."

> SEARLE:
> My response to the systems theory is quite simple: let the individual
> internalize all of these elements of the system. He memorizes the rules
> in the ledger and the data banks of Chinese symbols, and he does all the
> calculations in his head. The individual then incorporates the entire
> system. There isn't anything at all to the system that he does not
> encompass. We can even get rid of the room and suppose he works
> outdoors. All the same, he understands nothing of the Chinese, and a
> fortiori neither does the system, because there isn't anything in the
> system that isn't in him. If he doesn't understand, then there is no way
> the system could understand because the system is just a part of him.

Some people would reply that the information (the program) would be too
big to memorize, or that the person would make errors. But this is not
the point of the argument. The point is that in principle it is
possible, no matter how big the program, for one person to encompass it
and thus become the whole system. We can then ask that person (in their
native language) whether they understand what they are doing. The other
response is that there are two minds: the human mind which executes the
program, and another which is created as a result of the program running
on the computer. Are there really two minds? I don't think so, but I
cannot prove it. The computer (the human) is aware of his own mind, but
does he believe there is another mind floating around (in his head,
since the system is now entirely in his head) understanding a program
that he himself may not even understand?

> SEARLE:
> The idea is that while a person doesn't understand Chinese, somehow the
> conjunction of that person and bits of paper might understand Chinese.
 
It does seem absurd, doesn't it?

> SEARLE:
> The whole point of the original example was to argue that such symbol
> manipulation by itself couldn't be sufficient for understanding Chinese
> in any literal sense because the man could write "squoggle squoggle"
> after "squiggle squiggle" without understanding anything in Chinese. And
> it doesn't meet that argument to postulate subsystems within the man,
> because the subsystems are no better off than the man was in the first
> place; they still don't have anything even remotely like what the
> English-speaking man (or subsystem) has. Indeed, in the case as
> described, the Chinese subsystem is simply a part of the English
> subsystem, a part that engages in meaningless symbol manipulation
> according to rules in English.

Now I know where Stevan Harnad got "Squiggle Squiggle, Squoggle
Squoggle" from!

It is then said that this "other mind" created by the program and the
computer does actually understand Chinese, whilst the human computer does
not.

> SEARLE:
> The only motivation for saying there must be a subsystem in me that
> understands Chinese is that I have a program and I can pass the Turing
> test; I can fool native Chinese speakers. But precisely one of the
> points at issue is the adequacy of the Turing test.

I agree with Searle's point. If this program can fool Chinese speakers
into believing that an English person can read and write Chinese when we
know he cannot, then the computer and the program pass the Turing Test,
but only the pen-pal version of it (T2). In this version the system is
never seen by the outside world, hence the name: it receives input (say,
a letter in Chinese), does its information processing, and returns an
output document to the writer of the original letter. If the author does
not realise that a computer program generated the reply, the program
passes T2.

> SEARLE:
> The example shows that there could be two "systems," both of which pass
> the Turing test, but only one of which understands; and it is no
> argument against this point to say that since they both pass the Turing
> test they must both understand, since this claim fails to meet the
> argument that the system in me that understands English has a great deal
> more than the system that merely processes Chinese. In short, the
> systems reply simply begs the question by insisting without argument
> that the system must understand Chinese.

It does seem that the Turing Test fails to detect the difference between
the system that understands English and the one that merely processes
Chinese, since both pass the test. We have to remember, though, that
this is only T2, and there are other, more demanding Turing Tests to
consider. Rising to the level of T3, a robot capable of interacting with
the outside world as well as having all the functionality of T2, might
produce a different verdict.

> McCARTHY:
> Machines as simple as thermostats can be said to have beliefs, and
> having beliefs seems to be a characteristic of most machines capable of
> problem solving performance.

> SEARLE:
> Anyone who thinks strong AI has a chance as a theory of the mind ought
> to ponder the implications of that remark. We are asked to accept it as
> a discovery of strong AI that the hunk of metal on the wall that we use
> to regulate the temperature has beliefs in exactly the same sense that
> we, our spouses, and our children have beliefs, and furthermore that
> "most" of the other machines in the room -- telephone, tape recorder,
> adding machine, electric light switch, -- also have beliefs in this
> literal sense.

I agree with Searle that this is a silly remark. We should only consider
a machine to be intelligent when it passes at least T2 (perhaps the bar
should be even higher).

> SEARLE:
> Think hard for one minute about what would be necessary to establish
> that that hunk of metal on the wall over there had real beliefs, beliefs
> with direction of fit, propositional content, and conditions of
> satisfaction; beliefs that had the possibility of being strong beliefs
> or weak beliefs; nervous, anxious, or secure beliefs; dogmatic,
> rational, or superstitious beliefs; blind faiths or hesitant
> cogitations; any kind of beliefs. The thermostat is not a candidate.
> Neither is stomach, liver, adding machine, or telephone. However, since
> we are taking the idea seriously, notice that its truth would be fatal
> to strong AI's claim to be a science of the mind. For now the mind is
> everywhere. What we wanted to know is what distinguishes the mind from
> thermostats and livers. And if McCarthy were right, strong AI wouldn't
> have a hope of telling us that.

If this claim were true and the thermostat had a mind, why would we
bother trying to understand our own, more complex minds? We would
already have succeeded in creating a mind many years ago. The reason is
simple: we want something smarter, something with more intelligence (or
indeed with any at all!).

> SEARLE:
> II. The Robot Reply (Yale). "Suppose we wrote a different kind of
> program from Schank's program. Suppose we put a computer inside a robot,
> and this computer would not just take in formal symbols as input and
> give out formal symbols as output, but rather would actually operate the
> robot in such a way that the robot does something very much like
> perceiving, walking, moving about, hammering nails, eating, drinking --
> anything you like. The robot would, for example have a television camera
> attached to it that enabled it to 'see,' it would have arms and legs
> that enabled it to 'act,' and all of this would be controlled by its
> computer 'brain.' Such a robot would, unlike Schank's computer, have
> genuine understanding and other mental states."

> SEARLE:
> The first thing to notice about the robot reply is that it tacitly
> concedes that cognition is not solely a matter of formal symbol
> manipulation, since this reply adds a set of causal relation with the
> outside world [cf. Fodor: "Methodological Solipsism" BBS 3(1) 1980].

I find it fascinating that the supporters of strong AI make this
mistake.

> SEARLE:
> But the answer to the robot reply is that the addition of such
> "perceptual" and "motor" capacities adds nothing by way of
> understanding, in particular, or intentionality, in general, to Schank's
> original program.

This is easy to see, as the robot still has to process the information
in its "brain". That brain could be replaced by Searle himself, again
showing that if the robot were communicating in Chinese, Searle would
still not understand any of it.

Searle puts it a better way!
> SEARLE:
> To see this, notice that the same thought experiment applies to the
> robot case. Suppose that instead of the computer inside the robot, you
> put me inside the room and, as in the original Chinese case, you give me
> more Chinese symbols with more instructions in English for matching
> Chinese symbols to Chinese symbols and feeding back Chinese symbols to
> the outside. Suppose, unknown to me, some of the Chinese symbols that
> come to me come from a television camera attached to the robot and other
> Chinese symbols that I am giving out serve to make the motors inside the
> robot move the robot's legs or arms. It is important to emphasize that
> all I am doing is manipulating formal symbols: I know none of these
> other facts. I am receiving "information" from the robot's "perceptual"
> apparatus, and I am giving out "instructions" to its motor apparatus
> without knowing either of these facts. I am the robot's homunculus, but
> unlike the traditional homunculus, I don't know what's going on. I don't
> understand anything except the rules for symbol manipulation. Now in
> this case I want to say that the robot has no intentional states at all;
> it is simply moving about as a result of its electrical wiring and its
> program. And furthermore, by instantiating the program I have no
> intentional states of the relevant type. All I do is follow formal
> instructions about manipulating formal symbols.
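
In the same toy terms as before (again my own illustration, not
Searle's), wiring the rule table to a camera and to motors changes
nothing for the rule-follower: the sensor readings arrive as more opaque
symbols, and the output symbols merely happen to drive motors.

    # The robot reply in miniature: camera input is encoded as opaque
    # symbols, the same shape-matching rules are applied, and the output
    # symbols happen to name motor commands. follow_rules knows none of
    # this.

    RULES = {"SQUIGGLE-7": "SQUOGGLE-3", "SQUIGGLE-9": "SQUOGGLE-1"}

    def follow_rules(symbol):
        return RULES.get(symbol, "SQUOGGLE-0")

    def camera_to_symbol(frame):
        # hypothetical encoder: bright frames become "SQUIGGLE-7"
        return "SQUIGGLE-7" if sum(frame) > 10 else "SQUIGGLE-9"

    def symbol_to_motor(symbol):
        # hypothetical decoder from output symbols to motor commands
        return {"SQUOGGLE-3": "raise_arm",
                "SQUOGGLE-1": "step_forward"}.get(symbol, "halt")

    frame = [3, 4, 5]                           # stand-in for camera pixels
    print(symbol_to_motor(follow_rules(camera_to_symbol(frame))))  # raise_arm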

> SEARLE:
> III. The brain simulator reply (Berkeley and M.I.T.). "Suppose we design
> a program that doesn't represent information that we have about the
> world, such as the information in Schank's scripts, but simulates the
> actual sequence of neuron firings at the synapses of the brain of a
> native Chinese speaker when he understands stories in Chinese and gives
> answers to them.

On the face of it, this seems a very good reply. The thought behind it
is that you need to mimic the exact behaviour of the brain in order to
have understanding, so if the program copied that behaviour, it might
understand.

However, the neuron firings would have to be set up in exactly the same
way: each simulated neuron would have to be connected to the same
neurons as in the real brain. And in any case a simulation is not the
real thing. Back to the virtual furnace - it doesn't get hot!
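
To see what "simulating the formal structure of the sequence of neuron
firings" might look like, here is a minimal leaky integrate-and-fire
sketch (my own toy example, nothing like a real brain model). The
pattern of firings is reproduced, but the "firings" are just numbers
being updated - which is the sense in which the simulation, like the
virtual furnace, produces no heat.

    # Toy "brain simulator": a leaky integrate-and-fire update that
    # reproduces a formal pattern of firings. Nothing physical happens;
    # the firings are just numbers changing in memory.

    def simulate(weights, external_input, threshold=1.0, leak=0.9, steps=10):
        n = len(weights)
        potential = [0.0] * n
        history = []
        for t in range(steps):
            fired = [1 if potential[i] >= threshold else 0 for i in range(n)]
            history.append(fired)
            for i in range(n):
                drive = external_input(t, i) + \
                        sum(weights[i][j] * fired[j] for j in range(n))
                potential[i] = 0.0 if fired[i] else leak * potential[i] + drive
        return history

    # Two mutually exciting units, with a constant drive to unit 0.
    w = [[0.0, 0.6],
         [0.6, 0.0]]
    print(simulate(w, lambda t, i: 0.3 if i == 0 else 0.0))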

> SEARLE:
> The machine takes in Chinese stories and questions about them as input,
> it simulates the formal structure of actual Chinese brains in processing
> these stories, and it gives out Chinese answers as outputs. We can even
> imagine that the machine operates, not with a single serial program, but
> with a whole set of programs operating in parallel, in the manner that
> actual human brains presumably operate when they process natural
> language. Now surely in such a case we would have to say that the
> machine understood the stories; and if we refuse to say that, wouldn't
> we also have to deny that native Chinese speakers understood the
> stories? At the level of the synapses, what would or could be different
> about the program of the computer and the program of the Chinese brain?"

> SEARLE:
> I thought the whole idea of strong AI is that we don't need to know how
> the brain works to know how the mind works. The basic hypothesis, or so
> I had supposed, was that there is a level of mental operations
> consisting of computational processes over formal elements that
> constitute the essence of the mental and can be realized in all sorts of
> different brain processes, in the same way that any computer program can
> be realized in different computer hardwares: on the assumptions of
> strong AI, the mind is to the brain as the program is to the hardware,
> and thus we can understand the mind without doing neurophysiology. If we
> had to know how the brain worked to do AI, we wouldn't bother with AI.

I again agree with Searle's aside, although I think AI would still
exist. The fact that computer programs can be ported to other machines
shows the independence of the program from the computer. By the same
analogy the mind would be separate from the brain; but is it? We do not
know.

> SEARLE:
> However, even getting this close to the operation of the brain is still
> not sufficient to produce understanding. To see this, imagine that
> instead of a monolingual man in a room shuffling symbols we have the
> man operate an elaborate set of water pipes with valves connecting them.
> When the man receives the Chinese symbols, he looks up in the program,
> written in English, which valves he has to turn on and off. Each water
> connection corresponds to a synapse in the Chinese brain, and the whole
> system is rigged up so that after doing all the right firings, that is
> after turning on all the right faucets, the Chinese answers pop out at
> the output end of the series of pipes.

> SEARLE:
> Now where is the understanding in this system? It takes Chinese as
> input, it simulates the formal structure of the synapses of the Chinese
> brain, and it gives Chinese as output. But the man certainly doesn't
> understand Chinese, and neither do the water pipes, and if we are
> tempted to adopt what I think is the absurd view that somehow the
> conjunction of man and water pipes understands, remember that in
> principle the man can internalize the formal structure of the water
> pipes and do all the "neuron firings" in his imagination.

The simulation in the computer is still not the real neurons firing in the
brain.

> SEARLE:
> The problem with the brain simulator is that it is simulating the wrong
> things about the brain. As long as it simulates only the formal
> structure of the sequence of neuron firings at the synapses, it won't
> have simulated what matters about the brain, namely its causal
> properties, its ability to produce intentional states. And that the
> formal properties are not sufficient for the causal properties is shown
> by the water pipe example: we can have all the formal properties carved
> off from the relevant neurobiological causal properties.

> SEARLE:
> IV. Imagine a robot with a brain-shaped computer lodged in its cranial
> cavity, imagine the computer programmed with all the synapses of a human
> brain, imagine the whole behavior of the robot is indistinguishable from
> human behavior, and now think of the whole thing as a unified system and
> not just as a computer with inputs and outputs. Surely in such a case we
> would have to ascribe intentionality to the system."

This does look like a good case for ascribing intentionality. But it
still blurs the main issue: behind all the clever robotics, can such a
machine have mental states like our own?

> SEARLE:
> If the robot looks and behaves sufficiently like us, then we would
> suppose, until proven otherwise, that it must have mental states like
> ours that cause and are expressed by its behavior and it must have an
> inner mechanism capable of producing such mental states. If we knew
> independently how to account for its behavior without such assumptions
> we would not attribute intentionality to it especially if we knew it had
> a formal program. And this is precisely the point of my earlier reply to
> objection II.

This is exactly the same argument as in II. We would naturally give the
robot the benefit of the doubt and say it behaves in the same way we do.
But if we were shown that the robot was driven by a formal computer
program, we would know from Searle's original argument that it has no
understanding at all.

There are a couple of other, more trivial replies addressed in Searle's
paper (the Other Minds reply and the Many Mansions reply), but I will
not comment on them.

> SEARLE:
> Granted that in my original example I understand the English and I do
> not understand the Chinese, and granted therefore that the machine
> doesn't understand either English or Chinese, still there must be
> something about me that makes it the case that I understand English and
> a corresponding something lacking in me that makes it the case that I
> fail to understand Chinese. Now why couldn't we give those somethings,
> whatever they are, to a machine?

This is interesting. If we were able to give knowledge of Chinese
instantly to a computer, we would not stop there. We could program in
all languages and mathematical theories, and also people's experiences
and feelings - if you accept that all knowledge and experience is stored
formally in the brain, it is just a matter of extracting the relevant
information. You could store a person's entire mind on a machine and
then you really could live forever!

> SEARLE:
> Part of the point of the present argument is that only something that
> had those causal powers could have that intentionality.

> SEARLE:
> That is an empirical question, rather like the question whether
> photosynthesis can be done by something with a chemistry different from
> that of chlorophyll.

Very interesting. As with photosynthesis, we have not found any other
chemical process that does the same thing. We could simulate
photosynthesis on a computer without much trouble, but the simulation
would not produce anything physical. Unless we could give a computer the
causal, physical properties our brains have when learning Chinese,
rather than just a formal computer program, the computer will understand
nothing.

Searle sums this up by saying:
> SEARLE:
> What matters about brain operations is not the formal shadow cast by the
> sequence of synapses but rather the actual properties of the sequences.

Searle begins to conclude his paper by stating "some of the general
philosophical points implicit in the argument".

> SEARLE:
> "Could a machine think?"

> SEARLE:
> The answer is, obviously, yes. We are precisely such machines.

Only if you define humans as machines. Although I cannot know for
certain, I do not think I am running a computer program, as I have
physical states (actions and their consequences), understanding and
thought. I would not consider myself a machine.

> SEARLE:
> "Yes, but could an artifact, a man-made machine think?"

> SEARLE:
> Assuming it is possible to produce artificially a machine with a nervous
> system, neurons with axons and dendrites, and all the rest of it,
> sufficiently like ours, again the answer to the question seems to be
> obviously, yes. If you can exactly duplicate the causes, you could
> duplicate the effects. And indeed it might be possible to produce
> consciousness, intentionality, and all the rest of it using some other
> sorts of chemical principles than those that human beings use.

I totally agree. Copy a human brain completely and exactly (physically)
and it should behave as we do. However, using "other sorts of chemical
principles" does not seem likely to me: the alternative chemical process
would have to produce EXACTLY the same effects, and I do not think you
would get exactly the same results using a different material from the
original organic matter.

> SEARLE:
> It is, as I said, an empirical question. "OK, but could a digital
> computer think?"

> SEARLE:
> If by "digital computer" we mean anything at all that has a level of
> description where it can correctly be described as the instantiation of
> a computer program, then again the answer is, of course, yes, since we
> are the instantiations of any number of computer programs, and we can
> think.

Are we made up from computer programs?

> SEARLE:
> "But could something think, understand, and so on solely in virtue of
> being a computer with the right sort of program? Could instantiating a
> program, the right program of course, by itself be a sufficient
> condition of understanding?"

> SEARLE:
> This I think is the right question to ask, though it is usually confused
> with one or more of the earlier questions, and the answer to it is no.

Given the Chinese Room argument, it does seem that this is not possible.
Any formal program can always be executed by a human who has no
understanding of what he is doing, and we know this because we can ask
him!

> SEARLE:
> the distinction between the program and the realization -- proves fatal
> to the claim that simulation could be duplication.

I agree. Searle goes on to show that:
> SEARLE:
> The equation, "mind is to brain as program is to hardware" breaks down
> at several points...

> SEARLE:
> Weizenbaum (1976, Ch. 2), for example, shows in detail how to construct
> a computer using a roll of toilet paper and a pile of small stones.
> None acquire any understanding of Chinese.
> Stones, toilet paper, wind, and water pipes are the wrong kind of stuff
> to have intentionality in the first place -- only something that has the
> same causal powers as brains can have intentionality -- and though the
> English speaker has the right kind of stuff for intentionality you can
> easily see that he doesn't get any extra intentionality by memorizing
> the program, since memorizing it won't teach him Chinese.

> SEARLE:
> The program is purely formal, but the intentional states are not
> in that way formal. They are defined in terms of their content, not
> their form. The belief that it is raining, for example, is not defined
> as a certain formal shape, but as a certain mental content with
> conditions of satisfaction.

> SEARLE:
> Mental states and events are literally a product of the operation of the
> brain, but the program is not in that way a product of the computer.

All these statements show that formal computer programs are not enough to
have understanding.
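
The toilet-paper-and-stones point is just the familiar fact that a
formal program is indifferent to its physical realization. A small
sketch of my own (nothing to do with Weizenbaum's actual construction):
the same "program", here a binary increment, can be run against any
store that supports reading and writing a cell, and nothing about the
choice of store adds any understanding.

    # One formal "program" (binary increment, least-significant cell
    # first) run against two different realizations of its memory cells.

    def increment(read, write, length):
        for i in range(length):
            if read(i) == 0:
                write(i, 1)
                return
            write(i, 0)        # carry and continue

    # Realization 1: cells are entries in a list ("RAM").
    ram = [1, 1, 0, 0]                   # binary 3, least-significant first
    increment(lambda i: ram[i], lambda i, v: ram.__setitem__(i, v), len(ram))
    print(ram)                           # [0, 0, 1, 0], i.e. 4

    # Realization 2: cells are "piles of stones" (pile sizes in a dict).
    stones = {0: 1, 1: 1, 2: 0, 3: 0}
    increment(lambda i: stones[i], lambda i, v: stones.__setitem__(i, v), 4)
    print(stones)                        # {0: 0, 1: 0, 2: 1, 3: 0}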

Searle finishes the paper by explaining why people have believed that AI
could reproduce, and thereby explain, mental phenomena.

Firstly, computers can only simulate, not duplicate.

Secondly, because humans do information processing, people find it
natural to think of machines as doing the same thing. However, all the
machines actually do is manipulate symbols.

Thirdly, strong AI only makes sense given the dualistic assumption that,
where the mind is concerned, the brain doesn't matter. In strong AI (and
in functionalism, as well) what matters are programs, and programs are
independent of their realization in machines.

I agree with the argument throughout this paper showing that formal
programs in no way give a machine understanding.

Blakemore, Philip <pjb397@ecs.soton.ac.uk>


