I've got brain ache thinking about this one. In many ways I follow and
agree with what Searle says. However, just as he accuses Berkeley of
moving the goalposts, Searle also varies his point of attack. I agree
that computers, simply by processing symbols, do not exhibit real
understanding or intentionality, as these symbols have zero meaning to
them. However, he also disputes that AI work provides any significant
contribution to understanding.
On this point, I tentatively disagree. Searle claims that putting a
robot in the room, with access to images of the outside world, with
intentionality in the form of a man, would still not enable the man to
understand Chinese beyond manipulating symbols. However, to use his
example, if the man read a squiggle and then saw it (through robot
eyes) displayed over a hamburger restaurant, then read a squoggle and
saw that on a waiter's uniform, he would start to 'understand' the
symbols, so that they had real meaning. The 'meaning', as opposed to
the symbol, arises from the fact that in biological organisms certain
things are weighted (by genes, instinct) so that they matter, e.g.
food. We also have a limbic system which in effect enables
satisfaction, i.e. it matters to us that we have the things that matter.
The most obvious difference I see between brains and computers is that
these factors are extensively interconnected with our information
processing abilities, so that symbols have meaning for us. If this is
so, then it would be unfair to claim that computational models of
information processing add nothing to our understanding of the mind,
although it would be reasonable to claim that they leave plenty out.
Searle claims that intentionality can only be a biological phenomenon.
This is where I got brain ache, trying to decide whether we could
programme in these extra factors that make things matter to us.
Couldn't a computer that was programmed to 'weight' symbols with a
feedback system - no, forget it, I agree with Searle, because the
computer just wouldn't care if it got it right. Aha, yes it would if
you programmed it to resist being unplugged - no, that won't work
either. Did you know that Data, the robot, was considering this
problem in Star Trek only the other day?
A last question. On p. 422, top left, Searle says:
JS> What matters about brain operations is not the formal shadow cast
JS> by the sequences of synapses but rather the actual properties of the
JS> sequences. All the arguments for the strong version of AI that I have
JS> seen insist on drawing an outline around the shadows cast by cognition
JS> and then claiming that the shadows are the real thing'
What does he mean?
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:57 GMT