Re: Symbol Grounding Problem

From: Masters, Kate (cmm93py@soton.ac.uk)
Date: Sun Feb 04 1996 - 18:42:18 GMT


Hi, Kate here. Stevan, it is very hard to write, in a couple of
screenfuls, what you have already written so eloquently in a couple
of bookfuls! Nevertheless, whilst desperately hoping to avoid
becoming a plagiarist - here is my attempt:

The Chinese Room Argument

The Chinese Room Argument was proposed by John Searle (1980) as a
way of refuting the claims made by symbolic AI: that is, to show
that a system's passing a T2 (Harnad) Turing Test does not
necessarily mean that the system has intentionality or gains
"intrinsic meaning" from the symbols it is processing.

Searle's argument asks the reader to imagine Searle locked in a
room, into and out of which only symbolic messages (Chinese symbols)
can pass. Searle has no knowledge of written or spoken Chinese. He
is given input Chinese symbols and sets of English instructions.
These instructions enable him to relate the symbols to each other
and produce a set of output symbols.

These actions are intended as an analogy between Searle and a
computer. In this case, those who pass the symbols into Searle's
room and receive the sets of symbols which he passes out are "the
programmers". The sets of Chinese symbols are, respectively, "a
script", "a story" and "a question". The English instructions are
"the program", and the set of symbols that Searle produces in
response to the third set of symbols and instructions is "the answer
to the question". Searle goes on to argue that, whilst he could
develop skills which enabled him to produce "answers" interpretable
as those of a native Chinese speaker, the symbols he was
manipulating, according to the rules he was given, would still have
absolutely no meaning to him. He would still not understand Chinese.
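
(A toy way to picture the purely formal character of what Searle is
doing, sketched in Python; the rule table and symbol strings below
are invented for illustration, and the real thought experiment
allows arbitrarily elaborate rules rather than a simple lookup
table:)

    # A caricature of the room: rules pair input symbol strings with
    # output symbol strings purely by their shapes. Nothing in the
    # program consults, or needs to consult, what any symbol means.
    RULES = {
        "你好吗": "我很好",          # invented rule: this shape in, that shape out
        "你叫什么名字": "我叫王明",  # likewise invented
    }

    def room(input_symbols: str) -> str:
        """Apply the rule book; give a stock reply to unmatched input."""
        return RULES.get(input_symbols, "对不起")

    print(room("你好吗"))  # looks like an answer, but no understanding is involved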

Searle's argument was directed against the computational approach
put forward by Pylyshyn: the right symbol system, one which performs
"cognitively", can run on any hardware and still perform this way;
the implementation of the system is irrelevant. In other words, the
system has "implementation independence".

A T2 computer could pass the Turing Test by exchanging "penfriend"
style messages which were indistinguishable from those one would
expect from a real penfriend. A T3 robot is indistinguishable from a
person at every level of performance (except for brain function).
The problem Searle's argument runs into here is sensorimotor
transduction: a T3 robot would have to use transduction in order to
turn what its senses pick up into signals it can use throughout its
body. Although Searle can act as the symbol-manipulating part of the
robot, he cannot be the whole robot; hence he is only part of the
system, and the system as a whole may understand the symbols even
though Searle does not. Sensorimotor transduction is not
implementation independent.

The Symbol Grounding Problem

Before we can tackle the problem of how symbols are grounded within
a system, it is necessary to define both symbol systems and
grounding.

A symbol system is a set of physical tokens which are manipulated
according to rules that are themselves made of such tokens. This
manipulation is purely syntactic, as is demonstrated by the
consideration of mathematical truths. The whole system, the symbols
and the rules, is semantically interpretable, and this semantic
interpretation is the 'grounding' point.
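
(To make "purely syntactic yet semantically interpretable" concrete,
here is a small sketch of my own, not from the texts: token-rewriting
rules that shuffle shapes alone, which an outside interpreter can
nonetheless read as addition in successor notation:)

    # Tokens are "0" and "s". The rewriting below moves "s" tokens from
    # one string to the other purely by shape; it is we, outside the
    # system, who read "ss0" as 2, "s0" as 1, and the result as 2 + 1.
    def add(x: str, y: str) -> str:
        """Rewrite rule applied repeatedly: s(x), y  ->  x, s(y)."""
        while x != "0":
            x = x[1:]       # strip one leading "s" from x ...
            y = "s" + y     # ... and prepend it to y (pure token shuffling)
        return y

    print(add("ss0", "s0"))  # "sss0": interpretable (by us) as 2 + 1 = 3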

Grounding is when the connections between symbols and what they are
about are direct, part of the system itself rather than supplied by
an outside interpreter. A linguistic proposition is intrinsically
intentional to the proposer. If the proposer connects a symbol to
that which the symbol is about, the symbol is grounded. In order for
a symbol system to have an "understanding", the symbols have to
represent something; the system is therefore parasitic upon those
symbols which are grounded.

If we were to try to learn Chinese from a Chinese/Chinese dictionary
we would only ever be stuck in a merry-go-round-like cycle, because
each explanation would lead to another explanation, all of which
were in Chinese. Similarly, as Searle showed in his "Chinese Room",
while it is possible to reproduce answers to Chinese questions
identical to the answers a Chinese speaker would give, it is not
possible to gain meaning from this symbol manipulation. An English
speaker cannot learn another language without grounding it in
English. For example, we start by saying "Ah, so "Bonjour" means
"Hello"" when we are learning French. If we were a young child
learning French as a first language, "bonjour" would be grounded in
terms of a phonetic action made at the time of seeing another person
(particularly in the morning).
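
(The merry-go-round can be pictured with a toy circular dictionary,
sketched below; the entries are invented English stand-ins, but the
point is the same for a Chinese/Chinese dictionary: chasing
definitions only ever yields more entries from the same dictionary,
never anything outside the symbol system:)

    # Every definition is itself made only of words defined by further
    # words in the same dictionary, so lookups never bottom out.
    DICTIONARY = {
        "horse": ["equine", "animal"],
        "equine": ["horse"],
        "animal": ["creature"],
        "creature": ["animal"],
    }

    def chase(word: str, steps: int = 5) -> None:
        """Follow definitions outward, printing only ever more words."""
        frontier = {word}
        for _ in range(steps):
            frontier = {w for entry in frontier for w in DICTIONARY[entry]}
            print(sorted(frontier))

    chase("horse")  # words lead only to other words, never to the world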

Symbol grounding is a problem from two different directions: (1)
psychologists would like to know how cognition works, and (2)
computation specialists would like to use symbolic AI in order to
create an intelligent system that can convincingly mimic
consciousness. The problem in both cases is "Where is the meaning of
the symbols grounded?".

Possible Solutions

Harnad, S. (1990) proposes what he calls a "hybrid" solution to the
symbol grounding problem. It relies upon symbols being grounded,
from the bottom up, in two different kinds of representations. The
first of these are iconic representations, which are analogues of
the proximal sensory projections of distal objects and events. The
second are categorical representations, which pick out the invariant
features of the iconic representations and use them in a process of
discrimination and identification which leads to absolute
judgements. These two forms of representation are both nonsymbolic,
and neither gives meaning in itself. The process of categorisation
"names" categories. These names are then symbols which enable the
system to act upon the represented objects and events, and thus
those objects and events can be said, according to the earlier
definition, to be grounded in the world.
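
(A minimal sketch, in Python, of how such a bottom-up pipeline might
be wired together; the toy one-dimensional "projection", the single
invariant feature and the category names are all my own inventions
for illustration, not Harnad's actual model:)

    # Toy bottom-up grounding pipeline in the spirit of the hybrid proposal:
    # sensory projection -> iconic representation (analog copy) ->
    # categorical representation (invariant feature) -> symbolic name.
    def iconic(projection: list[float]) -> list[float]:
        """Iconic representation: an analogue of the proximal projection."""
        return list(projection)                  # a faithful analog copy

    def categorical(icon: list[float]) -> bool:
        """Categorical representation: one invariant feature (here, mean
        intensity above a threshold), yielding an absolute judgement."""
        return sum(icon) / len(icon) > 0.5

    def name(is_member: bool) -> str:
        """The category name: a symbol tied to the representations below it."""
        return "BRIGHT" if is_member else "DARK"

    projection = [0.9, 0.8, 0.7]                  # a toy proximal projection
    print(name(categorical(iconic(projection))))  # -> "BRIGHT"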


