Re: Harnad (1) on Symbol Grounding Problem

From: Shaw, Leo (las197@ecs.soton.ac.uk)
Date: Wed Mar 22 2000 - 13:38:43 GMT


>Butterworth:
>
>Harnad doesn't actually make clear here what he would consider a 'lifesize'
>task.

> HARNAD:
> Moreover, there has been some
> disagreement as to whether or not connectionism itself is symbolic. We will
> adopt the position here that it is not, because connectionist networks fail to
> meet several of the criteria for being symbol systems, as Fodor & Pylyshyn
> (1988) have argued recently. In particular, although, like everything else,
> their behavior and internal states can be given isolated semantic
> interpretations, nets fail to meet the compositeness (7) and systematicity (8)
> criteria listed earlier: The patterns of interconnections do not decompose,
> combine and recombine according to a formal syntax that can be given a
> systematic semantic interpretation.[5] Instead, nets seem to do what they do
> non symbolically.

>Butterworth:
>I think that I agree with this view intuitively, as well as because of
>the logical reasons given here. The more 'natural' process of training
>a network, and the evolution of weightings which can even produce
>time-dependent outputs where required, seems fundamentally different to
>formal symbol systems. The question really comes up because neural nets
>are often simulated by symbol systems (ie. your average digital
>computer) instead of being fully implemented, but Harnad explains this
>quite well in his footnote...

> HARNAD:
> [5.] There is some misunderstanding of this point because it is often
> conflated with a mere implementational issue: Connectionist networks can be
> simulated using symbol systems, and symbol systems can be implemented using a
> connectionist architecture, but that is independent of the question of what
> each can do qua symbol system or connectionist network, respectively. By way
> of analogy, silicon can be used to build a computer, and a computer can
> simulate the properties of silicon, but the functional properties of silicon
> are not those of computation, and the functional properties of computation are
> not those of silicon.

This sounds like a good argument in favour of the 'hybrid
system': connectionist systems are criticised for not being
symbolic in nature, and symbol manipulation seems close to the
way we think, yet some of their properties are clearly
desirable - the ability to learn to identify objects and
extract features - and they can still perform symbol
manipulation by implementing symbol systems. Would it be fair
to say that symbol manipulation is another layer that runs on
the connectionist 'hardware'? Perhaps our ability to reason
stems from a process like symbol manipulation, one that
develops over time. When we're young we have a set of basic
reactions to objects in the world, and our behavior changes as
our 'neural networks' evolve through interaction with those
objects. At another level, though, we learn to reason about
things and form abstractions, which is only possible once we
have something on which to ground 'symbols'.
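
To make the 'layer' idea concrete, here is a toy sketch (my own
invention, not anything from the paper - the classifier is just
a stand-in for a trained net, and all the names and numbers are
made up). A 'connectionist' stage maps sensory feature vectors
onto category names, and a symbolic stage then treats those
names as tokens it can string into propositions:

# Toy sketch of a "symbolic layer on connectionist hardware".
# The classifier is a stand-in for a trained neural net: it just
# picks the nearest stored prototype. Names/numbers are invented.

def classify(features, prototypes):
    """Return the category name whose prototype is nearest."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes,
               key=lambda name: distance(features, prototypes[name]))

# "Grounded" category names come out of the classifier...
prototypes = {"horse": [1.0, 0.1], "snake": [0.0, 0.9]}
name = classify([0.9, 0.2], prototypes)   # -> "horse"

# ...and the symbolic stage manipulates those names as tokens,
# composing them into propositions it can store and query.
facts = set()
facts.add((name, "is-a", "animal"))
print(("horse", "is-a", "animal") in facts)   # True

The only point of the sketch is that the symbolic stage never
touches the sensory vectors directly - it inherits its connection
to them through the names the classifier hands up.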

The discussion moves on to the Hybrid system:

>Butterworth:

>To justify this system and to give some of its aims, Harnad gives
>a model of human 'behavioral capacity'...

> HARNAD:
> They can (1) discriminate, (2) manipulate, (3) identify and (4) describe
> the objects, events and states of affairs in the world they live in, and they
> can also (5) "produce descriptions" and (6) "respond to descriptions" of those
> objects, events and states of affairs...
> To discriminate is to be able to judge whether two inputs are the same or
> different, and, if different, how different they are.
> To identify is to be able to assign a unique (usually arbitrary) response - a
> "name" - to a class of inputs, treating them all as equivalent or invariant in
> some respect.

>Butterworth:
>Harnad argues that to be able to discriminate and
>identify objects and classes, we need an internal representation,
>and proposes 'iconic representations', analog transforms of
>received sensory input...

I'd like to comment on one point about icons:

> HARNAD:
> According to the model being proposed here, our ability to
> discriminate inputs depends on our forming 'iconic
> representations' of them (Harnad 1987b). These are internal analog
> transforms of the projections of distal objects on our sensory
> surfaces...

Although I can see the justification for icons, I'm a bit
confused about the requirement that they be 'formed' before we
can make discriminations. If memory serves, baby chimps have an
innate fear of snakes, even without past experience or an adult
present to incite nervousness. That would seem to imply that
the brain is already capable of discriminating certain shapes,
and comes with pre-defined behavior towards some of them.
Incidentally, the paragraph mentions 'projections of distal
objects on our sensory surfaces' and says that 'For
identification, icons must be selectively reduced to those
'invariant features' of the sensory projection that will
reliably distinguish a member of a category...' - does this
relate to the repeated 'retina-like' structures found in the
brain?
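
For what it's worth, here is how I read the discriminate/identify
distinction, as a toy sketch (the 'icons', thresholds and feature
indices are all invented, not taken from the paper): discrimination
only needs a similarity judgement between two analog icons, whereas
identification needs the icon reduced to invariant features and
mapped to a name:

# Toy illustration of the discriminate/identify distinction.
# "Icons" are just lists of numbers standing in for analog
# sensory projections.

def discriminate(icon_a, icon_b):
    """Judge how different two icons are (0 means the same)."""
    return sum(abs(a - b) for a, b in zip(icon_a, icon_b))

def identify(icon, invariants):
    """Reduce the icon to invariant features and assign a name."""
    for name, (index, low, high) in invariants.items():
        if low <= icon[index] <= high:
            return name
    return "unknown"

# Invented invariant features, e.g. feature 0 = elongation.
invariants = {"snake-shape": (0, 0.8, 1.0),
              "horse-shape": (0, 0.2, 0.5)}

print(discriminate([0.9, 0.3], [0.3, 0.3]))   # nonzero, so 'different'
print(identify([0.9, 0.3], invariants))       # 'snake-shape'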

>Butterworth:
>
>OK, so we now have these icon thingies, such that if the system sees a
>horse, it can say 'Horse!', but very little else - we have a
>classifier. And as Harnad discusses a little later, a very good
>candidate for classifiers is connectionism, ie. neural nets. I don't
>know how many of the class did Neural Nets last semester, but this is a
>classic application for them - apply a feature map as input, and the
>net generates an output indicating one particular class. Very nice,
>but we now want to do something with our 'Horse!'...

> HARNAD:
> For systematicity it must be possible to combine and recombine [categorical
> representations] rulefully into propositions that can be semantically
> interpreted. What would be required to generate these other systematic
> properties? Merely that the grounded names in the category taxonomy be
> strung together into propositions about further category membership
> relations.

>Butterworth:
>Harnad gives the example that, given awareness of the icons for
>'horse' and 'stripes', it should be a simple matter to define a
>new concept 'zebra' as the conjunction of these two, and this
>concept would 'inherit' a grounding from them.

This seems like an interesting point: although 'horse' and
'stripes' are each grounded, applying one to the other with any
accuracy requires some kind of understanding of the physical
nature of the two - for example, the fact that the stripes
would follow the contours of the animal.
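
A toy way of putting the 'inherited grounding' idea (again my
own sketch, with made-up detector functions standing in for the
grounded categorical representations): if 'horse' and 'stripes'
each bottom out in a check on the sensory input, then a purely
symbolic definition of 'zebra' as their conjunction bottoms out
there too - although, as above, a bare conjunction says nothing
about the stripes following the animal's contours:

# Toy sketch of a symbolically-defined category inheriting its
# grounding. The detectors are invented stand-ins for grounded
# categorical representations.

def is_horse(icon):
    # pretend feature 0 of the icon encodes "horse-shaped"
    return icon[0] > 0.5

def has_stripes(icon):
    # pretend feature 1 of the icon encodes "striped"
    return icon[1] > 0.5

def is_zebra(icon):
    # defined purely symbolically as a conjunction of grounded
    # names, so its membership test still bottoms out in the
    # sensory detectors above
    return is_horse(icon) and has_stripes(icon)

print(is_zebra([0.9, 0.8]))   # True  - horse-shaped and striped
print(is_zebra([0.9, 0.1]))   # False - horse-shaped, no stripes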

In conclusion:

>Butterworth:
>
>It may be a boring answer, but I don't have any real problems with this
>paper. I agree that the symbol grounding problem is one intrinsic to
>symbolic AI, and I like the simplicity of the idea behind Harnad's
>solution. I think a good implementation would be a very interesting
>experiment, whether you are a believer in weak or strong AI.

I'm afraid I agree as well: having read the paper, the 'hybrid'
system sounds like a natural solution, but now I'm confused
about some other issues. The paper seems largely concerned with
'learning', obviously with the emphasis on symbol grounding. As
a hypothetical question, suppose that at some point in the
future we are able to scan a brain at the atomic level and
produce a simulation of it on a computer. The computer (or
program) is advanced in that it can accurately model the
behavior of atoms, but apart from that it is similar to today's
machines. Surely if such a computer could exist (ethical issues
aside!), the modelled brain would perform exactly like the real
one - we could even artificially stimulate the sensory regions
to simulate sensory input. In that case the issue would not be
whether a computer could pass the Turing test and be deemed
capable of thought, but how we could arrive at such a 'brain'
without copying an existing one - that is, the process of
'learning'. This is related to the question of whether a
'thinking' T3 machine would still 'think' if all its sensory
inputs were removed. The answer must be yes, but the question
is whether the T3 level is required to form a consciousness in
the first place. I hope all that makes sense!

Shaw, Leo, las197@ecs.soton.ac.uk


