Re: Harnad (1) on Symbol Grounding Problem

From: Brown, Richard (r.brown@zepler.org)
Date: Tue Mar 21 2000 - 14:54:01 GMT


Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.sgproblem.html

Harnad begins his paper with a brief description of cognitivism and then
explains two different ways of modelling the mind. The first models the mind
as an implementation-independent symbol system, the view we have already
reached in class. The second models the mind as a connectionist system, which
Harnad describes as follows:

> Harnad
> 1.3 Connectionist systems
> Variously described as "neural networks,"
> "parallel distributed processing" and "connectionism," ...
> Connectionism will accordingly only be considered here as a cognitive
> theory. As such, it has lately challenged the symbolic approach to
> modeling the mind. According to connectionism, cognition is not symbol
> manipulation but dynamic patterns of activity in a multilayered network
> of nodes or units with weighted positive and negative interconnections.
> The patterns change according to internal network constraints governing
> how the activations and connection strengths are adjusted on the basis
> of new inputs (e.g., the generalized "delta rule," or "backpropagation,"
> McClelland, Rumelhart et al. 1986). The result is a system that learns,
> recognizes patterns, solves problems, and can even exhibit motor skills.
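
The "generalized delta rule" mentioned above is just an error-driven
adjustment of connection strengths. A minimal sketch (my own, not anything
from the paper) of a single unit being nudged towards a target output each
time an input is presented:

    # Minimal delta-rule sketch: one unit with weighted connections.
    def delta_rule_update(weights, inputs, target, rate=0.1):
        activation = sum(w * x for w, x in zip(weights, inputs))
        error = target - activation
        # Each connection strength changes in proportion to the error and
        # to the input arriving along that connection.
        return [w + rate * error * x for w, x in zip(weights, inputs)]

    weights = [0.0, 0.0, 0.0]
    for _ in range(100):
        weights = delta_rule_update(weights, [1.0, 0.5, -1.0], target=1.0)
    # The unit's output for this input now approximates the target.

Scaled up to many such units arranged in layers, this is the kind of learning
that lets a net recognise patterns without any explicit symbol manipulation.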

He then tries to define the scope and limits of both:

> Harnad
> It is far from clear what the actual capabilities and limitations of
> either symbolic AI or connectionism are. The former seems better at
> formal and language-like tasks, the latter at sensory, motor and
> learning tasks, but there is considerable overlap and neither has gone
> much beyond the stage of "toy" tasks toward lifesize behavioral
> capacity.

The fact that no one has been able to develop a non-"toy" implementation of
either model suggests that, on their own, they are incapable of providing
anything other than toys, despite Chalmers's attempt to show that computation
is sufficient for cognition.

Harnad then turns to the argument over whether connectionism is symbolic,
and argues that, according to his own definition of a symbol system given
earlier, nets are not symbolic because they

> Harnad
> ... fail to meet the compositeness (7) and
> systematicity (8) criteria listed earlier: The patterns of
> interconnections do not decompose, combine and recombine according to a
> formal syntax that can be given a systematic semantic interpretation.[5]
> Instead, nets seem to do what they do non symbolically. According to
> Fodor & Pylyshyn, this is a severe limitation, because many of our
> behavioral capacities appear to be symbolic, and hence the most natural
> hypothesis about the underlying cognitive processes that generate them
> would be that they too must be symbolic. Our linguistic capacities are
> the primary examples here, but many of the other skills we have --
> logical reasoning, mathematics, chess-playing, perhaps even our
> higher-level perceptual and motor skills -- also seem to be symbolic. In
> any case, when we interpret our sentences, mathematical formulas, and
> chess moves (and perhaps some of our perceptual judgments and motor
> strategies) as having a systematic meaning or content, we know at first
> hand that that's literally true, and not just a figure of speech.
> Connectionism hence seems to be at a disadvantage in attempting to model
> these cognitive capacities.

Here we see what I feel is a very good argument against connectionism: many
of the things we do are symbolic, therefore a model of the mind should also
be symbolic. Harnad suggests this may be one reason for the limited success
of neural nets. But rather than propose that only symbol systems be used,
Harnad introduces the symbol grounding problem (TSGP), which may in turn
explain the toy-like results achieved with symbolic AI.

Two examples are used to explain TSGP. The first is Searle's Chinese Room
Argument; the second is:

> Harnad
> 2.2 The Chinese/Chinese Dictionary-Go-Round
> My own example of the symbol grounding problem has two versions, one
> difficult, and one, I think, impossible. The difficult version is:
> Suppose you had to learn Chinese as a second language and the only
> source of information you had was a Chinese/Chinese dictionary. The trip
> through the dictionary would amount to a merry-go-round, passing
> endlessly from one meaningless symbol or symbol-string (the definientes)
> to another (the definienda), never coming to a halt on what anything
> meant.[6]
> ...The second variant of the
> Dictionary-Go-Round, however, goes far beyond the conceivable resources
> of cryptology: Suppose you had to learn Chinese as a first language and
> the only source of information you had was a Chinese/Chinese
> dictionary![8] This is more like the actual task faced by a purely
> symbolic model of the mind: How can you ever get off the symbol/symbol
> merry-go-round? How is symbol meaning to be grounded in something other
> than just more meaningless symbols?[9] This is the symbol grounding
> problem.[10]

My response to this would be to plug in some eyes and let the system see,
but Harnad argues against this in the next section. The Chinese/Chinese
dictionary argument also raises another question, perhaps not directly
related: what is the language of thought?
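
The merry-go-round itself is easy to make concrete. In the toy sketch below
(mine, not Harnad's; the entries are arbitrary placeholders), every
definition is only more symbols, so chasing definitions simply loops:

    # Toy "Chinese/Chinese dictionary": symbols defined only by other symbols.
    toy_dictionary = {
        "A": ["B", "C"],
        "B": ["C", "A"],
        "C": ["A"],
    }

    def chase_meaning(symbol, visited=None):
        """Follow the first word of each definition until we loop."""
        visited = visited if visited is not None else []
        if symbol in visited:
            return visited + [symbol]   # back where we started, no meaning found
        return chase_meaning(toy_dictionary[symbol][0], visited + [symbol])

    print(chase_meaning("A"))   # ['A', 'B', 'C', 'A'] -- the merry-go-round

Nothing in such a system ever halts on anything that is not itself just
another symbol.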

> Harnad:
> The standard reply of the symbolist (e.g., Fodor 1980, 1985) is that the
> meaning of the symbols comes from connecting the symbol system to the
> world "in the right way." But it seems apparent that the problem of
> connecting up with the world in the right way is virtually coextensive
> with the problem of cognition itself. If each definiens in a
> Chinese/Chinese dictionary were somehow connected to the world in the
> right way, we'd hardly need the definienda!

> Many symbolists believe that
> cognition, being symbol-manipulation, is an autonomous functional module
> that need only be hooked up to peripheral devices in order to "see" the
> world of objects to which its symbols refer (or, rather, to which they
> can be systematically interpreted as referring).[11] Unfortunately, this
> radically underestimates the difficulty of picking out the objects,
> events and states of affairs in the world that symbols refer to, i.e.,
> it trivializes the symbol grounding problem.

Here we reach the proposed solution: a hybrid system.

> Harnad
> It is one possible candidate for a solution to this problem, confronted
> directly, that will now be sketched: What will be proposed is a hybrid
> nonsymbolic/symbolic system, a "dedicated" one, in which the elementary
> symbols are grounded in two kinds of nonsymbolic representations that
> pick out, from their proximal sensory projections, the distal object
> categories to which the elementary symbols refer. Most of the components
> of which the model is made up (analog projections and transformations,
> discretization, invariance detection, connectionism, symbol
> manipulation) have also been proposed in various configurations by
> others, but they will be put together in a specific bottom-up way here
> that has not, to my knowledge, been previously suggested, and it is on
> this specific configuration that the potential success of the grounding
> scheme critically depends.
> Table 1 summarizes the relative strengths and weaknesses of
> connectionism and symbolism, the two current rival candidates for
> explaining all of cognition single-handedly. Their respective strengths
> will be put to cooperative rather than competing use in our hybrid
> model, thereby also remedying some of their respective weaknesses. Let
> us now look more closely at the behavioral capacities such a cognitive
> model must generate.
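
The configuration Harnad describes can be sketched, very roughly, as a
bottom-up pipeline (my own reading, not code from the paper; all the names
here are invented for illustration):

    from dataclasses import dataclass
    from typing import Callable, List, Sequence

    @dataclass
    class IconicRep:
        """Analog copy of a proximal sensory projection."""
        projection: List[float]

    @dataclass
    class CategoricalRep:
        """Invariant-feature detector for one category; in the hybrid model
        this filter would be learned, e.g. by a connectionist net."""
        name: str
        detector: Callable[[Sequence[float]], bool]

    def identify(projection: Sequence[float],
                 categories: List[CategoricalRep]) -> List[str]:
        # The symbolic level only ever manipulates the names whose
        # detectors fire on the nonsymbolic projection.
        return [c.name for c in categories if c.detector(projection)]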

So the proposal is a system that puts the strengths of both methodologies to
cooperative use. Harnad then specifies what this system must achieve, listing
the behavioural capacities he wants it to be able to produce:

> Harnad
> We already know what human beings are able to do. They can
> (1) discriminate, (2) manipulate,[12] (3) identify and (4) describe the
> objects, events and states of affairs in the world they live in, and
> they can also (5) "produce descriptions" and (6) "respond to
> descriptions" of those objects, events and states of affairs. Cognitive
> theory's burden is now to explain how human beings (or any other
> devices) do all this.[13]

> Harnad
> According to the model being proposed here, our ability to discriminate
> inputs depends on our forming "iconic representations" of them ...
> Same/different judgments would be
> based on the sameness or difference of these iconic representations, and
> similarity judgments would be based on their degree of congruity. ...
> So we need horse icons to discriminate horses. But what about
> identifying them? Discrimination is independent of identification. Will the
> icon allow me to identify horses? Although there are theorists who
> believe it would (Paivio 1986), I have tried to show why it could not
> (Harnad 1982, 1987b). In a world where there were bold, easily detected
> natural discontinuities between all the categories we would ever have to
> (or choose to) sort and identify -- a world in which the members of one
> category couldn't be confused with the members of any other
> category -- icons might be sufficient for identification. But in our
> underdetermined world, with its infinity of confusable potential
> categories, icons are useless for identification because there are too
> many of them and because they blend continuously[15] into one another,
> making it an independent problem to identify which of them are icons of
> members of the category and which are not! Icons of sensory projections
> are too unselective. For identification, icons must be selectively
> reduced to those "invariant features" of the sensory projection that
> will reliably distinguish a member of a category from any nonmembers
> with which it could be confused.

Okay, so we have to generalise in order to identify. Icons tell us that two
horses are different, but the reason we know they are both horses is not that
they match some internal icon of a horse; it is that each horse has certain
invariant features, such as four legs and yellow teeth, that make it a horse.
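
A crude way to see the difference (my own sketch; the feature vectors are
made-up stand-ins for sensory projections):

    # Discrimination: compare two analog icons point by point.
    def icon_mismatch(icon_a, icon_b):
        """Degree of incongruity between two iconic representations."""
        return sum(abs(a - b) for a, b in zip(icon_a, icon_b))

    # Identification: check the invariant features of the category
    # (here, crude thresholds on invented "legs" and "tooth colour" values).
    def is_horse(projection):
        legs, tooth_yellowness = projection[0], projection[1]
        return legs == 4 and tooth_yellowness > 0.5

    horse_1 = [4, 0.8, 0.3]
    horse_2 = [4, 0.7, 0.9]
    print(icon_mismatch(horse_1, horse_2) > 0)    # True: discriminably different
    print(is_horse(horse_1), is_horse(horse_2))   # True True: same category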

> Harnad
> Note that both iconic and categorical representations are nonsymbolic.
> The former are analog copies of the sensory projection, preserving its
> "shape" faithfully; the latter are icons that have been selectively
> filtered to preserve only some of the features of the shape of the
> sensory projection: those that reliably distinguish members from
> nonmembers of a category. But both representations are still sensory and
> nonsymbolic... Iconic representations
> no more "mean" the objects of which they are the projections than the
> image in a camera does.

It is worth stopping here to realise that we are not using symbols yet in
this system: discrimination is akin to comparing two photographs, the images
of which are in our heads.

> Harnad
> Nor can categorical representations yet be interpreted as "meaning"
> anything.
> "Horse" is so far just an arbitrary response that is reliably made in
> the presence of a certain category of objects. There is no justification
> for interpreting it holophrastically as meaning "This is a [member of
> the category] horse" when produced in the presence of a horse, because
> the other expected systematic properties of "this" and "a" and the
> all-important "is" of predication are not exhibited by mere passive
> taxonomizing. What would be required to generate these other systematic
> properties? Merely that the grounded names in the category taxonomy be
> strung together into propositions about further category membership
> relations.
> For example:
> (1) Suppose the name "horse" is grounded by iconic and categorical
> representations, learned from experience, that reliably discriminate and
> identify horses on the basis of their sensory projections.
> (2) Suppose "stripes" is similarly grounded.
> Now consider that the following category can be constituted out of these
> elementary categories by a symbolic description of category membership
> alone:
> (3) "Zebra" = "horse" & "stripes"[17]
> What is the representation of a zebra? It is just the symbol string
> "horse & stripes." But because "horse" and "stripes" are grounded in
> their respective iconic and categorical representations, "zebra"
> inherits the grounding, through its grounded symbolic representation.

So, in essence, we can use inheritance to ground symbols. If someone is able
to identify a horse (1) and identify stripes (2), and is then told that the
symbol "zebra" means a striped horse (3), then they can recognise a zebra
without ever having seen one.
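
A toy sketch of that inheritance (mine; the detectors are stand-ins for the
grounded iconic and categorical machinery described above):

    # "zebra" is defined purely symbolically as "horse & stripes", but it
    # inherits grounding because its parts are tied to (stand-in) detectors.
    grounded_detectors = {
        "horse":   lambda p: p.get("legs") == 4 and p.get("mane", False),
        "stripes": lambda p: p.get("striped", False),
    }

    symbolic_definitions = {
        "zebra": ("horse", "stripes"),   # a purely symbolic description
    }

    def identifies_as(name, projection):
        if name in grounded_detectors:            # grounded directly
            return grounded_detectors[name](projection)
        # grounded indirectly, by inheriting the grounding of its parts
        return all(identifies_as(part, projection)
                   for part in symbolic_definitions[name])

    first_zebra_seen = {"legs": 4, "mane": True, "striped": True}
    print(identifies_as("zebra", first_zebra_seen))   # True, never seen before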

> Harnad
> Hence, the ability to
> discriminate and categorize (and its underlying nonsymbolic
> representations) has led naturally to the ability to describe and to
> produce and respond to descriptions through symbolic representations.

To sum up, there have been two main approaches to AI. Both have their
advocates and their detractors, but the fact remains that neither has yet
achieved its long-term goal. Harnad attempts to explain why each has fallen
short on its own, and to draw the two methods together into a foundation for
the eventual creation of an AI.

Brown, Richard <r.brown@zepler.org>
