Re: Ziemke on "Rethinking Grounding"

From: Shaw, Leo (las197@ecs.soton.ac.uk)
Date: Tue May 16 2000 - 14:32:14 BST


Stevan,

Just a few thoughts I had on reading your response...

> > Shaw:
> > The crux of Ziemke's argument is that neither paradigm produces a
> > sufficiently grounded system.

> It seems to me that it's not helpful to speak of whether a system is
> "sufficiently" grounded. A symbol system alone is ungrounded. A symbol
> system module plus transducer modules interacting with the world might be
> grounded, but until it approaches T3 scale the grounding is trivial. It
> is T3 that determines the "sufficiency" of the grounding.

My understanding of Ziemke's argument was that, while some approaches
can ground symbols, the problem remains that the agent isn't doing
things for its own reasons, but because it has been told to do so.
Regier's system, for example, can recognise faces, but has no reason to
do so. If, on the other hand, it were recognising faces because one
person 'switched it off' and another 'fed it', it would have a reason.

> > Shaw:
> > the concept of a 'fully grounded system' hasn't really been justified

> Correct. "Degree of grounding" still sounds like an arbitrary idea. The
> symbol problem is real enough (how do we connect symbols to their
> meanings without the mediation of an external interpreter's mind?), but
> where does the "degree" come in? A symbol system whose symbols are
> autonomously connected to the things they are about is grounded, but
> only nontrivial symbol systems are worth talking about. (An "on/off"
> system, whose only two symbolic states are "I am on" and "I am off" is
> grounded if it's on when it's on and off when it's off, but so what?)

But surely the point is that the on/off action isn't grounded: an
amoeba moves away from sharp objects, and it has a good reason to do
so. That may be the simplest kind of behaviour, but it could be a step
in the right direction. An on/off switch has nothing of the sort.

> But to meet this condition, to be grounded, all a system needs is
> autonomy (and T3 power). With that, it's grounded, regardless of
> whether it is integrated or modular, and regardless of whether (or how)
> its transducers are "designed."
> ...
> The only requirement for groundedness is
> that there should be no human mediator needed in the exercise of its T3
> capacity. How it got that capacity is irrelevant.

Perhaps Ziemke's argument could be interpreted as meaning that trying to
allow a system to define its own behaviour is a SENSIBLE way to go about
creating an artificial intelligence, not the only way. It seems to me
that creating an agent that could pass T3 is a colossal task,
especially if the only way of measuring success is to subject the final
product to a Turing test. Surely human cognitive capacity evolved
because it provided an advantage over the competition, and over time
that capacity grew. Maybe what we consider 'thought' is just an
extension of this, and the best way to produce a system with cognitive
capacity similar to our own is to let it 'evolve' rather than to
attempt to define it artificially.
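
To make the 'evolve rather than define' suggestion a little more
concrete, here is a minimal sketch of the kind of selection-and-variation
loop that evolutionary approaches use. It is only an illustration: the
genome encoding, the toy fitness function and the mutation rate are all
invented for the example, and are not Ziemke's (or anyone's) actual
experimental setup.

    # Minimal sketch of evolving behaviour instead of hand-coding it:
    # score a population of candidate controllers, keep the fittest,
    # vary them, and repeat. All parameters here are hypothetical.

    import random

    GENOME_LENGTH = 16     # parameters of a candidate controller
    POPULATION = 30
    GENERATIONS = 50
    MUTATION_RATE = 0.05

    def random_genome():
        return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

    def fitness(genome):
        # Stand-in for "advantage over the competition": here, simply how
        # closely the parameters approach a fixed target profile. A real
        # experiment would score behaviour in an environment instead.
        target = [0.5] * GENOME_LENGTH
        return -sum((g - t) ** 2 for g, t in zip(genome, target))

    def mutate(genome):
        return [g + random.gauss(0, 0.1)
                if random.random() < MUTATION_RATE else g
                for g in genome]

    def evolve():
        population = [random_genome() for _ in range(POPULATION)]
        for _ in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            survivors = population[:POPULATION // 2]       # selection
            offspring = [mutate(random.choice(survivors))  # variation
                         for _ in range(POPULATION - len(survivors))]
            population = survivors + offspring
        return max(population, key=fitness)

    if __name__ == "__main__":
        best = evolve()
        print("best fitness after evolution:", fitness(best))

The point of the sketch is only that the designer specifies the
selection pressure, not the behaviour itself; what the system ends up
doing is shaped by what succeeds, which is closer to having 'its own
reasons' than being told what to do.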

Shaw, Leo


