Re: Dennett: Making a Conscious Robot

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Tue May 02 2000 - 10:39:07 BST


Discussion archived at:

http://www.cogsci.soton.ac.uk/~harnad/Hypermail/Foundations.Cognitive.Science2000/

On Mon, 1 May 2000, Cliffe, Owen wrote:

> > > DENNETT:
> > > It is unlikely, in my opinion, that anyone will ever make a robot that is
> > > conscious in just the way we human beings are.
>
> > Harnad:
> > The trouble is that talking too liberally about different "forms" of
> > consciousness risks admitting one form we definitely don't want to
> > admit, namely, no consciousness at all, "nobody home," a "Zombie."
>
> Cliffe:
> Indeed, and it seems to me that separating human consciousness (holy-grail
> like) into a separate category is totally unnecessary. The underpinnings for
> a conscious system are present in relatively simple life forms (reaction to
> environment). Human consciousness differs _only_ in its complexity and
> diversity.

I'm afraid this oversimplifies a rather important point: it's not
just a matter of simple systems plus complexity/diversity. It's all
about the specifics of that "complexity/diversity"!

Yes, "simple" organisms are conscious too, but we have no idea how or why!
We didn't design them.

And we know that tea-kettles and toasters, which are also "simple,"
and which we DID design, are NOT conscious.

So "simplicity + complexity" explains nothing.

T3-power, on the other hand, does. It's the RIGHT "complexity" (as we
know from our own case).

But there is no T3 for simple organisms. So that S + C equation does not
help.

What I meant with my comment about "kinds" of consciousness is that we
have to be careful not to use it to cheat. (A teapot has NO kind of
consciousness; and scaling it up in complexity tells us nothing either.
Scaling it up in CAPACITY -- to T3 -- on the other hand, does. But
Dennett's Cog is (t3 < T3), so that doesn't help...)

> Cliffe:
> Plants are zombies too, right?

I hope so! Otherwise we vegetarians (who won't eat anything conscious)
are out of luck!

> Cliffe:
> Amoebae?
> They can show particularly 'life-like' (e.g. slime molding) behaviour despite
> the total absence of an electrical nervous system. Not that that is
> necessarily a basis for consciousness.

Who knows? I suspect consciousness has something to do with having a
nervous system -- which amoebas don't have -- but who knows?

These simple systems may or may not be conscious; there is no way to
tell. Hence even if we could build an amoeba-robot that passes the
amoeba-t3, we wouldn't know whether it was conscious.

On the other hand, you have to start somewhere. And if modelling
animal-t3 gives you some ideas that then manage to scale up to the
human T3, then you might be getting somewhere.

> > Harnad:
> > So, yes, bio-modules might be components in a T3 robot -- but they
> > better be components that still allow us to understand the robot's
> > overall (T3) function and the basis for its T3 success. Otherwise such
> > robots will be more like clones than explanations.
>
> Cliffe
> But that would only be a speed-up exercise wouldn't it? Because the
> behaviour of the bio-components would have to be defined, and thus
> potentially replaceable with non-bio parts.

Correct.

> Cliffe
> who is to say the bio+electro+chemical systems required for the T3
> power activities could not be replicated with more efficient/duplicate
> bio+electrical+chemical systems. e.g. if you say that the device can live in
> a sterile room then you might reasonably be able to ignore the immune
> system, or that its blood could be dripped with foodmix and cleaned (things
> which are possible to a certain extent with current medical science), then
> you can discard digestion, the renal system and so on.

Yes, you are right that a large part of the T3 game is figuring out what
is and is not RELEVANT to generating T3 power. But, as Dennett says, it
could go both ways: the full bio-way could prove to be the easiest way, the
optimal way, or the only way to T3. Or there could be simpler ways.
Let's hope for the best.

> > Harnad:
> > (Exercise: What is the difference between a functional system capable of
> > learning to avoid structural damage to itself -- functioning as-if it
> > were in pain -- and a system capable of feeling pain?)
>
> Cliffe
> I assume in the latter case you are referring to the animal's pain for example.

I'm referring to ANYTHING that really feels pain.

> Cliffe
> The difference is only the level at which the sensation of pain or the
> concept of pain is experienced and responded to.

Owen, you've just begged the question: Are you talking about a real
conscious sensation and concept, or merely a behavioral capacity that is
interpretable (by us) AS-IF it were a conscious sensation or concept?

There's a world of difference between any real, conscious (= felt)
function and a mere behavioral capacity that resembles it externally.

(This is the problem that dogs everything that is sub-T3-scale, i.e.,
t3 < T3. The buck stops with T3. But T3 is not just a "pain module" --
it is (as Dennett once said) "the whole iguana.")

> Cliffe
> In humans the sensation and the bulk of the response to pain are unconscious,

In humans, the bulk of what's going on in the head is unconscious,
otherwise we could design a T3-passer by sitting in our armchairs and
introspecting about how we do everything.

But PAIN certainly is not unconscious! Conscious means FELT. And an
unfelt "pain" is no pain at all (just as an unfelt "feeling" is no
feeling at all).

Distinguish pain, the feeling, from its function, which is to protect a
system from tissue damage. This is, in itself, just a "toy" function: a
tiny part of overall T3. So there are no doubt ways of implementing this
function that are completely unconscious, i.e., feelinglessly, Zombily.
For our purposes, those are the WRONG way, and they will not take us to
T3.
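
To make "implementing the function feelinglessly" concrete, here is a
minimal sketch (in Python; every name and number in it is my own
hypothetical illustration, not anyone's actual model) of a system that
LEARNS to avoid "tissue damage." Its "pain" is just a negative number
in an update rule; there is nobody home to feel anything:

import random

N_STATES = 10          # positions on a line; state 0 is "damaging"
ACTIONS = (-1, +1)     # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action] starts at zero: the system "knows" nothing yet.
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES)}

def reward(state):
    """-1.0 signals 'tissue damage'; a bookkeeping token, not a feeling."""
    return -1.0 if state == 0 else 0.0

def step(state, action):
    return max(0, min(N_STATES - 1, state + action))

for episode in range(500):
    s = random.randrange(1, N_STATES)
    for _ in range(20):
        # epsilon-greedy choice: mostly exploit, occasionally explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(Q[s], key=Q[s].get)
        s2 = step(s, a)
        # standard Q-learning update: "pain" enters only as reward(s2)
        Q[s][a] += ALPHA * (reward(s2) + GAMMA * max(Q[s2].values()) - Q[s][a])
        s = s2

# After training, the greedy policy points away from the damaging state:
# the system behaves as-if approaching state 0 hurt.
print({s: max(Q[s], key=Q[s].get) for s in range(1, N_STATES)})

The point: this toy system comes to behave as-if state 0 hurt, yet
obviously nothing is felt anywhere in it. That as-if gap is exactly
what the Exercise above was about.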

> Cliffe
> in the former case
> the machine is not responding automatically but learning from scratch that
> pain is bad.

Conscious/unconscious (= felt/unfelt) is not the same as
learned/automatic. There can be both conscious and unconscious
learning, and conscious and unconscious automatic behaviour.

And, by the way, "unconscious" (unfelt) can be a weasel-word too. For we
have lots of "unconscious processes" going on in our otherwise conscious
heads. Some of these processes can sometimes become conscious. But that
has nothing to do with the NONconscious processes going on inside a
tea-kettle. They are not, and never will or could be conscious, because
the tea-kettle is a Zombie!

HARNAD, Stevan


