Re: Dennett: Making Conscious Robots

From: Basto Jorge (jldcb199@ecs.soton.ac.uk)
Date: Fri May 25 2001 - 02:29:12 BST


Basto:
This paper describes Cog, a robot that is meant to learn from and
interact with its surrounding environment. While there is no explicit
intention of achieving consciousness, there is hope that this
experiment can offer some insight into the matter.

To my understanding, this hybrid project illustrates an application of
both forward engineering and reverse engineering simultaneously. The
former because there is an explicit intention of making something
useful, which certainly started with a general description of the
intended capacities and competencies, had an implementation design, and
proceeded tentatively to its physical realization; the latter because
there is "hope" that, from the success of this endeavor, there will be
room to extrapolate by analysis the design considerations that must
have governed our cognitive behavior.

> DENNETT:
> It is unlikely, in my opinion, that anyone will ever make a robot
> that is conscious in just the way we human beings are

Basto:
I argue that if you change the variables, you will most likely change
the nature of the outcome. This obviously applies when it comes to
reproducing human-like consciousness. However, it can also be the case
that you get the same outcome with a different set of parameters (a
genetically modified apple tastes radically different from an organic
apple, but it IS nevertheless an apple; what I don't know is whether
the only way to get the organic apple taste is EXCLUSIVELY with the
methods used in organic farming). To achieve full-scale human-level
consciousness may be too complex, or at least too dependent on the
precise physiology of humans, to exist apart from its embodiment in
humans situated in their environment. Besides, consciousness itself
might be an emergent property of the human "architecture", where the
functionality that is displayed and observable is to a large degree
founded on the properties of the environment. I do think, though, that
this is not the heart of the problem. If people with the same
architecture think in different ways but nevertheless think, what
assures us that a different architecture won't think at all? I actually
believe that whatever thinking the different architecture demonstrates
will be different from our human thinking, but can be called thinking
anyway. Besides, there are several degrees of consciousness, like the
ones observable in other animals (at least I do believe that some
animals manifest conscious behavior). I think we can be sure that it is
possible to achieve consciousness of some kind other than human, but
there is no evidence or counter-example showing that we cannot get
human-level consciousness as a side effect of a different architecture.

> DENNETT:
> Might a conscious robot be "just" a stupendous assembly of more
> elementary artifacts--silicon chips, wires, tiny motors and
> cameras--or would any such assembly, of whatever size and
> sophistication, have to leave out some special ingredient that is
> requisite for consciousness?

Basto:
I have an all-or-nothing view here: we can either assume we are the
result of physics and chemistry in action, regardless of how
complicated and complex the design, or we can believe there is
something else, other than the physics and chemistry, that regulates
and controls our human (conscious) universe. The introduction of
something like the mysterious luminiferous ether once used in physics,
something that is not seen and whose presence brings no utility and
does not even affect or interfere with the system and its
interpretation, is in fact disposable; it can be eliminated. So I stick
with the first view. But by this I DO NOT mean that consciousness is
JUST a physics-and-chemistry process. I do agree that Strong AI is
wrong in that thinking is not only computation: there are epigenetic
issues, social and interaction issues, the very important issue of the
development process (the history of the conscious self), and the very
important factor of sensorimotor information retrieval. I take it to be
highly probable that there is a lot of room for an emergent behavior
arising from the complex combination of these (known) issues (plus
probably many more unknown as of now), where the genetic structure
determines the common "architecture", epigenetic factors give the extra
diversity and extra complexity we observe, and the sensorimotor
processing 'grounds' what I'll define as the primordial symbols, the
ones that allow other symbols to be built on them. That's the reason
humans share the same architecture but each nevertheless has a unique,
distinguishable personality.

> DENNETT:
> I take it that no serious scientific or philosophical thesis links
> its fate to the fate of the proposition that a protein-free
> conscious robot can be made, for example. The standard
> understanding that a robot shall be made of metal, silicon chips,
> glass, plastic, rubber and such, is an expression of the
> willingness of theorists to bet on a simplification of the issues:
> their conviction is that the crucial functions of intelligence can
> be achieved by one high-level simulation or another

Basto:
The current assumption is that if you roughly replicate the
functionality of our human brain in almost every piece, you will
eventually reach the overall goal of getting a rough consciousness. It
is true that given the architecture of an organism it is possible to
predict or expect a certain degree of consciousness (a sea star with
its simple nerve net won't convince anyone of its consciousness
faculty, but a dolphin or even a whale can raise some doubts...). But
this is the same as saying that if we can define formally or
axiomatically the architecture of an organism, we can expect FOR SURE
to be able to predict the emergence of its consciousness. I take it for
granted that consciousness is NOT ONLY the elements of the organism
(don't get me wrong, no room for ethereal stuff here): it is also the
organism and the organism's history, organization, evolution and
circumstance. The organism is deeply involved with consciousness, and
consciousness modifies the organism. So, to keep my view that what's
being questioned here is the possibility of A consciousness and not of
human consciousness, I think that it can emerge out of an assemblage of
functionally replicating sections, given that we have all the other
requirements that are not just computational pieces. To a certain
extent, Cog's approach is right because it gives space for an embodied
learning entity, and provides analog means of grounding the knowledge
for its central processing unit (or its brain, whatever one might call
it). Besides, it still remains to be proved whether it is what we are
made of that is fundamental to our consciousness, or THE WAY we were
made and the whole development process; or both. Parallel to this, we
have the question of building an "artificial" organic robot (whatever
that may be). How artificial would it be? It sounds like a
contradiction in terms to me.
1. If I take a clone of a person, I know it is a person.

2. If I assemble a "person" out of genes, atoms & molecules, I can
still believe I end up with a "person".

But what if I replace the atomic units with silicon genes, steel
molecules and (I have run out of substances here) whatever atoms? Will
it still be considered a person as in 1 or 2? I don't really know, but
I do know that I can still give it credit, for it can have a
consciousness of its own, if its observable behavior is
indistinguishable from (and hence equivalent to) a person's behavior.
Nowhere here am I assuming that it will be THE person. To me this is
equivalent to saying that if I could scan and see through someone's
brain, I could in a sense "be" that someone. I don't agree. I think
that I can see part of this someone through MY being, but I don't have
the EXPERIENCE of being this someone. These are two radically different
things. Remark: something that just occurred to me is whether a
mentally disabled person, disabled to such an extent that they are not
capable of surviving without third-party aid and do not exhibit any
observable cognitive behavior (because even if there are inner
thoughts, we couldn't know), would nevertheless still be considered a
conscious human being.

> DENNETT:
> We have just seen how, as a matter of exigent practicality, it
> could turn out after all that organic materials were needed to make
> a conscious robot. For similar reasons, it could turn out that any
> conscious robot had to be, if not born, at least the beneficiary of
> a longish period of infancy. Making a fully-equipped conscious
> adult robot might just be too much work. It might be vastly easier
> to make an initially unconscious or nonconscious "infant" robot and
> let it "grow up" into consciousness, more or less the way we all
> do. This hunch is not the disreputable claim that a certain sort of
> historic process puts a mystic stamp of approval on its product,
> but the more interesting and plausible claim that a certain sort of
> process is the only practical way of designing all the things that
> need designing in a conscious being.

Basto:
This merges with some of the ideas I expressed before. I do believe
that evolution played a significant role in what consciousness is
today. It still remains to be shown that there is only consciousness if
we have an evolutionary process (or THIS evolutionary process we have
been subjected to). By this I mean it can be the case that
consciousness can emerge even if we skip the evolutionary course.
Consciousness could possibly be backwards compatible and arise even if
the original source of stimuli was not there. I mean, we can build
something today with built-in "historic" and "evolutionary" knowledge
and expect it to still create its own story, and expect its self to
still evolve.

Neuroscientific theories of today claim to have found evidence that
evolution had something to do with our meta-knowledge capabilities via
our sensorimotor "devices" (that's what you get from talking about
robots for so long), and that our sensory information was (and IS)
essential to consciousness. As such, consciousness can probably be
replicated with a hybrid system that takes this into account in its
design. I think consciousness is deeply tied to self and
self-awareness; self is related to history: we are our past up till
now. Self-awareness is this miraculous meta-knowledge of these facts
about us and the feeling of our self being permanent (invariant could
also apply) over time. But as Dennett points out, this is not to say it
won't be possible to get consciousness through other means.

> DENNETT:
> After all, a normal human being is composed of trillions of parts
> (if we descend to the level of the macromolecules), and many of
> these rival in complexity and design cunning the fanciest artifacts
> that have ever been created. We consist of billions of cells, and a
> single human cell contains within itself complex "machinery" that
> is still well beyond the artifactual powers of engineers

Basto:
True. It is not clear that it is this quantitative degree of complexity
that is responsible for the conscious outcome. After all, there are
many examples in nature that have the same or higher quantitative
organic complexity, yet we do not observe a comparable improvement in
cognitive behavior. The complexity of something is not, by itself, an
obstacle to the replication of its functionality. Besides, I think what
Cog's project team wants is to settle for some "crude" but similar
functionality. But is the replicated functionality the same thing? Is
(or will) Cog be conscious that it is conscious, the way we are aware
of our consciousness? Why not? How could we tell?

> DENNETT:
> Artificial ears and eyes that will do a serviceable (if crude) job
> of substituting for lost perceptual organs are visible on the
> horizon, and anyone who doubts they are possible in principle is
> simply out of touch. Nobody ever said a prosthetic eye had to see
> as keenly, or focus as fast, or be as sensitive to color gradations
> as a normal human (or other animal) eye in order to "count" as an
> eye. If an eye, why not an optic nerve (or acceptable substitute
> thereof), and so forth, all the way in?

Basto:
100% with Dennett. Besides, as I mentioned above, we are not talking
about a conscious robot with our human consciousness. We should always
be aware that a simulation is not a complete representation of what it
simulates.

> DENNETT:
> A much more interesting tack to explore, in my opinion, is simply
> to set out to make a robot that is theoretically interesting
> independent of the philosophical conundrum about whether it is
> conscious. Such a robot would have to perform a lot of the feats
> that we have typically associated with consciousness in the past,
> but we would not need to dwell on that issue from the outset. Maybe
> we could even learn something interesting about what the truly hard
> problems are without ever settling any of the issues about
> consciousness

Basto:
I notice that Dennett is biased towards an "emergent" consciousness
faculty, and he expects that the construction of a robot can in fact
shed some light on the debate around consciousness. Since Dennett
obviously doesn't believe in a consciousness faculty located anywhere
outside the embodiment of the conscious being, this goes to show that,
most likely, consciousness will emerge from the "assemblage" of the
right pieces, with the right process and the right environment. He
hopes this process can be discovered iteratively and constructively.

> DENNETT:
> Since its eyes are video cameras mounted on delicate, fast-moving
> gimbals, it might be disastrous if Cog were inadvertently to punch
> itself in the eye, so part of the hard-wiring that must be provided
> in advance is an "innate" if rudimentary "pain" or "alarm" system
> to serve roughly the same protective functions as the reflex
> eye-blink and pain-avoidance systems hard-wired into human
> infants.

Basto:
I don't see why this is mentioned as a bonus instead of as a
requirement. Human beings are born with small innate self-preservation
mechanisms, and it is most likely that this should be the case for any
system that intends to replicate human cognitive behavior (even if
roughly). This is in contrast with the Cyc project, where it is assumed
that a huge database of facts and knowledge is required in order to
have some similarity with human beings in terms of knowledge. I
actually think the Cog approach is more faithful in that it allows Cog
to learn, starting with a relatively small knowledge base that is
expected to grow over time. The sensorimotor capabilities can provide
for the symbol grounding. Cog can show whether it is possible to beat
Strong AI by using sensorimotor information grounding, complementing it
with the symbols and the symbol-processing units.
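
To make the contrast concrete, here is a minimal sketch in Python (all
names, such as GroundedKB and SensorimotorEpisode, are hypothetical and
not from the actual Cog or Cyc projects): a Cyc-style entry is just an
authored fact, while a Cog-style entry starts empty and accumulates the
sensorimotor episodes that ground it.

# Minimal sketch (hypothetical names throughout): a Cyc-style static fact
# base versus a Cog-style knowledge base that starts nearly empty and grows
# by grounding "symbols" in sensorimotor episodes.
from dataclasses import dataclass, field

@dataclass
class SensorimotorEpisode:
    """One interaction: raw sensor features plus the motor act taken."""
    features: tuple          # e.g. crude touch/vision readings
    action: str              # e.g. "grasp", "withdraw"

@dataclass
class GroundedKB:
    """Knowledge base where each symbol points to the episodes grounding it."""
    symbols: dict = field(default_factory=dict)

    def ground(self, symbol, episode):
        # The symbol's "meaning" is just the growing set of episodes behind it.
        self.symbols.setdefault(symbol, []).append(episode)

# Cyc-style: a hand-authored fact, with no sensorimotor trace behind it.
cyc_style_facts = {"fire": "fire is hot and causes damage"}

# Cog-style: the same notion acquired from (simulated) experience.
kb = GroundedKB()
kb.ground("hot", SensorimotorEpisode(features=(0.97,), action="withdraw"))
print(len(kb.symbols["hot"]))   # one episode grounds the symbol so far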

> DENNETT:
> The same sensitive membranes will be used on its fingertips and
> elsewhere, and, like human tactile nerves, the "meaning" of the
> signals sent along the attached wires will depend more on what the
> central control system "makes of them" than on their "intrinsic"
> characteristics. A gentle touch, signaling sought-for contact with
> an object to be grasped, will not differ, as an information packet,
> from a sharp pain, signaling a need for rapid countermeasures. It
> all depends on what the central system is designed to do with the
> packet, and this design is itself indefinitely revisable--something
> that can be adjusted either by Cog's own experience or by the
> tinkering of Cog's artificers.

Basto:
This is enough evidence for me that the conscious outcome of Cog will
certainly be of a very different nature (for the time being, at least),
as there are significant abstractions in the implementation compared
with ours. It is not only the complexity here: the quantity and quality
of the chosen model and the simplifications made are bound to provide
radically different information (different from the information our
sensorimotor devices provide) to the higher processing layers. Whatever
the processing layers do with this different "grounding" information is
bound to be something different from our consciousness.
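
Dennett's point that the packet's meaning lies in what the central
system does with it can be pictured with a small sketch (hypothetical
names in Python, not the actual Cog control code): the very same packet
is handled as sought-for contact or as "pain" purely according to the
currently installed policy, and that policy is revisable.

# Minimal sketch (hypothetical names, not the real Cog software): identical
# "information packets" get their meaning from the central system's current
# policy, which can be revised by experience or by the artificers.
def grasp_more(packet):
    return "close gripper on %s contact" % packet["sensor"]

def withdraw(packet):
    return "rapid countermeasure: pull %s away" % packet["sensor"]

class CentralSystem:
    def __init__(self):
        # The mapping from packets to responses is where the "meaning" lives.
        self.policy = grasp_more

    def handle(self, packet):
        return self.policy(packet)

packet = {"sensor": "fingertip", "intensity": 0.8}
cog = CentralSystem()
print(cog.handle(packet))    # interpreted as sought-for contact
cog.policy = withdraw        # the design is revised
print(cog.handle(packet))    # the identical packet now reads as "pain"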

> DENNETT:
> Any feature that is not innately fixed at the outset, but does get
> itself designed into Cog's control system through learning, can
> then often be lifted whole (with some revision, perhaps) into
> Cog-II, as a new bit of innate endowment designed by Cog itself--or
> rather by Cog's history of interactions with its environment.

Basto:
Nice way of getting Cog to have social interactions with its embodied
system, and nice way of providing Cog with a personal "history" plus a
development process.
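
The "lifting" of acquired design into the next generation can be
pictured with a hedged sketch (hypothetical names and file format; the
real Cog project need not work this way): whatever parameters Cog-I
acquires through learning are saved and then loaded as the innate
starting point of Cog-II.

# Minimal sketch of the Lamarckian transfer Dennett describes (hypothetical
# names and made-up values; not the actual Cog mechanism).
import json

def save_acquired_design(learned_params, path):
    """Persist whatever Cog-I learned from its history of interactions."""
    with open(path, "w") as f:
        json.dump(learned_params, f)

def boot_next_generation(path):
    """Cog-II starts life with Cog-I's acquired design as innate endowment."""
    with open(path) as f:
        return json.load(f)

cog1_learned = {"eye_blink_threshold": 0.42, "reach_gain": 1.7}
save_acquired_design(cog1_learned, "cog1_design.json")
cog2_innate = boot_next_generation("cog1_design.json")
assert cog2_innate == cog1_learned   # acquired traits inherited in one step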

> DENNETT:
> So even in cases in which we have the best of reasons for thinking
> that human infants actually come innately equipped with
> pre-designed gear, we may choose to try to get Cog to learn the
> design in question, rather than be born with it. In some instances,
> this is laziness or opportunism--we don't really know what might
> work well, but maybe Cog can train itself up

Basto:
Isn't this the way it all started back in the day? After all, there
must have been a starting point for innate knowledge at some point in
human evolution.

> DENNETT:
> Notice first that what I have just described is a variety of
> Lamarckian inheritance that no organic lineage has been able to
> avail itself of. The acquired design innovations of Cog-I can be
> immediately transferred to Cog-II, a speed-up of evolution of
> tremendous, if incalculable, magnitude.

Basto:
Wouldn't this be evolving from positive instances of experience only? I
think evolutionary errors and evolutionary mistakes played a key role
in the development of consciousness, and I can't see how they can be
replicated if only innovations are transferred and inherited by
following generations. Also, there is no competition; what I think
would be a good exercise would be to let Cog-II coexist, cohabit and
interact with the previous Cog (Cog-I) and see where this would lead...

> DENNETT:
> And here we run into the fabled innate language organ or Language
> Acquisition Device made famous by Noam Chomsky. Is there going to
> be an attempt to build an innate LAD for our Cog? No. We are going
> to try to get Cog to build language the hard way, the way our
> ancestors must have done, over thousands of generations.

Basto:
Why compromise here on the tabula rasa while letting some of the other
faculties possess some innate knowledge? Why should language be
different from vision? What is the rationale for taking this approach
with language, when it differs from what they've been doing so far?
Honestly, I think they could "jump" over our language evolutionary
process the same way they skipped the evolutionary process of most of
the other components, and of the symbol grounding.

> DENNETT:
> There is a big wager being made: the parallelism made possible by
> this arrangement will be sufficient to provide real-time control of
> importantly humanoid activities occurring on a human time scale. If
> this proves to be too optimistic by as little as an order of
> magnitude, the whole project will be forlorn, for the motivating
> insight for the project is that by confronting and solving actual,
> real time problems of self-protection, hand-eye coordination, and
> interaction with other animate beings, Cog's artificers will
> discover the sufficient conditions for higher cognitive functions
> in general--and maybe even for a variety of consciousness that
> would satisfy the skeptics.

Basto:
The project is all the more interesting for being an attempt (from my
point of view) to put together a practical experiment combining forward
engineering and reverse engineering. Here we are told that
consciousness is definitely at stake. If Cog succeeds, there is reason
to believe that it is possible to extract some general requirements for
consciousness and perhaps extrapolate to our own conscious behavior.

> DENNETT:
> Anything in Cog that might be a candidate for symbolhood will
> automatically be "grounded" in Cog's real predicament, as surely as
> its counterpart in any child, so the issue doesn't arise, except as
> a practical problem for the Cog team, to be solved or not, as
> fortune dictates.

Basto:
I am not sure what Dennett means by this. And I am not sure whether
this means Cog will still have the possibility of grounding symbols on
its own. Otherwise, how does the Cog team expect to symbol-ground
everything that might stumble into Cog's way? Why not let Cog's
sensorimotor systems ground Cog's initial knowledge the same way as
with language?

> DENNETT:
> Finally, J.R. Lucas has raised the claim (at this meeting) that if
> a robot were really conscious, we would have to be prepared to
> believe it about its own internal states. I would like to close by
> pointing out that this is a rather likely reality in the case of
> Cog. Although equipped with an optimal suite of monitoring devices
> that will reveal the details of its inner workings to the observing
> team, Cog's own pronouncements could very well come to be a more
> trustworthy and informative source of information on what was
> really going on inside it.

Basto:
Certainly, given that Cog provides more information about itself than
its team can extract, we have a winner. The ability to interpret a PET
or MRI scan is NOT the same as experiencing what the subject
experiences. This could lead to the conclusion that Cog is engaging in
meta-reasoning and self-reference, both of which we invoke when we
think of consciousness. Besides, we are the best reference source to
talk about ourselves (why the need for psychiatrists, then...), so who
better to talk about Cog's inner processes than Cog itself?

Given my trust in the potential of science, I am optimistic about the
feasibility of the project. Besides, I still see intelligent behavior
as behavior that comes up with the right answer irrespective of how the
answer was arrived at. So it would not surprise me if Cog ends up being
"smart". Elephants are said to bury their dead in a sort of centralized
cemetery, in what could be described as some sort of funereal ritual.
They do it deliberately, so they seem to be aware of it. But how aware
are they? Unfortunately, whatever interpretation we come up with falls
into personification of the scene and relies mostly on the mind of the
beholder. One thing we cannot deny about elephants: they do succeed at
the very essential task of survival and the avoidance of whatever
causes them pain or damage. In fact, better than any other AI product
up till now, elephants succeed at those simple AI tasks that are part
of the definition of intelligent behavior (perceiving, reasoning,
learning, communicating and acting successfully in complex
environments). Maybe Cog can do as well as an elephant and less well
than a human, while still being considered conscious.


