Re: Chalmers: Computational Foundation

From: HARNAD Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Tue Mar 20 2001 - 18:56:02 GMT


On Wed, 28 Feb 2001, Hudson Joe wrote:

> Hudson:
> even if we knew exactly how the mind
> works (if that's possible), I think we would still have a problem building
> one in any way other than to create it in its infantile state and to let
> it grow and develop as we do.

About whether or not there is really something special about real-time
history, see:

http://www.cogsci.soton.ac.uk/~harnad/CM302/Granny/sld007.htm

> Hudson:
> How can you have a mind that is
> recognisable as human without a personality and a large collection of
> memories? And how can you get those without experiencing life as we know
> it?

A real-time history is probably the usual and the optimal way of
getting those things into a head, but if the current state could be
built in directly, without a real-time history, would that make the
system any different from what it would be if it had earned them the
honest way?

On the other hand, for T3 (or even T2) passing, there has to be a
forward-going history, starting from whenever the Testing starts. The
capacity for interacting in real time is part of T3.

> Hudson:
> These don't seem like implementation-independent characteristics. But
> even if they were, they are not the main issue.

It is not implementation-independence that is at issue with real-time
history. But symbol grounding might be part of what's at issue.

> Hudson:
> If what we mean by mind is
> self-awareness, i.e. 'someone home' or consciousness, then we are in a
> situation where no one (that I'm aware of) has the slightest foothold on
> how symbols are even relevant, let alone on how they can be used to
> bootstrap themselves onto consciousness.

Never mind "self"-awareness. Settle for any kind of awareness, e.g.,
what it feels like to be pinched (which even a worm could have). Yes,
that is what having a mind is. And although it is true no one has a
clue how symbol-processing could generate feelings, no one has a clue
how anything else (including brain activity) could do it either!
Welcome to the mind/body problem (which is always lurking behind AI
and CogSci).

> Hudson:
> Perhaps a computationally based intelligence could be the platform on which
> a mind could exist, but that is not the same as saying the intelligence
> encompasses the mind, or that we would know what was giving rise to the
> mind.

It's one thing for it to be true that a mind really does piggy-back on
computation. Maybe it does. But if so: how? Explaining that is the
mind/body problem (which I find it more useful to call the
"feeling/function problem"):

http://www.cogsci.soton.ac.uk/~harnad/Tp/bookrev.htm

> Hudson:
> how could we possibly verify that 'someone was home' in our
> creation? We can't even do that with each other. As for intelligence
> without a mind, I agree: computation is probably sufficient.

Welcome to the Other-Minds Problem:

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad91.otherminds.html

Stevan Harnad
