Re: Chalmers: Computational Foundation

From: Hudson Joe (jh798@ecs.soton.ac.uk)
Date: Wed Mar 21 2001 - 00:29:13 GMT


Some disagreements.

> Hudson:
> even if we knew exactly how the mind
> works (if that's possible) I think we would still have a problem building
> one in any way other than to create it in its infantile state and to let
> it grow and develop as we do.

>HARNAD
>About whether or not there is really something special about real-time
>history, see:

>http://www.cogsci.soton.ac.uk/~harnad/CM302/Granny/sld007.htm

>Hudson:
> How can you have a mind that is
> recognisable as human without a personality and a large collection of
> memories? And how can you get those without experiencing life as we know
> it?

>HARNAD
>A real-time history is probably the usual and the optimal way of
>getting those things into a head, but if the current state could be
>built in directly, without a real-time history, would that make the
>system any different from what it would be if it had earned them the
>honest way?

>On the other hand, for T3 (or even T2) passing, there has to be a
>forward going history, starting from whenever the Testing starts. The
>capacity for interacting in real time is part of T3.

Hudson
Sure, I agree that in 'principle', if you could somehow download in a flash
all the myriad experiences, remembered sensations and subtle personality
traits of a mind (and lose nothing in the process) into some mechanical
contraption capable of the same functionality as the real-time variant,
then of course both would (or could, if they chose) be indistinguishable
in their behaviour.

But then how on earth could we possibly make a machine with such a
phenomenal data assimilation capacity? (And would we be wise to do so?
Rapid extinction sound good to anybody?)

> Hudson:
> These don't seem like implementation-independent characteristics. But
> even if they were, they are not the main issue.

>HARNAD
>It is not implementation-independence that is at issue with real-time
>history. But symbol grounding might be part of what's at issue.

Hudson
Originally I was thinking of computation pretending to be a mind when I
wrote this. But then, when something runs in real time, doesn't this
place a certain performance requirement on the hardware as well as a
functional one?

> Hudson:
> If what we mean by mind is
> self awareness, i.e. 'someone home' or consciousness, then we are in a
> situation where no one (that I'm aware of) has the slightest foot-hold on
> how symbols are even relevant let alone on how they can be used to
> bootstrap themselves onto consciousness.

>HARNAD
>Never mind "self"-awareness. Settle for any kind of awareness, e.g.,
>what it feels like to be pinched (which even a worm could have). Yes,
>that is what having a mind is. And although it is true no one has a
>clue how symbol-processing could generate feelings, no one has a clue
>how anything else (including brain activity) could do it either!
>Welcome to the mind/body problem (which is always lurking behind AI
>and CogSci).

Hudson
Does a worm feel pain? I don't know. Let's suppose for a moment it could
be aware of pain or other sensations.
Who is being aware? The worm. Who is the worm? No one; it's just a worm.
Then what does a pinch mean to a worm? If we get pinched, the feeling is
always: "'I' am in pain." You could say the relevance of the sensation is
'grounded' in the sense of self. If there is no self, how is sensation or
feeling relevant, and who is it relevant to? What and who is doing the
feeling if there is no self-awareness? Who is the 'I' in "I am in pain"
for the worm?

I think without the anchor point of a sense of self (i.e. self-awareness),
awareness has no meaning. I think you need to start by being aware of
yourself before you can be aware of anything else.


