Re: MacLennan: Grounding Analogue Computers

From: HARNAD Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Sat Jun 09 2001 - 12:08:22 BST


On Sat, 9 Jun 2001, Hudson Joe wrote:

> > Hudson:
> > whatever analogue computation is, a digital computer could approximate
> > the physical behaviour of the implementation to the nth degree
>
> >HARNAD:
> >This just means that a digital computer could simulate any continuous
> >process to as close an approximation as we wish. But this is still
> >just simulation (Turing Equivalence, or even Strong Equivalence -- as
> >close as we like). But, for the same reason that simulated flying (no
> >matter how closely it approximates it) is not real flying, simulated
> >continuity is not real continuity.
>
> Hudson:
> Sorry to go through this yet again, but what is real flying?

The stuff that gets you to Singapore. Best not to lose sight of that,
wherever your thoughts about computation/cognition, analog/digital
might take you.

(Gets you, the real you, to Singapore, the real Singapore. No
tricks.)

> Hudson:
> If the
> distinction between real and simulated is just that for the real thing
> the implementation is critical, then we can recreate the same situation
> inside a VR booth.

No, the difference isn't just implementation-independence.
"Implementation-DEpendence" is simply one of the properties of a
dynamical (analog) system.

VR's no good because it doesn't get your (real) body to (real)
Singapore; it merely fools your (real) senses into feeling as if
you've gone to Singapore.

> Hudson:
> Now we are inside the fully immersive (caters for
> all the body's senses) VR system. We 'see' a plane flying over our head
> and at the same time a computer on a desk by a VR booth in front of us.
> Every atom of the computer is fully modelled by the VR simulation. The
> computer is running a VR simulation of a plane flying.
>
> At this point we hastily step outside the VR booth. Now both the VR
> simulation and its simulation of another VR simulation merge into the
> same domain of implementation independent computation, as I'm sure you
> would agree.

Kid-sib doesn't understand all these words. I put on the sensors, I
see a plane flying. I take them off, I see a computer and VR
equipment. What's the point? It's obvious what's really going on: A
computer programme is driving the VR to my senses. It's completely
irrelevant how well it encodes all the properties of a plane
symbolically, and then transduces them into inputs for my senses.
What there is is a computer, a programme it's running
("implementation-independently"), VR peripherals, and me (and my
senses).
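
If it helps kid-sib keep the bookkeeping straight, here is a toy sketch
(Python; every name and number in it is made up purely for illustration)
of those ingredients and nothing more: a programme shuffling numbers
that are systematically interpretable as a plane, plus a last step where
the numbers get transduced into real stimulation at your (real) senses:

    # Toy sketch: the "plane" in the booth is just a symbolic state update
    # (implementation-independent squiggles); only the final step, driving
    # the peripherals, touches real physics.

    def plane_model(state, dt):
        """Pure symbol manipulation, interpretable as altitude and speed."""
        altitude, speed = state
        return (altitude + speed * dt, speed)

    def drive_peripherals(state):
        """Stand-in for the VR hardware: where the numbers finally become
        actual light and sound at your senses."""
        print("rendering: altitude = %.1f m" % state[0])

    state = (0.0, 50.0)   # numbers we choose to read as metres and m/s
    for _ in range(3):
        state = plane_model(state, dt=0.1)
        drive_peripherals(state)

However finely plane_model is refined, it stays on the squiggle side of
the divide; only drive_peripherals (and your sense organs) are doing
anything implementation-DEpendent.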

> Hudson:
> However if we step back inside the VR booth, from this
> perspective (and that surely is all we have, just a distorted
> perspective) the plane we see above our heads is not implementation
> independent.

Kid-sib:
Nothing of the sort. We don't have a "perspective."
We have a computer, a programme... etc.

What on earth is a "perspective" supposed to be, in this context?
The only perspective I know of is (1) the technical stuff of artists
and architects, and (2) the way things look to me, via my
eye-balls.

> Hudson:
> If according to the (hidden) aerodynamics model the wings
> are the wrong shape then the plane won't fly. Sure we could say that it's
> all really running on a computer and that computer could be any shape
> or size so long as it was Turing equivalent, but then the computer is
> nowhere to be seen, as we are in the VR sim, i.e. there is no evidence
> that we could possibly gather from within the VR sim that the plane we
> see flying above our head is anything but implementation DEPENDENT, or
> for that matter a flower we might touch and smell isn't.

Kid-sib:
This "implementation dependence/independence" thing seems to
have become some sort of talisman here. I thought all it meant was
that it's the programme that matters, not what computer you run it
on. Whereas with a plane, it's being able to fly that matters, and
that DOES depend on the device you try to do it with.

And like I said before, we're not talking about tricking human senses
(VR), but about what's really going on.

(Didn't the course lecturer once warn us not to mix up "epistemic"
questions [about what you can KNOW, with your senses, for example]
with "ontic" questions, about what's really going on? I mean, I hate
fancy words, but it sounds like that's what's going wrong in your VR
argument: You're mixing up what's really going on there (computer,
etc.) with what your senses can tell you under those conditions.)

> Hudson:
> Now (still in the VR booth) our attention is drawn to the computer on
> the desk. We see a plane flying across the screen. Obviously just an
> implementation independent simulation. We decide against our better
> judgement to step inside this VR booth. Now by the same token what was
> implementation independent becomes implementation dependent. We can
> imagine an infinite regress.

Kid-sib:
Ya lost me...

> Hudson:
> Could all this be countered by pointing out that if your heart stops in
> the real world then all those nested simulations vanish and only the
> real world remains, hence all they ever were were computer simulations?
> Not completely.

Kid-sib:
How did my heart get into this? And what has it to do with what's
really going on there (which is obvious), as opposed to what it looks
like, because of the VR?

> Hudson:
> Whether or not 'we' are causal systems it appears we do need a brain to
> function in the real world.

Kid-sib:
Was there something in this course that would have made you think
otherwise? Like that we were immaterial ghosts or something? (I'm not
even sure it makes sense to say we're "using" our brains: Rather, we
ARE our brains, or some functional parts of them...)

> Hudson:
> I think it can be accepted that our brains
> are causal systems and hence their functionality can be recreated in
> simulation.

Kid-sib:
Our brains (like everything else, as far as I can tell) are "causal
systems" alright. And they can be simulated (i.e., a symbol system
can be designed whose symbol manipulations can be systematically
interpreted as corresponding to properties of the brain). That's the
"Church/Turing Thesis" (physical version, CTTP), right?

> Hudson:
> But could the actual 'physical' implementation of our
> brains matter somehow? Well if we say it is only our brain which
> governs our behaviour and we imagine a body with a brain (TimII) in our
> VR sim then it should act in exactly the same sort of way we do. So
> from a functionalist point of view we are implementation independent.

Kid-sib:
You lost me again. You can simulate my liver too. So does that make
my liver, or liver-function, "implementation-independent"? In fact,
since (just about) everything can be simulated (CTTP), are you saying
that that means everything is implementation-independent? What on
earth does that mean?

Seems to me TimII (why Tim?) is a bunch of squiggles and squoggles in a
computer running a programme that is systematically interpretable as
Tim, so can even predict what he's gonna do. So? Could do the same
with a plane flying to Singapore (but it won't get you to
Singapore! You need the right physics [implementation] for that, not
just squiggles and squoggles).

> Hudson:
> This leads to the observation that if TimII also enters the second VR
> booth and his heart, which was Turing indistinguishable to us from our
> own in the first VR sim, stops then he will cease to function in the
> second VR sim too. All this means is that TimII's existence will halt
> in all VR sim nested levels up to the one where he was first defined
> and physically modelled. For a functionalist there should be no
> essential difference between 'physically defined' and 'physically
> situated'. This objection only illustrates the significance of the
> 'original VR sim nest level', it does not in essence distinguish
> between reality and simulation.

Kid-sib:
I don't know about "functionalists," but to ME it certainly seems you
are just talking about squiggles here...

> Hudson:
> 1/ Implementation independence/dependence (II, ID) depends on your
> plane of awareness, i.e. for TimII the airplane he sees is ID but for
> us (outside all the VR booths) it is II simulation. Who is to say we
> are not in the same position as TimII?

Kid-sib:
Sounds like the magic work that "perspective" was doing for you
before is now being done for you by "plane of awareness" and
"position" -- and you're still treating implementation
independence/dependence like some sort of mystic thing. Whereas
what's really going on is so plain (if you're not being tricked by
VR): computers, squiggling...

> Hudson:
> 2/ We have no basis to say that our plane of awareness is somehow the
> original and 'true' physical existence.

Kid-sib:
??? Sounds vaguely Buddhist...

> Hudson:
> 3/ When we say 'real flying' we should be aware of 1/ and 2/.

Kid-sib:
Sounds to me like, after you get your degree and a job, if they ever
tell you you need to fly to Singapore for some reason, you'll be
plenty "aware" of the difference between trying to get there from
Heathrow vs. from a VR booth...

> > HARNAD:
> > But never mind continuity. Perhaps it is more instructive to think of
> > every dynamical physical system, even an airplane, as an "analog device"
> > of some sort (not necessarily a "computer"). That gives us a better
> > sense of the gap between the analog and the symbolic (better than
> > "digital," which focusses too much on just the continuous/discrete
> > distinction).
>
> Hudson:
> Again symbolic systems are disallowed from being analogue. But why?
> Did my example (Tom the AI) not show how a symbolic system is best
> viewed as continuous? I say 'viewed as continuous' and not 'is
> continuous' because any physically implemented system (yes even an
> airplane) can be viewed as discrete at one scale and continuous at
> another. Something that is considered analogue (in this sense) just
> means 'best viewed as' continuous.

Kid-sib:
"Interpretable as" or "viewed as" are both "epistemic". I'm
interested in what analog systems ARE ("ontic"), not what they can be
"viewed as" (whether by my mind, intepreting a computer's squiggles, or
by my senses, through VR driven by a computer's squiggles).

Remember the symbol grounding problem? (Nontrivial) Symbol Systems
have the remarkable property of being systematically INTERPRETABLE as
something else (a plane, a brain), but that interpretation is just in
your head, not in the system. The system is actually a computer
squiggling, not a plane, flying... Same for any analog system: it is
not the same as its symbolic simulation, no matter at what grain of
detail the simulation is systematically interpretable as
corresponding to it, property for property.
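
To make "the interpretation is just in your head" concrete, here is a
toy sketch (Python; the labels are invented for illustration): one and
the same symbol-manipulating loop, with two different but equally
systematic readings of its symbols bolted on from outside. Nothing
inside the loop settles which reading is "the" one.

    # One symbol system, two external interpretations. The loop just doubles
    # a number; the "meanings" live entirely in the tables below, i.e. in us.

    def symbol_system(x, steps):
        history = []
        for _ in range(steps):
            x = 2 * x                  # all the system actually does
            history.append(x)
        return history

    readings = {
        "climbing plane":    lambda x: "altitude %d feet" % x,
        "dividing bacteria": lambda x: "population %d cells" % x,
    }

    trace = symbol_system(1, 5)
    for label, interpret in readings.items():
        print(label, "->", [interpret(x) for x in trace])

Both readings fit the squiggles, step for step; but neither one is
flying, and neither one is dividing. That is all "systematically
interpretable" buys you.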

> > Hudson:
> > First off how can a state be continuous? State implies something which
> > is bounded, so perhaps a sine wave at frequency F1 could be used to
> > represent state S1. But then only the representation of the state (the
> > sine wave) would be continuous, the state S1 itself would be static and
> > discrete. Perhaps MacLennan meant 'the representation of states' rather
> > than, "representational states".
>
> >HARNAD:
> > No, he just meant states in the usual physical sense, described by
> > differential equations, hence continuous.
>
> Hudson:
> I see your point. I do think the flexible use of 'state' causes
> problems though. When relating to differential equations in time the
> 'continuous state' when viewed at one time will appear different when
> viewed at another time, hence it could not be recognised as the same
> state. It gets even more confusing when people say mental states are
> computational states, in my opinion.

Kid-sib:
You are putting too much weight on "view" (which is merely
epistemic): Focus on what things ARE, not just what they LOOK LIKE.
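
For what it's worth, the continuity point fits in one short sketch
(Python; the equation dx/dt = -x is just a stand-in for any continuous
dynamics): halve the step size as often as you like and the discrete
trajectory creeps toward the continuous solution x(t) = e^(-t), but what
the computer produces is always a finite list of discrete values --
continuity approximated as closely as we wish, not continuity.

    import math

    # Euler-step dx/dt = -x from x(0) = 1 up to t = 1, with shrinking steps.
    # The error can be made as small as we like, but every run is still a
    # finite sequence of discrete states.

    def euler(dt):
        x, t = 1.0, 0.0
        while t < 1.0 - 1e-12:
            x += dt * (-x)
            t += dt
        return x

    exact = math.exp(-1.0)        # the continuous state at t = 1
    for dt in (0.1, 0.01, 0.001):
        approx = euler(dt)
        print("dt=%g  x=%.6f  error=%.6f" % (dt, approx, abs(approx - exact)))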

> > Hudson:
> > Secondly why is transduction a "central issue in symbol grounding"? So
> > long as the method of energy conversion from sound pressure, light,
> > mechanical resistance, etc. to electrical signals provides enough
> > information for the computational bits to function properly why worry
> > about it? Isn't the symbol grounding problem more, 'how do we use the
> > transduced electrical signals to terminate the hierarchical symbolic
> > definition chain?' (But then even if symbols were optimally grounded
> > I've no idea how this would conjure up meaning in the system.) Did
> > Harnad really say that transduction was the central issue?
>
> > HARNAD:
> > Yes he did (if I do say so myself): because sensorimotor transactions
> > with the objects symbols refer to are the only ones that CANNOT be
> > symbolic. And chances are, they are part (literally, physically part)
> > of whatever physical states mental states turn out to be. Hence
> > mental states, unlike computational states, will not be
> > implementation-independent.
> >
> > So transduction is the most important non-symbolic process, but it's
> > unlikely to be the only one (as neuropharmacology is showing us).
>
> Hudson:
> "whatever physical states mental states turn out to be"? Brain states
> might be physical but what indication is there that mental (feeling)
> states are? Even if there is a 1:1 correlation between the two, how do
> we know which causes which? This is like saying 'my hand is feeling'
> when surely it is 'I am feeling'.

Now there you really have put your finger on a problem (traditionally
called the "mind/body" problem, but better called the
"feeling/function" problem). And that one beats the heck out of me...

http://www.cogsci.soton.ac.uk/~harnad/Tp/bookrev.htm

> Hudson:
> Regarding the idea of a complete world simulation where we could in
> principle fast-forward to see ourselves, which you spoke of on a few
> occasions. In this simulation would be a simulation of this simulation
> (and so on), would it not? Doesn't this lead to an infinite performance
> capacity? (e.g. imagine placing the simulation virtual viewing camera
> at the viewing screen of the simulated simulation). In which case it is
> not physically realisable.

So what if it leads to an infinite regress? So does a mirror facing a
mirror...

> Hudson:
> By the way, how was the Dennett talk?

See:

http://www.cogsci.soton.ac.uk/~harnad/Tp/dennett-chalmers.htm

Stevan Harnad


