Re: Lucas: Minds, Machines and Goedel

From: Clark Graham (ggc198@ecs.soton.ac.uk)
Date: Sat Apr 07 2001 - 13:16:01 BST


> Mo:
> Sometimes when making a choice, you do not want it to be completely
> random. When humans need to choose, they assign weights to all the
> options. The more correct facts you know and believe in that you
> can associate with an option, the more likely that option is to be
> chosen.
> You implicitly assign probabilities of choice to each option. The
> randomness should come in when all the choices have equal
> probabilities.

Clark:
Some people think that there is no such thing as randomness, and I
think that I share this view. "Just because something is indeterminate
(...it cannot be known), we must not conclude that it is undetermined
(has no prior cause)" (Steve Grand, Creation: Life And How To Make It,
p.206). Just as Lucas was wrong to apply a randomising device to make
decisions (instead of basing them upon some kind of prior knowledge),
I think it would also be wrong to assign randomness at any stage of
decision-making. Even though we may think we are making a random
decision, it is a result of something, perhaps external circumstances
or the present internal "wiring" of our brains.

In the case of it being due to external (i.e. outside the body and
beyond our control) circumstances, we don't have to worry, as these
would affect an artificial life form, too, if it were grounded in the
real world. In the case of the "random" decision being due to internal
circumstances, we would hope this would be an emergent property of an
artificial mind. In other words, if we built a T3-passing machine, the
internal "layout" of its mind would enable one decision to be taken
instead of another if "all the choices [had] equal probabilities".
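
To make the two positions concrete, here is a minimal sketch in
Python (the option names, weights, and "internal state" are invented
for illustration; none of this comes from Lucas or Grand). It
implements Mo's weighted choice, with ties between equally weighted
options settled either randomly, as Mo suggests, or by some fixed
internal state, as I am suggesting a deterministic mind would:

    import random

    def choose(options, weights, internal_state=0, random_tiebreak=True):
        # Mo's scheme: the best-supported option wins outright...
        best = max(weights)
        tied = [o for o, w in zip(options, weights) if w == best]
        if len(tied) == 1:
            return tied[0]
        # ...and randomness only enters when options are equally weighted.
        if random_tiebreak:
            return random.choice(tied)
        # Deterministic alternative: the system's present "wiring"
        # (here just a toy integer) settles the tie the same way each time.
        return tied[internal_state % len(tied)]

    options = ["left", "right", "straight on"]
    weights = [0.4, 0.4, 0.2]            # two options equally supported
    print(choose(options, weights))                      # random tie-break
    print(choose(options, weights, internal_state=7,
                 random_tiebreak=False))                 # always the same pick

The point of the deterministic branch is that, seen from outside, its
choices could look just as "random" as the other branch's, while still
having a prior cause.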

> Mo:
> A causal system can be physical or probabilistic, but an important
> property of these systems is that we understand their mechanisms.
> In the case of the brain, we do not understand its mechanism, and
> therefore we cannot declare it a causal system.

Clark:
I'm not sure what systems there are that do not operate by cause and
effect. In the case of "random" systems, there may either be something
we do not yet understand causing a change in the system, or else
something at some sub-atomic level that we can't see or can't explain.
If every system does adhere to cause and effect, it seems logical to
expect the brain to be one, too (this is not to say that the brain
MUST be a causal system, just that there is a high probability of it
being one).

I'm also not sure that you have to withhold judgment on a system until
you know everything about how it works. I have no idea how half the
components in a jet plane work, but I feel justified in calling it a
causal system, because I can see there is some relation between the
workings of the jets / turbines / whatever, the shape of the wings,
and flight.

> Mo:
> Where would this intelligence performance have a boundary? What
> components of a human brain would a mind-modeling machine be
> required to implement to be assigned "Turing-indistinguishable" and
> intelligent?

Clark:
I think that the question about a performance boundary could only be
answered after a T3-passing system had been built. You could then take
pieces off to find a boundary, and this would undoubtedly teach us a
lot about our own intelligence. This system would also give the answer
to the second question (about Turing-indistinguishability). As this is
the only way the questions can be answered, except perhaps by blind
trial and error, it seems that the best way to proceed would be to build a
system that implemented all the components of a human mind, and only
then start taking bits off it to find out more about how we work.
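
As a rough sketch of that "taking bits off" procedure (again in
Python, with hypothetical component names and a placeholder pass/fail
test, since nobody yet knows how a real T3 system would decompose),
an ablation loop might look like this:

    # Toy ablation experiment: remove one component at a time from a
    # working system and retest. The component names and the test are
    # placeholders, not claims about a real T3-passing system.
    FULL_SYSTEM = frozenset({"vision", "motor control", "language",
                             "episodic memory", "emotion"})

    def passes_t3(components):
        # Placeholder: pretend the system fails once language is removed.
        return "language" in components

    def essential_components(system):
        essential = set()
        for part in system:
            if not passes_t3(system - {part}):   # take one piece off
                essential.add(part)              # system broke without it
        return essential

    print(essential_components(FULL_SYSTEM))     # -> {'language'} here

Each component whose removal breaks the test marks part of the
performance boundary; everything else could, on this toy picture, be
stripped away without loss.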

Graham.


