Lucas, J. (1961) Minds, Machines and Goedel

From: Grady, James (jrg197@ecs.soton.ac.uk)
Date: Mon Feb 21 2000 - 13:05:39 GMT


In this article LUCAS proposes that Goedel's theorem disproves
Mechanism (the thesis that minds can be explained as machines).

> LUCAS:
> "This formula is unprovable-in-the-system" would be false:
> equally, if it were provable-in-the-system, then it would
> not be false, but would be true, since in any consistent
> system nothing false can be proved-in-the-system, but only
> truths.

LUCAS explains Goedel's claim that in any consistent system
(strong enough to express arithmetic) there will always be
unprovable statements which we can see with our human minds to
be true.
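
Sketched in standard modern notation (my rendering, not
Lucas's): the diagonal lemma supplies, for any such system T, a
sentence G that asserts its own unprovability.

    % Prov_T is T's arithmetised provability predicate; the
    % corner quotes denote the Goedel number of G.
    G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
    % If T is consistent it cannot prove G, so what G asserts is
    % true; and we, standing outside T, can see that it is.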

LUCAS here pins his whole argument on the prophecy that the
Goedelising step will never be mechanised. What if this
assumption proves to be false? Given mathematics'
incompleteness, it must have been conceivable to him that one
day a Goedel algorithm would be born.

> LUCAS:
> It follows that no machine can be a complete or adequate
> model of the mind, that minds are essentially different
> from machines.

For example, say to yourself "I can't understand what I am
saying." Seeing as a machine is unable to lie in this way, it
can't be an adequate model of the mind. However, it seems to me
that one machine could resolve such a statement about another.
Would it be possible for two machines in parallel to Goedel
each other? And could this be a simplified explanation of the
mind's Goedel algorithm?
 
> LUCAS: explaining the consequence of a mechanism
> Our idea of a machine is just this, that its behaviour is
> completely determined by the way it is made and the
> incoming "stimuli": there is no possibility of its acting
> on its own: given a certain form of construction and a
> certain input of information, then it must act in a
> certain specific way.
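
His notion of a machine here is just a deterministic transition
system: the same construction plus the same stimuli must give
the same behaviour, with no room for acting "on its own". A
minimal sketch (the states, stimuli and names are made up
purely for illustration):

    # A machine in Lucas's sense: behaviour fully determined by
    # its construction (the transition table) and the incoming
    # "stimuli".
    def run_machine(transitions, state, stimuli):
        outputs = []
        for stimulus in stimuli:
            state, output = transitions[(state, stimulus)]
            outputs.append(output)
        return outputs

    # Hypothetical two-state construction, for illustration only.
    transitions = {
        ("idle", "ping"): ("busy", "ack"),
        ("busy", "ping"): ("idle", "nack"),
    }
    # Rerunning with the same construction and the same input
    # repeats the same behaviour exactly.
    print(run_machine(transitions, "idle", ["ping", "ping"]))  # ['ack', 'nack']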

Explaining the mind as a cybernetical machine, and claiming we
could calculate how a person is going to react in a certain
situation, doesn't seem complete. As evolved creatures we have
imperfections and are disposed to make mistakes; how could you
compensate for this in the calculation?

Lucas then considers the mechanistic mind, imagining the
addition of a random action selector which may choose any
action whose consequences are not inconsistent. However, he
says that, given that...

> LUCAS:
> Machines are definite: anything which was indefinite or
> infinite we [258] should not count as a machine.

And so he has to conclude,

> LUCAS:
> We can (or shall be able to one day) build machines
> capable of reproducing bits of mind-like behaviour, and
> indeed of outdoing the performances of human minds: but
> however good the machine is, and however much better (116)
> it can do in nearly all respects than a human mind can,
> it always has this one weakness, this one thing which it
> cannot do, whereas a mind can. A machine is always going
> to be an incomplete model of a mind.

What if you allowed a machine to perform a computation which
would lead to inconsistency, provided the result were flagged
up as an untruth? The machine would then have 'lied' but
remained consistent, aware of and compensating for its
falsehood.

> LUCAS:
> The Goedelian formula is the Achilles' heel of the
> cybernetical machine. And therefore we cannot hope ever to
> produce a machine that will be able to do all that a mind
> can do: we can never, not even in principle, have a
> mechanical model of the mind.

How relevant is the incompleteness? In the same way that a
female mind could never be a complete model of a male mind,
does an artificial mind have to be a complete model of a human
mind?

What if we were able to simulate a human mind on a computer?
Just because it seems to be a human mind doesn't necessarily
mean it is. <Turing> What if the differences are simply a
little more complex than we can grasp? Suppose we created a
replica of a bird's egg so identical to the original that it
was impossible to tell them apart. Either the egg would merely
be so similar that no human could tell them apart, or it would
have to be so molecularly and spiritually (for want of a better
word) identical that, given time, it would hatch. Despite
initial confusion, the destinies of the two alternatives would
come to pass, leading one to disposal and the other to the sky.
Suppose, then, that there is more to the human mind than we can
currently see. Even if we were to create a mechanical mind so
similar to a human's that we could not tell the difference,
would it have the same glorious destiny we have as humans: to
live and love? I think not, and can't imagine so. Essentially,
isn't it arrogant or naive to assume that simulation of life is
actually creation?

> LUCAS:
> We can use the same analogy also against those who,
> finding a formula their first machine cannot produce as
> being true, concede that that machine is indeed inadequate,
> but thereupon seek to construct a second, more adequate,
> machine, in which the formula can be produced as being true.
> This they can indeed do: but then the second machine will
> have a Goedelian formula all of its own...
> However complicated a machine we construct, it will, if it
> is a machine, correspond to a formal system, which in turn
> will be liable to the Goedel procedure [260] for finding a
> formula unprovable-in-that-system...
> We are trying to produce a model of the mind which is
> mechanical---which is essentially "dead"---but the mind,
> being in fact "alive", can always go one better than any
> formal, ossified, dead, system can. Thanks to Goedel's
> theorem, the mind always has the last word.

If in fact we could mend the Goedel sentence in a system
'recursively', would the mind always be sufficiently
intelligent to grasp each new Goedel sentence? Should the human
ever fail, would we then have an artificial mind sufficiently
complete to be an indistinguishable approximation to the human
mind?
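
The regress Lucas relies on can be caricatured in a few lines:
adjoin the system's Goedel sentence as a new axiom, and a new
Goedel sentence appears for the strengthened system. Note that
godel_sentence below is only a stand-in label; actually
constructing it would need the whole arithmetisation.

    # Toy sketch of the 'mend it recursively' regress: every
    # repair yields a new system with a Goedel sentence of its own.
    def godel_sentence(axioms):
        # Stand-in: the real construction uses the diagonal lemma.
        return "G(" + str(abs(hash(frozenset(axioms)))) + ")"

    axioms = frozenset({"PA"})           # some base system
    for step in range(3):
        g = godel_sentence(axioms)       # true but unprovable here
        axioms = axioms | {g}            # mend the system by adding g
        print("step", step, "added", g)  # the new system has its own G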

Lucas goes on to look at adding a Goedeling operator to a
machine, but concludes that...

> LUCAS:
> The [261] mechanical model must be, in some sense, finite
> and definite: and then the mind can always go one better.

The mind, it seems, will always have the last word, as the
machine is limited by what is definite: any definite machine is
vulnerable to being out-Goedeled. However, he goes on to say
that one difference is enough to show that the two are not the
same.

> LUCAS:
> In some respect machines are undoubtedly superior to human
> minds; and the question on which they are stumped is
> admittedly, a rather niggling, even (118) trivial,
> question. But it is enough, enough to show that the machine
> is not the same as a mind.

Other notable differences might be miscalculation, guessing (a
uniquely human version of random choice) and imperfection.

> LUCAS:
> "there is no question of triumphing simultaneously over all
> machines", yet this is irrelevant. What is at issue is not
> the unequal contest between one mind and all machines, but
> whether there could be any, single, machine that could do
> all a mind can do. For the mechanist thesis to hold water,
> it must be possible, in principle, to produce a model, a
> single model, which can do everything the mind can do.

Lucas clearly isn't yet aware of distributed computing (just an
observation). It is an interesting theory, but it seems flawed:
human minds are clearly not entirely consistent systems. Surely
someone who could be described as 'dappy' in nature glows with
inconsistency. Lucas goes on to deal with this.

> LUCAS:
> Deeper objections can still be made. Goedel's theorem
> applies to deductive systems, and human beings are not
> confined to making only deductive inferences. Human minds
> are not purely deductive. Goedel's theorem applies only to
> consistent systems, and one may have doubts about how far
> it is permissible to assume that human beings are
> consistent. Hartley Rogers makes the specific suggestion
> that the {51} machine should be programmed to entertain
> various propositions which had not been proved or disproved,
> and on occasion to add them to its list of axioms.

However, a machine...

> LUCAS:
> cannot accept all unprovable formulae, and add them to
> its axioms, or it will find itself accepting both the
> Goedelian formula and its negation, and so be inconsistent...
> A machine which was liable to infelicities of that kind
> would be no model for the human mind.
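
The danger can be made concrete in a toy, purely hypothetical
sketch: a Rogers-style machine that adopts undecided
propositions wholesale will sooner or later adopt a formula
whose negation it already holds.

    # Naive axiom-adopting machine: without a consistency check
    # it can end up holding both a Goedelian formula and its
    # negation.
    def adopt(axioms, candidate):
        negation = candidate[4:] if candidate.startswith("not ") else "not " + candidate
        if negation in axioms:
            raise ValueError("inconsistent: holds " + candidate + " and " + negation)
        axioms.add(candidate)

    axioms = set()
    adopt(axioms, "G")      # entertained, not yet proved or disproved
    adopt(axioms, "not G")  # raises: the machine is now inconsistent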

It seems it would be far more productive to try to recreate the
essence of humanity outside the bounds of a formal system. Our
minds are not restricted by formal methods, so breaking out is
important. However, an arbitrary machine capable of shameless
contradictions and inconsistencies, such as the one proposed
here, would not be a good model of the mind.

> LUCAS:
> To be able to say categorically that the Goedelian formula
> is unprovable-in- the-system, and therefore true, we must
> not only be dealing with a consistent system, but be able to
> say that it is consistent. And, as Goedel showed in his second
> theorem---a corollary of his first---it is impossible to prove
> in a consistent system that that system is consistent.

Any (in)consistency judgments we make will always be
vulnerable, as our own consistency (that of our mathematics and
of ourselves) is impossible to prove.
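
In standard notation (my rendering, not from the text): writing
Con(T) for the arithmetised claim that T proves no
contradiction,

    % Con(T) abbreviates the sentence "T does not prove 0 = 1".
    \mathrm{Con}(T) \;\equiv\; \neg\,\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner)
    % Second incompleteness theorem: if T is consistent (and
    % strong enough to arithmetise its own proofs), then T does
    % not prove Con(T).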

On the subject of 'to what extent are we inconsistent'...

> LUCAS:
> A man's untutored reaction if his consistency is questioned
> is to affirm it vehemently: ...
> ---are not men inconsistent too? Certainly women are, and
> politicians; and {53} even male non-politicians (121)
> contradict themselves sometimes, and a single inconsistency
> is enough to make a system inconsistent. Our inconsistencies
> are mistakes rather than set policies. They correspond to the
> occasional malfunctioning of a machine, not its normal scheme
> of operations. Witness to this that we eschew inconsistencies
> when we recognise them for what they are. If we really were
> inconsistent machines, we should remain content with our
> inconsistencies, and would happily affirm both halves of a
> contradiction. When a person is prepared to say anything,
> and is prepared to contradict himself without any qualm or
> repugnance, then he is adjudged to have "lost his mind".
> Human beings, although not perfectly consistent, are not so
> much inconsistent as fallible.

Lucas proposes that our discrimination against inconsistency
suggests we are not so much inconsistent as fallible. However,
this still binds us to a compulsion of choice, which somehow
fails to account for our apparent freedom to make both good and
bad decisions.

> LUCAS:
> A machine with a rather recherché inconsistency

Such an inconsistency would be a kind of get-out clause, a
magic wand a child could wave at his parents if he didn't like
what they were saying.

> LUCAS:
> There are all sorts of ways in which undesirable proofs might
> be obviated. We try out axioms and rules of inference
> provisionally---true: but we do not keep them, once they are
> found to lead to contradictions. We may seek to replace them
> with others...

> A person, or a machine, which did this without being able to
> give a good reason for so doing, would be accounted
> arbitrary and irrational.

The idea of curtailing inconsistency reminds me of King Canute
and the tide.

> LUCAS:
> A machine can be made in a manner of speaking to "consider"
> its own performance, but it cannot take this "into account"
> without thereby becoming a different machine, namely the
> old machine with a "new part" added. But it is inherent in
> our idea of a conscious mind that it can reflect upon itself
> and criticise its own performances, and no extra part is
> required to do this: it is already complete, and has no
> Achilles' heel.

LUCAS, trying to surpass the barrier of self-reference in a
machine, suggests a kind of division of the machine, perhaps
parallelisation, but dismisses it as he wishes to maintain the
idea of unity within the mind. Yet who hasn't at some point had
a conversation with oneself? Furthermore, current models of the
brain see it as highly parallel. Maybe this insistence on unity
is constraining LUCAS' model.

> LUCAS:
> When we increase the complexity of our machines there may,
> perhaps, be surprises in store for us.

He draws a parallel with a fission pile:

> Below a certain "critical" size, nothing much happens: but
> above the critical size, the sparks begin to fly.

Fission-pile-style critical complexity might give a new
dimensionality to machines: the functionality of the machine
would somehow be more than the aggregate of its parts. Did
someone mention ANNs?

> LUCAS:
> If the mechanist produces a machine which is so complicated
> that this ceases to hold good of it, then it is no longer a
> machine for the purposes of our discussion, no matter how it
> was constructed. We should say, rather, that he had created
> a mind, in the same sort of sense as we procreate people at
> present.

Lucas does seem to jump the gun here. OK, we have some kind of
super-machine, but LUCAS said earlier that it could be an
adequate simulation of a mind only if it could do everything a
mind can do. LUCAS has no real idea of what this super-machine
could or couldn't do, so it seems a little premature to suggest
it could be some kind of procreated mind.

> LUCAS:
> If we were to be scientific, it seemed that we must look on
> human beings as (127) determined automata, and not as
> autonomous moral agents; if we were to be moral, it seemed
> that we must deny science its due, set an arbitrary limit to
> its progress in understanding human neurophysiology, and
> take refuge in obscurantist mysticism. Not even Kant could
> resolve the tension between the two standpoints.

It seems quite exciting that we might be able to marry up the
hows and whys of humanity. There is obviously some truth here.
The idea of critical complexity may well hold water; however,
it too seems a little abstract. During the Enlightenment man
sought to understand the gaps he had once filled with God; is
science here guilty of the same sin?

Grady, James cm302 <jrg197@ecs.soton.ac.uk>


