Harnad (2) on Computation and Cognition

From: Terry, Mark (mat297@ecs.soton.ac.uk)
Date: Mon Mar 27 2000 - 16:48:13 BST


COMPUTATION IS JUST INTERPRETABLE SYMBOL MANIPULATION; COGNITION ISN'T
HARNAD, Stevan
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad94.computation.cognition.html

> HARNAD:
> not everything is
> a computer, because not everything can be given a systematic interpretation;
> ... mental states will not just be the
> implementations of the right symbol systems, because of the symbol grounding
> problem: The interpretation of a symbol system is not intrinsic to the
> system; it is projected onto it by the interpreter. This is not true of our
> thoughts.

There may be some argument here along the lines of "we just interpret
our thoughts from some internal symbol system, and project a meaning
onto them".
This extra layer of abstraction doesn't actually matter though, as even
if we give meaning to internal squiggles and squoggles, the
interpretation is still intrinsic to the system (our brains).

> HARNAD:
> We must accordingly be more than just computers. My guess is that
> the meanings of our symbols are grounded in the substrate of our robotic
> capacity to interact with that real world of objects, events and states of
> affairs that our symbols are systematically interpretable as being about.

And computers must therefore be less than us. It is interesting that
Harnad supposes that interaction is key. Defining the level at which this
interaction must occur seems an important problem: i.e., is being
told what a donkey looks like enough, or do we have to see a donkey, or
do we have to see a donkey in the correct context, before we can
correctly identify another donkey?

> HARNAD:
> Let me declare right away that I subscribe to
> what has come to be called the Church/Turing Thesis (CTT) (Church 1956),
> which is based on the converging evidence that all independent attempts to
> formalise what mathematicians mean by a "computation" or an "effective
> procedure," even when they have looked different on the surface, have turned
> out to be equivalent (Galton 1990).

So do I, if only for the reason that no one has been able to disprove
it.
This is just to remind us that if we accept this, we know the limits of
computation, and can't make brash claims about what computers "may be
able to do". I'll assume we are all familiar with the Turing machine's
operation.
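(For reference, here is a minimal sketch of such a machine in Python. The
rule table below, which increments a binary number written with its least
significant bit first, is my own toy example, not anything from Harnad's
paper.)

    # A Turing machine is just a table mapping (state, symbol read) to
    # (next state, symbol to write, head movement), applied until a halt
    # state is reached. This toy table adds 1 to a binary numeral whose
    # least significant bit comes first, e.g. "110" (= 3) -> "001" (= 4).
    def run(tape, rules, state="carry", halt="done"):
        cells, head = dict(enumerate(tape)), 0
        while state != halt:
            symbol = cells.get(head, " ")            # blank cells read as " "
            state, write, move = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip()

    rules = {
        ("carry", "1"): ("carry", "0", "R"),   # 1 plus carry: write 0, keep carrying
        ("carry", "0"): ("done",  "1", "R"),   # 0 plus carry: write 1, halt
        ("carry", " "): ("done",  "1", "R"),   # ran off the end: write the final 1
    }
    print(run("110", rules))                   # prints "001"
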
Regarding this formal model of computation:

> HARNAD:
> it is still an open question whether people can "compute" things
> that are not computable in this formal sense: If they could, then CTT would
> be false. The Thesis is hence not a Theorem, amenable to proof, but an
> inductive conjecture supported by evidence; yet the evidence is about formal
> properties, rather than about physical, empirical ones.

It's good to keep the above in mind - CTT isn't a theorem. It has not
yet been disproved, and subscribers to it believe it never will be.

> HARNAD:
> There is a natural generalisation of CTT to physical systems (CTTP).
> According to the CTTP, everything that a discrete physical system can do (or
> everything that a continuous physical system can do, to as close an
> approximation as we like) can be done by computation. The CTTP comes in two
> dosages: A Weak and a Strong CTTP, depending on whether the thesis is that
> all physical systems are formally equivalent to computers or that they are
> just computers.

Harnad points out that much of the argument that follows relies on his belief
in both CTT and CTTP.

Systematic Interpretability

> HARNAD
> shape-based operations are usually called "syntactic" to contrast them with
> "semantic" operations, which would be based on the meanings of symbols,
> rather than just their shapes.

As we know. Just keep it in mind below:

> HARNAD:
> Meaning does not enter into the definition of formal computation.

This is clearly the crux of the argument. Harnad then uses the example of
the first time you were formally taught arithmetic or something similar.

> At no time was the meaning of the
> symbol used to justify what you were allowed to do with it. However,
> although it was left unmentioned, the whole point of the exercise of
> learning formal mathematics (or logic, or computer programming) is that all
> those symbol manipulations are meaningful in some way ("+" really does
> square with what we mean by adding things together, and "=" really does
> correspond to what we mean by equality). It was not merely a meaningless
> syntactic game.

When we are given some new symbol, the first thing we want to know is
what it means. In my experience the meaning of the symbol was very much
used to justify what we could do with it. The first time I was taught
algebra, and the notion of a "value x", we were told that it stands for
any number we like and should be treated as such. Maybe I was just
taught in a strange way. I agree that it isn't just syntax, but I think
meaning was crucial in the teaching.
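(A tiny illustration of the purely syntactic view Harnad describes - my own
toy example, not his: the rule below shuffles shapes and never consults any
meaning, yet we can interpret what it does as addition over tally numerals.)

    def add_tallies(expr):
        # Purely shape-based rule: delete every "+" token. Nothing here refers
        # to numbers or to adding; that "|||+||" -> "|||||" amounts to
        # 3 + 2 = 5 is an interpretation we project onto the squiggles.
        return expr.replace("+", "")

    print(add_tallies("|||+||"))   # prints "|||||"
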

> HARNAD:
> definitional property of computation that symbol manipulations must be
> semantically interpretable -- and not just locally, but globally: All the
> interpretations of the symbols and manipulations must square systematically
> with one another, as they do in arithmetic, at the level of the individual
> symbols, the formulas, and the strings of formulas. It must all make
> systematic sense, in whole and in part (Fodor & Pylyshyn 1988).

This is restating another of the requirements for computation, as
defined in class. The symbols must be systematically interpretable
throughout the system, and they must make sense. As Harnad states, this
is not trivial.

> HARNAD:
> It is easy to pick a bunch of arbitrary symbols and to
> formulate arbitrary yet systematic syntactic rules for manipulating them,
> but this does not guarantee that there will be any way to interpret it all
> so as to make sense (Harnad 1994b).

The definition of 'make sense' would be interesting to pin down. What makes
perfect sense to one person may make no sense to the next. Chinese doesn't
make sense to me, but it does to someone who speaks it. Should the above
read "make sense to somebody"?

> HARNAD:
> the set of semantically interpretable formal symbol systems
> is surely much smaller than the set of formal symbol systems simpliciter,
> and if generating uninterpretable symbol systems is computation at all,
> surely it is better described as trivial computation, whereas the kind of
> computation we are concerned with (whether we are mathematicians or
> psychologists), is nontrivial computation: The kind that can be made
> systematic sense of.

So it's pointless to consider symbol systems that make no sense, as they
don't do anything useful. We are only concerned with the sort that
make sense. Harnad then further characterises trivial symbol systems:

> HARNAD:
> Trivial symbol systems have countless arbitrary "duals": You can swap the
> interpretations of their symbols and still come up with a coherent semantics
> . Nontrivial symbol systems do not in
> general have coherently interpretable duals, or if they do, they are a few
> specific formally provable special cases (like the swappability of
> conjunction/negation and disjunction/negation in the propositional
> calculus). You cannot arbitrarily swap interpretations in general, in
> Arithmetic, English or LISP, and still expect the system to be able to bear
> the weight of a coherent systematic interpretation (Harnad 1994 a).

Clearly, if I learn Chinese and randomly swap the meanings of words
about, I will still be talking Chinese, but not making any sense. Thus
Chinese is non-trivial.
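(The one "formally provable special case" Harnad mentions can be checked
mechanically; this quick exhaustive check is my own, just to make the
swappability concrete.)

    from itertools import product

    # Boolean duality: swap True <-> False, and conjunction and disjunction
    # trade places under negation, yet the propositional calculus still bears
    # a coherent interpretation. De Morgan's laws, the fact underlying that
    # swap, checked over every truth assignment:
    for a, b in product([True, False], repeat=2):
        assert (not (a and b)) == ((not a) or (not b))
        assert (not (a or b)) == ((not a) and (not b))
    print("conjunction/negation and disjunction/negation are swappable duals")
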
Harnad makes a stronger claim:

> HARNAD:
> It is this rigidity and uniqueness of the
> system with respect to the standard, "intended" interpretation that will, I
> think, distinguish nontrivial symbol systems from trivial ones. And I
> suspect that the difference will be an all-or-none one, rather than a matter
> of degree.

Things aren't generally classified as being "a bit trivial" or "half
trivial".

> HARNAD:
> The shapes of the
> symbol tokens must be arbitrary. Arbitrary in relation to what? In relation
> to what the symbols can be interpreted to mean.

I think most people would assume that the shapes of letters and numbers
are arbitrary in relation to what they actually mean (apart from maybe
the numerals 1 and 0), as Harnad points out.

Harnad then addresses my earlier question about interpretation:

> HARNAD:
> We may need a successful human interpretation
> to prove that a given system is indeed doing nontrivial computation, but
> that is just an epistemic matter. If, in the eye of God, a potential
> systematic interpretation exists, then the system is computing, whether or
> not any Man ever finds that interpretation.

Isn't it possible that every symbol system has the potential to be
systematically interpretable? Can we ever say "there is no systematic
interpretation of system X" and be guaranteed correctness?

> HARNAD:
> It would be trivial to say that every object, event and
> state of affairs is computational because it can be systematically
> interpreted as being its own symbolic description: A cat on a mat can be
> interpreted as meaning a cat on the mat, with the cat being the symbol for
> cat, the mat for mat, and the spatial juxtaposition of them the symbol for
> being on. Why is this not computation? Because the shapes of the symbols are
> not arbitrary in relation to what they are interpretable as meaning, indeed
> they are precisely what they are interpretable as meaning.

> Another way of characterising the
> arbitrariness of the shapes of the symbols in a formal symbol system is as
> "implementation independent": Completely different symbol-shapes could be
> substituted for the ones used, yet if the system was indeed performing a
> computation, it would continue to be performing the same computation if the
> new shapes were manipulated on the basis of the same syntactic rules.

So now we also have the implementation-independence part of
computation.
If a system's behaviour is not independent of the shapes of its symbols,
it is not computation.
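(A minimal sketch of what that means in practice - again my own example, not
Harnad's: two "machines" built from completely different symbol shapes, but
applying the same syntactic rule, perform the same computation under the
obvious relabelling.)

    # System A uses the shapes "|" and "+"; system B uses "x" and "#".
    def add_a(expr):
        return expr.replace("+", "")   # delete the operator shape of system A

    def add_b(expr):
        return expr.replace("#", "")   # the same rule, stated over B's shapes

    relabel = str.maketrans("|+", "x#")        # map A's shapes onto B's
    s = "|||+||"
    assert add_b(s.translate(relabel)) == add_a(s).translate(relabel)
    print(add_a(s), add_b(s.translate(relabel)))   # |||||  xxxxx
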

> HARNAD:
> The power of computation
> comes from the fact that neither the notational system for the symbols nor
> the particulars of the physical composition of the machine are relevant to
> the computation being performed. A completely different piece of hardware,
> using a completely different piece of software, might be performing exactly
> the same formal computation. What matter are the formal properties, not the
> physical ones. This abstraction from the physical particulars is part of
> what gives the Universal Turing Machine the power to perform any computation
> at all.

This is, of course, all leading us towards the hybrid system idea.
Could our thoughts really be independent of our bodies?
Harnad then presents some arguments for Computationalism (C=C).
He talks of the mind-body problem, "a problem we all have in seeing how
mental states could be physical states", and explains how computation and
cognition came to seem related (computers can do many of the things that
otherwise only cognizers can do, and the CTTP says that whatever physical
systems can do, computers can do too).
Harnad mentions Turing's test and his interpretation of it:

> HARNAD:
> So I see Turing as championing machines in general that have functional
> capacities indistinguishable from our own, rather than computers and
> computation in particular. Yet there are those who do construe Turing's Test
> as support for C=C. They argue: Cognition is computation. Implement the
> right symbol system -- the one that can pass the penpal test (for a
> lifetime) -- and you will have implemented a mind.

This view is what we discussed in the first part of the course. Harnad
then gives Searle's Chinese Room Argument as refuting the above view. I
had problems accepting Searle's argument - it always seemed like a trick.
(Can we actually say we understand how _our_ minds process input and
produce output? No.
So we no more understand the symbol system going on in our heads than
we do the memorised pen-pal program. So why is our symbol system the
only mind present?)
Anyway, Harnad defends the Turing test:

> HARNAD:
> But, as I suggested, Searle's Argument does not really impugn Turing Testing
> (Harnad 1989); it merely impugns the purely symbolic, pen-pal version of the
> Turing Test, which I have called T2. It leaves the robotic version (T3) --
> which requires Turing-indistinguishable symbolic and sensorimotor capacity
> -- untouched (just as it fails to touch T4: symbolic, sensorimotor and
> neuromolecular indistinguishability).

> meaning, as stated earlier, is not contained in the symbol system.

> Now here is the critical divergence point between computation and cognition:
> I have no idea what my thoughts are, but there is one thing I can say for
> sure about them: They are thoughts about something, they are meaningful, and
> they are not about what they are about merely because they are
> systematically interpretable by you as being about what they are about. They
> are about them autonomously and directly, without any mediation. The symbol
> grounding problem is accordingly that of connecting symbols to what they are
> about without the mediation of an external interpretation (Harnad 1992 d,
> 1993 a).

At this point I'd like to point out that my previous problems with Searle's
CRA are well and truly wiped out - this is the difference between
Searle's mind and the program he has memorised.

> HARNAD:
> One solution that suggests itself is that T2 needs to be grounded in T3:
> Symbolic capacities have to be grounded in robotic capacities. Many
> sceptical things could be said about a robot who is T3-indistinguishable
> from a person (including that it may lack a mind), but it cannot be said
> that its internal symbols are about the objects, events, and states of
> affairs that they are about only because they are so interpretable by me,
> because the robot itself can and does interact, autonomously and directly,
> with those very objects, events and states of affairs in a way that coheres
> with the interpretation. It tokens "cat" in the presence of a cat, just as
> we do, and "mat" in the presence of a mat, etc. And all this at a scale that
> is completely indistinguishable from the way we do it, not just with cats
> and mats, but with everything, present and absent, concrete and abstract.
> That is guaranteed by T3, just as T2 guarantees that your symbolic
> correspondence with your T2 pen-pal will be systematically coherent.

> But there is a price to be paid for grounding a symbol system: It is no
> longer just computational! At the very least, sensorimotor transduction is
> essential for robotic grounding, and transduction is not computation.

Harnad then goes over the old "a virtual furnace isn't hot" argument
and points out:

> HARNAD
> A bit less obvious is the equally valid fact that a
> virtual pen-pal does not think (or understand, or have a mind) -- because he
> is just a symbol system systematically interpretable as if it were thinking
> (understanding, mentating).

Harnad goes on to point out that we could simulate a T3 robot, but it
still wouldn't be thinking; it would still be ungrounded symbol
manipulation. Only by interacting with the real world and grounding its
understanding in what it interacts with can something be said to be
cognizing. This seems to fit in with my understanding of how people
work. We can of course imagine worlds different from our own,
inventions not yet real, and so on. However, all these things must be
based on the world we know; otherwise they would make no sense to us.

> HARNAD
> I actually think the Strong CTTP is wrong, rather than just vacuous,
> because it fails to take into account the all-important
> implementation-independence that does distinguish computation as a natural
> kind: For flying and heating, unlike computation, are clearly not
> implementation-independent. The pertinent invariant shared by all things
> that fly is that they obey the same sets of differential equations, not that
> they implement the same symbol systems (Harnad 1993 a). The test, if you
> think otherwise, is to try to heat your house or get to Seattle with the one
> that implements the right symbol system but obeys the wrong set of
> differential equations.

At this point you may well be thinking "But flying / being hot are
physical states. Thinking is a mental state". So what is a mental state,
if it is anything more than a physical thing? This brings us back to the
Turing test, and if there is indeed some other thing present, we will
never be able to produce machines that think.

> HARNAD:
> For cognition, defined by ostension (for lack of a cognitive scientific
> theory), is observable only to the mind of the cognizer. This property --
> the flip-side of the mind/body problem, and otherwise known as the
> other-minds problem -- has, I think, drawn the Strong Computationalist
> unwittingly into the hermeneutic circle. Let us hope that reflection on
> Searle's Argument and the Symbol Grounding Problem, and especially the
> potential empirical routes to the latter's solution (Andrews et al in prep;
> Harnad 1987, Harnad et al 1991,1994), may help the Strong Computationalist
> break out again. A first step might be to try to deinterpret the
> symbol system into the arbitrary squiggles and squoggles it really is (but,
> like unlearning a language one has learnt, this is not easy to do!).

It becomes eminently clear now why we keep coming back to "it's just
squiggles and squoggles" in class. There was an interesting programme
about robots, where scientists had designed a system that used sonar
(like bats) to recognise objects. It could learn the name of a human
face and, if presented with the same face, identify it again.
This initially seems exciting, but you quickly realise that in order to
learn concepts we need to be able to break the world into categories,
and a raw signal-wave was completely incapable of doing this. So visual
interpretation of the world (to the same level of detail as ours, to be
as intelligent) would seem necessary. I think anything beyond visual
interaction is only necessary for identifying things in different ways.
Having said that, certain things are by their nature only identifiable
to us in one way (a smell, a noise). It's interesting to note that there
would be no need to stop at our five senses when designing a robot -
why not incorporate the bat's sonar as well?
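
(To make the point about the sonar system concrete, here is a toy sketch -
entirely my own, not the system from the programme - of what re-identifying
a stored signal amounts to: nearest-template matching, with nothing in it
that carves the world into categories.)

    def closest_label(signal, memory):
        # memory maps a label to a stored signal; return the label whose
        # stored signal has the smallest squared distance from the input.
        return min(memory, key=lambda name:
                   sum((a - b) ** 2 for a, b in zip(signal, memory[name])))

    memory = {"alice": [0.9, 0.1, 0.4], "bob": [0.2, 0.8, 0.7]}
    print(closest_label([0.85, 0.15, 0.5], memory))   # prints "alice"
    # It can say "same signal as before", but it has no notion of "face",
    # "donkey" or "cat" - no categories, so no concepts.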

Terry, Mark <mat297@ecs.soton.ac.uk>


