Instituut voor Taal- en Kennistechnologie
Institute for Language Technology and Artificial Intelligence

Computers Don't Follow Instructions

Pat Hayes

Harnad accepts the picture of computation as formalism, so that any implementation of a program --- any implementation at all --- is as good as any other; indeed, in considering claims about the properties of computations, the nature of the implementing system --- the interpreter --- is invisible. Let me refer to this idea as 'Computationalism'. Almost all the criticism, the claimed refutation by Searle's argument, and the sharp contrasting of this idea with others rest on the absoluteness of this separation between a computational system and its implementation.

But Computationalism taken this strictly is a caricature. For example, nobody thinks that whether or not a program might 'pass' the Turing test is completely independent of the hardware it might be running on, since speed might well be crucial to the system's success in conversation. (And in any case, the computer does have to be somehow attached to the keyboard or other interaction devices, and this is already a system matter, even if a rather routine and trivial one.) Actual computationalism is the idea of using computation as a metaphor for the mind. Part of the intellectual excitement of computationalism comes from the observation that the higher-level functional organisation of working software is often largely independent of the detailed causal properties of the hardware it is running on. This seems worthy of note because it is a new kind of relationship between large-scale functional organisation and low-level mechanical detail, different from any we have seen before, and one with many surprising consequences. It suggests what a mind might be such that it could arise in a brain, and what a brain might need to be in order that a mind might be in it.

Real computationalism is a research direction, not a philosophical claim. Some of the early attempts to give philosophical accounts of it might perhaps be justly criticised, in retrospect, for having over-emphasised this independence-of-hardware theme. But we need to be careful here. The 'independence of hardware' thesis, even in the caricature form being criticised here, is the claim that the functional structure of the software can be implemented on any hardware you like. But there does have to actually be some hardware on which the software is implemented, and it does have to actually get implemented on it. The independence thesis is not the claim that one doesn't need the hardware at all. This requirement is quite nontrivial. It's not easy to actually make a computer which will run, as Turing knew well. (Talk of Turing machines and universal computability results here is misleading, since this entire body of computability theory is concerned with mathematical functions rather than physical mechanisms. That a Turing machine is a 'universal computer' does not mean that you could buy one and run any piece of software on it, even very slowly.)
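
To make that last point concrete, here is a minimal sketch in Python (purely an illustration added here, not part of the original argument) of what a Turing machine actually is: a finite transition table acting on an idealized, unbounded tape. Nothing in the definition mentions speed, devices, or how such a thing might physically be built; universality is a theorem about this mathematical object, not a property of a machine one could buy.

    from collections import defaultdict

    def run_tm(transitions, tape, state="start", halt="halt"):
        # transitions maps (state, symbol) to (new_state, new_symbol, move),
        # with move in {-1, 0, +1}; '_' stands for a blank tape cell.
        cells = defaultdict(lambda: "_", enumerate(tape))
        head = 0
        while state != halt:
            state, cells[head], move = transitions[(state, cells[head])]
            head += move
        return "".join(cells[i] for i in sorted(cells))

    # A toy machine that flips every bit and halts at the first blank cell.
    flip = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", 0),
    }
    print(run_tm(flip, "0110"))   # prints "1001_"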

Harnad is trying to make a bridge between the software and the hardware which is secure against what he perceives to be Searle's clever trick for getting into boxes where other minds can't possibly be, if Computationalism is right. That any implementation must be real is important here, because any specification of how software is implemented on a real machine will provide just the kind of bridge that Harnad wants. (I am indebted to Brian Smith for making this clear to me.) There is no particular reason why the computer should not have sensors, arms, or whatever other robotic attachments are considered sufficient to nail down the meanings of its internal symbols. But these need not be made of neural stuff, nor need the system be built without software. Harnad argues for the utility of connectionist models on the grounds that, unlike computational models, they must be properly wired to their transducers: there is no intervening level of symbolic interpretation that would allow the symbols to float away in a cloud of formal meaninglessness. But this is a non sequitur. That a full account of meaning might require an account of grounding, and that this must somehow relate the structure of the software to that of the machine's architecture, does not say anything about the nature of that architecture, or vice versa.

To see this more clearly, consider how a 'transducer' typically works, say a digital camera. One way to do this is to first convert the light to electrical charge, then let the charge leak away at a predetermined rate, using a clock to count how many ticks it takes to do so. That this results in an integer representing the light intensity is a matter of physics. That other parts of the computer represent integers in the same way that the transducer does is a matter of machine construction. That a binary numeral denotes an integer is a matter of implementation encoding. That the program has symbols which correctly refer to light intensities is therefore a matter of how the software is implemented on this particular machine. Now, this is "arbitrary" in the sense that we could have done it some other way and still had everything work pretty much as before (the roboticists might buy a new camera which, unbeknownst to them, works on entirely different physical principles), but that is not an argument that this particular machine is somehow disconnected from its world because it uses computation, in a way that would be less true if it were made of neurons. Its beliefs about light intensity, even those encoded in software, are quite firmly grounded. And they are so grounded simply by the machine being an implementation of the program. There is nothing more mysterious (or less arbitrary) about this than the requirement that the hardware perform arithmetic correctly on binary encodings of integers: but that's just part of what it means to be an implementation.
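
The chain just described can be spelled out in a few lines of code. The following is a minimal sketch of my own (the decay rate and threshold are invented numbers, not taken from any real camera): physics turns incident light into a count of clock ticks, the machine encodes that count as a binary numeral in the same way it encodes any other integer, and the program's symbol for light intensity is grounded by nothing more than this chain of implementation.

    DECAY_PER_TICK = 0.95   # fraction of charge remaining after one tick (assumed)
    THRESHOLD = 1.0         # charge level counted as "discharged" (assumed)

    def ticks_to_discharge(initial_charge):
        # Physics plus machine construction: count clock ticks until the
        # charge deposited by the incident light has leaked away.
        charge, ticks = initial_charge, 0
        while charge > THRESHOLD:
            charge *= DECAY_PER_TICK
            ticks += 1
        return ticks

    def as_binary_numeral(n):
        # Implementation encoding: the integer is written as a binary numeral,
        # just as the rest of the machine represents integers.
        return format(n, "b")

    # A brighter patch deposits more charge, takes longer to leak away, and so
    # yields a larger integer: the program's symbol refers to a light intensity
    # simply because the machine implements this chain.
    intensity = ticks_to_discharge(250.0)
    print(intensity, as_binary_numeral(intensity))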

So: even if we grant that Harnad is right that a full account of how internal symbols can be attached to the world needs something more like the TTT than the TT, there is no reason why this must force us to abandon the insight that mentality might consist largely in the computational manipulation of symbols, or why a touch of computation in the night should somehow divest these symbols of meaning. But I think there is a deeper question here, which is the extent to which internal symbols might get their attachment to the world through language. Perhaps Turing's insight was deeper than we now give him credit for, and he saw that much of our conceptual framework is more connected to the world through language than through the senses. In Harnad's terminology, why should there not be transducers to language as well as to the physical sensory inputs? To pass what might be called the P(hysical)TT requires us to build a robot monkey, but the difference between that and the TT might still be a more significant step towards the TTT than all this sensorimotor embedding.

By the way, I would argue that Searle's trick doesn't work, even if it mattered that it did, for all it claims to show is that (if Computationalism is correct) the hardware running the program has no mentality: it doesn't understand Chinese. Searle argues essentially that the CPU chip in the computer running the Chinese-understanding program doesn't understand Chinese (not that ``... we could ourselves become implementations of the very same symbol system that had passed the Chinese TT''). That hardly seems surprising. There is an implicit claim that only the hardware is really there (one which is sometimes conveyed by emphasising that one rejects Dualism, or by using such phrases as `ghostly computational executives'). But this begs exactly the question we are wrestling with. Searle's argument can't persuade me that software isn't real, since it assumes this. In more recent works Searle has become quite explicit about this; he thinks, in fact, that to talk of software is incoherent.

However, in spite of Searle's authority, it seems to be simply a fact that software does exist. But it is, indeed, very peculiar stuff. For example, is software to be thought of as machinery to be patented, or as text to be copyrighted? Both seem appropriate in some ways but dramatically not in others, and the legal system is confused on the matter. Creating software feels like engineering, but no other engines can be sent along telephone wires. At some level, all software consists of symbols which are being `interpreted' by a physical, often electrical, machine. This is `machine code'. But one needs to emphasise just what a very low level this often is, sometimes within the operation of a silicon chip itself; and that the relationship between these symbols and this machine is not in the least like that between some instructions and a human interpreter of them, but is more like that between the patterns of holes in a punched card and the shifting levers of a mechanical loom. (This contrast is why I believe that ``a human implementation does not count as a real implementation'', or, better, that John Searle simulating a computer is not actually a computer.) Notice also that these `symbols' are not formal in the sense used in these arguments, but have quite determinate, fixed meanings as specifications of state changes in the hardware.
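
The point about machine code can be illustrated with a toy interpreter (a sketch of my own, using an invented three-instruction machine, not any real instruction set). Each opcode below does not stand in need of an interpreter who grasps its meaning; it simply is a specification of a state change of the registers, just as a hole in a punched card simply lifts a particular lever.

    registers = {"A": 0, "B": 0}

    def execute(program):
        for opcode, reg, value in program:
            if opcode == "LOAD":      # put a value into a register
                registers[reg] = value
            elif opcode == "ADD":     # add a value to a register
                registers[reg] += value
            elif opcode == "COPY_A":  # copy register A into another register
                registers[reg] = registers["A"]
            # There is no further layer of meaning to consult: each opcode's
            # 'meaning' is exhausted by the state change it brings about.

    execute([("LOAD", "A", 3), ("ADD", "A", 4), ("COPY_A", "B", None)])
    print(registers)   # {'A': 7, 'B': 7}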

Someone who doubts the reality of software will no doubt be ironically amused here, since I may seem to be arguing that computers are even less plausible candidates for cognitive talents. My occupant of the electronic Chinese room can't understand anything, never mind Chinese: all it is, is a bundle of circuits which twitch rapidly in response to a few hundred voltages. But of course that's the central processor, not the entire computer, software and all, and still less the program itself. Searle has taken the entire complexity of a computational system and divided it into the CPU -- casting himself in that role -- and everything else -- which has become a few rules. This may have been a debating trick or ignorance, but in any case Harnad should know better than to follow him. As he says, ``if such mere hand-waving were all that the original Chinese Room Argument had been based on, then that argument would have been wrong too, and the ``System Reply'' -- to the effect that Searle is just part of a system, and that it is the system as a whole, not Searle, that would understand Chinese -- the reply favored by most of Searle's critics, would have been correct.'' It was, three times.

