
Harnad responds

If Dietrich endorses my empirical approach, is there anything left to quarrel about? Or has he too changed the subject? I define computation as syntactic symbol manipulation; Dietrich sees it as the execution of recursive functions. As far as I know, these two are equivalent. Dietrich says every process is a computation, but that this is not vacuous. If he means every process is describable by computation (as it is by English sentences -- another form of computation), I think this is true (I do subscribe to the Church/Turing Thesis), but irrelevant. For the purpose of computationalism (or so I thought) was to distinguish cognitive systems from, say, planetary systems, fluids and electrons (to pick some basic physical systems), or from furnaces, cars and planes (to pick some engineering systems). If all of these are computational systems, then so is the brain, of course, and there seems to be nothing worth disagreeing about -- but we're no closer to knowing what's special about cognitive systems.

But is every process computation? Is there no longer a difference between a furnace and a virtual furnace? I think I can still espy one: The furnace really gets hot, whereas the virtual furnace is just a bunch of symbols that are systematically interpretable as if something were getting hot. ``Getting hot,'' in other words, is not a computational process but a physical process; in particular, unlike computation, it is not implementation-independent. So there's hope for computationalism. For if it is not true that every process is just computation, but only that every process is simulable by computation, then we can still ask whether cognition is something computation actually is, rather than just something computation can simulate or describe; in other words, whether cognition is just implementation-independent syntactic symbol manipulation or something more.
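To make the contrast concrete, consider a minimal illustrative sketch (the program and all its names are invented purely for this example; nothing like it appears in Dietrich's text or mine): a ``virtual furnace'' is nothing but a rule-governed update of symbols that we interpret as a rising temperature, while the hardware running it stays at room temperature.

    # Purely illustrative sketch: a "virtual furnace" is just a
    # rule-governed update of symbols that we interpret as a temperature.
    # Running it makes nothing hot; the same program could be implemented
    # in silicon, vacuum tubes, or pencil and paper.

    def virtual_furnace(steps, heat_per_step=20.0):
        """Return symbol states systematically interpretable as degrees C."""
        temperature = 20.0   # just a bit pattern; "temperature" is our gloss
        trace = []
        for _ in range(steps):
            temperature += heat_per_step   # syntactic update rule
            trace.append(temperature)
        return trace

    print(virtual_furnace(3))   # [40.0, 60.0, 80.0] -- interpretable as
                                # heating, though nothing has warmed up

The real furnace, by contrast, cannot be reimplemented in pencil and paper without losing the very property at issue.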

And it turns out that you don't have to turn to the halting problem to falsify this nonvacuous hypothesis: the Chinese Room Argument suffices; or, if that's not enough to persuade you, consider the kinds of obstacles the symbol grounding problem raises for the hypothesis that thinking is just symbol manipulation (Harnad 1990b). I can't see why Dietrich has difficulty understanding what is meant by ``syntactic rules operating only on [symbol] shapes'' [which, like the shapes of the physical states in the simulated furnace, are arbitrary in relation to what the symbols can be systematically interpreted as meaning -- see Boyle on pattern matching], but I would be especially interested to hear more about the ``commonplace'' that computation is ``fully semantical'': Does this mean that a simulated furnace is not just squiggles and squoggles that are systematically interpretable by us as if they were getting hot? Do the symbols somehow contain that interpretation on their own, in the way that the sentences in a book, say, do not?
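As a further illustration of ``syntactic rules operating only on symbol shapes'' (again, an invented example, not one drawn from the exchange): a rewrite rule applies to token shapes alone, so relabelling the tokens changes nothing about how the rule runs, only about what we take the result to mean.

    # Invented example: a shape-based substitution rule. The rule "sees"
    # only the shapes of the tokens; any meaning is supplied by us.

    def rewrite(tokens, rule):
        """Replace each token by rule[token] where defined, else keep it."""
        return [rule.get(t, t) for t in tokens]

    # Under one interpretation the tokens read as a furnace heating up ...
    print(rewrite(["COLD", "FUEL"], {"COLD": "HOT"}))   # ['HOT', 'FUEL']
    # ... under an arbitrary relabelling the manipulation is identical.
    print(rewrite(["Q17", "Z9"], {"Q17": "Q18"}))       # ['Q18', 'Z9']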

I think Dietrich may be selling semantics short here: For me, at any rate, there's only one kind of meaning, and that's the kind that occurs in the heads of conscious systems. When I say some system or part of a system means ``the cat is on the mat,'' I really mean the system consciously means it, the way I do when I think it. Cognition, after all, is thinking, isn't it? And thinking is conscious thinking (or at least going on in the head of a conscious thinker). So, in order not to beg the question about whether or not computation is indeed ``fully semantical,'' it will have to be shown that the computational system consciously means what it is (otherwise only) systematically interpretable by us as meaning.

Let me hasten to add that this is a much taller order than I undertake in trying to ground symbols in TTT capacity: Just as immunity to Searle's argument cannot guarantee mentality, so groundedness cannot do so either. It only immunizes against the objection that the connection between the symbol and what it is about is only in the mind of the interpreter. A TTT-indistinguishable system could still fail to have a mind; there may still be no meaning in there. Unfortunately, however, that is an ontic state of affairs that is forever epistemically inaccessible to us: We cannot be any the wiser.

(Dietrich misunderstands my point about the possible performance limitations of ungrounded symbol systems in general [apart from their cognitive ambitions]; that was just a conjecture, on which nothing else I say depends; and far from wanting to give reasons why parallelism is essential for anything cognitive, I chose transduction, not parallelism, as my motivated candidate for this essential role.)