
Harnad responds

McDermott says transducers and neural nets are just two kinds of computational system. I agree about neural nets (room two, SIM), but I would be interested to know how McDermott would reconfigure his Sun to make it implement an optical transducer (as opposed to a virtual optical transducer). Connecting it to an optical transducer begs the question, of course, because that way I could ``reconfigure'' it into a furnace or an airplane too, just by connecting it to one. The reason you can't do it otherwise is that optical transduction, heating, and flight are not implementation-independent formal properties. There's more than one way to ``implement'' them, to be sure, but none of those ways is computational (for they all involve ``reconfiguring'' matter, not just a digital computer's states).

A flip-flop in a digital computer is indeed describable by a differential equation, as surely as any other analog system is (all implementational hardware is, of course, analog), but the computation it is performing is not. To know what that is, you need to look instead at the level of what the flip-flop's patterns are encoding. That's implementation independence.
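
To make the implementation-independence point concrete, here is a minimal illustrative sketch (in Python; the SR-latch example and all names in it are assumptions for illustration, not anything from the original exchange). One and the same one-bit computation is realized two different ways: once by simulating the cross-coupled NOR gates of a latch, and once as a bare transition table. Nothing at the hardware-description level says that these are the same computation; that is settled only by how their states are read as encoding 0 and 1.

    # Two realizations of the same one-bit computation (an SR latch).
    # The hardware-level description (gate dynamics here, differential
    # equations in a real circuit) does not itself say which computation
    # is being performed; that is fixed by how the states are read as
    # encoding 0 and 1.  Illustrative sketch only.

    def nor(a: int, b: int) -> int:
        return int(not (a or b))

    def latch_as_gates(s: int, r: int, q: int) -> int:
        """'Hardware-level' realization: cross-coupled NOR gates."""
        q_bar = nor(s, q)      # first NOR gate
        return nor(r, q_bar)   # second NOR gate, fed back

    def latch_as_table(s: int, r: int, q: int) -> int:
        """The same computation stated at the encoding level: set, reset, hold."""
        if s and not r:
            return 1           # set
        if r and not s:
            return 0           # reset
        return q               # hold (S = R = 0); S = R = 1 is excluded below

    # Over all well-defined inputs, the two realizations compute the same function.
    for s in (0, 1):
        for r in (0, 1):
            if s and r:
                continue       # S = R = 1 is the forbidden input for an SR latch
            for q in (0, 1):
                assert latch_as_gates(s, r, q) == latch_as_table(s, r, q)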

McDermott suggests that I am holding ``computers'' and ``computation'' to distinctions that are either irrelevant or untenable. If this is meant to endorse ecumenism about computation, I would refer him to my response to Dietrich: If computation is allowed to become sufficiently broad, ``X is/is-not computation'' becomes vacuous (including ``cognition is computation''). McDermott doesn't like my own candidate (interpretable symbols/manipulations) because sometimes you can't specify the symbols. Fine, let it be interpretable code then (is anyone interested in uninterpretable code?). Code that ``refers'' only to its own physical implementation seems circular. Causal connections between the code and computer-external things that it is interpretable as referring to, on the other hand, are unexceptionable (that's what my own TTT calls for), but surely that's too strong for all the virtual things a computer can do and be. (When you reconfigure a digital computer to simulate all others -- say, when you go from a virtual robot to a virtual planetary system -- are you reconfiguring the (relevant) ``causal connections'' too? But surely those are wider than just the computer itself; virtual causal connections to a virtual world are not causal connections at all -- see the Cheshire cat response to Dyer).

One can agree (as I do) that nothing essential is missing in a simulated rainstorm, but the question is: Nothing essential to what? I would say: to predicting and explaining a rainstorm, but certainly not to watering a parched field. So let's get to the point. We're not interested in rainstorms but in brainstorms: Is anything essential missing in a simulated mind? Perhaps nothing essential to predicting and explaining a mind, but certainly something, in fact everything, essential to actually being or having a mind. Let's not just shrug this off as (something interpretable as) ``self-modeling capacity.'' Perhaps the meanings of McDermott's thoughts are just something relative to an external observer, but I can assure you that mine aren't!