Harnad, S. (1993) Grounding Symbols in the Analog World with Neural Nets. Think 2: 12 - 78 (Special Issue on "Connectionism versus Symbolism" D.M.W. Powers & P.A. Flach, eds.). Pp. 31-33.
A working hypothesis of computationalism is that Mind arises, not from the intrinsic nature of the causal properties of particular forms of matter, but from the organization of matter. If this hypothesis is correct, then a wide range of physical systems (e.g. optical, chemical, various hybrids, etc.) should support Mind, especially computers, since they have the capability to create/manipulate organizations of bits of arbitrary complexity and dynamics. In any particular computer, these bit patterns are quite physical, but their particular physicality is considered irrelevant (since they could be replaced by other physical substrata).
When an organizational correspondence is set up between patterns in a computer and patterns in some other physical system, we tend to call the computer patterns "symbols". The correspondence, however, is usually only to some level of organization. In traditional Artificial Intelligence (AI), a small number of symbols may correspond, for instance, to an entire proposition. In Connectionist Modeling (CM), a symbol more commonly will correspond to a single neuron (or perhaps just a single chunk of a neurotransmitter within a neuron). Thus, the major issues that distinguish AI from CM concern which levels of granularity capture the essential organizational dynamics, rather than any (purported) abandonment of computationalism within the CM paradigm (Dyer 1991). To my knowledge, both paradigms are strongly committed to computationalism.
In analog systems, however, physicality is central to organizational dynamics. For instance, to find the minimal energy state of water flowing downhill, we simply set up the terrain in a gravitational field, add water, and then let nature "be itself". But is there some extra capability that an analog system (A1) has over its organizational counterpart (C1) on the computer?
Clearly, A1 remains physically distinct from C1 -- e.g. simulating the organization of water molecules in a computer will never make the computer physically wet. But what about the organizational capabilities of A1 with respect to C1? Is there some organizational behavior that A1 is capable of but C1 is not? The answer to this question depends on the level at which the organizational correspondence has been established.
If nature is fundamentally discrete (as is the current view of quantum physics), then each symbol could conceivably correspond to some smallest (indivisible) unit of matter-energy or space-time. Thus, C1 modeled at a quantum level would have all the organizational properties of A1 (still without exhibiting any of A1's physicality). Hopefully, this extremely detailed level of organization is not needed to exhibit Mind.
If Minds do not arise solely from the organization of matter (but require specific forms of physicality) then both Harnad and Searle are right -- no computer could ever have Mind just by virtue of its organization. But are there any persuasive arguments for needing some particular physical substratum?
Searle's "Chinese Room" argument is unpersuasive because there should be no expectation that Searle, in acting as an interpreter (whether at AI, CM or more detailed levels of organization), would understand Chinese. When we implement a natural language understanding (NLU) system "on top of", say, a Lisp (or Prolog) interpreter, we do not expect that interpreter to understand what the NLU system understands. Thus, Searle's lack of Chinese understanding should come as no surprise (Dyer 1990a,b).
Harnad's "Transducer" argument is that physical transducers are required for Mind (with analog ones apparently now being Harnad's best candidates). Harnad's argument suffers from the "Out of Sight, Out of Mind" problem. That is, if we build a Mind-like system (for instance, one able to read and understand Harnad's position paper and this commentary) and disconnect its eyes (and any other sensors/effectors), the system (according to Harnad) would lose its Mind. Harnad's argument also suffers from the "Virtual Reality" rebuttal, in which we hook up a Mind-like system M to a Virtual Reality system. M is grounded in a sensory reality, but since that entire reality is computer generated, no physical transducers (only simulated ones) are needed (Dyer 1990a,b).
Where does this leave us? Without definitive arguments for the need for special forms of physicality, we are left with both sides essentially arguing over the definition of Mind. The Computationalists define Mind in terms of Mind-like behavior, resulting from the organization of matter at some level of granularity (usually enough to pass either the TT or TTT). The Physicalists simply define Mind as requiring some extra (as yet unexplained) physicality (analog or otherwise). But until some convincing pro-physicality arguments come along, our best strategy should be to judge potential minds in terms of their Mind-like capabilities and behaviors, not their physical substrata.
Dyer, M. G. (1990a) Intentionality and Computationalism: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence, Vol. 2, No. 4.
Dyer, M. G. (1990b) Finding Lost Minds (Author's reply to S. Harnad's "Lost in the Hermeneutic Hall of Mirrors"). Journal of Experimental and Theoretical Artificial Intelligence, Vol. 2, No. 4.
Dyer, M. G. (1991) Connectionism versus Symbolism in High-Level Cognition. In T. Horgan and J. Tienson (eds.), Connectionism and the Philosophy of Mind. Kluwer Academic Publishers, Boston MA, pp. 382-416.
Dyer thinks thinking corresponds to a level of organization in a computer, the differences among the ways this same organization could be implemented being irrelevant. The question I keep asking those who adopt this position is: Why on earth should one believe this? Look at the evidence for every other kind of example one could think of: Compare the respective virtual and real counterparts of planetary systems, fluids, electrons, furnaces, planes, and cells, and consider whether, respectively, movement, liquidity, charge, heat, flight and life might be an "organizational" property that they share. I see absolutely no reason to think so, so why think so of thought?
But, of course, it's possible, because unlike the observable properties I singled out above, thought is unobservable (with one notable exception, which I will return to). So perhaps thought can be correctly attributed to a virtual mind the same way quarks or superstrings (likewise unobservable) can be attributed to (real) matter. Who's to be the wiser?
The first intuition to consult would be whether those of us who are not willing to attribute motion in any sense of the word to a virtual universe would be more willing to attribute quarks to it. I wouldn't; at best, there would just be squiggles and squoggles that were interpretable as matter, which was in turn interpretable as a manifestation of quarks: and virtual quarks, though just as unobservable as real quarks, do not thereby become real! Ditto, I would say, for virtual thoughts.
But again, it still seems possible (that's part of the equivocality of unobservables: they're much more hospitable to modal fantasies than observables are). This is of course the place to invoke the one exception to the unobservability of thought: the thinker of the thought. He of course knows whether our attribution is correct (or not, although in that case no knowing is going on). And that is precisely the observation-point where Searle cleverly positions himself. For the thesis that thinking is just computation, i.e., just implementation-independent symbol manipulation, with the thinking necessarily "supervening" on every implementation of the symbol system, is exquisitely vulnerable to the Chinese Room Argument. And this has nothing to do with "levels." Whether Searle manipulates the symbols in a higher-level programming language or all the way down at the binary machine code level, the question is: Is anyone home in there, understanding? Searle says "no"; the symbols he manipulates are systematically interpretable as saying "yes." Whom should you believe?
I, for one, see no difference between Searle's implementation of the TT-passing computer and Searle's implementation of the planetary system simulator. In the latter case Searle also manipulates symbols that are systematically interpretable as motion, yet there is no motion in either case. What are the grounds for the special dispensation in the case of the mind simulation? Are we in the habit of thinking that merely memorizing and manipulating a bunch of meaningless symbols gives birth to a second mind?
As I've said before, we risk being drawn into the hermeneutic circle here (Harnad 1990c); such is the power of symbolic oracles that simulate pen-pals instead of planets: We can't resist the interpretation. But a step back (and out of the circle) should remind us that we're just dealing with meaningless squiggles and squoggles in both cases. Does Dyer really think that my readiness to believe that
(1) "If the TTT could be passed by just a symbol-manipulator and some trivial transducers (an antecedent that I really do happen to doubt, and one that certainly does not describe the brain, which is doing mostly transduction and analogs of it all the way through) then, conditional on this unlikely antecedent, ablating the transducers would turn the mental lights off"
is all that much more counterintuitive than the belief that
(2) "if the "organization" of a computer simulating a planetary system is "reconfigured" so it instead simulates a TT pen-pal, that would turn the mental lights on"?
And although there is no problem with a real body with a real mind and real TTT capacity, like mine, whether it is interacting with a real or a virtual world, and likewise no problem with a real robot with real TTT capacity (and hence, by my lights, a real mind), whether it is interacting with a real or virtual world, there is DEFINITELY a problem if you try to make the equation virtual on both ends -- a virtual robot in a virtual world. For then all you have left is the Cheshire cat's smile and a bunch of squiggles and squoggles. It is only the (real!) sensorimotor transducer surface and the real energy hitting it that can keep such a system out of the hermeneutic circle.
I, by the way, do not define mind at all (we all know what it's like to be one) and insist only on real TTT-capacity, no more, no less. I think the science (engineering, actually) ends there, and the only thing left is trust.