Harnad, S. (1993) Grounding Symbols in the Analog World with Neural Nets. Think 2: 12 - 78 (Special Issue on "Connectionism versus Symbolism" D.M.W. Powers & P.A. Flach, eds.). Pp. 34-36.
My purpose is to explain, first, that there is an alternative to Harnad's version of the symbol grounding problem, which is known as the problem of primitives; second, that there is an alternative to his solution (which is externalist) in the form of a dispositional conception (which is internalist); and, third, that, while the TTT, properly understood, may provide partial and fallible evidence for the presence of similar mental powers, it cannot supply conclusive proof, because more than observable symbolic manipulation and robotic behavior is involved here, as he admits (Harnad 1991). Carrying the problem further appears to require inference to the best explanation.
Harnad claims that the combined power of symbolic manipulation and robotic behavior affords our best experiential test for understanding both language and cognition. His approach emphasizes the theoretical necessity of resolving the symbol grounding problem by establishing appropriate links between the symbols that a system can manipulate, the behavior that that system can display, and the properties of the external world. The meaning of the symbols that a system manipulates is "grounded" when there is an appropriate isomorphism between those symbols and features of the world.
The purpose of the TTT is to measure the extent to which the symbols manipulated by a system can be systematically interpreted as standing for specific objects and properties in the external world by means of observations of the behavior that it displays in dealing with objects and properties in the external world. This approach is "in the spirit of behaviorism", since the only kinds of tests used to determine the meaning of those symbols are formal criteria for qualifying as a "symbol system" and behavioral criteria for qualifying as a "meaningful" symbol system (cf. Harnad 1990, p. 345).
That the symbols manipulated by one system can be systematically interpreted by another system as standing for specific features of the world, however, does not imply that they actually have that specific meaning--or any other meaning--for that system. The TTT is a measure of the extent to which those symbols can be systematically interpreted as if they stand for specific objects and properties in the external world. But a system can be "semantically interpretable" as if it possessed a certain property even if it does not happen to possess that property--even on the basis of the TTT.
The insufficiency of the TTT, moreover, should not be very surprising. Quine's indeterminacy of translation and Dennett's intentional stance reflect the potential for alternative hypotheses which transcend symbolic manipulation or robotic capacity. The TTT might exhaust our experiential evidence (absent CT scans, X-rays, surgery and the like), but it does not exhaust our theoretical alternatives via inference to the best explanation (Fetzer 1991, 1993). Systems with similar behavioral repertoires, for example, could still differ in their causal origins or in their internal composition, which might support very different inferences and conclusions regarding their mental powers.
There is an important difference, after all, between symbolic manipulation and robotic behavior, on the one hand, and the intellectual (mental, cognitive) states of which they may or may not be observable manifestations, on the other. Surely two systems are similar in their intellectual (mental, cognitive) processes only if their unobservable intellectual (mental, cognitive) processes are similar. A more promising approach "in the spirit of behaviorism" would require a conception of behavior broad enough to include any internal or external effect of any internal or external cause.
A conception of this kind, of course, defeats the prospects for any attempt to reduce internal mental states to external observable behavior. But it does supply a foundation for interpreting the meaning of signs or symbols as their causal roles in influencing behavior in this broad sense. When minds are viewed as sign-using (or "semiotic") systems, the meaning of a sign for a system becomes its causal role in influencing behavior (Fetzer 1988, 1989, 1990, 1991), and the symbol grounding problem is seen to be a special case of what is better envisioned as the problem of primitives.
An approach of this kind places meanings within systems rather than in their relations to the external world. The meaning of a sign is located in the system's dispositions toward behavior when conscious of that sign's presence, given its other internal states, rather than in any isomorphism that may or may not obtain between those signs and features of the world. While these "other internal states" differ for different kinds of systems, for human beings they include motives, beliefs, ethics, abilities, and capabilities. Complete sets of values of these variables thus form a context.
The meaning of a specific sign S for a particular semiotic system then becomes the totality of tendencies toward behavior of various kinds relative to the (possibly infinitely varied) contexts within which that system might find itself, when aware of that sign's presence. Two signs S1 and S2 have the same meaning for a system if that system would have the same dispositional tendencies in the same contexts, when aware of either sign's presence. Their status as sign-using systems requires that those signs be meaningful for those systems themselves, however, and not merely for a user of those systems or for an observer of their behavior (Fetzer 1988, 1989, 1991).
This reflection suggests a crucial point. A system that passed the TT would have shown itself to be as capable as a human being with respect to symbolic manipulation, but not with respect to robotic behavior. A system that passed the TTT would have shown itself to be as capable as a human being with respect to symbolic manipulation and robotic behavior, but not with respect to mental powers. Any conclusion about the presence or the absence of mental powers presupposes a theory about the nature of the mind, which the TT and the TTT both require and the semiotic conception provides. The introduction of technological innovations such as CT scans, X-rays and the like, therefore, can provide the foundation for even stronger tests of intellectual (mental, cognitive) similarity than Harnad recommends. Such tests would compare different systems not only with respect to their capacity for symbolic manipulation, as in the case of the TT, or their capacity for symbolic manipulation and robotic behavior, as in the case of the TTT, but also with respect to their modes of internal processing. This could yield further evidence of the cognitive similarity of these systems, which in turn might support even stronger inferences and conclusions about their mental powers.
One of the benefits of a dispositional approach of this kind is that it can explain false beliefs and unsuccessful actions taken in the world as a function of the objects and properties that actually exist in the world, relative to our beliefs about it (Fetzer 1990). Another is that it supplies a framework for understanding the manner in which connectionism and cognition may be more successfully related (Fetzer 1991). And a third is that it clearly defines the limitations of various influential but inadequate arguments that have been advanced by Fodor and Pylyshyn (Fetzer 1992). The TTT seems to be a valuable contribution, but it is not the last word.
Fetzer, J. H. (1988), Signs and Minds: An Introduction to the Theory of Semiotic Systems. In Aspects of Artificial Intelligence, ed. J. H. Fetzer. Kluwer Academic Publishers.
Fetzer, J. H. (1989), Language and Mentality: Computational, Representational, and Dispositional Conceptions. Behaviorism 17: 1-39.
Fetzer, J. H. (1990), Artificial Intelligence: Its Scope and Limits. Kluwer Academic Publishers.
Fetzer, J. H. (1991), Philosophy and Cognitive Science. Paragon House Publishers.
Fetzer, J. H. (1992), Connectionism and Cognition: Why Fodor and Pylyshyn are Wrong. In Connectionism in Context, ed. A. Clark and R. Lutz. Springer-Verlag.
Fetzer, J. H. (1993), Philosophy of Science. Paragon House Publishers.
Harnad, S. (1990), The Symbol Grounding Problem. Physica D 42: 335-346.
Harnad, S. (1991), Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem. Minds and Machines 1: 43-54.
Fetzer thinks TTT grounding is based on an isomorphism between internal symbols and features of the world. It is not. You could have isomorphism with the world (under an interpretation) in an ungrounded symbol system. The TTT requires full causal interaction with the real world of objects and their features, and the capacity for this must be indistinguishable from our own. This is not an "experiential" test but an empirical one: The robotic capacities (to discriminate, categorize, identify, manipulate, describe, and discourse about objects) must really be there, and really exercised, and really not discriminably different from those of a real human being.
This is not behaviorism, it is reverse engineering: Fetzer asks for a theory; the theory will be the full description of the internal structures and processes that succeeded in making the robot pass the TTT. I don't know what Fetzer's "motives, beliefs, ethics, etc." are, but once we know exactly what internal (robotic) states actually deliver the TTT goods, we may be able to fill in the blanks with that specific engineering information. What does not look as if it will do the job is an internal something that is "aware" of signs or symbols, for then the job begins with finding out what internal structures and processes that module consists of. The only internal causes and effects I can imagine adding to this engineering assignment would amount to moving toward the TTTT (which I would consider helpful only inasmuch as it gave hints to accelerate our progress toward passing the TTT).