Verifying Machines' Minds
James T. Culbertson
Consciousness: Natural and Artificial. The Physical Basis and Influence on Behavior of Sensations, Percepts, Memory Images and Other Mental Images Experienced by Humans. Roslyn Heights, NY: Libra, 1982. 325 pp. $13.95
James T. Culbertson, professor of philosophy and mathematics emeritus at California Polytechnic State University, is currently engaged in independent research on nerve nets in San Luis Obispo, California. He is author of The Minds of Robots.

Stevan Harnad is editor of The Behavioral and Brain Sciences (Princeton, New Jersey). He is author of the chapter "Metaphor and Mental Duality" in T. Simon and R. Scholes's Language, Mind and Brain.
The question of the possibility of artificial consciousness is both very new and very old. It is new in the context of contemporary cognitive science and its concern with whether a machine can be conscious; it is old in the form of the mind/body problem and the "other minds" problem of philosophy. Contemporary enthusiasts proceed at their peril if they ignore or are ignorant of the false starts and blind alleys that the older thinkers have painfully worked through.
The mind/body problem is simply enough stated: We all know what it is like to have experiences, what it is like to see, to feel, to know. Let us call the domain of such subjective experiences "mental" and the capacity to have them, having a "mind." We also have a fairly clear idea (thanks to our experiences and our theories about them) what objects are, what matter is, what the external world (according to our experiences) seems to be like. Let us call the domain of material objects "physical" and all of its concrete inhabitants, inanimate and animate, "bodies." Then the mind/body problem is simply the stubborn difficulty we have with equating these two domains with one another, with seeing the mental as physical, or vice versa.
If that still sounds too abstract, there happens to be a variant of it (not everyone agrees that it is merely a variant, but for this review, let us assume it is so)-the other-minds problem-which is even more straightforward and suggestive: I know what it is like to have a mind, and I know I have one, on the basis of my subjective experience. So do you. But I do not seem to be able to know in this same direct way that you have a mind (nor you that I have one). You could look and act just like me but without having any experiences at all. You could be a feelingless robot. (Note how the contemporary incarnation of the problem begins to rear its head.) The kind of immediate certainty that settles all my doubts about whether I really have experiences is completely absent when it comes to the question of whether you or anyone else does. I am prepared to believe that you do, based on the way you look and act, and how uncannily it resembles the way I do, but I certainly do not feel that I know it. Such unresolved doubts about other minds (and even about the reality of the outside world) have led some philosophers-called, appropriately, the skeptics-to conclude that the doubts are unresolvable. Skeptics are in general very concerned with what you can and cannot know, and they come to a rather pessimistic conclusion about it. It is important for psychologists and cognitive scientists to note, however, that skepticism is very much like an enormous null hypothesis, with skeptics accepting it a priori and nonskeptics laboring to find grounds for rejecting it. It is fair to state that, at this time, no convincing grounds have yet been adduced for rejection.
To see the mind/body problem as a variant of the other-minds problem as I suggested, consider that, just as one can be skeptical about whether there is really a mind in someone else's body, or whether it just looks that way, one can be skeptical about any belief about mind in the world: Any theory about how matter might em-body mind is open to the possibility that it only looks that way, that the physical part could be correct with the mental part completely absent. A physical universe including the mental seems as objectively indistinguishable from an identical one in which the mental is absent as does a body with a mind from one without. So when you are trying to equate the physical and the mental, do not look for any easy answers.
One hard answer-not convincing to a skeptic, but the best practical criterion proposed to date-was provided by the logician A. Turing in 1950, and this answer already begins to take on the problem's modern machine-theoretic guise: For the skeptic concerning whether a machine has a mind, let the machine be placed out of view (so that no prejudice is engendered by its appearance) and let the skeptic interact with it in any way he or she would interact with a real person who was similarly out of sight (and hearing range). That is, let the interaction be by teletype. Anything can be discussed (including past experiences, views about subjective life, etc.). The gist of Turing's test is that if a machine can perform so as to be indistinguishable from a real person under the same conditions-psychologists may in fact regard this as a (thought-)experimental demonstration of indiscriminability across many forced-choice trials-then we should drop our skepticism and allow that the machine has a mind. Note that Turing's is a behavioristic or performance criterion, and it really only reduces the machine-mind problem to the other-minds problem, enjoining us to be no more skeptical about artificial minds than we are about natural ones. Indeed, there is no reason why, with a sufficiently lifelike robot, the out-of-sight constraint could not be waived altogether (although this involves some computational, linguistic, and causal fine points that will not be discussed here; see Harnad, 1982a, 1982b, and 1984).
Enter our author, Culbertson, with the belief that he can do better than Turing (presumably, although he makes no mention of Turing, and indeed gives little evidence of being aware of, let alone giving due weight to, the basic problems discussed here). Culbertson too asks how we can know that a robot has a mind. He frames the question in terms of whether a robot has consciousness, whether it perceives, but this amounts to the same thing, because our version simply equates "having a mind" with having any subjective experience at all. Let it quickly be stated that from the outset Culbertson helps himself to an extraordinary assumption, namely, that inanimate events can perceive other inanimate events.
There is no reason why the consciously perceiving events have to be in organisms. Nevertheless the only known nontrivial cases of conscious perception are those where the perceiving events are in the brains of such creatures. (p. 28, emphasis added)
So this radical posit is smuggled in under the guise of being "trivial." Perhaps it will not be regarded as trivial of me immediately to raise the same question about other "consciously perceiving events" that was raised about other minds: How do you know?
Never mind. Let us push on, hoping that Culbertson is motivated here by some harmless animistic notion that will not compromise his proposed test for consciousness in "nontrivial" cases. Along the way, we will have to know a few of the author's "nonstandard" definitions:
Stimulus object: if we trace back along the photon [world-] lines reaching the eye, the stimulus object is that first object we come to which scatters the light. (p. 6, with the claim that this is suitably generalizable to all sensory modalities)
Event: each [scatter-] point on a [world-] line. (p. 7)
Now a few posits:
Events consciously perceive previous events via world-lines. (p. 28)
Events in the brain can look out through the eyes and perceive the events at stimulus objects. (p. 9)
Now Culbertson's test revealing "how [you could] know that an allegedly conscious machine was actually conscious" (from book jacket):
Look through the nervous system [or equivalent] of the "other"... in an extension of the same way [you look through your] own brain. (p. 35)
Experience its experiences firsthand!
There still remains of course the technical problem of constructing the interbrain connections. But the intractable metaphysical problem no longer exists. (p. 35, emphasis added; see Figure 1)
I must confess that I did not have the patience to read this book thoroughly from cover to cover. I become discouraged too easily with talk of "a new paradigm" and of analyses based on "Einstein's Special Relativity Theory" in this context; with repetitious and rambling text and footnotes perpetually cross-referring to footnotes to appendixes to text; with reference lists and footnotes that terminate and then restart (to save resetting costs, no doubt); with ritualistic reiteration of pet buzzwords-"consciousness (sentience)," "computer subnetwork," "picture-making network," passim; with idiosyncratic meanings, awkward, impressionistic diagrams, dubious neurologizing and mathematizing, and gratuitous dialogue formats that just double for didactic exposition and then terminate, forgetting that they began as dialogues. Nor is my confidence increased by unabating signs of disorganization in the form of fresh white-outs and write-ins in my review copy.
Perhaps readers of this review will be moved to give Culbertson's book a more patient reading. Before they do, however, I recommend as a preliminary exercise that they demonstrate to themselves that they have understood this review by showing how, even if the "technical" problems of building the suitable interbrain connections were solved, the "metaphysical" problem we have been discussing would indeed reappear, as intractable as ever.
Harnad, S. (1982a). Consciousness: An afterthought. Cognition and Brain Theory, 5, 29-47.
Harnad, S. (1982b). Neoconstructivism: A unifying theme for the cognitive sciences. In T. W. Simon and R. J. Scholes (Eds.), Language, mind and brain (pp. 1-11). Hillsdale, NJ: Erlbaum.
Harnad, S. (1984). Minds, machines and Searle. Unpublished manuscript.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433-460.
Figure 1. Picture of observer experiencing the same percepts as the subject is experiencing. (Figure 66 from Consciousness: Natural and Artificial by James T. Culbertson. Roslyn Heights, NY: Libra, 1982, p. 141. Copyright by Libra Publishers, Inc. Reprinted by permission.)