Grounding Analog Computers

Bruce J. MacLennan

Is Cognition Discrete or Continuous?

Although I hate to haggle over words, Harnad's use of `analog' confuses a number of issues. The problem begins with the phrase `analog world' in the title, which does not correspond to any technical or nontechnical usage of `analog' with which I'm familiar. Although I don't know precisely what he means by `analog', it is clearly related to the distinction between analog and digital computers, so I'll consider that first.

In traditional terminology, analog computers represent variables by continuously varying quantities, whereas digital computers represent them by discretely varying quantities (typically voltages, currents, charges, etc. in both cases). Thus the difference between analog and digital computation lies in a distinction between the continuous and the discrete, but it is not the precise mathematical distinction between them. What matters is the behavior of the system at the relevant level of analysis. For example, in an analog computer we treat charge as though it varies continuously, although we know it's quantized (in units of the electron charge). Conversely, in a digital computer we imagine we have two-state devices, although we know that the state must vary continuously from one extreme state to the other (voltage cannot change discontinuously). The mathematical distinction between discrete and continuous is absolute, but irrelevant to most physical systems.
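
To make the level-of-analysis point concrete, here is a minimal sketch in Python (the voltages, threshold, and gain are invented for illustration) of a ``two-state'' digital device whose underlying behavior is continuous; it looks digital only when its output is read through a threshold.

    import math

    V_HIGH = 5.0       # nominal logic-high voltage (illustrative)
    THRESHOLD = 2.5    # decision threshold for the digital reading

    def inverter_voltage(v_in, gain=10.0):
        """Continuous transfer characteristic of an inverter (idealized sigmoid)."""
        return V_HIGH / (1.0 + math.exp(gain * (v_in - THRESHOLD)))

    def digital_reading(v):
        """The same device viewed at the discrete level of analysis."""
        return 1 if v > THRESHOLD else 0

    for v_in in [0.0, 1.0, 2.4, 2.6, 4.0, 5.0]:
        v_out = inverter_voltage(v_in)
        print(f"in={v_in:.1f}V  out={v_out:.2f}V  digital={digital_reading(v_out)}")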

Many complex systems are discrete at some levels of analysis and continuous at others. The key questions are:

  1. What level of analysis is relevant to the problem at hand?
  2. Is the system approximately discrete or approximately continuous (or neither) at that level?

One conclusion we can draw is that it can't matter whether an analog computer system (such as a neural net) is ``really'' being simulated by a digital computer, or, for that matter, whether a digital computer is ``really'' being simulated by an analog computer. What goes on below the relevant level of analysis simply doesn't matter. The same holds for the question of whether cognition is more discrete or more continuous, which I take to be the main issue in the symbolic/connectionist debate. This is a significant empirical question, and the importance of connectionism is that it has tipped the scales in favor of the continuous.
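
As a concrete illustration (a minimal Python sketch; the time constant, step size, and input are assumed, not taken from the target article), here is a continuous (``analog'') unit, a leaky integrator, simulated on a digital computer. At the level of the differential equation, the discreteness of the underlying arithmetic is invisible, provided the step is fine enough.

    # A continuous law, dx/dt = (-x + input) / tau, simulated digitally by
    # Euler steps. The parameters are illustrative only.
    def simulate_leaky_integrator(inp, tau=1.0, dt=0.001, steps=5000):
        x = 0.0
        for _ in range(steps):
            dxdt = (-x + inp) / tau   # the continuous law being approximated
            x += dt * dxdt            # the discrete step, invisible at the analog level
        return x

    # After about five time constants the state has settled near the input,
    # just as the continuous equation predicts.
    print(simulate_leaky_integrator(0.8))   # approximately 0.7946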

Having considered the differences between analog and digital computers, I'll now consider their similarities, which I think are greater than Harnad admits.

First, both digital and analog computers provide state spaces, which can be used to represent aspects of the problem. In digital computers the set of states is (approximately) discrete, e.g., most of the time the devices are in one of two states (i.e., 0 and 1). On the other hand, in analog computers the set of states is (approximately) continuous, e.g., in going from 0 to 1 the state seems to pass through all intermediate values. In both cases the physical quantities controlled by the computer (voltages, charges, etc.) correspond to quantities or qualities in the problem being solved (e.g., velocities, masses, decisions, colors).

Both digital and analog computers allow the programmer to control the trajectory of the computer's state through the state space. In digital computers, difference equations describe how the state changes discretely in time, and programs are just generalized (numerical or nonnumerical) difference equations (MacLennan 1989; 1990a, pp. 81, 193). On the other hand, in analog computers, differential equations describe how the state changes continuously in time. In both cases the actual physical quantities controlled by the computer are irrelevant; only their ``formal'' properties (as expressed in the difference or differential equations) matter. Therefore, analog computations are independent of a specific implementation in the same way as are digital computations. Further, analog computations can support interpretations in the same way as can digital computations (a point elaborated upon below).
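
The claim that programs are generalized difference equations can be made vivid with a small sketch (a toy example of my own in Python, not drawn from MacLennan's cited texts): the state need not be numerical at all; it is simply updated by a fixed rule, s[n+1] = F(s[n]), just as the Euler sketch above stepped a numerical state.

    # A program as a nonnumerical difference equation s[n+1] = F(s[n]).
    # The state is a (counter, word) pair; F is an arbitrary illustrative rule.
    def F(state):
        n, word = state
        return (n + 1, word + ("a" if n % 2 == 0 else "b"))

    state = (0, "")
    for _ in range(6):
        state = F(state)     # one discrete step along the state trajectory
    print(state)             # (6, 'ababab')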

In the theory of computation we study the properties of idealized computational systems. They are idealized in that they rest on simplifying assumptions, which we expect to be only approximately instantiated in reality. For example, in the traditional theory of discrete computation, we assume that tokens can be unambiguously separated from the background and that they can be unambiguously classified as to type.

The theory of discrete computation has been well developed since the 1930s and forms the basis for contemporary symbolic approaches to cognitive modeling. In contrast, the exploration of continuous computation has until recently been neglected, but we expect that continuous computational theory will provide a foundation for connectionist cognitive models (MacLennan 1988; in press-a; in press-b). Although there are many open questions in this theory --- including the proper definition of computability, and of universal computing engines analogous to the Universal Turing Machine --- the general outlines are clear (MacLennan 1987; 1990c; in press-a; in press-b; Wolpert & MacLennan submitted; see also Blum 1989; Blum & al. 1988; Franklin & Garzon 1990; Garzon & Franklin 1989; 1990; Lloyd 1990; Pour-El & Richards 1979; 1981; 1982; Stannett 1990).

In general, a computational system is characterized by:

  1. a formal part, comprising a state space and processes of transformation; and
  2. an interpretation, which
    1. assigns meaning to the states (thus making them representations),
    2. assigns meaning to the processes, and
    3. is systematic.

For continuous computational systems the state spaces and transformation processes are continuous, just as they are discrete for discrete computational systems. Systematicity requires that meaning assignments be continuous for continuous computational systems, and compositional for discrete computational systems (which is just continuity under the appropriate topology).
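
A toy rendering of this characterization may help (a Python sketch of my own devising, not MacLennan's formalism): the formal part is a state space plus a transformation, and the interpretation is a meaning assignment over states and processes; the toy below shows only the state side.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class DiscreteSystem:
        states: set                    # the (discrete) state space
        step: Callable[[str], str]     # process of transformation
        meaning: Dict[str, str]        # interpretation of the states

    # A two-state toy system interpreted as a light switch.
    toggle = DiscreteSystem(
        states={"0", "1"},
        step=lambda s: "1" if s == "0" else "0",
        meaning={"0": "light off", "1": "light on"},
    )

    s = "0"
    for _ in range(3):
        s = toggle.step(s)
    print(s, "->", toggle.meaning[s])  # the interpretation tracks the state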

Whether discrete or continuous computation is a better model for cognition is a significant empirical question. Certainly connectionism shows great promise in this regard, but it leaves open the question of how representations get their meaning. The foregoing shows, I hope, that the continuous/discrete (or analog/digital) computation issue is not essential to the symbol grounding problem. I don't know if Harnad is clear on this; sometimes he seems to agree, sometimes not. What, then, is essential to the problem?

How Do Representations Come to Represent?

After contemplating the Chinese Room Argument for about a decade now, I've come to the conclusion that the ``virtual minds'' form of the Systems Reply is basically correct. That is, just as a computer may simultaneously be several different programming language interpreters at several different levels (e.g., a machine language program interpreting a Lisp program interpreting a Prolog program), and thereby instantiate several virtual machines at different levels, so also a physical system could simultaneously instantiate several minds at different levels. There is no reason to suppose that these ``virtual minds'' would have to be aware of one another or that the system would exhibit anything like multiple personality disorder. Nevertheless, Harnad offers no argument against the virtual minds reply, although perhaps we are supposed to interpret his summary dismissal (``unless one is prepared to believe,'' 4.2) as an argument ad hominem. He admits in Hayes & al. (1992) that it is a matter of intuition rather than of proof.

However, I agree with Harnad and Searle that symbols do not get their meanings merely through their formal relations with other symbols, which is in effect the claim of computationalism (analog or digital). In this sense, connectionist computationalism is no better than symbolic computationalism.

There is not space here to describe an alternative approach to these problems, but I will outline the ideas and refer to other sources for the details. Harnad argues that there is an ``impenetrable `other-minds' barrier'' (Hayes & al. 1992), and from a philosophical standpoint that may be true, but from a scientific standpoint it is not. Psychologists and ethologists routinely attribute ``understanding'' and other mental states to other organisms on the basis of external tests. The case of ethology is especially relevant, since it deals with a range of mental capabilities which, it's generally accepted, includes understanding and consciousness at one extreme (the human) and their absence at the other (say, the amoeba). Therefore it becomes a scientific problem to determine whether an animal's response to a stimulus is an instance of understanding the meaning of a symbol or merely of responding to its physical form (Burghardt 1970; Slater 1980).

Burghardt (1970) solves the problem of attributing meaning to symbols by defining communication in terms of behavior that tends to influence receivers in a way that benefits the signaller or its group. Although it may be difficult in the natural environment to reduce such a definition to operational terms, the techniques of synthetic ethology allow carefully controlled experimental investigation of meaningful symbol use (MacLennan 1990b; 1992; MacLennan & Burghardt submitted). (For example, we've demonstrated the evolution of meaningful symbol use from meaningless symbol manipulation in a population of simple machines.)
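
As a hedged illustration of how such a definition can be made operational (a toy Python sketch of my own, not the actual synthetic-ethology setup of MacLennan 1990b), one can compare a signaller's payoff when it does and does not signal; by Burghardt's criterion, the behavior counts as communication if signalling reliably influences the receiver to the signaller's benefit.

    import random

    # Toy operationalization of Burghardt's criterion. The response
    # probability 0.8 is invented; it stands for a receiver that usually
    # reacts appropriately when a signal is emitted.
    def trial(signaller_signals):
        """Return the signaller's payoff for one encounter."""
        if signaller_signals and random.random() < 0.8:
            return 1   # receiver is influenced; signaller benefits
        return 0       # no signal, or the signal went unheeded

    random.seed(0)
    payoff_signalling = sum(trial(True) for _ in range(1000))
    payoff_silent = sum(trial(False) for _ in range(1000))
    print(payoff_signalling, payoff_silent)   # signalling pays; silence doesn't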

Despite our differences, I agree with Harnad's requirement that meaningful symbols be grounded. Furthermore, representational states (whether discrete or continuous) have sensorimotor grounding, that is, they are grounded through the system's interaction with its world. This makes transduction a central issue in symbol grounding, as Harnad has said.

Information must be materially instantiated --- represented in a configuration of matter and energy --- if it is to be processed by an animal or a machine. A pure transduction changes the kind of matter or energy in which information is instantiated. Conversely, a pure computation changes the configuration of matter and energy --- thus processing the information --- without changing its material embodiment. We may say that in transduction the form is preserved but the substance is changed; in computation, in contrast, the form is changed but the substance remains the same. (Most actual transducers do not do pure transduction, since they change the form as well as the substance of the information.)
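
The form/substance contrast can be put schematically (a Python sketch with invented types; ``substrate'' here is just a label standing in for the kind of matter or energy):

    from dataclasses import dataclass

    @dataclass
    class Signal:
        substrate: str   # kind of matter or energy, e.g. "light", "voltage"
        form: tuple      # the information-bearing configuration

    def pure_transduce(sig, new_substrate):
        """Pure transduction: new substance, same form."""
        return Signal(new_substrate, sig.form)

    def pure_compute(sig):
        """Pure computation: same substance, new form (here, reversal)."""
        return Signal(sig.substrate, tuple(reversed(sig.form)))

    s = Signal("light", (0.2, 0.9, 0.4))
    print(pure_transduce(s, "voltage"))  # Signal(substrate='voltage', form=(0.2, 0.9, 0.4))
    print(pure_compute(s))               # Signal(substrate='light', form=(0.4, 0.9, 0.2))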

Observe that the issue of transduction has nothing to do with the question of analog vs. digital (continuous vs. discrete) computation; transduction can be either continuous or discrete depending on the kind of information represented. Continuous transducers transfer an image from one space of continuous physical variables to another; examples include the retina and robotic sensor and effector systems. Discrete transducers transfer a configuration from one discrete physical space to another; examples include photosensitive switches, toggle switches, and on/off pilot lights.

Harnad seems to be most interested in continuous-to-discrete transduction, if we interpret his `analog world' to mean the world of physics, which is dominated by continuous variables, and we assume that the outputs of the transducers are discrete symbols. The key point is that the specific material basis (e.g., light energy) for the information ``out there'' is converted to the unspecified material basis of formal computation inside the computer. Notice, however, that this is not pure transduction, since in addition to changing the substance of the information it also changes its form; in particular it must classify the continuous image in order to assign it to one of the discrete symbols, and so we have computation as well as transduction. (We can also have the case of an ``impure'' discrete-to-continuous transduction; an example would be an effector that interpolates between discretely specified states. Impure continuous/continuous and discrete/discrete transducers also occur; an analog filter is an example of the former.)
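
The following sketch (Python, with bin boundaries invented for illustration) shows why such continuous-to-discrete transduction is impure: alongside the change of substance there is a classification step, which is computation riding on the transduction.

    # An "impure" continuous-to-discrete transducer: a continuous light
    # intensity is assigned to one of a few discrete symbols. The bins are
    # illustrative only.
    def classify_intensity(lux):
        if lux < 10.0:
            return "DARK"
        elif lux < 1000.0:
            return "DIM"
        return "BRIGHT"

    for lux in [0.5, 120.0, 50000.0]:
        print(lux, "->", classify_intensity(lux))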

Conclusions

Harnad's notion of symbol grounding is an important contribution to the explanation of intentionality, meaning, understanding and intelligence. However, I think he confuses things by mixing it up with several other, independent issues. The first is the important empirical question of whether discrete or continuous representational spaces and processes --- or both or neither --- are a better explanation of information representation and processing in the brain. The point is that grounding is just as important an issue for continuous (analog) computation as for discrete (digital) computation. The second is that Harnad ties the necessity of symbol grounding to Searle's Chinese Room Argument, with its problematic appeal to consciousness. This is unnecessary, and in fact he makes little use of the Chinese Room except to argue for the necessity of transduction. There is no lack of evidence for the sensorimotor grounding of meaningful symbols. Given the perennial doubt engendered by Searle's argument, I would prefer to depend upon a more secure anchor.

