Re: Chalmers: Computational Foundation

From: Wright Alistair (afw198@ecs.soton.ac.uk)
Date: Sun Mar 04 2001 - 00:09:19 GMT


http://cogprints.soton.ac.uk/documents/disk0/00/00/03/19/index.html

Alistair Wright.

Wright:
In this paper the author justifies the notion that formal computation
would be a sufficient (or at least useful) mechanism for implementing a
cognitive system. It is acknowledged that the mechanism behind our
own mental processes is not understood, but the paper argues that
computation is central to cognitive ability, answering critical arguments
that claim otherwise.

Wright:
Chalmers begins by describing what he means by a physical system
implementing a given computation. This analysis is useful in showing that
a given computation is unlikely to be implemented by arbitrary physical
systems, and it is ultimately used to argue that the cognitive abilities
of a physical system are explainable solely by its implementing some
computation. Both points favour a computationalist position, such as the
view Searle has labelled 'strong AI'.

> CHALMERS:
> In order for the foundation to be stable, the notion of computation itself
> has to be clarified. The mathematical theory of computation in the
> abstract is well-understood, but cognitive science and artificial
> intelligence ultimately deal with physical systems. A bridge between these
> systems and the abstract theory of computation is required. Specifically,
> we need a theory of implementation: the relation that holds between an
> abstract computational object (a "computation" for short) and a physical
> system, such that we can say that in some sense the system "realizes" the
> computation, and that the computation "describes" the system. We cannot
> justify the foundational role of computation without first answering
> the question: What are the conditions under which a physical system
> implements a given computation?
>
> Once a theory of implementation has been provided, we can use it to answer
> the second key question: What is the relationship between
> computation and cognition? The answer to this question lies in the fact
> that the properties of a physical cognitive system that are relevant to
> its implementing certain computations

Wright:
Chalmers goes on to describe how a physical system can be interpreted as
implementing some computation, using the notion of a combinatorial-state
automaton (CSA) to represent the general notion of such a system.

> CHALMERS:
> The above account may look complex, but the essential idea is very
> simple: the relation between an implemented computation and an
> implementing system is one of isomorphism between the formal structure of
> the former and the causal structure of the latter. In this way, we can
> see that as far as the theory of implementation is concerned, a
> computation is simply an abstract specification of causal
> organization.

Wright:
Chalmers says that any physical system can implement a computation, where
the particular computation(s) are constrained by the causal structure of
the system (i.e., changes in the physical system's state must consistently
and reliably map to formal states of the computation).
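
A minimal sketch, in Python, may make this implementation condition
concrete. Everything below (the 'voltage' toggle, the state names, the
interpretation mapping) is invented purely for illustration and is not
from Chalmers's paper; his CSAs also use vectors of substates rather than
a single two-state automaton, but the shape of the mapping is the same.

# Toy illustration: a physical system implements a computation when there
# is a mapping from its physical states to the computation's formal states
# that respects the transition structure.

# Formal side: a two-state automaton that toggles on every step.
def formal_step(state):
    """Abstract state-transition rule of the computation."""
    return "B" if state == "A" else "A"

# Physical side: a pretend circuit whose output voltage flips each tick.
def physical_step(voltage):
    """Causal evolution of the physical system (here, just a toggle)."""
    return 0.0 if voltage > 2.5 else 5.0

# Interpretation: group physical states under the formal states they realise.
def interpret(voltage):
    return "A" if voltage > 2.5 else "B"

# Implementation condition: physical transitions must mirror formal ones.
for v in (0.0, 5.0):
    assert interpret(physical_step(v)) == formal_step(interpret(v))
print("causal structure mirrors formal structure for all sampled states")

The point of the toy is that it is the mapping, not the voltages
themselves, that carries the computational description.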

It remains up to an observer to determine the computation being
implemented; the causal structure of the system is what it is in spite
of, rather than as a consequence of, the computation it implements. A
purpose-built silicon CPU, for example, exhibits behaviour interpretable
as corresponding to some desired computation, but that interpretation is
a byproduct of its physical behaviour.

This is an interesting point, since we do not need an external observer
to determine our own existence. If our awareness is the result of a set
of computations, then those computations would somehow have to embody the
properties we associate with self-awareness, including the capacity for
independent reasoning.

> CHALMERS:
> With cognition, by contrast, the claim is that it is in virtue of
> implementing some computation that a system is cognitive. That is, there
> is a
> certain class of computations such that any system implementing that
> computation is cognitive. We might go further and argue that every
> cognitive system implements some computation such that any implementation
> of the computation would also be cognitive, and would share
> numerous specific mental properties with the original system. These claims
> are controversial, of course, and I will be arguing for them in the
> next section.

Wright:
Chalmers does not specify what he means by 'specific mental properties'.
Presumably this includes all the mental functionality that we exhibit
ourselves, and it leads back to the question of whether cognition implies
self-awareness.

This appears to be an important consideration, and it is unanswerable
except in our own individual case. We do assume that other people we meet
are self-aware, but that is arguably less to do with their overall shape
than with the vocal and other (non-verbal) signals we receive from them.
If a particular computation (or set of computations) could give rise to a
cognitive system when implemented, would we have to assume any such
implementation self-aware (and of course treat it with the same respect
as we accord to any naturally self-aware entity)? The existence of a
cognitive system also implies a requirement for some sensory i/o. The
only possible identification of a cognitive system would have to involve
the system interacting with us in some form.

> CHALMERS:
> What about semantics? It will be noted that nothing in my account of
> computation and implementation invokes any semantic considerations,
> such as the representational content of internal states. This is precisely
> as it should be: computations are specified syntactically, not
> semantically.
> Although it may very well be the case that any implementations of a given
> computation share some kind of semantic content, this should be a
> consequence of an account of computation and implementation, rather than
> built into the definition. If we build semantic considerations into the
> conditions for implementation, any role that computation can play in
> providing a foundation for AI and cognitive science will be endangered, as
> the notion of semantic content is so ill-understood that it desperately
> needs a foundation itself.

Wright:
This appears to be a reasonable claim; semantics are assigned by an
external observer of some computation, and are not inherent in the formal
description, which deals only in syntactic requirements. A computation
which implements a cognitive system should give rise to cognition, by
definition, without requiring the presence of an external observer to
interpret any semantics.

> CHALMERS:
> It may be that any
> behavioral description can be implemented by systems lacking mentality
> altogether (such as the giant lookup tables of Block 1981). Even if
> behavior suffices for mind, the demise of logical behaviorism has made it
> very implausible that it suffices for specific mental properties: two
> mentally distinct systems can have the same behavioral dispositions. A
> computational basis for cognition will require a tighter link than this,
> then.

Wright:
So emulating the behaviour of a conscious entity is not enough to
guarantee mentality.

> CHALMERS:
> An exception has to be made for properties that are partly supervenient on
> states of the environment. Such properties include knowledge (if we
> move a system that knows that P into an environment where P is not true,
> then it will no longer know that P), and belief, on some construals
> where the content of a belief depends on environmental context. However,
> mental properties that depend only on internal (brain) state will be
> organizational invariants.

Wright:
It has to be asked which cognitive properties exhibited by the brain do
not depend on an external environment. If it is true that the brain
consists mainly of sensory structures, then it would be hard to separate
'internal' functionality from that which is dependent on the environment.

> CHALMERS:
> The central claim can be justified by dividing mental properties into two
> varieties: psychological properties - those that are characterized by
> their causal role, such as belief, learning, and perception - and
> phenomenal properties, or those that are characterized by the way in which
> they are
> consciously experienced. Psychological properties are concerned with the
> sort of thing the mind does, and phenomenal properties are concerned
> with the way it feels. (Some will hold that properties such as belief
> should be assimilated to the second rather than the first class; I do not
> think
> that this is correct, but nothing will depend on that here.)

Wright:
What is at stake here is whether it would be enough to mimic the causal
structure of the brain with different, but functionally identical,
discrete components. This may or may not be reasonable. If the brain's
operation is determined by its neurological structure, then reproducing
its causal topology would seem to be a valid mechanism for producing
identical mental functionality. Such a reverse-engineering approach may
yield systems which appear to have 'mentality', but the question remains:
how finely would the causal mechanism have to be simulated? Tiny
variations in timing or signal intensity due to chemical or other
interactions may be at the root of conscious experience, and these may be
difficult or impossible to abstract into a different form.

It is entirely possible that the causal mechanism that produces our
mentality is not at all reducible in this way. Certainly, the
reproduction of neural structure could produce a sequence of CSA states
which is consistent with a high-level view of changes of brain state. It
is not known whether our 'continuous' perception of the world could arise
from a linear sequence of discrete brain states, but this is what the use
of a CSA would imply.

Chalmers suggests that cognition can be achieved by decomposing the
functionality of the brain into smaller computable sub-systems, each
performing a specific role, which when put together achieve the cognitive
functionality of the whole.

> CHALMERS:
> Psychological properties, as has been argued by Armstrong (1968) and Lewis
> (1972) among others, are effectively defined by their role within an
> overall causal system: it is the pattern of interaction between different
> states that is definitive of a system's psychological properties. Systems
> with the same causal topology will share these patterns of causal
> interactions among states, and therefore, by the analysis of Lewis (1972),
> will
> share their psychological properties (as long as their relation to the
> environment is appropriate).

Wright:
This, if taken to be true, would imply that psychological properties are
organizationally invariant. Chalmers's thesis, and the sufficiency of
computation, rest on the correctness of this statement.

> CHALMERS:
> If all this works, it establishes that most mental properties are
> organizational invariants: any two systems that share their fine-grained
> causal
> topology will share their mental properties, modulo the contribution of
> the environment.
>

Wright:
Hence, if one were to model a person's brain in such detail that the
model shared the brain's 'fine-grained causal topology' (earlier,
Chalmers proposes, for example, a neuron-level simulation as suitable),
the model would, by definition, be imbued with the person's psychological
and phenomenal functionality.
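
A rough sketch of what 'specifying a causal topology as a CSA' might look
like in miniature is given below, again in Python and again entirely
invented for illustration: the topology, the update rule and the
three-element state vector bear no relation to real neural structure.

# Toy combinatorial-state automaton (CSA): the global state is a vector of
# substates, and each substate's next value depends on designated
# neighbours. The topology and rule below are hypothetical.

NEIGHBOURS = {0: [2], 1: [0], 2: [0, 1]}   # invented causal topology

def csa_step(state):
    """Advance the whole state vector one step: substate i becomes active
    iff any of its neighbours was active on the previous step."""
    return tuple(int(any(state[j] for j in NEIGHBOURS[i]))
                 for i in range(len(state)))

state = (1, 0, 0)
for _ in range(4):
    print(state)
    state = csa_step(state)

On Chalmers's account, any physical system whose causal transitions can be
mapped onto the same substate vector and update rule counts as another
implementation of the CSA, and (were the CSA fine-grained enough to
capture a brain) would share its organizationally invariant mental
properties.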

> CHALMERS:
> To establish the thesis of computational sufficiency, all we need to do
> now is establish that organizational invariants are fixed by some
> computational structure. This is quite straightforward.

> CHALMERS:
> If what has gone before is correct, this establishes the thesis of
> computational sufficiency, and therefore the view that Searle has called
> "strong artificial intelligence": that there exists some computation such
> that any implementation of the computation possesses mentality. The
> fine-grained causal topology of a brain can be specified as a CSA. Any
> implementation of that CSA will share that causal topology, and
> therefore will share organizationally invariant mental properties that
> arise from the brain.

Wright:
What is mentality? This argument proposes that computation is sufficient
for achieving mentality, but it does not seem to account for the role of
experience in our own mental operation (we can only look at our own case
in determining what qualifies as cognition). Perhaps the modelled
cognitive process would be equivalent, say, to that of a newborn child.

Wright:
What is the use of a brain without senses? I suppose the view could be
taken that the brain merely processes information, the structure of which
in our case happens to coincide with that supporting the existence of our
world. All our assumptions and memories are then due to our having
recognized patterns in our input, and so on. This would support the
notion of a system of cognition arising solely from the implementation of
computation, with concepts like the presence of three dimensions being
incidental to the existence of such mental properties as learning and
perception (identified by Chalmers as organizationally invariant, and
thus computable).

> CHALMERS:
> This is to some extent an empirical issue, but the relevant evidence is
> solidly on the side of computability. We have every reason to believe that
> the low-level laws of physics are computable. If so, then low-level
> neurophysiological processes can be computationally simulated; it follows
> that the function of the whole brain is computable too, as the brain
> consists in a network of neurophysiological parts.

Wright:
Chalmers shifts his argument from simulating the brain by replicating its
causal topology to simulation by modelling the physics underlying brain
neurophysiology. This would permit 'scanning' a brain's neurological
structure and then implementing it as a copy of the original in an
environment simulated down to the level of, say, quantum physics, but it
would not prove Chalmers's view of computation underlying AI (strong AI).
It would instead show that quantum physics is computable, and trivially
show that higher-level physical processes and structures exist due to
lower-level processes.
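
The weaker claim, that low-level neurophysiological processes can be
computationally simulated, is at least easy to illustrate. The Python
sketch below steps a leaky integrate-and-fire neuron in discrete time;
the model and its parameters are standard textbook simplifications chosen
for illustration, not anything proposed in the paper.

# Illustrative only: a leaky integrate-and-fire neuron stepped in discrete
# time, showing that low-level neural dynamics can be approximated by an
# ordinary computation.

def simulate_neuron(input_current, dt=0.001, tau=0.02, threshold=1.0):
    """Return spike times (seconds) for a leaky integrate-and-fire neuron."""
    v = 0.0
    spikes = []
    for step, i_in in enumerate(input_current):
        # Discrete-time update of the membrane potential: dv/dt = (I - v) / tau
        v += dt * (i_in - v) / tau
        if v >= threshold:          # fire and reset
            spikes.append(step * dt)
            v = 0.0
    return spikes

# One second of constant drive above threshold gives a regular spike train.
print(simulate_neuron([1.5] * 1000))

Whether scaling such simulations up to a whole brain would yield mentality,
rather than merely a description of it, is of course exactly the point in
dispute.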

> CHALMERS:
> The view that I have advocated can be called minimal computationalism. It
> is defined by the twin theses of computational sufficiency and
> computational explanation, where computation is taken in the broad sense
> that dates back to Turing. I have argued that these theses are
> compelling precisely because computation provides a general framework for
> describing and determining patterns of causal organization, and
> because mentality is rooted in such patterns. The thesis of computational
> explanation holds because computation provides a perfect language in
> which to specify the causal organization of cognitive processes; and the
> thesis of computational sufficiency holds because in all implementations
> of the appropriate computations, the causal structure of mentality is
> replicated.

Wright:
Chalmers's view of computation is quite broad, concerning the general
nature of computation as applied to causal systems rather than any
particular model.

Wright:
In conclusion, Chalmers presents quite a strong argument in favour of the
use of computation in implementing a cognitive system. The paper
advocates computation as a basis for cognition without attempting to
address specific technical aspects of such a system (beyond the notion of
creating models of 'mental circuits'). This leads to a vagueness in the
paper's claim that the psychological (and hence phenomenal) properties of
the brain rely solely on the causal organization of, say, neurological
structure. The fact that the rest of his argument is based upon this
premise makes the paper a rather less convincing argument in favour of
strong AI, but does not reduce its usefulness in establishing grounds for
arguing the case for computation as a strong contender for explaining the
functionality of conscious systems.

Wright:
Chalmers finally concedes that existing computational methods may not be
suitable for implementing cognition, but could replace the functionality
of the 'brain' in a composite system, which would appear to solve issues
of symbol grounding and the like.


