Re: Pylyshyn on Cognitive Architecture

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Mon Mar 06 2000 - 18:49:36 GMT


On Mon, 6 Mar 2000, Worrall, Nicholas wrote:

> > PYLYSHYN
> > * Architecture as a theory of cognitive capacity. Another way of
> > looking at the role of architecture is as a way of understanding the set
> > of possible cognitive processes that are allowed by the structure of
> > the brain.
>
> Worrall:
> It would be possible to find the total set of processes and functions that
> the brain can perform, and from there you would have the upper bounds of the
> cognitive processes the brain can carry out, which would be a good model of
> the system. Again, it is good practice to find the capacity of the system to
> allow investigation of the processes it may perform.

But this is not just a question of "capacity" (as in memory capacity);
it is a question of performance capacity (what the system can and can't
DO). It is hard to imagine how one could know that in advance, or in
general. It's hard even to know everything a known algorithm can/can't
do; in other words, questions of capacity can run into combinatorial
explosion, as in all the possible moves on a chess-board.
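
To see how quickly that explosion outruns any exhaustive survey of a system's capacity, here is a back-of-the-envelope sketch (not from the original thread) using the standard rough estimate of about 35 legal moves per chess position:

```python
# Rough count of distinct chess move sequences, assuming an average
# branching factor of ~35 legal moves per position (a common textbook
# estimate, not an exact figure).
BRANCHING = 35

for depth in (1, 2, 4, 10, 80):
    print(f"{depth:2d} half-moves: ~{BRANCHING ** depth:.2e} sequences")
```

At a typical game length of 80 half-moves the count exceeds 10^123, which is why "list everything the system can do" is not a workable research strategy.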

I think Pylyshyn might have meant something similar to Chalmers's
"causal structure" here.

> > PYLYSHYN
> > Architecture as marking the boundary of representation-governed
> > processes. Namely, that the architecture must
> > be cognitively impenetrable.
>
> Worrall:
> This is a strange idea: for the boundary to be impenetrable, there must
> be no interaction between these cognitive levels. Yet many processes must
> surely use many different levels and areas.

Pylyshyn is famous for this "cognitive penetrability" criterion: He
suggested that it is a simple test of whether you are below or above the
level of the virtual architecture. Below it lies unchangeable or
inaccessible hardware, so what you think and know cannot alter it
("penetrate" it).

An example is the "moon illusion." We know that although the moon looks
bigger and closer on the horizon than it does in mid-sky, it is actually
the same size and distance in both cases. Yet we cannot learn to see
it as being of the same size and distance. The moon illusion is
cognitively IMpenetrable.

In contrast, consider the "Monty Hall Puzzle," which definitely IS
cognitively penetrable (though it takes quite a while to penetrate!):

http://www.cogsci.soton.ac.uk/~harnad/Hypermail/Explaining.Mind97/0035.html
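
The puzzle's counterintuitive answer (switching doors wins twice as often as staying) is easy to check by brute force; here is a quick simulation sketch, purely illustrative and not part of the original exchange:

```python
import random

def monty_hall_trial(switch, rng):
    """One round: a prize sits behind a random door, the player picks a
    door, the host opens a losing unpicked door, and the player either
    stays or switches to the remaining closed door."""
    prize = rng.randrange(3)
    pick = rng.randrange(3)
    # Host opens a door that is neither the player's pick nor the prize.
    host = rng.choice([d for d in range(3) if d != pick and d != prize])
    if switch:
        pick = next(d for d in range(3) if d != pick and d != host)
    return pick == prize

rng = random.Random(0)
n = 100_000
stay = sum(monty_hall_trial(False, rng) for _ in range(n)) / n
swap = sum(monty_hall_trial(True, rng) for _ in range(n)) / n
print(f"stay wins: {stay:.3f}, switch wins: {swap:.3f}")
# Switching wins about twice as often as staying (2/3 vs 1/3).
```

Seeing the frequencies converge on 2/3 and 1/3 is often what finally lets the correct reasoning "penetrate."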

> > PYLYSHYN
> > The distinction between directly executing an algorithm and executing
> > it by first emulating some other functional architecture is crucial to
> > cognitive science. It bears on the central question of which aspects of
> > the computation can be taken literally as part of the cognitive model
> > and which aspects are to be considered as part of the implementation of
> > the model (like the colour and materials out of which a physical model
> > of the double helix of DNA is built).
>
> Worrall:
> From this we can understand that it is very different to execute a given
> algorithm which is rule-based than to implement a process on a system using
> a model, e.g. whether we are using a direct model or another fragmented
> model process.

I didn't understand it that way. I took it as saying that it is easier
to run an algorithm for a performance than an algorithm for both a
performance and the hardware that generates that performance.
>
> > PYLYSHYN
> > This issue frequently arises in connection with claims that certain
> > ways of doing intellectual tasks, for example by the use of mental
> > imagery, bypasses the need for knowledge of certain logical or even
> > physical properties of the represented domain, and bypasses the need
> > for an inefficient combinatorial process like logical inference.
>
> Worrall:
> From here we can see that when the mind views something we do not
> have to run a finite-time algorithm to process the given information
> received; we can understand it immediately. From this we can perceive that
> the cognitive architecture has some cognitive method incorporated. In machine
> form this would be hard to model without having the same biological
> architecture as the mind it is modelling.

Again, I think you have misunderstood. Pylyshyn is famous for having
criticized the idea of "mental images." He said that as long as we are
speaking about solving a problem using mental images, we have not yet
found the algorithm that solves the problem. And once we find the
algorithm, the images become as irrelevant as the hardware.

[This is controversial, and in fact our course will investigate another
possibility: that for many kinds of performance (including human
TT-scale performance), hybrid systems may be the only ones (or the only
practical ones) that can do the job. For example, an internal analog of
an object might be a useful thing to have (rather like the many
isomorphic copies of the retina mentioned in the last lecture) for
certain kinds of tasks (such as "mental rotation"). Sure, it can also
be done computationally, with coordinates and bitmaps, but maybe that
would require a brain that was 100 times as big!]
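
[As a sketch of that computational alternative: rotating stored coordinates with an ordinary rotation matrix. This is illustrative Python, not anything Pylyshyn or the lecture proposed:]

```python
import math

def rotate(points, degrees):
    """Rotate 2-D coordinates about the origin using a rotation matrix --
    the kind of explicit symbolic computation that could stand in for an
    analog 'mental rotation' process."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

# A toy "shape" as a list of coordinates, turned a quarter-turn.
shape = [(1.0, 0.0), (0.0, 1.0)]
print(rotate(shape, 90))
```

[Whether the brain does anything like this, or instead manipulates an analog internal object, is exactly the open question flagged above.]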

>
> > PYLYSHYN
> > But how can the behavior of a system not be due to its internal
> > construction or its inherent properties? What else could possibly
> > explain the regularities it exhibits? It is certainly true that the
> > properties of the box determine the totality of its behavioral
> > repertoire, or its counterfactual set; i.e. its capacity.
>
> Worrall:
> Given the system, its capacity and the processes that it
> can perform are based on the architecture, but the outcomes of those
> processes are not solely based on the architecture.
>
> He argues that the system's reactions are based on knowledge that has been
> gathered and that the mind does not use a biological method for reasoning,
> but uses a rule based system to perceive things.

Yes, but the real point here is that a full explanation of an
intelligent system has to be an explanation not only of what it actually
DOES do, but all that it CAN do. That's why the TT is life-long: It is
testing your overall capacity, not just a particular performance on a
particular task at a particular time.

That's reasonable. If you design a plane, you want it to really be able
to fly, under all flight conditions, not just because it is light and
borne aloft on a windy day...

Stevan



This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:27 GMT