Re: Pylyshyn on Cognitive Architecture

From: Terry, Mark (mat297@ecs.soton.ac.uk)
Date: Tue Mar 07 2000 - 18:52:38 GMT


>Worrall:
> Here he is stating that when the virtual machine and the real machine are
> running there is no difference; it is only when the virtual machine is
> turned off that a distinction can be found - essentially pulling the plug.
> I agree, as once the plug is pulled the virtual machine no longer
> has the ability to perceive, whereas a real machine has no plug unless
> deceased - although who knows what happens then!

Agreed. However, a virtual machine runs on a real machine's hardware,
programmed in some language understood by that hardware. Could the virtual
machine, then, simply be considered a class of algorithm - a layer of
abstraction between the world and the real machine? What I'm getting at
is: is the virtual machine actually a machine, or just an interpreter
between the input and the actual hardware?

> > PYLYSHYN
> > For any particular computational process there is only one level of the
> > system's organization that corresponds to what we call its cognitive
> > architecture. That is the level at which the states (data structures)
> > being processed receive a cognitive interpretation.

> Worrall:
> This is an important concept, as he specifically says that for a given
> computational process there is only ONE level of organization that allows
> that given process to be interpreted cognitively. In many ways we can say
> that there are interlocking states of cognition in the human mind, which are
> used to retrieve abstract information.
>
Not too sure what Worrall's getting at here. I think the point Pylyshyn is
making is that the architecture of the system (its physical layout) is only
described at one 'level' of the entire system (presumably he is thinking of
the system in a hierarchical way). At this level the state we are in is
given some meaning (e.g. we realise that the thing in front of us is a wall).
Therefore, the interpretation we give the data structures being passed to
this level corresponds to our 'cognitive architecture'. This seems
deliberately vague - how does it correspond? And, more to the point, why
should this task (giving cognitive meaning to data) be related
to the design of the system?

> Next he states:
>
> > PYLYSHYN
> > semantic interpretation of these states figures in the explanation of
> > the cognitive behavior. Notice that there may be many other levels of
> > system organization below this, but these do not constitute different
> > cognitive architectures because their states do not represent
> > cognitive contents. Rather, they correspond to various kinds of
> > implementations, perhaps at the level of some abstract neurology, which
> > realize (or implement) the cognitive architecture.

>Worrall:
> From this we can see that the system uses different levels of representation
> to describe the level of perception and knowing things. But from this the
> levels are not representational of differing architectures, but of differing
> experiences. From this we can say that the architectures would be similar
> for a wide range of cognitive tasks.
 
I don't think 'experiences' is the right word - surely these are stored in
memory. Possibly the lower levels represent different ways to react to the
same cognitive contents, depending on other criteria, like our mood.
What exactly does Pylyshyn mean by 'cognitive contents'? Why don't
the lower levels have any? Given that, from the above, a reasonable
definition of cognitive architecture appears to be 'a system which
represents cognitive contents', this seems like an important question.

>Worrall:
> From here he says that we have to find a model of the mind before we can have
> an accurate model of the processes that run on it, e.g. to model a given
> process we have to understand the algorithms that are used to run that process.
>
Pylyshyn is talking about understanding the layer /below/ the algorithms,
as he explains later that algorithms may only work on the architecture
they were implemented on. He believes we have to model the 'cognitive
architecture'.

> > PYLYSHYN
> > * Architecture-relativity of algorithms and strong equivalence. For
> > most cognitive scientists a computational model is intended to
> > correspond to the cognitive process being modelled.

>Worrall:
> We have to first understand the real system to understand the synthetic one we
> are trying to model on the real one. This is a good and obvious point:
> to have a real process we must first have the real architecture.

Didn't you say before that there's no difference between physical hardware
and a virtual machine? Pylyshyn states that the model must correspond to
the cognitive process, that is all. There would be nothing gained from
modelling some process which doesn't relate to the human mind in any way.
Strong equivalence seems to be the theory that algorithms designed for one
machine will not work on other machines (I don't think any computer
scientists will argue with this).
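
As a loose illustration of architecture-relativity (my own toy example,
not Pylyshyn's): binary search presupposes an architecture whose primitive
operations include random access. On a machine offering only sequential
access - a tape - that algorithm cannot be directly executed at all; a
different algorithm is forced on you.

    # Hypothetical sketch: the same task (find an item in a sorted list)
    # implemented against two different sets of primitive operations.

    def find_random_access(sorted_xs, target):
        # Needs O(1) indexing as a primitive: binary search.
        lo, hi = 0, len(sorted_xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_xs[mid] == target:
                return mid
            if sorted_xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    def find_sequential(tape, target):
        # Only "read the next item" is primitive: forced into a linear scan.
        for i, x in enumerate(tape):
            if x == target:
                return i
        return -1

The two functions compute the same input-output function but by different
algorithms - weakly but not strongly equivalent, in the paper's terms -
and the difference is dictated by the architecture each assumes.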

> > PYLYSHYN
> > * Architecture as a theory of cognitive capacity. Another way of
> > looking at the role of architecture is as a way of understanding the set
> > of possible cognitive processes that are allowed by the structure of
> > the brain.
>
>Worrall:
> It would be possible to find the total processes and functions
> that the brain can do, and from there you would have the maximum bounds of
> the cognitive processes the brain can perform, which is a good model of the
> system.

Really? How would you do that? Isn't there a strong argument for the
brain's unlimited capacity to understand and learn? Sure, there are limits
on short-term memory, but could we ever get to the point where we are
unable to learn anything new because our brains are full? There would at
any one time be a finite set of possible processes the brain can carry out,
if we agree it is modelled by a system of states, but the ability to learn
makes the notion of the maximum number of things our brain can do at any
one time irrelevant.

> > PYLYSHYN
> > Architecture as marking the boundary of representation-governed
> > processes. Namely, that the architecture must
> > be cognitively impenetrable.

> Worrall:
> This is a strange idea, as for the boundary to be impenetrable, there must
> be no interaction between these cognitive levels. Many processes must surely
> use many different levels and areas for a given process.

This 'cognitively impenetrable' phrase isn't very clear. What Pylyshyn says
before this is a clearer indication of what he means. He talks of a
"knowledge level" which carries out reasoning based on knowledge. He
argues that because this level exists, behaviour based on knowledge does
not reflect on the cognitive architecture (presumably because it is not
represented as 'cognitive contents').

> > PYLYSHYN
> > According to the strong realist view, a valid
> > cognitive model must execute the same algorithm as that carried out by
> > the subject being modelled.
>
>Worrall:
> From this we understand that for a given algorithm to work we must have the
> given architecture for the given process; for example, using a chess algorithm
> to talk to a person just would not work! We have to use an accurate
> algorithm based on a model to be most successful.

This isn't quite right. This 'strong realist view' states that a
chess-playing machine must be running exactly the same algorithm (and,
given his previous arguments, must thus have equivalent architecture) as
the chess-playing human to be a valid model of the human's cognitive
process. This is a very strict definition, and would render most AI
useless (as a model of cognitive process) before it's even run.

> > PYLYSHYN
> > The distinction between directly executing an algorithm and executing
> > it by first emulating some other functional architecture is crucial to
> > cognitive science. It bears on the central question of which aspects of
> > the computation can be taken literally as part of the cognitive model
> > and which aspects are to be considered as part of the implementation of
> > the model (like the colour and materials out of which a physical model
> > of the double helix of DNA is built).
>
>Worrall:
> From this we can understand that it is very different to execute a given
> rule-based algorithm directly than to implement a process on a system using
> a model, e.g. whether we are using a direct model or another fragmented
> model process.
>
O.K., but if a virtual machine is identical to real hardware, this
fragmentation idea doesn't matter. I agree with Pylyshyn that separating
the physical system from the processes running on it is absolutely
crucial. For instance, if we can't make this distinction, we have to
emulate not just what a mind does, but what a mind is.
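
To put the direct-execution versus emulation distinction in concrete terms
(again a toy of my own, not Pylyshyn's): the host can run a search directly
on its native primitives, or it can first emulate a tape architecture and
run the search through the emulated primitives. The input-output behaviour
is identical; what differs is which steps we are entitled to read as part
of the model, and which are mere implementation, like the colour of the
DNA model.

    # Hypothetical sketch: direct execution vs. execution on an emulated
    # architecture. Same behaviour; different intermediate steps.

    class Tape:
        """Emulated sequential-access architecture on a random-access host."""
        def __init__(self, items):
            self._items = list(items)
            self._pos = 0
        def read_next(self):
            if self._pos >= len(self._items):
                return None              # end of tape
            item = self._items[self._pos]
            self._pos += 1
            return item

    def find_direct(xs, target):
        # Directly executed using the host's native list operations.
        return xs.index(target) if target in xs else -1

    def find_on_tape(tape, target):
        # Same task, but every step goes through the emulated primitive.
        i, x = 0, tape.read_next()
        while x is not None:
            if x == target:
                return i
            i, x = i + 1, tape.read_next()
        return -1

    xs = [4, 8, 15, 16, 23, 42]
    assert find_direct(xs, 23) == find_on_tape(Tape(xs), 23)  # same answer

Only the steps of find_on_tape would count as part of the model proper;
the Tape class is implementation.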
 
> > PYLYSHYN
> > This issue frequently arises in connection with claims that certain
> > ways of doing intellectual tasks, for example by the use of mental
> > imagery, bypasses the need for knowledge of certain logical or even
> > physical properties of the represented domain, and bypasses the need
> > for an inefficient combinatorial process like logical inference.
>
> From here we can see that, when the mind views something, we do not
> have to run a finite-time algorithm to process the given information
> received; we can understand it immediately. From this we can perceive that
> the cognitive architecture has some cognitive method incorporated. In machine
> form this would be hard to model without having the same biological
> architecture as the mind it is modelling.

Agreed (apart from the 'biological' part). This is another example of
separating the hardware from the software.

>
> > PYLYSHYN
> > ARCHITECTURE AND COGNITIVE CAPACITY
> > Explaining intelligence is different from predicting certain particular
> > observed behaviours. In order to explain how something works we have to
> > be concerned with sets of potential behaviours, most of which might
> > never arise in the normal course of events.
>Worrall:
> We can interpret that we have to have a complete set of potential behaviours
> to model the total process efficiently. Otherwise the model would not
> react as the mind would for a given process.

Pylyshyn's argument seems slightly circular. To explain how the mind works
we need to say what may happen, so we need to predict the outcome of a
situation. But how do we predict outcomes if we don't understand how the
system works? I'd say experimental data is the answer to this problem.
(If 1000 people react in the same way to some stimulus, we can say with
some certainty that this is how the system behaves, without having to
understand why.)

> From here the author introduces the idea that, for a given process, the
> outcome of that process is not solely due to the internal architecture
> of the model.
>
> > PYLYSHYN
> > But how can the behavior of a system not be due to its internal
> > construction or its inherent properties? What else could possibly
> > explain the regularities it exhibits? It is certainly true that the
> > properties of the box determine the totality of its behavioral
> > repertoire, or its counterfactual set; i.e. its capacity.
>
> He suggests that, given the system, its capacity and the processes it
> can perform are based on the architecture, but the outcome of those processes
> is not solely based on the architecture.

Yep. I wonder whether the range of outputs a process can produce defines
some part of its capacity.

> Worrall:
> He argues that the systems reactions are based on knowledge that has been
> gathered and that the mind does not use a biological method for reasoning,
> but uses a rule based system to perceive things.

Don't confuse perception and reasoning (surely two completely different
things?). In the same way that the other levels in the mind need the level
corresponding to architecture, this level needs the other levels.

> > PYLYSHYN
> > What the biological mechanism does provide is a way of
> > representing or encoding the relevant knowledge, inference rules,
> > decision procedures, and so on -- not the observed regularity itself.

> Worrall:
> The idea that we are bound by knowledge is coherent, although some biological
> functions must apply, as a newborn child has certain abilities to do
> things, and must be able to perceive certain things.

I think Worrall is reading too much into this statement. All Pylyshyn is
saying is that if we accept the mind has rules, knowledge, etc., then
biology gives us a way of encoding them (as the 'data structures' referred
to earlier, presumably).

> ARCHITECTURE AND THE AUTONOMY OF THE COGNITIVE LEVEL
>
> > PYLYSHYN
> > The need for an independently-motivated theory of the cognitive
> > architecture can also be viewed as arising from the fundamental
> > hypothesis that there exists a natural scientific domain of
> > representation-governed phenomena (or an autonomous "knowledge-level").
>
> Worrall:
> Here he introduces the concept that we have an automatic knowledge level, e.g.
> if we put our hand into the fire, we automatically pull it away without
> thinking to pull the arm away! Basic knowledge and operations are important
> in perceiving things, and can be seen especially in newborn children.
>
This is a reaction (it hurts!); afterwards we have knowledge (don't put your
arm in the fire, it hurts). Pylyshyn is stating the need to understand how
the aforementioned knowledge level works, and studying cognitive
architecture may help with this. However, he stated previously that this
knowledge level "does not reveal properties of the architecture". So why
should the reverse hold true?

> If something can be shown to be changed, e.g. the way we perceive colours,
> then it can be said to be based on the knowledge of that system and not on
> the automatic knowledge level. It involves reasoning about and perceiving
> the problem based upon the rules.

Pylyshyn offers his 'cognitive impenetrability' idea as a tool to help
separate the software from the hardware. Has anyone actually attempted to
follow this up with psychological tests (this paper is 12 years old) to
add credibility to his idea?

Mark.

mat297@soton.ac.uk


