Re: Pylyshyn on Cognitive Architecture

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Thu Mar 09 2000 - 15:40:56 GMT


On Tue, 7 Mar 2000, Terry, Mark wrote:

> is the virtual machine actually a machine or just an
> interpreter between the input and the actual hardware?

It's the latter; but when Machine X is emulating Machine Y, you need to
write code for Machine Y, not Machine X.
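
To make that concrete, here is a toy sketch (my own illustration, in
Python; the two-instruction "Machine Y" is invented for the example).
Machine X is just the host that interprets; the program itself has to
be written in Machine Y's instruction set:

    def run_on_machine_y(program, value):
        # Machine Y is a hypothetical machine with just two instructions.
        # Machine X (here, the Python runtime) merely simulates them.
        for opcode, operand in program:
            if opcode == "ADD":
                value += operand
            elif opcode == "MUL":
                value *= operand
            else:
                raise ValueError("Machine Y has no instruction: " + opcode)
        return value

    # This program is in Machine Y's language, not Machine X's:
    program_for_y = [("ADD", 3), ("MUL", 2)]
    print(run_on_machine_y(program_for_y, 1))   # (1 + 3) * 2 = 8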

> I think the point Pylyshyn is
> making is that the architecture of the system (its physical layout) is only
> described in one 'level' of an entire system (presumably he is thinking of
> the system in a hierarchical way). At this level the state we are in is
> given some meaning (e.g., we realise that the thing in front of us is a
> wall). Therefore, the interpretation we give the data structures being
> passed to this level corresponds to our 'cognitive architecture'. This
> seems deliberately vague - how does it correspond? And more to the point,
> why should this task (giving cognitive meaning to data) be related
> to the design of the system?

Pylyshyn thinks that the only explanation of cognition is a computational
explanation: The computation is taking place on the "virtual machine."
Anything below that level is not cognitive, and it might as well be
considered hardware.

> What exactly does Pylyshyn mean by 'cognitive contents'? Why don't
> the lower levels have any?

Cognitive contents are the things a system "knows." Knowledge (according
to Pylyshyn) is computational, and hence only occurs at the level of the
virtual machine. Below that architectural level, it's all just
irrelevant implementational details.

This is orthodox computationalism ("Strong AI": cognition =
(implementation-independent) computation).

> Strong equivalence seems to be the theory that algorithms designed for one
> machine will not work on other machines (I don't think any computer
> scientists will argue with this).

Where would "strong equivalence" fit into the Turing Hierarchy I gave
last week?

> This 'cognitively impenetrable' phrase isn't very clear. What Pylyshyn says
> before this is a clearer indication of what he means. He talks of a
> "knowledge level" which carries out reasoning based on knowledge. He
> argues that because this level exists, behaviour based on knowledge does
> not reflect on the cognitive architecture (presumably as this is not
> represented as 'cognitive contents').

If there is something about you that my giving you new information
cannot change, then the unchangeable thing is either in the hardware, or
below the level of the virtual machine that is being emulated by the
hardware plus software. Knowledge is cognitively penetrable; vision is
not.
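
Purely as a toy sketch of that distinction (my own hypothetical
example; the standard case in this literature is the Mueller-Lyer
illusion, which keeps looking unequal no matter what you know):

    beliefs = {"the lines are equal"}   # new information can be added here

    def judge_line_lengths(stimulus):
        # Cognitively penetrable: the verdict changes when beliefs change.
        if "the lines are equal" in beliefs:
            return "equal"
        return stimulus["apparent"]

    def perceive_line_lengths(stimulus):
        # Cognitively impenetrable: computed from the stimulus alone, so
        # telling the system the lines are equal cannot alter this output.
        return stimulus["apparent"]

    muller_lyer = {"apparent": "top line looks longer"}
    print(judge_line_lengths(muller_lyer))     # "equal" -- knowledge wins
    print(perceive_line_lengths(muller_lyer))  # the illusion persists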

> [The] 'strong realist view' states that a chess
> playing machine must be running exactly the same algorithm (and, given
> his previous arguments, must thus have equivalent architecture) as the
> chess-playing human to be a valid model of the human's cognitive process.
> This is a very strict definition, and would render most AI useless
> (as a model of cognitive process) before it's even run.

Except as a first approximation. But apart from that, is there any
reason for thinking Pylyshyn is wrong in insisting on the right
algorithm, and not just any one that works?

> > > PYLYSHYN
> > > Architecture and Cognitive Capacity
> > > Explaining intelligence is different from predicting certain
> > > particular observed behaviours. In order to explain how something
> > > works we have to be concerned with sets of potential behaviours,
> > > most of which might never arise in the normal course of events.
>
> Pylyshyn's argument seems slightly circular. To explain how the mind works
> we need to say what may happen, so we need to predict the outcome of a
> situation. But how do we predict outcomes if we don't understand how the
> system works? I'd say experimental data is the answer to this problem.
> (If 1000 people react in the same way to some stimulus, we can say with
> some certainty that this is how the system behaves, without having to
> understand why.)

You are right, but Pylyshyn would not disagree. When we model
intelligence (cognition) we are modeling CAPACITY, i.e., what the mind
CAN DO. We are not just modeling some specific toy thing a particular
mind does do: It's capability we want to capture.

We know, roughly speaking, what the human mind can do; but sometimes the
experimental details are needed to supplement this general, informal,
intuitive (Turing-Test-type) sense we have of what the mind can and
cannot do.
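
To see the difference between capturing capacity and reproducing
observed behaviour, consider a toy contrast (again my illustration, in
Python): a lookup table of recorded responses versus a procedure that
generates the whole set of potential behaviours.

    # Modeling observed behaviour: a finite record of what one mind did do.
    observed_responses = {"2 + 2": "4", "3 + 5": "8"}

    def toy_model(question):
        return observed_responses.get(question)   # silent on anything unseen

    # Modeling capacity: a procedure covering the set of potential
    # behaviours, including ones that never arose in testing.
    def capacity_model(question):
        a, b = question.split(" + ")
        return str(int(a) + int(b))

    print(toy_model("2 + 2"))         # "4"  -- matches the data
    print(toy_model("17 + 25"))       # None -- no capacity beyond the data
    print(capacity_model("17 + 25"))  # "42" -- generalises as a mind can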

> > > PYLYSHYN
> > > The need for an independently-motivated theory of the cognitive
> > > architecture can also be viewed as arising from the fundamental
> > > hypothesis that there exists a natural scientific domain of
> > > representation-governed phenomena (or an autonomous "knowledge-level").
> >
> Pylyshyn is stating the need to understand how the...
> knowledge level works, and studying cognitive architecture
> may help this. However, he stated previously that this knowledge level
> "does not reveal properties of the architecture". So why should the
> reverse hold true?

Pylyshyn thinks cognition is computation, so the basis of our cognitive
capacity, and the explanation of our cognitive capacity, will be
computational, algorithmic. The algorithms are all executed on the
"cognitive architecture" -- the "virtual machine" that the brain is
implementing. The implementational details are irrelevant.
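
Again purely as illustration (mine, not Pylyshyn's): implementation
independence means the same virtual-machine program behaves identically
however the machine underneath is realised. Continuing the toy
two-instruction machine from above:

    def interpreter_a(program, value):
        # One realisation: a straightforward loop over the instructions.
        for opcode, operand in program:
            value = value + operand if opcode == "ADD" else value * operand
        return value

    def interpreter_b(program, value):
        # A different realisation of the same machine: recursion, not
        # iteration (the toy instruction set has only ADD and MUL).
        if not program:
            return value
        opcode, operand = program[0]
        value = value + operand if opcode == "ADD" else value * operand
        return interpreter_b(program[1:], value)

    # Same program, same behaviour: the difference between the two
    # realisations is, on this view, cognitively irrelevant.
    program = [("ADD", 3), ("MUL", 2)]
    assert interpreter_a(program, 1) == interpreter_b(program, 1) == 8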

> Pylyshyn offers his 'cognitive impenetrability' idea as a tool to help
> separate the software from the hardware. Has anyone actually attempted to
> follow this up with psychological tests (this paper is 12 years old) to
> add credibility to his idea?

Here's an update:

Pylyshyn, Z. (1999) Is vision continuous with cognition? The case of
impenetrability of visual perception. Behavioral and Brain Sciences
22(3): 341-423.

http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.pylyshyn.html

Stevan


