> From: "Nelson Claire" <CLN195@psy.soton.ac.uk>
> Date: Mon, 20 May 1996 11:16:41 GMT
> What is Pylyshyn's critique of neural nets ?
> The concept of neural nets was formulated in order to produce a model
> that had a greater degree of neurosimilitude to actual neural action
> than was suggested in previously proposed information processing
> models. A neural net consists of an input layer which communicates with
> the output neurode by a system of interconnecting neurodes. The most
> basic of these is the perceptron which consists of just two layers.
> However it has been found that by incorporating hidden layers a neural
> net can be trained to give feedback. This feedback serves to either
> inhibit or excite the connections which in turn will decrease or
> increase the likelihood of getting a correct output next time.
Neural nets are not trained to GIVE feedback. Feedback is used to train
them: In Backpropagation nets, an input comes in, makes its connections
through the hidden layer(s) and then produces an output. If this output
is CORRECT, this is "backpropagated" along all the connections that got
the input to the right output, and they are all strengthened a little.
If the output is INCORRECT, the backpropagation weakens those
connections. Eventually, the net learns to get the right output for
the right input every time (if the problem is one that the net is able
to learn: a net WON'T learn to sort grammatically correct from
incorrect strings of natural language symbols, because of the "poverty
of the stimulus," though it may work for setting the parameters on UG).
In ordinary backprop nets, the feedback about whether the output was
correct or incorrect comes from a built-in instructor, but in the real
world, there is no reason the feedback should not come from the outside
world, in the form of feedback from the CONSEQUENCES of making the wrong
response (getting sick from eating a bad mushroom, or getting nourished
from eating a good one). Skinner could be quite happy with that.
Feedback is a much more general mechanism than just that found in neural
nets: A thermostat works on feedback: If the thermometer drops below
a certain point, it triggers the heater to go on. When the heat rises
above that point again, it turns the heater off. This is negative
feedback, because turning on the heater eventually produces the heat
that turns the heater off. There's positive feedback too, as when the
microphone at a rock concert picks up its own humming and amplifies
that, and it eventually grows into a screech!
Backprop uses positive feedback when the output is correct and negative
feedback when the output is incorrect.
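The thermostat's negative-feedback loop can be sketched as a toy
simulation (the physics here, the heating rate, the heat leakage and the
setpoint, are made-up illustrative numbers):

```python
# Toy negative feedback: the heater's own effect (heat) is what
# eventually switches the heater off, so the temperature is held
# near the setpoint instead of running away.

def simulate_thermostat(temp=15.0, setpoint=20.0, steps=50):
    heater_on = False
    history = []
    for _ in range(steps):
        if temp < setpoint:
            heater_on = True    # too cold: feedback turns heater on
        elif temp > setpoint:
            heater_on = False   # warm enough: feedback turns it off
        temp += (1.0 if heater_on else 0.0) - 0.3  # heating minus leakage
        history.append(temp)
    return history

history = simulate_thermostat()
print(history[-5:])   # hovers around the 20-degree setpoint
```

A positive-feedback version, adding heat in proportion to the current
temperature, would grow without bound, like the microphone's screech.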
> Pylyshyn criticises neural nets for what they cannot do as well or as
> easily as symbol systems. He maintains that if neural nets are just the
> hardware for a symbol system then this is irrelevant information anyway
> as what is necessary is ' implementation independence '
The hardware details are irrelevant for a symbol system, so if a symbol
system happens to be running on neural net hardware rather than the
usual computer hardware or brain hardware, it makes no difference. The
"action", so far as cognition is concerned, is at the symbolic level:
What matters is what algorithm is running, not what hardware it is
running on. So if nets are just alternate hardware for computation,
they are not interesting. It's still the computation that's doing the
real cognitive work, hence the computation that would explain HOW it's done.
> symbols are arbitrary and can be combined and recombined in many
> different ways, neural nets stand for something as a whole and cannot
> be broken down into its constituent parts;
It's not exactly neural nets that don't have constituent parts that can
be recombined the way symbols can be: It is STATES of neural nets.
There are in general two kinds of states in neural nets that can stand
for things:
(1) LOCAL states, where the activity of one or a few specific neurodes
stands for something. (This might be thought of as analogous to the
"grandmother" cells that some have hypothesised there might be in the
brain: cells that fire only when you see or think of Grandma. They
would then be called "grandmother" receptors or grandmother detectors.)
(2) DISTRIBUTED states, in which nets detect and represent things not by having
specific neurodes standing for specific things, but through a PATTERN
of activity and connections, distributed across the entire net, not
localised anywhere in particular.
In either case, it is not clear how the recombinatory component
structure that symbols are specialised for could be gotten from neural
net states, whether local or distributed, according to Fodor &
Pylyshyn.
> for example, from a neural
> net of ' the cat sat on the mat ' a neurode cannot be found for ' cat '
> or ' mat ' independently. Thus they have not got the same manipulation
> power and are therefore much less flexible than symbol systems are.
Not just less flexible: neural net states also have much less
expressive power. In a symbol system, if you have "CAT ON MAT" you also
have CAT, MAT, ON, MAT ON CAT, CAT not-ON MAT, not (CAT ON MAT),
CAT ON X, X ON MAT, X ON CAT,
etc. etc. In a net, if you have a "CAT ON MAT" state it does not follow
that you have any of the rest -- so in a sense you don't even really
have the CAT ON MAT state, but merely the detector of a category of
inputs: cats on mats.
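The combinatorial point can be made concrete with a few lines of Python.
(The toy "grammar" below, three nouns, two relations, one composition
rule, is an illustrative assumption, not a real parser.)

```python
# From a handful of atomic symbols and one recombination rule you get
# many well-formed propositions "for free". A net's holistic
# "CAT ON MAT" state gives you none of these others automatically.

from itertools import permutations

nouns = ["CAT", "MAT", "X"]
relations = ["ON", "NOT-ON"]

# Rule: noun relation noun -> proposition (with distinct nouns)
propositions = [f"{a} {r} {b}"
                for a, b in permutations(nouns, 2)
                for r in relations]

print(len(propositions))   # 12 propositions from 5 atomic symbols
print(propositions[:3])
```

Add one more noun or relation and the set of expressions multiplies;
with recursion (propositions inside propositions) it becomes infinite,
which is exactly the systematicity a trained-up net state lacks.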
> Pylyshyn maintains therefore that neural nets are not suitable
> representations for the flexibility of language and thought which are
> purely propositional and therefore symbolic. Thus the association of
> one chunk of information with another does not reflect the complexity
> and flexibility of thought and therefore does not have the property of
> "systematicity" evident in symbol systems. -- Claire Nelson py104
The emphasis is on the "systematicity" rather than the flexibility. Look
at this email message. The word "net" appears all over the message.
Suppose you didn't know what "net" meant. Suppose you thought it meant
"jet," as in a jet airplane. Try some of the sentences in which it
appears. Do they make
sense (and do the sentences around them make sense) if you interpret
"net" as meaning "jet"? The answer is: No. But if you interpret it as
really meaning "net," it all makes systematic sense. All the
combinations in which "net" appears make sense. That's true for each and
every symbol in the message. That's not a trivial property. Symbols
have that property, and it makes it possible to give them, and all
their parts, and all the combinations of their parts (actual and
possible) a systematic interpretation. (It's also why symbolic codes can
be so hard to break, and why cryptography is such an important field.)
From this systematicity you get the power of mathematics, logic,
computer programming, and artificial and natural languages.
Neural net states have not yet shown themselves to have this property
of systematicity (except if they are used as hardware for symbol
systems, which is irrelevant). They could be trained, tediously, case by
case, to have all the CAT/MAT states I listed above, but I could have
listed an infinite number more, and those would each have to be
separately trained too. And you'd still only be approximating a symbol
system. In a real symbol system you get it all for free. So, say Fodor &
Pylyshyn, why not go straight to symbol systems instead of bothering
with nets at all?
(Reply? What about the symbol grounding problem? It's all very well that
symbols can be systematically interpreted as standing for this and that,
but in and of themselves they are just dead squiggles and squoggles. The
meaning is not in them; they merely agree systematically with the
meaning you project on them. So how are the symbols to be connected to
what they can be systematically interpreted as standing for? Maybe nets
and symbols are both needed. And what about the analog "shadows" of the
world? Maybe nets can connect symbols to the things they stand for by
finding the features in their sensory projections that allow you to
sort and categorise them correctly.)
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:41 GMT