Re: Harnad: The Symbol Grounding Problem

From: Sparks Simon (snjs197@ecs.soton.ac.uk)
Date: Thu May 24 2001 - 21:39:16 BST


>> HARNAD:
>> It is not enough, for example, for a phenomenon to be interpretable as
>> rule-governed, for just about anything can be interpreted as
>> rule-governed. A thermostat may be interpreted as following the rule:
>> Turn on the furnace if the temperature goes below 70 degrees and turn it
>> off if it goes above 70 degrees, yet nowhere in the thermostat is that
>> rule explicitly represented. Wittgenstein (1953) emphasized the
>> difference between explicit and implicit rules: It is not the same thing
>> to "follow" a rule (explicitly) and merely to behave "in accordance with"
>> a rule (implicitly).[2]

> Henderson:
> Rules are represented explicitly in computer programs: the archetypal
> IF...THEN...ELSE construction being the most obvious example. This applies
> just as much to programs *simulating* neural networks as it does to other
> programs: updating rules for neuron weights are specified *explicitly* in
> a software simulation of a neural network, although (like Harnad's
> thermostat) they may not be made explicit in a physical implementation.
> Similarly, a program simulating the working of the thermostat would of
> course be rule-governed, but then it wouldn't be a thermostat: the
> thermostat is implementation-dependent. A neural network simulated in
> software isn't actually a neural network just as much as a photo of a vase
> isn't a vase. If all neural network functionality can be modeled
> algorithmically, however, symbol systems will be able to offer an
> empirical explanation of how neural networks function, and thus
> connectionism will no longer be able to present a rival explanation of
> human intelligence to symbolic AI.

Sparks:
It is true that neural networks are rule-governed with respect to the
explicit weight-updating actions applied according to the network's
state (Error Back-Propagation, for example), but given a collection of
data a Multi-Layer Feed-Forward Neural Network using EBP will learn to
recognise patterns (extract features) and thereafter behave inductively
according to what it has learned. Nowhere in the implementation of the
neural network will you find explicit rules concerning the given
problem to be learned. The rules governing the neural network concern
only the method by which it should learn its own 'rules' for a given
problem, not the rules of the problem itself. Similarly, there are
explicit physical and biochemical rules by which the human brain
functions, regardless of its 'intelligent' content.
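The point can be made concrete with a small sketch (the XOR task, the network sizes, the learning rate and all variable names below are illustrative assumptions, not from the post): the only rules written explicitly in the program are the back-propagation weight updates. The XOR rule itself appears nowhere in the code, yet after training it is implicit in the learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data for the XOR problem (an illustrative choice of task).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

ones = np.ones((4, 1))
Xb = np.hstack([X, ones])            # append a bias input

# Weights: 2 inputs (+bias) -> 4 hidden units (+bias) -> 1 output.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(5, 1))

def forward():
    H = sigmoid(Xb @ W1)
    Hb = np.hstack([H, ones])        # append a bias unit
    return H, Hb, sigmoid(Hb @ W2)

_, _, out = forward()
initial_loss = np.mean((out - y) ** 2)

lr = 2.0
for _ in range(5000):
    H, Hb, out = forward()
    # The EXPLICIT rules live here: the EBP weight-update equations.
    # Nothing in this loop states the XOR rule; it becomes implicit
    # in W1 and W2 as the error is driven down.
    d_out = (out - y) * out * (1 - out)
    d_H = (d_out @ W2.T)[:, :4] * H * (1 - H)
    W2 -= lr * Hb.T @ d_out
    W1 -= lr * Xb.T @ d_H

final_loss = np.mean((forward()[2] - y) ** 2)
```

The program is rule-governed in exactly the sense described above: its explicit rules concern only how to adjust weights from errors, while the 'rules' of the problem being learned are nowhere represented as such.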


