Re: Dennett: Cognitive Science as Reverse Engineering

From: Cove Stuart (smc198@ecs.soton.ac.uk)
Date: Tue May 29 2001 - 14:33:38 BST


Cove Stuart on Yusuf Larry

RE: Dennett: Cognitive Science as Reverse Engineering

>>Hunt:
>>Here, Dennett explains that speech perception cannot be entirely
>>data-driven, and to back up his claim he points out that although our
>>brains are equipped for speech recognition, this does not automatically
>>mean that we can understand all speech. If we can speak only English, we
>>cannot understand Chinese. He further demonstrates that if we speak
>>English but are not interested in football, and someone tries to talk to
>>us about football, then we will understand the vocabulary, but not
>>necessarily the content. For example, I do not understand a great deal
>>about football, and if someone talks to me about it, I can understand
>>the words they use, but I do not understand, for instance, the
>>"off-side" rule that might come up in conversation. I understand 'off'
>>and 'side', but not the combination of the two. The combination changes
>>the context of the words.
>
>Yusuf L:
>I totally agree. The problem of understanding language, not just the
>words but the context in which they are used, has plagued AI for
>decades. A very interesting question would be how to implement a machine
>that can pick out the context and then interpret the speech based on
>that context, without hitting the frame problem (by building up
>knowledge of every possible interpretation in every context). I doubt
>that the use of symbol grounding in a T3 candidate would help, because
>knowing what a football is, and what the game of football is, does not
>mean that the machine would be able to understand the off-side rule.
>I suspect Harnad would say: why worry? Most humans do not know the
>off-side rule, so why should one expect the T3 candidate to know it?
>However, following Turing's indistinguishability thesis, if the T3
>candidate were tested against a human who knew what the off-side rule
>was, surely it has failed the TT, or has it?

Cove:
I don't believe that not knowing the off-side rule, even if all the
benchmark candidates knew it, would mean the robot had failed TT3.
However, being able to recognise that this was an unknown piece of
knowledge, and to apply credit/blame assignment towards at least a
partial solution, is an ability that is generic to humans, and the
robot must be capable of it.
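
To make that concrete, here is a minimal toy sketch in Python. It is my
own illustration, not anything from Dennett or Harnad: the ToyAgent class
and its starting knowledge base are made up. It shows an agent that
interprets terms against a small grounded store, recognises when a term
is not covered, and assimilates an explanation when one is offered,
rather than guessing or failing silently.

class ToyAgent:
    def __init__(self):
        # Hypothetical starting knowledge: the agent is grounded in what
        # football is, but knows nothing about the off-side rule.
        self.knowledge = {
            "football": "a game played between two teams with a ball",
            "goal": "scored when the ball crosses the goal line",
        }
        self.gaps = []   # terms the agent has recognised as unknown

    def interpret(self, term):
        # Return what the agent knows, or flag the term as a gap.
        if term in self.knowledge:
            return self.knowledge[term]
        # Recognising the gap is the generic ability; the missing content
        # is not. Record the gap and ask, rather than guess.
        self.gaps.append(term)
        return "I don't know what '%s' means - can you explain?" % term

    def learn(self, term, explanation):
        # Crude credit assignment: an offered explanation fills the gap.
        self.knowledge[term] = explanation
        if term in self.gaps:
            self.gaps.remove(term)

agent = ToyAgent()
print(agent.interpret("off-side rule"))   # flags the gap and asks
agent.learn("off-side rule",
            "an attacker must not be beyond the last defender "
            "when the ball is played")
print(agent.interpret("off-side rule"))   # answered from the new knowledge

The point is not the trivial dictionary lookup but the generic behaviour
around it: noticing the gap, asking, and crediting the answer to the
right place. That seems closer to what Yusuf's question about
interpreting speech in context without hitting the frame problem
actually demands.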

This is a case of knowledge versus ability: the means of detailing
knowledge in symbolic form is generic, but having the knowledge there
in the first place is not.


