> From: "Lyons, Tim" <TRL295@psy.soton.ac.uk>
> Date: Fri, 24 May 1996 15:31:58 GMT
> Searle's CRA disproves the claim that if a machine passes the
> teletype version of the Turing Test then you have no reason to doubt
> that it has a mind.
Correct, but a bit too compact for kid-sib, who does not yet know what
the CRA (Chinese Room Argument) is about or even what "CRA" means!
And it's: no better or worse reason to doubt it than you do with another
person.
> The aspect of having a mind that is in question
> here is understanding written language. If something understands
> written language then it has a mind; if it doesn't understand
> written language then it doesn't have a mind. Searle states that if
> a machine can pass the test in Chinese, it doesn't necessarily
> understand Chinese; it is simply following a set of symbol
> manipulation algorithms. If someone who doesn't speak Chinese were to
> memorise the algorithms that the computer used, then they too would
> be able to pass the test in Chinese but would still not understand a
> word of what they had written. This does not prove that machines
> cannot have minds; it simply shows that if they do, it is not due to
it is not due SOLELY to
> the computational states that they are implementing, because a
> person can implement the same computational states with the same
> symbols, and still not understand what the symbols mean.
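The symbol-manipulation point can be made concrete with a toy sketch (the rule table and symbol strings below are invented for illustration; they are not Searle's actual example, and a real Turing-Test passer would need far more than a lookup table): the rules map input symbols to output symbols, and whoever executes them -- computer or memorising person -- produces the same answers with no grasp of what the symbols mean.

```python
# Toy "Chinese Room": replying purely by symbol manipulation.
# The rule table is a made-up illustration, not real Chinese dialogue.

RULES = {
    "你好吗": "我很好",        # the executor need not know what
    "你会说中文吗": "会一点",   # these strings mean to apply the rules
}

def room_reply(symbols: str) -> str:
    """Return whatever output symbols the rule table dictates.

    Whoever runs this -- a CPU, or a person who has memorised RULES --
    produces identical answers without needing to understand them.
    """
    return RULES.get(symbols, "对不起")  # default symbol string for unmatched input

print(room_reply("你好吗"))
```

The same computational states are implemented whether the lookup is done in silicon or in a memoriser's head, which is exactly why passing the test cannot, by itself, show understanding.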
Good job, and just shy of an A, for which you would need to relate it to
the big issues, such as computation and cognition, symbol grounding,
reverse engineering and the mind/body problem.
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:44 GMT