Re: Turing Test question

From: HARNAD Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Thu Mar 29 2001 - 18:27:29 BST


For the article that emerged from this exchange, see:

http://www.vny.com/cf/news/upidetail.cfm?QID=173020

On Thu, 29 Mar 2001, MJ Martin wrote:

> Dr. Harnad:
>
> I am reading your work on Turing and wondered if you could comment on
> the below information. I am a science correspondent with United Press
> International.
>
> Does the information below strike you as accurately portraying
> Turing's design?
>
> If so, is it reporting anything newsworthy or of great interest to the
> scientific community?
>
> Thanks.
>
> Mike Martin
> Senior Science Correspondent
> United Press International
> www.upi.com
>
> (Ai) is using progressive mathematical and cognitive theories and
> techniques to make the sci-fi dream of talking to our computers a
> reality. Ai's researchers have developed a computer program designed
> to learn how to carry a conversation and succeeded in training it to
> converse on the level of an 18-month-old baby. Using the Child
> Machine concept of Alan Turing (the "father of artificial
> intelligence") and a developmental model created in the 1950s, Ai's
> research division
> created Hal, Ai's baby algorithm, which demonstrates the linguistic
> capabilities of an 18-month-old baby. Recently, Ai met its first
> milestone when Hal passed the "Infant Turing Test", in which a user
> would not be able to distinguish if he or she were interacting with a
> human or a computer. This is just one of a series of milestones, based
> on human development, that Hal is expected to pass before it achieves
> the conversational capabilities of an adult speaker. The algorithm is
> currently being trained by a team of cognitive scientists and child
> development experts to acquire language.

Dear Mike,

Hal IS newsworthy, but it ISN'T the Turing Test (TT).

The TT is about giving a system (not necessarily just a computer) a
mind by giving it full human capacities. In fact, there are reasons
[the Symbol Grounding Problem] for thinking that a computer alone
couldn't pass the TT, only a robot could. And there are further
reasons [Searle's Chinese Room Argument] for thinking that if a
computer alone COULD pass the TT, then it would be the TT that had
failed, because the system [the computer] still would not have a
mind.

But to pass the TT, the system must (1) be able to DO everything a real
person can do and (2) be able to do it in a way that no person can
tell apart from a real person. There is no "infant TT," no "toddler TT"
etc. (there is also no worm TT, fish TT, dog TT, monkey TT). There is
just the TT, the full TT, which is based on TOTAL indistinguishability
from ourselves in performance capacity (not necessarily appearance) --
for a lifetime, if necessary.
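For concreteness, the structure of Turing's criterion can be sketched as a
judging loop. This is only an illustration under stated assumptions: the
`imitation_game`, `canned`, and `decide` names are hypothetical stand-ins,
not anyone's actual test harness.

```python
import random

def imitation_game(questions, decide, candidate, human):
    """One session of Turing's imitation game: a judge questions two
    hidden interlocutors over text alone and then must name which one
    is the machine. Returns True if the judge guesses wrong, i.e. the
    candidate is indistinguishable on this occasion."""
    # Randomly hide the candidate behind channel "A" or "B".
    labels = ["A", "B"]
    random.shuffle(labels)
    channels = {labels[0]: candidate, labels[1]: human}

    transcript = []
    for q in questions:
        # Both interlocutors answer every question; the judge sees only text.
        replies = {label: agent(q) for label, agent in channels.items()}
        transcript.append((q, replies))

    guess = decide(transcript)   # judge names the supposed machine: "A" or "B"
    machine_label = labels[0]    # where the candidate actually sits
    return guess != machine_label

# Toy stand-ins (hypothetical): both sides give the same canned answer,
# so the judge has nothing to go on and can only guess at chance.
canned = lambda q: "I would rather not say."
questions = ["Write me a sonnet.", "Add 34957 to 70764."]
decide = lambda transcript: "A"  # a fixed, uninformed guess

print(imitation_game(questions, decide, canned, canned))
```

The point of the paragraph above is that the real test has no such toy
version: the question list is open-ended, the judging runs for a lifetime if
necessary, and the candidate must match TOTAL human performance capacity, not
a two-question exchange.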

On the other hand, there will no doubt need to be worm, fish... infant,
toddler MODELS before we come even close to designing anything that can
pass the TT. That's where this kind of work comes in. You DO have to
crawl before you can walk, so to speak -- not in order to be a
candidate who is able to pass the TT (for the winning candidate could be
designed as an adult able to pass from day 1), but in order to be able to
design such a candidate!

So for that reason, Hal is interesting as one of these early "toy"
models.

Hal's real handicap, though, is not that it is only trying to model the
capacities of an infant or toddler, but that it is trying to model them
using a computer program alone. For the problems with that approach
(Searle's Chinese Room Argument and the Symbol Grounding Problem) are
not only there at the TT level, but at the toy-model level too: Mental
states are not just computational states.

Hal will need to be a robot, with sensorimotor capacities too.
(18-month-olds are not that good at letter-writing anyway!)

See:

Searle, John. R. (1980) Minds, brains, and programs. Behavioral and
Brain Sciences 3 (3): 417-457
http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.sgproblem.html

Harnad, S. (1992) The Turing Test Is Not A Trick: Turing
Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4)
9-10.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.turing.html

and

Harnad, S. (2000) Minds, Machines, and Turing: The Indistinguishability
of Indistinguishables. Journal of Logic, Language, and Information 9(4):
425-445. (special issue on "Alan Turing and Artificial Intelligence")
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html

Cheers,

Stevan
--------------------------------------------------------------------
Stevan Harnad
Professor of Cognitive Science
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
SO17 1BJ, UNITED KINGDOM
phone: +44 23-80 592-582
fax: +44 23-80 592-865
http://www.cogsci.soton.ac.uk/~harnad/
http://www.princeton.edu/~harnad/



This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:26 BST