Newsgroups: comp.ai.philosophy
Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!think.com!mintaka!bloom-picayune.mit.edu!news
From: mlevin@jade.tufts.edu
Subject: Turing Test: opinions on an idea
Message-ID: <1991May13.133711.102@athena.mit.edu>
Followup-To: mlevin@jade.tufts.edu
Sender: news@athena.mit.edu (News system)
Organization: Me, Myself, and I, inc.
Distribution: usa
Date: Mon, 13 May 91 13:37:11 GMT
Lines: 30

    I am new to this group, so if this has been covered recently,
please point me to the articles.
    I'd like to hear opinions on the following thought I had about
the Turing Test. I'll start off with a story. Suppose in X years, physics
gets to such a point where very fast storage and retrieval of
arbitrary amounts of information is easy (imagine some sort of
hyperdimensional memory, or something). They then make an enormous
'game-tree' of all possible conversations in English (taking 
into account randomizing elements, repeat questions,
etc.), and make an idiot box that simply accepts inputs from an
interrogator, and, by direct table look-up, spits out answers, which
are good enough to pass the Turing Test. I imagine supporters of the
test (except behaviorists, I guess) will not want to classify this
device as intelligent (or as a 'person') in any sense of the word.
One way out for them is to say that this device exploits advances in a
science (physics/engineering) which really has nothing to do with the
question of sentience, to produce an indistinguishable simulation of
the real thing.  Given that, what is to stop an opponent of AI (like a
dualist, for example) from saying the same thing about any
currently feasible AI project? i.e., that it exploits advances in
computer science to produce a good simulation, but really has nothing
to do with the question of primary consciousness? 
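The "idiot box" in the story is nothing more than a lookup table keyed
on the whole conversation so far. Here is a toy sketch of the mechanism
(my own illustration, not anything from the thought experiment itself):
a two-entry table stands in for the hypothetical complete one covering
all possible English dialogues.

```python
# Toy "idiot box": answers by direct table look-up, keyed on the
# entire conversation history. The thought experiment assumes the
# table covers every possible English conversation; this one holds
# just two exchanges to show the mechanism.

table = {
    ("hello",): "Hi there. What would you like to talk about?",
    ("hello", "do you think?"): "I certainly seem to. Why do you ask?",
}

def respond(history):
    """Look up the conversation so far; no computation beyond retrieval."""
    return table.get(tuple(history), "I'm not sure what to say to that.")

history = ["hello"]
print(respond(history))            # found in table
history.append("do you think?")
print(respond(history))            # deeper entry, still pure retrieval
```

The point of the story is that nothing in this mechanism changes as the
table grows; only the (assumed) storage technology does.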
     Any and all opinions are welcome. Especially, if anyone has seen
this problem brought up in the literature before (I vaguely recall
someone telling me this has already been thought of), I'd appreciate a
reference. 

Mike Levin
