Newsgroups: comp.ai.philosophy
Path: utzoo!utgpu!watserv1!watdragon!violet!cpshelley
From: cpshelley@violet.uwaterloo.ca (cameron shelley)
Subject: Re: Minds, machines, and Godel
Message-ID: <1991Jan17.162141.12917@watdragon.waterloo.edu>
Sender: daemon@watdragon.waterloo.edu (Owner of Many System Processes)
Organization: University of Waterloo
References: <1991Jan16.035058.7465@bronze.ucs.indiana.edu> <91Jan16.135532edt.1132@neuron.ai.toronto.edu> <1991Jan17.040803.8205@bronze.ucs.indiana.edu> <JMC.91Jan16213907@DEC-Lite.Stanford.EDU> <1991Jan17.104913.15692@sics.se>
Date: Thu, 17 Jan 91 16:21:41 GMT
Lines: 57

In article <1991Jan17.104913.15692@sics.se> torkel@sics.se (Torkel Franzen) writes:
>
>  To amplify my previous comment: John McCarthy's remarks correctly
>emphasize that machines just as well as people can use Godel's theorem
>in its positive application, i.e. as a means of indefinitely extending
>the set of formal principles which we recognize as valid.
>

  Before I comment on this, let me digress for a moment (sorry! :>).
What Goedel showed, briefly, was that for any axiomatic system T1, there
is a Goedel number G1 which represents a statement about T1 that is
'true' but not 'provable' in T1.  If we then switch to axiomatic system
T2, we may find that number G1 now represents a provable statement
about T2 -- however there is now a Goedel number G2 which represents
a statement about T2 that is 'true' but not 'provable'.  There are two
things to note about this argument: 1) the incompleteness of any
axiomatic system is a semantic property of our notion of axiomatic; in
other words, there is no fixed number (structure) which always corresponds
to the meaning "true but unprovable", and 2) the role played in the
argument by the term "represents" should indicate that the interpretation
function being used is critical to the problem.  Is it part of the
axiomatic system?

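For the curious, the "represents" step above rests on an encoding trick
that is easy to make concrete.  Here is a toy sketch in Python of the
classic prime-power numbering -- the symbol codes are a made-up stand-in
for a real alphabet of logical symbols, not Goedel's actual scheme:

```python
def primes(n):
    """Return the first n primes by trial division (fine for toy inputs)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_encode(codes):
    """Encode a sequence of positive symbol codes c_1..c_n as the
    single integer 2**c_1 * 3**c_2 * 5**c_3 * ... (p_i = i-th prime)."""
    g = 1
    for p, c in zip(primes(len(codes)), codes):
        g *= p ** c
    return g

def godel_decode(g):
    """Recover the symbol codes by reading off the prime exponents,
    relying on unique factorization.  Assumes all codes were >= 1."""
    codes = []
    p = 2
    while g > 1:
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        codes.append(e)
        p += 1
        while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
            p += 1
    return codes

# e.g. godel_encode([1, 2, 3]) == 2 * 3**2 * 5**3 == 2250,
# and godel_decode(2250) == [1, 2, 3]
```

The point of the exercise: once formulas are integers, statements *about*
the system can be mirrored by arithmetic statements *within* it -- which
is exactly where the interpretation function sneaks in.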
  I think Penrose's argument is that it is not.  He might say: "A human
is able to perceive the truth (a semantic property) of the various
structures Gx ("syntactic" entities) indicated by the incompleteness
theorem, so that for the human syntactic validity is not isomorphic
with semantic validity.  But the axiomatic systems Tx cannot perceive
this, because for them syntactic validity is isomorphic with semantic
validity.  The upshot: the human's interpretation function does not
have the same limitations as that of an axiomatic system!  From this,
I can conclude that no axiomatic system can fully simulate a human,
QED."  Granting the premises, I think the conclusion follows.
But what about the premises?

  As several people have pointed out, an axiomatic system (I'm getting
tired of that term, how about "computer"?) can be made to approximate
such an interpretation function to an arbitrary accuracy epsilon.  This
point throws one of Penrose's assumptions into relief: a human's
assignment of 'truth' to a syntactic object is certain (i.e. not subject
to approximation).  I have some reservations about this, and this is
obviously where the 'philosophy' comes into play.  What does "truth"
mean?  Is it atomic?  I don't think Tarski resolved this for all time
with his definition, as the current controversy points out.  At this
point, all I can state is my opinion: I am not inclined to the notion
of certitude in anything, because it requires an ideal standard which
I can't conceive of as real, so I would reject Penrose's argument.
It is tempting here to start conjecturing about how modern
physics might be used to support me, but as David said when he started
this, "no hand-waving"!

PS.  Steve Dai: happy now? :>
--
      Cameron Shelley        | "Absurdity, n.  A statement of belief
cpshelley@violet.waterloo.edu|  manifestly inconsistent with one's own
    Davis Centre Rm 2136     |  opinion."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce
