Newsgroups: comp.ai.philosophy
Path: utzoo!utgpu!watserv1!watdragon!violet!cpshelley
From: cpshelley@violet.uwaterloo.ca (cameron shelley)
Subject: Re: Just Minds and Machines this time
Message-ID: <1991Jan31.040637.15353@watdragon.waterloo.edu>
Sender: daemon@watdragon.waterloo.edu (Owner of Many System Processes)
Organization: University of Waterloo
References: <11656.9101241836@s4.sys.uea.ac.uk> <1991Jan25.022026.12999@watdragon.waterloo.edu> <16510@venera.isi.edu> <1991Jan27.185935.18038@watdragon.waterloo.edu> <16537@venera.isi.edu> <1991Jan29.165646.17764@watdragon.waterloo.edu> <16558@venera.isi.edu>
Date: Thu, 31 Jan 91 04:06:37 GMT
Lines: 76

In article <16558@venera.isi.edu> smoliar@venera.isi.edu (Stephen Smoliar) writes:
>In article <1991Jan29.165646.17764@watdragon.waterloo.edu>
>cpshelley@violet.uwaterloo.ca (cameron shelley) writes:
>>  For example, the system should deal differently with:
>>"%^%#*%^&%^%^%%^$#@!#$#" (noise), "After being run *^@r by the truck,
>>the man said 'Ouch'!" (where "over" can be interpolated), "flying
>>airplanes make people ill" (ambiguity), and "pick peek poke pack puck pink"
>>(nonsense).  Should I interpret the last as a description of a
>>psychedelic hockey game, or just respond "What?"  I think both should
>>be options, but NN's (as they exist currently) do not have a choice
>>in such a case.
>>
>Cam, you interpreted my account of Peter Todd's work as suggesting "that the
>notion of 'ill-formed' is context-dependent."  I would say that the above
>nonsense example supports a similar conclusion:  How you choose to interpret
>depends upon the context in which you received it.  I would even say the same
>of your noise example.  If you saw those symbols in some avant-garde poetry
>magazine, you might very well try to kick in SOME attempt at interpretation,
>rather than just writing them off as noise (even if they had been scrupulously
>generated by a truly random source).
>

Actually, I was wondering about that after sending that post, and it
seems to me that you're correct.  If I equate 'context' with the
'interpretation function' being applied, then I suppose you could have
a crack at anything.  I am reminded that in Montague semantics (if I
recall correctly), the interpretation function carries a superscript
identifying which of the multiple possible 'worlds' it applies to.  The
situation seems similar here, with 'world' playing the role of 'context'.
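
For concreteness (hedging again on my memory of the notation), the
denotation there is relativized along the lines of

    $[\![\alpha]\!]^{M,w,g}$

where M is the model, g the variable assignment, and w the world index:
the same expression alpha can denote different things as w varies, just
as the same input string can receive different interpretations as the
context varies.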

>Nevertheless, I think we are basically in agreement.  Todd's system "works"
>because its architecture assumes a single context.  I have not seen any
>convincing demonstration of a neural net which would be capable of maintaining
>multiple contexts, such as your nonsense example requires, let alone having a
>level of control for DECIDING which context is appropriate for a given
>interpretation task.

Ok, let me try this out.  To summarize the information distinctions
I made previously (about how well transmitted constraints can be
satisfied in general): within a fixed context, let '@' = noise, '0' =
an under-defined point, '1' = a uniquely-defined point, and '*' =
multiply-defined points.

[Oops!  I should mention that by 'point', I mean a coordinate in a
solution space...]
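
To make the four cases concrete, here is how I'd file the example
sentences from earlier in this thread; the assignments are my own
rough reading, within a fixed context of English sentences:

    # Rough filing of the earlier examples under the four symbols.
    outcomes = {
        "%^%#*%^&%^%^%%^$#@!#$#":               "@",  # pure noise
        "pick peek poke pack puck pink":        "0",  # units ok, no whole
        "After being run *^@r by the truck...": "1",  # 'over' interpolable
        "flying airplanes make people ill":     "*",  # two readings
    }
    for sentence, symbol in outcomes.items():
        print(symbol, ":", sentence)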

Thus the options (under a fixed context) that a complete classification
system should have are (@,0,1,*).  I know of no NN which has a node
dedicated to '@', i.e. to explicitly recognizing over-noisy input; I am
also unaware of any which explicitly recognizes '0' (which I am also
calling "ill-formed"), i.e. units which are individually recognizable
but form no interpretable whole (regardless of noise).  On the other
hand, NNs are quite capable of producing '1' or '*' interpretations
(one or more) by activating a single output mainly, or several equally.
This implies that current NN design could well be augmented with output
nodes indicating '@' and '0', which would be mutually inhibitory.  I
noticed in the Dec. CACM that Kevin Knight suggested putting noise into
normal training pairs, e.g. ([a,p,^,l,e], apple); what I'm suggesting
is ([^,%,#,&,&], @).  This also leads me to suggest pairs like
([s,t,k,a], 0), bearing in mind that I'm fixing the context on English
words.  You may take issue with the examples I've chosen, but I think
the idea is at least sound; a toy sketch of what I mean follows.
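
Here's a toy sketch of the kind of training set and output layer I
have in mind.  Everything below (the alphabet, the two words, the
one-layer softmax classifier) is my own illustrative assumption, not
Knight's setup or any existing net; the point is just that noise maps
to an explicit '@' output, ill-formed input maps to '0', and the shared
softmax normalization makes all the output nodes mutually inhibitory.

    import numpy as np

    ALPHABET = "abcdefghijklmnopqrstuvwxyz^%#&"
    MAXLEN = 5
    CLASSES = ["apple", "stack", "@", "0"]  # '@' = noise, '0' = ill-formed

    def encode(word):
        # One-hot encode up to MAXLEN characters, zero-padded.
        x = np.zeros((MAXLEN, len(ALPHABET)))
        for i, ch in enumerate(word[:MAXLEN]):
            x[i, ALPHABET.index(ch)] = 1.0
        return x.ravel()

    # Knight-style noisy pair -> word, plus the two proposed targets.
    pairs = [("apple", "apple"),
             ("ap^le", "apple"),  # recoverable noise, as per Knight
             ("stack", "stack"),
             ("^%#&&", "@"),      # over-noisy input -> explicit '@' node
             ("stka",  "0")]      # units fine, whole ill-formed -> '0'

    X = np.array([encode(w) for w, _ in pairs])
    Y = np.eye(len(CLASSES))[[CLASSES.index(c) for _, c in pairs]]

    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    # One-layer softmax classifier; the shared normalization makes the
    # output nodes (including '@' and '0') compete, i.e. inhibit each
    # other.
    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.1, (X.shape[1], len(CLASSES)))
    for _ in range(1000):
        P = softmax(X @ W)
        W -= 0.5 * X.T @ (P - Y) / len(X)  # cross-entropy gradient step

    for w, _ in pairs:
        p = softmax(encode(w)[None, :] @ W)[0]
        print(w, "->", CLASSES[int(p.argmax())])

On this toy data the five pairs should simply be memorized, so 'ap^le'
comes back as apple while '^%#&&' and 'stka' land on the new '@' and
'0' nodes respectively.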

As far as varying the context goes, there are examples I know of in
which entire sub-structures of a NN are mutually exclusive but deal
with the same input 'item'.  My original mention of two differently
structured nets independently competing for the same 'output' seems
similar.  On the other hand, I have no idea how a net might be allowed
to vary its context freely (or creatively!).  I would be curious to
know how the linkage structure you proposed might account for this.
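
On the competition part at least (not the free or creative variation,
which I still can't see how to get), here is a crude sketch of my own
devising: two sub-nets, each standing in for one fixed context, offer
rival interpretations of the same input, and a simple confidence test
(lowest output entropy) decides between them.  The sub-net names, the
random linear maps, and the entropy test are all illustrative
assumptions rather than a published design.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def entropy(p):
        return float(-(p * np.log(p + 1e-12)).sum())

    def compete(x, subnets):
        # Each sub-net interprets the same input; the least-uncertain
        # (lowest-entropy) interpretation wins the shared output.
        scored = [(name, f(x)) for name, f in subnets]
        return min(scored, key=lambda pair: entropy(pair[1]))

    # Two fixed random linear maps stand in for differently structured,
    # separately trained sub-nets.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(4, 8))
    B = rng.normal(size=(4, 8))
    subnets = [("prose-context",  lambda x: softmax(A @ x)),
               ("poetry-context", lambda x: softmax(B @ x))]

    x = rng.normal(size=8)
    winner, dist = compete(x, subnets)
    print(winner, np.round(dist, 3))

The unsatisfying part, of course, is that the decision rule is bolted
on from outside; nothing here lets the net invent a new context for
itself.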

--
      Cameron Shelley        | "Absurdity, n.  A statement of belief
cpshelley@violet.waterloo.edu|  manifestly inconsistent with one's own
    Davis Centre Rm 2136     |  opinion."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce
