Newsgroups: comp.ai
Path: utzoo!utgpu!watserv1!watdragon!violet.waterloo.edu!cpshelley
From: cpshelley@violet.waterloo.edu (cameron shelley)
Subject: Re: UNIFIED MODEL FOR KNOWLEDGE REPRESENTATION? (IMPOSSIBLE
Message-ID: <1991Jun12.221121.15828@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (News Owner)
Organization: University of Waterloo
References: <9106110020.AA17886@lilac.berkeley.edu> <133090@tut.cis.ohio-state.edu> <1991Jun12.130817.3621@kingston.ac.uk>
Date: Wed, 12 Jun 1991 22:11:21 GMT
Lines: 126

In article <1991Jun12.130817.3621@kingston.ac.uk> is_s425@kingston.ac.uk (Hutchison C S) writes:
>I am aware of the PAULINE program; also of a program by (I think) Cornelius
>Wegman which is similar; also of Carbonell's POLITICS.  I still believe that
>'truth' is an issue.
>
>A semantic theory will specify the meanings of well-formed sentences in the
>language.  A semantics built upon a correspondence theory of truth has a
>commonsensical appeal to it: what, after all, are sentences expressing
>propositions about if not about the world in which language users live?

Perhaps the key word here is "about".  How directly is a speaker's 
statement related to a 'real' thing?  In other words, how much 'about' do
you want?  I doubt anyone would question that speakers are interested
in expressing propositions about the world.  However, you seem to be
asserting that such statements can be 'un-abouted' in order to arrive
at an exact record of the sensations the speaker experienced at some
time.  To construct a model of the relationship between sensation and
communication (realistically), you are going to have to accept some
compromises, and thus let slip some of the absoluteness of the 'truth'
about which you're concerned.  Do you want "the truth" or "a truth"?

>Conversely, if sentences do not express propositions about the world that
>can be true or false (referring instead, for example, to speakers' 
>"perceptions" or internal representations of the world), then how can
>conversants ever know that they are talking about the same thing(s)?  

We assume we have a common ground of understandings (culture, etc.) or
we use some means to establish common ground (the negotiation Steve
mentioned).  Otherwise, we are very likely to misunderstand, and I don't
see how any theory can do better.

>A
>correspondence theory of truth, and a semantics dependent upon it, rescues
>the theory of semantics from the vagaries of 'mentalism' and 'solipsism'
>that Stephen Smoliar fears.
>
>Let's go back to my four newspaper headlines:
>
>	Rioting blacks shot dead by police as ANC leaders meet
>	Police shoot 11 dead in Salisbury riot
>	Rebels kill 11 ANC men
>	Racists murder Zimbabweans
>

If you're after *the* truth, then you'll have to provide *the* definitions
of "riot", "racist", and so on (as I guess John Bradshaw pointed out).  These
are perceptual categories, possibly unique to each speaker, and not things
you can measure, weigh, or otherwise legitimately encode in a formal
language---at least not while capturing all the variations involved.  It
might be tempting to define a 'correct' riot, and then assert that others
are incorrect, but then you've done nothing but become another interpreter
with your own opinion.

This situation doesn't preclude the possibility of understanding other
speakers, but it does mean understanding will require work (in the 
literal sense).  Occasionally, it may require the agent to shift the
'fixed-points' (axioms, assumptions) of interpretation, something I
haven't observed truth-conditional theories to be good at.

>It seems to me that talk of 'partial truths', 'negotiation', and so on, may
>not get us very far.  If I'm negotiating with you, I'm really just trying to
>tell you why you are (mostly) wrong and I am (mostly) right.  If I adduce
>evidence to support my claims, then we may end up negotiating what counts as
>evidence.  We're stuck in a hopeless regress.  (Try telling one billion
>Christians or one billion Muslims they're wrong -- especially if it is
>perfectly obvious to you that Humanistic Buddhism is the only right way. Try
>negotiating with the Jehovah's Witness on your doorstep.  Try telling the
>free market liberal about the unspeakable suffering and brutality that
>capitalism has wrought upon the cheap labour markets of the Third World.)

What you seem to be saying is that the *process* of understanding (or
failing to understand) is very hard in difficult cases.  I doubt anyone
would dispute that.  Unfortunately, truisms don't support one position
over another.  Perhaps we should consider a concrete example.  What
distinction would a truth-functional theory make between the following
pair of utterances?

	The cat sat on the mat.
	The mat was sat on by the cat.

Although they differ, the usual theory would assign both the same
semantic form and say that either both are true or both are false.  But
if our communicative apparatus exists just to transmit the truth, why do
we have more than one token for the same proposition?  The obvious
explanation is the speaker's desire to emphasize "cat" or "mat" (or,
conversely, to de-emphasize the other).  Even in a description of a
simple, uncontroversial event, point of view can play a role.
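To make the point concrete, here is a toy sketch (in Python; the function
names and the crude predicate-argument notation are my own invention, not
any real semantic theory's) of how a purely truth-conditional treatment
collapses the two surface forms into one logical form, discarding the
emphasis:

```python
# Toy illustration: a purely truth-conditional semantics maps both the
# active and the passive sentence to the same logical form, so any
# information carried by the choice of surface form is lost.  Only the
# two example sentences are handled; a real parser would be far more
# elaborate.

def logical_form(sentence):
    """Reduce a surface sentence to a crude (predicate, args...) tuple."""
    s = sentence.lower().rstrip('.')
    if s == "the cat sat on the mat":
        return ("sit_on", "cat", "mat")       # agent, location
    if s == "the mat was sat on by the cat":
        return ("sit_on", "cat", "mat")       # passive: same logical form
    raise ValueError("unparsed sentence")

active  = logical_form("The cat sat on the mat.")
passive = logical_form("The mat was sat on by the cat.")

# Truth conditions are identical: in any model, both sentences come out
# true or both come out false.  The speaker's choice to front "cat" or
# "mat" (topic/emphasis) is simply gone.
print(active == passive)   # True
```

Whatever work the choice between active and passive is doing, it is
invisible at this level of representation.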

Now, how true are the following two headlines from earlier?

	Rebels kill 11 ANC men
	Racists murder Zimbabweans

We may judge that members of one group killed members of another, but
the points of view have probably dictated the exact descriptions.
Were the Zimbabweans really "murdered", or merely "killed"?  Most people
would say there is a difference between the two; the two terms are certainly
used to different effect.
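The trouble cuts the other way from the cat/mat case.  If a formal theory
keeps "murder" and "kill" as distinct predicates (a minimal sketch below,
again in Python with invented names and a single hand-coded meaning
postulate), then the two headlines are not even truth-conditionally
equivalent, and picking one over the other already encodes a point of view:

```python
# Toy sketch: "murder" entails "kill" but not vice versa, so the two
# headlines make truth-conditionally different claims about the same
# physical event.  The predicate names and the one meaning postulate
# are hand-coded for this example only.

def entails(p, q):
    """One-step entailment between (predicate, args...) tuples, using
    the single postulate murder(a, b) -> kill(a, b)."""
    postulates = {"murder": "kill"}
    (pred_p, *args_p), (pred_q, *args_q) = p, q
    if args_p != args_q:
        return False
    return pred_p == pred_q or postulates.get(pred_p) == pred_q

rebels  = ("kill",   "rebels",  "anc_men")
racists = ("murder", "racists", "zimbabweans")

# With the participants held fixed, entailment runs one way only:
print(entails(("murder", "x", "y"), ("kill", "x", "y")))   # True
print(entails(("kill", "x", "y"), ("murder", "x", "y")))   # False

# And the actual headlines agree on neither predicate nor arguments,
# so neither entails the other at all:
print(entails(racists, rebels))                            # False
```

Equating the two reports requires extra identity assumptions (the rebels
are the racists, the ANC men are the Zimbabweans) that are themselves
points of view, not consequences of the truth conditions.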

>To get things in context, despite the political flavour that my question may
>appear to have taken on, my main concern is with automatic knowledge acquisition
>from text (whatever kind of text it may be).  My problem is: is knowledge
>representation going to be about an intelligent agent's models of the
>physical world or of speakers' reports about the world?  This is a technical
>rather than a philosophical issue since it impinges directly on what kinds
>of inference and what sources of knowledge are relevant to the reasoning
>process.

As with Carbonell's (and Hovy's) systems, a model of the physical world will
require 'objective' input at some point.  Since this is not really possible,
I would select the second option you give above (speakers' reports).  Also,
as I hope I pointed out, by not modelling speakers' reports, you lose
information about speakers, and speakers are presumably in the real world
too.  And then you have the problem of handling reports about things which
don't exist in some way (i.e. unicorns, Sherlock Holmes, rained-out ball
games, etc.).  A paper by Hirst I was forced to read lately (on KR of
non-existence) suggests just going with a naive model.

I would say, in summary, that using a truth conditional KR and model will
get you somewhere, but maybe not where you would think at first blush.

Sorry for rambling.  :-(

				Cam

