Newsgroups: comp.ai
Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!iris.cis.ohio-state.edu!byland
From: byland@iris.cis.ohio-state.edu (Tom Bylander)
Subject: Re: LOGIC AND RELATED STUFF
Message-ID: <1991Jun26.152830.12273@cis.ohio-state.edu>
Followup-To: comp.ai.philosophy
Sender: news@cis.ohio-state.edu (NETnews        )
Organization: The Ohio State University, Department of Computer and Information Science
References: <9106190527.AA17403@lilac.berkeley.edu>
Date: Wed, 26 Jun 1991 15:28:30 GMT
Lines: 54

In article <9106190527.AA17403@lilac.berkeley.edu> ISSSSM@NUSVM.BITNET (Stephen Smoliar) writes:
>When you are standing out there in the world, the
>issue is not a matter of truth, certainty, or even "what it would be like
>for it to be true."  The issue is far simpler:  What do you do when someone
>says "there's a truck coming towards you from behind?"
>[T]he answer to this question, in its
>simplest terms, is that you BEHAVE.  In a situation as urgent as this one,
>anything you are likely to call reasoning will not take place until AFTER
>you have behaved and you are reflecting on what just happened (perhaps while
>choking on the exhaust fumes).

I think there are a couple of (common) confusions here.

First, there is the confusion of equating reasoning with deliberative,
conscious behavior.  If I accidentally touch a hot surface, and then
involuntarily flinch, have I made no inferences at all?  On the
contrary, one answer is that I (or the relevant part of my nervous
system) "perceived" that I was touching a hot surface, and then
inferred that I should move away from it quickly.  The fact that the
transformation from hotness to flinching is not deliberative/conscious
does not imply that no reasoning has occurred, or that no "truths"
have been represented [too many negatives, I know].

Second, there is the confusion between languages of analysis and the
phenomena being analyzed.  Using logic to analyze some reasoning does
not imply that the reasoning itself explicitly uses the rules of
logic.  For example, computational learning theory uses statistics and
computational complexity to analyze inductive learning algorithms.
However, the algorithms themselves do not apply the rules of
statistics or computational complexity.  Similarly, an algorithm can
be analyzed using logic without any requirement that the algorithm
explicitly use resolution, quantifiers, etc.  (Note: to avoid logical
analysis, you will have to avoid, among other things, doing any
programming at all!)  The bottom line is that any argument of the sort
"logic is bad because we don't explicitly use it" is a non-starter.
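To make the analysis-vs-phenomena point concrete, here is a minimal
sketch (in Python, chosen purely for illustration; the choice of
language and the particular function are my own, not anything from
Hector's example).  The algorithm itself only compares and assigns; the
logic -- a universally quantified loop invariant -- lives entirely in
the *analysis* of the code, stated here as comments:

```python
def list_max(xs):
    """Return the largest element of a non-empty list xs."""
    best = xs[0]
    for i in range(1, len(xs)):
        # Loop invariant (the logical ANALYSIS of this code):
        #   best == max(xs[0], ..., xs[i-1])
        # i.e., for all j < i, xs[j] <= best.  Note the code below
        # never applies resolution or manipulates quantifiers; it
        # just compares numbers.  The quantifier appears only in
        # the invariant we use to argue the code is correct.
        if xs[i] > best:
            best = xs[i]
    return best
```

The invariant plus the exit condition (i == len(xs)) entail that the
returned value is the maximum -- a logical inference about the
algorithm, made without the algorithm "doing logic" anywhere.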

>Thus, I think Hector's example is a good
>illustration of the danger of confusing the EXPLANATORY value of logic with
>any PREDICTIVE value--a point which I recently raised in comp.ai.philosophy.
 
It is very mysterious to me how you are going to make any predictions
without inferring them from some initial situation, i.e., without
doing logic.

I should mention that I do not believe that logic is going to solve
all the world's problems.  As many articles have noted, there are lots
of problems with logic.  However, just because logic has some problems
doesn't mean that logic is dispensable.  Whether we like it or not,
modus ponens is still something we will have to take into account.

Tom Bylander
byland@cis.ohio-state.edu
