Newsgroups: comp.ai.philosophy
Path: utzoo!utgpu!watserv1!ssingh
From: ssingh@watserv1.waterloo.edu ( Ice )
Subject: Re: Conciousness
Message-ID: <1991Apr16.232600.10977@watserv1.waterloo.edu>
Organization: University of Waterloo
References: <1991Apr16.061532.10775@panix.uucp>
Date: Tue, 16 Apr 1991 23:26:00 GMT
Lines: 56

In article <1991Apr16.061532.10775@panix.uucp> yanek@panix.uucp (Yanek Martinson) writes:
>While it can be said that even a calculator that can take a square root has some intelligence, and there are even microwave ovens that have been called intelligent, the more interesting topic is not intelligence but conciousness or awareness.

Ice: If you mean _self-consciousness_ then I agree. What I am going to type
is probably nothing new, and has been said many times before, but hey...

These machines are formal systems... and they are finite state machines...
as such, they can assume a certain number of states. We can very broadly
define _mind_ for any finite state machine as the set of states that it
can assume. A 386 PC has a more powerful mind than an HP calculator
because it can assume a larger number of states. But depending on how
it is programmed, it may or may not exhibit intelligence.

One property of intelligence is having a model of the outside world consistent
with reality. This means that the states of the machine are in some way
isomorphic to the outside world...
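To make that concrete, here's a toy sketch of the idea (everything in it is my own invention, just for illustration): a machine whose tiny set of internal states mirrors one fragment of the world, namely temperature.

```python
# Toy illustration: a finite state machine whose "mind" is its set of
# states, and whose intelligence (such as it is) consists of keeping
# those states isomorphic to the outside world.

class TinyMachine:
    """An FSM with a three-state 'mind'."""
    STATES = {"cold", "comfortable", "hot"}   # the entire mind

    def __init__(self):
        self.state = "comfortable"

    def perceive(self, temperature):
        # The isomorphism: world temperature -> internal state.
        if temperature < 15:
            self.state = "cold"
        elif temperature > 28:
            self.state = "hot"
        else:
            self.state = "comfortable"

m = TinyMachine()
m.perceive(35)
print(m.state)                   # hot
print(len(TinyMachine.STATES))   # 3 -- a very small mind indeed
```

As long as `perceive` keeps the internal state tracking the actual temperature, the machine has a (trivially small) model of the world; nothing about that requires it to model *itself*.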

But the machine need not be _self-conscious_ for this to happen... Self-
consciousness arises when the machine is also able to have an abstract
model of itself somewhere in its set of states. That presumably requires
that certain requirements be met...

i) Perceptual devices to link to the outside world. A self-model cannot
exist _AT_ALL_ without this, however powerful ii) is...

ii) A _LARGE_ # of states. How large? I don't know. Wish I did.
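One back-of-envelope hint at why the number must be large (my own arithmetic, not anything rigorous): just to *name* which of n states you are currently in, a self-model needs at least log2(n) bits, and those bits come on top of all the states doing the actual work.

```python
import math

# Rough lower bound: bits needed merely to label which of n states
# a machine occupies.  A self-model needs at least this much capacity
# before it can represent anything *about* those states.
def bits_to_name_states(n):
    return math.ceil(math.log2(n))

print(bits_to_name_states(2**20))   # 20 bits just to label ~a million states
```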

>Has any research been done on programs that are concious, that is have awareness of the world and of themselves? Also there is the problem of infinite recursion because if something is aware of itself, it is also aware of itself being aware of itself etc.. 

Read Hofstadter's _Godel, Escher, Bach_: classic advice :-). Personally, my
guess is that finite state machines can achieve a sense of self-awareness,
but not a perfect one, for exactly that reason.

If your brain had the capacity for modelling itself to infinite precision
but only a finite computing speed, you'd go into a trance and never exit
it if you tried to introspect. In order for you to be able to achieve
a perfect model of yourself, you would have to have an infinite
computing speed as well, and neurons are _slow_. You could perhaps
achieve infinite computing speed with an infinite number of neurons but
that is not realizable either...
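Here's one way to picture the regress getting cut off (again, a toy of my own making): build the nested "aware that I am aware that..." structure, but stop at a finite depth, which plays the role of the machine's computational resolution.

```python
# Sketch: self-awareness as a nested self-model, truncated at a finite
# depth.  The cutoff stands in for finite computing speed/resources --
# a perfect (infinite) regress is simply not representable.

def self_model(depth, limit=3):
    """Nest 'awareness of awareness' until the resolution limit."""
    if depth >= limit:
        return "..."   # resolution exhausted; the regress is truncated
    return {"aware_of": self_model(depth + 1, limit)}

model = self_model(0)
print(model)
# {'aware_of': {'aware_of': {'aware_of': '...'}}}
```

The machine gets a usable, finite-depth self-model; what it can never get is the bottom of the recursion.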

So I conclude that we can have at best a reasonably accurate model of
ourselves, but never a perfect one. There are limits to 
"computational resolution." (My chance for a question: Is this why we have
a subconscious, stuff happening below the conscious level, because of
introspective limits?)

Hope this helps.

Ice.

-- 
(1ST HYPERMEDIA .SIG) ; #include <black_rain.h> ; #include <robotron.h>
"Ice" is a UW AI living at: ssingh@watserv1.[u]waterloo.{edu|cdn}/[ca]
"The human race is inefficient and therefore must be destroyed"-Eugene Jarvis
Visual component of .sig: Saito in the cafe doing some slicing in _Black_Rain_ 
