[HN Gopher] Feral Minds
       ___________________________________________________________________
        
       Feral Minds
        
       Author : PaulHoule
       Score  : 61 points
       Date   : 2024-01-22 18:56 UTC (1 day ago)
        
 (HTM) web link (www.noemamag.com)
 (TXT) w3m dump (www.noemamag.com)
        
       | jruohonen wrote:
       | Surely soon enough, if not already, we will be able to test
       | Jonze's postulate? That is to say, what is interesting here is:
       | 
       | "Concerned, Samantha begins discussing these feelings with other
       | AIs -- and quickly finds relief communicating at a speed and
       | volume not intelligible to Theodore and other users."
       | 
       | Ref.:
       | 
       | https://news.ycombinator.com/item?id=39089720
        
       | 082349872349872 wrote:
       | Compare https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)
        
       | djoldman wrote:
       | Unfortunately, the hype about LLMs has generated breathless
       | ruminations on AGI and consciousness. Dig a little deeper and
       | there doesn't seem to be much there, as one may find that these
       | terms are not adequately defined.
       | 
       | Here we see this attempt:
       | 
       | > First, a disclaimer. Consciousness is a notoriously slippery
       | term, if nonetheless possessed of a certain common-sense quality.
       | In some ways, being conscious just means being aware -- aware of
       | ourselves, of others, of the world beyond -- in a manner that
       | creates a subject apart, a self or "I," that can observe.
       | 
       | This is followed by paragraphs correctly calling out the
       | inadequacy of the above definition. Nowhere, though, does the
       | reader get a satisfactory answer: what really is consciousness?
       | 
       | One might say who cares if they don't define it? Well, if you
       | don't define it, there's no point in discussing whether it
       | exists, how it came to be, or what it comprises.
       | 
       | You might as well be asking the question: how close are we to
       | quidlesmoopy?
        
         | 082349872349872 wrote:
         | I have a definition of consciousness: consciousness arises when
         | a creature not only applies a theory of mind to others
         | (predators, prey, and its conspecifics would all be likely
         | others) but also applies a theory of mind to itself. (given
         | that we have the most data on ourselves, it is not surprising
         | that we'd have an "I", but it is a little surprising that the
         | "I" modelled usually has such a high variance from others'
         | models of oneself)
        
           | ben_w wrote:
           | Great (although that's a hypothesis for a cause rather than a
           | definition), but for any discussion with another person, you
           | have to be sure in advance that they don't use one of the
           | other 49 common meanings of "consciousness".
           | 
           | What I care about when I ask if an AI is or isn't conscious,
           | is if it has the kind of experiences of existing that I know
           | I have.
           | 
           | Why is there a feeling of sensations rather than just the
           | sensations themselves in this body of mine? How is it that I
           | am more than mere stimulus-response? Is it homunculi all the
           | way down?
        
             | k__ wrote:
             | I found the works of Thomas Metzinger illuminating, at
             | least in terms of human consciousness.
        
             | xcode42 wrote:
             | Good luck. I've given up trying to explain qualia to
             | people, and why they are at the core of why conscience
             | matters. It's so frustrating. Once I heard it for the first
             | time it seemed obvious to me and I was glad someone had
             | already made up a word for it, but every time I try to
             | explain it to people in the consciousness discussion people
             | just look at me cross-eyed I must just be explaining it
             | wrong :)
        
             | 082349872349872 wrote:
             | How do we know that other _H sapiens_ have the same kind of
             | experiences of existing as you have? Best I can do is that
             | "I know, right?" is a fairly universal reaction, so we know
             | that even if others' qualia are not identical to ours, they
             | are at least isomorphic.
             | 
             | As to "more than mere stimulus-response", much of me is
             | pretty basic stimulus-response. Certainly the derivative
             | controller, and arguably all of the proportional controller
             | as
             | well. It's only the integral controller that's not, and
             | much of _that_ could be adequately explained by an FSM with
             | a high number of states. (it is a little known fact that
             | Regexen were originally derived as a mathematical model of
             | what neural networks could potentially recognise!)
             | 
             | (see https://thefarside.net/i/61ee3cb4 infra)
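             | 
             | (To make the regexen/FSM equivalence concrete, here is a
             | minimal Python sketch; the pattern and DFA are my own
             | illustrative choices, not from Kleene. Both recognisers
             | accept exactly the same language:)
             | 
             |   import re
             | 
             |   # A regular expression and a hand-built DFA recognising
             |   # the same language: an 'a', any mix of 'b'/'c', then 'd'.
             |   PATTERN = re.compile(r"^a(b|c)*d$")
             | 
             |   DFA = {
             |       "start":  {"a": "middle"},
             |       "middle": {"b": "middle", "c": "middle", "d": "accept"},
             |       "accept": {},
             |   }
             | 
             |   def dfa_accepts(s):
             |       state = "start"
             |       for ch in s:
             |           state = DFA[state].get(ch)  # follow transition
             |           if state is None:           # no edge: reject
             |               return False
             |       return state == "accept"
             | 
             |   # The regex and the DFA agree on every test string.
             |   for s in ["ad", "abcd", "abd", "aa", "d"]:
             |       assert bool(PATTERN.match(s)) == dfa_accepts(s)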
        
               | bumby wrote:
               | > _How do we know that other H sapiens have the same kind
               | of experiences of existing as you have? Best I can do is
               | that "I know, right?"_
               | 
               | This was answered in a different comment:
               | https://news.ycombinator.com/item?id=39103099
               | 
               | I think the point is that if we agree that I (as the
               | individual asking) have consciousness, then we can infer
               | that others have consciousness based on similarity.
               | Another _H sapiens_ is very much like me, so they are
               | very likely to have similar experiences. Chimpanzees
               | less so, but still similar enough to infer they are more
               | likely than not to have similar experiences. And on down
               | the line, through dogs, earthworms, plants, and
               | bacteria... each getting potentially less likely the
               | further from similitude they become.
        
               | 082349872349872 wrote:
               | I prefer using my definition because it offers something
               | more testable than just some (arbitrary?) degree of
               | similarity.
               | 
               | For instance, cows have been shown to spend
               | (significantly) more time standing near (randomly placed)
               | pictures of smiling farmers than pictures of frowning
               | farmers, which suggests that they have enough of a theory
               | of mind to prefer farmers who appear to be in good moods
               | to those who appear to be in poor ones.
               | 
               | That, of course, doesn't say anything about whether cows
               | have a theory of their own mind, nor did that study do
               | the (perhaps not so obvious?) step of recording picture
               | placement relative to the cow's "bubble" (a curious
               | bovine approaches head on; a frightened bovine leaves
               | head first; a skeptical bovine places itself sideways to
               | the object at a distance, with one eye upon it, so it can
               | change to either of the first two if it comes to a
               | decision either way), but at least it offers something
               | more quantifiable than "I believe this thing to be more
               | like me than that thing".
        
               | bumby wrote:
               | > _I prefer using my definition because it offers
               | something more testable than just some (arbitrary?)
               | degree of similarity._
               | 
               | That's fine to have a preference, but we must also
               | concede that many (most?) human classification ontologies
               | are arbitrary. Even in your example, what constitutes a
               | "smiling" or "frowning" farmer is a somewhat arbitrary
               | definition. What you might call a frown, I may call a
               | smiling 'smirk.'
               | 
               | I think this testable framework preference may border on
               | a reductionist perspective that dismisses the hard
               | problem of consciousness on the assumption that if it
               | can't be measured, it's not real (see also my other
               | comment: https://news.ycombinator.com/item?id=39103504).
               | Consider the perception of muscle fatigue/burning; in the
               | context of a workout it may be found to be pleasurable,
               | but in the context of an illness it might be felt as
               | suffering. An MRI may "test" and show both have the same
               | neural circuits activated, yet the subjective
               | experience is vastly different. Just because we don't
               | (yet...maybe?) have the tools to test for it, would we
               | conclude that differences in subjective experience don't
               | exist?
        
           | 082349872349872 wrote:
           | today's serendipity:
           | https://www.smbc-comics.com/comic/consciousness-3
        
         | hiAndrewQuinn wrote:
         | The only two things I can say with much confidence about
         | consciousness:
         | 
         | 1. I definitely have it, some of the time. (I'm not sure if
         | anything else does, but I can't rule it out.)
         | 
         | 2. The chances other things have it feel lower as they become
         | more different from me. Another person may or may not have
         | consciousness, and a rock may or may not have consciousness,
         | but the human feels more likely to have it than the rock does.
         | 
         | This feels like the common sense stance for anyone who has
         | considered solipsism in depth and doesn't want to proclaim an
         | unshakeable faith in things like panpsychism or non-dualism or
         | whatever.
         | 
         | It's honestly a bit frustrating, because I find mathematical
         | platonism otherwise quite compelling, but it's totally possible
         | there are just abstract mathematical objects and, oh yeah, this
         | weird fairy dust we sprinkle over certain reifications of
         | things to ontologically privilege them. I know, Occam hates me.
        
           | bumby wrote:
           | It may be that the tools we have come to rely on so much to
           | define such terms are, almost by definition, inadequate to
           | define consciousness. The scientific method and rational
           | discourse are used to describe _objective_ reality; I don't
           | know that they can fully answer the "hard" problem of
           | consciousness because it is largely concerned with
           | _subjective_ reality.
        
             | beezlebroxxxxxx wrote:
             | > It may be that the tools we have come to rely on so much
             | to define such terms are, almost by definition, inadequate
             | to define consciousness.
             | 
             | And yet, we are confronted by the fact that we use the
             | concept of consciousness in dozens of different contexts
             | and we understand perfectly well (usually) what people are
             | talking about. That's the basic insight of ordinary
             | language philosophy. These aren't fundamental mysteries,
             | they're webs of concepts tied up in language, and careful
             | attention to how they are used, unpacking the web and knot
             | of uses, reveals what they mean. The definition is a "rule"
             | or explanation for their use _in language_, which is not
             | "subjective" (private) but _public_ behaviour.
        
               | hiAndrewQuinn wrote:
               | Mm, I'm still on the "by definition inadequate" side
               | personally. I just don't see any unambiguous way to
               | verify that another being even experiences qualia, to say
               | nothing of actual consciousness.
        
               | beezlebroxxxxxx wrote:
               | > I just don't see any unambiguous way to verify that
               | another being even experiences qualia, to say nothing of
               | actual consciousness
               | 
               | That's actually one of the core parts of Wittgenstein's
               | "Private Language Argument"[0]. The problem isn't so much
               | that verification is "difficult", it's that certainty _in
               | the specific sense of intelligibly private_ is a logical
               | impossibility. The implication is that "experiential
               | qualia" have a specific intelligible meaning _in
               | language_ (or even more broadly in behaviour), which is
               | to say that our "subjectivity" is personal but not
               | private. Further implications tumble outward. For
               | instance, Wittgenstein spends a lot of time discussing
               | the "qualia" of pain. It was (and still is, obviously
               | considering the rise of "AI" discussions lately) an
               | enormously consequential argument in philosophy.
               | 
               | [0]: https://en.wikipedia.org/wiki/Private_language_argument#:~:t....
        
               | hiAndrewQuinn wrote:
               | _clicks link_
               | 
               | "Wittgenstein does not present his arguments in a
               | succinct and linear fashion; instead, he describes
               | particular uses of language, and prompts the reader to
               | contemplate the implications of those uses. As a result,
               | there is considerable dispute about both the nature of
               | the argument and its implications. Indeed, it has become
               | common to talk of private language arguments."
               | 
               | Yeah, sorry, I don't have a hundred hours to spend on
               | some dude who decided to write half of an argument in ten
               | different places. Maybe a link to plato.stanford.edu
               | would've been better.
        
               | beezlebroxxxxxx wrote:
               | Sure, if you want:
               | https://plato.stanford.edu/entries/private-language/
               | 
               | But the wiki entry is a far more succinct summation of
               | the main points of the argument. The very first
               | subsection in the wiki, which you apparently didn't even
               | try to read, is "Significance", which summarizes the
               | argument in all of... 2 paragraphs.
        
               | bumby wrote:
               | I think the crux of it is how the OP was talking about
               | making claims "with confidence".
               | 
               | I think you're saying we can get a general sense of what
               | is meant without a formal definition. That's a fuzzy
               | definition; i.e., one with high uncertainty/low
               | confidence. My point is that I don't know that we have
               | the tools to make high-confidence claims. We can't even
               | agree on when consciousness starts, whether it's a toggle
               | switch or a dimmer switch, etc. That all speaks to high
               | levels of uncertainty, where we can't agree on a "public"
               | language. I think the limits of language to describe the
               | phenomenon are right in line with the "we don't have the
               | right tools" argument.
        
         | ilaksh wrote:
         | It's still a very interesting article. But I think that your
         | point is correct. This article demonstrates why philosophy was
         | largely obsoleted by science.
        
           | beepbooptheory wrote:
           | It's true, given enough time, we won't ask questions anymore.
           | Every abstract concept and moral belief will be simply data.
           | "Meaning" will stop making sense. There will only be science
           | and those who do science.
           | 
           | Some poor fool might ask every once in a while "_why_ are
           | we doing this science anyway?" and they will be swiftly
           | silenced.
           | 
           | We will of course force children away from their natural
           | wonder, as nothing good could come from something like that.
           | There are plenty of other things to fill the kids' brains
           | with anyway.
           | 
           | There will only be data. Meaninglessly complete and perfect
           | data. A young would-be Kurt will come along and ask "is it
           | possible this is really complete? all our data? how might we
           | verify it?" He, too, would need to be silenced.
           | 
           | One night your wife will sob "but why are you being like
           | this?" But you know this is a silly thing to ask: you are
           | just what your atoms are, we can measure them perfectly you
           | know. What other "why" does she want?
           | 
           | Things like "value" and "human" and "well being" aren't
           | important concepts anyway. If it can't be resolved down to
           | data, let's just let the market and the state decide. It's
           | not like we are going to be able to bring up some obsolete,
           | non-falsifiable ideas in order to defend ourselves! It
           | doesn't matter much anyway; it's not science.
           | 
           | And when you die you will know that your whole life really
           | meant nothing but at best a small donation to the science.
           | Because what value could subjective experience really have
           | outside of, you know, doing science. But you are still happy,
           | you did Science. But don't ask _why_ you are happy!
           | 
           | And the great drama of science vs. philosophy will have
           | ended. There was always going to be a winner after all! It
           | was totally a battle, not a dichotomy. And the battle is
           | over.
           | 
           | \s
        
         | gnramires wrote:
         | I think your argument may be a little too strong. Would you
         | refrain from discussing, say, happiness, sadness, etc. in your
         | daily life without a formal definition of what happiness is
         | exactly at a structural level? And I say that as someone
         | interested in formalizing ethics, and formalizing all of those
         | notions. The point is, to even (ideally, formally) define them
         | we need intuition and intuitive discussions about what their
         | definitions should be, and we are still at (very) early stages
         | of what could constitute a real formalization.[1]
         | 
         | More significantly, the utility of those concepts far predates
         | scientific definitions: those terms have been useful for
         | thousands of years, and are essential to human society!
         | 
         | (Again, certainly not against definition, but be mindful of
         | being too strict, too soon about it :) )
         | 
         | [1] The benefits of formalizing (human concepts) are subtle:
         | like formalization in math, they help us build confidence that
         | we're not building sand castles, dig deeper into theories of
         | meaning, and use this to slowly improve the lives of everyone.
         | But like math, we've been doing it informally for very long,
         | and a strict axiomatic formalization is still today relatively
         | rare
         | (although progressing with the aid of computer proof systems)
         | for most fields. Still, the evolving standards for proof are
         | likely essential for many of the deep theories we've achieved
         | (probably including, say, modern logic and Gödel's theorems,
         | algebraic geometry, functional analysis, etc.).
        
           | djoldman wrote:
           | You raise some good questions.
           | 
           | Although humans discuss emotions a great deal, formal
           | definitions for those concepts don't seem to exist. Despite
           | this, useful data can be collected that sidesteps this fact:
           | we can ask many people if they feel particular
           | emotions and draw conclusions from that data. Still we must
           | concede that a shared definition may not exist, which weakens
           | these conclusions.
           | 
           | In the case of consciousness, we could ask many people if
           | they thought that someone/something else was conscious, but
           | it's not clear what value that would have.
        
         | netcan wrote:
         | >One might say who cares if they don't define it? Well, if you
         | don't define it, there's no point in discussing whether it
         | exists, how it came to be, or what it comprises.
         | 
         | > You might as well be asking the question: how close are we to
         | quidlesmoopy?
         | 
         | Adequately and generally defining intelligence, consciousness,
         | AGI & such... big ask. Impossible in practice.
         | 
         | So... does that mean the conversation ends here because any
         | further discussion = quidlesmoopy? We don't know what
         | intelligence is precisely and therefore can't reason about it
         | at all.
         | 
         | I don't think this level of minimalism or skepticism is viable.
         | 
         | We need to make do with placeholder definitions, descriptions
         | and theories of consciousness and intelligence. We can work
         | with that.
         | 
         | So yes... language as a foundation for consciousness and
         | intelligence is "just a theory." It's probably not entirely
         | correct. And still... positing and testing the theory by
         | building artificial talking machines is possible.
        
           | djoldman wrote:
           | I think placeholder definitions are the way to go: one
           | defines a concept that is acknowledged not to be the target
           | and works with it.
           | 
           | So for the purposes of discussion, one may define
           | consciousness(prime) as something less than true
           | consciousness and attempt to work with that lesser
           | definition. However, at all times, one must admit to working
           | with a lesser definition that may never lead to knowledge
           | that applies to the target definition.
           | 
           | We must also admit that consciousness as many understand it
           | may not exist.
        
         | simiones wrote:
         | > One might say who cares if they don't define it? Well, if you
         | don't define it, there's no point in discussing whether it
         | exists, how it came to be, or what it comprises.
         | 
         | This is not how we normally treat other concepts, even in
         | physics or mathematics. Concepts can exist and even be studied
         | for a long time while lacking a rigorous definition.
         | 
         | For example, a truly rigorous and satisfying definition of
         | natural numbers probably dates from the 19th century. Does that
         | mean that people before this shouldn't have discussed natural
         | numbers? For another example, the Dirac delta function was used
         | in the study and teaching of QM while not being very well
         | defined as a mathematical object (it's not an actual function,
         | for example).
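         | 
         | (For reference, the delta's defining "sifting" property: no
         | ordinary function can be zero everywhere except a single point
         | and still integrate against a test function f to give f(0).)
         | 
         |   \int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0),
         |   \qquad \delta(x) = 0 \ \text{for } x \neq 0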
         | 
         | In general, most interesting concepts begin life as some kind
         | of vague intuition, and it often takes significant study to
         | create a formal definition that captures the intuition well.
         | This applies to any kind of concept, even in mathematics.
        
         | codeulike wrote:
         | This is why the Turing Test gets talked about a lot (although I
         | gather Turing's original 'imitation game' suggestion had
         | slightly different emphasis).
         | 
         | If you can reliably reproduce a situation where a human can't
         | tell whether they are conversing with an AI or a human, then
         | that's a kind of yardstick.
         | 
         | So yes, we can't define consciousness, but we've got a sort of
         | yardstick for when we might consider something to have achieved
         | it.
         | 
         | And people think the Turing Test is too easy but the point is
         | that the 'tester' should ask difficult questions to try and
         | probe the depth of thinking of their subject.
        
           | Throw84949 wrote:
           | Can some people even pass the Turing test?
        
         | lebuffon wrote:
         | In my layman's mind, consciousness, at its lowest level, is
         | awareness, as mentioned in the article.
         | 
         | I think what humans have achieved is the next level which is
         | "awareness" of our awareness. The ability to self-reflect.
         | Perhaps it is even a form of "recursive consciousness" (?) And
         | perhaps structured language is the secret sauce that makes this
         | possible.
         | 
         | Without that next level I can't differentiate humans from other
         | mammals.
         | 
         | This has caused me to ask: Is there a level above this? I.e.,
         | awareness of your awareness of your awareness. And... can
         | humans achieve that next level with our current biological
         | equipment?
         | 
         | :-)
        
           | peterlk wrote:
           | This is the problem with terms like this. There are
           | definitions in academia/industry that differentiate between
           | awareness, consciousness, qualia, intelligence, sentience,
           | and more. But these distinctions require some pondering to
           | understand and are divergent from general understanding which
           | treats them as roughly the same thing. For example,
           | consciousness for me is the ability for something to take
           | directed action to affect its surroundings. This means that
           | a robot can be conscious. But now the question for what we're
           | talking about here is: is it aware? It is possible to display
           | consciousness without awareness (see blindsight studies), but
           | to my knowledge it is not possible to be aware without
           | consciousness. And then we get to sentience, which is still
           | very slippery, and often relies on definitions involving
           | "qualia", which is also quite slippery.
        
             | lebuffon wrote:
             | I have no formal knowledge in the domain to go much deeper
             | but I can understand the problem you stated about defining
             | the terms properly. I suppose this spinning around on word
             | definitions is similar to my "awareness of awareness"
             | description in that we have the ability to ponder the
             | meaning of the very tool (language) that we are thinking
             | with. But my old head begins to implode if I go very
             | deep...
        
         | morsecodist wrote:
         | I agree that definitions are the problem. It seems the
         | term "consciousness" is overloaded and refers to a lot of
         | concepts, some of which are overlapping and some of which are
         | totally different.
         | 
         | People also want to define consciousness in terms of the
         | concept they feel is most importantly different about humans.
         | So this definitions conversation gets mixed with a "what
         | concepts are important" conversation.
         | 
         | For example, the thing that I think is important is subjective
         | experience. I think this is the most important difference
         | between beings that need to be given moral consideration and
         | those that don't and this is always what I have thought of as
         | consciousness. However, I often have conversations where I am
         | just talking past someone because they are interested in
         | something else entirely. I hope we can start defining this
         | upfront and having separate conversations here.
        
       | ImPleadThe5th wrote:
       | Article aside, what an interesting publication! Looking forward
       | to perusing more articles in their archive.
        
       | ilaksh wrote:
       | To reason about this usefully we need to unpack the numerous
       | dimensions of intelligent living beings like humans (many shared
       | with some animals). And realize that although most of them are
       | typically packaged together, they are not all necessarily
       | inextricable.
       | 
       | - Awareness of self versus other
       | 
       | - Core language for communicating simple information
       | 
       | - Simple spatial understanding
       | 
       | - More complex spatial reasoning
       | 
       | - Complex language and abstract reasoning
       | 
       | - "Feeling" anything from a body
       | 
       | - Feeling emotions (arguably tied to embodiment also)
       | 
       | - A constant stream of concentrated focus on relevant sensory or
       | sometimes internal information
       | 
       | - A state pattern that persists through time despite constituent
       | elements being replaced (as in an organism)
       | 
       | - Or any tendency of a pattern to persist itself
       | 
       | - Etc.
       | 
       | We can categorize these questions into things like living/non-
       | living, different types of cognition, types of embodiment and
       | sensory experience and aggregation/streams/management, etc.
       | 
       | The starting point would ideally be science rather than
       | philosophy. At least for the parts we can use science for, which
       | I suggest is almost all of them, if we properly deconstruct
       | loaded ambiguous terms like "consciousness".
        
       | heckraiser wrote:
       | Throwing out the definition of consciousness as "the inflection
       | upon the potential of existential being."
       | 
       | All matter in the universe is dormant consciousness and life
       | technology animates this through electrochemical biotechnology.
       | 
       | By this definition quantum computers are more conscious than
       | an infinitely complex algorithm, yet consciousness is analogue
       | (not qubits.)
       | 
       | We still understand so little, and I accept your skepticism, yet
       | this is the definition that is not being discussed.
       | 
       | All those other messy details are yours to sort out (bacteria
       | have bacterial consciousness.)
       | 
       | Our brains are vast by comparison and, like any holographic
       | system (constructive and destructive interfering wave fronts),
       | of higher resolution.
       | 
       | Also subjective doesn't mean some detached arbitrary thing, it
       | literally means (and may be a direct synonym of) "a point of
       | perspective." In a non anthropomorphic way it means from the
       | perspective of a specific instance of a thing (forensic
       | referential interpretation.) In this way the angular corners of a
       | triangle are subjectively relative to one another, yet all
       | exist on an objective plane.
        
         | Cacti wrote:
         | what are you talking about
        
           | heckraiser wrote:
           | A perspective on the definition of consciousness. Sorry I
           | didn't respond in line to the comments I am actually
           | addressing. I thought it overall relevant.
           | 
           | By all this I am saying consciousness is our echo chamber,
           | something grounded in the quantum cloister of objective
           | reality, not directly related to intelligence.
        
       | sudden_dystopia wrote:
       | In the beginning was the Word.
        
       | 082349872349872 wrote:
       | from https://en.wikipedia.org/wiki/Theory_of_mind#Deficits
       | 
       | > _Theory of mind deficits have also been observed in deaf
       | children who are late signers (i.e. are born to hearing parents),
       | but such a deficit is due to the delay in language learning, not
       | any cognitive deficit, and therefore disappears once the child
       | learns sign language._
        
       | nonameiguess wrote:
       | It's nearly impossible to draw any conclusions from these
       | isolated, uncontrolled single discoveries of feral children. At
       | least a few modern experts believe Victor probably could have
       | learned sign language, but Itard only tried to teach him speech.
       | It's hard to believe a human of normal cognitive ability can't do
       | at least as well as a gorilla, but two hundred years ago, no one
       | knew it was possible to teach a gorilla sign language. Speech may
       | be far more difficult to pick up later in life than other forms
       | of language because of the complexity of the vocal motor patterns
       | involved. Many also speculate Victor could have been autistic or
       | frankly just severely traumatized from being abandoned and left
       | in the wild as a child. Trying to reason from this one case study
       | to find implications for the role played in language acquisition
       | by all humans, or potentially all computational systems whether
       | artificial or organic, seems like a tall task.
        
       | dsign wrote:
       | It may be more useful to reason about the consequences of the
       | capabilities of any form of AI than to try to box in
       | consciousness.
       | 
       | Somehow connected to the above, one can use "axioms" to
       | understand and predict minds. Here are a couple of examples:
       | 
       | Axiom of need: "For any evolved entity that relies on X for its
       | survival, if that entity has a sufficiently advanced mind, then
       | there is a 'subjective' perception of the need for X that is deep
       | and outside its conscious control. I.e., a feeling or intense
       | craving."
       | 
       | Example of the above:
       | 
       | X=Food -> perception of hunger
       | 
       | X=Procreation -> Love, lust
       | 
       | X=Survival -> Good vibes from interacting with peers
       | 
       | X=Peers -> I can attribute "consciousness" when I see it
       | 
       | Axiom of fear: "For any evolved entity that relies on avoiding Y
       | for its survival, if that entity has a sufficiently advanced
       | mind, then there is a 'subjective' perception of the avoidance of
       | Y that is deep and outside its conscious control. I.e., a fear or
       | intense phobia."
       | 
       | Y=Snakes -> Fear of snakes
       | 
       | Y=arbitrary external threat -> Fear of discord, fear of
       | alienation.
       | 
       | Y=alienation -> shame
       | 
       | I can think of a few more axioms, but this is getting long so
       | I'll omit them. The axioms themselves may or may not be correct,
       | but just as one does in mathematics, one can take them as true
       | and see where the chain of thought leads. For me, the axioms
       | above imply that we can create* entities that, as appreciated
       | from outside, will seem entirely conscious to us. And I guess
       | this is a procedural definition of consciousness.
       | 
       | * Via synthetic evolution, e.g. gradient descent.
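       | 
       | (A toy Python sketch of the axiom of need, taken as given: a
       | hypothetical evolved agent whose felt urgency for X rises as its
       | reserve of X falls below a setpoint. The function and the
       | numbers are purely illustrative:)
       | 
       |   # Axiom of need, as code: urgency for a survival-critical
       |   # resource X grows as the reserve falls below a setpoint.
       |   def urgency(reserve, setpoint=1.0):
       |       return max(0.0, setpoint - reserve)
       | 
       |   # Illustrative reserve levels for a few of the X's above.
       |   needs = {"food": 0.2, "procreation": 0.9, "peers": 0.5}
       | 
       |   for x, reserve in needs.items():
       |       print(f"{x}: felt urgency = {urgency(reserve):.1f}")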
        
       ___________________________________________________________________
       (page generated 2024-01-23 23:01 UTC)