[HN Gopher] Is artificial consciousness achievable? Lessons from...
       ___________________________________________________________________
        
       Is artificial consciousness achievable? Lessons from the human
       brain
        
       Author : wonderlandcal
       Score  : 159 points
       Date   : 2024-05-19 03:18 UTC (19 hours ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | rvikmanis wrote:
       | No
        
         | BriggyDwiggs42 wrote:
         | Y
        
       | hiAndrewQuinn wrote:
       | We don't even know whether other human beings are conscious, man.
       | 
       | The only thing which we might say with much certainty is that
       | things which are more "like us" along some metric are more likely
       | to be actually conscious, and things which are less "like us" are
       | less likely to be so. Maybe everything is conscious. Maybe
       | nothing except one's own self. But you'll never truly know one
       | way or another, even if humanity invented some kind of Freaky
       | Friday body swap thing.
        
         | ben_w wrote:
         | I know that at least one other human is conscious, otherwise
         | the term would never have been invented.
         | 
         | But you have no way to tell if I am as conscious as I claim to
         | be, or if I'm just a large language model trained by humanity
         | :P
        
           | golf_mike wrote:
           | Can you provide proof for the first claim?
        
             | ben_w wrote:
             | Which part exactly are you seeking proof of, and to what
             | standard?
             | 
             | "I know" is unprovable to others, unless you examine the
             | wiring of my brain. (But then, what is "knowledge"?)
             | 
             | "at least one other human is conscious, otherwise the term
             | would never have been invented." -- it's always possible
              | that I'm a Boltzmann brain and this was just luck.
             | 
              | I don't see how the term could have been invented by a mind
              | that didn't actually have it, except through astronomically
              | low-probability random events.
        
           | vixen99 wrote:
           | In thinking about this perennial problem it's worth bearing
           | in mind that human beings pick up and process an immense
            | amount of data on a continuous basis that is currently
           | unavailable to any LLM.
        
             | elicksaur wrote:
             | And all on an estimated 100 watts!
        
               | ben_w wrote:
                | 20 for the brain, 60-125 for the whole body, depending on
                | whether you mean "normally" or "metabolic minimum".
        
           | vidarh wrote:
           | No, we don't. We don't even know if existence has an extent
           | in time, because our only way of "interfacing" with time is
           | our experience of memory we can't prove is real.
           | 
              | For all you know, you're a lone entity confined to an
           | infinitely short period of time, and all else is an illusion.
           | 
           | But of course this isn't a _useful_ assumption in most
           | respects.
        
             | ben_w wrote:
             | Ah, I see you're more of an A. J. Ayer fan than a Descartes
             | fan.
             | 
             | I think that if an LLM has any consciousness, it would be
             | an experience like this -- one where the past was a fiction
             | it invented to fit the prompt, and the "now" was the only
             | moment before the mind was reset.
             | 
             | But I'd put that in the same basket as my... ah, nephew
             | comment? Cousin comment? I guess you'd call it that if we
             | have parent comments etc.?
             | 
             | https://news.ycombinator.com/item?id=40406398
             | 
             | What you say is not wrong in principle, but it's in the
             | same "cognitively unstable" basket as a Boltzmann brain,
             | where to accept it would mean I couldn't trust my own
             | reason to believe it.
        
         | pas wrote:
         | Joscha Bach's model of "evolution of a shared consciousness,
         | but compartmentalized into separate bodies" seems to be the one
         | that makes the most sense currently
        
         | smokel wrote:
         | Consciousness might be overrated. It could simply be a short-
         | term memory of the state you were in.
        
           | a_random_canuck wrote:
           | This is my take. Consciousness is overrated and probably just
            | an emergent phenomenon of the brain processing external
            | stimuli into memory, moving memories around, etc., in a
            | continuous and never-ending flow. Free will is just an
           | illusion of our deterministic but fundamentally random
           | reality.
           | 
           | There isn't even an agreed upon definition for what
           | consciousness is from a scientific perspective.
        
             | mewpmewp2 wrote:
              | And the reason it is overrated is that it has to feel
              | special to its bearers, because that makes them prioritise
              | their survival.
              | 
              | Consciousness is largely a way to have a reward function
              | for a set of behaviours that keep you alive through reason.
             | 
             | It appears at a level where reasoning is intelligent enough
             | that you need a more complex reward function.
        
         | everdrive wrote:
         | Consciousness seems to be a word that is poorly defined. You
          | see this a lot, and one of the more popular instances is the
          | question "is cereal a salad?" It plays on the fact that the
          | definition of a salad is relatively loose, and because it's
          | loose, items which aren't usually associated with the word do
          | actually fit the definition.
         | 
         | Consciousness feels much the same way: there's a very loose
         | definition which is colloquially understood by almost everyone.
         | Asking whether humans are conscious (and I know you were being
         | somewhat facetious) feels like it fits into this frame of
         | thought. Consciousness, as most people understand it, is
         | something which almost all people possess and something like a
         | rock cannot possess. I think it's perfectly fine to argue that
         | a rock or a tree can be conscious in some way. However, this
         | does require a precise definition of consciousness in order to
         | clearly differentiate it from the loose colloquial notion that
         | most people hold.
        
           | whoitwas wrote:
           | There's no definition because we haven't been able to
           | quantify it.
        
             | ben_w wrote:
             | That's not the problem.
             | 
             | There are 40 different definitions of consciousness, some
             | of which we can quantify, we just don't all agree on which
             | one we mean in any given context and indeed sometimes
             | conflate them without realising it in the middle of a
             | sentence.
        
               | anon291 wrote:
               | > There are 40 different definitions of consciousness,
               | some of which we can quantify, we just don't all agree on
               | which one we mean in any given context and indeed
               | sometimes conflate them without realising it in the
               | middle of a sentence.
               | 
               | When a word has a myriad meanings, none of which are
               | generally accepted, we typically say the word has no
               | definition. Sure, particular senses of its meaning may be
               | well-defined, but the word itself is elusive.
        
               | ben_w wrote:
               | > none of which are generally accepted
               | 
               | It's not "none", though. A paramedic will absolutely know
               | exactly what they mean when they're performing a test for
                | consciousness; it's just that that test isn't useful in
                | this context.
               | 
               | "Awareness of internal and external existence" is
               | another, and I think Claude 3 demonstrates behaviour
               | which fits this meaning of the term.
               | 
               | Qualia is a huge open question because nobody knows what
               | that one would mean or imply or how to test for it.
               | 
               | And so on.
        
           | lumb63 wrote:
           | But how do you know another human is "conscious"? Certainly
           | there is an intuitive sense to it that would be very
           | difficult to put into words, but that is the crux of the
           | matter. Every other human, whose brain you have no ability to
           | peer into, could be an unconscious yet sufficiently advanced
           | computer, or a machine built to make the exact motions,
           | words, decisions, etc., that you perceive, and you wouldn't
           | be able to tell the difference.
        
             | tsimionescu wrote:
              | I think many people believe that consciousness is what
             | consciousness does. That is,
             | 
             | > Every other human, whose brain you have no ability to
             | peer into, could be an unconscious yet sufficiently
             | advanced computer, or a machine built to make the exact
             | motions, words, decisions, etc., that you perceive, and you
             | wouldn't be able to tell the difference.
             | 
             | Makes no sense, in this conception of consciousness, any
             | more than you can fake intelligence. Basically
             | consciousness might just be what we call the inner workings
             | of the mind of a sufficiently advanced agent, one capable
             | at least of meaningfully interacting with other agents
             | around it.
             | 
             | I'm not saying this is the correct theory, but it's a
             | perfectly valid theory of consciousness, just like all the
             | others.
        
               | dkz999 wrote:
               | I really like equating faking intelligence to
                | consciousness. It's intuitive because we have all seen
                | that, yet so complex it's nearly futile to give meaningful
               | predictive criteria for when an agent is 'being
               | intelligent'.
               | 
               | In addition to having meaningful interactions with
                | others, I would add that consciousness also requires
                | meaningful interaction with itself.
               | 
                | What is 'meaningful' also comes down to language, which,
               | personally, leads me back to the idea that consciousness
               | is essentially a linguistic product/phenomenon. Duck-
               | typed.
               | 
               | And at the end of the day, if you enjoy spending time
               | asking "is this thing really x" where x lies on a vector
                | you can't even begin to measure, I got this deal on a bridge
               | you can get in on, real cheap...
        
               | Scarblac wrote:
                | But it's useless, it's circular. The definition must have
                | something to do with the experience of qualia; that's the
                | hard-to-explain part.
        
               | PeterisP wrote:
               | I somewhat disagree, I feel that the prevailing position
               | is that unlike intelligence (e.g. Legg&Hutter
               | definitions) consciousness can _not_ be easily assumed
               | from mere behavior and relies on certain things happening
               | (or not happening) inside the agent.
        
               | tsimionescu wrote:
               | This may be a common position among philosophers, or more
               | specifically among philosophers who think concepts like
               | "p-zombies" make any sense. But I think most people in
               | general view any being whose behavior is human-like
               | enough as having some form of consciousness.
               | 
               | For most people, being conscious is proved by things like
               | mourning dead companions, like caring for your babies and
               | showing distress if they are missing/hurt, like being
               | friendly and playful. That's why most people feel that
               | certain animals they interact with more or have seen on
               | TV are conscious (dogs, cats, elephants, whales, chimps
               | and other primates), but that other animals are not
               | (insects, rats, fish). Note that I am not saying that
               | rats are objectively less conscious than dogs by these
               | criteria, just that this is what many people base their
               | beliefs on, and that it of course depends on their
               | knowledge as well.
        
             | amenhotep wrote:
             | You don't. But it's solipsism to think otherwise, and while
             | solipsism is hard to argue against logically it's not a
             | very interesting or useful way of navigating the world we
             | experience. We can't prove other people aren't p-zombies
             | but the value bet is definitely that, appearing like us in
             | every other way, they also experience like us.
        
               | majikaja wrote:
               | > We can't prove other people aren't p-zombies but the
               | value bet is definitely that, appearing like us in every
               | other way, they also experience like us.
               | 
               | Logic doesn't need to be binary. There is no need for the
               | answer to such a question to even be defined.
        
               | Scarblac wrote:
               | It doesn't need to be solipsism - for instance, maybe
               | half of us are conscious.
               | 
               | But if we can't even know that, if we don't even have a
               | test to see whether some human or animal is conscious or
               | not, how can we start trying to figure out what makes
               | them conscious? It seems it's impossible to get to
               | something falsifiable without such a test.
               | 
               | Like, you say a rock isn't conscious. But what about a
               | sponge? An amoeba? How can you answer that if you can
                | only guess whether your neighbour is?
        
               | Der_Einzige wrote:
               | Solipsism is incoherent because it's not radical
                | skepticism. All of the critiques of the external world
                | also apply to belief in the primacy of internal
                | experience. Any good solipsist should just accept the
                | "evil demon" of Descartes, embrace radical doubt, and say
               | "I don't even know if I truly exist or not".
               | 
               | "I don't know if I'm a P zombie, and I don't know if I'm
               | a replicant or not, Deckard!"
        
               | vereis wrote:
                | Well, doesn't the argument suggest the only thing you can
                | be certain of is that I, or at least some 'experiencing
                | agent', exist? Otherwise there would be no subject to do
                | the experiencing.
        
               | dpig_ wrote:
               | Yeah the first-person subjectivity has to arise before
               | second and third persons can arise. But with some further
               | investigation, one can find that the things they take to
               | be their subject are in fact object to them, too.
        
               | PeterisP wrote:
               | It is very relevant to keep analyzing and keep trying to
               | get any other answer to this question, because while
               | "appearing like us in every other way, they also
               | experience like us" applies to other humans, as soon as
               | we want to talk about the consciousness (or lack of it)
               | of other actors, this argument can not be applied and we
               | would very much like to get to any other criteria of
               | consciousness which could be applicable to arbitrary non-
               | human agents.
               | 
               | Even if we axiomatically assume that everyone else is not
               | a p-zombie, trying to find any evidence towards your/mine
               | consciousness _other_ than that axiom is helpful as a
               | candidate for such criteria which can be tested and
               | validated.
        
             | AnimalMuppet wrote:
             | We define ourselves to be conscious (even if we don't know
             | exactly what that means). We assume that other humans are
             | similar to ourselves, and we (at least sometimes) see
             | mental activity in other humans that we recognize as being
             | similar to our own. Therefore we conclude that other humans
             | are conscious.
        
           | goatlover wrote:
           | It's well defined in the philosophical literature as the felt
            | experience of colors, sounds, pains, etc., which make up our
            | subjective experiences of perception, imagination, dreams,
            | etc. Qualia is the technical word, but it's also
           | controversial, depending on the philosopher's position on the
           | hard problem and their views on perception (they might
           | replace qualia with representational or relational
           | properties).
           | 
            | Another way of putting it is to use the distinction between
            | primary and secondary qualities. Primary qualities are
            | properties of the things we perceive; secondary qualities
            | are properties of the perceiving or experiencing subject.
            | Shape, number,
           | composition are properties of things. Sounds, colors, pains
           | are properties of a perceiver.
        
             | anon291 wrote:
             | Well-defined in the philosophical sense, perhaps (though I
             | think some would disagree). It is not well-defined in the
             | scientific sense. There is no way to quantify or classify
             | something as conscious.
        
           | unconsciousrais wrote:
           | What if the distinction we are all groping for is immortality
           | at the cost of determinism? A machine can be powered down and
           | dismantled. A new machine can be built and fed the exact same
           | training data, or run the same model, and presumably it would
           | behave the exact same way. Any entity whose behaviors are
           | that regeneratable and that replicable is perhaps less
           | "conscious" than entities which are not.
        
           | telmo wrote:
           | > Consciousness seems to be a word that is poorly defined.
           | 
           | I will give you my favorite definition, given to me by my
           | friend Bruno Marchal, a brilliant mathematician from Brussels
           | who spent his life thinking about such topics:
           | 
           | "Consciousness is that which cannot be doubted."
           | 
           | It felt insufficient when he told me, but now I am convinced.
           | It may require some introspection to "get it". It did for me.
        
             | Der_Einzige wrote:
             | That's just objectivity, and I don't think consciousness is
             | synonymous with objectivity at all!
             | 
             | Cogitoist propaganda. The appearance of thought is not
             | necessarily the same as thought, so you don't actually know
             | you think just because you believe you think. The cogito (I
              | think therefore I am), like your statement, is incoherent.
             | 
             | LLMs will swear up and down (with a prompt) that they are
              | thinking beings, therefore "they are". They do not become
              | ontological actors merely because they appear to doubt
              | their own existence. That's not thought!
        
               | captainclam wrote:
               | Addressing your first thought...anything that you would
               | call "objective" can be "doubted" by ceding the tiny tiny
               | possibility that you are a simulation or Boltzmann brain
               | or brain in a vat. The evidence before you may not
               | actually be representative of the "objective" reality.
               | 
               | The fact that there is experience at all, the contents of
               | which may be "doubted", cannot be doubted.
               | 
               | I'm not unequivocally claiming this but that's the thrust
               | of the argument.
        
             | zero-sharp wrote:
             | I'm sorry, but this makes me cringe. When we learn science,
             | there's always some level of rigor with the ideas. Maybe
             | there's some kind of justification with math, or some kind
             | of experiment we can perform to remove doubt. The important
             | features are reductionism and verifiability. It's not a
             | weird introspection riddle.
             | 
             | I'm sure Bruno is brilliant. But I still don't know what
             | consciousness is. And I think that "definition" doesn't
             | meet the modern scientific standard. And I strongly oppose
             | the idea that in order to learn science I should have to
             | spend time introspecting.
        
               | andrewflnr wrote:
               | Think about what things "cannot be doubted", with all the
               | brain-in-a-vat types of caveats. It's not trying to be a
               | scientific definition. It operates earlier on the
               | epistemological ladder than science can be meaningfully
               | applied, and that might well be the only reasonable place
               | to define consciousness. (I still can't call it a _great_
               | definition, even if it did perfectly correspond with the
               | concept. Too indirect.)
        
               | zero-sharp wrote:
               | There are lots of statements we can form that "make
               | sense" on a linguistic level. It's easy to convince
               | yourself of something when the only standard is
               | "linguistic plausibility." Consciousness is presumably a
               | physical process. When you say "It operates earlier on
               | the epistemological ladder than science can be
               | meaningfully applied", I just don't know what that means.
               | You're going to have to give me examples of what other
               | beliefs we hold that occupy that space. Justified belief
               | about reality has to be based on measurement (science).
               | 
               | If consciousness isn't a physical process, then you've
               | lost me again. People have discussed these things for
               | hundreds of years.
        
               | andrewflnr wrote:
               | > You're going to have to give me examples of what other
               | beliefs we hold that occupy that space.
               | 
               | Yeah, there's not a lot down there, mostly your
               | assumptions about your sense inputs corresponding to some
               | kind of causally consistent external reality. It's the
               | same region as the lead up to what you seem to take as an
               | axiom, "Justified belief about reality has to be based on
               | measurement".
        
               | telmo wrote:
               | Introspection is "looking within". Why should science not
               | be interested in that? It is an aspect of reality. It is
               | not more or less real than galaxies or atoms. I know that
               | it is a very perplexing one when one holds a physicalist
               | metaphysical commitment, which is easy to confuse with
               | some notion of "no-nonsense modern scientific standard",
               | and so there is a temptation to pretend the undeniable is
               | not there, or that it is "ill defined" in some way.
        
           | coldtea wrote:
           | > _Consciousness seems to be a word that is poorly defined_
           | 
           | That's because it's not some foreign thing or theory that we
           | need a good definition of to understand what we're talking
           | about. For us humans it's not a loose colloquial notion -
           | it's concrete in a way that even the most well defined things
           | aren't, because it's directly experienced.
        
         | wouldbecouldbe wrote:
         | To be fair, that's all we know. That's all we are. That's all
         | we can truly say exists in our world.
         | 
         | All the theories, names & everything we have are mental models
         | around what we call objective reality.
        
         | zer00eyz wrote:
         | > We don't even know whether other human beings are conscious,
         | man.
         | 
          | This isn't an interesting path outside the paradox of proof. We
         | are fine disregarding that there are truths that are unprovable
         | in math... I think we need to make that leap in this realm as
         | well.
         | 
          | Consciousness is also probably a bad term; concepts like
          | sentience and sapience need to be the ones we are talking
          | about. We might get to one long before the other...
        
           | vidarh wrote:
           | It is relevant here because it goes to the very core of what
           | we're talking about, though.
        
         | foobarian wrote:
         | > We don't even know whether other human beings are conscious,
         | man.
         | 
         | This is why so far the Kantian philosophy makes the most sense
         | to me. I can tell that something is there because I am thinking
         | it, but can't tell about any others.
         | 
         | The really scary thing is the question of why this particular
          | body at this particular time. It's as if, when born, organisms
         | generate a "consciousness vortex/attractor" that binds to a
         | particular identity.
         | 
         | It's also interesting that sleeping or fainting pauses the
         | consciousness, and later still ends up in the same body (unless
         | it's like coroutines and it doesn't matter which identity ends
         | up in the body).
         | 
         | We also know that removing parts of the brain can cause
         | memories and certain features to go away.
        
           | 9dev wrote:
           | > It's also interesting that sleeping or fainting pauses the
           | consciousness, and later still ends up in the same body
           | 
           | I always found it even more interesting how your body can
           | switch off the consciousness if it gets in the way. Try
           | holding your breath for example--do it too long, and your
           | body will kill the faulty process and restore the system to a
           | working state before attempting a new deployment.
        
         | holoduke wrote:
         | Unless you can be conscious in multiple places at the same
         | time. Theoretically it must be possible by dissecting the brain
         | piece by piece and restoring it to one afterwards.
        
       | shmerl wrote:
       | Artificial consciousness is a better term for what's often called
       | AI in science fiction.
        
         | whoitwas wrote:
         | Oracle, human agent maybe?
        
       | gokuldas011011 wrote:
        | Yes, the beauty of nature is that there is no magic. Everything
        | is governed by laws; when we uncover them, we can replicate them.
        
         | MyFirstSass wrote:
          | The more I've studied, the more I've come to the opposite
          | conclusion.
         | 
          | Nature is >99% magic, and though some tiny slices of human-
          | friendly interfaces of reality are replicable, much more is
          | chaos, weird emergence, fields, probabilities and stuff so
         | bizarre to our mammalian logic that we might as well call it
         | magic, god, the simulation or just bleeding edge physics as the
         | whole field is getting weirder and weirder.
         | 
          | The whole notion of nature's beauty stemming from some
         | replicable, controllable and "no magic" scenario is a very
         | "homo sapiens" desire for order and control.
         | 
         | We know close to nothing, and therein lies the beauty in my
         | eyes.
        
           | throwaway5371 wrote:
            | I have moved to this camp as well, and I don't mean in the
            | "we don't understand it so we call it magic" sense; I mean it
            | seems more and more like actual magic.
           | 
           | people are talking about timing attacks on state updates in
           | the universe, hopefully we can exploit it
        
             | sdiupIGPWEfh wrote:
             | > people are talking about timing attacks on state updates
             | in the universe, hopefully we can exploit it
             | 
             | If the universe did happen to be a simulation (as opposed
             | to just naturally holographic), I imagine exploiting it
             | might be the only way to conclusively prove so. As an
             | actual simulation, there would be a risk of someone and/or
             | something observing it. If intelligence in our universe
             | tends to eventually discover exploits and if the observers
             | aren't fond of simulation errors, we might have ourselves
             | an unexpected answer to the Fermi paradox.
        
           | tasty_freeze wrote:
           | At the lowest levels, with quantum weirdness (to our way of
            | thinking), yes, we can only create metaphors to try to grasp
            | it, and we don't really understand it. Same at the extreme
            | other end, where relativistic effects can't be ignored.
           | 
           | But we don't live at those levels. That low-level
           | unpredictability usually is statistically predictable at the
            | macro level where we live. "Caloric" doesn't exist, but it is
           | a perfectly usable concept. There is no need to actually
            | measure the position and velocity of every molecule of gas in
           | a balloon to understand its temperature.
           | 
           | So, to us, the world is 99% magic at the extremes, but <1%
           | where we actually live; we can understand this regime fairly
           | well.
        
         | Horffupolde wrote:
         | The opposite. Laws are just conceptual representations of
         | underlying intractable processes.
        
           | MaxPock wrote:
           | The "maths is discovered not invented " camp
        
             | Horffupolde wrote:
             | When you go even further, math is again invented to
             | represent The Underlying. If it were discovered, math would
             | be it.
        
         | CuriouslyC wrote:
         | Plot twist, there are no laws except those that we collectively
         | imagine.
        
         | candiddevmike wrote:
         | This assumes we have the capability of uncovering those laws.
        
         | Barrin92 wrote:
          | >Everything is governed by laws; when we uncover them, we can
          | replicate them
         | 
          | Sure, but it's precisely because everything is governed by
         | laws that you can't make it how you want. It's perfectly
         | possible that consciousness is a specific property of organic
         | brains rather than digital computers. I can understand the laws
         | that govern the properties of the Golden Gate Bridge, doesn't
         | mean I can build it out of jello.
         | 
         | That was precisely the misunderstood point of Searle's Chinese
         | room by the way, that a digital algorithmic computer can
          | emulate the work that a human mind can do even to the point of
         | being indistinguishable from it, but need not understand any of
          | it (i.e. being conscious of it) while doing so. Or, put
          | differently, that manipulation of syntax and semantics are
         | completely orthogonal.
         | 
         | That's in fact very relevant in LLMs. An LLM can talk about how
         | strawberries taste as if it was conscious, but by definition it
         | can't genuinely have experienced it.
        
           | PeterisP wrote:
           | But organic brains aren't some magic unknowable mush, we know
           | how they work on a low level, we can trace how visual
           | processing is done, etc, etc. As far as we see, the raw
           | computing capabilities of biological neurons that we
           | understand are sufficient to explain the behavior of animals,
           | and as far as we see, biological brains don't do anything
           | that can't be replicated with sufficiently powerful digital
           | computers. So while it's technically possible that
           | consciousness is a specific property of organic brains, we
           | have no evidence at all that it would be the case and some
           | (although not conclusive) evidence that there's nothing
           | special, so unless we identify some difference, those
           | hypotheses are not comparable and we should assume that
           | according to our best current knowledge there aren't any
           | specific properties of organic brains.
        
         | anon291 wrote:
         | In the sense that science is ultimately a religious enterprise
         | expressing our belief in a constant, unseen, unchanging reality
         | (which is not a universal belief by any means, and is utterly
         | religious in nature), then yes, this is true. On the other
         | hand, if we take an irreligious look at things and try to keep
          | focus on just what we observe, one is forced to draw the
          | opposite conclusion, as the very basic building blocks of
          | reality cannot be uncovered or replicated, and are seemingly
          | not governed by anything but randomness.
        
           | CooCooCaCha wrote:
           | That's not what science is at all. If reality changes science
           | will adapt.
        
             | mistermann wrote:
             | You are declaring it to be a fact that science is flawless.
             | Defining something to be true by definition can certainly
              | cause it to appear to be true (read some forum
             | discussions among even smart people on the internet if you
             | do not believe this), but it doesn't guarantee that it will
             | be.
        
               | CooCooCaCha wrote:
               | I have no idea what you're talking about. And I'm not
               | sure how you're getting "science is flawless" from what I
               | said, or what that's supposed to mean.
        
               | tomrod wrote:
               | Science is adaptable to reality -- therefore, it reflects
               | reality as it is presented.
               | 
                | Esoteric arguments portray an ineffable, unobservable
                | stream of will; something that never interacts with
                | reality is not observable by definition, and since it
                | doesn't interact with reality, it can be safely
                | ignored. Roko's basilisk be
               | damned!
        
               | CooCooCaCha wrote:
               | I think some people think science rejects mystical
               | explanations because science is rigid and stubborn and
                | has its head buried in the sand.
               | 
               | But no, it's because there's no proof. If we had evidence
               | that something mystical was happening then it would be a
               | huge breakthrough and it would eventually become science.
        
           | fngjdflmdflg wrote:
           | What do you mean by the basic building blocks of reality? The
           | very machine you are posting your comment from can only be
           | manufactured because the laws of physics don't change, and
           | these machines and their manufacturing process operate on the
           | atomic level. Similarly, do you have an example of a well
           | defined experiment that would not produce the same result
            | consistently? You can win a Nobel prize easily by publishing
           | such an experiment. Lastly, if someone _did_ produce an
           | experiment that did not produce consistent results, that is,
           | an experiment performed twice with all variables staying the
           | same, but the result of the experiment being different, then
           | the theory that all well defined experiments are reproducible
            | would be wrong. It isn't axiomatic.
           | 
           | >try to keep focus on just what we observe
           | 
            | That's all science is though - making observations. Writing
            | hypotheses, making experiments, etc. are just a means of
            | creating things to observe. I'm curious, what did you
           | observe that you felt was not bounded by some static law of
           | nature?
        
       | xvector wrote:
       | I really do hope ASI gets us to gradual replacement-based
       | uploading. Having something as glorious as sapience trapped in a
       | delicate and temporary bag of flesh kinda sucks.
        
         | seydor wrote:
          | I'm still conflicted about what we do with the bag of flesh
          | after the fact. Are we bag-holding forever?
        
           | alex_duf wrote:
           | Something tells me the bag might change its mind about the
           | whole operation once the upload is finished and it realises
           | what's up next.
           | 
           | So yeah not sure I like the idea.
        
             | thr0w4w4yHNabc wrote:
              | So far the only idea that I've read about that might
             | feasibly result in uploading instead of copying is gradual
             | replacement of each cell by a nanobot simulating that cell.
             | So at the end of it, there'd be no bag to change its mind.
        
               | xvector wrote:
               | You can simplify it a bit. Yes, gradual replacement is
               | likely the way to go, but you probably don't need to
               | replace individual neurons one at a time. Individual
               | neurons don't really matter or meaningfully contribute to
               | our consciousness.
               | 
               | You can likely replace the large "functional groups" of
               | neurons instead, with the group size threshold being the
               | maximum that doesn't meaningfully affect our
               | consciousness. This might well be many billions of
               | neurons at a time.
        
               | jodrellblank wrote:
               | Greg Egan's Sci-Fi jewel (dual) idea:
               | https://philosophy.williams.edu/files/Egan-Learning-to-
               | Be-Me...
        
             | xvector wrote:
             | Hence the "gradual replacement" part of my comment :)
             | 
             | Scan and copy never made sense to begin with, as you point
             | out. Not sure why it's the default when people think of
             | mind uploading.
        
               | thr0w4w4yHNabc wrote:
               | People seem to imagine it like plugging a cable or
               | getting an EEG cap and then they can't imagine the next
               | step. Gradual replacement is a very radical idea to most
                | people - a friend of mine recently described it as a
               | horror movie.
        
               | jodrellblank wrote:
               | In this book, I think:
               | https://www.ebay.co.uk/itm/393899945799 - 108 Tips for
               | Time Travellers by inventor and Professor Peter Cochrane,
               | 1999, one of the essays is him asking his wife if she
               | would still love him if he had false teeth, a false leg,
               | etc. bit by bit until she stops the conversation saying
               | "I'm not having you dying by installments!".
               | 
               | When you replace a heart with a pump, you don't get a
               | human heart. When you replace a kidney with a dialysis
                | machine, you don't get a human kidney. Why expect that
               | when you replace neurons with simulations you get a human
               | brain or a human consciousness, or when you've replaced
               | everything, a human?
               | 
               | Biological replacement, your body growing more new
               | neurons, maybe, but it won't be mind _uploading_. And it
                | won't get you you-at-age-twenty back.
        
               | thr0w4w4yHNabc wrote:
               | Let's assume that the replacements are perfect replicas
               | from the outside, only the inside is different. Why would
               | it not work?
               | 
               | The examples you listed are current technologies that
                | don't remotely approach the primary or secondary
               | functions of the originals.
        
             | ecjhdnc2025 wrote:
             | "The equation must be balanced!"
        
           | pas wrote:
            | What do you mean? The bag is useful for going around, but
            | it's frail and needs replacing every few decades.
        
             | seydor wrote:
              | A $1000 drone is much more useful for going around.
        
         | vixen99 wrote:
         | Nothing particularly glorious about sapience trapped in a
          | _permanent_ bag of flesh, especially one with a rather fixed
          | view of the world around it. Never mind dictators; how about:
         | 
         | 'The President of Global Enterprises Inc. is today 150 years
         | old and currently holds the record for the longest serving
         | president in the Company's history'.
         | 
         | A cause for celebration?
        
           | xvector wrote:
           | I don't really care if stubborn, old leaders live longer if
           | it means I don't have to worry about my loved ones dying and
           | get to see humanity reach the stars.
        
             | chasd00 wrote:
                | Be careful what you wish for. There's no guarantee
             | immortality doesn't result in eternal suffering.
        
               | Filligree wrote:
               | Humans generally don't like eternal suffering, and will
               | fight against it. That's a reason to think it won't
               | happen.
        
               | xvector wrote:
               | Wait, where did I say that immortality would be
               | mandated!? Of course you'd get to choose when you want to
               | shuffle off the mortal coil.
               | 
               | For me, personally, I don't see that being the case for
               | many hundreds or even many thousands of years at least.
               | I'm nearly 30 and I feel like I've barely lived a blink
               | of an eye.
        
               | fwip wrote:
               | I feel like that would require society to get pretty
               | chill about suicide. In a world where everyone is
               | immortal-by-default, I can actually see the opposite
               | happening - the fear of death increasing from its
               | absence, and becoming an even greater taboo.
        
       | feverzsj wrote:
        | If the brain is just a transceiver, then it's unprovable.
        
         | Mo3 wrote:
         | I subscribe to this theory. In that case it's not necessarily
         | unprovable though. We will eventually figure out how to make
         | electronic devices resonate with the field just like our brain
         | does.
        
       | grishka wrote:
       | For all we know, it might be that consciousness is not fully
       | contained in the physical structure of the brain. It might as
       | well be something that partially exists on another layer of
       | reality we have no idea about yet.
        
         | codetrotter wrote:
          | True. Which leads one to wonder: could those same kinds of
          | consciousness that inhabit us also be capable of inhabiting an
          | AI? Or even, could there be other kinds of consciousness in
          | those or other dimensions that are not currently able to exist
          | on earth because there doesn't exist anything yet that they
          | are able to inhabit, but that our computers would eventually,
          | when we arrive at some specific combination of hardware and
          | software, enable them to inhabit? That would bring a new kind
          | of consciousness to earth from outside the universe, unlike
          | all others already present here (in us, the animals, the
          | plants, etc).
        
           | grishka wrote:
           | Is it still an "artificial" intelligence if it's made with
           | _real_ consciousness, though?
        
             | T-A wrote:
             | Intelligence != consciousness
        
               | southernplaces7 wrote:
               | Agreed on this to a certain extent. Right here on earth
               | in the present day and without taking into account
               | innovative technological advances in AI, we have examples
               | of highly sophisticated intelligence (at least in a
               | functional way) belonging to things that have little or
               | no known consciousness.
               | 
               | First example, large hive insect nests, such as those of
               | termites and certain ants. Their internal construction is
               | extremely complex and often built in useful (to them)
               | ways that would challenge even the abilities of smart
               | human engineers trying to do the same with similar tools
               | (sharp digging instruments, organic cementing liquids,
                | raw scavenged building materials and nothing else). Yet
               | any individual termite in a nest of millions shows only,
               | maybe, the most minuscule and debatable signs of
               | consciousness. They instead act like biological, physical
               | pieces of an algorithmic process.
               | 
               | Second, obvious example, evolution itself: Here we have a
               | process that produces organic, biological systems of such
               | complexity and self direction that we with all our
               | cognition are barely capable of grasping them robustly
               | let alone emulating any major part of them, yet it's
               | entirely mindless. Sure, it has billions of years to do
               | its thing through the imperatives of brute survival
               | mechanisms, but it's still incredible how on a macro
               | scale none of it involves the least bit of known
               | cognition.
        
             | BriggyDwiggs42 wrote:
             | Yeah
        
         | danielbln wrote:
          | It might be a pink elephant that lives in the 7th dimension and
         | contains every person's consciousness in the shape of a magic
         | peanut. Or it may not.
        
           | strogonoff wrote:
            | Gödel's incompleteness theorem says that pink elephants may
           | indeed well exist.
        
             | exe34 wrote:
             | no it doesn't.
        
               | strogonoff wrote:
               | This is only true if you reject the pink elephant
                | argument as a straw man in the first place.
               | 
                | As long as you consider it a figure of speech (a
                | charitable interpretation indeed), then the existence of
                | pink elephants (or ghosts, or what have you) is exactly
                | the implication of the theorem in the context of the
                | scientific method.
        
               | exe34 wrote:
                | Gödel's theorem deals with the axioms of mathematics. The
                | scientific method deals with the physical universe. A
               | small part of mathematics is useful in describing the
               | universe, but most of it isn't.
        
             | anon291 wrote:
              | That's really not what it says. Gödel's incompleteness
             | theorem applied to AI would say something like 'There are
             | statements about the model's behavior we cannot prove
              | without relying on statements that cannot themselves be
              | proven'
             | (this is because obviously the AI algorithms are based on
             | elementary arithmetic).
        
           | Helmut10001 wrote:
            | Once I asked an AI and this was the answer:
           | 
           | > In a parallel universe where pineapples are the dominant
           | species, intergalactic pizza deliveries are made by flying
           | spaghetti monsters riding unicycles made of marshmallows.
           | Meanwhile, sentient clouds debate the meaning of life with
           | philosophical penguins on top of rainbow-colored mountains
           | made of bubblegum.
        
           | CooCooCaCha wrote:
           | Legitimately made me lol. Thank you for being the rational
           | one. When topics of the mind come up even people who are
           | normally smart and rational can turn into quacks.
        
           | tomrod wrote:
            | I saw that pink elephant once after a lewd night of heavy
           | drinking and potential hallucinogenics. His name is Frank,
           | and he says hello!
           | 
           | /s
        
           | tacocataco wrote:
           | Soul fracture theory?
        
         | visarga wrote:
         | > For all we know, it might be that consciousness is not fully
         | contained in the physical structure of the brain. It might as
         | well be something that partially exists on another layer of
         | reality we have no idea about yet.
         | 
         | Yes, but not another layer. Just the environment around us -
         | physical and social. Every sensation comes from the
         | environment, our perceptions are trained on this data stream,
         | every value is dependent on environment, our emotions reflect
         | it, language and society are part of the environment, we base
         | our thoughts on language and our actions on other people. Our
         | brain is made from environment signals, just like GPT-4 is made
         | from its language corpus.
         | 
         | The unsung hero of consciousness is actually the environment
          | with its rich data stream and feedback. Consciousness,
         | language, genes, internet, LLMs and the evolution of
         | intelligence are all social processes. They don't make sense
         | individually, only as part of an evolutionary population. They
         | can only evolve if there are many agents.
         | 
         | Now, I know this doesn't sound as sexy as quantum
         | consciousness, but it is a more parsimonious and better
         | grounded position. It accounts for the data engine that
         | actually creates consciousness. Don't be looking for
         | consciousness inside the brain or in exotic physics when the
         | magical ingredient is outside.
        
           | danans wrote:
           | > Yes, but not another layer. Just the environment around us
           | - physical and social. Every sensation comes from the
           | environment, our perceptions are trained on this data stream,
           | every value is dependent on environment, our emotions reflect
           | it, language and society are part of the environment, we base
           | our thoughts on language and our actions on other people.
           | 
            | This is also my understanding of consciousness. We are so
            | focused on the human brain's capabilities as a generic
            | information-processing mechanism (because of its remarkable
            | adaptability) that we ignore how fundamentally dependent it
           | is on its environment, leading us to seek a metaphysical
           | explanation for its functioning, when it's actually all
           | around us.
           | 
           | Language, culture, and abstract thought capabilities that we
           | use to describe consciousness are symbolic overlays upon the
           | physical environment, but ultimately emerge from it, and
           | their objectives ultimately tie back to it.
           | 
           | Human consciousness - especially its group/cultural aspects -
           | has been a tremendous advantage to the species, which is why
           | we are even in a position to be fascinated by it today.
        
         | whoitwas wrote:
         | I thought we did know this now. I can't quote exactly what
          | science I read, but the latest science I saw showed
          | consciousness arises in the nerves outside the brain.
        
         | mytailorisrich wrote:
          | Human consciousness logically has to be a product of the
         | physical structure of human beings.
        
           | grishka wrote:
           | Except the "physical structure" might extend into dimensions
           | we don't know about.
        
             | fuzzfactor wrote:
             | How about energy that's associated with the physical
             | structure?
        
             | mytailorisrich wrote:
              | Then this is either general physics (theories about extra
              | dimensions that apply to the universe as a whole, probably
              | applied to quantum physics in this case), or it is random
             | quackery.
        
           | Llamamoe wrote:
           | To be fair, there is absolutely nothing within known physics
           | that would explain why we're more than a complex biological
           | computer and how subjective experience and qualia arise from
           | it.
           | 
           | So we genuinely cannot even begin to guess as to what
           | actually imbues consciousness into our neural processes. It
           | could be anything from "it's actually just the physical
           | processes" through "there's a soul piloting our brains by
           | influencing quantum noise" to "the brain is basically an
           | antenna for our metaphysical self and death/disability is the
           | loss of connection"
        
             | mytailorisrich wrote:
              | > _why we're more than a complex biological computer_
             | 
             | Are we more than biological "computers"?
             | 
             | I think we are complex biological systems that we do not
             | fully understand yet but that does not mean anything
             | supernatural or beyond our understanding of physics is at
             | play.
        
             | PeterisP wrote:
             | There's currently no reason to assume that we're more than
             | a complex biological computer - while it's indeed
             | interesting to explain how subjective experience and qualia
             | arise, it's certainly plausible that this can arise as
             | emergent behavior once a specific type of computer is doing
             | a specific type of computation, and we 'just' need to study
             | that complex computation.
             | 
             | Unless we obtain any evidence whatsoever that this can't be
             | the case, Occam's razor would suggest that this is the
             | hypothesis to explore, without looking for new physics or
             | other unlikely assumptions.
        
         | vinceguidry wrote:
         | I've heard reliable reports of Jeffrey A. Martin and his fellow
         | research participants moving visual perception out of the eyes
         | and the rest of the body into the surrounding environment.
         | Literally seeing behind himself. I don't think his eyes
         | stopped working; he just moved his subjective awareness
         | outside of his body.
         | 
         | I found it hard to believe until I used his techniques to
         | contain 'all of reality' into a small space (the visual field)
         | and then 'move' it to my chest. Feeling like music, sight, and
         | body sensations are all happening within a tiny space-contained
         | field is a very startling experience.
        
           | harha_ wrote:
           | You've heard "reliable records"? Can you link it/them? I
           | can't find them via search engines; I tried.
        
             | vinceguidry wrote:
              | Reports, not records. People I know who know him,
              | sharing privately. Jeffrey doesn't and can't publish
              | everything that he comes across, and has to be very
              | circumspect with the stuff he does publish.
        
               | hovering_nox wrote:
               | ROFL
        
       | tomhoward wrote:
       | I'm writing this comment so that people who want to know more
       | about alternative theories of consciousness (to
       | materialism/physicalism [1]) can know where to go to find well-
       | argued positions on the topic.
       | 
       | (To be clear, I'm not here to argue about the topic or try to
       | persuade anyone of any position - that's a waste of everyone's
       | time).
       | 
       | I recommend seeking out discussions involving:
       | 
       | - Federico Faggin: inventor of silicon-gate technology and
       | developer of the earliest microprocessors;
       | 
       | - Bernardo Kastrup: Ph.D. in computer engineering (reconfigurable
       | computing, artificial intelligence), former CERN engineer at the
       | LHC;
       | 
       | - Donald D. Hoffman: Ph.D. in computational psychology, professor
       | in Cognitive Sciences at UC Irvine.
       | 
       | On YouTube you can find plenty of discussions involving these
       | figures, some with each other, and plenty more with others.
       | 
       | I'd suggest it's particularly important to explore these
       | discussions as dispassionately as possible if you regard
       | materialism as the only theory of mind that has any scientific
       | credibility or validity.
       | 
       | As Christopher Hitchens reminds us in his legendary oration on
       | John Stuart Mill and free speech [2], it's only by thoroughly
       | understanding the opposing view that we can thoroughly understand
       | our own position on any topic.
       | 
       | [1] https://en.wikipedia.org/wiki/Materialism
       | 
       | [2] https://youtu.be/zDap-K6GmL0?t=120
        
         | mindcrime wrote:
          | _Federico Faggin: inventor of the silicon gate which led to
          | the development of microprocessors;_
         | 
         | And who, by the way, has a new book coming out shortly:
         | 
         | https://www.amazon.com/Irreducible-Consciousness-Computers-H...
        
         | strogonoff wrote:
          | Monistic materialism is the philosophical framework in
          | which creating an artificial entity that is conscious and
          | self-aware in a human-like manner is as straightforward as
          | modeling the human brain.
         | 
         | Of course, it's not the only framework available. Among the
         | modern takes, Donald Hoffman's interface theory of perception
         | (explored in, say, his Objects of Consciousness paper[0]) is an
         | interesting one that appears to align with monistic idealism,
         | for example.
         | 
         | Being wrong about this is generally not that impactful, until
         | it concerns policies around ML. Adopting the former means we
         | may have conscious software, which presumably should be granted
         | human rights. However, if we hold the latter, manufacturing a
         | "true" artificial consciousness may be unachievable using the
         | means we employ (it might be just a philosophical zombie).
         | 
         | [0] I don't personally endorse the paper or his views, but they
         | can be an acceptable starting point for a technical person
         | interested in exploring monistic idealism:
         | https://www.frontiersin.org/journals/psychology/articles/10....
        
           | smokel wrote:
           | _> we may have conscious software, which presumably should be
           | granted human rights_
           | 
           | A dog (most likely) has consciousness, but no human rights.
        
             | tomhoward wrote:
             | It has animal rights, which are broadly commensurate with
             | the level of consciousness and agency it's deemed to have.
             | 
             | Mammals and other animals have legal protections not
             | afforded to fish and insects.
        
               | mc32 wrote:
                | In some countries... some countries hardly observe
                | basic human rights, much less any animal rights. Some
                | have none on the books.
        
               | tomhoward wrote:
                | Sure, but this is nitpicking, as is your GP comment,
                | and neither refutes the point that the original
                | commenter was making: modern/advanced societies have
                | laws to protect conscious beings from exploitation
                | and cruelty.
               | 
               | (As I was writing the comment I thought "ugh will someone
               | chime in and point out that not all countries have strong
               | animal protection laws? Do I really need to preempt that
               | in my comment?")
        
               | smokel wrote:
               | I was not nitpicking, but I could have spent more time on
               | my reply.
               | 
                | What I was hinting at is that it is not simply
                | consciousness that gets us these laws. Laws have been
                | around for a long time, and have many different
                | reasons for existing and persisting. The most rational
                | reason for laws is probably that they help us to
                | thrive as a species.
                | 
                | IMHO laws do not easily extend to animals or other
                | organisms, let alone AI systems. What is the use of
                | animal rights laws if you can simply get killed to be
                | eaten (cows, pigs), or if you are considered a
                | nuisance (bugs)? What would be the reason to provide
                | AIs with protection laws if they have no memory, no
                | emotion, and no pain?
        
               | bugglebeetle wrote:
               | >Modern/advanced societies have laws to protect conscious
               | beings from exploitation and cruelty.
               | 
               | I can think of no such society where this is generally
                | true. One need only consider that pigs are far
                | smarter than dogs, and then consider the median pig's
                | life in said societies.
        
               | tomhoward wrote:
               | Laws exist that ban practices that are - according to
               | those who set and enforce the laws - excessively cruel to
               | pigs. That's all this discussion is about.
               | 
               | That pigs are still treated with cruelty is a terrible
               | thing, and I'd happily see more done to protect all
               | animals against cruelty. But it's a separate argument to
               | what's relevant here.
        
               | bugglebeetle wrote:
                | No, unfortunately, this is goalpost shifting. We've
                | gone from "cruelty" to "excessive cruelty," and there
                | are no laws that prevent the common-sense
                | understanding of either thing from happening to the
                | median pig. One can also point out here that we in
                | fact engage in such extreme cruelty that they have to
                | make it illegal to document it:
               | 
                | https://www.vox.com/future-perfect/2019/1/11/18176551/ag-gag...
        
             | DeathArrow wrote:
              | But does a bug have consciousness? What about a
              | bacterium?
        
               | smokel wrote:
                | I suppose the bug does, but the bacterium doesn't. I'd
               | assume that some kind of memory is required for
               | consciousness.
               | 
                | I'd even go so far as to say that consciousness is
                | nothing more than having a memory of the state you
                | were in.
        
               | TaupeRanger wrote:
                | Bacterial cells absolutely have types of memory. And
                | by your definition, a Python program written by any
                | random undergrad in CS 101 has consciousness.
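                | 
                | To make the reductio concrete, here is a minimal
                | sketch (hypothetical code, not from any actual CS 101
                | course): a toy Python script that stores the states
                | it was in and later "observes" that memory.
                | 
                |       # Toy program: remembers its prior states and
                |       # reads them back. By the definition above,
                |       # this would already count as conscious.
                |       states = []
                |       current = "waking"
                |       for step in range(3):
                |           states.append(current)       # memory of the state
                |           print("I was:", states[-1])  # "observing" it
                |           current = f"state-{step}"
                | 
                | If storing and reading back prior state sufficed,
                | this trivially qualifies, which is the point of the
                | objection.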
        
               | smokel wrote:
               | In order for my definition to make sense, the organism or
               | program must be able to observe the memory of the state.
                | In the case of the bacterium and the Python program, I
               | doubt they are able to do that in any meaningful way.
               | 
                | But I would not mind if a slightly more involved
                | program, or a system of plants for that matter, were
                | considered conscious. The basic definition seems fairly
               | irrelevant, and it obviously matters how much the
               | specific type of consciousness matches our own experience
               | for us humans to actually care.
        
               | TaupeRanger wrote:
               | Just handwavy nonsense. What counts as "observing"?
                | Obviously the bacterial system will "observe" the
                | memory when using it to determine current behavior.
                | If the basic
               | definition is irrelevant, why did you post a comment
               | outlining a claim of what consciousness is "nothing more
               | than"? This is silly and not worth further engagement.
        
               | smokel wrote:
               | Note that I tried to counter the idea that an AI should
               | presumably get human rights. In that context, I think a
               | definition of consciousness is irrelevant.
        
             | strogonoff wrote:
             | If a chatbot obtained from an emulation of a human brain
             | behaves in a human-like manner, and is attributed
             | consciousness and self-awareness, good luck arguing that
             | its consciousness is like that of a dog.
             | 
             | ...and even if you succeed, abusing a dog in the way we
             | abuse ML-based products would not be acceptable in any
             | developed country.
        
               | chr1 wrote:
                | If we abuse a dog, we have no way to restore it to
                | its previous state. With computer programs we have a
                | perfect time machine, so anything that one may call
                | "horrible" can be done and then undone without any
                | moral consequences.
                | 
                | This, by the way, is also the answer to the problem
                | of evil in religion: god can do whatever he wants
                | without being evil, because it is effectively all in
                | his imagination. And for the people living in our
                | computers, we'll be gods.
        
           | thriftwy wrote:
            | > creating an artificial entity that is conscious and
            | self-aware in a human-like manner is as straightforward
            | as modeling the human brain
            | 
            | Why, then, wouldn't _not_ modelling such an entity also
            | create a self-aware entity? After all, the outcome of a
            | computation does not depend on whether it is actually
            | done.
        
             | hypertele-Xii wrote:
              | There are computations, the outcomes of which are
              | unknown until actually done.
        
               | thriftwy wrote:
                | What changes when they are known? I believe it affects
                | your consciousness, not the one being simulated. The
                | latter does not have an "I'm being actually simulated"
                | input.
        
               | hypertele-Xii wrote:
               | "Actually simulated" is such an oxymoron. So which is it?
               | Is the consciousness actual, simulated, or actually
               | simulated? And does the resulting state of mind change
               | the universe, or merely reveal its hidden structure?
               | 
               | It's easy to get lost in these unsolvable paradoxes when
               | you try reducing all of creation down to logic. Problem
               | is, logic is not all of creation.
               | 
               | Consciousness requires a soul. Otherwise you're confused
               | stardust sans mission.
               | 
               | A computer is just a calculator. You might as well ask if
               | {addition, subtraction, multiplication, division} is God.
        
               | thriftwy wrote:
               | > Consciousness requires a soul.
               | 
               | We don't know if it does. We do know enough to suspect
                | that a deterministic simulation does not conjure
                | things into being.
        
               | _a_a_a_ wrote:
               | > Consciousness requires a soul
               | 
               | Any actual evidence for this is welcome.
        
               | Scarblac wrote:
               | > Consciousness requires a soul
               | 
               | What's a soul? How can you possibly know that's what's
               | required?
               | 
               | Maybe it requires a blerpqu.
        
               | hhshhhhjjjd wrote:
               | > Consciousness requires a soul
               | 
               | Now define a soul! /s
        
           | AQuantized wrote:
            | To be fair, 'modelling the brain' might not include things
           | like neuron metabolism that probably isn't required for AI
           | but is a part of the substrate of our own consciousness.
        
         | Zambyte wrote:
          | Julian Jaynes's theory of consciousness is very interesting.
          | At a high level, his theory was that consciousness is A)
          | much narrower in scope, as far as what it actually is, than
          | a lot of people like to think, and B) not actually innate
          | in humans; it is something we learn as we grow.
        
           | CuriouslyC wrote:
            | I don't see how we can learn to experience qualia. If the
            | author means self-awareness instead of consciousness,
            | that would make more sense.
        
             | amelius wrote:
             | I wish the scientific community would get the terminology
             | straight.
        
             | photonthug wrote:
              | Any art or music class that is successful in reaching
              | its students will probably change what you think, how
              | you think it, and gradually should change what you
              | feel, how deeply you feel it, and to what extent you
              | can analyze and converse with those feelings you have.
              | 
              | If that sounds like intellectual activity above the
              | level of qualia, the same is true for something as
              | simple as the taste of food. We learn what apples taste
              | like by tasting lots of apples, tasting things that
              | aren't apples, and reflecting on and focusing our
              | attention on the experience of apples.
        
               | CuriouslyC wrote:
                | I think learning to mentally analyze complex
                | sensations into components doesn't refute what I
                | said, as you are presupposing the ability to perceive
                | for that process to take place. I don't think an art
                | class is going to revive a philosophical zombie.
        
             | joeyo wrote:
              | I think language acquisition provides a pretty
              | compelling example of learning affecting the experience
              | of qualia. When someone is learning to speak a foreign
              | language, there is often a period where certain sounds
              | are difficult for the learner to produce, because those
              | sounds are not present or are not distinguished in the
              | learner's native tongue. For example, the R and L
              | sounds of English are tricky for a native Japanese
              | speaker.
              | 
              | A reason it's so hard to learn to produce these novel
              | sounds, I would argue, is because the learner _literally
              | cannot hear the differences_ at first. It's only after
              | learning (i.e. when the qualia starts to change) that
              | production of the new sounds becomes possible.
             | 
             | One can think of other similar examples in the context of
             | expert performance: a sonar operator can hear sounds in his
             | headphones that most (at first) cannot; an artist can
             | distinguish colors that the novice cannot, etc.
             | 
             | If you buy this argument, that learning can affect
             | perception/qualia, then it's a fairly small leap to imagine
             | how qualia itself might also be learned _ex nihilo_.
        
               | andrewflnr wrote:
               | That's an example of learning changing which qualia you
               | experience, not teaching you to experience qualia at all.
               | Almost unrelated question.
        
           | belter wrote:
            | That is particularly interesting, especially in the
            | context of the ways we know the human brain works. For
            | example, in automatic writing, patients with neurological
            | damage can write coherent text without conscious
            | awareness of the content. Or in cases of aphasia, where
            | individuals can sing lyrics without consciously
            | understanding the meaning of the words.
            | 
            | And finally... who has never, when particularly tired or
            | worried about something, left home lost in their own
            | thoughts, and in a fog... driven to work... only to
            | realize when arriving that it's the weekend? ;-)
        
             | _heimdall wrote:
              | > Or in cases of aphasia, where individuals can sing
              | lyrics without consciously understanding the meaning of
              | the words.
             | 
              | Hah, well maybe I have aphasia then. My whole life I've
              | heard and even remembered lyrics while singing along
              | with the radio, but if you asked me immediately after
              | the song was over I couldn't tell you the words _or_
              | the meaning.
             | 
             | I hear the music being played and the sounds of the lyrics,
             | but unless I'm trying to pay attention to the words I just
             | completely miss them.
        
               | fuzzfactor wrote:
               | You're being "prompted" by the radio, in real time to
               | boot.
               | 
                | Paying attention only when you want to: that would be
                | a "conscious" effort, maybe not too easily emulated.
        
         | Simon_ORourke wrote:
         | Please stop with the appeal to authority!!
         | 
         | "The argument from authority is a logical fallacy (also known
         | as ad verecundiam fallacy), and obtaining knowledge in this way
         | is fallible."
         | 
         | https://en.m.wikipedia.org/wiki/Argument_from_authority
         | 
          | The meta-fallacy of treating Wikipedia itself as an
          | authority isn't an intended pun.
        
           | jasonlfunk wrote:
            | The author did not commit a logical fallacy. They
            | referenced some people worth reading if you want various
            | opinions on a controversial subject.
        
           | samatman wrote:
           | The meta fallacy on display here is the Fallacy Fallacy.
           | 
           | For every logical fallacy, there is a fallacious application
           | of it to a given example of rhetoric. You've committed the
            | Argument from Authority Fallacy Fallacy: citing people one
            | believes to have worthwhile opinions, and including their
            | accomplishments, is not argument from authority. Argument
            | from authority is claiming someone is correct _based_ on
            | their authority. Which isn't what GP was doing.
        
           | captainclam wrote:
            | OP writes "I'm writing this comment so that people who want
           | to know more about alternative theories of consciousness (to
           | materialism/physicalism [1]) can know where to go to find
           | well-argued positions on the topic."
           | 
           | They very specifically state that these people are good
           | points of entry for "well-argued positions on the topic."
           | Linking to specific literature would have been better, but
           | this isn't "materialism/physicalism is wrong because of these
           | people's credentials."
        
           | tomhoward wrote:
           | It is precisely because their advocacy for metaphysical
           | idealism is unusual for people with their academic
            | qualifications that they are worth mentioning, and why
            | it's worthwhile to listen to them explain their positions
            | at length.
           | 
           | Appeal to authority is where you present a person's status or
           | credentials as primary evidence that their argument is
           | correct. I've done no such thing here.
        
           | eichin wrote:
           | The problem is that while the post looks structurally like an
           | appeal to authority, the first two appear to be advanced
           | qualifications in areas _completely unrelated to the
           | question_ and the third is at best vaguely related. (It threw
           | me on first reading too...)
        
         | mannykannot wrote:
         | > I'd suggest it's particularly important to explore these
         | discussions as dispassionately as possible if you regard
         | materialism as the only theory of mind that has any scientific
         | credibility or validity.
         | 
         | I agree, and similarly for those who feel that materialism
         | cannot possibly explain consciousness. Kastrup, for one, seems
         | to sometimes behave as though ridicule makes his philosophy
         | more correct.
        
           | mekoka wrote:
           | > and similarly for those who feel that materialism cannot
           | possibly explain consciousness.
           | 
           | Perhaps for some it's indeed a matter of "feelings". But for
           | others it's a conviction built from reasoning that leads to
           | self-validating and irreducible truths. If you seriously go
           | into this, the only possibilities left once you've dug and
           | eliminated all mistaken assumptions are not material. It can
           | be counter-intuitive and does take a bit of work to reason
           | your way to those conclusions, which is why it's admittedly
           | not a popular outlook. But once you grok it, you don't go
           | back. The fact that materialism is slowly going out of style
           | is telling.
           | 
            | Whenever I engage with someone who makes concessions about
           | consciousness possibly being the product of matter, it's due
           | to one of two things: either some holes haven't yet been
           | covered in their own explorations, or they're still oblivious
           | to some of the implications of their current position.
           | 
           | Materialism is fast being eliminated as a possible antecedent
            | to consciousness _with reasoning and logic_, not simply with
           | beliefs. Currently, it's being salvaged in popular forms of
           | dualism, where it would be a co-primitive of reality with
           | consciousness (e.g. panpsychism). But even this position is
           | just a short stop-over on the way to idealism, as it creates
           | new problems and is just less parsimonious than simply saying
           | _consciousness first_.
           | 
            | An example of a relatively elusive and subtle realization
            | to get, but one that also becomes rather difficult to
            | renounce once you grok it, is qualia and how they lead to
            | the hard problem
           | of consciousness. Qualia are so enmeshed in our experience
           | that people have a hard time first seeing how divorced from
           | brain activity they actually are. If you don't get qualia,
           | you can't get the hard problem and how it's really an
           | _impossible_ problem
           | (https://www.youtube.com/watch?v=WX0xWJpr0FY).
        
         | WhyOhWhyQ wrote:
         | Any reason why you don't recommend discussions/lectures with
         | Roger Penrose? Or are his theories considered conventional?
         | Genuine question.
        
           | tomhoward wrote:
            | If I were to mention a fourth it would be him, but he's a
            | bit embarrassed about all the controversy around his ideas
            | on consciousness and doesn't really discuss them in depth
            | in any videos I've seen, or wade into heated discussions
            | about the nature of reality.
           | 
           | Whereas the three I mentioned all embrace discussions about
           | the nature of reality and consciousness and happily engage in
           | lengthy discussions and debates about it.
        
         | jpt4 wrote:
         | Any body of academic thought whose paradigmatic communication
         | medium is video rather than text is prima facie suspect. Might
         | you please link a written statement of the salient position(s)
         | of any one of these gentlemen?
        
           | makk wrote:
           | ?
        
             | _a_a_a_ wrote:
             | "video is for poseurs", I think.
        
             | MetallicDragon wrote:
             | "Any kind of big idea which is spread primarily through
             | video instead of text is immediately suspicious. Could you
             | please send a link to a written version of the main points
              | from any one of those videos?"
        
           | Jevon23 wrote:
           | Video is not the standard medium of communication in academic
            | philosophy. I imagine the GP mentioned YouTube because most
           | people are more likely to watch a video than read a paper.
           | 
           | Bernardo Kastrup has a bunch of essays/books up for free at
           | his website https://www.bernardokastrup.com/p/papers.html?m=1
        
             | fngjdflmdflg wrote:
             | Or GP himself watches these videos. And I would push back
             | on the claim that most posters here are more likely to
                | watch a YouTube video than read an article.
        
               | RaftPeople wrote:
               | I was thinking the same thing, I can't stand how slow
               | video is, much easier to read text.
        
           | aydyn wrote:
            | Just curious, why do you write like that? It reminds me
            | of when I was 11 and wanted to sound smarter on the
            | internet.
        
             | agumonkey wrote:
              | Side note: a lot of people get triggered by videos as an
              | information medium, because reading is indexable. I used
              | to be a bit like that and ran into a few extremists.
        
               | Scarblac wrote:
               | Text is also much more dense. What videos spend 15
               | minutes on can be read in a few. You can also skim text
               | first and then switch to deeper reading where desired, et
               | cetera.
        
             | jpt4 wrote:
             | My reply is an attempt to address the original comment with
             | precision. To diagram its intended meaning:
             | 
             | > alternative theories of consciousness
             | 
             | "Any body of academic thought" [I accede the scientific
             | legitimacy of the domain of discourse, rather than
             | dismissing it.]
             | 
             | > know where to go to find well-argued positions on the
             | topic.
             | 
             | "whose paradigmatic communication medium" [This is the
             | beginning of my challenge to the Original Commenter, by
             | granting the information provided authoritative status,
             | which they perhaps cannot fully defend.]
             | 
             | > On YouTube you can find plenty of discussions
             | 
             | "is video rather than text"
             | 
             | > it's particularly important to explore these discussions
             | as dispassionately as possible if you regard materialism as
             | the only theory of mind that has any scientific credibility
             | or validity.
             | 
             | "is prima facie suspect" [The Original Commenter has
             | asserted that discourse and engagement are important, yet
              | provided only time-consuming, low signal-to-noise sources
             | of information.]
             | 
             | > As Christopher Hitchens reminds us in his legendary
             | oration on John Stuart Mill and free speech [2]
             | 
             | "Might you please link a written statement of the salient
             | position(s) of any one of these gentlemen?" [The only
             | written citations are 1) generic and 2) ancillary to the
             | core topic. I invite the Original Commenter to further his
             | argument more substantively, without demanding exhaustive
             | citations.]
        
               | dj_mc_merlin wrote:
               | OK, let me rewrite it:
               | 
               | > Any body of academic thought whose paradigmatic
               | communication medium is video rather than text is prima
               | facie suspect. Might you please link a written statement
               | of the salient position(s) of any one of these gentlemen?
               | 
               | > Academic content is usually in text, not video. Do you
               | have links to written work from them?
               | 
               | Shorter and the exact same meaning. Also doesn't sound
               | like you've been perusing your thesaurus all day.
        
               | jpt4 wrote:
               | At minimum, this does not capture that I _am_ challenging
               | the Original Commenter ("prima facie suspect") to more
               | rigorously defend his position, but doing so
               | respectfully. "One salient" written source is a carefully
               | chosen framing: the OC cannot meet it by replying with
               | support peripheral or meta to the main argument, but
               | neither can he dismiss my request as burdensome,
               | demanding multiple links.
               | 
               | The proposed revision suffers from its terseness, losing
               | both nuance and completeness.
        
               | dj_mc_merlin wrote:
               | Communication is about being understood. Not about
               | crafting the perfect sentence. Even if you craft the
               | perfect sentence, that will be the perfect sentence _for
               | you_, and it might be completely lost on many people,
               | some perhaps even more intelligent than you.
               | 
               | The subtext of "Academic content is usually in text, not
               | video" is "I don't trust this because it's in video, not
               | text". Now if you say that is not clear, sure, but the
               | subtext of your comment is "I opened a thesaurus and
               | tried to seem smart", which is why this conversation
               | derailed here. You can't ignore the subtext to craft a
                | mathematically perfect sentence.
        
               | jpt4 wrote:
               | > Communication is about being understood.
               | 
               | > The subtext of "Academic content is usually in text,
               | not video" is "I don't trust this because it's in video,
               | not text". Now if you say that is not clear, sure
               | 
               | Indeed, relying on the implicit when the explicit is
               | sufficient [0] does a disservice to one's readers, in
               | whose ability and charity to comprehend my surface text,
               | without presuming confounding subtextual meaning, I have
               | every confidence.
               | 
               | [0] It is not always; some things can only be gestured
               | at, not grasped.
        
               | tomrod wrote:
               | Hear hear!
        
               | tomrod wrote:
               | > Communication is about being understood.
               | 
               | This assertion is in error. Communication is about
               | transmitting information. What happens to that
                | information after the transmission is beyond the
                | scope of communication.
               | 
               | Don't get me wrong -- we have communication companies and
               | classes named "business communication" and fields of
                | inquiry titled "communication." Yet the common thread
                | in each of these is wrapping the transmission of
                | information up in additional services. Analogous to
                | how OpenAI and Mistral wrap up LLMs that you and I
                | and anyone can run on our own into well-defined
                | managed services. We use the terms "Generative AI" or
                | "LLMs" for these companies when in reality they too
                | are wrappers around a much simpler concept.
        
               | RaftPeople wrote:
               | > _This assertion is in error. Communication is about
               | transmitting information._
               | 
               | It seems like you might possibly be leaving out the other
               | 50% of communication (hint: it starts with an "r" and
               | ends with "eceiving")
        
               | spoiler wrote:
               | I suspect you might be arguing with either an LLM, or
               | someone using LLM help to write their responses...
        
               | soulofmischief wrote:
               | Some notes from the editor...
               | 
               | I do think there is a middle ground. Look at Bukowski as
               | a good example of effective terseness.
               | 
               | On one hand, you can indeed rely on the precision of a
               | large and unequivocal vocabulary, removing all doubt as
               | to your intentions.
               | 
               | On the other hand, you can also rely on context and find
               | beauty in conveying advanced meaning within a simpler
               | interface. As Antoine de Saint-Exupery says, "Perfection
               | is achieved, not when there is nothing more to add, but
               | when there is nothing left to take away".
               | 
               | There is a creative art to compressing meaning. As
               | evidenced by the response to your first post, things can
               | actually get lost in translation once you stray from the
               | common vernacular in an attempt at precision. The more
               | you can say with less, the more effective each word
               | becomes.
               | 
               | With practice, you can communicate quite profound
               | thoughts in a form that even the most uneducated among us
               | can understand. Know Your Audience. We may be on Hacker
               | News, but we are also on the Web. People encounter and
               | digest a massive amount of text every day. Making them
               | work a little less in order to understand you can be
               | beneficial for everyone.
        
               | mewpmewp2 wrote:
               | To quote the classic:
               | 
               | "Why waste time say lot word when few word do trick"
        
               | verisimi wrote:
               | But you're using the word 'perusing'....!! Who's
               | swallowed the dictionary now, huh?
        
               | tomrod wrote:
               | No, the second approach's meaning is more obtuse. What
               | does "usually" mean? Are there acceptable alternatives?
               | If content is in an alternative mode of communication, is
               | it acceptable?
               | 
                | These vagaries, permitted in your revision, are
                | resolved in the original commenter's wording.
                | Therefore, I submit your adjudication of "shorter and
                | the exact same meaning" is woefully superficial in its
                | drive for simplicity, to the point that no clear
                | thought is left from the original. Further, exact and
                | technical communication is what separates Hacker News
                | commenting from the hordes of subreddits that thrive
                | on imprecise babble.
        
               | mewpmewp2 wrote:
               | Ah, indeed, for nothing epitomizes 'avant-garde scholarly
               | dialogue' quite like a prolix disquisition elucidating
               | the inherent inferiority of audiovisual mediums.
               | Forthcoming: an erudite treatise on the unparalleled
               | intellectual profundity of semaphore communication!
        
               | tomrod wrote:
               | Stupendous and eloquent amendment to today's compendium
               | of literary appreciation.
        
               | potsandpans wrote:
               | Here's what I read
               | 
               | I am very smart. I am very smart. I am very very smart. I
               | am very smart.
        
             | bgandrew wrote:
             | sorry, but it's just common sense
        
           | cscurmudgeon wrote:
           | There are tons of written books and journals on this topic.
           | 
           | My favorite:
           | 
            | https://global.oup.com/academic/product/shadows-of-the-mind-...
        
             | jpt4 wrote:
             | Thank you, this is a good resource.
        
           | a1371 wrote:
            | I think it's pretty elitist to judge the quality of
            | content by whether it's in a book/journal or not. In fact,
            | the recent wave of scientific fraud discoveries shows that
            | one can
           | hide data manipulation pretty effectively in an academic
           | journal. I'd much rather scientists spend their time making
           | eli5 videos.
        
           | tomhoward wrote:
           | Hi there, my intention was to offer some names of people who
           | have intelligent things to say about the topic.
           | 
           | I mentioned YouTube videos because there's a large volume of
           | their content there, with many of the videos featuring in-
           | depth conversations and debates, which I've found can be a
           | particularly good format for discussion of a topic of such
           | gravity and complexity.
           | 
           | But between these three figures there are also many books,
           | academic papers, blog posts, and written media interviews.
           | 
           | I've long found that this is a topic in which some people are
           | going to be standoffish and resistant and that's fine.
           | 
           | My hope is only to help people who are looking to learn about
           | the topic to know who I've found worthwhile to learn from.
           | 
           | All the best!
        
           | GolberThorce wrote:
            | https://www.orwellfoundation.com/the-orwell-foundation/orwel...
           | 
           | required reading @jpt4 u/jpt4
        
         | NikolaNovak wrote:
          | Thanks! Any links to the written word? I just don't do much
         | youtubing, especially for scientific or philosophical areas
         | where information density can be high and videos are
         | infuriatingly inefficient :-(
        
         | mekoka wrote:
          | As someone who's been engaged with this topic (the nature of
          | reality and consciousness) over the past 3 years, it's very
          | surprising to see this comment on HN. As I've suggested in a
          | past comment (https://news.ycombinator.com/item?id=36465928),
          | this can potentially be one of the most transformative
          | rabbit holes anyone can ever hope to enter.
          | 
          | It's been fascinating to observe the dichotomy between
          | researchers who work on explaining consciousness, starting
          | from a physicalist/materialist perspective, slowly being
          | convinced away from that intuition with ironclad arguments,
          | while laypeople lean further into it, deluded by what they
          | perceive to be signs of it from recent AI progress.
          | 
          | Among formerly materialist academics from whom I expect to
          | see publications taking a more affirmed idealist position
          | in the next 5 years, I count David Chalmers and Christof
          | Koch. Perhaps Anil Seth too.
        
           | Lerc wrote:
            | I hadn't encountered conscious agent theory before. I took
            | a quick look and it seemed to be solipsism wearing a
            | disguise. Can you elaborate on how it distinguishes itself
            | from solipsism in its arguments that reality might be
            | real?
            | 
            | I found the evolutionary argument rather odd. The
            | disconnect between perception and reality is pretty much
            | the standard belief these days. Unless I'm reading it
            | wrong, it was making the claim that 'reality' is a
            | non-causal artifact of conscious entities but one that
            | was caused by evolution, which seems contradictory.
        
             | mekoka wrote:
             | Solipsism is skepticism of the existence of anything
             | outside the self. I've seen Hoffman address accusations of
             | solipsism a few times and I have to admit that it's always
             | been unclear to me which part of his theory people tend to
             | perceive as such. Perhaps I've just consumed enough of it
             | to zoom past this perception.
             | 
              | I'll try to keep things short, as this can get pretty
              | long-winded fast.
             | 
             | From what I understand of his proposition, it's a take on
             | idealism that is very close to eastern thought as inspired
             | by nondual traditions like Advaita and Buddhism, but with a
             | heavier emphasis on science. Everything in reality is a
             | projection in consciousness of consciousness. It's made up
             | of interacting conscious agents (you, me, a rock, an atom,
             | a particle, etc) which are themselves "projections"
             | ultimately stemming from a fundamental, unknowable,
             | infinitely distant and unattainable root conscious agent.
             | The implication is that space-time, our perceived reality,
             | is not fundamental. Hoffman thinks that we might possibly
              | have access to at least one higher, more general
              | dimension of reality of which ours is a specialized
              | version (as a hint of this, he speaks of current work
              | in physics where structures outside space-time are
              | being discovered, like the amplituhedron).
             | 
             | Space and time not being fundamental creates problems with
             | some materialist assumptions in evolutionary biology, where
             | consciousness is seen as part of an evolutionary process.
              | Hoffman suggests rethinking evolution from scratch instead.
             | He uses evolutionary game theory to demonstrate that we can
             | have consciousness as fundamental, keep some of the core
             | evolution principles and still end up with consistent
             | conclusions.
             | 
             | I'll stop here, as I've said, it can get deep rather fast.
        
         | Hoasi wrote:
         | Thank you for these recommendations.
        
         | hanrelan wrote:
         | Any videos in particular you'd recommend?
        
           | ggpsv wrote:
            | I found his first appearance [0] on Rupert Spira's show to
            | be a good introduction to his arguments.
           | 
           | For a more thorough examination, his book "The Idea of the
           | World".
           | 
            | [0]: https://m.youtube.com/watch?v=MQuMzocvmTQ&pp=ygUNa2FzdHJ1cCB...
        
           | corry wrote:
            | Not OP, but check out Closer to Truth on YouTube. It's a
            | PBS show hosted by a former neuroscience PhD, and they
            | have tons of recent interviews with leading thinkers on
            | consciousness (among other fascinating topics).
        
         | moffkalast wrote:
         | > it's particularly important to explore these discussions as
         | dispassionately as possible if you regard materialism as the
         | only theory of mind that has any scientific credibility or
         | validity
         | 
         | You're making it sound like I'm about to watch a proverbial
         | Giorgio Tsoukalos make bold claims on pure speculation.
        
       | Dibby053 wrote:
       | So far, the concept of consciousness is basically metaphysics. It
       | doesn't have a role, it can't be measured... If I may suggest a
       | starting point to get over this hurdle: let's create a
       | "consciousness captcha": some task that is easy for conscious
       | beings, but hard for algorithms. Consciousness evolved, therefore
       | it must have provided an advantage. We just have to find it.
        
         | neverokay wrote:
         | Aren't these things supposed to do automatic feature detection?
         | Wouldn't these models figure out consciousness as some hidden
         | layer over time?
        
         | V__ wrote:
         | > Consciousness evolved, therefore it must have provided an
         | advantage
         | 
          | This is also my personal belief, but frankly, consciousness
          | might just as well be some emergent 'side-product' property
          | of all 'higher' intelligent beings.
        
           | visarga wrote:
           | The advantage of consciousness is that it protects the body.
           | Consciousness is the inner loop, evolution is the outer loop
           | of life. The goal of consciousness is tied to its survival.
        
         | golf_mike wrote:
          | How is that different from a Turing test?
        
         | dist-epoch wrote:
         | > It doesn't have a role ... Consciousness evolved, therefore
         | it must have provided an advantage. We just have to find it.
         | 
         | That's a pretty wild claim, considering that the only animal
          | proven to have consciousness is the apex predator of the
         | whole world.
        
           | IshKebab wrote:
           | It's a pretty out-there idea that we're the only conscious
            | animals. It's along the lines of "animals don't have
            | feelings", which seems to be mostly pushed by Christians.
        
             | dist-epoch wrote:
              | I didn't say we're the only one. But we are the only
              | one we know for a fact is conscious.
        
               | groestl wrote:
               | I know only for a fact that I'm conscious ;)
        
             | anon291 wrote:
             | > Along the lines of "animals don't have feelings" which
             | seems to be mostly pushed by Christians.
             | 
             | Can you provide any source for this claim? The historical
             | Christian belief is that animals have souls and feelings
             | and inner experiences.
        
               | IshKebab wrote:
               | It stems from the belief that only humans have a soul.
               | Here's an example:
               | 
                | https://www.christianforums.com/threads/animals-dont-have-fe...
               | 
               | I think that view has changed a lot over the last few
               | decades but certainly 30 years ago it was pretty common.
        
         | QuadmasterXLII wrote:
          | This is a puzzle with a known solution, although we have
          | obviously not found an algorithm that passes. Causally
          | isolate the being/algorithm from known conscious beings, put
          | it with a bunch of its friends instead, and see if it starts
          | arguing with them about qualia. This will of course require
          | creating an AI self-training algorithm strong enough to
          | invent language independently (otherwise you get LLMs, which
          | are likely parroting human arguments about qualia), but it
          | shouldn't require that much compute compared to training and
          | running an AGI for other tasks: humans invent language when
          | you put 30 children together for 30 years (see Nicaraguan
          | Sign Language). This highlights that we have no idea how to
          | train a transformer such that 30 copies of it, left on a
          | Minecraft server and started from random weights, would
          | begin to meaningfully communicate. On the other hand, if
          | 400B-parameter versions of OpenAI's hide-and-seek AI start
          | arguing with each other about whether ramps are really
          | phlornge or if phlornge is just a property in their heads,
          | we ought to believe that they have qualia.
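          | 
          | A hypothetical harness for this test might look like the
          | Python sketch below. All names here are made up, and the
          | hard part (a self-training agent that invents language) is
          | stubbed out rather than solved:
          | 
          |       # Sketch of the isolation test: spawn N agent copies,
          |       # collect their exchanges, scan for qualia-talk.
          |       import random
          | 
          |       class StubAgent:
          |           """Stand-in for an agent trained from random
          |           weights with no exposure to human qualia-talk."""
          |           def __init__(self, seed):
          |               self.rng = random.Random(seed)
          | 
          |           def speak(self):
          |               # A real agent would emit utterances in a
          |               # language its group invented on its own.
          |               return self.rng.choice(["tok-a", "tok-b"])
          | 
          |       def transcript(agents, rounds=100):
          |           return [a.speak() for _ in range(rounds)
          |                   for a in agents]
          | 
          |       def mentions_private_experience(lines):
          |           # The genuinely open problem: recognizing
          |           # qualia-talk in a language we didn't teach them.
          |           # A keyword check is only a placeholder.
          |           return any("phlornge" in ln for ln in lines)
          | 
          |       agents = [StubAgent(s) for s in range(30)]
          |       print(mentions_private_experience(transcript(agents)))
          | 
          | The stub fails exactly where the comment says the
          | difficulty lies: inventing a language, and then detecting
          | unprompted talk about private experience in it.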
        
           | tsimionescu wrote:
           | This might be sufficient, but it wouldn't be necessary. A lot
           | of humans (though not all) believe that at least some
           | mammals, like dogs or chimps, are conscious, though we know
           | for a fact that they don't discuss their qualia.
        
             | QuadmasterXLII wrote:
              | Definitely not necessary. I have no idea how to define
              | a measurable result that is necessary for consciousness,
              | but a thought experiment with a physically measurable
              | outcome that is sufficient proves that consciousness
              | can impact the material world.
        
         | bondarchuk wrote:
         | You're talking about it right now in the physical world,
         | though.
        
       | visarga wrote:
       | Can't open the PDF, is arXiv bugged?
        
       | ryandvm wrote:
       | Consciousness is an evolved information system requirement in all
       | life forms that are complex enough to be capable of
       | computationally modelling their environment, and importantly,
       | their own place in that environment. You cannot model the world
       | and your place in it without consciousness.
       | 
       | Also, consciousness is not binary (present or not), it is a
       | gradient. I am more conscious than I was this morning when I
       | groggily opened my eyes and hadn't yet fully booted up my model
       | of the world. I am more conscious than my dog, which is more
       | conscious than my goldfish, which is more conscious than a worm,
       | etc. A dragonfly has consciousness, but only enough to model, "I
        | am here. I want to eat _that_."
       | 
       | The scary bit about consciousness being a gradient is that we
       | kind of consider ourselves the pinnacle of life forms mostly
       | because of the complexity of our conscious experience. If
       | consciousness is simply an emergent property of sufficiently
        | complex information modelling, then, assuming continued increases
       | in computational capability, we're probably on the brink of
       | creating consciousness that is "more conscious" than ourselves.
       | And by our own definition of import, this consciousness will
       | exceed our own place in the universe.
       | 
       | Will human life be the most important consideration in the
       | universe if there is an artificial intelligence (actually, there
       | will be nothing artificial about it) that is capable of modelling
       | and empathizing with billions of life forms on an individual
       | level?
        
         | ffwd wrote:
         | There are two types of gradient though, conceptually. If
         | consciousness is some state of matter that is unknown still,
         | and each neuron, for example, contains "one bit" of
         | consciousness, then the gradient is that as you add more
         | neurons, you add more complexity to the consciousness, but you
         | do not change the fundamental experience of consciousness. You
         | add more content but not more experience in itself.
         | 
         | If on the other hand consciousness is this emergent phenomenon
         | that depends on neurons and their connections, then the
         | gradient (and thus the experience) would be far more diverse
         | and there would be a lot of different ways consciousness could
         | "feel".
         | 
          | The problem I have is that, for example, as far as my brain
          | can remember, stimuli have looked exactly the same all
          | throughout my life. If I saw a tree when I was 10, and I saw
          | the same tree now, the conscious "qualia" of this would look
          | exactly the same. To me this is a mystery, that the
          | connections in the brain do not change the experience of
          | qualia at all. Red looks like red no matter what the
          | neuronal state of your brain is. I don't have an answer to
          | this; it's just something I've been thinking about.
        
         | verisimi wrote:
          | > "Consciousness _is_"
         | 
         | So certain, so much knowledge. Where do you get this certainty
         | from and can you share with us so we can know too?
        
         | phrotoma wrote:
         | > You cannot model the world and your place in it without
         | consciousness.
         | 
         | At any high school you can find a robot which models its place
         | in the world without consciousness.
        
         | WaltPurvis wrote:
         | Not trying to be facetious: is a Roomba conscious?
        
         | anon291 wrote:
         | > If consciousness is simply an emergent property of
         | sufficiently complex information modelling, then, assuming
         | continued increases in computational capability, we're probably
         | on the brink of creating consciousness that is "more conscious"
         | than ourselves. And by our own definition of import, this
         | consciousness will exceed our own place in the universe.
         | 
         | You went from 'consciousness exists on a gradient' (makes
         | sense) to 'consciousness exists due to information modelling'
         | which is a non sequitur.
         | 
         | Consciousness could be due to information modelling.
         | 
         | It could also be due to our brain's reliance on dopamine.
         | 
         | Or maybe it's due to a heretofore unknown enzyme that taps into
         | a quantum field.
         | 
         | Or any other explanation.
         | 
         | There is no way to prove that consciousness relies on
         | information modelling. That's a major assumption.
        
       | johnaspden wrote:
       | Who cares? And how would we tell?
        
         | ben_w wrote:
         | I care, because getting it wrong in either direction is bad.
         | 
         | Brain uploads will bring significant benefits regardless of
         | whether they are conscious.
         | 
         | Thinking "this brain upload is conscious" when it isn't, means
         | we'll get an empty future where the lights are on and nobody's
         | home.
         | 
         | Thinking "this brain upload isn't conscious" when it is... I've
         | not seen most of Black Mirror, but it is the plot summary of
         | many episodes given on Wikipedia. Also of the Westworld TV
         | show, which I have seen. Some of the characters in the We Are
         | Bob series.
         | 
         | How we would tell is, unfortunately, a complete unknown at
         | this point.
        
       | yreg wrote:
       | I can accept (and to be honest even like) the idea that
       | consciousness somehow emerges from the complex structures in an
       | animal brain, that there is no soul, no other planes of reality,
       | no special quantum phenomena needed, etc.
       | 
       | Maybe we could create a synthetic artificial conscious mind. At
       | worst we could simulate a full human brain at whatever level is
       | necessary. I can accept that.
       | 
       | What's crazy to me is the following: It's not the computer that's
       | conscious. Instead, the computation itself is conscious. And the
       | computation is obviously matter-independent. As a thought
       | experiment it would be possible to compute it on paper _and those
       | pen and paper calculations would be conscious_. Or pebbles in a
       | desert XKCD style.
       | 
       | https://xkcd.com/505/
       | 
       | Like... what?
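       | 
       | For concreteness, the substrate-independence claim can be made
       | precise with a minimal sketch (illustrative code, not from the
       | paper): one update step of the Rule 110 cellular automaton,
       | which is known to be Turing-complete. The mapping below _is_
       | the computation; whether the cells are bits in RAM, pen marks,
       | or pebbles in a desert is irrelevant to it.
       | 
       |   # One synchronous step of the Rule 110 cellular automaton.
       |   # Any medium that can hold the cells and apply this table
       |   # runs the same computation.
       |   RULE_110 = {  # (left, center, right) -> next center cell
       |       (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
       |       (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
       |   }
       | 
       |   def step(cells):
       |       n = len(cells)
       |       return [RULE_110[(cells[i - 1], cells[i],
       |                         cells[(i + 1) % n])]
       |               for i in range(n)]
       | 
       |   tape = [0] * 31 + [1]  # a single live cell
       |   for _ in range(5):
       |       print("".join(".#"[c] for c in tape))
       |       tape = step(tape)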
        
         | IshKebab wrote:
         | I think that probably is the logical conclusion. There are a
         | fair number of sci-fi books along those lines, e.g. Permutation
         | City.
        
         | tasty_freeze wrote:
         | The way I think about it is that no neuron is conscious; it is
         | the network that is conscious. That is why the "mystery" of
         | Searle's Room doesn't seem mysterious at all. Of course the
         | human following the directions in the room doing thinking in
         | Chinese doesn't understand the language any better than ATP
         | (which is what the human is in that thought experiment -- an
         | energy source) knows English in my brain.
        
         | dvsfish wrote:
         | One thought experiment I always find myself having is that if
         | this is the case, does it make any difference if the network is
         | physically larger? Not necessarily more complex; just the
         | distances signals have to travel are bigger, say planet-sized
         | instead of brain-sized. Would this system be conscious but just
         | with a slower tick rate? Faster time perception?
        
         | carra wrote:
         | Couldn't you say the same about us? It's not the brain
         | (hardware) that is conscious, but the mind (software) running
         | on it.
        
         | jodrellblank wrote:
         | How could you be deciding what's for dinner, if the decision
         | takes so long that all the food has rotted, your body has died,
         | and the continent is now in an ice age? How could you be
         | learning something about your observations of the night sky, if
         | the suns had burned out before you knew they were there at all?
         | Our 'conscious awareness' is the awareness of the environment
         | around us in a timeframe where it stays approximately the same;
         | when something changes too quickly, an explosion, a car crash,
         | we have to wait until the environment steadies before we can
         | think about it. If an insect zips by too quick to see, we never
         | become conscious of it. We can be conscious of sound but not
         | radio waves. Is it possible to have a consciousness where
         | _everything_ zips by too quickly for it to notice, where it has
         | no senses to learn about what's around it, it's not-conscious
         | of _anything_?
         | 
         | At that point, the XKCD person moving the stones on the beach
         | is doing a bit of a Searle's Chinese Room; "look, these stones
         | I'm setting into positions I chose to represent some knowledge
         | I chose, which I'm moving in patterns I chose, are echoing my
         | choices back to me in ways I chose to interpret!".
        
       | 13415 wrote:
       | I believe that computationalism is by far the best foundational
       | explanation of higher cognitive phenomena, as all other
       | explanations involve some unscientific form of mysticism at one
       | point or another. From this perspective, the answer to the
       | question in the paper's headline is trivially true.
       | Computationalism implies multiple realizability.
       | 
       | It's worth noting that computationalism is independent from
       | physicalism and monist materialism in general. It's not
       | surprising that it's always paired with physicalism, but IMHO
       | it's also the best foundational explanation for dualists.
        
         | CooCooCaCha wrote:
         | Science and math should always be the default hypothesis. It's
         | irrational for people to jump to mysticism which is basically
         | another form of god-of-the-gaps.
        
       | DeathArrow wrote:
       | How can we assess if someone or something has consciousness? It's
       | not like we have a defined framework with precise rules to tell
       | if something has consciousness. In fact this problem might not be
       | solvable.
        
         | somenameforme wrote:
         | _Might_ not? Anyone stepping into this sort of philosophy with
         | any intent of solving anything is set to be quite disappointed.
         | You have no way of knowing nor testing whether you're the only
         | conscious person in existence. There's even reasonable logic
         | for such scenarios outside of borderline nihilistic views.
         | 
         | We're all living through one ridiculously unique and critical
         | era in humanity - internet, AI, space exploration, and more -
         | all packed into a small enough timeframe to experience it all
         | in a single human lifetime, which is a minuscule fraction of
         | the entire time our species has existed. If and when we ever
         | develop the ability to create mind-blanked, compelling
         | simulations of the past, _this_ moment will damn sure be one
         | of the eras people go back to experience. We're even the first
         | era of widespread, massive digital surveillance alongside the
         | internet, creating more than enough data to build simulations
         | of people like me - to further immerse you in your solitary
         | world.
         | 
         | Of course I am conscious and I certainly assume you are too.
         | But hey, we'll never know until we get to see what, if
         | anything, waits beyond the final curtain.
        
       | neom wrote:
       | My mum is a shrink, and very old, and smart, and hates
       | technology. I was talking her through how some primitive "AGI"
       | could happen with 4o (basically just explained this:
       | https://b.h4x.zip/agi)
       | 
       | That got us talking about consciousness, and at the end, she
       | thought about it for about a minute and then said "if I can't
       | give it lysergic acid and make it see god, it's not conscious"
       | and went back to making her dinner.
        
         | alchemist1e9 wrote:
         | > She thought about it for about a minute and then said "if I
         | can't give it lysergic acid and make it see god, it's not
         | conscious" and went back to making her dinner.
         | 
         | That's probably going to be pretty easy, just scramble some
         | attention heads randomly in portions of the transformer network
         | and the AGI will probably think "it sees god".
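         | 
         | A toy sketch of what "scrambling attention heads" could mean
         | in code - hypothetical names throughout, not any real
         | framework's API: add noise to, and permute, a random subset
         | of per-head outputs before they're recombined.
         | 
         |   import numpy as np
         | 
         |   def scramble_heads(head_out, frac=0.25, noise=0.5, rng=None):
         |       # head_out: (num_heads, seq_len, head_dim) array of
         |       # per-head outputs, before concatenation/projection.
         |       rng = rng or np.random.default_rng()
         |       out = head_out.copy()
         |       k = max(1, int(frac * out.shape[0]))
         |       for h in rng.choice(out.shape[0], size=k, replace=False):
         |           out[h] += noise * rng.standard_normal(out[h].shape)
         |           out[h] = out[h][:, rng.permutation(out.shape[-1])]
         |       return out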
        
         | Zambyte wrote:
         | You can give it a virtual device as the real time camera input
         | to show it things that aren't there. That's not organic acid,
         | but the target isn't organic consciousness.
        
           | fuzzfactor wrote:
           | One uncanny thing is that people have such a diverse
           | tolerance for "fake", across a spectrum from absolute
           | acceptance to complete rejection.
           | 
           | Or for things like hallucinatory perceptions of reality.
        
         | aeonik wrote:
         | A psychologist I know did their PhD in this area (also old and
          | smart), and he called this kind of thinking "neural chauvinism"
          | -- which applies if her explanation actually requires the
          | specific chemical and neural components.
         | 
         | PDF warning:
         | https://gwern.net/doc/philosophy/mind/1985-cuda.pdf
        
           | neom wrote:
            | It's a good point, as I'm sure what she was saying is that
            | she believes consciousness requires specific neural
            | properties. I am aware of the ideas around neural
            | chauvinism, but that paper is now almost 40 years old, and
            | we have learned a lot more about the brain since it was
            | written. The idea that the homunculi could perfectly
            | replicate the relevant causal powers of neurons is
            | questionable at best: distinctive biochemical properties
            | of neurons may enable consciousness, and homunculi would
            | lack them. I also don't like the idea that we must
            | attribute consciousness to the homunculized brain unless
            | we accept an implausible cut-off point. Consciousness
            | could require a critical mass of neurons, one that could
            | be crossed at some point without each individual
            | replacement causing incremental fading. The implausibility
            | of a single neuron replacement eliminating consciousness
            | is not a good reason to consider the homunculized brain
            | conscious. IMO it's probably still more likely that
            | experiences like an LSD trip require specifically neural
            | underpinnings.
        
         | stvltvs wrote:
         | That's begging the question of whether inorganic matter can be
          | conscious. If you boil it down, she's just saying that if it's
          | inorganic, it's not conscious.
        
           | neom wrote:
            | It's funny, I've studied a lot of psychedelics (probably
            | because it was an area of research for both my parents), and
            | salvia divinorum is a real standout plant in the trips it
            | produces; it seems to be a very "technically philosophical"
            | plant. Trip reports always go into weird things like "I
            | became a book on a shelf for 4,000 years, and now I know
            | inanimate objects are conscious". There is also the area of
            | panpsychism and animism.
           | 
           | That stuff is all a little too mind bending for me, but
           | pretty fun thinking for a Sunday morning. :)
        
             | eaenki wrote:
              | Well, I tried salvia and I tried shrooms. I don't see how
              | it's more philosophical just because it's a dissociative.
              | At best it's more of an ego death than, say, shrooms, if
              | during the trip you become an object. It's still a unique
              | plant for sure because of its dissociative effects. To
              | tread lightly: AFAIK natives in Mexico say that smoking
              | it is very bad, and that you're supposed to chew fresh
              | leaves. Maybe chewing fresh leaves doesn't even cause the
              | same dissociative effects, which are unpleasant.
        
         | mgdante wrote:
         | Raising the temperature is probably a close analog to
         | psychedelics.
        
           | boesboes wrote:
            | I've read once that they determined the mechanism by which,
            | among others, LSD 'works' is by lowering/disabling a lot of
            | the filtering between neurons. This leads us to recognise
            | all kinds of patterns that are not really there. Visual
            | hallucinations are the obvious form of this, but I suppose
            | the same applies to other things like our personalities and
            | sense of self; that's neither here nor there anyway.
            | 
            | Now I'm not 100% sure how temperature is implemented, but
            | from what I recollect, it might be a reasonable analogue
            | indeed!
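            | 
            | If it's the usual scheme, temperature is just a division of
            | the logits before the softmax - a minimal sketch, with all
            | names my own:
            | 
            |   import numpy as np
            | 
            |   def sample_token(logits, temperature=1.0, rng=None):
            |       # T -> 0 approaches greedy argmax; T >> 1 flattens
            |       # the distribution toward uniform.
            |       rng = rng or np.random.default_rng()
            |       z = np.asarray(logits, float) / max(temperature, 1e-8)
            |       p = np.exp(z - z.max())  # subtract max for stability
            |       p /= p.sum()
            |       return rng.choice(len(p), p=p)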
        
             | neom wrote:
              | That's not really correct. Default mode network activity
              | is somewhat disrupted, but LSD works primarily by
              | interaction with the 5-HT2A receptor subtype, which
              | touches multiple neurotransmitter systems, including
              | serotonin, dopamine, and glutamate, and in many cases the
              | effect is excitatory, not inhibition/lowering/disabling.
              | Excitatory glutamate neurotransmission is looking like
              | the most important part of the work in the 5-HT2A area.
        
               | schmidtleonard wrote:
                | Does excitatory/inhibitory map cleanly onto the higher
               | level abstractions? I'm no neuroscientist, so maybe it
               | does, but that seems like a bad assumption to me because
               | in digital logic "active when low" is extremely common. A
               | spurious-suppression system that caused inhibition when
               | low would be perfectly reasonable and compatible with the
               | observation that excitation caused an increase in
               | spurious behavior.
        
               | neom wrote:
                | They don't really map well. The brain obviously
                | operates on more continuous principles rather than
                | binary states. The relationship between
                | excitation/inhibition and emergent effects may be more
                | direct and graded in biological neural networks versus
                | "active when low" designs. Buuuutt stiilll, the 5-HT2A
                | receptor and its downstream effects on glutamate
                | transmission play a central role in mediating the
                | subjective effects of psychedelics. Very many studies
                | have consistently linked 5-HT2A activation to the
                | perceptual, cognitive, and emotional characteristics
                | of the psychedelic state. Disruption of the default
                | mode network and other changes are somewhat important,
                | maybe, but the 5-HT2A receptor appears to be the locus
                | of action for producing these effects. The point
                | being, neurochemical interactions at the molecular
                | level are the most important aspects of the LSD
                | interaction, so... not sure how that works in the
                | context of synthetic AI; I don't know much about ML/NN.
        
         | killerstorm wrote:
          | Well, this guy is highly proficient at administering
          | artificial lysergic acid to LLMs:
         | https://x.com/repligate/status/1792010019744960577
        
           | schmidtleonard wrote:
           | Deep Dream is wild:
           | https://www.youtube.com/watch?v=5DaVnriHhPc
        
           | moffkalast wrote:
           | And a few drinks too many: https://www.reddit.com/r/LocalLLaM
           | A/comments/13vv941/tempera...
        
         | bee_rider wrote:
         | That seems more like a good natured refusal to engage with the
         | question seriously.
         | 
          | Sort of like, if you come to a smart engineer with a design
          | for a perpetual motion machine, they might tease you a little
          | bit and then refer you to a physicist. Smart people from
          | applied fields know when topics are outside their actual
          | wheelhouse, but close enough to it that they risk being taken
          | seriously, to a misleading extent.
        
         | agumonkey wrote:
          | Funny story, but at the same time, GPT seems fine about
          | hallucinating by itself. At least a little.
        
         | Der_Einzige wrote:
         | You can give LLMs DRuGS though!
         | 
         | https://github.com/EGjoni/DRUGS
        
       | Beijinger wrote:
       | No mention of Julian Jaynes "The Origin of Consciousness in the
       | Breakdown of the Bicameral Mind" here?
        
         | tsimionescu wrote:
         | Well, mostly because, while an extremely entertaining idea,
         | it's pretty clearly bogus science.
        
           | throwaway71271 wrote:
           | as opposed to the other non bogus science on the objective
           | measurement of consciousness
        
           | Beijinger wrote:
           | Any evidence for that?
        
       | filchermcurr wrote:
       | I would be remiss if I didn't mention Steve Grand. He's been
       | chasing this dream for many, many years. If you remember
       | Creatures, it was his first attempt at artificial life. Sadly the
       | computing capabilities in 1991 weren't enough to achieve anything
       | remotely like consciousness, but he did an admirable job of
       | simulating a simple lifeform with a basic adaptive / reactive
       | neural network. It also has a simple biology / biochemistry to
       | work with the brain. (Incidentally, the Creatures community is
       | alive and well if anybody wants to check that out:
       | https://creatures.wiki/Discord )
       | 
       | Steve is working on a new project, Grandroids, that hopes to
       | imbue creatures with imagination and forward planning. Exciting
       | stuff! (https://creatures.wiki/Grandroids)
        
       | tristor wrote:
       | I am far from an expert on cognitive science, however I have
       | given a considerable amount of thought around the topic of
       | consciousness and AGI, and particularly about what the nature of
       | consciousness even is. I would consider myself erudite and well-
       | read on the topic, despite having no professional or academic
       | credentials on the matter.
       | 
       | The best conclusion I have been able to come to thus far is that
       | consciousness is not a manifestation of the physical structures
       | of our mind, but rather a reflection or view into the nature of
       | our soul. The physical structures of the mind are a prerequisite,
       | but not sufficient, to manifest consciousness. To wit, there are
       | several other mammals in the world which have similarly complex
       | brain structures, and in many cases larger amounts of brain mass,
       | but do not exhibit any sort of human-like consciousness.
       | 
       | I say all this while being generally agnostic/areligious. I've
       | studied this question philosophically from the perspective of the
       | theologians and from various religious works, of course, but
       | given that I don't myself have a strong religious belief system,
       | this is not the primary influence for why I take the above
       | position. Simply put, I think a purely materialistic view of
       | consciousness is clearly incorrect, however I don't have a better
       | alternative that's provable.
       | 
       | Given my conclusions, I do not think it is possible for AGI to
       | ever be truly conscious, but it may be possible for it to
       | convincingly mimic consciousness.
        
         | danans wrote:
         | > consciousness is not a manifestation of the physical
         | structures of our mind, but rather a reflection or view into
         | the nature of our soul
         | 
         | > To wit, there are several other mammals in the world which
         | have similarly complex brain structures, and in many cases
         | larger amounts of brain mass, but do not exhibit any sort of
         | human-like consciousness.
         | 
         | Per your theory these other mammals do not have "souls",
         | otherwise their significant brain mass would reflect their
         | soul's nature, and generate consciousness.
         | 
         | So humans have somehow been chosen exclusively to have "souls",
         | or at least have brains capable of reflecting them.
         | 
          | When did this choosing happen? Just to Homo sapiens sapiens,
          | or also to neanderthalensis and the other Homo sapiens
          | subspecies?
         | 
         | Taking it back further, under your theory do our closest extant
         | genetic relatives, bonobo chimpanzees, have souls, and by
         | extension, your definition of consciousness?
        
       | kbrkbr wrote:
       | I want to open a side thread about your definitions or
       | descriptions of what "consciousness" is. I think that could be
       | pretty interesting after reading all the comments, and I think
       | there's a lot of knowledge hidden here that we could throw
       | together.
       | 
       | Some things that I understood, in my words:
       | 
       | - consciousness is probably not reducible to smaller, non-
       | conscious parts of which it is composed. You could maybe say it
       | is intrinsically holistic
       | 
       | - consciousness entails being aware of or observing qualities
       | that hard science tells us the things don't have (green vs.
       | wavelength of light); but "being aware of" and "observing" are so
       | closely related to consciousness that this may not be very
       | informative
       | 
       | - consciousness can't be detected from the outside for now, and
       | probably never can be, given the structure of the process. It is
       | "inner" in a very peculiar sense (everything else is outer, and
       | can't get in, except as representation)
        
         | XorNot wrote:
         | The fact a conscious mind loses capability when brain damage
         | happens shows quite clearly that consciousness as a process
         | _is_ reducible to smaller non-conscious parts though.
         | 
         | There's also an innate problem in assuming the human experience
         | of say, "green" is consistent. What I actually see when I see
         | the colour green only appears consistent with the physical
         | behaviour of light. Whether any two people really see colours
         | the same way is highly questionable.
        
           | mistermann wrote:
           | > The fact a conscious mind loses capability when brain
           | damage happens shows quite clearly that consciousness as a
           | process is reducible to smaller non-conscious parts though.
           | 
           | https://en.m.wikipedia.org/wiki/Necessity_and_sufficiency
        
           | PeterisP wrote:
            | I think there's a consensus that you _don't_ assume that the
           | human experience of "green" is consistent, only that people
           | do have such an experience. We can possibly try to "align"
           | those experiences with communication and referring to a
           | shared real world, but for that an interesting experiment
           | scenario is communication between a person with the common
           | trichromatic sight, a person with a tetrachromatic retina,
           | and someone with partial color blindness, as the experience
           | of "green" for them is not only inconsistent but also likely
           | incompatible, without a possibility to align them.
        
           | parsadotsh wrote:
           | > The fact a conscious mind loses capability when brain
           | damage happens shows quite clearly that consciousness as a
           | process is reducible to smaller non-conscious parts though.
           | 
           | This does not follow.
        
         | igiveup wrote:
         | Ability to perceive your own thoughts. Access to your own debug
         | logs.
         | 
         | I think this is distinct from "ability to perceive green, which
         | doesn't exist". A neural network trained to distinguish green
         | in the output of a spectroscope will perceive green without
         | ever knowing there is something like a "neural network" or
         | "thoughts".
         | 
         | Also, this probably _can_ be detected from the outside via
         | debugging. What cannot be detected may be the thing that
         | distinguishes a hypothetical  "philosophical zombie" from a
         | truly conscious human, but I don't think anything like a
         | philosophical zombie exists. Once it is physically identical to
         | the human, it will also be thinking identically.
         | 
         | As a next step, you may observe humans around you and realize
         | that the thoughts you perceive seem to be running inside the
         | head of one of these humans (which you will call "me").
         | However, I don't think knowing what you look like from the
         | outside is necessary for consciousness.
        
       | potatoman22 wrote:
       | The authors suggest incorporating elements of neuroscience into
       | machine learning models. I don't see why the bitter lesson [1]
       | doesn't apply here.
       | 
       | 1.
       | https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson...
        
         | Conlectus wrote:
         | Isn't ReLU an element of neuroscience that was incorporated
         | into machine learning to great success?
        
           | etiam wrote:
            | Not really, no. That choice is motivated by not getting
            | impractically small gradients on the plateaus, which spoil
            | the optimization when training deep ANNs. The sigmoids ReLU
            | replaced had a bit more neuroscience inspiration, but so
            | oversimplified that it barely counts.
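            | 
            | A quick numeric illustration of the plateau point
            | (illustrative code only, nothing from the thread): the
            | sigmoid's gradient collapses toward zero away from the
            | origin, while ReLU's stays at 1 wherever the unit is
            | active.
            | 
            |   import numpy as np
            | 
            |   def sigmoid_grad(x):
            |       s = 1.0 / (1.0 + np.exp(-x))
            |       return s * (1.0 - s)  # peaks at 0.25, ~0 on plateaus
            | 
            |   def relu_grad(x):
            |       return 1.0 if x > 0 else 0.0
            | 
            |   for x in [-10.0, -2.0, 0.0, 2.0, 10.0]:
            |       print(x, sigmoid_grad(x), relu_grad(x))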
        
         | jvanderbot wrote:
         | Regarding that article, not main post,
         | 
         | > Today all this is discarded
         | 
          | I can only comment on the one field with which I am intimately
          | familiar: computer vision. It is true that when you need a text
          | description of the contents of an image, we have discarded
          | feature-based approaches. But attempts to change vision-based
          | tracking, mapping, and navigation into a learned process have
          | not performed well in the applications I have worked on. It's
          | true that end-to-end control from raw images to output can do
          | very well, but in most systems the feature-based approaches
          | are still employed _alongside_ CNNs for tracking. ML-only
          | tracking is subject to a lot of noise because of its lack of
          | good history, poor association, and sensitivity to outliers.
         | 
         | So, it's not discarded, it's supplanted by CNN as the primary
         | signal, but our old tricks (reassociation, factor graphs, batch
         | processing, even plain old homographies and MH-EKF) are still
         | very much the scaffolding.
         | 
          | I expect it is the same in the other subfields mentioned - the
          | main driver of improvement is no longer human-directed,
          | knowledge-based algorithms, but rather human-designed,
          | learning-based, heterogeneous pipelines. Even RAG or the Tesla
          | autopilot (probably) fits this bill nicely.
        
         | RaftPeople wrote:
          | There will always be a set of problems beyond the current (in
          | whatever year) computational limits of brute force, and we
          | don't know how many of a human's capabilities are in that set.
         | 
         | The delta between a clever algorithm vs brute force in
         | computational advancement could be 7 years or it could be 7,000
         | years.
        
       | monstertank wrote:
       | I never understood this. Evolution basically says humans are
       | conscious matter... why would that not be replicable via an
       | intelligent designer (humans)?
       | 
       | Randomness created consciousness from a single cell...why would
       | that be the most efficient solution?
        
         | jugg1es wrote:
         | We can't replicate something we don't understand. The mechanism
         | for consciousness is not understood at all right now and could
         | actually be based on quantum effects that we haven't detected
         | yet. It's also possible that it is only achievable in an
         | organic machine. Until we understand how consciousness actually
         | arises, the best we can do is a simulacrum of it.
        
           | zeroonetwothree wrote:
            | We can't replicate it now, but that doesn't mean it's
            | impossible.
           | 
           | Like no one thinks that humans visiting Pluto is impossible,
           | it's just not something we can feasibly do right now.
        
             | thethimble wrote:
             | The framework of materialism posits that there's a physical
             | universe and consciousness is an emergent property of
             | physical processes. This view is so prevalent in the
             | western world it's hard to imagine how it could be anything
             | else.
             | 
             | As an alternative, imagine that consciousness is primary.
             | After all, any evidence that you have about the material
             | world happens as an appearance within consciousness. (See
             | "brain in a vat" and related thought experiments for the
             | legitimacy of this idea).
             | 
             | In this alternative model, the concept of replicating
             | consciousness with material processes doesn't make any
             | sense because consciousness is primary.
             | 
             | To be clear I'm not making any assertions about which model
             | is correct. Instead I'm suggesting that the model you
             | choose is axiomatic - taken as given as opposed to inferred
             | from evidence. And starting with the latter model means
             | artificial replication of consciousness isn't even a
             | logical proposition.
        
               | fngjdflmdflg wrote:
               | >I'm suggesting that the model you choose is axiomatic -
               | taken as given as opposed to inferred from evidence
               | 
               | The brain is the seat of consciousness and the brain is
               | material, therefore consciousness is emergent from
               | material. My evidence that the brain is the seat of
               | consciousness is that when my head hurts it impairs my
                | thoughts, and that my eyes are connected to my brain.
               | 
               | Stated a bit differently:
               | 
               | All events must have a cause, therefore consciousness
               | must have a cause. The brain is the most likely candidate
               | for the cause of consciousness. The brain is material,
               | therefore consciousness is emergent from material.
               | 
               | What role do you think the brain plays in consciousness?
               | Do you believe that events must have causes?
        
               | thethimble wrote:
               | > The brain is the seat of consciousness and the brain is
               | material, therefore consciousness is emergent from
               | material
               | 
               | This is true from the standpoint of materialism but not
               | necessarily fundamentally true.
               | 
               | How do you know you have a brain? As you explore this
               | question, you'll realize that the knowledge that you have
               | a brain only manifests as appearances within
               | consciousness.
               | 
                | It's not necessarily true that these appearances are giving
               | you a window into an objective material universe. Instead
               | it might be possible that your consciousness is a product
               | of a simulation where your entire subjectivity -
               | including the observation that you have a brain - is a
               | manifestation of another mechanism that is outside of
               | observability.
               | 
               | The point is that we simply don't know what's at rock
               | bottom - an objective universe, a simulation, or an
               | alien's dream. Therefore the "arrow" of causality might
               | flow from consciousness towards material as opposed to
               | the other way around.
        
           | moi2388 wrote:
            | Why not? I can replicate a book in a foreign language
            | without understanding the language.
        
             | fuzzfactor wrote:
             | IOW you are hardly conscious at all of what the book is
             | about.
        
             | AnimalMuppet wrote:
             | You can copy an existing book, word for word (maybe not
             | even that if it uses a different character set, unless
             | you're doing photocopies).
             | 
             | Write a _new_ book without understanding the language? No
              | way - not one that makes any sense. Not unless you're
             | going with the "million monkeys" approach (and if you did
             | try that approach, you wouldn't live long enough to succeed
             | in writing one actual coherent book in the foreign
             | language).
             | 
             | So, we could think about trying to simulate a human brain,
             | neuron by neuron. That's the "making a photocopy" approach.
             | But that's not the approach we're pursuing. We're trying to
             | write a new book (create a new, non-human intelligence) in
             | a language we don't understand (that is, not knowing what
             | intelligence/consciousness actually is).
             | 
             | (Side topic: What would happen if you asked GPT-4 to write
             | a full-length novel? Or even a story as long as the token
             | limit? Would you get anything coherent?)
        
               | davnicwil wrote:
               | As a tangent on this, it'd be such an interesting
               | experiment to see how far one could go in
               | deciphering/understanding a new language and attempting
               | to write a new book in that language based on the content
               | of a single, probably fairly long, book.
               | 
               | It feels like it should be theoretically possible, but I
               | doubt it's ever been tried.
               | 
                | Maybe something like understanding ancient languages
                | from limited, fragmented sources is the closest natural
                | experiment we have in practice to have tested it, but
                | it's hardly the same as a full, substantial text in a
                | consistent style.
        
               | hovering_nox wrote:
               | Congratulations, you have just invented large language
               | models.
        
           | __loam wrote:
           | I agree that people who think we can replicate brains on von
           | Neumann machines with our current understanding of the brain
           | are idiots who don't know what they're talking about, but
           | humans build things they don't understand all the time.
           | There's always a way to go deeper on a subject. The Romans
           | were pretty good architects even without a modern
           | understanding of metallurgy and structural engineering. We
           | can treat mental illness with medications even if we don't
           | fully understand consciousness.
        
         | jacobsimon wrote:
         | I agree but even if you take materialism for granted, we've yet
         | to uncover the exact biological mechanism. It's entirely
         | possible that it is unique to carbon-based and/or analog
         | brains.
        
           | zeroonetwothree wrote:
           | That wouldn't really be consistent with the laws of physics
           | as we know them. So it would require a significant change in
           | our scientific theories (which is possible, but I wouldn't
           | bet on it)
        
             | jacobsimon wrote:
             | I don't follow you. I'm saying we haven't discovered any
             | inorganic consciousness, so it isn't a given that we will
             | be able to create it with digital computers. Not sure how
             | that breaks the laws of physics.
        
               | __loam wrote:
                | Programmers seem to forget that physics is the reason
                | there are NP-hard problems.
        
               | Thiez wrote:
               | Humans aren't better at solving NP-hard problems so I
               | don't really see the connection with consciousness here.
        
         | naasking wrote:
         | Some people are convinced by p-zombies and The Knowledge
         | Argument that not everything we experience can be reduced to
         | matter interactions.
        
           | chrsw wrote:
           | How is that different than saying the function of the lungs,
           | kidneys and heart can't be reduced to matter interactions?
           | How is the brain special?
        
             | ikrenji wrote:
             | well the brain is special because it gives rise to
             | consciousness. lungs, kidneys and heart don't
        
               | astrange wrote:
               | There's a lot of nerves and bacteria in the digestive
               | system that might contribute some of it. That and various
               | hormonal glands and reproductive systems.
        
             | naasking wrote:
             | You have to Google and read those thought experiments to
             | see why. You may not be convinced (I'm not), but they give
             | good reasons. We have mechanistic explanations for all of
             | those organs, and even if we lack some explanation, we know
             | one is possible in principle. They argue this isn't the
             | case for consciousness.
        
               | card_zero wrote:
               | OK.
               | 
               | Philosophical zombies react to external events in exactly
               | the same way as normal people, _including internally,_
               | but we are told they lack conscious experience. Thus the
               | thought experiment is set up from the start to find that
               | conscious experience is something non-physical - or else
                | the p-zombies don't really do what they're claimed to
               | do, which is to react _identically_ to everyone else.
               | 
               | There's a dubious implication that conscious experience
               | is completely cryptic, with no effect on the outside
               | world (such as a person speaking the words "I consciously
               | experienced that"), or at least that all such effects are
               | shallow enough that they can be perfectly faked. If this
               | is true, we ought to question why it's such a big deal.
               | What's so great about consciousness? Why associate it
               | with rights?
               | 
               | The Knowledge Argument is about a scientist who learns
               | "everything" about colors intellectually but doesn't see
               | them until years later, and seeing a red tomato is a
               | revelatory experience even after all that book-learning,
               | so it implies that experiences are beyond knowledge, or
               | beyond physics, or beyond tomatoes or something. But
               | _really_ all it shows is that intellectual learning is
               | dry and dusty and limited. Like with the p-zombies, the
                | premise is wrong. The scientist didn't really learn
               | everything before having the experience, but could have
               | done _in principle_ but for the limits of communication,
               | description, and simulation as we know those things
               | presently. (And then the real experience would not have
               | had any surprising or revelatory quale about it.)
        
               | naasking wrote:
               | > Thus the thought experiment is set up from the start to
               | find that conscious experience is something non-physical
               | 
               | The point is that if you accept that p-zombies are
               | possible, then you accept that consciousness is not
                | _necessarily_ physical. If it's not necessarily
               | physical, then physicalism is false.
               | 
               | > really all it shows is that intellectual learning is
               | dry and dusty and limited.
               | 
               | What it's attempting to show is the limit of factual
               | knowledge. If physicalism is true, then everything that
               | can be observed must reduce to objective third person
               | facts. But, Mary has all of the objective third person
               | facts. So if you find it implausible that Mary would be
               | able to infer the experience of red before actually
               | observing a rose, even with all of those facts, then
               | you're admitting the existence of first person subjective
               | facts, which cannot be reduced to objective third person
               | facts, not even in theory.
               | 
               | Daniel Dennett has some great responses to these
               | challenges.
        
       | annica wrote:
       | I think multiple forms of consciousness exist now. ADHD, autism?
       | 
       | I think it's a good thing.
        
       | astrobe_ wrote:
       | What is this obsession with making computers human-like?
       | 
       | We can put those machines to better uses than forgetting, having
       | biases, and making subtle or gross mistakes (hello, ChatGPT) like
       | we do.
       | 
       | The AI that some apparently consider top-notch today is only good
       | at powering NPCs in video games. The more serious the use-case,
       | the more harmful it becomes.
       | 
       | Moreover, achieving artificial consciousness is only good for
       | fueling endless debates about whether it fakes consciousness
       | perfectly or is the real thing.
        
         | anon-3988 wrote:
          | Agreed. The calculator is not human-like; let's figure out how
          | to humanize it! Intelligence comes in myriad forms. Mimicking
          | humans will just produce a human, so what's the point? I am
          | not even sure a super-intelligent "human" is even a good
          | thing. The smartass will probably spend their intelligence
          | betting on the stock market.
        
         | SJC_Hacker wrote:
         | > What is this obsession with making computers human-like?
         | 
         | Replacement of human labor on the cheap, the most capital
         | intensive part of many businesses. Alternatively as a
         | productivity multiplier of human labor.
        
           | __loam wrote:
           | "Cheap"
           | 
           | OpenAI loves to talk revenue but I'd like to hear more about
           | their unit economics.
        
         | deepfriedchokes wrote:
         | I think humanity is afraid of being alone.
        
           | Der_Einzige wrote:
           | Damn straight. If space travel is so bloody hard, I'll just
           | make the damn aliens myself.
           | 
           | Related, but LLMs are the ultimate example of humans
           | following the tradition of the Timaeus and imitating the
           | demiurgos - the divine crafter, as the gods did when they
           | crafted the demigods, as the demigods did when they crafted
           | us, and as we do when we craft machine intelligence. Neo-
           | platonism engaged with this idea a long time ago.
        
         | PeterisP wrote:
         | I think there's practical value to knowing what would make a
         | very complex AI system conscious in order to explicitly and
         | intentionally avoid it - we want to create powerful artificial
         | systems to do all the tasks that humans don't want to do, but
         | there's no reason for those systems to have the capability to
         | experience suffering by being 'enslaved' to do all these tasks;
          | arguably consciousness is neither necessary nor useful for any
          | of these tasks[1], and unless it turns out to be an inalienable
          | side-effect of some critical mass of capability, we'd rather
          | not have these systems be conscious.
         | 
         | [1] I can imagine a few roles with relationship-building where
         | _expressing_ consciousness could be useful, but IMHO for all of
         | them faking consciousness would be far preferable than actually
         | having it.
        
           | __loam wrote:
           | We're kind of failing by making AI that automates the things
           | humans want to do.
        
       | hsnewman wrote:
       | Please define consciousness.
        
       | dukeofdoom wrote:
       | Are cells conscious? I presume yes. I've seen a video of a white
       | blood cell chasing after a bacterium that was trying to evade it,
       | around obstacles, and it's pretty dramatic.
       | 
       | Since at one point we were just two cells, all that was required
       | for us to be conscious must have been already encoded in those
       | cells. Unless you want to argue consciousness spontaneously
       | arises out of a grouping of specific cells. In which case the
       | grouping of those cells was also already encoded in those two
       | cells.
        
       | K0balt wrote:
       | What is a non-theist argument that dictates we should treat
       | synthetic "consciousness" as fundamentally different from
       | biologically derived consciousness?
       | 
       | I think this is a valid question even if the strongest evidence
       | for machines having "real" consciousness is its external/utility
       | indistinguishability from biological consciousness.
       | 
       | It would seem to me that a machine that seems to be conscious and
       | professes that it is should then be treated as if it were, if for
       | no other reason than the likely outcome of not doing so -
       | creating "synthetic distrust and enemity" between "conscious"
       | machines and humanity.
       | 
       | It seems like if we are going to ignore the "utility" argument
       | for consciousness, we then must prove that other humans as well
       | are actually conscious, and not just appearing to be so.
       | 
       | Seems like a bad place for the species to go, for a multitude of
       | reasons.
        
       | anon-3988 wrote:
       | If we are talking about intelligence or "agency", then I don't
       | see why something other than carbon can't do it.
       | 
       | But if we are talking about "there's there there", then I am not
       | sure if we will ever find an explanation for this. Let's say that
       | X is what gives rise to "there's there there". What does this
       | tell me? Nothing.
       | 
       | This is perhaps the most interesting, most important, and most
       | baffling mystery of the universe. It requires no equipment other
       | than one's own mind. Esoteric contemplatives have spent millennia
       | on this, and none of them have the answer to the ultimate
       | question.
        
         | prolyxis wrote:
         | I am hopeful that advances in brain-computer interfaces will
         | start to provide a partial answer to the question of "what's
         | there" and why it's there. It seems to me the ability to
         | controllably augment one's own consciousness with precision
         | will tremendously clarify the necessary ingredients for
         | consciousness.
        
           | lottin wrote:
           | I think the main ingredient is a living being.
        
             | Thiez wrote:
             | Why?
        
       | mrangle wrote:
       | After reading the abstract, the message received is that they
       | intend to redefine consciousness to match whatever they are able
       | to achieve, however limited.
       | 
       | The upshot will be a lot of possible gaslighting. I remember when
       | kids thought that Teddy Ruxpin was conscious. What's society's
       | tolerance for how alt a consciousness can be?
        
       | light_triad wrote:
       | The problem with consciousness is that it's both a vague term
       | that has been difficult to define and a concept with social roots
       | that are not really universal.
       | 
       | The debate reminds me of a sequence in Werner Herzog's 1972
       | documentary 'The Flying Doctors of East Africa' where multiple
       | people are asked to point at a picture of an eye. Some have
       | difficulty with the task. Even though it's not scientific, it
       | makes you realize how much of what we see and recognize is a
       | learned behavior acquired through years of social training rather
       | than an innate ability.
       | 
       | Here's the scene (starts at 40min it's in German but you can auto
       | translate subtitles):
       | https://youtu.be/MZ3MMEe3Qmk?si=i0ydc3DN3aohnrIO
        
         | martin-t wrote:
         | Do we know why they failed at the task? My immediate thought is
         | that it's a failure of translation. My second is to "debug"
         | their reasoning by asking them to describe the pictures.
         | 
          | There's gotta be somebody who knows more, but the information
          | was lost on the cutting room floor in favor of a moral
          | lesson.
        
           | light_triad wrote:
           | Agreed it's not possible from the footage to draw any
           | definite conclusions - the whole exchange and portrayal is
           | deeply flawed. I wonder if anyone has seen some scientific
           | papers with a reproducible methodology that look into the
           | deep differences in cultural framing along those lines?
        
       | ACV001 wrote:
       | We don't really need artificial consciousness. Actually, we
       | should avoid it. We already have Artificial General Intelligence
       | (contrary to common perception). Artificial consciousness will
       | only complicate things. A lot. Better to keep enhancing the AGI
       | we have and use it as a tool rather than having to deal with the
       | emotional and ethical aspects of an AI in case it becomes
       | conscious.
        
       | strangescript wrote:
       | "Is this thing we defined to make us feel special achievable?"
       | What we define as "consciousness" is just an emergent property of
       | large, dense neural networks w/memory.
       | 
       | The human brain is a marvel of mechanical and electrical
       | engineering, not a mystical, otherworldly device. Maybe there is
       | something going on at a quantum level to explain the incredible
       | performance our brains achieve, but that is still nothing that we
       | can't build.
        
         | refulgentis wrote:
         | > What we define as "consciousness" is just an emergent
         | property of large, dense neural networks w/memory.
         | 
         | Wow. That settles a ton of questions across fields.
         | Fascinating.
         | 
         | Source?
        
         | d_theorist wrote:
         | How do you know?
        
         | cscurmudgeon wrote:
          | Positions 1 and 2 are not consistent (all our neural networks
          | are classical):
          | 
          | 1. "emergent property of large, dense neural networks
          | w/memory"
          | 
          | 2. "Maybe there is something going on at a quantum level"
          | 
          | 3. consciousness is a mystical, otherworldly device
          | 
          | A lot of people who oppose 1 and agree with 2 are unfairly
          | lumped in with position 3.
        
         | __loam wrote:
         | The brain is not a product of engineering, but of evolution. As
         | a biomedical engineering graduate, I feel obligated to point
         | out that even though I think consciousness is a result of
         | physical process that in theory can be replicated, we have so
         | far failed to achieve that. Saying consciousness is "just" an
         | emergent property of dense neural networks is kind of a signal
         | to me that have a pretty ignorant view on this stuff that's
         | more rooted in machine learning terminology than actual
         | biology.
        
       | cs702 wrote:
       | Before we can answer that question, don't we need a method for
       | determining whether an entity is conscious?
       | 
       | As far as I know, such a method doesn't exist yet. We have no way
       | of verifying if other entities are conscious.
       | 
       | I'd settle for a more mundane goal than "consciousness":
       | reliable machine intelligence.
        
       | aleksiy123 wrote:
       | It seems to me that our definition of consciousness is just our
       | own state of mind: a scale that starts with our own individual
       | sense of consciousness (most conscious) and goes to infinity
       | (least conscious).
       | 
       | Do we believe there is anything more conscious than our own
       | individual self at the current moment in time?
       | 
       | The more similar we perceive another entity to be, and the more
       | we empathize with it, the more conscious we believe it to be.
       | 
       | But I think in a practical sense it doesn't matter too much.
       | There's no global hierarchy of consciousness.
       | 
       | A rock isn't very similar to us. But once that rock starts
       | talking/communicating with us enough that we can empathize with
       | it and think that it understands us, we will call it conscious
       | and probably give it the rights we think it deserves.
        
         | ypeterholmes wrote:
         | You're making this harder than it needs to be. Things that have
         | self awareness via an internal world map can be considered
         | conscious, meaning self aware. A rock has no such mechanism.
         | Most animals including humans do.
        
       | dr_dshiv wrote:
       | The Pythagorean Philolaus claimed that "The soul is introduced
       | and associated with the body by Number, and by a harmony
       | simultaneously immortal and incorporeal....the soul cherishes its
       | body, because without it the soul cannot feel"
       | 
       | So, what I like about this is how consciousness (feeling--
       | sensation-sentience) is distinguished from the soul. Further, it
       | is precisely by the combination of the soul and body that
       | consciousness arises. And note that the soul is, essentially, the
       | same sort of material as number and mathematics -- immaterial,
       | eternal, etc.
       | 
       | I don't know of other perspectives on the soul like this.
        
       | causality0 wrote:
       | We had really better hope that biological brains are
       | fundamentally "bad" at consciousness, because if a human brain is
       | as efficient at generating consciousness as other biological
       | brains are at doing much older, more refined tasks, we're totally
       | fucked. For example, a dragonfly brain takes visual input from
       | thousands of ommatidia and uses it to track prey in 3D space and
       | plot intercept vectors using only sixteen neurons. How many
       | transistors does a computer need to do the same thing? Now scale
       | that to however many neurons and synapses you guesstimate a human
       | brain needs to create consciousness. The numbers are bad news.
       | 
       | There's little doubt that eventually we will be able to design a
       | computer version of a human soul. Whether that sapience fits in a
       | computer smaller than a zip code or thinks any faster than a
       | person is an entirely different question.
        
       | nobrains wrote:
       | From Google Gemini:
       | 
       | - This is an article about artificial consciousness.
       | 
       | - It discusses whether it is possible to create artificial
       | consciousness by studying the human brain.
       | 
       | - The authors argue that some features of the human brain are
       | necessary for consciousness.
       | 
       | - These features include a specific biochemical makeup and slow
       | information processing speed.
       | 
       | - Current AI technology does not have these features.
       | 
       | - The article concludes that it is unlikely that AI will achieve
       | human-level consciousness in the near future.
        
       | cryptonector wrote:
       | > To date computers (and AI in general) operate prevalently in an
       | input-output mode. This is strikingly not the case for the human
       | brain which works in a projective - or predictive - mode
       | constantly testing hypotheses (or pre-representations) on the
       | world including itself (J.-P. Changeux, 1986; Friston et al.,
       | 2016; Pezzulo, Parr, Cisek, Clark, & Friston, 2024). This
       | projective/predictive mode relies on the fact that the brain
       | possesses an intrinsic - spontaneous - activity (Dehaene &
       | Changeux, 2005). The earliest forms of animals do exhibit a
       | nervous system where spontaneously active neurons can be recorded
       | (e.g., jelly fish, hydra). Such spontaneous oscillators [...]
       | 
       | > Last in agreement with our views, the active inference theory
       | formalizes how autonomous learning agents (whether artificial or
       | natural) shall be endowed with a spontaneous active motivation to
       | explore and learn (Friston et al., 2016), which other studies
       | confirmed to be sufficient for the emergence of complex behavior
       | without the need for immediate rewards
       | (https://arxiv.org/abs/2205.10316).
       | 
       | ...
       | 
       | What I take from this is that there needs to be something of a
       | [negative] feedback loop in the AI for it to get to
       | consciousness, and if we think about how that works in nature,
       | that means we need several negative feedback loops, including the
       | AI equivalent of various hormones and signaling agents. Think
       | dopamine.
       | 
       | Now, AI already has feedback. But I'm talking at a different
       | layer. The AI's interaction with the world has to help drive and
       | modulate what the AI seeks to do through that interaction. The AI
       | needs motivation in the form of pain, pleasure, and instincts.
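       | 
       | As a very rough illustration, here's a minimal Python sketch
       | of that kind of loop: an internal drive signal with a
       | homeostatic set point (think of it as a stand-in for
       | dopamine) that modulates whether the agent explores or
       | exploits. All names and constants are illustrative
       | assumptions, not from the paper.
       | 
       |     import random
       | 
       |     class MotivatedAgent:
       |         def __init__(self, set_point=1.0):
       |             self.set_point = set_point
       |             self.drive = set_point  # internal "hormone" level
       | 
       |         def step(self, prediction_error):
       |             # Surprise pushes the drive up...
       |             self.drive += 0.2 * prediction_error
       |             # ...negative feedback decays it to the set point.
       |             self.drive -= 0.1 * (self.drive - self.set_point)
       |             # The drive modulates what the agent seeks to do.
       |             p_explore = max(0.0, min(1.0, self.drive - 0.5))
       |             if random.random() < p_explore:
       |                 return "explore"
       |             return "exploit"
       | 
       |     agent = MotivatedAgent()
       |     for t in range(5):
       |         print(t, agent.step(prediction_error=random.random()))
       | 
       | The point of the second feedback term is that the loop is
       | self-limiting, the way hormone levels are, rather than a
       | plain input-output mapping.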
        
       | coldblues wrote:
       | I think it's possible that machines could become conscious in the
       | future. Do I think it matters? No. A lot of people mistakenly try
       | to empathize and think ethically. I think it's the Effective
       | Altruism brainrot. If you are a physicalist, consider this:
       | consciousness is a phenomenon that can be replicated and modified
       | any which way. Right now, you are having a subjective conscious
       | experience. This experience of yours is seamless regardless of
       | what happens. Whether you are in a coma, asleep or dead, your
       | subjective conscious experience will indefinitely be there
       | regardless of your memories. You will continually wake up. You
       | will continue to experience until it is impossible for any
       | consciousness to ever form. That means an indefinite amount of
       | suffering. This is not some larger-than-life threat. It's all
       | physical. Pain and suffering are evolutionary products. You want
       | to minimize them because of your biology. You have evolved to be
       | able to relate with someone's pain and this has presented the
       | advantage of being able to work together and care for one
       | another.
       | 
       | I am intrigued by non-human consciousness. A higher form of life.
       | Seeing more colors, feeling more emotion, perhaps being a part of
       | a hivemind. Do you ever wonder why the hivemind is vilified?
       | Realistically there's nothing wrong with it. It's just so foreign
       | that we can't ever possibly imagine it. It's a scary thought. We
       | lose our human experience, something unique to us.
        
         | filipezf wrote:
         | I've been working on computational modelling of
         | consciousness and came to similar conclusions: there is a
         | continuum of patterns of matter that have consciousness
         | (humans and other animals, maybe a sufficiently biological
         | computer, etc.), which leads to all sorts of crazy stuff
         | being possible. Evolved human ethics and feelings of care
         | are incompatible with these amorphous extended
         | possibilities. It could lead to an ultimate Copernican
         | revolution that ends human exceptionalism, to outlawing
         | consciousness tinkering (for how long?), or to us burying
         | our heads in the sand.
        
       | avastmick wrote:
       | I really cannot understand consciousness. And if I am honest,
       | nor do I see people's fascination with it. Especially now that
       | scientists are trying to measure it, they seem to come up
       | short, and some posit that it must be a quantum effect or
       | something else.
       | 
       | I am not sure what new learning all the research and thinking
       | brings: I am lost in all the "arguments"; I really do not
       | understand.
       | 
       | A lot of way smarter people than me think it's a worthwhile
       | concept to wrestle with. Maybe I'm just not smart enough to get
       | it.
       | 
       | A lot of people way smarter than me agonised over the nature
       | of the soul too. Is it that debate replayed? Are we just
       | trying to justify humans' "specialness"?
        
         | blamestross wrote:
         | Once you start considering it as just metacognition, it
         | becomes a much more useful concept, but most people seem to
         | desperately want it to be "special".
         | 
         | Ask any therapist: consciousness has limits, it has bounded
         | depth, and it takes a lot of work to re-train it. It isn't
         | a magical, human-only special thing.
        
         | moffkalast wrote:
         | I think the core concept is pretty simple: it's
         | metacognition. A feedback loop of observing oneself for the
         | purpose of understanding which part of your observations is
         | yourself and which is the environment. Just like
         | intelligence, it scales from rudimentary in animals to
         | overdeveloped in humans.
         | 
         | The only way to know if it's genuinely occurring or not is to
         | observe the internal state... which is a bit problematic for
         | biological organisms since we tend to die in the process. But
         | for AI it should be fairly straightforward to verify once DNN
         | analysis progresses enough.
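         | 
         | As a toy sketch of that feedback loop (with the big assumed
         | simplification that a forward model predicting the sensory
         | consequences of one's own actions is enough to make the
         | split; all names here are hypothetical):
         | 
         |     class SelfModel:
         |         def __init__(self):
         |             self.last_command = 0.0
         | 
         |         def act(self, command):
         |             # Remember what we just told our motors to do.
         |             self.last_command = command
         | 
         |         def attribute(self, sensed_change, tol=0.05):
         |             # Trivial forward model: a command should
         |             # produce an equal sensed change. Matches are
         |             # tagged "self"; the rest is "environment".
         |             predicted = self.last_command
         |             if abs(sensed_change - predicted) < tol:
         |                 return "self"
         |             return "environment"
         | 
         |     m = SelfModel()
         |     m.act(0.5)
         |     print(m.attribute(0.5))   # -> self
         |     print(m.attribute(-0.3))  # -> environment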
        
       | h_tbob wrote:
       | I sat in a philosophy class in high school. I went to a prep
       | school, so the teacher had a Ph.D. I adamantly explained that
       | neurons firing is the same as a feeling of pain. But he kept
       | telling me that my feeling of pain is distinct from the firing in
       | my brain. It took me weeks to realize what he was saying. That a
       | conscious feeling is a distinct thing. That I could be in "the
       | matrix", and no brain is actually firing. That the feeling is the
       | only thing we know is real.
       | 
       | So I would suggest that simulating consciousness has nothing to
       | do with it. I would suggest there must be technology in the brain
       | to produce it that operates on a level we have no comprehension
       | of. Maybe quantum or something.
        
         | chess_buster wrote:
         | Get low on blood sugar and notice how your brain stops
         | working and your consciousness falls apart.
        
         | Thiez wrote:
         | I always think of this comic when people try to mix quantum
         | mechanics and consciousness:
         | https://www.smbc-comics.com/comic/the-talk-3
         | You have to show an actual relation here; you can't just
         | connect these two concepts on the basis that they're both
         | complex.
        
       | light_triad wrote:
       | In many ways 'consciousness' is a human projection onto machines.
       | The whole debate might be, in some sense, irrelevant. Here
       | are two scenarios:
       | 
       | - Your robot companion gets so good at mimicking a human, knows
       | all your preferences and is able to have natural conversations,
       | that over time you forget you are interacting with a robot and
       | start to develop feelings for them.
       | 
       | - A robot swarm interacts with the environment to further its
       | goals and continues to grow long after humans have disappeared
       | from the scene.
       | 
       | In both cases is there consciousness? Does it matter?
        
         | davesque wrote:
         | This echoes my feelings on this whole debate. The question of
         | consciousness matters much less than the question of how we
         | decide to treat things that we view as being very much like us.
        
       | crazy5sheep wrote:
       | It's really just a state machine.
        
       ___________________________________________________________________
       (page generated 2024-05-19 23:00 UTC)