[HN Gopher] One Head, Two Brains: The origins of split-brain res...
       ___________________________________________________________________
        
       One Head, Two Brains: The origins of split-brain research (2015)
        
       Author : shry4ns
       Score  : 62 points
       Date   : 2025-02-19 19:22 UTC (3 days ago)
        
 (HTM) web link (www.theatlantic.com)
 (TXT) w3m dump (www.theatlantic.com)
        
       | neonate wrote:
       | https://web.archive.org/web/20241228215151/https://www.theat...
       | 
       | https://archive.ph/gJ32A
        
       | zonkerdonker wrote:
       | The confabulation to justify picking out related images that the
       | left brain never observed (chicken and snow shovel in the
       | article) reminds me profoundly of the confident slop produced by
        | LLMs. Makes you wonder if LLMs might be one half of the "brain"
        | of a true AGI.
        
         | jjtheblunt wrote:
         | along those lines, maybe dreaming is piecing together new
         | adventures imagined from snippets of reality.
        
           | encipriano wrote:
            | Those videos about AI making up a game after having watched
            | countless hours of streaming are fucked up. It looks
            | exactly like dreams do.
        
         | MathMonkeyMan wrote:
         | That's a theme in the novel "Neuromancer".
        
         | danielmarkbruce wrote:
         | confident slop.
        
           | nuancebydefault wrote:
            | I believe most confident statements people make are
           | established the same way. There are some anchor points
           | (inputs and vivid memories) and some part of the brain in
           | some stochastic way dreams up connections. Then we convince
           | ourselves that the connections are correct, just because they
           | match some earlier seen pattern or way of reasoning.
        
             | zdragnar wrote:
             | The basis of human irrationality is not tied to the basis
             | of LLM irrationality.
             | 
             | LLMs don't get to make value judgements, because they don't
             | "understand". They predict the subsequent points of a
             | pattern given a starting point.
             | 
             | Humans do that, but they also jade their perception with
             | emotive ethics, desires, optimism and pessimism.
             | 
             | It is impossible to say that two humans with the exact same
             | experience would always come to the same conclusion,
             | because two humans will never have the exact same
             | experience. Inputs include emotional state triggered by
             | hormones, physical or mental stress, and so forth, which
             | are often not immediately relevant to any particular
             | decision, but carried over from prior states and biological
             | processes.
        
               | svachalek wrote:
               | Just because humans have additional sources of
               | irrationality doesn't mean they don't also have
               | irrationality based on the same lack of self-awareness
               | that LLMs exhibit.
        
               | nuancebydefault wrote:
               | I could understand that argument as follows: LLMs fill in
               | the gaps in a creative but predictable way. Humans fill
               | in the gaps in creative but unpredictable ways. The
               | creativeness level is affected by the ad hoc state of the
               | brain.
               | 
               | I understand that you relate judgement, ethics and
               | emotions to 'understanding'. I'm not convinced. Emotions
               | might as well be an effect of pattern matching. You hear
               | a (pattern matched) type of song, you feel a certain way.
        
               | lo_zamoyski wrote:
               | Conversely, human beings with varying particular
               | experiences can come to the same conclusions, because
               | human cognition can abstract from particulars, while LLMs
               | are, at best, statistical and imagist. No two of us ever
               | experience the same set of triangles, but abstraction
               | allows us to form concepts like "triangularity", which
               | means we can _understand_ what it means to be a triangle
               | per se (something that is not concrete or particular, and
               | therefore cannot be visualized), while an LLM can only
               | proceed based on the _concrete_ and _particular_ data of
               | input triangles and derivations introduced into the
               | model. It can never go  "beyond" the surface features of
               | the training model's images, as it were, and where the
               | appearance of having done so occurs, it is not via
                | abstraction, but by way of the product of human
                | abstraction.
               | From the LLM's perspective, there is no triangle, only
               | co-occurrence of features, while abstraction goes beyond
               | features, stripping them away to obtain the bare,
               | unimaginable form.
        
           | Lerc wrote:
           | The confidence seems to be an artifact of fine tuning. The
            | first instruction-trained models were given data sets with
            | answers to questions but generally omitted non-answers to
           | things the model didn't know.
           | 
           | Later research showed that models know that they don't know
           | certain pieces of information, but the fine tuning constraint
           | of providing answers did not give them the ability to express
           | that they didn't know.
           | 
           | Asking the model questions against known information can
           | produce a correct/incorrect map detailing a sample of facts
           | that the model knows and does not know. Fine tuning a model
            | to say "I don't know" in response to those questions
           | where it was incorrect can allow it to generalise the concept
           | to its internal concept of unknown.
           | 
           | It is good to keep in mind that the models we have been
           | playing with are just the first ones to appear. GPT 3.5 is
            | like the Atari 2600. You can get it to provide a limited
            | experience for what you want, and it's cool that you can do it
           | at all, but it is fundamentally limited and far from an ideal
           | solution. I see the current proliferation of models to be
           | like the Cambrian explosion of early 8 bit home computers.
           | Exciting and interesting technology which can be used for
           | real world purposes, but you still have to operate with the
           | knowledge of the limitations forefront in your mind and
           | tailor tasks to allow them to perform the bits they are good
            | at. I have no idea of the timeframe, but there is plenty more
           | to come. There have been a lot of advances revealed in
           | papers. A huge number of those advances have not yet
           | coalesced into shipping models. When models cost millions to
           | train you want to be using a set of enhancements that play
           | nicely together. Some features will be mutually exclusive. By
           | the time you have analysed the options to find an optimal
           | combination, a whole lot of new papers will be suggesting
           | more options.
           | 
           | We have not yet got the thing for AI that Unix was for
           | computers. We are just now exposing people to the problems
            | that drive the need to create such a thing.
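The correct/incorrect mapping described above can be sketched in a few lines. This is a hypothetical toy, not any lab's actual pipeline: `model_answer` stands in for querying a real model, and correctness is checked by exact match rather than a proper evaluation harness.

```python
# Sketch: build "I don't know" fine-tuning data from a model's known failures.
# `model_answer` is a hypothetical stand-in for querying an actual model.

def model_answer(question):
    # Pretend the model genuinely knows only one of these facts,
    # and confidently guesses on everything else.
    known = {"Capital of France?": "Paris"}
    return known.get(question, "Atlantis")

def build_finetune_set(facts):
    """Label each question correct/incorrect against known ground truth,
    then map the incorrect ones to an explicit "I don't know" target."""
    examples = []
    for question, truth in facts.items():
        answered = model_answer(question)
        target = answered if answered == truth else "I don't know"
        examples.append({"prompt": question, "completion": target})
    return examples

facts = {
    "Capital of France?": "Paris",
    "Capital of Atlantis?": "None",  # the stub model guesses wrong here
}
print(build_finetune_set(facts))
```

The idea is that fine-tuning on the resulting examples lets the model generalise from these sampled failures to its internal sense of "unknown", rather than memorising the specific questions.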
        
         | codr7 wrote:
         | It certainly looks like what LLMs are doing is one aspect of
         | what a brain is doing.
        
           | ggm wrote:
            | Key point here is "looks like". If you want to argue this
            | further, I suggest investing the time asking brain scientists
            | what they think. Not AI scientists, but people who actually
            | work in cognition.
           | 
           | (Not a brain scientist btw)
        
       | Hacker_Yogi wrote:
       | I disagree with Steven Pinker's claim that consciousness arises
       | from the brain.
       | 
       | This perspective fails to establish that the brain produces
       | consciousness, as it relies on the mistaken assumption that
       | "mind" and "consciousness" are interchangeable. While brain
       | activity may influence the mind, consciousness itself could be a
       | more fundamental aspect of reality. Rather than generating
       | consciousness, the brain might function like a radio, merely
       | receiving and processing information from an all-pervasive field
       | of consciousness.
       | 
       | In this view, a split-brain condition would not create two
       | separate consciousnesses but instead allow access to two distinct
       | streams of an already-existing, universal consciousness.
        
         | actionfromafar wrote:
         | Descartes was pretty much on the same page.
        
         | at_a_remove wrote:
         | I cannot see how one might perform an experiment to determine
         | which concept is correct. As with most things which are
         | unfalsifiable, the idea can be amusing for a bit but is
         | ultimately not useful to the extent that you can do anything
         | about it. You cannot serve tea from Russell's Teapot.
        
           | cognaitiv wrote:
           | If the brain is a receiver, information transfer could happen
           | non-locally and the tea might be telepathy, precognition, or
           | remote viewing. In the split brain example, demonstrating an
           | ability to coordinate between hemispheres in ways not
           | predicted by neural separation might challenge the physical
            | origin of consciousness, as with the chicken and shovel
           | anecdote.
           | 
           | Experiments demonstrating an external source of consciousness
           | would be very interesting.
           | 
           | Not a teapot in this case!
        
             | cognaitiv wrote:
             | Or communicate telepathically with dogs.
        
             | at_a_remove wrote:
             | Ah, no.
             | 
              | Suppose you do all kinds of studies and none show any
              | telepathy, precog, or remote viewing. You could still say
             | that the brain was only a receiver. None of that would
             | _disprove_ the  "brain-as-consciousness-receiver" concept,
             | you would just say that, I guess it is one way, no
             | telepathy.
             | 
             | It's not disprovable. And so, kind of boring.
        
         | kerblang wrote:
         | It's not Steven Pinker's claim alone. Gazzaniga agrees, I
         | think, and I know of one other prominent neuroscientist but
         | don't remember his name. Pinker is "just" a psychologist.
         | 
         | (Edit: Michael Graziano is who I was trying to remember - he
         | uses the words "schematic" and "model")
         | 
         | Your view is called "pan-psychism". It's interesting, but there
         | isn't anything that makes it necessary. Everything we're
         | finding out is that most or all thinking happens outside of
         | consciousness, and the results bubble up into it as perception.
         | Consciousness does seem to be universal _within_ the brain,
         | though.
         | 
         | I find pan-psychism interesting just because of its popularity
         | - people want something spiritual, knowingly or not. I would
         | advise not to insist that consciousness==soul, however, as
         | neuroscience seems to be rapidly converging on a more mundane
         | view of consciousness. It's best to think of one's "true" self
         | according to the maxim that there is much more to you than
         | meets the mind's eye.
        
           | codr7 wrote:
           | Or, people are spiritual, and realize it to different
           | degrees. It's very easy to get confused about what we know
           | and don't know on these subjects.
        
         | antonkar wrote:
         | Yep, some unfinished philosophy if you're into it: you can
          | imagine that our universe at a moment of time is just a
         | giant geometric shape, then at the next moment the universe
          | somehow changes into this new shape. How does this change
         | happen? Some believe it's a computation according to a rule/s,
         | some that it's not a discrete change but a continuous equation
         | that changed the shape of the universe from one to another.
          | Basically you can imagine the whole universe as a long-exposure
          | photograph in 3D, and then there is some process that "forgets"
         | almost all of it leaving only slim slices of geometry and
         | changing from one slice into another. This forgetting of the
         | current slice and "recalling" the next, is consciousness, the
         | time-like process. And it looks like the Big Bang was like
         | matter converted to energy (or "space converted to time")
         | process. The final falling into a giant black hole will be the
         | reverse: energy converted to matter (or "time converted to
         | space"). Some say electrons are like small black holes, so we
         | potentially experience the infinitesimal qualia of coming into
         | existence and coming out of existence, because we are
         | sufficiently "time-like" and not too much "space-like". I'll
         | soon write a blog post ;)
        
         | jstanley wrote:
         | If consciousness doesn't arise from the brain, it seems to be
         | _suspiciously_ well correlated with the brain.
         | 
         | I think consciousness arises from the brain.
        
           | MailleQuiMaille wrote:
           | "If the music I dance to doesn't arise from the radio, it
           | seems to be suspiciously well correlated with the radio.
           | 
              | I think the music I dance to arises from the radio."
        
             | dbtc wrote:
             | Well, it must all come from a singularity some time before
             | the Big Bang.
             | 
             | Yet, when I turn the radio on, music really does seem to
             | come out of it.
             | 
             | And when I turn the radio off, the music stops (for me, but
             | not for you).
             | 
             | Without the radio there is no sound, but the radio needs a
             | signal.
             | 
             | Does the radio make the music? Quite an interesting
             | metaphor.
        
               | MailleQuiMaille wrote:
               | Yes ! I like it even more when you consider the
               | brainwaves that deal with...frequency...hmm...
        
             | kulahan wrote:
             | Note that in this scenario, we've never even heard of radio
             | stations or radio waves before.
        
               | yencabulator wrote:
               | And despite looking for them intensely, we have never
               | found any evidence of the existence of radio waves, or
               | been able to send a signal to a radio ourselves.
        
             | jstanley wrote:
             | Postulate 1: The music is created by the radio in the form
             | of sound waves, the end.
             | 
             | Postulate 2: The music was played by a band in the form of
             | sound waves, some time in the past. The band recorded their
             | music on to some storage medium so that it could be
             | transmitted to the future. In the present, the storage
             | medium is connected up to a piece of equipment that turns
             | the recorded signal into some invisible power transmission
             | that spreads throughout space in a way you can't experience
             | directly with any of your natural senses. The radio however
             | can sense these invisible power transmissions and can turn
             | them back into audio that sounds like what the band played
             | in the past. So we're saying that it is possible to create
             | music in the form of sound waves (that's what the band
             | did), and it is possible for the radio to output sound
             | waves that sound like music (that's what the radio does),
             | but the radio is curiously not the thing that is producing
             | music and instead we have an enormous system of technology
             | transmitting the music across space and time.
             | 
             | You'd need an awful lot of evidence to convince me that
             | postulate 2 is true and postulate 1 is false.
             | 
             | On the one hand you have "consciousness can be created, and
             | it is created by the brain". On the other hand you have
             | "consciousness can be created, and it is created somewhere,
             | but it's not created by the brain, instead it is created
             | somewhere else and there is a system of consciousness
             | transmission that gets it into the brain".
             | 
             | There's just no reason to prefer the second explanation. It
             | is a more complicated story.
        
           | selcuka wrote:
           | > I think consciousness arises from the brain.
           | 
           | I tend to agree, but it doesn't fully explain Benj Hellie's
           | vertiginous question [1]. Everyone seems to have brains, but
           | for some reason only _I_ am me.
           | 
           | If we were able to make an atom-by-atom accurate replica of
           | your brain (and optionally your body, too), with all the
           | memories intact, would you suddenly start seeing the world
            | from two different pairs of eyes at the same time? If not,
            | why?
           | What would make you (the original) different from your
           | replica?
           | 
           | [1] https://en.wikipedia.org/wiki/Vertiginous_question
        
             | spiderfarmer wrote:
             | New commits.
        
             | trescenzi wrote:
             | I don't understand how this refutes physicalism. Only my
             | eyes are hooked up to my brain. If you duplicate the whole
             | system there would be a duplicate that would begin
             | experiencing its own version of reality.
        
               | selcuka wrote:
               | > I don't understand how this refutes physicalism.
               | 
               | Maybe it doesn't and there is a plausible explanation,
               | that's why it has been an unanswered question. But it's
               | definitely an astonishing question.
               | 
                | You instinctively say that even if you duplicate the
               | whole system "you" would remain as "you" (or "I", from
               | your point of view), and the replica would be someone
               | else. In this context you claim that there is a new
               | consciousness now, but there was supposed to be one,
               | because our initial assumption was consciousness ==
               | brain.
               | 
               | You are right if you define consciousness as being able
               | to think, but when you define it as what makes you "you",
               | then it becomes harder to explain who the replica is. It
               | has everything (all the neurons) that makes you "you",
               | but it is still not "you".
               | 
               | The above may not make sense as it is difficult for a
               | layman such as me to explain the vertiginous question to
               | someone else. I suggest you to read the relevant
               | literature.
        
               | trescenzi wrote:
               | Oh yes if the question is if the duplicate is also _me_
               | then I understand the concern. That's a much more
               | complicated question. But when it comes to perspective
                | it's easy to answer. Which I guess is literally what the
                | wiki page says; it makes more sense as you state it,
                | though.
               | 
               | Thanks for the additional explanation. I have read a good
               | deal from Nagel to Chalmers and somehow missed this
               | particular question.
        
               | selcuka wrote:
               | > I have read a good deal from Nagel to Chalmers and
               | somehow missed this particular question.
               | 
               | Chalmers' "Hard Problem" is very similar, although not
               | exactly the same. My understanding is that it asks "why
               | is there something called consciousness at all", as in, a
               | robot doesn't have the notion of "I", but for some reason
               | we do. The question is hard because it is hard to explain
               | it only by our brains being more complex than a robot's
               | CPU. Hellie's question is "why am I me and not someone
               | else".
        
               | ssfrr wrote:
               | Say I walk into a machine, and then I walk out, and also
               | an exact duplicate walks out of a nearby chamber. My
               | assumption is that we'd both feel like "me". One of us
               | would have the experience of walking into the machine and
               | walking out again, and the other would have the
               | experience of walking into the machine and being
               | teleported into the other chamber.
               | 
                | I'm probably lacking in imagination, or the relevant
               | background, but I'm having trouble thinking of an
               | alternative.
        
               | selcuka wrote:
               | > My assumption is that we'd both feel like "me".
               | 
               | You assume that both would feel like you, but there is no
               | way you can prove it. The other can be a philosophical
               | zombie [1] for all you know.
               | 
               | Would the "current you" feel any different after the
               | duplication? Most people, including me, would find this
               | counterintuitive. What happens if the other you travels
               | to the other end of the world? What would you see? The
               | question is not how the replica would think and act from
               | an outside observer's perspective, but would it have the
               | same consciousness as you. Would you call the replica
               | "I"?
               | 
               | Or to make it more complex, what would happen if you save
               | your current state to a hard disk, and an exact duplicate
               | gets manufactured 100 years after you die, using the
               | stored information?
               | 
               | [1] https://en.wikipedia.org/wiki/Philosophical_zombie
        
               | kristiandupont wrote:
                | Like GP, I feel that I might be lacking imagination
               | here, but I really don't follow what this is supposed to
               | reveal.
               | 
               | >Would you call the replica "I"?
               | 
               | The two would start out identical and immediately start
               | to diverge like twins. They would share memories and
               | personality but not experience? What am I missing here?
        
               | polishdude20 wrote:
               | I too don't get what's being missed.
        
               | maksimur wrote:
               | I understand what the author means, though I struggle to
               | express it as well. The best I can come up with is this:
                | What defines I? Is it separate from "I", and if so, how?
                | Or does I merely appear that way because our perspective
                | is informed by our limited being?
        
               | jstanley wrote:
               | > Would you call the replica "I"?
               | 
               | Both of the replicas would refer to themselves as "I",
               | but neither would refer to the other as "I".
        
             | peterlada wrote:
             | It would be a fork. Identical experience until that point
             | but bifurcated from the point of fork since it no longer
              | occupies the same physical space.
        
             | jstanley wrote:
             | I feel like this is just a totally stupid question.
             | 
             | The brain has inputs, internal processing, and outputs. The
             | conscious experience happens within the internal
             | processing.
             | 
             | If you make a second copy, then that second copy will also
             | have conscious experience, but it won't share any inputs or
             | outputs or internal state with the first copy.
             | 
             | If you were to duplicate your computer, would the second
             | computer share a filesystem with the first one? No. It
             | would have a copy of a snapshot-in-time of the first
             | computer's filesystem, but henceforth they are different
             | computers, each with their own internal state.
             | 
             | You could argue that there are ways to do it which make it
             | unclear which is the "original" computer and which is the
             | "copy". That's fine, that doesn't matter. They both have
             | the same history up to the branching point, and then they
             | diverge. I don't see the problem.
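The snapshot-then-diverge picture in the comment above can be shown with a toy state object; this is only an illustration of the computer analogy under that commenter's framing, not a claim about brains.

```python
import copy

# A toy "computer" with internal state: snapshot it, then let both diverge.
original = {"filesystem": ["photo.jpg"], "history": ["boot"]}
clone = copy.deepcopy(original)  # identical up to the branching point

# After the fork, each copy receives its own inputs.
original["history"].append("saw the Eiffel Tower")
clone["history"].append("stayed home")

print(original["history"])  # ['boot', 'saw the Eiffel Tower']
print(clone["history"])     # ['boot', 'stayed home']
# Shared past, different futures; neither copy observes the other's inputs.
```

The `deepcopy` matters: a shallow copy would leave both "computers" sharing one history list, which is exactly the shared-internal-state scenario the analogy rules out.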
        
               | selcuka wrote:
               | When you replace "I" with "it" (as in your example with
               | computers) the question becomes meaningless and stupid.
               | As an outside observer both computers are the same, as
               | they act exactly the same way, therefore there is no
               | question. That is actually the "egalitarian" view in Benj
               | Hellie's paper [1]:
               | 
               | > The 'god's eye' point of view taken in setting up the
               | egalitarian metaphysics does not correspond to my
               | 'embedded' point of view 'from here', staring out at a
               | certain computer screen.
               | 
                | The vertiginous question (or Chalmers' Hard Problem [2] to
               | a degree: Why does physical brain activity produce a
               | first-person perspective at all?) is about the
               | subjectivity of consciousness. I see the world through my
               | eyes, therefore there is only one "I" while there are
               | infinitely many others.
               | 
               | The duplication example was something I made up to
               | explain the concept, but to reiterate, if I could make a
               | perfect copy of me, why would I still see the world from
               | the first copy's eyes and not the second, if the physical
               | structure of the brain defines "me"? What stops my
               | consciousness from migrating from the first body to the
               | second, or both bodies from having the same
               | consciousness? Again, this question is meaningless when
               | we are talking about others. It is a "why am I me"
               | question and cannot be rephrased as "why person X is not
               | person Y".
               | 
               | Obviously we don't have the capacity to replicate
               | ourselves, but I, as a conscious being, instinctively
               | know (or think) that I am always unique, regardless of
               | how many exact copies I make.
               | 
               | As I mentioned in another comment, I don't have a formal
               | education on philosophy, so I am probably doing a
               | terrible job trying to explain it. This question really
                | makes sense when it clicks, so I suggest you read a more
                | qualified person's explanation.
               | 
               | [1] http://individual.utoronto.ca/benj/ae.pdf
               | 
               | [2] https://consc.net/papers/facing.pdf
        
               | jstanley wrote:
               | > Why does physical brain activity produce a first-person
               | perspective at all?
               | 
               | I agree that this question is mysterious and fascinating,
               | I just don't think the question of forking your
               | consciousness bears on it at all.
               | 
               | The fact that first-person perspective exists is probably
               | the fact that I am most grateful for out of all the facts
               | that have ever been facts.
               | 
               | But I don't have any difficulty imagining forking myself
               | into 2 copies that have a shared past and different
               | futures.
        
               | card_zero wrote:
               | Right, yes: _Why does physical brain activity produce a
               | first-person perspective?_
               | 
               | We might ask "what else do we expect it to do?" A
               | _second_ person perspective makes even less sense. And
                | since the brain's activity entails first-person-
               | perspective-like processing, the next most obvious
               | answer, no perspective at all, isn't plausible either.
               | It's _reasonable_ that the brain would produce a first
               | person perspective as it thinks about its situation. (And
                | you don't have to extend this to objects that _don't_
               | think, by the way, if you were thinking of doing that.)
               | 
               | But I'm still left with the impression that there's an
               | unanswered question which this one was only standing in
               | for. The question is probably "what is thinking,
               | anyway?".
               | 
               | Or, something quite different: "Why don't I have the
               | outside observer point of view?". It's somehow difficult
               | to accept that when there are many points of view
               | scattered across space (and time), you have a specific
               | one, and don't have all of them: "why am I not
               | omniscient?". It's egotistical to expect _not_ to have a
               | specific viewpoint, and yet it seems arbitrary (and thus
               | inexplicable) that you _do_ have one. But again, the real
               | question is not  "why is this so?" but "why does this
               | seem like a problem?".
        
             | card_zero wrote:
             | Yes, the two of you would see through two pairs of eyes,
             | independently.
             | 
             | Both of you would be you, and you two would function
             | separately, occupy separate spaces, and diverge slightly in
             | ways that would only rarely make a difference to your
             | personality.
             | 
             | But that's not the vertiginous question, which is "why am I
             | me". I've wondered that before. However, it _is_ nonsense.
             | Naturally a person is that person, not some other person
             | (and a tree is a tree, not some other tree). There's
             | nothing strange about this. Why would it be otherwise? So
             | the urge to ask the question really reveals some deep-
             | seated misconception, or some other question that actually
             | makes sense, and I wonder what _that_ is.
        
               | jodrellblank wrote:
               | I wonder if the origin of the question is the religious
               | idea of a separate immortal soul which popped into _this_
               | body and not into some other body - but in some way
               | _could have_. This concept is in popular discourse like
               | "what if I had been born in Italy in 1420?!" as if that
               | were a thing thats plausible - an "I" separate from this
               | body /place/time/life
               | experiences/memories/language/family/etc but somehow
               | still 'me'.
               | 
               | Boring materialism view is that a brain with genetics
               | mixed from my parents and raised in the way I was raised,
               | with the experiences I had here and in this time, is what
               | makes "me" and I couldn't be anywhere or anyone else.
               | 
               | Or another way, we are all everyone else - what it would
               | be like if I was born to your parents and raised like you
               | is ... you. What you would be like here is... me.
        
               | card_zero wrote:
               | Well, if I were you, I wouldn't worry about it.
        
             | layer8 wrote:
             | > What would make you (the original) different from your
             | replica?
             | 
             | You'd be in two different locations, have independent
             | experiences, and your world lines would quickly diverge.
             | Both of you would remember a common past.
             | 
             | How do you know when you wake up in the morning that you
             | are the same "I" as you remember from the previous day? Who
             | is to say that the universe didn't multiply while you
             | were asleep, and now there are two or more of you waking
             | up?
             | 
             | (You don't actually need to go to sleep to do this:
             | https://cheapuniverses.com/)
        
               | detourdog wrote:
               | I think this is what Severance is about.
        
         | financetechbro wrote:
         | The idea that the brain functions as a sort of radio capturing
         | a consciousness field makes the most sense to me, and also
         | feels comforting in some way.
        
         | morkalork wrote:
         | This is dualism, no?
        
           | Barrin92 wrote:
           | It's not a dualism at all. What the OP is proposing is
           | similar to Spinoza (probably the most hardcore monist to ever
           | exist), where mind is a fundamental property of the universe
           | (in fact, there's only one mind) and each individual person
           | is a 'mode' of it.
           | 
           | It's effectively akin to talking about mass. Despite the fact
           | that mass is observable as a distinct phenomenon in any
           | object, it's obviously not accurate to say that you "produce
           | mass" or that it's "your mass" in some private, ontologically
           | separated way, it just appears that way, by definition if we
           | look at particular manifestations of it.
        
         | EMM_386 wrote:
         | I've had numerous LLMs tell me that humans are conscious
         | because we are like radio receivers, picking up a single
         | consciousness field of the universe itself.
         | 
         | So that's very interesting that you mention that.
        
         | layer8 wrote:
         | This would imply that the behavior of elementary particles in
         | the brain (which ultimately causes our observable behavior via
         | nerve signals and muscle movements, including the texts we are
         | typing or dictating here) differs from that predicted by the
         | known physical laws. That's difficult to reconcile with the
         | well-confirmed fundamental physical theories, and one has to
         | wonder why nobody tries to experimentally demonstrate such
         | known-physical-laws-contradicting behavior. It would be worth
         | at least one Nobel Prize.
         | 
         | Secondly, it wouldn't really explain anything. The
         | "consciousness field" would presumably obey _some_ kind of
         | natural laws like the known fields do, but the subjective
         | experience of consciousness would remain as mysterious as
         | before (for those who do find it mysterious).
        
       | teddyh wrote:
       | Related: _You Are Two_ by CGP Grey:
       | <https://www.youtube.com/watch?v=wfYbgdo8e-8>
        
       | 0x1ceb00da wrote:
       | Looks like this was one of the inspirations behind Severance.
        
       | nuancebydefault wrote:
       | The fact that the explaining part of the brain fills in any
       | blanks in a creative manner (you need the shovel to clean the
       | chicken shed) reminds me of some LLM replies.
       | 
       | I once gave an LLM the riddle of the goat, cabbage and wolf,
       | and changed the rules a bit: I prompted that the wolf was
       | allergic to goats (and hence would not eat them). Still, the LLM
       | insisted on not leaving them together on the same river bank,
       | because the wolf would otherwise sneeze and scare the goat away.
       | 
       | My conclusion was that the LLM solved the riddle using prior
       | knowledge plus creativity, instead of clever reasoning.
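       (A small breadth-first search makes the comparison concrete; this
       is a hypothetical sketch, not from the thread. Dropping the
       wolf/goat constraint, as the allergy rule does, shortens the
       classic seven-crossing solution to five crossings, so a purely
       mechanical solver would in fact treat the modified riddle
       differently.)

```python
from collections import deque

def solve(unsafe_pairs):
    """Shortest river-crossing solution via breadth-first search.

    State: (farmer's bank, frozenset of items on the left bank).
    unsafe_pairs: item pairs that must not be left together unattended.
    Returns the list of crossings; each entry is the item carried
    (or None for an empty crossing).
    """
    items = frozenset({"wolf", "goat", "cabbage"})

    def safe(left, farmer):
        # Only the bank the farmer is NOT on is unattended.
        unattended = left if farmer == "R" else items - left
        return not any({a, b} <= unattended for a, b in unsafe_pairs)

    start = ("L", items)
    goal = ("R", frozenset())
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (farmer, left), path = queue.popleft()
        if (farmer, left) == goal:
            return path
        here = left if farmer == "L" else items - left
        for cargo in [None, *here]:
            new_left = set(left)
            if cargo is not None:
                # Carrying cargo moves it to the other bank.
                (new_left.discard if farmer == "L" else new_left.add)(cargo)
            state = ("R" if farmer == "L" else "L", frozenset(new_left))
            if state not in seen and safe(state[1], state[0]):
                seen.add(state)
                queue.append((state, path + [cargo]))
    return None

# Classic rules: wolf eats goat, goat eats cabbage -> 7 crossings.
print(len(solve([("wolf", "goat"), ("goat", "cabbage")])))  # 7
# Allergic wolf: only the goat/cabbage rule remains -> 5 crossings.
print(len(solve([("goat", "cabbage")])))                    # 5
```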
        
       | drupe wrote:
       | If one is interested in hemisphere theory, including its
       | psychological and philosophical implications, make sure to check
       | out the work of Iain McGilchrist:
       | 
       | https://www.youtube.com/watch?v=3V3_Y_FuMYk
        
       | GonzoBytes wrote:
       | All of this is way above my pay grade. However, there exists
       | this work by Julian Jaynes called The Origin of Consciousness in
       | the Breakdown of the Bicameral Mind:
       | https://ia802907.us.archive.org/32/items/The_Origin_Of_Consc...
       | 
       | Seems pertinent, and now I will try to read it again. Perhaps it
       | will be useful as a reference for others.
        
       ___________________________________________________________________
       (page generated 2025-02-22 23:00 UTC)