[HN Gopher] What is it like to have a brain? Ways of looking at ...
___________________________________________________________________
What is it like to have a brain? Ways of looking at consciousness
Author : Hooke
Score : 80 points
Date : 2022-10-11 20:48 UTC (3 days ago)
(HTM) web link (lareviewofbooks.org)
(TXT) w3m dump (lareviewofbooks.org)
| seydor wrote:
| It's time to upgrade this folk philosophy of mind and its
| obsession with ever-elusive "consciousness" with theories of
| intelligence built on neuroscientific, cellular and
| underpinnings.
| meroes wrote:
| Intelligence is not the target, I take it. The target is
| self-experience/qualia.
| seydor wrote:
| intelligence creates and experiences all that
| agumonkey wrote:
| Most of the time "intelligence" lags behind experience; I
| doubt it's a superset.
| lifefeed wrote:
| Aeon just had a great article on consciousness, "Seeing and
| somethingness" https://aeon.co/essays/how-blindsight-answers-the-
| hard-probl...
|
| It argues that consciousness evolved out of sensation, where we
| developed an "inner self" to predict how sensations would affect
| us, and it's that inner self that became our consciousness.
|
| Don't miss out on the comments section, the author answers a lot
| of questions in there.
| jiggywiggy wrote:
| These theories just don't even try to prove that the brain
| creates consciousness. They just assume it's the case.
| PaulDavisThe1st wrote:
| Like Dennett's book "Consciousness Explained", the Aeon article
| falls into the category of explaining what we are conscious
| _of_, not how it is possible to be conscious of anything at
| all. It does not really tackle Chalmers' "hard problem of
| consciousness", despite the subtitle.
| gjm11 wrote:
| I would be interested in your response to the following
| thought experiment:
|
| After years of heroic work and ingenious insights, along with
| a lot of technological progress, we "solve" the "easy"
| problem of consciousness in the following strong sense (one
| I admit is implausible in any foreseeable future):
|
| (Note: I'm going into quite a lot of detail because I think
| that when people say things like "understanding how the
| machinery of consciousness works would not tell us anything
| about how it is possible to be truly conscious of anything at
| all" they are commonly underestimating what it would actually
| mean to understand how the machinery of consciousness works.)
|
| 1. There is a scanning device. You can strap yourself into
| this for half an hour, during which time it shows you images,
| plays you sounds, asks you to think particular kinds of
| thoughts, etc., all the while watching all your neurons, how
| they connect, which ones fire when under what circumstances,
| etc. It tries to model your peripheral as well as central
| nervous system, so it has a pretty good model of how all the
| bits of your body connect to your brain, and of how those
| bits of body actually operate.
|
| 2. There is a simulator. It can, in something approximating
| real time, pretty much duplicate the operation of a brain
| that has been scanned using the scanning device. It also has
| enough of a simulation of the body the brain is part of that it can
| e.g. provide the simulated brain with fairly realistic
| sensory inputs, and respond fairly realistically to its motor
| outputs. There's a UI that lets you see and hear what the
| simulated person is doing.
|
| 3. Researchers have figured out pretty much everything about
| the architecture of the brain, and it turns out to be
| reasonably modular, and they've built into the simulator a UI
| for looking at the structure, so that you can take a running
| simulation and explore it top-down or bottom-up or middle-
| out, either from the point of view of brain structure or that
| of cognition, perception, etc.
|
| 4. So, for instance, you can do the following. Inside the
| simulation, arrange for something fairly striking to happen
| to the simulated person. E.g., they're having a conversation
| with a friend, and the friend suddenly kicks them painfully
| in the shin. Some time passes and then (still, for the
| avoidance of doubt, in the simulation) they are asked about
| that experience, and they say (as the "real" person would)
| things like "I felt a sharp pain in my leg, and I felt
| surprised and also a bit betrayed. I trust that person less
| now." And you can watch the simulation at whatever level of
| abstraction you like, and observe the brain mechanisms that
| make all that happen. E.g., when they get kicked you can see
| the flow of neuron-activation from the place that's kicked,
| up the spinal cord, into the brain; the system can tell you
| "_these_ neurons are active whenever the subject feels
| physical pain, and sometimes when they feel emotional
| distress, and sometimes when they remember being in pain"
| and "_this_ cascade of visual processing is identifying the
| face in front of them as that of Joe Blorfle, and you can see
| here how _these_ neurons associated with Joe Blorfle are
| firing while the conversation is happening, and when the pain
| happens you can see how _these_ connections between the Joe
| Blorfle neurons and the pain neurons get strengthened a bit,
| and later on when the subject is asked about Joe Blorfle and
| the Joe Blorfle neurons fire, so do the pain ones. And you
| can see _this_ big chunk of neural machinery here is making
| records of what happened so that the subject can remember it
| later; here's how the order in which things happen is
| represented, and here's how memories get linked up to the
| people and things and experiences involved, etc. And when
| he's asked about Joe Blorfle, you can see _these_ bits of
| language-processing brain tissue are active. These bits here
| are taking input from the ears and identifying syllable
| boundaries, and these bits are identifying good candidates
| for the syllable being heard right now, and these other bits
| are linking together nearby syllables looking for plausible
| words, with plausibility being influenced by what notions the
| subject is attending to, and these other bits are putting
| together something that turns out to be rather like a parse
| tree, and, and, and ...".
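|
| (That "connections get strengthened" step is essentially
| Hebbian learning. A toy sketch in Python, with made-up
| numbers and unit names:)
|
|     # Hebbian toy: units that fire together get a stronger
|     # link, so later activity in one drives the other harder.
|     weights = {("joe_blorfle", "pain"): 0.5}
|
|     def co_activate(a, b, rate=0.25):
|         weights[(a, b)] += rate  # strengthen on co-firing
|
|     co_activate("joe_blorfle", "pain")  # the kick happens
|     print(weights)  # {('joe_blorfle', 'pain'): 0.75}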
|
| 5. That is: the linkage -- at least in terms of actual
| physical goings-on within the brain -- between being kicked
| in the shin by Joe Blorfle on Thursday, and expressing
| resentment when asked about Joe Blorfle on Saturday, is being
| accurately simulated, and the structure of what's being
| simulated is understood well enough that you can see its
| "moving parts" at higher or lower levels of abstraction.
|
| OK, so that's the scenario. I reiterate that it would be
| wildly optimistic to expect anything like this any time soon,
| but so far as I know nothing in it is impossible in
| principle.
|
| Question 1: Do you agree that something along these lines is
| possible in principle?
|
| [EDITED to add:] For the avoidance of doubt, of course it
| might well turn out that some of the analysis has to be done
| in terms not of particular neural "circuits" but e.g. of
| particular patterns of neural activation. (Consider a
| computer running a chess-playing program. You can't point to
| any part of its hardware and say "that bit is computing king
| safety", but you _can_ explain what processes it goes through
| that compute king safety and how they relate to the hardware
| and its states. Similar things may happen in the brain. Or
| very different things that likewise mean that particular bits
| of computation aren't always done by specific bits of brain
| "hardware".)
|
| Question 2: If it happened, would you think there is still a
| "hard problem" left unsolved?
|
| Question 3: If you think there _would_ still be a "hard
| problem" left unsolved, is that because you think someone in
| this scenario could imagine all the machinery revealed by the
| simulator operating perfectly without any actual qualia?
|
| (My answers, for reference: I think this is possible in
| principle. I think there would be no "hard problem" left,
| which makes me disinclined to believe that even now there is
| a "hard problem" that's as completely separate from the
| "easy" problem of "just" explaining how everything works as
| e.g. Chalmers suggests. I think that anyone who thinks they
| can imagine all the processes that give rise to (e.g.) a
| philosopher saying "I know how it feels for me to experience
| being kicked in the shin, and I think no mere simulation
| could truly capture that", in full detail, without any qualia
| being present, is simply fooling themselves, in the same way
| as I would be fooling myself if I said "I can imagine my
| computer doing all the things it does, exactly as it does,
| without any actual electrons being present".)
| petemir wrote:
| One thing that surprised me about 'A Thousand Brains: A New
| Theory of Intelligence' by Jeff Hawkins was how many
| different types of computer simulations already exist that
| approximate parts of your experiment's steps.
| layer8 wrote:
| People fundamentally disagree about whether there is anything
| besides the "of". My personal introspection tells mere there
| is only "of", because what I perceive as my consciousness,
| is, by virtue of being a perception, in the end just an "of"
| itself. There is some sort of recursivity involved in the
| whole construct of consciousness, which makes it hard to get
| a grasp on. In some sense, consciousness is just that, being
| the perceptor and the perceptee at the same time. This
| recursivity or fixed-pointness will probably be key to a
| precise understanding of the whole shebang.
| PaulDavisThe1st wrote:
| The argument you're making is just eliding the "hard
| problem".
|
| We can trivially imagine an electronic circuit that
| registers different current levels when exposed to red or
| blue light. Nobody (that I'm aware of) suggests that there
| is an experience within the electronic circuit, despite the
| fact that it "senses" different frequencies in the
| electromagnetic spectrum. The circuit is qualia-free.
|
| You, on the other hand, are qualia-full. Whether the
| experience you have when a red object is in front of you
| derives purely from your optical sensory apparatus, or if
| it derives from a self-reflective awareness that your brain
| is dealing with "red" really makes no difference to the
| central point: _you have an experience_.
|
| We have no explanation for how there can be
| experiences/qualia, and possibly, because they are either
| extremely or completely subjective, we may never have any means
| of studying their existence.
| zozbot234 wrote:
| The thing about experiences/qualia is that they aren't
| just subjective, but momentary. Any sense of permanence,
| continuing identity or indeed of experiences being
| "about" something in particular is ultimately linked to
| our memory, which is not part of the "hard problem"
| itself; it fits solidly within the structure of causal
| relations we usually call "reality", or just "the
| physical universe". So the hard problem is hard, but it's
| also very tightly constrained; it "only" has to explain
| tiny fragments of subjective experience that float in and
| out of existence.
| namero999 wrote:
| Recalling a memory or thinking about the future or
| whatever are still and always experiences in the now.
| You are not getting out of it.
| zasdffaa wrote:
| We had a heatwave this summer in the UK, weeks of it. I
| loved every moment. Thus I refute your 'momentary'. I've
| also had decades of pain and while it might sink lower in
| your perceptions, it's always there while you're awake.
| rogerclark wrote:
| This argument is not "eliding" the hard problem. This
| argument is saying that Chalmers' hard problem does not
| actually exist.
|
| We have many explanations for what people describe as
| "the hard problem". But nobody who believes in "the hard
| problem" accepts these explanations, which have been
| given for decades by philosophers like Dennett.
|
| There is no way to reconcile your view, that there IS a
| hard problem and that no progress has been made toward
| solving it, with our view, that there is no such problem,
| and that it does not need solving.
| layer8 wrote:
| The circuit you're describing registers the external
| light impulses, but it doesn't experience its own
| registering of those impulses.
|
| What I'm imagining is that the registering mechanism
| would itself have sensors placed on its wires that
| measure the current levels on those wires, and have the
| measurements of those sensors as additional inputs into
| the cognition automaton. And then have sensors on the
| gates and wires of that automaton, which again feed as
| additional inputs into that same automaton. Add memory
| and timing delays. And then multiply all that some
| million times to get to the level of complexity of our
| inner mind, of our sensory and movement apparatus, and of
| our mental models.
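|
| (A minimal sketch of that setup, assuming a toy automaton --
| all names invented -- whose sensor readings of its own
| internal state feed back in as extra inputs on the next
| step:)
|
|     # Toy self-sensing automaton: each step takes an external
|     # input PLUS a "sensor reading" of the automaton's own
|     # previous internal state, and keeps a memory trace of
|     # those self-readings.
|     class SelfSensingAutomaton:
|         def __init__(self):
|             self.state = [0.0, 0.0]  # internal "wires"
|             self.memory = []         # trace of self-readings
|
|         def step(self, external_input):
|             self_reading = list(self.state)  # sensors on wires
|             self.memory.append(self_reading)
|             # Next state depends on the external world AND on
|             # the automaton's reading of itself.
|             self.state = [
|                 0.5 * external_input + 0.5 * self_reading[0],
|                 0.5 * self_reading[0] - 0.25 * self_reading[1],
|             ]
|             return self.state
|
|     a = SelfSensingAutomaton()
|     for x in [1.0, 0.0, 0.5]:
|         print(a.step(x))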
|
| When introspecting myself, I don't see or feel or think
| anything that couldn't be explained by such a setup. The
| different textures (qualia, if you will) of what I
| perceive in my mind have a certain complexity, but that
| is merely quantitative and structural, not qualitative.
|
| I therefore simply do not agree that there is a hard
| problem of consciousness to begin with, in the usually
| given sense. I don't agree that there is a qualitative
| difference between the perception of "qualia" and other
| perceptions. "Qualia" are just a perception of
| representations and processes happening in my brain. I
| see no puzzling mystery that would require solving.
| PaulDavisThe1st wrote:
| No, this is totally missing the point again.
|
| It is not a question of what the sensor detects. It is a
| question of how it is possible for there to be an
| experience when sensing occurs.
|
| Your introspection is simply pointing out the likely
| nature of what you experience, and I actually agree
| (tentatively) with the idea that most of our conscious
| experience is rooted in a self-reflecting system. But
| none of that has any bearing on how there can be any
| experience at all.
| layer8 wrote:
| What you call "experience" for me is just sensing of
| internal information processing, of internal
| representation. This may need some dedicated
| introspection to fully realize. You're making a
| distinction which I believe is a mirage. It's just a
| special attribution we make in our minds to those inner
| perceptions. If you look closely, it vanishes.
|
| Think about it: How do you know that you have what you
| call an "experience"? It's because you perceive it in a
| particular way. So, at some point, this "experience"
| quality is an _input_ to your cognitive process, and you
| match it to some mental models you have about such
| inputs. I adjusted my mental model to think of those
| "qualia" perceptions as sensing parts of the internal
| workings of my brain. It's a side-effect of all the
| processing that is going on, if you will.
| goodthenandnow wrote:
| > Nobody (that I'm aware of) suggests that there is an
| experience within the electronic circuit
|
| There is a theory, Integrated Information Theory (IIT)
| [0], which argues exactly that [1].
|
| [0]: https://en.wikipedia.org/wiki/Integrated_information_theory
|
| [1]: https://www.journals.uchicago.edu/doi/10.2307/25470707#_i2
| PaulDavisThe1st wrote:
| I'm familiar with IIT. I don't believe it suggests that a
| photosensor has qualia.
| lordnacho wrote:
| Aren't the parent and related answers pointing towards the
| idea that there's a level of complexity above which you
| need to be to see those qualia? A single little circuit
| might not be the one that is conscious, but a bunch of
| them connected together might exhibit patterns that we
| could call experience?
|
| Maybe the analogy is that a single DNA molecule is not a
| living thing but that molecule along with a bunch of
| others is?
|
| Seems like the problem arises in pinning down what level
| of complexity is required.
| zasdffaa wrote:
| We don't know what qualia _are_, so it's an acceptable
| possibility to me (unprovable, mind) that such a circuit
| may 'experience' something. It would be unutterably basic
| if it happened, nonetheless I'm ok with that.
|
| There's also a view that consciousness is intrinsic to
| everything (ah, here you go
| https://www.scientificamerican.com/article/does-
| consciousnes...) which is a cheap, cheesy and IMO totally
| unacceptable way to 'explain' consciousness and I reject
| that as an _explanation_, but it doesn't make it
| actually wrong.
|
| Edit: missed your last line "We have explanation for how
| there can be experiences/qualia" - I'm surprised, you can
| explain it, got any links?
| PaulDavisThe1st wrote:
| Sure, pan-qualism/pan-psychism may well turn out to be a
| respectable position.
|
| [ fixed the missing "no" in the GP ]
| namero999 wrote:
| Panpsychism is untenable because it trades the hard
| problem of consciousness for the composition problem. The
| only consistent and coherent game in town is analytical
| idealism.
| zasdffaa wrote:
| It might be a piss-poor explanation, because it doesn't
| explain anything (it presumes its conclusion), but that
| doesn't make it wrong.
|
| And if you throw in phrases like "composition problem"
| and "analytical idealism" the fer fuck's sake provide
| some simple explanation or something.
| TaupeRanger wrote:
| Just as handwavy as any other explanation of consciousness. I
| can make an electronic device that runs a Python program that
| predicts how input affects the device. That doesn't make the
| device conscious.
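|
| (For concreteness, a hypothetical sketch of that trivial
| "self-model" -- all names invented. The device predicts its
| own next reading perfectly, and nobody would call it
| conscious:)
|
|     # A device that models itself: predict() is an internal
|     # copy of the very rule apply() uses, so the device
|     # "knows" exactly how an input will affect it.
|     class Device:
|         def __init__(self):
|             self.reading = 0.0
|
|         def predict(self, inp):  # the device's self-model
|             return 0.9 * self.reading + 0.1 * inp
|
|         def apply(self, inp):    # what actually happens
|             self.reading = 0.9 * self.reading + 0.1 * inp
|             return self.reading
|
|     d = Device()
|     for x in [1.0, 2.0, 3.0]:
|         print(d.predict(x), d.apply(x))  # always equal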
| sdht0 wrote:
| A sufficiently advanced such Python program will probably
| actually be conscious.
| [deleted]
| namero999 wrote:
| Of course not. And sorry if I've missed the sarcasm :)
| gjm11 wrote:
| For anyone who's wondering why the strange title:
| https://en.wikipedia.org/wiki/Being_and_Nothingness
| [deleted]
| oldstrangers wrote:
| Just blindly bought this book because I think consciousness is
| one of the most fascinating unexplained aspects of our universe.
| 53r3PB9769ba wrote:
| Maybe I'm a p-zombie then, because I just don't get it.
|
| I've spent hours upon hours thinking about thinking and
| observing my own thought processes and I don't see anything
| that couldn't be explained scientifically.
| namero999 wrote:
| One must explain how the jump from quantities to qualities
| works.
| notfed wrote:
| For starters:
|
| - Why is there something rather than nothing?
|
| - Does a universe with no one in it to observe it count as
| something or nothing?
|
| Then, imagine we're building an AI and want to know whether
| it's reached our level or not:
|
| - How can we determine whether the AI experiences qualia the
| same (or similar) way we do, and isn't lying?
|
| - Where to draw the line between conscious being and
| computer?
| flockonus wrote:
| I wish I shared the appreciation... would you be able to describe
| what is fascinating about it?
| namero999 wrote:
| It's the ultimate riddle.
| cscurmudgeon wrote:
| Here is a good overview of the problem (and the controversy):
|
| https://en.m.wikipedia.org/wiki/Hard_problem_of_consciousnes.
| ..
| steve_john wrote:
| This is a reminder that, in the end, House's book is not about
| consciousness -- it is about a set of ways of looking at it.
| agumonkey wrote:
| Maybe we have two systems: an imaginary layer and an accepted
| reality layer. Dreams happen in the first one, experience in
| the other. Mania happens when the former leaks into the
| latter.
| andirk wrote:
| "How rare and beautiful it is to even exist." --some song lyric
| ryeights wrote:
| Can't believe / How strange it is to be anything at all.
| uoaei wrote:
| Consciousness vs awareness vs sentience are terms that really
| need some society-scale effort to nail down what we mean by
| each. The conversation circles round and round because many
| folks talk past each other or interpret discussions in ways that
| the writer didn't intend. (I'm not saying the answer is available
| today if only we solve this dialectical issue.)
|
| Philosophers of consciousness define "consciousness" as
| "phenomenological experience" in the barest, most unqualified
| sense, ie, the experience of "yellow" when photons of wavelength
| ~580nm strike a visual sensory organ of some kind of cognitive
| system.
|
| Note that the above does not automatically imply that the
| experience is _understood_ or even _recognized_. A lot of
| armchair philosophers and intellectual hobbyists conflate the
| term "consciousness" with the notion of having some kind of
| mental model through which to comprehend the experience (what I
| call "awareness"), or an understanding of the dichotomy between
| self vs environment (what I call "sentience", ie, "self-
| awareness").
|
| Acting through anthropocentrism, it is easy to assume that the
| three are inextricable, but I don't think that perspective is the
| way forward toward understanding of consciousness per se.
| nickmain wrote:
| "Metacognition" is a better term for what many refer to as
| consciousness.
| gbro3n wrote:
| What I rarely see or hear articulated well enough, and am not
| even sure that I can articulate myself, are questions around
| why _I_ have consciousness. I understand the reasons why a
| body might develop metacognition, and how it's advantageous
| for a being to be aware of its thoughts. But none of this
| explains why my body is attached to _this_ consciousness and
| not another. 'Experience' is the key term, I feel, when the
| phenomenal aspect of consciousness is discussed, but many
| don't understand this viewpoint and attempt to explain it
| away as something reducible or inevitable.
| layer8 wrote:
| If you accept that certain bodies have metacognition, then
| this arguably predicts that each body's metacognition will
| perceive itself (the metacognition) and the body as two
| separate but connected entities. That is, the theory would
| predict your own _perception_ that "you" are separate from
| your body. But it is a mere perception, because the
| metacognition "machine" (within the brain) is physically part
| of the body, and hence inherently bound to it, even if its
| own internal perception differs from that.
| comfypotato wrote:
| You sound like you'd be into the idea of "qualia" if you're not
| already aware.
| dqpb wrote:
| > need some society-scale effort to nail down
|
| I think a good approach would be for people to build things
| that exhibit consciousness according to whatever their model is
| and claim "this is conscious". Then let people debate whether
| it is or not.
| uoaei wrote:
| There is the notion of panpsychism: that consciousness
| defined in the basic sense is extant everywhere, all the
| time, in many varied forms and scopes. By the definitions
| above, awareness would be restricted to those systems which
| could reasonably be considered "cognitive", and sentience
| would belong only to those who can conceptualize "cogito ergo
| sum".
| csours wrote:
| I wonder how this compares with 'The Society of the Mind'
| comfypotato wrote:
| Right off the bat it seemed to be saying that experts have
| changed their tone recently, saying "we're further from
| understanding consciousness than we thought we were". It
| never goes on to elaborate that point.
|
| Great book review. If I had more time, I would snap up the book
| immediately. The review left me wondering if the book elaborates
| on the above ^^^. I might make the time to read it.
| Hemospectrum wrote:
| > Right off the bat
|
| This might not be a deliberate reference to Nagel (as mentioned
| in the article) but at least it's thematically appropriate.
| comfypotato wrote:
| Perhaps. Or a strategy of the author to keep me reading until
| the end (it worked!)
| nsxwolf wrote:
| I am glad, because for years the prevailing attitude was that
| there's nothing special or interesting about consciousness at
| all, that it doesn't really exist, it's an illusion, etc.
|
| I don't think we'll ever be able to fully explain it in
| scientific terms, because not everything is in the realm of
| scientific knowledge.
| notfed wrote:
| What scares me is that some leading AI researchers hold this
| view (that consciousness in general is nothing special) and
| it makes them come across as unempathetic: as if all
| conscious beings, meat or AI, are simply computers; pain is
| just some kind of computation; and therefore it's silly to
| even discuss taking precautions to prevent AI suffering.
| opportune wrote:
| That's because consciousness is not well-defined and might
| as well be woo. The verbiage around it is the same as for a
| "soul".
|
| Define it rigorously and show a physical basis and you
| might have something to work with.
___________________________________________________________________
(page generated 2022-10-14 23:00 UTC)