[HN Gopher] A landscape of consciousness: Toward a taxonomy of e...
___________________________________________________________________
A landscape of consciousness: Toward a taxonomy of explanations and
implications
Author : danielam
Score : 40 points
Date : 2024-07-01 11:44 UTC (2 days ago)
(HTM) web link (www.sciencedirect.com)
(TXT) w3m dump (www.sciencedirect.com)
| bzmrgonz wrote:
| This is a wonderful project; I had no idea there was so much
| fragmentation in the topic of consciousness. Maybe we should feed
| these writings and concepts to AI and ask it to give us any grand
| unifying commonality among them, if any.
| bubblyworld wrote:
| I would love to be wrong about this, but I don't think anyone
| knows how to do that yet. You're basically asking for automatic
| most-likely hypothesis generation given a set of input data.
| Concepts about consciousness in this case, but you could
| imagine doing the same with scientific data, system traces
| around bugs and crashes, etc. That would be wild!
| russdill wrote:
| It's precisely the type of thing that current LLMs are not
| suited for. They excel at extrapolating between existing
| writings and ideas. They do really poorly when trying to do
| something novel.
| mistermann wrote:
| On their own yes, but as human-like intelligent agents
| running within a larger framework it's a different story.
| superb_dev wrote:
| Just skip all the thinking ourselves and see if some AI can
| come up with plausible-sounding nonsense? I'm not interested.
| dcre wrote:
| You should probably try thinking about it instead.
| cut3 wrote:
| This topic is so interesting. If I were creating a system for
| everything, it seems like empty space needs awareness of anything
| it could expand to contain, so all things would be aware of all
| other things as a base universal conscious hitbox.
|
| Panpsychism seems neat to think about.
| CuriouslyC wrote:
| You don't need empty space. All the processing power can be
| tied to entities, and space emerges from relationships between
| entities.
|
| Want something fun to think about? What if the Heisenberg
| uncertainty principle is basically a function of the
| information capacity of the thing being examined? To make a
| computer analogy, imagine you have 8 bits of information -
| using 6 bits for position leaves 2 for momentum, for example.
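The commenter's bit-budget analogy can be sketched as a toy model. This is not real physics, just an illustration of the trade-off being described; `TOTAL_BITS` and `resolutions` are hypothetical names for the sketch:

```python
# Toy model of a fixed information budget split between position
# and momentum: refining one necessarily coarsens the other.

TOTAL_BITS = 8  # total information capacity of the "examined thing"

def resolutions(position_bits: int) -> tuple[int, int]:
    """Distinguishable position/momentum values for a given split."""
    momentum_bits = TOTAL_BITS - position_bits
    return 2 ** position_bits, 2 ** momentum_bits

# However the budget is split, the product of the two resolutions
# is constant, loosely echoing the fixed lower bound in the
# uncertainty relation: more position precision, less momentum.
for p_bits in range(TOTAL_BITS + 1):
    x_res, m_res = resolutions(p_bits)
    assert x_res * m_res == 2 ** TOTAL_BITS
```

With the 6/2 split from the comment, `resolutions(6)` gives 64 distinguishable positions but only 4 momenta.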
| brotchie wrote:
| Two things I'm absolutely convinced of at this point:
| 1. Consciousness is primitive. That is, interior experience is a
| fundamental property of the universe: any information system in
| the universe that has certain properties has an interior
| experience.
| 2. Within the human population, interior experience varies
| vastly between individuals.
|
| Assertion 1 is informed through reading, introspection,
| meditation, and psychedelic experience. I've traversed the
| whole spectrum from being a die-hard physical materialist to
| high conviction that consciousness is primitive. I'm not
| traditionally
| panpsychic, which most commonly postulates that every bit of
| matter has some level of conscious experience. I really think
| information and information processing is the fundamental unit
| (realized as certain configurations of matter) and certain
| information systems (e.g. our brain) have an interior
| experience.
|
| Assertion 2 is informed through discussion with others. Denial of
| Chalmers's hard problem doesn't make sense to me; it seems
| logically flawed to argue that consciousness is emergent.
| Interior experience can't "emerge" from the traditional laws of
| physics; it reads like a nonsense argument. The observation that
| folks really challenge this makes me deeply believe that the
| interior experience across humans is not at all uniform. The
| interior experience of somebody who vehemently denies the hard
| problem must be so different from my interior experience that
| the divide can't be bridged.
| tanepiper wrote:
| You've worded my position here too.
|
| 20s - a rabid Dawkins-reading atheist. 40s - I think Dawkins
| is an idiot and my favourite book is "Beelzebub's Tales to His
| Grandson"
| tasty_freeze wrote:
| You don't come off as being a nuanced thinker if those are
| your two positions on Dawkins. I can understand disagreeing
| with him, but calling him an idiot impugns you more than him.
| mistermann wrote:
| Assuming your model of him is correct.
|
| How many videos of him "destroying" theists have you
| watched on TikTok? I've seen 100+, and agree that he's an
| idiot, _amazingly_ so. Watch carefully the words he uses as
| he "proves" his "facts".
| s1artibartfast wrote:
| Have you considered that TikTok may not be a full
| representation of the human being?
|
| It is one thing to say someone spews bullshit on TikTok,
| and another to call them an idiot.
|
| Do you use a purity testing approach to determining
| idiocy?
| mistermann wrote:
| A full representation is not necessary. If a Human has
| errors in any single sentence, they have errors in their
| corresponding model. These details _are the essence of
| the very point of contention_.
|
| > Do you use a purity testing approach to determining
| idiocy?
|
| If one is claiming logical and epistemic superiority, as
| he literally _and explicitly_ does, and _arrogantly_ so
| (followed by roars of applause from the audience), I will
| judge him by those standards. I will also mock him,
| _because he is sooooo dumb_ , while he chastises others
| for the same thing (which he is typically not wrong
| about, to be fair).
|
| Live by the sword, die by the sword.
| s1artibartfast wrote:
| Would you agree that this may make the error of judging
| them by their worst output, and not their best?
| carrozo wrote:
| What are your thoughts on what Donald Hoffman has been
| pursuing?
|
| https://en.wikipedia.org/wiki/Donald_D._Hoffman
| brotchie wrote:
| Compelling.
|
| I really buy his argument about our interior experience being
| a multimodal user interface (MUI) over stimulus from some
| information system. We describe the universe in terms of a 4D
| space-time with forces and particles, but this is really the
| MUI we've constructed (or evolution has constructed) that
| maximizes our predictive power when "actuating" our MUI (e.g.
| interacting with that external system).
|
| I hadn't thought about this much before, and kinda rejected it on
| first reading of Hoffman's work, but think I grok it now.
| Because our internal experience is a MUI, and that MUI (4D
| space time, particles) can't be considered a "true reality",
| it's just an interface, then other conscious entities are
| more "real" than our MUI. That is, the fundamental true
| reality that really matters is other conscious agents (e.g.
| Conscious Realism).
|
| A slightly more wacky theory I like to think about is how
| this intersects with the simulation argument. If our reality
| isn't ring 0 (e.g. there's an outer reality that is actually
| time-stepping our universe), then the conscious interior
| experience we have in our reality may be due to the
| properties of reality in the outer universe "leaking through"
| into our simulation.
|
| This actually aligns well with Hoffman's MUI argument. We
| live in some information processing system. Through evolution
| we've constructed a MUI that we see as 4D space time. But
| this doesn't at all reflect the true reality of our universe
| being a process simulated in the ring 0 reality. Conscious
| Realism then arises because ring 0 reality has properties
| that imbue patterns of information processing with interior
| experience.
| CuriouslyC wrote:
| Assertion 1 is quite weak. The stronger version is that
| consciousness is the mechanism by which the universe processes
| information, and choice (as we experience it) is the mechanism
| by which the universe updates its state. Under this assertion,
| the laws of physics are nothing more than an application of the
| central limit theorem on the distribution of conscious choices
| made by all the little bits of the universe involved in the
| system. This view also implies that space and reality are
| "virtual" or "imaginary" much like George Berkeley suggested
| 300 years ago.
| brotchie wrote:
| I'm starting to buy this argument after rejecting it before
| (primarily through ignorance of the meaning of "consciousness
| is the mechanism by which the universe processes
| information").
|
| Also intersects with Hoffman's argument re: Conscious
| Realism. The only real thing is conscious experience, and
| "reality" as used in common parlance is just a multimodal
| user interface constructed to maximize evolutionary fitness.
| tasty_freeze wrote:
| I've never understood why Chalmers's reasoning is so captivating
| to people. The whole idea of p-zombies seems absurd on its face.
| Quoting the article:
|
| (quote) His core argument against materialism, in its original
| form, is deceptively (and delightfully) simple:
| 1. In our world, there are conscious experiences.
| 2. There is a logically possible world physically identical to
| ours, in which the positive facts about consciousness in our
| world do not hold.
| 3. Therefore, facts about consciousness are further facts about
| our world, over and above the physical facts.
| 4. So, materialism is false.
| (endquote)
|
| Point 2 is textbook begging the question: it imagines a world
| which is physically identical to ours but consciousness is
| different there. That is baking in the presupposition that
| consciousness is not a physical process. Points 3 and 4 then
| "cleverly" detect the very contradiction he has planted and
| claims victory.
| codeflo wrote:
| If you believe that what we describe as "consciousness" is
| emergent from the ideas a material brain develops about itself,
| then it's in fact not logically possible to have a world that
| is physically identical to ours yet does not contain
| consciousness. So indeed, premise 2. sneaks in the conclusion.
|
| To illustrate this point, here's an argument with the same
| structure that would similarly "prove" that gravity doesn't
| cause things to fall down:
|
| 1. In our world, there is gravity and things fall down.
|
| 2. There is a logically possible world where there is gravity
| yet things do not fall down.
|
| 3. Therefore, things falling down is a further fact about our
| world, over and above gravity.
|
| 4. So, gravity causing things to fall down is false.
| patrickmay wrote:
| Well and succinctly put. One would have to be a philosopher to
| be willing to consider p-zombies further.
| amelius wrote:
| > it imagines a world which is physically identical to ours but
| consciousness is different there
|
| So a world where people discuss consciousness but where it does
| not exist? That sounds very implausible.
| codeflo wrote:
| As a thought experiment, imagine we were to scan the position of
| every molecule in the human body to the Heisenberg limit of
| accuracy. Imagine we were to plug the resulting model into a
| physics simulation that models every biochemical and physical
| interaction with perfect fidelity. This is a thought experiment,
| and what I suggest isn't ruled out by physics, so let's assume
| it's technologically possible. Would the simulated being be
| "conscious" in the same way the original human is? Would it
| experience "qualia"?
|
| If you think the answer might be no, then congratulations, you
| actually believe in immaterial souls, no matter how materialist
| or rationalist you otherwise claim to be.
| mistermann wrote:
| Not necessarily; one could just be a pedant.
| vundercind wrote:
| Only holds if whatever hardware that's pretending to be the
| matter can act exactly like the matter without _being_ the same
| thing.
|
| For the distinction, consider the difference between a
| simulation of a simple chemical process in a computer--even a
| perfectly accurate one!--and the actual thing happening. Is the
| thing going on in the computer the same? No, no matter how
| perfect the simulation. It's a little electricity moving
| around, looking nothing whatsoever like the real thing. The
| simulation is _meaning_ that we impose on that electricity
| moving around.
|
| That being the case, this reduces to "if we recreate the matter
| and state exactly, for-real, is that consciousness?" in which
| case yeah, sure, probably so.
|
| This doesn't work if the _thing_ running the simulation
| requires interpretation.
| s1artibartfast wrote:
| Exactly. The parent post does not address the issue of
| _representation_ vs reality.
|
| You can simulate every molecular interaction in a fire, but
| that does not mean the simulation gives off the same heat.
| You can write a perfectly accurate equation for splitting an
| atom, but the equation does not release energy.
| codeflo wrote:
| > You can simulate every molecular interaction in a fire,
| but that does not mean the simulation gives off the same
| heat
|
| It would to a simulated being standing next to the fire.
|
| > You can write a perfectly accurate equation for splitting
| an atom, but the equation does not release energy.
|
| It releases simulated energy inside the simulation.
|
| Every material interaction is simulated. If you believe
| that consciousness can't exist in the simulation, then you
| believe that consciousness is not a material interaction,
| q.e.d.
| vundercind wrote:
| It's some electrons moving around. Any further meaning of
| that is only what we assign to it.
|
| Unless your "computer" is identical matter in the same
| arrangement and state as the original, actually doing
| stuff.
|
| This is why the "what if you slowly simulated the entire
| universe on an infinite beach moving rocks around to
| represent the state? Could anything in it be conscious?"
| thing isn't very interesting to me.
|
| No, you're just shoving rocks around on sand. They don't
| mean anything except what you decide they do. Easy
| answer.
| s1artibartfast wrote:
| >Every material interaction is simulated. If you believe
| that consciousness can't exist in the simulation, then
| you believe that consciousness is not a material
| interaction
|
| I think that is missing the point. You are literally
| changing the material and medium by conducting a
| simulation.
|
| Releasing simulated energy within a simulation is not
| identical to releasing real energy in the real world. The
| former is purely representational, and even a perfectly
| simulated object retains this property, and lack of
| equivalence.
|
| A simulated atom is not a real atom, no matter their
| similarity.
| amelius wrote:
| Ok, next experiment.
|
| Imagine you took a brain and replaced one neuron by a
| transistor (or gate) that performs the exact same function as
| the neuron.
|
| Now replace more and more neurons until all neurons are now
| transistors.
|
| Would the resulting being be conscious and experience qualia,
| like the original did? If not, at what point was there a
| notable change?
|
| https://en.wikipedia.org/wiki/Ship_of_Theseus
| vundercind wrote:
| The same function, down to quantum and gravity etc. effects
| on everything around it, and accepting and reacting to
| same? Yeah probably, but we're back to having to "run" this
| on the same arrangement of actual matter as the original.
|
| [edit] there's an obvious attack on this, but I'll go ahead
| and note my position on it: the whole premise that we can
| do any of this without just _using actual matter the
| ordinary way_ is so far into magical territory that we
| might as well ask "what if leprechauns simulated it?" or
| "what if god imagined the simulation?"--well ok, sure, I
| guess if magic is involved that could work, but what's the
| point of even considering it?
|
| "What if a miracle occurred?" isn't a rebuttal to the
| position that consciousness as we know it likely can't be
| simulated by simulating physics, because you can rebut
| anything with it. Its admission to a discussion is the same
| as giving up on figuring out anything.
| BobbyJo wrote:
| > If you think the answer might be no, then congratulations,
| you actually believe in immaterial souls
|
| If you scan a body of water, and simulate it perfectly, the
| resulting simulation will not be wet. You can't separate a
| material process from the material _completely_. Consciousness
| may be a result of carbon being a substrate in the
| interactions. It might be because the brain is wet when those
| processes happen. There is plenty of room between believing a
| perfect computational simulation is not conscious and believing
| in immaterial souls.
| codeflo wrote:
| > If you scan a body of water, and simulate it perfectly, the
| resulting simulation will not be wet.
|
| It will be wet to the simulated being that's swimming in it.
|
| > Consciousness may be a result of carbon being a substrate
| in the interactions.
|
| Are you conscious? If so, how did you find out that you're
| made from actual carbon atoms and not simulated ones?
| BobbyJo wrote:
| > It will be wet to the simulated being that's swimming in
| it.
|
| Which has entirely different qualia from ours, the beings
| whose consciousness we are trying to unravel.
|
| > Are you conscious?
|
| That's the big question.
|
| > If so, how did you find out that you're made from actual
| carbon atoms and not simulated ones?
|
| If I assume I'm conscious, whatever my atoms are, they are
| the atoms of concern with regard to said consciousness.
| amelius wrote:
| What if you forked the simulator? Would there be two
| consciousnesses experiencing qualia?
___________________________________________________________________
(page generated 2024-07-03 23:01 UTC)