[HN Gopher] Making the hard problem of consciousness easier
___________________________________________________________________
Making the hard problem of consciousness easier
Author : hheikinh
Score : 48 points
Date : 2021-05-29 06:15 UTC (16 hours ago)
(HTM) web link (science.sciencemag.org)
(TXT) w3m dump (science.sciencemag.org)
| callesgg wrote:
| While arguably unfalsifiable, Joscha Bach's explanation of
| consciousness covers all the bases for me.
|
| It is a logical account that explains consciousness without the
| need for magic. Whether it covers what you want from an
| explanation of consciousness, I can't tell.
|
| That it is not more widely accepted seems strange, but it is
| fairly new, and people in philosophy are famously slow to
| change, so I guess it makes sense.
| bobmaxup wrote:
| Why is it always the same people who expound the "hard problem"
| of consciousness? It is tiring to see authors like Koch on
| everything.
| martingoodson wrote:
| I agree. This quasi-mystical framing seems unlikely to bear
| scientific fruit.
|
| Is consciousness a clearly defined term?
| mellosouls wrote:
| This "quasi-mystical framing" by Chalmers was a major
| contributor to recharging and refocusing the contemporary
| philosophical and scientific attack on the problem of
| consciousness.
| martingoodson wrote:
| Great. Could you tell us what scientific fruits have
| resulted?
| gizajob wrote:
| Philosophy isn't science.
| martingoodson wrote:
| The article we are both commenting on is in a scientific
| journal and is concerned with scientific research into
| consciousness.
|
| I think it's appropriate to talk about scientific results
| here.
| mellosouls wrote:
| I just have.
| TheOtherHobbes wrote:
| Who or what is asking that question?
| martingoodson wrote:
| Well, I typed it, obviously. But my existence doesn't say
| anything about whether a 'hard problem' of consciousness is
| a useful way to think about the human mind.
| nsomaru wrote:
| Is it that obvious? Who is this "I" you speak of?
| martingoodson wrote:
| The history of Western philosophy has amply demonstrated
| that investigating the grammar of language is not a
| useful way to find out how the world works.
| mooseburger wrote:
| That isn't about grammar. Presumably, this "I" is real.
| Can you find it in the world? If you say it's the brain,
| there are only elementary particles there, same as
| everything else. We see no such "I" there. So where is
| this "I"?
| tsimionescu wrote:
| By the same logic, you could claim that a computer that
| is calculating Pi is not actually calculating Pi:
| Presumably, this computation is real, but where is it in
| the world? If you claim that it's in the microprocessor,
| there are only elementary particles there, same as
| everything else. There's no computation there. So where
| is the computation?
|
| Of course, the rebuttal is very simple: the computation
| is actually in the microprocessor, and consciousness is
| actually in the brain. They are of course made of
| particles (or strings or fields or whatever the ultimate
| building block may be) just as everything else is, or are
| one interpretation/structure of those particles.
|
| And note that the fact that interpretation entails an
| interpreter does not make my argument circular: just as
| you can write a computation that detects computation in a
| microprocessor, you can have one consciousness
| interpreting the same sort of thing as another
| consciousness. Similarly, the von Neumann numeral 2 is an
| interpretation of the set {{}, {{}}} and vice versa (that
| is, the physical process would be isomorphic to
| consciousness).
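|
| (A minimal Python sketch of that numeral example -- my own
| illustration, not from the thread: the von Neumann encoding,
| where "being the number 2" is just an interpretation imposed
| on the structure {{}, {{}}}:)
|
|     # Von Neumann naturals: 0 = {}, and n+1 = n U {n}.
|     # frozenset stands in for sets, which aren't hashable.
|     def encode(n):
|         s = frozenset()
|         for _ in range(n):
|             s = s | frozenset([s])
|         return s
|
|     def interpret(s):
|         # The numeral has no extra ingredient beyond structure:
|         # the von Neumann numeral n has exactly n elements.
|         return len(s)
|
|     two = encode(2)   # {{}, {{}}}
|     assert interpret(two) == 2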
| martingoodson wrote:
| 'Where' is it? Where is homeostasis? Where is the immune
| system?
|
| As tsimionescu suggested, a spatial metaphor is not the
| only way to understand phenomena.
| mistermann wrote:
| I would say it is more that Western philosophy _suggests_
| this, or that it _has not yet discovered_ a way in which it
| _is_ useful.
| ganzuul wrote:
| Fine, but this is all I got:
|
| f(a,a)
|
| f(a,b)
|
| It's called 'hard' for a reason. :p
| jdonaldson wrote:
| It's probably easier to think of consciousness as "learned". That
| is, we have these brains that can extract patterns from noisy or
| sparse inputs, and the notion of a "self" is just something they
| learned to recognize. The rest of the "common sense" that guides
| our course through the world is basically just adaptation and
| learning how to preserve that self in a given environment.
|
| It's interesting to think of situations where the self becomes
| subordinate... family, sex, certain types of anger, etc. There
| are some old and deep patterns in the brain that can override the
| control of "self"... and they roughly correspond to very
| primitive parts of the brain that govern simpler and essential
| fight or flight mechanisms.
| codeulike wrote:
| _adversarial collaboration rests on identifying the most
| diagnostic points of divergence between competing theories,
| reaching agreement on precisely what they predict, and then
| designing experiments that directly test those diverging
| predictions._
|
| omg why has no-one thought of doing this before
| codeflo wrote:
| Not strictly related to the article, but I've yet to be convinced
| that most of the talk around this supposed "hard problem" is
| anything more than an attempt to reintroduce Cartesian dualism in
| more scientific-sounding terms. "Quantum consciousness" and
| whatnot.
| nahuel0x wrote:
| But Cartesian dualism (and the notion of souls) has its roots in
| the very real fact of subjective experience. You feel; how do you
| prove that others feel too? The hard problem is real.
| mabub24 wrote:
| A lot of philosophical work has been done explicitly against
| Cartesian dualism. Wittgenstein's extremely influential
| "Philosophical Investigations" is probably the biggest
| example. The entire book is dedicated to dissolving the
| problem posed by dualism, and presents a conceptual approach
| that makes the "hard" problem of consciousness, or the
| problem of understanding someone's feelings, effectively not
| really a problem at all. Much of the problem is more a result
| of conceptual confusion than anything else. The [Private
| Language
| Argument](https://plato.stanford.edu/entries/private-
| language) is probably the most famous. Its result can also be
| extended to pain, feelings, and vision. Fundamentally private
| subjectivity is largely a misconception. It's like trying to
| use a currency that has no purchasing power. Insistence on
| subjectivity and subjective experience is more productively
| understood as "personal" rather than as "private".
| benlivengood wrote:
| The hard part of consciousness is drawing the line left by the
| dissolution of Cartesian dualism. Some matter is not conscious,
| other matter is. How finely can we draw the boundaries between
| the two? Dualism drew a metaphysical line between matter and
| spirit but we have to draw a manifold around and through a
| physical brain to locate conscious experience.
| archibaldJ wrote:
| The problem I have with these definitions (and the accompanying
| theories) is that they are not practical. At best they are fun
| abstractions to wrap your head around; at worst, pretentious
| and misguided.
|
| To advance the field of consciousness, I believe at the current
| stage we should always treat consciousness as a blackbox, and ask
| questions around it with practical engineering implications.
| Perhaps two categories of questions:
|
| Category 1: qualia/perception
|
| These would be human-centric questions related to experiments
| with altered states of mind. Here is one example:
|
| Why is it that under the effect of THC, certain stimuli and
| actions [1] can reliably slow down the perception of time, while
| certain stimuli (e.g. the soft humming of the aircon) tend to
| normalize time perception for some individuals?
|
| [1]: e.g. start the stopwatch app, hold the phone at arm's
| length, stare at the millisecond digit, and slowly move your
| phone closer to you.
|
| What can we say about the neural activations (and subsequently,
| oscillations) of individuals who are able to alter time
| perception more easily (even in the presence of normalizing
| stimuli), and how can this ability be learnt or unlearnt?
|
| Understanding the above phenomenon could be used to design the
| calibration phase of a BCI device, so that preprocessing, signal
| processing, etc., can be customized to deliver a smoother user
| experience.
|
| Category 2: data/computation
|
| One of the key characteristics of biological systems that
| exhibit consciousness appears to be a cybernetics-oriented
| ability to orchestrate (often function-specific) modules (e.g.
| in human brains) to accomplish (often highly abstracted?) tasks.
|
| Perhaps we can take inspiration from mindfulness practices (and
| other consciousness-centric activities), study the brain and
| how its modules work together, and come up with architectures,
| models, etc. (going one step above spiking neural networks?)
| that mimic the cybernetic nature of consciousness, for the
| integration of loosely coupled things, e.g. in transfer
| learning, as well as in systems that involve many feedback
| loops.
|
| Perhaps such biomimetics would help us get a better idea of how
| type- (and category-) theoretical aspects of things can be
| introduced to engineer highly fault-tolerant and energy-
| efficient systems that employ millions of pretrained models
| like GPT-3 at the lower level and are constantly self-learning
| for general-purpose tasks.
| tgv wrote:
| It sounds premature to me. "Big science" projects (CERN, the
| Human Genome Project, etc.) were only possible and sensible
| because the subject matter was well understood, and getting
| more information about it required apparatus and manual labor
| beyond the reach of a normal lab.
|
| The consciousness problem, however, is very poorly understood.
| The article contains some hand-waving pointing at extremely
| large groups of neurons, jumping to irrelevant details such as
| their "anatomical footprint." But even the function of small
| groups of neurons is not understood, nor the interaction between
| them. How "big science" can get meaningful results then is not
| clear to me.
|
| > and change the sociology of scientific practice in general
|
| Right then.
| midjji wrote:
| Is it hard, though? Or is the hard part ethics, i.e.
| personhood? Because the terms are conflated, consciousness
| becomes hard: you cannot accept what consciousness is without
| also needing to define ethics. Drop the idea that consciousness
| is sufficient or required for personhood in favor of something
| more behaviourally consistent, like cuteness or power, and
| things become clearer.
|
| There is a part of you which simulates social interaction by
| learning models of various other agents whose existence it has
| inferred. As can be expected from something which looks for
| agents based on indirect clues, we know this part does struggle
| with accidentally assigning agency to things which clearly lack
| consciousness, e.g. that damned sharp rock you stepped on
| twice. This part of you can simulate only a finite number of
| such agents at a time, meaning it will focus, as a whole, on
| being able to predict the actions of the agents most often
| observed. It is also why we would expect it to collapse groups
| of people you only interact with as a group into a "them". It
| is also very common that the most significant agent to simulate
| is you. Hence one of the models being simulated is you. This is
| what generates the perception of consciousness, and why it is
| you yet separate. It predicts the cognitive bias of mind-body
| duality, yet maintains the perception of consciousness. A part
| of you is constantly trying to explain your own actions, but
| critically, while we would expect it to be good at providing a
| socially acceptable explanation, we do not expect it to be all
| that good at predicting what you will actually do, or even at
| explaining why you did something. See split-brain examples,
| https://www.youtube.com/watch?v=wfYbgdo8e-8&ab_channel=CGPGr....
| It also makes the prediction that it should be possible to
| damage this part of the brain and lose the sensation of
| consciousness yet retain primary function as a human. Which
| raises no ethical problems, as the person still remains cute.
|
| Further, when predicting/explaining the actions of the modelled
| agents, this social simulator is fairly robust, but it can have
| chaotic points, i.e. points where imperceptibly tiny differences
| in the inputs result in drastically different outcomes, and the
| model/language has a name for these. When the social simulator
| concludes that such a point exists, we call these points
| choices (or, once made, decisions), and we do so regardless of
| our awareness that the agent is a machine or not, as in "Deep
| Blue chose to move its knight instead of the queen", or "you
| chose to accept/disbelieve this". This is the reason why one
| person will call what they did a choice while others may or may
| not. It is why one person can know that person X will do y and
| be right, while person X thinks they are choosing between y and
| z and chose y. You may not be the best predictor of your own
| actions, and if you have or had kids you know this.
|
| In our case, the social simulator is strongly connected to
| language, and it will use language to perform simulations,
| providing predictions, explanations, and social manipulations.
| However, our ability to simulate the actions of animals shows
| that consciousness is not limited to language.
|
| Remember that whenever we reason using language, we generally
| get far worse results than when we do not restrict ourselves to
| reasoning in language. If you have ever experienced the Zone
| when programming or doing math, or anything really, then you
| know the deeply disturbing feeling of the social simulator
| suddenly starting to chatter and trying to weigh in on problems
| it has jack shit ability in: in practice, going from smart and
| non-conscious/ego-dissolved (mostly ignoring the output of the
| social simulator, putting it in a sleep mode if you will) to
| conscious and stupid. Programming and math highlight this,
| because you can't argue with a compiler.
|
| This model of consciousness and free will isn't perfect by any
| means, but it's the best one I know, mostly because it does not
| try to add magic into things, while still explaining the
| perception of it and most of the contradictions between
| perception and physical reality as we know it.
|
| It predicts the cognitive bias towards mind-body duality, and
| the cognitive bias towards free will. We needed words to
| communicate these. It resolves the paradoxes around free will
| in their entirety, while predicting the perception of free
| will, notably including the "thinking I will do x" then doing
| y problem. It predicts that it might be the case that we have
| "decided" on something before we are consciously aware of the
| decision, consciousness only being a weak input to choices,
| not the decision maker after all. And if another's model of
| you does not put you at a chaotic point, that does not mean
| your model of you, i.e. your consciousness, didn't. It
| predicts that we would be constantly simulating ourselves, yet
| can be surprisingly bad at predicting our own actions, and
| even worse, that trying to subvocally reason yourself into
| changing behaviour by thinking "I will do this instead" would
| be utterly useless. The social simulator is expected to
| provide the outcomes of actions in a social context; decisions
| are then taken based on them. When you are having hypothetical
| discussions, that does not produce a prediction it will use;
| that's just practice. Meaning: if you want to convince
| yourself not to have another slice of pizza, thinking "I chose
| to be on a diet" is useless, but imagining meeting a nice girl
| who flirts and then looks disgusted at your waistline might be
| strong enough to make you want to hurl. In short, it predicts
| how to strengthen the influence of what you perceive as
| conscious will. It also makes it possible that the output from
| the social simulator could be severed, and that we could
| create people who live in the Zone while being socially
| oblivious. (Not autism.)
|
| It predicts that if you want to build a consciousness from
| scratch, what you need is a system designed to infer the
| existence of other agents and predict interactions with them,
| with very limited output bandwidth, whose input is direct
| environment observation with time-delayed or no feedback on
| the agents' internal states. It would be trained on the
| feedback signal of some other system/agent using its
| predictions to optimize some score, in an environment with
| multiple agents, not all of whom are interacting. The
| consciousness so made won't feel like a person deserving of
| rights, but that isn't necessary, as we didn't tie ethical
| personhood to consciousness.
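|
| (A loose Python sketch of that setup -- my own toy reading of
| the description above, with every name hypothetical: a "social
| simulator" that models agents, including itself, only from
| outside observations:)
|
|     # Toy sketch: agents are inferred from observations alone;
|     # the "self" is just one more modelled agent.
|     class SocialSimulator:
|         def __init__(self):
|             self.agent_models = {}   # agent id -> observed history
|
|         def observe(self, agent_id, action):
|             # Indirect evidence only; no access to internal state.
|             self.agent_models.setdefault(agent_id, []).append(action)
|
|         def predict(self, agent_id):
|             # Crudest possible predictor: repeat the last action.
|             history = self.agent_models.get(agent_id, [])
|             return history[-1] if history else None
|
|     sim = SocialSimulator()
|     sim.observe("self", "reached for pizza")
|     print(sim.predict("self"))   # the self-model, not the self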
| hliyan wrote:
| The problem of consciousness (or the quality of something being
| able to _experience_ itself and things around it), may never be
| solved. Even if the reductionist approach reveals some
| fundamental field or particle that gives rise to consciousness
| (which is very unlikely), it just shifts the problem from the
| brain's neural network into that phenomenon.
|
| An old Julian Jaynes analogy comes to mind: if you're a
| flashlight, you will never be able to understand light, because
| wherever you look, light will be there. By definition you're
| unable to look at something that is dark. You perceive the world
| as being bathed in perpetual light.
|
| The closest we might get may be a hand-wavy form of panpsychism,
| with some probable connection to quantum fluctuations.
| pmoriarty wrote:
| Another problem is that many of the people arguing about this
| don't agree on what it means to "understand" or "explain"
| something.
|
| Is it enough to get a majority of scientists, researchers,
| and/or philosophers to agree on something, or have it published
| in some prestigious journals for us to pat ourselves on the
| back and consider the subject "understood"?
|
| I have no doubt that such agreement on this subject could be
| attained, as people are good at convincing themselves and each
| other of things, and it's not inconceivable that this will
| happen with consciousness at some point.
|
| However, will that mean that we've really understood it, or
| merely convinced ourselves that we did?
|
| The same goes for explanations. What counts as an adequate
| explanation? Are a certain number of successful predictions
| enough? Is an elegant equation that accounts for all of the
| data enough? Or do we require something more?
|
| This is where the disagreements come in: when it comes to
| experience, equations, journal articles, and scientists patting
| each other on the back for having "understood" it don't seem
| to be enough for many participants in this controversy, and
| some argue it'll never be enough because there is something
| special about experience that transcends all that.
|
| That's why articles that point to some new discovery about
| neuron function or results of experiments on the brain can be
| seen as laughable before even reading them. On this view no
| fruit of the scientific enterprise, or even philosophy, can
| touch what it's like to experience the world.
| bserge wrote:
| I don't see a problem to solve there. Every conscious creature
| has a limited number of sensors and limited processing capacity
| to work with; as such, they could be recreated with enough
| research and resources.
|
| The flashlight in that example will never experience darkness,
| just like we will never experience lower or higher dimensions.
| Or dare I say, a different galaxy. But we could simulate them,
| so that's also experience, even if not accurate.
|
| Imo, an autonomous robot/car would be no less conscious than a
| cow. The experience is simply limited by the hardware/software
| available.
| andybak wrote:
| It's so strange to read comments from people who think
| there's nothing to explain. It must be just as strange for
| those people to hear from the other side of the debate.
|
| To me it seems so obvious that there's an explanatory gap
| between any functional explanation of behaviour and
| information processing - and the feeling of experiencing
| something. The various thought experiments around p-zombies
| crystallise this for me.
| [deleted]
| Luc wrote:
| One can readily find persuasive critiques of Chalmers'
| theories by professional philosophers. Book length even.
| antonvs wrote:
| The GP comment was specifically talking about the "hard
| problem." That's hardly "Chalmers' theory," although he
| did name it.
|
| There are critiques of the hard problem, like
| eliminativism. I'm not aware of any that are
| "persuasive," though. Do you have any recommendations?
| antonvs wrote:
| There's an obvious explanation for the differences in
| perspective, which is covered by the title of this paper:
|
| https://philarchive.org/archive/KEACDD
| andybak wrote:
| That's wonderful. I'm going to read it in full later; I've
| been looking for some writing on this topic.
| jokethrowaway wrote:
| I'm firmly on the other side: I don't think there's
| anything special about consciousness. Consciousness is
| merely the capacity of the brain to keep track of
| everything that's happening and adapt behaviour. The "I"
| is merely a thought structure; we're just a very
| complicated deterministic script with lots of inputs.
|
| I score quite high for psychopathy, and I didn't understand
| emotions for a long time (as a kid I imagined that people
| just pretended to have emotions), so the lack of that
| emotion-processing capability may influence my view on
| this topic.
|
| Either my brain is defective or it's this century's version
| of geocentrism.
| andybak wrote:
| It's interesting that you mention psychopathy. I wonder
| if there's a correlation between these traits and the
| dismissal of the "hard problem".
|
| Maybe you don't experience selfhood in the same way that
| I do? Or maybe you just find it easier to disregard that
| aspect than I do.
| goatlover wrote:
| > Consciousness is merely the capacity of the brain to
| keep track of everything that's happening and adapt
| behaviour.
|
| The problem comes in when you take into account color, sound,
| pain, etc. How does the brain keeping track of itself
| produce conscious sensations? They're not coming in as
| such from the outside world. It's just EM radiation,
| vibrations in the air, molecular motion on the skin, etc.
| Our nervous system turns that noise into sensations.
|
| It's easy to see this problem when you think of how you
| might go about making a conscious robot. A sophisticated
| robot can keep track of its sensory inputs and adapt its
| behavior. But what would you add to turn that into colors
| and sounds and feels?
| tsimionescu wrote:
| > A sophisticated robot can keep track of its sensory
| inputs and adapt its behavior. But what would you add to
| turn that into colors and sounds and feels?
|
| The common thinking along those lines is that you would
| add introspection and nothing more. That is, "red" is the
| brain's interpretation of the brain processes that happen
| when light of some wavelength is hitting the retina (or
| perhaps we need a few more layers - the brain's
| interpretation of the brain processing the information
| that the brain is processing the information that [...]).
|
| Under this general idea, emotions would be similarly
| explained as analysis of other brain processes,
| ultimately reacting to even older (evolutionarily
| speaking) mechanisms for motivation and goal-seeking.
| hliyan wrote:
| It's my comment he's responding to, and even though I may
| not agree with his formulation, I'm sensing he has a point.
| What if this is a bias that we've acquired while growing
| up? Perhaps we've conflated consciousness with intent and
| reactivity (another term introduced by Jaynes 40 years
| ago). We've gotten used to thinking that a chair, for
| example, is not conscious by our definition because it
| reacts in a purely non-intentional way. But does that mean
| it doesn't have some rudimentary level of "experience"
| (sans pain, pleasure, a sense of self or other things that
| come with higher intelligence)?
| andybak wrote:
| I agree "consciousness" is a hodgepotch of different
| things, some of which are easier to fit into a
| materialist world view than others.
|
| But that doesn't detract from my belief that there's an
| irreducible nub that remains once you've removed all the
| easy bits. That nub is what I regard as the "hard
| problem".
| LinAGKar wrote:
| Yes, a lot of naturalists seem to think it's just about the
| ability to process information, but that's not it at all.
| mypalmike wrote:
| What gap are you speaking of though? Do you suspect there
| is some as yet undiscovered fundamental quantity of the
| universe which lay dormant for billions of years until the
| human form evolved to make use of it?
| andybak wrote:
| > Do you suspect there is some as yet undiscovered
| fundamental quantity of the universe which lay dormant
| for billions of years until the human form evolved to
| make use of it?
|
| No idea. That's not what I'm discussing here.
|
| The gap is the fact that I indisputably experience things
| that happen to me. I feel them. There is an "I" to talk
| about that has genuine meaning in a way that it wouldn't
| if a fictional character or a computer program used the
| word.
|
| It's possible to imagine a form of life as complex as
| ours, with functioning societies, where none of the
| lifeforms experienced the world as an "I". Maybe ants
| don't have an "I". NPCs in a video game or GPT-3 almost
| certainly don't. At some point GPT-x or video game NPCs
| might become as complex or behaviourally rich as a real
| person but still lack an "I".
|
| This is a p-zombie. The "gap" I'm referring to is the
| difference between similar entities, one of which
| experiences qualia while the other doesn't.
| CuriousSkeptic wrote:
| You seem to extrapolate from a single sample that the
| probability of an "I" experience differs between
| individuals based on how similar they are to you.
|
| I would say that if you can imagine other complex life
| forms without that experience, it must be equally valid to
| assume the same can be true for everyone around you.
| Perhaps it's only you.
| andybak wrote:
| > Perhaps it's only you.
|
| And thus solipsism rears its slightly unsightly head
| once again.
|
| Solipsism has always had the virtue of being coherent,
| unlike many other theories in this area. I think pan-
| psychism also has this quality.
|
| The issue is that they both lead you to some bizarre
| conclusions. In some ways it's an analogue of the role
| Many Worlds plays in the philosophy of quantum theory:
| "cheap on assumptions, expensive on universes"...
|
| I do lean towards some form of pan-psychism. Or at least
| I'm not sure you can get to "I" from a strictly
| materialist starting point.
|
| However, I think it's also likely that we're missing a
| fundamental part of the problem and that these
| conversations will one day look like the babblings of
| children.
| CuriousSkeptic wrote:
| I think it's not only analogous to Many Worlds; I suspect
| it's actually the same problem.
|
| Or rather, it seems we're basically asking the wrong
| questions and therefore ending up with seemingly bizarre
| or contradictory things like particle/wave duality or the
| concept of time.
|
| How can we have consciousness when there is no objective
| "now"?
|
| To me, the question of which processes experience
| consciousness is akin to asking whether a sound wave is a
| crescendo, or how many image frames in a movie are required
| to make a scene.
|
| "Well, it depends..." you might say, and the follow-up
| question, "depends on what?", is much more interesting.
|
| Some physicists say the world is fully determined: that
| there are at least four fully existing dimensions of
| spacetime, or even more than that, given some
| interpretations of quantum mechanics.
|
| So we are given a static structure with certain patterns
| following specific laws of how the structure may be
| formed, as if you made a 3D print of a run of Conway's
| Game of Life. (See the sketch below.)
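|
| (To make that concrete -- my own illustration, not from the
| comment: stack the generations of the Game of Life and the
| whole history sits there as one static block, with "time"
| just another index into the structure:)
|
|     # Conway's Game of Life, with the run kept as one static
|     # block: generation t is just block[t].
|     from collections import Counter
|
|     def step(live):
|         counts = Counter((x + dx, y + dy)
|                          for (x, y) in live
|                          for dx in (-1, 0, 1)
|                          for dy in (-1, 0, 1)
|                          if (dx, dy) != (0, 0))
|         return {c for c, n in counts.items()
|                 if n == 3 or (n == 2 and c in live)}
|
|     glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
|     block = [glider]              # the whole "spacetime" so far
|     for _ in range(3):
|         block.append(step(block[-1]))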
|
| Now, in that static structure we have this interesting
| phenomenon called consciousness, which at first
| approximation looks like paths of focus tracing specific
| patterns.
|
| There might be just a single path, or many, or even a
| single connected path through them all, as you say. I'm a
| compatibilist here; I don't think it's a difference with
| more substance to it than the particle/wave duality. It's
| mostly a matter of perspective.
| bserge wrote:
| I see what you mean; however, in my opinion this "I" you
| are talking about is just an evolution of the human
| brain, most likely for the purpose of social
| cohesiveness/activity/integration/interaction. Everything
| we do beyond maintaining our own body is for society.
|
| Every action taken is rooted in social hierarchy, social
| undertakings, social interaction. Better clothes, better
| cars, better phones for better social status, more
| knowledge, more money, more research to advance society.
|
| Someone who lives alone in the woods will not take care
| of themselves much (just like ants, btw), and everyone
| who has discovered/learned something has an almost
| compulsive need to share it with others. If they didn't
| share, they'd be an outcast, considered broken by
| everyone else. A lot of similarities with an ant or bee
| hive there, except we're more autonomous as individuals.
|
| Our species is not just about the individual, and as
| society grew, it required more and more advanced
| individual hardware and software, which our primitive
| brains did not have. So the brain kept evolving, likely
| by cooperative/social individuals and groups being
| superior to less cooperative/social ones in survival and
| warfare. And what it has evolved into is the "I" we
| experience today.
|
| It adds a whole new layer of complexity, but it's still
| just "software" running on the same brain as everything
| else and using the same resources.
|
| There is that theory of the bicameral mind; I'm not sure how
| accepted it is, but it does seem like a good description
| of an earlier version of our current minds.
|
| And yes, I believe other social animals also have this
| sense of "I", even if primitive.
|
| Just one of my (possibly insane) theories, since I find
| myself thinking about this quite often, in the context of
| artificial intelligence based on the human one.
| andybak wrote:
| I agree with most of this, but all that adaptation and
| behavioral complexity could also happen to a race of
| p-zombies. And I still struggle to see how "I" (as
| distinct from behavioral patterns mostly
| indistinguishable from "I"-hood) can emerge in a
| materialist universe without some substrate that has
| properties beyond the mechanical.
|
| I'm veering dangerously close to mysticism and the non-
| falsifiable here. But it seems inescapable for the same
| reason Descartes clearly stated. My "I"-ness is the only
| indisputable fact about reality and the one from which
| all other beliefs follow.
| layer8 wrote:
| The "feeling of experiencing" is just a perception. That
| became obvious to me after dabbling in
| meditation/mindfulness exercises. My mental model of this
| is that in addition to regular sensory perception, the
| brain also perceives some of its own internal processing.
| That is, part of the further processing of the sensory data
| (or more generally of any cognitive processing) becomes
| itself subject to an internal perception (a bit like a
| debugger or profiler applied to its own execution). That
| recursion may even go one or two levels deeper. Or rather,
| there is no clean separation between the recursion levels,
| which somewhat obfuscates to the internal observer what is
| going on. Furthermore, it only perceives parts of its
| internal processing, so it doesn't get the whole picture
| (probably far from it).
|
| This is why consciousness seems magic and confusing. But
| when you think about it, everything we mean when we talk
| about consciousness is something we perceive ("feel",
| "experience", "qualia"), so clearly it's something having a
| representation in the brain that is being perceived by
| another part of the brain (or partially the same part).
| What makes it confusing is that at that level there's no
| clear separation between subject and object of the
| perception. But that fits nicely with the general messiness
| and staggering complexity of biological systems.
|
| With roughly 100 billion neurons and 100 trillion
| synapses in a brain, it shouldn't be surprising that the
| internal self-perception can be very detailed, complex,
| multifaceted, nuanced, subtle, and subject to itself.
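|
| (A toy Python sketch of that "profiler applied to its own
| execution" picture -- purely my own illustration, not a model
| of the brain: the trace of each round of processing is fed
| back in as just another percept:)
|
|     # Toy sketch: one level of processing per call; the record
|     # of that processing becomes itself an input to the next.
|     def process(percepts, depth=0, max_depth=2):
|         report = f"level {depth}: processed {len(percepts)} percepts"
|         if depth < max_depth:
|             # The trace of this step is perceived at the next one.
|             return process(percepts + [report], depth + 1, max_depth)
|         return percepts + [report]
|
|     for line in process(["red patch", "soft humming"]):
|         print(line)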
| tandav wrote:
| Wow, that idea about internal perception recursion is
| making things much clearer for me and quite enlightening.
| Thank you for this :)
| andybak wrote:
| > The "feeling of experiencing" is just a perception.
|
| That's exactly the phenomenon we're trying to explain.
| You've just said "feeling is just feeling".
|
| The question is: why is there someone experiencing a
| feeling? A machine or algorithm can be self-referential.
| Is there some magical "amount of recursion" that suddenly
| results in consciousness? That seems strange to me.
| layer8 wrote:
| In my view, consciousness is a gradual thing, not a
| binary property. In addition, it's also not really "a
| thing", as in not "something extra". By that I mean, when
| you make it a habit to probe into your
| consciousness/awareness, you will find that all there is
| are perceptions. You may think that you have a quale of
| a particular perception (e.g. of a color, a scent,
| a mood, a memory, an insight, etc.), but then the quale
| is only there by virtue of you perceiving it.
| There is nothing that is pure subject. When you perceive
| yourself as being a subject (an "I"), it thereby becomes
| an object of perception. It can't exist without being an
| object of perception. (E.g. it isn't there when you're
| unconscious.) That raises the question whether it is
| really anything more than an object (as opposed to a
| subject).
|
| Now, a perception in that sense only requires that the
| thing perceived (the object of perception) has a
| representation in the brain, plus that some information
| processing of that representation is occurring in the
| brain (the act of perceiving). Our objects of perception,
| including qualia and e.g. the awareness of being
| conscious, can be very rich and complex, but it's
| certainly not implausible that they are fully represented
| in the brain with all their richness -- and there is
| nothing more to it than that representation. (In fact, my
| personal experience is that the range of qualia is not
| _that_ large, when you observe it over the longer term and
| for example compare it to our memory capabilities.)
|
| So, when I introspect my consciousness, I cannot pinpoint
| _anything_ that doesn't fit that model of "information
| processing (perceiving) of representations (objects,
| including qualia)". Even what I think of as "I", in the
| end, is just one of those objects (or probably more a
| cluster of them). The confusion, I believe, stems from
| the fact that we can only introspect a fraction of the
| information processing that is really going on, and that
| the "I" as an independent subject is a very strong
| illusion. But when you try to look closely for it, there
| is arguably nothing really there. (The Buddhists do have
| a point with their concept of non-self.)
| benlivengood wrote:
| I think there is a definite distinction between
| perceptions and models. For example, we don't consciously
| perceive the edge and motion detection of our vision
| system but instead the aggregate interpretation of
| modelling those perceptions. Similarly we don't seem to
| be conscious of some internal organs, but conscious of
| others under different circumstances. Usually we aren't
| conscious of our breath but by focusing we can both raise
| conscious awareness and control of it. So it seems like
| we have a "consciousness" module in the brain that can be
| directed and connected to different other parts of our
| nervous system, but not all, in order to "do the
| perceiving". With practice it seems possible to become
| conscious of more autonomous parts of the body, but not
| all.
|
| And still the question remains of how the brain produces
| the conscious experience and expands or contracts it,
| while not being fully conscious of the whole brain or of
| its own operation (in which case we would understand
| consciousness much better). Is it the chemical activity?
| The electrical? A combination? Is it in the energy
| transfers between matter or in the relationships that get
| established?
| kordlessagain wrote:
| It just seems like magic because of time, maybe. Which would
| imply the "answer" to the "hard" non-question of
| consciousness probably lies in time-series data. The idea
| that gravity is tied to consciousness through time and space
| may be related to the assertion that space is a dimension
| similar to a search index using an array of timestamps.
|
| Being hungry is just my stomach being empty, over time. A
| feeling emerges, which is then synthesized into a
| perception. Similar to how the world model emerges inside
| us from sound and imagery: imagery and sound bites which
| are themselves built from sensors and post-processing,
| and which aren't necessarily made apparent to the user in
| and of themselves (the image from the left half of the
| right eye only, for example).
|
| Imagine an apple, if you can. Where does the image of the
| apple come from? Can you, like many, see the apple for
| what it is (an iconic image) projected into your
| perception? If you want some esoteric text on this, read
| up on Theosophy. They present the idea that the image comes
| from a "mind body", made from alternate types of matter.
| This is relevant when looking at older schools of
| philosophical thought, which used to encompass the
| sciences.
|
| In Tibetan Buddhism this internal model or map of the
| world is presented as no different from someone having a
| full-immersion PTSD episode... a "broken" model run "on
| the side", in mind, which appears to be "real", occluding
| the "reality" model most of us agree is "here".
|
| > For me I say God, y'all can see me now / 'Cos you don't
| see with your eyes / You perceive with your mind / That's
| the end of it / So I'mma stick around with Russ and be a
| mentor - The Gorillaz
| berndi wrote:
| I think you are confusing science with philosophy. In science,
| solving a problem amounts to building a (quantitative) theory
| that allows us to predict phenomena and outcomes in the real
| world. We say that we have understood a phenomenon when we have
| a self-consistent theory that makes predictions validated in
| experiments.
|
| The ultimate question of "why" reality behaves in a way that is
| congruent with a given theory is left to philosophers.
|
| Your claim that a theory of consciousness would simply "shift
| the problem" is only true with respect to these philosophic
| (arguably unscientific) types of questions.
|
| Consider the problem of gravity. Very coarsely, we have moved
| from Aristotle's theory of gravity (the natural place of things
| is on the ground, and things like to stay in their natural
| place), to Newtonian gravity (objects are attracted with a
| force proportional to the product of their masses and inversely
| proportional to the squared distance between them), to general
| relativity (objects follow geodesics in curved spacetime, with
| the curvature determined by the energy content of space).
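|
| (In standard textbook notation, not from the comment, the last
| two stages are:)
|
|     F = G \frac{m_1 m_2}{r^2}                        (Newton)
|
|     \frac{d^2 x^\mu}{d\tau^2}
|       + \Gamma^\mu_{\alpha\beta}
|         \frac{dx^\alpha}{d\tau} \frac{dx^\beta}{d\tau} = 0
|                                            (geodesic equation)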
|
| Our grasp on gravity has certainly improved greatly, but the
| "why" questions have simply been shifted.
|
| Consciousness is a natural phenomenon and as such can be
| subject to scientific study. The ultimate question of why the
| laws of nature are as they are is not part of this project and
| is best entertained after closing time.
| malka wrote:
| Consciousness is not observable. Gravity is.
| berndi wrote:
| Any reasonable definition of "observing" presupposes a
| conscious observer and it is possible for you to observe
| yourself being conscious (I assume). Indeed, consciousness
| is observable in a much more direct sense than gravity.
| goatlover wrote:
| Yes, but you can't observe other people's consciousness,
| only infer it. This is a problem when it comes to
| animals, infants, intelligent machines, and coma patients.
| Or aliens, if we ever made contact.
| Isinlor wrote:
| You can observe, in principle, other people's
| consciousness.
|
| Brain-conjoined twins can do it:
| https://en.wikipedia.org/wiki/Craniopagus_twins
|
| For example, there are twins that can see with each other's
| eyes or feel each other's bodies.
|
| You could try, in principle, to create a chimera from
| your brain and the brain of some other animal.
| lisper wrote:
| > there are twins that can see with each other's eyes or
| feel each other's bodies
|
| That's not the same as directly observing someone else's
| consciousness. That's just two people sharing some I/O
| devices.
|
| In fact, one could argue that the inability to directly
| observe another consciousness is a necessary condition
| for being an individual: the limit of your direct
| observation is the _definition_ of where "you" stop and
| the rest of the universe (including other people) begins.
| Isinlor wrote:
| It is not just sharing I/O devices. Brain-conjoined twins
| can also share their mental states. They do experience
| each other's qualia [0].
|
| > Though Krista and Tatiana Hogan share a brain, the two
| girls showed distinct personalities and behavior. One
| example was when Krista started drinking her juice
| Tatiana felt it physically going through her body. In any
| other set of twins the natural conclusion about the two
| events would be that Krista's drinking and Tatiana's
| reaction would be coincidental. But because Krista and
| Tatiana are connected at their heads, whatever the girls
| do they do it together.
|
| I also recall examples where one girl does not like
| eating something, and so the other girl cannot eat it,
| because the first one can taste it.
|
| The concept of an individual is not a fundamental property
| of the universe. It is an emergent, fluid, and complex
| concept.
|
| There are people with multiple personality disorder.
| There are brain-conjoined twins. There are people with a
| severed corpus callosum.
|
| E.g. here is report about one person [1]:
|
| > After the right and left brain are separated, each
| hemisphere will have its own separate perception,
| concepts, and impulses to act. Having two "brains" in one
| body can create some interesting dilemmas. When one
| split-brain patient dressed himself, he sometimes pulled
| his pants up with one hand (that side of his brain wanted
| to get dressed) and down with the other (this side did
| not). He also reported to have grabbed his wife with his
| left hand and shaken her violently, at which point his
| right hand came to her aid and grabbed the aggressive
| left hand. However, such conflicts are very rare. If a
| conflict arises, one hemisphere usually overrides the
| other.
|
| [0] https://en.wikipedia.org/wiki/Craniopagus_twins#Media
| [1] https://en.wikipedia.org/wiki/Split-brain
| lisper wrote:
| > They do experience each other's qualia.
|
| How do you know? How could you _possibly_ know?
|
| I don't dispute that the correspondence between
| consciousnesses and bodies need not be one-to-one.
| Multiple consciousnesses could inhabit the same body
| (multiple personality, split brain, conjoined twins) and
| a single consciousness could be distributed across
| multiple bodies (I don't know of any examples, but I can't
| think of any reason this should be impossible in
| principle). But there's a difference between receiving
| input from someone else's sensors and experiencing their
| qualia. It's the difference between "This ice cream
| tastes like pistachio" and "This ice cream (which tastes
| like pistachio) tastes _good_." To demonstrate someone
| experiencing someone else's qualia you'd have to find an
| example where the two individuals had different
| preferences about something (like pistachio ice cream)
| and show one of them simultaneously experiencing liking
| it and not liking it. I don't see how that could be
| possible even in principle.
| Isinlor wrote:
| I use the same standard of knowing as I do for qualia in
| general: I listen to what other people report.
|
| > there's a difference between receiving input from
| someone else's sensors and experiencing their qualia.
|
| There cannot be a clear difference if you are not able to
| even fully separate the two different personalities.
|
| If the distinction between personalities is fluid, then
| the distinction between the personalities' qualia also
| has to be fluid.
| lisper wrote:
| But that's exactly what I'm asking: what does a report of
| shared qualia _actually look like_?
|
| > If distinction between personalities is fluid
|
| But it isn't. It might be _vague_, but it isn't _fluid_.
| _If_ two personalities are distinct then they remain
| distinct. It might be difficult to decide whether two
| different kinds of behavior are manifestations of two
| different personalities, or the same personality behaving
| differently from one moment to the next. But whatever
| criterion you apply, the situation is not going to change
| from one moment to the next. This is one of the defining
| characteristics of personalities. It is what allows you
| to say that a person is _the same person_ as they were
| yesterday, despite their behavior today being different
| in some respects from what it was yesterday.
| Isinlor wrote:
| One personality can be split into two personalities.
| People with multiple personality disorder are not born
| with the disorder. The same goes for people whose corpus
| callosum is severed. It is fluid. The amount of damage to
| the brain will determine how distinct the personalities
| will be. If you could transplant a brain, it is possible
| that one personality could yield two fully distinct
| personalities. In principle the process could also be
| reversed, leading back to one personality.
|
| Additionally, the process of separating parts of a brain
| is also continuous in time. People could be experiencing
| qualia while the brain is being damaged, and the process
| could take long enough for them to register the states in
| between.
|
| With regard to reports, here:
|
| > He also reported to have grabbed his wife with his left
| hand and shaken her violently, at which point his right
| hand came to her aid and grabbed the aggressive left
| hand.
|
| And here: https://youtu.be/N1Mac4FeKXg?t=124
|
| With regard to the definition of personalities: definitions
| created for the average case are often not sufficient to
| describe reality. Quantum physics is a perfect example of
| how reality can break old definitions.
| lisper wrote:
| > He also reported to have grabbed his wife with his left
| hand and shaken her violently, at which point his right
| hand came to her aid and grabbed the aggressive left
| hand.
|
| I see that as evidence of two
| consciousnesses/personalities sharing one body (i.e. one
| set of I/O devices), not two personalities sharing
| qualia.
| [deleted]
| bsenftner wrote:
| We observe our own consciousness all the time. It's just
| that your consciousness is closed to all but you.
| zepto wrote:
| That's not what we call an 'observation' in science.
| antonvs wrote:
| Science answers "why" questions all the time.
|
| We know why uniform radiation propagates according to an
| inverse square law. We know when and why conservation laws
| exist. There are many, many examples of this.
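|
| (For concreteness, the standard geometric reason, not spelled
| out in the comment: a source of power P radiating uniformly
| spreads its flux over spheres of surface area 4 \pi r^2, so
| intensity must fall off as)
|
|     I(r) = \frac{P}{4 \pi r^2}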
|
| Examples like these refute your claim about the limits of
| science.
|
| Claims like yours are common, though - you're probably just
| repeating what someone else has told you, perhaps without
| thinking very deeply about it.
|
| This position seems to have arisen as a result of some of the
| limits of science that were encountered last century. The
| "shut up and calculate" mentality was a kind of reaction to
| the philosophical problems with quantum mechanics. But the
| defensive reaction that "science is just about theories that
| make predictions" is incoherent.
|
| If it were really true, then science would be utterly
| dependent on philosophy to come up with new theories, because
| a prediction-generating machine isn't going to help you with
| that.
|
| Ironically, the very people who make these claims would be
| the last to accept that progress in science is utterly
| dependent on philosophy - but that's the consequence of their
| own attempt to make a sharp delineation between, essentially,
| thinking and just crunching numbers.
| pmoriarty wrote:
| _" We know the answer to why uniform radiation propagates
| due to an inverse square law."_
|
| This is an answer to a "how" question rephrased
| (incorrectly) as an answer to a "why" question.
|
| First, the "inverse square law" isn't a law in the
| colloquial sense (like a maximum speed limit law) where the
| universe is forced to obey it. Instead, "law" is just a
| conventional phrase indicating what the consensus among
| scientists is regarding certain observations.
|
| So it's really an answer to the question "how does radiation
| propagate [in our observations so far]?": "according to the
| inverse square law".
|
| Future observations of radiation propagation might run
| completely contrary to those we've had up to now, and it is
| scientific explanations that will have to be modified to
| accommodate those observations.
|
| But the inverse square law does _not_ explain _why_
| radiation has been observed to propagate in this way.
|
| For such an answer you'd have to resort to a much grander
| explanation of the universe, involving all sorts of other
| theories involving many other observations, back to the big
| bang, which is not yet fully understood and may never be
| (even if we assume that the big bang theory itself won't be
| replaced by some other origin theory in the future, and not
| to mention what happened "before" the big bang, which may
| be even more impenetrable still).
|
| But even were there to be some comprehensive "theory of
| everything" (in the larger sense), that doesn't mean the
| why of it has been explained, as there'll still be open
| questions like: "why something rather than nothing?" or
| "why this universe and not another?"
|
| "But," some may object, "I just wanted to know why
| radiation propagates as it does, not why there's something
| instead of nothing." Well, I'm afraid that science can't
| answer your little question without answering the big
| questions. Religion or philosophy might, but they're also
| seen as unsatisfactory to many, so such why questions might
| never be answerable to everyone's satisfaction.
|
| Harder questions, like those about consciousness, are even
| less likely to be satisfactorily answered, as touching them
| immediately lands one in the morass of assumptions,
| definitions, points of view, and perspectives.
|
| Half the time people are completely speaking past each
| other because they've never agreed on or even stated what
| their definitions or assumptions are, so are going off
| about two or more completely different things.
| Consciousness itself is notoriously difficult to define, so
| when two or more people are talking about something that
| they "know it when they see it," they're bound to talk past
| each other half the time, whether they agree or disagree.
|
| Some philosophers are better at setting the ground rules
| and making their fundamental assumptions and definitions
| explicit, but they're usually pretty balkanized, and you'll
| find plenty of other philosophers disagreeing with their
| assumptions and definitions.
|
| I personally see little hope of this thorny problem ever
| being resolved to everyone's satisfaction, but there'll
| surely be plenty of arguing about it until the end of time.
| amelius wrote:
| > Science answers "why" questions all the time.
|
| Nope, science is just shifting the perspective.
|
| There is a nice video of Feynman about it, appropriately
| titled "Why":
|
| https://www.youtube.com/watch?v=36GT2zI8lVA
| berndi wrote:
| I'm having a hard time understanding your post -- my depth
| of thinking may well not be on par with yours. It's obvious
| that science as a whole explains a hierarchy of
| phenomena, with a given phenomenon (like radiation
| propagation) often being explained ("why does it happen in
| this specific way?") by a more fundamental theory that
| provides an overarching account of a set of phenomena.
|
| The point is that every update of a scientific theory
| shifts old "why" questions to new ones. Science will not
| ever, and does not aim to, provide an answer to the ultimate
| question of why anything exists at all, or why a given
| theory of everything applies rather than another (indeed,
| string theory, for example, posits a possible, if not
| actual, theory of everything).
|
| In this sense, in the scientific study of consciousness, we
| do not aim for an ultimate account of why the laws of
| nature give rise to consciousness. Instead, it is about
| explaining a natural phenomenon within a theoretical
| framework that allows us to make predictions with respect
| to experimental outcomes.
| ganzuul wrote:
| There is a logical answer to the question of why there is
| something rather than nothing, but interpretations of it
| are varied. If you accept that consciousness (hard) is a
| natural phenomenon, then it is much less of a leap.
| However, you do lose the ability to conclude with
| certainty that your individual consciousness did _not_
| instantiate the entire universe, which tends to lead to a
| very self-centered path of inquiry which often only
| skirts around the main issue. If you are further attached
| to the logic of the Law of the Excluded Middle, then
| megalomania is always close at hand, and that is probably
| the reason why this knowledge isn't shouted from the
| rooftops.
|
| This idea has been around for thousands of years and is
| similar to the central teaching of Advaita Vedanta.
| Indeed, we are continuing a truly great tradition of
| inquiry in our natural philosophy of science.
| [deleted]
| thanatos519 wrote:
| The less hand-wavy form of panpsychism is called IIT:
| https://en.wikipedia.org/wiki/Integrated_information_theory
| mellosouls wrote:
| Discussed in the article itself.
| hliyan wrote:
| Interesting! So basically: the basic unit of consciousness
| (whatever you may call it) is an intrinsic property of matter
| (or energy or particles?), and the more complex the
| information systems matter forms, the more detailed the
| model of the physical world that emerges with that system,
| and the system experiences said model? I only did a quick
| reading, so I may be misinterpreting the theory.
| jw1224 wrote:
| You are the universe experiencing itself.
| baxrob wrote:
| https://www.susanblackmore.uk/consciousness-an-introduction/
| o_p wrote:
| The only hard thing about consciousness is that we don't have a
| good enough interface to experiment with. The question "what is
| consciousness" is meaningless and not scientific; science
| progresses when we start asking "how" things work. Modeling the
| dynamics of things is the shift in thinking that enabled
| science.
___________________________________________________________________
(page generated 2021-05-29 23:02 UTC)