[HN Gopher] We Are Beast Machines
       ___________________________________________________________________
        
       We Are Beast Machines
        
       Author : CapitalistCartr
       Score  : 52 points
       Date   : 2021-10-14 11:10 UTC (1 day ago)
        
 (HTM) web link (nautil.us)
 (TXT) w3m dump (nautil.us)
        
       | StuntPope wrote:
       | Another materialist who has it exactly backwards.
       | 
       | Consciousness is not an epiphenomenon of the brain. It is the
       | base layer of reality.
        
         | ganzuul wrote:
         | You have to admire the repeatability with which they get it
         | wrong. It's almost like they copied their homework.
        
         | callesgg wrote:
          | Of subjective reality, yes.
        
           | hnthrway wrote:
           | Which is all any of us have access to, and which all science
           | is couched within
           | 
           | Objective reality is just consensus reality
        
             | callesgg wrote:
              | I don't know... consensus reality is what I would call
              | the reality that is portrayed in media, or maybe the
              | reality that fits the largest number of people.
             | 
             | Objective reality is best described with particle physics
             | but that stuff can't be understood by the human mind. The
             | human mind can't keep track of so much complexity at once.
        
       | m3kw9 wrote:
        | I'm always curious about how the selection process works. How
        | come I'm not this person or that person? How come I'm this
        | person?
        
         | robotresearcher wrote:
         | Extreme selection bias.
        
       | ganzuul wrote:
       | I'm finally starting to unravel New Age terminology which deals
       | with this subject. Eastern philosophy is almost impenetrable due
       | to the cultural barrier, but those who succeeded imported some
        | relevant ideas to Western society.
       | 
       | It's really bizarre how some of these wisdom teachers go on
       | conspiratorial rants which have a completely different tone from
       | the actually useful information.
        
         | joeberon wrote:
         | Eastern philosophy isn't impenetrable, and people who cannot
         | penetrate it aren't hampered by cultural difference.
         | 
         | Even for natives of those cultures, penetrating that philosophy
         | often takes years of spiritual practise and training. The
         | reason is that it isn't about ideas, it's an actual body-mind
         | training and transformation of being. It isn't something you
         | just sit and talk about and discuss, although it can include
         | that, but also it must come with actual implementation to
         | occur.
         | 
         | That's the main difference I've noticed between western
         | philosophy and eastern philosophy which makes them difficult to
         | integrate. The former is almost always understood by the
         | experience of so-called "ordinary beings" and communicated in
         | terms of intellectual arguments, whereas the latter cannot
         | really be understood unless you actually physically and
         | spiritually transform your very being.
         | 
         | For example, I used to practise Soto Zen, and Dogen's
         | philosophical writings are very popular there. Western
         | philosophers, or just general non-practitioners, absolutely
         | bash their heads against his writing, it's just so difficult to
         | wrangle and there's no connection. In Soto Zen we understand it
         | by sitting zazen (seated meditation). Literally, the only way I
          | know to understand it is by doing meditation practise. Then
          | the next time you read it, you have a deep connection with it
         | that you can't explain in a way that someone who hasn't had
         | that connection would understand. Western philosophy seems to
         | not be compatible with those kinds of transformations. If it
         | cannot be explained in a way that is somehow independent of the
         | observer's own practise, then it is not considered rigorous,
         | but also such a thing is impossible when talking, for example,
         | about Zen.
        
           | ganzuul wrote:
           | I did use the qualifier 'almost'... and I don't see you
           | actually disagreeing with what I meant.
           | 
            | There was something called Erhard Seminars Training or
            | something, which took people through decades of training in
            | the more cumbersome system in a week. Similarly, a lot of
            | cultural impedance has been removed in the teachings of
            | Western sages.
            | 
            | Teachings need to be updated and modernized as we progress
            | as a society.
        
             | joeberon wrote:
              | > Teachings need to be updated and modernized as we
              | > progress as a society.
              | 
              | I don't think there's any way to do that without
              | understanding them, but to understand them fully you
              | basically have to become a Buddha... so it is not clear
              | how to do this without removing essential parts of the
              | teachings, as is often done when surgically transposing
              | bits of eastern philosophy into the west.
        
               | ganzuul wrote:
               | What Westerners need to be taught is different from the
               | needs of other cultures. Seekers here are seldom lacking
               | in their discernment, but very lacking in their empathy.
        
               | joeberon wrote:
               | I haven't experienced that the teachings need to change
               | personally, but yes the emphasis needs to. Unfortunately
               | westerners are being fed teachings that are more like
               | "this meditation will make you experience a sober LSD
               | trip" and "this one will cure your depression" and "this
               | will give you insight into the nature of reality",
               | because that's what westerners want. But what they
               | actually _need_ is teachings on compassion, loving
               | kindness, etc, which they are in no way interested in.
               | Ultimately you cannot have one without the other. There
                | are no eastern spiritual paths that don't integrate both
               | insight and compassion.
        
               | ganzuul wrote:
               | AFAIK the ads don't really correspond to the content.
                | Regardless of whether you are looking for weight
                | loss, business success, or liberation, you come into
                | contact with the same teachings, because they do all
                | of the above. It
               | brings people in with concepts they are familiar with.
        
               | joeberon wrote:
                | Yeah, I think you have to misrepresent to get people
                | in, but personally I don't like that. I would rather
                | we not lie about it, even if that means Buddhism (for
                | example) dies, than misrepresent it.
        
       | nickelpro wrote:
       | Nothing in this essay addresses the single most prominent
       | question of contemporary philosophy of mind, which is the hard
       | problem of consciousness. It only addresses the soft problems,
       | which everyone agrees are solvable "merely" with enough time,
       | funding, and initiative. Nothing about the "beast machine"
       | addresses how qualitative experience arises from physical
       | phenomena, or as Chalmers puts it, "Why are we not philosophical
       | zombies?" Of course the hard problem, by its nature, is likely
       | irreducible, so expecting it to be solved is unreasonable.
       | 
        | The point is that books and passages like this are only
        | compelling if you've already accepted a hand-wavey answer to
        | the hard problem. In this case the author's accepted answer
        | seems to be the ever-popular "consciousness is an illusion,"
        | but that particular explanation of the hard problem is no more
        | valid (or any less vague) than any other. Stating it with
        | authority and not acknowledging it as a "faith-based" axiom
        | undermines the work.
        
         | dougmwne wrote:
          | To turn this on its head, maybe the problem is assuming that
         | everything else in the universe is without consciousness aside
         | from certain classes of matter organized in a particular
         | configuration of a brain. If we instead assume that
         | consciousness is an intrinsic property of all matter and space,
         | then the problem goes away. Or to state it another way, either
         | everything is conscious or nothing is.
        
           | jerf wrote:
           | It doesn't do a darned thing to the problem. It just moves it
           | around; the question becomes "If the entire universe is
           | 'conscious', for whatever your definition is, why are some
           | configurations of matter more able to express it and others
           | less?" How can we build an identifier that says what will be
           | more expressive and less by inputting the configuration? How
           | can we engineer configurations of matter that are more
            | expressive rather than less? It is clearly obvious that
            | configurations of matter are not just additive, that is,
            | that not every 3 pounds of matter is the exact same type
            | of conscious as any other 3 pounds of matter, so what's
            | the difference, _exactly_?
           | 
           | Which happens to be the exact same problem.
           | 
           | This problem doesn't go away with any amount of changing
           | definitions. Shifting the mystery under the rug where you
           | can't see it doesn't mean the mystery has gone away, nor that
            | everyone else is going to be fooled. Some of us still see
            | the lump under the rug you just created.
        
             | dougmwne wrote:
             | To me, all those questions are much closer to creating a
             | testable hypothesis than trying to ask how to go from 0 to
             | 1, from the inert to the conscious. "Consciousness
             | expression" sounds like something you could develop a
             | measurement of based on an observable behavior. I'm not
             | sweeping the problem under the rug, rather presenting a
             | scenario in which trying to discover the origin of
             | consciousness is like trying to discover the origin of
             | energy.
        
         | lostmsu wrote:
          | I think you are the one hand-waving away pretty trivial
          | resolutions of the p-zombie problem.
          | 
          | Specifically, I'd define sentience, "qualia", and experience
          | as certain replies to certain stimuli. Namely: sentience as
          | the ability (skill level) to play any game, like Starcraft;
          | experience as the ability to reproduce previously received
          | inputs; and qualia as the set of answers to all possible
          | questions describing a particular input.
          | 
          | Given these definitions, the Chinese room construct (which
          | is supposed to be a p-zombie) has all 3.
          | 
          | So unless you can either provide better definitions (without
          | violating Occam's Razor or Popper's criterion) of the 3
          | concepts, which the Chinese room would not satisfy, or give
          | another way to construct a p-zombie which would lack one of
          | them but still feel otherwise indistinguishable from "a
          | person", the problem's solution is staring at you right
          | there.
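          | 
          | A minimal sketch of those three tests as code (illustrative
          | only; the "system" object and its act/recall/answer methods
          | are hypothetical stand-ins, not anyone's actual proposal):
          | 
          |   class OperationalTests:
          |       """Operational stand-ins for sentience, experience,
          |       and qualia, per the definitions above."""
          | 
          |       def __init__(self, system):
          |           # system under test (hypothetical interface)
          |           self.system = system
          | 
          |       def sentience(self, game):
          |           # skill level at playing an arbitrary game
          |           return game.score(self.system.act)
          | 
          |       def experience(self, past_inputs):
          |           # can it reproduce previously received inputs?
          |           return all(self.system.recall(i) == i
          |                      for i in past_inputs)
          | 
          |       def qualia(self, stimulus, questions):
          |           # the set of answers to all questions
          |           # describing a particular input
          |           return {q: self.system.answer(q, stimulus)
          |                   for q in questions}
          | 
          | A Chinese room that scores well on all three would, under
          | these definitions, have all of them.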
        
         | [deleted]
        
         | robotresearcher wrote:
         | I firmly believe this problem will melt away in time.
         | 'Consciousness' will gradually cease to be something that
         | empirical science is concerned with, any more than the 'soul'
         | is now, though it was highly salient to science 200 years ago
         | and still is to non-science communities.
         | 
         | The signs of this are: there's no broad agreement on what it
         | means, there are no falsifiable predictions outstanding, and
         | the only reason to not accept that we are philosophical zombies
         | is what people say when you ask them about it. People believe
         | all kinds of things that are obviously not true. Why do we give
         | such special credence to this one?
         | 
         | I think we will move on from this idea over time.
        
         | chmsky00 wrote:
         | The hard problem will be solved in time.
         | 
         | My reasoning is that the people that defined it did so in a
         | time when our science was less developed, so of course the
         | problem seemed much harder.
         | 
          | Information theory makes even the hard problem a matter of
          | understanding the interaction of physical fields with the
          | chemistry of biology. Relativity and sensory network effects
          | explain relative experience elegantly enough.
          | 
          | A lot of theories from back in the day are built on outdated
          | understanding. Unfortunately their authors did not get to see
          | our achievements in engineering unravel a lot of their
          | overbaked theories, which had to fill in gaps without hard
          | evidence. In the same way, we won't see the future technology
          | that has no need of all this software we wrote. It won't
          | literally be handed on.
          | 
          | The luminiferous aether was once a real thing to many, even
          | though it's not one discrete thing. One might consider it a
          | poetic initial take on field theory, which we now express in
          | sets of glyphs of shared meaning. Artistically, those could
          | also be imagined as flowing sets of matrix code that glow
          | like a luminous field.
         | 
         | If there's a hard problem to consciousness it's an
         | unwillingness to consider there is no Valhalla. Is there a hard
         | problem? Or do we hope for an answer that suggests we're not
         | just meatbags?
        
           | PKop wrote:
           | >a matter of understanding
           | 
           | There is also the possibility that our cognitive limits will
           | prevent us from creating artificial intelligence and
           | consciousness, even if it is materially possible.
        
             | chmsky00 wrote:
             | Good, we don't need AI.
             | 
             | I'd be more interested in augmented human intelligence.
             | Growing neuron structures to speed the acquisition of skill
             | and knowledge.
             | 
             | AI as we know it now is for empowering aristocrats. Here's
              | Google's data center empowering Google to make business
             | choices that involve extracting effort from us.
             | 
             | I'd rather science and technology empower individuals
              | uniquely and not be ground down to fiscally prudent
              | efforts.
        
           | nickelpro wrote:
           | The hard problem has technological progress built into its
           | definition. It defines problems of consciousness that can be
           | solved by progress as the "soft" problems, and the problem
           | that cannot be solved by progress as the "hard" problem. The
           | hard problem doesn't say that a mechanism _hasn't_ been
            | found, it says a mechanism _cannot_ be found. Like Gödel's
           | incompleteness theorems or Heisenberg's uncertainty principle
           | it places a limit on what can be known about the system,
           | "[the hard problem will] persist even when the performance of
           | all the relevant functions is explained."
           | 
           | The validity of that is up to you, but if you accept the hard
           | problem as a valid question it will not be solved by
           | technological progress.
        
             | chmsky00 wrote:
             | Yep it's not an interesting idea, the hard problem. We can
             | never see outside our universe. We can't know all states of
             | matter ever. We can't peek beyond the speed of light. We
             | can solve a lot of problems we actually have without an
             | answer (42, but what...)
             | 
             | Humans have a willingness to see truth in metaphor and
             | analogy, and invent them to avoid accepting we're just meat
             | bags.
             | 
             | That's what the hard problem of consciousness is to me;
             | biological ideation run amok.
             | 
              | It has useful political effects; it can be used to
              | disabuse the self-righteous because it's a purposeful
              | thought-ending monolith, nothing more.
             | 
             | We'll keep iterating on our theories of the interaction of
             | fields and matter and stop caring about the hard problem
             | like we quit discussing luminiferous aether. We'll stop
             | seeing the literal edge of reality as a boundary on
             | experience in the first place.
        
         | [deleted]
        
         | bondarchuk wrote:
         | If you are going to demand a precise, non-handwavey answer to
         | the hard problem, then you must first give the hard problem in
         | a precise, non-handwavey way, or at the very least prove in a
         | precise, non-handwavey way that there even is such a thing as
         | the hard problem. As far as I've seen nobody has really done
         | this, it's always just "what breathes fire into the equations?"
         | "what is it like to be a bat?" "I just feel it in my bones that
         | there's something missing"... P-zombies are an attempt, except
         | that "p-zombies are conceivable" is to be taken as axiomatic
         | without any further explanation.
        
           | joeberon wrote:
            | That's the problem, and exactly why I say I don't think it
            | can be explained except via a vague feeling. I've never
            | seen someone who says there is a hard problem of
            | consciousness adequately explain it to someone who says
            | there isn't. The poster is now making what seem more like
            | appeals to emotion throughout the thread: "are you saying
            | that sufficiently complicated blocks of matter are
            | conscious?" and the like. This is what always happens.
            | 
            | While I personally believe there is a problem, I cannot
            | _explain_ why, and for a long time I didn't think there was
            | a problem.
        
             | mach1ne wrote:
             | The so-called "hard problem of consciousness" may be a
             | problem through which one may find a suitable definition
             | for their own humanity, but it absolutely is not a
             | scientific problem.
             | 
             | If one wishes to gain concrete data, one needs to define
             | the problem before defining the answer. Any current
             | "problems" relating to consciousness are explainable
             | without needing a consciousness component. I have not
             | encountered a single problem which couldn't be explained by
             | pure cognitive computation.
             | 
             | Consciousness becomes a problem suitable for scientific
             | endeavor the moment someone can actually define the
             | problem.
        
               | nickelpro wrote:
               | Yes, just like the various interpretations of quantum
               | mechanics, it's not a scientific question because it
               | doesn't allow for any testable hypothesis. Science is
               | bound by the scientific method, questions of philosophy
               | exist outside these bounds. Is math invented or
               | discovered? Does the wave function collapse upon
               | observation, or do observers exist in superposition? Is
               | my "red" the same as your "red"?
               | 
               | Even if we could have exact answers to these questions,
                | they wouldn't allow us to make predictions, and so they
               | are not science. I think it's reasonable to say that
               | means they aren't "useful", we can't go to space or build
               | skyscrapers with these answers, but I think it equally
               | makes them the more interesting questions. Our usual
                | tools of rational inquiry fail to penetrate these sorts
               | of questions.
        
               | mach1ne wrote:
               | Actually most of the problems you described are relevant
               | and do exist in the objective world. The problem of the
               | problem of consciousness is that there is no aspect in
               | the definition of consciousness which couldn't be
               | explained by objective mechanisms.
               | 
               | Currently, asking whether there is a consciousness is
               | like asking if there is glaxtimbo. What is glaxtimbo, you
               | may ask? I can't explain it, but I feel like it is there.
               | That's the extent to which the definition of
               | consciousness in its unexplainable attributes currently
               | reaches.
        
           | unyttigfjelltol wrote:
            | My question is simply "_why_ am I me?" Why and how is my
           | personal point of view fixed to this particular lump of meat
           | and _I_ never wake up one day as _you_?
        
             | eli_gottlieb wrote:
             | Because if you woke up as me, you'd be me.
             | 
             | Let's assume that consciousness works in roughly the way
             | implied by your question: it exists, but it exists
             | _entirely_ separately from both the body and from mental
             | contents.
             | 
             | Now, let's further assume that the scenario you're
             | proposing _actually happened_. Last night, in our sleep,
              | you and I switched souls. Yesterday's me is actually you
             | now, and yesterday's you is actually me now.
             | 
             | How can either of us tell the difference? Clearly the
             | contents of our consciousness - our memories, knowledge,
             | skills, perceptions, emotions, etc. - are attached to the
             | body, particularly the brain. This is just the assumption
             | we started with. What, then, makes you _you_ and me _me_?
              | Clearly our souls aren't doing a very good job at telling
             | the difference!
        
           | nickelpro wrote:
            | Agreed, I somewhat address this in my follow-on comment. In
            | my mind rejecting the hard problem is the same as accepting
            | the "consciousness is an illusion" answer to the problem.
            | But I can see the point of view that treats the entire
            | conversation as a category error.
        
           | Al-Khwarizmi wrote:
           | I find the classical argument with color qualia very
           | convincing (in fact, I thought about it before even knowing
           | what qualia were...). Nothing you can find out about
           | wavelengths, cones, rods, neurons and so on will tell you why
           | you see red the specific way you do, and whether my
           | perception of red corresponds to yours or my "red" is
           | actually your "green". So there is a gap there, something
           | that doesn't follow from the mechanical state of the system.
           | 
           | Of course, I won't claim to be able to define that gap in a
           | precise way (I'm not even a philosopher), but I think it's
            | clear that it exists, and in my view the burden of proof is
           | on those that claim the problem doesn't exist because no one
           | can come up with a believable methodology (even supposing
           | infinite time, infinitely precise measuring instruments,
           | etc.) that would tell us the answer to questions like why my
           | "red" feels like it does or whether my experience of red is
           | different from yours.
        
             | the8472 wrote:
             | > I find the classical argument with color qualia very
             | convincing (in fact, I thought about it before even knowing
             | what qualia were...). Nothing you can find out about
             | wavelengths, cones, rods, neurons and so on will tell you
             | why you see red the specific way you do, and whether my
             | perception of red corresponds to yours or my "red" is
             | actually your "green".
             | 
             | Red being green makes no sense. Let's simplify a bit and
             | say you have a neuron for hearing the word "red", one for
             | the visual stimuli "red" and then a sensory fusion/abstract
             | concept of red. Several layers up you'll also have one for
             | a "stop sign" (which also takes inputs like octagon shapes
             | and some white letters). There are many other, more distant
             | associations of course, we have many neurons after all.
             | 
             | The spoken words, physical wavelengths and stop sign design
             | are more or less physically or socially fixed. If you
             | magically swapped some red and green neurons then those
             | physical stimuli wouldn't change. The brain would be forced
             | to rewire all the associations until it arrived at the
             | original connections again. Or it would suffer from
             | observable mispredictions such as being more likely to
             | respond to a green rather than a red stop sign. Why would
             | there be a free-floating "qualia green" neuron connected to
             | "visual red" that is yet somehow disconnected from
             | "physical green" and why wouldn't it update once it became
             | miswired?
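              | 
              | A toy sketch of that rewiring pressure (the wiring and
              | numbers here are made up for illustration, not a model
              | of real neurons):
              | 
              |   # physically fixed stimuli (wavelength in nm) and
              |   # socially fixed correct words
              |   STIMULI = {"stop_sign": 640, "grass": 530}
              |   CORRECT = {"stop_sign": "red", "grass": "green"}
              | 
              |   def band(nm):
              |       return "long" if nm > 580 else "medium"
              | 
              |   # internal wiring: band -> arbitrary code -> word
              |   internal = {"long": "A", "medium": "B"}
              |   naming = {"A": "red", "B": "green"}
              | 
              |   def say(nm):
              |       return naming[internal[band(nm)]]
              | 
              |   # magically swap the red and green "neurons"
              |   internal = {"long": "B", "medium": "A"}
              | 
              |   # the mispredictions are observable behavior...
              |   errors = [o for o, nm in STIMULI.items()
              |             if say(nm) != CORRECT[o]]
              |   print(errors)  # ['stop_sign', 'grass']
              | 
              |   # ...so feedback rewires the associations until
              |   # external behavior matches the world again
              |   for o, nm in STIMULI.items():
              |       naming[internal[band(nm)]] = CORRECT[o]
              |   assert all(say(nm) == CORRECT[o]
              |              for o, nm in STIMULI.items())
              | 
              | Either the swap shows up as observable errors, or
              | learning drives the wiring back to something
              | behaviorally identical to the original.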
        
               | Al-Khwarizmi wrote:
               | Suppose that I see red the way you see green. Then, when
               | I see a stop sign or a fire extinguisher, I'm seeing the
               | color that you would call "green". But of course, I have
               | been told since birth that fire extinguishers, stop
               | signs, etc. are red, so I would naturally call that color
               | red. And in the same way, if I see green the way you see
               | red, when I look at grass I would be perceiving the same
               | color that you see when you look at a stop sign. But I
               | would call it green, because everyone knows grass is
               | green, what else would I call it?
               | 
               | There would be no disagreement between us about what
               | objects are red, no way to tell that we see them
               | differently, and no real-world consequences that would
                | force us to rewire any neurons, because our ways of
               | perceiving the world would both be coherent and equally
               | valid.
        
               | civilized wrote:
               | > Suppose that I see red the way you see green. Then,
               | when I see a stop sign or a fire extinguisher, I'm seeing
               | the color that you would call "green".
               | 
               | Are you sure it is possible to pluck the color I'm seeing
               | out of my head, and the color you're seeing out of your
               | head, and compare them side-by-side, like we could
               | compare two different colors of paint on a piece of
               | paper?
               | 
               | I'm not so sure about that. It seems to depend a lot on
               | what phenomenon "how you see a color" actually refers to.
               | 
               | If "how you see a color" refers to the neural patterns
               | excited by light of the relevant wavelengths, and you and
               | I have the same neural patterns, then we see the color
               | the same way. There is no metaphysical layer on top that
               | can make one of those experiences green and the other
               | red.
        
               | mopierotti wrote:
                | By that reasoning, if qualia have no discernible
                | impact on
               | any part of the system, in what sense do they exist?
               | 
               | One can take this thought further. Red vs green light
               | have real physical properties that affect one's
               | subjective experience of them. For example different
               | colors' physical properties change the way that they are
               | perceived in low light conditions, the way they are
               | perceived when placed next to other colors, etc. So at
               | the end of the day even if hypothetically one person's
               | qualia for red is swapped for green, you end up with a
               | green that acts an awful lot like a red for all intents
               | and purposes in the brain.
               | 
               | Edit: So my personal hunch is that qualia can't be
               | meaningfully said to exist.
        
             | mabub24 wrote:
             | There is a very fruitful line in philosophy of mind and
             | language that addresses the colour qualia question: anti-
             | cartesianism and anti-skepticism.
             | 
              | Wittgenstein's arguments on seeing-as are the best example
             | of this line. He argues, quite convincingly I think, that a
             | lot of the kind of skepticism, of a cartesian sort,
             | implicit in the colour qualia question arises out of
             | conceptual confusions in language. The goal isn't to
             | _solve_ the debate, or directly refute it per se, but to
             | dissolve the entire conceptual confusion upon which it
             | relies by making clear how it is essentially a house-of-
             | cards.
             | 
             | For instance, we know that humans have a relatively uniform
             | perceptual landscape. Physiological realities in the eye
             | limit the variations in human visual perception to a
             | certain set wavelength band. With all the correct parts
             | working correctly, we see and we all see in relatively the
             | same way. We don't see our seeing. We don't perceive our
             | perceptual experience from the inside out. We see. The
             | variations in "what people see" are more _in_ language,
             | which is to say that people _report_ seeing different
              | things. As Wittgenstein would put it, we often "see-as"
             | just as much as we "see-that", which are ways of "seeing"
             | that intersect and mix in language.
             | 
             | Colour is interesting because in our language it is heavily
             | dependent on ostensive definitions, which is the linguistic
             | way of saying colour is reliant on pointing at examples.
             | Often quite literally pointing with your hand, or directing
             | someone to look at something in particular. So if there is
             | a red car in front of you and both people say the car is
             | "red" then they are seeing the same red because the colour
             | is not the result of empirical validation of an internal
             | private perceptual experience, but because the colour is in
             | language. They are reporting the same colour.
             | 
             | Viewed in this line, the colour qualia question essentially
             | dissolves away as irrelevant.
        
               | Al-Khwarizmi wrote:
               | I don't understand the point: we could both be seeing a
               | red car (which is objectively red, in the sense that it
               | has a wavelength of 625-740 nm) but we could be
               | perceiving it radically differently (for example, maybe
               | my perception of red is exactly like your perception of
               | blue) and language would never help us find out.
               | 
               | In my world, that car, stop signs, fire extinguishers or
               | chili peppers would look like the clear sky or sea water
               | in yours. But we would never find out, because I would
               | point at the car and say "red" (I have learned that all
               | objects that look that way from my point of view are
               | "red"), and you would agree that it's "red". There is no
               | way to find out that our perception is different.
               | 
               | But I guess my lack of understanding is because I haven't
               | read Wittgenstein, so I hope to do so when I get the
               | time. Thanks!
        
               | mabub24 wrote:
               | > There is no way to find out that our perception is
               | different.
               | 
               | Wittgenstein's point is more that there is no way in
               | which a distinction of that kind could even be
               | _intelligible_ in the way that you have presented it.
               | What is intelligible is our public behavior, namely our
               | language use, so to find out if what you perceive is
                | different you can ask someone if you're looking at the
               | same thing, if it has the same characteristics, etc.
               | 
               | He would point to the fact that we don't have radical
               | uncertainty in our day to day life and perception. The
               | philosophical question is a question of sense and
               | nonsense around the concept rather than a metaphysical
               | question about an "inside" vs. an "outside".
               | 
               | I would really recommend reading Wittgenstein's
               | _Philosophical Investigations_ if you are interested in
                | these kinds of questions, it's incredibly influential
                | _in philosophy_ precisely because of the way it
                | approaches "age-old" problems in ways that lead to
               | clarity and very interesting results.
        
               | Dudeman112 wrote:
               | >the colour qualia question essentially dissolves away as
               | irrelevant
               | 
               | And yet, I still wanna know.
               | 
                | The question is IMO as valid as "if I poke you with
                | this pen, do you feel it differently enough from what
                | I would feel if I poked myself?"
                | 
                | If we go through the exercise of breaking down the
                | physiology, neurology and linguistics, I still wanna
                | know if they feel it any different, and how different.
        
               | mabub24 wrote:
                | > The question is IMO as valid as "if I poke you with
                | > this pen, do you feel it differently enough from
                | > what I would feel if I poked myself?"
               | 
               | Funnily enough, Wittgenstein also famously looked at pain
               | as well [0] in the context of his famous private language
               | argument, and he comes to the same conclusions as he does
               | about perception and language.
               | 
               | It should be noted that Wittgenstein's central point is
               | about the _intelligibility_ of color, or perception, or
               | pain, or a private language. He is not denying the
                | existence of pain, or of what could be considered
                | "first person priority", _as a sensation_. But he's
                | commenting on the logical necessity of a public
                | intelligibility for it. For him that intelligibility
                | comes about through behavior or public criteria, that
                | is, what we might
               | call pain-behavior wrapped up in a public form of life.
               | 
               | > I still wanna know if they feel it any different and
               | how different.
               | 
               | Wittgenstein's whole point, really, is that you can get
               | an answer by asking them because, after all, you're
               | speaking the _same_ public language. That is also why
               | pain, as it is intelligible, _has a history_.
               | 
               | [0]: https://plato.stanford.edu/entries/private-language/
        
             | bondarchuk wrote:
             | > _Nothing you can find out about wavelengths, cones, rods,
             | neurons and so on will tell you why you see red the
             | specific way you do_
             | 
             | Or maybe it will, but we just don't have the neurological
             | and information-theoretical tools at this moment in time.
             | 
             | > _and whether my perception of red corresponds to yours or
             | my "red" is actually your "green"._
             | 
             | And maybe (at least this is my personal guess right now) it
             | will show that this question is incoherent.
        
             | civilized wrote:
             | > Nothing you can find out about wavelengths, cones, rods,
             | neurons and so on will tell you why you see red the
             | specific way you do, and whether my perception of red
             | corresponds to yours or my "red" is actually your "green".
             | 
             | This is begging the question. You're _assuming_ that color
             | qualia exist in the way that objects do. That  "my red" and
             | "your green" are not just properties of personal
             | experience, but refer to _objects_ that could be
             | identified, drawn out, moved around and compared like the
             | ordinary objects in our world.
             | 
             | The basic mistake of qualia theory is to assume that qualia
             | share any of the properties of physical objects or even
             | abstract concepts. That qualia are entities distinct and
             | stable enough to be placed side by side, compared and
             | contrasted, the way you could compare rocks or different
             | software architectures.
             | 
             | I suspect that this makes about as much sense as asserting
             | that square circles exist. You can say it, you can even
             | kinda-sorta imagine it, but you can do that with a lot of
             | things that fall apart on closer inspection.
             | 
             | For all we know, p-zombies are square circles. We think
             | that they're a coherent concept because we can kinda-sorta
              | imagine them, but we're nowhere near _knowing_ that
              | they're a coherent concept.
             | 
             | There is such a thing as trusting feelings and intuitions
             | and imaginations too much, especially in the incredibly
             | abstract context of somehow reifying your experiences and
             | attempting to understand what they "are".
             | 
             | There are a lot of things that people used to feel strongly
             | about, like "heavier objects fall faster". Many people
             | still didn't believe Galileo was right when he proved it
             | false with his experiments. Even when he did the experiment
             | right before their eyes. The feelings and intuitions were
             | too strong.
             | 
             | Such is the case with qualia, I think. I don't know if
             | we'll ever be able to prove the intuitions wrong and
             | illuminate the true nature of consciousness, like Galileo,
             | but that doesn't mean our prescientific intuitions are
             | correct by default. P-zombies and qualia are embellishments
             | of our prescientific intuitions, not rigorous or scientific
             | versions of them.
        
             | FactolSarin wrote:
             | I don't see why two people experiencing red differently
             | causes a problem for consciousness. Can't we both be
             | conscious even with very different qualia?
        
               | Al-Khwarizmi wrote:
                | Of course we can all be conscious but with different
               | qualia. But the thing is that if consciousness was merely
               | a physical phenomenon, we should be able to determine
               | qualia purely by looking at the physical system, just as
               | we determine any other property. However, we could be in
               | a situation where we have understood our behaviors down
               | to particle level, and I would still have no idea how you
               | perceive colors.
        
               | lostmsu wrote:
               | > However, we could be in a situation where we have
               | understood our behaviors down to particle level, and I
               | would still have no idea how you perceive colors.
               | 
               | I do not see how this is possible. If you can perfectly
               | simulate me you can perfectly simulate any response I
               | will give to any question about how I experience red.
                | Moreover you will have a better idea than I do because
                | you
               | could also simulate how my experiences of red will change
               | with time as I gain more experience from seeing red or
               | anything else really.
        
         | pitspotter2 wrote:
         | Check out Chapter 11 of _The Hidden Spring_ by Mark Solms.
         | 
         | In it he identifies subjective experience, for example
         | perceiving the colour red, as an observational _perspective._
         | 
         | He reminds us that we can also experience vision objectively,
         | with the right lab equipment, e.g. by listening to spike trains
         | travelling from the retina down the optic nerve.
         | 
         | Say we do the latter. We can then legitimately ask what the
         | same process looks like from the point of view of the
          | system/subject/patient. Answer: vivid red.
         | 
         | So spike trains and redness are differing valid perspectives on
         | the same underlying physical process, namely vision. One
         | doesn't arise from the other; they are both products of vision.
        
           | ganzuul wrote:
           | Generally it is understood that qualia are not quantifiable.
        
             | Robotbeat wrote:
             | Isn't that begging the question?
        
               | ganzuul wrote:
               | I don't think so... I thought GP was explaining how they
               | would be quantifying qualia, and pointed out that this is
               | kind of against the definition of the word.
               | 
               | Maybe I missed something.
        
         | joeberon wrote:
         | So far the only answer I've seen people give to the question is
         | "what problem? There's no problem". Basically, there are many
          | people who don't (or have convinced themselves that they
         | don't) perceive an incongruity between their personal
         | subjective experience and the idea that consciousness is just
          | an emergent phenomenon of a physical system. I haven't found a
         | good way to explain it, I just know that there's something
         | wrong or missing.
         | 
          | So basically it seems there are three camps:
         | 
         | Those that feel the incongruity and don't know a solution.
         | 
         | Those that feel the incongruity and have found their own
         | solution (which is usually what people will call "religion").
         | 
         | Those that do not feel the incongruity.
         | 
         | I don't personally see a way to justify it if someone doesn't
         | feel the problem in their bones.
        
           | nickelpro wrote:
           | I don't understand how someone can not understand the
           | problem.
           | 
           | You and I have conscious experience, we have a meaningful
           | sensation of qualia. A billet of 4140 steel does not, or so
           | we assume. Both are governed by physical laws of the
           | universe, so there must be some distinction beyond these
            | physical laws that differentiates them.
           | 
           | In this framing, there are typically four camps, three of
           | them line up with your camps:
           | 
           | 1) I am not like a block of steel, but I don't know why.
           | 
           | 2) I am not like a block of steel, but I know why
           | ("religion")
           | 
           | 3) I am like a block of steel, neither are conscious (no
           | incongruity/consciousness is an illusion)
           | 
           | 4) I am like a block of steel, both are conscious ("it from
           | bit"/Chalmers's universal conciousness)
           | 
           | Of course this implies the existence of a 5th and 6th camp
           | that believes the block of steel is conscious, but we are
           | not. I find these camps throw the best parties.
        
             | munificent wrote:
             | I think a variation of the Sorites paradox [1] should be
             | sufficient to argue that 4) is closest and that there is a
             | continuum of consciousness:
             | 
             | You and I are (I presume) conscious and have a meaningful
             | sensation of qualia. If you were to take a high powered
             | laser and destroy exactly one neuron, we still would be.
             | Given how many people survive and do relatively fine after
             | traumatic brain injuries, you can probably extend that to
             | some number of fried neurons while still preserving
             | consciousness.
             | 
             | If the laser destroyed _all_ of our neurons and left our
             | brains a smoking lump of cinder, we would not be conscious.
             | If the laser destroyed all _but one_ neuron, we probably
              | still would not be conscious. That's probably equally true
             | of five or ten neurons.
             | 
             | Now, it _may_ be that if you extend those two scenarios
             | then at some number of neurons, the two will meet and
             | frying _just one more_ neuron completely flips your brain
             | from  "as conscious as a fully functional human" to "as
             | inert as a block of steel". But the existence of that magic
             | number seems _highly_ unlikely to me. It would require some
             | explanation for what is special about that exact quantity
             | of neurons that one more or less _completely_ lacks.
             | 
             | It seems then that the most likely reality is that
             | consciousness is a continuum where cognitive machines and
             | living beings can have relatively more or less of it and
             | any threshold that we define to mean "conscious" is mostly
             | as arbitrary as the names of colors.
             | 
             | [1]: https://en.wikipedia.org/wiki/Sorites_paradox
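              | 
              | A toy sketch of the argument's structure (the counts
              | are arbitrary placeholders, not neuroscience):
              | 
              |   NEURONS = 86_000_000_000  # rough human count
              | 
              |   def conscious_step(n, magic=43_000_000_000):
              |       # a binary model must defend this exact cutoff:
              |       # why does n = magic differ from n = magic - 1?
              |       return n >= magic
              | 
              |   def conscious_graded(n):
              |       # a continuum has no special point to defend
              |       return n / NEURONS
              | 
              |   for n in (NEURONS, 43_000_000_000,
              |             42_999_999_999, 10):
              |       print(n, conscious_step(n), conscious_graded(n))
              | 
              | The step model has to privilege one particular neuron;
              | the graded model just shades smoothly from a fully
              | functional human down to a block of steel.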
        
             | [deleted]
        
             | joeberon wrote:
             | Well we do know emergent phenomena that occur as
             | abstractions. Probably the most obvious example is a
             | computer program. At a basic level, the computer really
             | doesn't look like it's doing useful work, it just processes
             | lists of rules and stuff, but actually the emergent
             | behaviour is that I am browsing Hacker News. What is the
             | difference between a computer and a rock? Well there isn't
             | actually a fundamental difference, other than that the
             | computer is physically arranged such that it is doing some
             | complicated calculation resulting in this rich UI
             | experience.
             | 
             | I think it's the same with the brain. Why is there
             | consciousness in this brain and not in a rock? Well because
             | the brain is set up to perform a complicated "calculation"
             | and the rock is not. Just like the program's behaviour only
             | exists as an emergent property of the computer's physical
             | state, so does consciousness arise as an emergent property
             | from the brain's physical state.
             | 
              | EDIT: to clarify, this is why it is justified to say there
             | is not a problem, but I do not personally find it
             | satisfying
        
               | daseiner1 wrote:
               | But the question is when and from where does
               | "experiencing things" (qualia) arise. When does
               | calculation become sensation?
        
               | joeberon wrote:
               | Those same people might say that it is just the
               | experience of "being the computer" rather than seeing it
               | from afar. But I don't know, because personally it isn't
               | an argument I find convincing.
               | 
                | As an aside, I think you have to be a bit arrogant or
                | closed-minded to assert that they actually can see the
                | problem but deny it. I _really_ don't think my wife is
                | lying or being nefarious when she says she doesn't see
                | a problem.
               | I trust her and others when they say they don't see a
               | problem.
        
               | GonzaloRizzo wrote:
               | So if there's such a thing as "being the computer" could
               | we say that a computer experiences qualia and it is
               | somewhat conscious?
        
               | GlennS wrote:
               | Obviously: in the insect that wriggles its legs in pain.
               | 
               | Earlier, in the cell that avoids the chemical.
        
               | nickelpro wrote:
               | Exactly, you have a handwave. A vague idea that when a
               | system becomes "complex" enough it suddenly becomes
               | conscious. There's no mechanism, no rigor, no cause and
               | effect. With the browser you can chase down every bit
               | through the hardware, down to the quantum tunneling
               | effect dictating what's happening in the silicon and come
               | up with a concrete explanation based in physical laws.
               | 
               | With the browser example, at no level of abstraction does
               | there exist any ambiguity. "Emergent" in that framing
               | simply means we can explain abstracted behaviors in terms
               | of other abstractions. But if you wanted you could drill
               | down to the particle physics and have a complete,
               | rational explanation of the system.
               | 
                | Not so with consciousness, and thus the "hard problem".
        
               | joeberon wrote:
               | I don't have it, because it's not my argument, it's their
               | argument, and it's clearly making you emotional so I
               | think it's better to stop it here.
        
               | nickelpro wrote:
               | Apologies if my language is emotional, I like this debate
                | a lot and get precious few opportunities to discuss it
                | with
               | people who are familiar with all the background. I don't
               | mean to invalidate anyone, I think all camps are equally
               | valid regardless of my personal feelings.
        
               | CuriouslyC wrote:
                | The problem in your statement (and many others') is
                | that the notion of consciousness has self-awareness
                | baked into it. It's odd to think that the ability to
                | experience
               | qualia magically occurs for sufficiently complex systems,
               | but it's not odd to think that self-awareness just arises
               | once a system that can experience qualia reaches a
               | sufficient complexity level. Unfortunately, people latch
               | on to the reasonableness of the latter, and ignore the
               | ridiculousness of the former.
        
               | [deleted]
        
               | adriand wrote:
               | One obvious difference between computers and humans is
               | that only one of them is an animal. So far as we can
               | tell, consciousness only arises in animals. This sounds
               | like a truism, but I wonder if using phrases like
               | "complex system" masks it: both humans and computers are
               | complex systems, and only one of them is conscious, and
               | therefore what could possibly be the difference?
               | 
               | Well, the list of differences between the two is
               | enormous. As the article discusses, some of those
               | differences include embodiment, emotions and moods. So
               | perhaps the "hard problem" is not a philosophical one but
               | a question of detailed knowledge: if we had the ability
               | to drill down to the underlying particle physics of
               | bodies, emotions and moods, plus all the other things
               | that comprise our animal natures, then perhaps the
               | problem would be solved.
               | 
               | To put this another way: perhaps the issue with many
                | approaches to understanding consciousness is that they
                | assume our animal nature doesn't matter and that
               | conscious minds can exist abstractly, or even be modelled
               | in silicon. But maybe they can't be, and so for us to
               | truly understand consciousness we will need to truly
               | understand the body, which we are very far away from.
               | 
               | To sum this up: consciousness, in this view, is not an
               | emergent property of complex systems. Rather it is a
               | fundamental property of animals.
        
               | n4r9 wrote:
               | If we take a human and replace one of their neurons with
               | a silicon chip that exactly replicates its electrical
               | activity, is the human any less "conscious"? Presumably
               | their conscious experience is the same (if not why not?)
               | If the human is still conscious, what if we continue one
               | by one to replace all the neurons with silicon chips in
               | the same way? Where does this thought experiment fail?
        
               | joeberon wrote:
               | Personally I think most people who argue consciousness is
               | an emergent property of complicated physical systems
               | would believe that the machine would still be conscious
               | after replacing every neuron with a chip
        
               | slfnflctd wrote:
               | Given a sufficiently advanced chip which included all
               | chemical interactions (or effective simulations), why
               | wouldn't they be?
               | 
               | If after the transition is complete we determine/decide
               | they _aren't_ conscious, then we have to argue about at
               | what point the 'hybrid' brain ceases to manifest
               | consciousness, and why. Maybe the organic parts would
               | start to change their prior functionality in response to
               | the synthesized parts... but that would suggest we just
               | didn't have a complete enough model of the neuron and
               | we're back to square one. Once that model is complete,
               | there should by definition be no problem, unless you want
               | to argue it can never be complete, for which you then
               | need evidence to convince everyone.
               | 
               | I say it's worth continued study.
        
               | lovecg wrote:
               | On the topic of simulating neurons, we might be able to
               | simulate the _static_ structure but neurons also move
               | around and make new connections. This video opened my
               | eyes to how far we really are from a realistic
               | simulation: https://youtu.be/CJ3d1FgbmFg
        
               | adriand wrote:
               | My house is made of bricks. I can confidently remove one
               | of the bricks and replace it with a Kleenex box. The
               | house is still standing, it's perfectly liveable and no
               | one can tell the difference. Now what happens when I
               | replace all the bricks with Kleenex boxes?
        
               | n4r9 wrote:
               | At some point, the structure will collapse or blow over.
               | When that happens depends on the order in which you work,
               | and the weather conditions, but the outcome is at least
               | well-defined and conceptually graspable.
        
               | LargoLasskhyfv wrote:
               | You have a honey-comb structure. Maybe put exactly
               | fitting X-shaped stabilizers into them, before. For
               | safety, and such :)
        
               | eli_gottlieb wrote:
               | >To sum this up: consciousness, in this view, is not an
               | emergent property of complex systems. Rather it is a
               | fundamental property of animals.
               | 
               | Or at the very least, it's a fundamental property of the
               | particular kinds of physical systems that animals happen
               | to be, and without understanding what _that_ is, we have no
               | hope of replicating consciousness in any non-animal.
               | 
               | This makes sense to me, because consciousness seems to be
               | a question related to brains, which are part of animals,
               | so a _really great_ way to confuse yourself would be to
               | abstract the entire question of consciousness away from
               | the life sciences and then wonder why you're so
               | confused.
        
               | adriand wrote:
               | > a really great way to confuse yourself would be to
               | abstract the entire question of consciousness away from
               | the life sciences and then wonder why you're so confused
               | 
               | Yes, exactly. Which is a direct challenge to
               | transhumanism because it means that maybe it actually
               | _won't_ be possible to achieve immortality via brain
               | upload, or build general AI (without first solving the
               | life science "problem", i.e. building an artificial
               | body), and so on.
        
               | eli_gottlieb wrote:
               | Sure, but was anyone really suggesting that we somehow
               | create disembodied people? That seems to be the other
               | side of the dialectical coin from immaterial souls, which
               | are precisely what most Hard Problem disbelievers deny.
        
               | CuriouslyC wrote:
               | You think your computer isn't conscious? How do you know?
               | Just because it's not demonstrating free will? How many
               | layers of error correction are built into computers to
               | stifle the "randomness" of electronic equipment? Do you
               | think you could still exercise free will locked in a
               | padded room and strapped to a bed in a straitjacket?
        
               | unyttigfjelltol wrote:
               | This comment is amazing. Thank you.
        
               | Dudeman112 wrote:
               | I don't think free will matters at all for consciousness.
               | 
               | How is a world where your will is decided by quarks
               | randomly zagging instead of zigging any more open to free
               | will than a deterministic world?
        
               | adriand wrote:
               | There's a difference between knowing, and knowing how you
               | know. I know my computer is not conscious even if I'm not
               | entirely certain how I know. I chalk this up to the fact
               | that, as a conscious animal with millions of years of
               | evolutionary history, it is both beneficial and entirely
               | natural for me to be able to recognize conscious beings
               | when I encounter them.
               | 
               | (It also helps that I know how to program computers and
               | don't view that activity as bringing consciousness into the
               | world. I have children, however, so it turns out that I
               | am -- with some help -- also able to bring consciousness
               | into the world. The former activity I am both able to do
               | and fully understand, while the latter I am clearly able
               | to do, but don't at all understand.)
               | 
               | I recognize this point of view isn't popular among a lot
               | of technical folks. I get it, I was there once too, but
               | I've come around to a new appreciation for our animal
               | nature. This question -- how do you know what is
               | conscious? -- is very similar to questions like, "how do
               | you know that that dog is afraid?" The short answer to
               | that question is, "because we are kin". Which is an
               | explanation I find much more rich and satisfying than
               | reductionism.
        
               | rhn_mk1 wrote:
               | That breaks down for anything that is not your kin
               | though. "How do you know the computer is not conscious?"
               | does not allow the answer "Because we are not kin". The
               | best you can say is "Because we are not kin, I don't know".
               | The dog example is illustrative of only half of the
               | question.
        
               | adriand wrote:
               | I take your point, but I disagree: your statement assumes
               | that there are conscious beings who are not my kin (and
               | to be clear, by "kin" I broadly mean animals: living
               | entities that are embodied and show characteristics like
               | intention, mood and emotion). But there isn't any
               | evidence that these exist and there's little if any
               | evidence that they are even possible.
               | 
               | At best, you can put forward a thought experiment that
               | starts with, "Suppose there are beings which are not
               | animals but which are nonetheless conscious." I'm
               | questioning the premise of that thought experiment,
               | however, because so far as we can tell, there is no such
               | thing.
               | 
               | In other words, _computers are not conscious because they
               | are not animals_.
               | 
               | This may seem like circular reasoning, but not if you
               | take the view that consciousness is a fundamental
               | property of animals as opposed to an emergent property of
               | complex systems.
        
               | rhn_mk1 wrote:
               | No, there's no such assumption in my statement. That
               | statement is merely open about what we don't know.
               | Conversely, there is an assumption in your reasoning that
               | non-animals are never conscious, which is, well, an
               | assumption, and not derived from facts.
               | 
               | This creates a blind spot where an area of non-knowledge
               | is assumed to be known about, and rejected out of hand. I
               | have to point out that taking the view that consciousness
               | is a property of animals does not exclude non-animals
               | having it.
               | 
               | Even then, there will be a tussle over the exact line of
               | what qualifies as an animal for the purpose of
               | consciousness, resulting in the need to fall back on the
               | same generalizations (e.g. intention, mood, emotion) that
               | would allow yet-undiscovered but plausible forms of life
               | to qualify as conscious.
        
               | [deleted]
        
               | k__ wrote:
               | But maybe a computer isn't a complex system.
               | 
               | Most, if not all, animals are much more complex than
               | computers.
        
             | rmah wrote:
             | Consider the weather. Weather exists, I think everyone can
             | accept this. But what is weather? Is it simply an aggregate
             | emergent property of many interacting atmospheric and
             | geological systems? Or is it something more? How can
             | emergent properties of even a complex system create self-
             | organizing and self-perpetuating semi-stable phenomena
             | like tornadoes? What controls the weather?
             | 
             | Consider consciousness. Everyone can accept that it exists.
             | But what is consciousness? Is it really just an emergent
             | property of brain-body activity? Or is it something more?
             | How can an emergent property of a complex system create
             | semi-stable behaviors and self-awareness? What controls
             | consciousness?
             | 
             | IMO, consciousness is just an emergent property of brain-
             | body activity (mostly brain). More recent studies of animal
             | behavior have shown that consciousness is not a binary --
             | you have it or you don't -- thing. It is a gradient.
             | Different animals possess different levels of cognitive
             | ability, consciousness, self-awareness, social-awareness,
             | etc. In short, there is (again IMO) nothing special about
             | human consciousness: I submit that any sufficiently complex
             | trainable associative-memory neural network (like a
             | biological brain) will possess some level of consciousness
             | as an emergent property. Just like any sufficiently large
             | atmospheric + geological environment will possess weather.
             | 
             | Thus, there is no problem to understand (or not
             | understand).
             | 
             | As an aside, are humans even "fully self-aware"? How would
             | we know if we're not? What does that even really mean?
        
             | danielheath wrote:
             | In nature, there are many instances of systems which change
             | drastically as they pass a tipping point.
             | 
             | I suspect the human (and many animal) brain has an
             | information density sufficient to pass such a tipping
             | point. That could even explain why observing quantum
             | phenomena changes the outcome (due to multiple extremely-
             | low-entropy states being unstable).
             | 
             | There could, of course, be many other explanations. My
             | camp is "I'm unlikely to figure out the answer, and I'm
             | unclear on how it would help me if I did, so I'll continue
             | to disregard the question".
        
             | ivanbakel wrote:
             | The trouble with the "hard problem" of consciousness is
             | that it's predicated on the possibility of the existence of
             | p-zombies. Even if you don't know how a complex physical
             | system gives rise to consciousness, you understand that a
             | block of steel is _not_ a complex physical system - so the
             | fact that it isn't conscious isn't that contentious.
             | 
             | If p-zombies could exist as complex systems without
             | consciousness, then there would be a problem. But it's not
             | so obvious that their existence is a possibility - or even,
             | as Chalmers puts it, that they are "conceivable". To me,
             | the leap of faith involved in the "hard problem" is really
             | this first one.
        
               | joeberon wrote:
               | Yeah I think a more interesting problem is asking what
               | the difference is between an electronic robot that
               | behaves in every way like a human, and a human. Or further,
               | an electronic robot where every single neuron in the
               | human was completely simulated. I thought about this a
               | lot when I was a teenager lol, surely in this case such a
               | robot must have a subjective experience just as we do,
               | but why?
        
               | nickelpro wrote:
               | Really? A sufficiently advanced computer is a perfect
               | P-zombie to me. This is the Chinese Room problem
               | restated. A computer is capable of behaving as a
               | conscious actor without any concept of knowledge or
               | qualia as we would recognize it. But again this gets into
               | a faith-based discussion. To me a sufficiently complex
               | computer doesn't suddenly cross a "consciousness
               | threshold" of complexity, but if to you it does there's
               | no problem, which we could call "camp 3" in this
               | discussion.
        
               | joeberon wrote:
               | A vast majority of people who believe that consciousness
               | is an emergent property would also say that a
               | sufficiently advanced computer is exactly as conscious as
               | you or I. At least when I believed that human
               | consciousness was an emergent property I believed that an
               | advanced robot would be totally conscious.
        
               | lovecg wrote:
               | In the original Chinese Room thought experiment it's a
               | human inside performing all the calculations by hand. To
               | be consistent you would also believe that this room with
               | a human in it has its own consciousness as a whole,
               | correct? This just seems too weird to me.
        
               | lostmsu wrote:
               | > This just seems too weird to me.
               | 
               | I don't think that something feeling weird justifies
               | violation of Occam's Razor of the same scale as making up
               | souls, gods, etc.
        
               | lovecg wrote:
               | Absolutely, no argument there. It seems the answer will
               | be weird whatever it is.
        
               | comex wrote:
               | For me this is analogous to, say, a PC running a NES
               | emulator.
               | 
               | You can talk about the emulated CPU. Its state: its 8-bit
               | register values, the contents of its 2KB of RAM, what
               | 6502-architecture code it's currently running. Its
               | concept of time: how many clock cycles it spends doing
               | what, at an assumed fixed number of nanoseconds per clock
               | cycle. Its inputs and outputs: controller button presses,
               | video out.
               | 
               | Or you can talk about the PC's CPU. Its 64-bit register
               | values, the contents of its gigabytes of RAM, what
               | x86-architecture code it's currently running. Its own
               | clock cycles, its own inputs and outputs.
               | 
               | Both of those can be considered to exist at the same
               | time.
               | 
               | But of course, the emulated CPU doesn't have an
               | _independent_ physical existence; it's just an
               | abstraction that exists within the PC. Its register
               | values and RAM contents are represented by some part of
               | the PC's register values and RAM contents, with some
               | arbitrary encoding depending on the emulator in use. Its
               | time flows whenever the PC feels like running the
               | emulation. The emulator might be set to 1x real time, or
               | 2x or 0.5x, but even that setting only applies on
               | average; individual clock cycles will always proceed at a
               | highly erratic rate. The emulated CPU's output might go
               | to a real screen or it might just be saved as a video
               | file. And so on.
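               | 
               | To make that concrete, here's a minimal sketch (in
               | Python, with made-up names -- nothing like a real 6502
               | core) of how "emulated CPU state" is just ordinary host
               | data plus update rules:
               | 
               |     # Toy sketch, not a real 6502: the "emulated CPU"
               |     # is nothing but host-process data plus rules.
               |     class Toy6502:
               |         def __init__(self):
               |             self.a = 0                  # 8-bit accumulator
               |             self.ram = bytearray(2048)  # 2KB emulated RAM
               | 
               |         def lda_imm(self, value):
               |             # "LDA #value": load the accumulator,
               |             # wrapping at 8 bits like the emulated chip
               |             self.a = value & 0xFF
               | 
               |         def sta(self, addr):
               |             # "STA addr": store accumulator to emulated RAM
               |             self.ram[addr % 2048] = self.a
               | 
               |     cpu = Toy6502()
               |     cpu.lda_imm(0x42)
               |     cpu.sta(0x0200)
               |     # cpu.a and cpu.ram live entirely inside the host's
               |     # memory, under an encoding we chose arbitrarily.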
               | 
               | But if the CPU isn't real:
               | 
               | (1) Does that mean it's a "p-zombie" that is only
               | pretending to run 6502 code, but isn't really?
               | 
               | (2) Does that mean you're not really playing Super Mario
               | Bros. if you play on an emulator?
               | 
               | My answer to 1 is: maybe, maybe not, but it makes no
               | difference. Because my answer to 2 is: no, you definitely
               | are playing the same game regardless of whether you're
               | using an emulator or not. The essence of what it means to
               | "play Super Mario Bros." is to interact with a system
               | that follows certain rules to map controller inputs to
               | video outputs. The rules are a mathematical abstraction,
               | not inherently connected to any physical object. So it
               | doesn't matter whether the rules are implemented by a
               | physical CPU or an emulator inside another CPU.
               | 
               | And I see consciousness as basically the same thing. A
               | conscious being, to me, is fundamentally a mathematical
               | abstraction consisting of some state, some inputs and
               | outputs, and rules for manipulating the state and
               | producing outputs from inputs, where the rules have to
               | meet certain standards to count as conscious. For
               | example, the rules should be able to perform general-
               | purpose logical reasoning; they should include some kind
               | of concept of self; etc. The exact boundaries could be
               | litigated, but at minimum the rules corresponding to the
               | operation of the human brain qualify.
               | 
               | And a physical brain is really just one possible physical
               | representation of that abstraction. A Chinese room would
               | work too. It would be impossibly slow, and the person
               | would need an astronomical number of filing cabinets to
               | store the necessary data, and they'd inevitably commit
               | calculation errors, but aside from that it would work.
               | 
               | So yes, the Chinese room, or the mathematical process it
               | represents, can have consciousness, while the person
               | within it has their own independent consciousness. Just
               | as a NES CPU can exist "inside" a PC CPU, an emulated
               | brain can exist "inside" a real brain (well, "inside"
               | except for the filing cabinets).
        
               | ivanbakel wrote:
               | >This is the Chinese Room problem restated. A computer is
               | capable of behaving as a conscious actor without any
               | concept of knowledge or qualia as we would recognize it.
               | 
               | This is begging the question of p-zombie existence.
               | Searle and Chalmers both think it's possible for
               | arbitrarily complex systems to exist without qualia. But
               | given that qualia are unmeasurable, there's no way to know
               | for sure.
               | 
               | That's why I consider the statement of the problem to be
               | faith-based. It is possible that consciousness is an
               | emergent property, and qualia are experienced by every
               | sufficiently-complex system. Declaring that p-zombies are
               | conceivable requires assuming that this is _not_ so.
        
               | nickelpro wrote:
               | I concur with this completely. Rejecting p-zombies as a
               | concept aligns you either with (3) or (4) of the toy
               | framing above, depending on if you're materialist
               | (consciousness emerges from physical phenomena/doesn't
               | exist) or non-materialist (consciousness emerges from
               | non-physical phenomena, and does so universally) about
               | the follow-on question.
        
               | Zigurd wrote:
               | The problem with the Chinese Room formulation is that
               | there are arguments against the near-term possibility of
               | autonomous vehicles that go "It's the same as AGI." In
               | other words, a "Chinese Room" driver is not good enough.
               | 
               | What the Chinese Room may really be telling us is that
               | machine consciousness might be as far from us as usable
               | ML was to 1970s AI researchers, or at least much farther
               | than we think. And, on top of that, if and when it does
               | appear, it won't look human. It won't even look like an
               | animal because it isn't one.
        
               | the8472 wrote:
               | > This is the Chinese Room problem restated.
               | 
               | Which is non-physical because the storage complexity of a
               | lookup table containing all possible answers to all
               | possible inputs grows exponentially. It wouldn't fit in
               | the universe.
               | 
               | https://www.scottaaronson.com/papers/philos.pdf
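               | 
               | To put rough numbers on it (a toy back-of-envelope, not
               | taken from the paper): for inputs of n bits there are
               | 2**n distinct inputs, so the table needs 2**n entries.
               | 
               |     # A lookup table over all possible n-bit inputs
               |     # needs 2**n entries -- exponential in input size.
               |     for n in (10, 100, 1000):
               |         print(n, 2**n)
               |     # n = 1000 (only ~125 bytes of input) already gives
               |     # ~1e301 entries, far beyond the ~1e80 atoms
               |     # estimated in the observable universe.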
        
             | eli_gottlieb wrote:
             | Blocks of steel don't have nervous systems of any kind, let
             | alone complex brains. Hell, they aren't even alive at all:
             | they lack the underlying physiology for a nervous system to
             | sense and control in the first place.
             | 
             | Seems like a bit of a hint for why we're conscious and
             | they're not.
        
               | GonzaloRizzo wrote:
               | Why are you so sure about consciousness emerging from the
               | brain?
        
               | eli_gottlieb wrote:
               | I didn't say "from the brain". I also didn't say
               | "emerging". I mentioned the nervous system as a whole,
               | and physiology (eg: internal bodily control systems) to
               | boot.
        
             | tim333 wrote:
             | I'd go with a variation on 2:
             | 
             | 2a) I am not like a block of steel, but I know why
             | ("evolution")
             | 
             | In the same way that a camera takes pictures while a block
             | of steel doesn't because someone designed it that way, we
             | are conscious because evolution designed us that way.
             | Conscious people survive better than those knocked
             | unconscious.
        
               | nickelpro wrote:
               | But then you are like a block of steel, you are
               | deterministically governed by the laws of physics. Any
               | block of matter arranged just so is conscious in that
               | framing, which differs from the religious framing.
        
               | PKop wrote:
               | >you are deterministically governed by the laws of
               | physics
               | 
               | Yes.
        
               | joeberon wrote:
               | > Any block of matter arranged just so is conscious in
               | that framing, which differs from the religious framing.
               | 
               | That is literally the secular materialist view of
               | consciousness, yes.
        
             | disambiguation wrote:
             | I think (4) gets more interesting if we replace "block of
             | steel" with "fetal tissue" at various stages of development
             | 
             | I personally lean towards (4) for this reason. If we say
             | "soul assignment" is not the answer, then we have to
             | explain how some unconscious cells eventually acquire
             | consciousness, and then go back to unconscious again. One
             | solution is that it's been there all along, and is
             | everywhere.
             | 
             | Perhaps the human brain (any brain?) is just a lens for
             | something quantum, or a radio for some signal? Or maybe
             | there are some immaterial forces that we can never observe
             | under a microscope.
        
             | callesgg wrote:
             | 7) I am a model of a primate created/executed by the actual
             | brain of the primate that the model is modeling. I.e., the
             | primate is not like a block of steel because it is modeling
             | its own existence, while the block of steel is not modeling
             | its own existence.
        
             | GlennS wrote:
             | I think the opposite problem - or rather the assumed
             | explanation - is harder.
             | 
             | Start with empiricism: consciousness appears when we add
             | other mental capacities, so probably it is built upon them.
             | 
             | The idea that consciousness (or abstraction) is a
             | fundamental building block (as espoused by neoplatonists)
             | is a much more unlikely explanation to me.
             | 
             | > I am not like a block of steel, but I don't know why.
             | 
             | Let's start from here. This is obviously correct, I think?
             | 
             | Why should I know why?
             | 
             | Most of my consciousness is built on top of unconscious
             | processes that I can't control and only receive input from.
        
               | CuriouslyC wrote:
               | That's a nice looking dead end, because your subjectivity
               | biases you. Instead, step outside your own consciousness.
               | 
               | Based on objective observation, you are dynamic and
               | reactive, steel is static and mostly unreactive. Beyond
               | that, you might as well be the same. You claim to
               | experience your existence, but what evidence do you have?
               | And what evidence do you have that a bar of steel doesn't
               | experience something of some sort?
        
               | GlennS wrote:
               | > That's a nice looking dead end, because your
               | subjectivity biases you. Instead, step outside your own
               | consciousness.
               | 
               | Don't understand, sorry.
               | 
               | > You claim to experience your existence, but what
               | evidence do you have?
               | 
               | Others also make that claim. We appear to each other to
               | be similar.
               | 
               | > And what evidence do you have that a bar of steel
               | doesn't experience something of some sort?
               | 
               | Magnetic scanners. Strong evidence in my opinion.
               | 
               | I think you shouldn't start with steel, you should start
               | with chemicals that produce nervous signals.
        
               | nickelpro wrote:
               | > Magnetic scanners. Strong evidence in my opinion.
               | 
               | > I think you shouldn't start with steel, you should
               | start with chemicals that produce nervous signals.
               | 
               | The idea being that none of this is direct evidence of
               | consciousness. These give you evidence for answers to
               | soft problems, a block of steel cannot see because it
               | does not have eyes to see. If you attached eyes it would
               | not see, because it would not have a visual cortex. If
               | you gave it a visual cortex it would not see because it
               | does not have... etc.
               | 
               | You can solve all of these soft problems, but they would
               | never tell you how the block of steel experiences the
               | color red, the "qualia" of vision. That's the hard
               | problem. All of the soft problems are "minor" by
               | comparison.
        
             | NoGravitas wrote:
             | I agree about the best parties. But what about the idea
             | that consciousness exists on a continuum -- that everything
             | that processes information has a scope of consciousness
             | that corresponds to the complexity of what it processes? A
             | block of steel doesn't really process any information, but
             | a thermostat does, and so does a paramecium. I'm happy to
             | grant qualia to a paramecium, even if I don't think those
             | qualia are very substantial or interesting in human terms.
             | I feel a little weirder about the thermostat, but, um,
             | sure, why not, I guess?
             | 
             | I guess I'm siding with group 4, but just saying that
             | quantity has a quality all its own. Or that universal
             | consciousness doesn't have to be spooky.
        
               | nickelpro wrote:
               | Asking "how is consciousness quantized?" definitely would
               | put you in (4) and is favored by Chalmers I believe. The
               | idea that consciousness can somehow be "aggregated" by
               | sufficiently complex or anti-entropic systems lends
               | itself to this view.
        
               | Hayarotle wrote:
               | I like that option. You don't have to find something that
               | would give a hard border between "conscious" and
               | "unconscious" entities, nor claim qualia doesn't exist,
               | nor claim a rock has a level of consciousness equivalent
               | to that of humans.
        
               | joeberon wrote:
               | This is personally my feeling of it too. I think the
               | boundaries between us are illusory, and that it is all
               | continuous and fuzzy. So a universal consciousness is no
               | issue for me, one that is neither singular nor separate.
        
               | suzzer99 wrote:
               | I agree with this. IMO any theory wrt consciousness
               | has to first and foremost consider that no hard line can
               | be drawn along the continuum from human beings to
               | elephants to ants to yeast cells.
        
             | mr_overalls wrote:
             | 7) A purely phenomenological viewpoint (such as the one
             | held by some schools of Mahayana Buddhism) would claim that
             | the only reality is one's own mind. The steel block, along
             | with all other aspects of perceived reality, is akin to a
             | magical illusion or dream.
        
           | robotresearcher wrote:
           | > I just know that there's something wrong or missing.
           | 
           | We know with certainty - since you tell us - that you have
           | that feeling. People have strong feelings about a lot of
           | things. People disagree about things of the most intense
           | importance to them, such as the existence of god. I won't
           | list any more examples to avoid arguing about them.
           | 
           | But since people have mutually exclusive very strong
           | intuitions/knowledge/feelings, it seems very clear to me that
           | the presence of the feeling entails absolutely nothing about
           | the further reality of the thing being believed.
           | 
           | A simple and complete explanation is that you have this
           | feeling, and the feeling is all there is to it. Maybe the
           | feeling has some functionality in a social ape. The feeling
           | entails nothing.
           | 
           | Susan Greenfield has a nice expression of a similar idea: the
           | only known functionality of consciousness is that when you
           | ask yourself 'am I conscious?' the answer seems to come back
           | 'yes'.
           | 
           | [edit: typos]
        
           | 53r3PB9769ba wrote:
           | I'm quite firmly in the "do not feel the incongruity" camp.
           | 
           | What I'm about to say might make no sense at all, but I seem
           | to remember (or at least think I do, it might be a false
           | memory arising from thinking too much about it) slowly
           | gaining consciousness in early childhood.
           | 
           | I have a birthmark on the back of my hand and some of my
           | earliest memories are triggered by noticing it and realizing
           | I'm a separate being from other people and can influence my
           | actions, stop being a mere observer of my body running on
           | "autopilot".
           | 
           | I have only passing knowledge of lucid dreaming, but from
           | what I've read I'd say becoming a lucid dreamer is not at all
           | dissimilar to developing consciousness in childhood. So maybe
           | that's a potential avenue for consciousness research.
        
             | mtberatwork wrote:
             | I think a lot of discussion around consciousness focuses too
             | much on the binary (e.g. either you have it or you don't),
             | but I like to think of consciousness as a spectrum, with
             | organisms falling into various points along that spectrum.
             | Your take fits nicely into that picture I think.
        
             | joeberon wrote:
             | Yeah that's a pretty common experience actually. There are
             | a lot of memes on tiktok and twitter joking about "gaining
             | consciousness" as a child. I think a lot of people remember
             | moments as young children where they first truly felt like
             | real separate beings.
        
       | ericmay wrote:
       | > Brains are not for rational thinking, linguistic communication,
       | or perceiving the world.
       | 
       | Isn't this false?
       | 
       | Brains are for whatever brains are for, and if brains are
       | thinking rationally (2+2 = 4), communicating linguistically (I'm
       | typing this using language), and perceiving the world using
       | senses, then they are for those things just as much or as little
       | as they are for or not for anything else. I suppose you can say
       | something along the lines of "brains are for continuing survival
       | and propagation of genes" as the author did, which is fine, but
       | it would make more sense to state that and try to prove it...
       | Because how do we know that thinking rationally _isn't_ what the
       | evolutionary drive is? How do we know that brains evolved for
       | survival? Survival for what? Gut bacteria? The earth (keep
       | reproducing to keep organic material churning)? How do we know
       | that the emergence of language and propagating that isn't what
       | brains are for?
       | 
       | I think it's hard to really say what brains are for or not for.
       | That's a deceptively difficult question, and we probably shouldn't
       | casually throw out "brains are not for X".
       | 
       | -edit-
       | 
       | I also wouldn't take it for granted that brains or anything is
       | _for_ anything.
       | 
       | I do think we are "philosophical zombies" as someone else put it,
       | but it's not a distinction that matters because whether we are or
       | are not doesn't change anything. It's like accepting that you
       | don't have free will - it doesn't change anything. You still go
       | to jail if you rob someone (the interesting question comes from
       | the morality of this actually IMO).
       | 
       | Consciousness is probably an evolutionary adaptation for
       | increased cooperation - to that end it's a shared delusion
       | (language allows us to become "more" conscious), but a useful
       | one.
       | 
       | I'd posit that certainly all mammals are conscious (if they're
       | not, then I'm not sure you can argue that anything is conscious
       | besides yourself which is fine if you hold that view) and likely
       | all animals are conscious, just a matter of degree of genetic
       | expression. Panpsychism is tempting but if we wanted to say that
       | rocks are conscious, then I think we could still set out
       | different "levels" of conscious expression and draw meaningful
       | distinctions - it's not a hill one has to die on.
       | 
       | Most of our work on consciousness that I've observed has
       | historical religious baggage (not insulting religion here) with
       | humans being at the center and "special" but if you strip that
       | away I think it's pretty easy to see that we're not. Many
       | scientists that I've seen or observed kind of have the God
       | problem - which is that they're wrestling with a question or
       | dilemma and supposing God is involved or exists and so it must be
       | dealt with. Similarly in the study of consciousness it seems that
       | we're supposing that humans are conscious to some special degree
       | and trying to fit science to explain that - which is a faulty
       | premise from the start.
       | 
       | /rant :)
        
       | mensetmanusman wrote:
       | " Experiences of being you, or of being me, emerge from the way
       | the brain predicts and controls the internal state of the body."
       | 
       | I don't know about you other beast machines, but I rarely ever
       | think of the internal states of my body (trying to adjust beating
       | heart, pupil adjustment, etc.). To think that consciousness is
       | required for these rare moments of internal state reflection
       | seems a stretch (considering most life doesn't require it).
        
       | esarbe wrote:
       | Consciousness is nothing more than the final arbiter of attention.
       | As such it must have a way to evaluate situations, including our
       | own internal state, and this is done by emotions. Emotions are our
       | default approach to interpreting a given situation.
       | 
       | I think the 'beast machines' analogy is a good one. We're
       | reproduction machines. We're not thinking machines. We do not
       | evaluate our environment using logical thinking and although some
       | may learn to use this tool (logical thinking) to justify paying
       | attention to something in particular, it's emotions that guide
       | our eyes and our attention.
       | 
       | Anything that has attention to guide has a consciousness. It
       | might not use words or other symbols to reinforce attention but
       | the basic mechanism, the cycle of emotion, evaluation and
       | attention stays the same.
        
       | state_less wrote:
       | We may have body consciousness at times, though consciousness
       | doesn't depend on body consciousness, nor is it required to be
       | embodied. When I dream I am often not conscious of breathing or my
       | body, for example.
       | 
       | I once had a strange dream that exemplifies this. I was dreaming
       | in Wisconsin of being on a subway train in NYC. A man in the
       | train crouches down near me. I follow him down. He asks, "What is
       | your name?" I tell him, "it's Seth." "That's an old name", he
       | responds.
       | 
       | I am thinking, who is this guy?
       | 
       | Then as if to show me something, he pantomimes a mallet and
       | strikes an imaginary bell, except I hear the bell ring out as
       | clear as if I were awake. A disembodied voice says something in
       | German, which I don't understand, though I wish I could remember
       | it now. Then I wake.
       | 
       | See I was conscious of the bell, though none was struck. I didn't
       | hear it in my ear, I heard it in the mind. So I don't think
       | vibration is sound and EM waves aren't color, though they are the
       | most common precursors to this sort of consciousness.
       | 
       | We can go to places in our minds where our bodies can not follow.
       | I think it's a hopeful vision of what might be possible some day.
       | That we might find a way to transform ourselves, moving from a
       | world inhabited by fears and self-loathing to a world of wonder
       | and discovery.
       | 
       | P.S. I understand I'm not solving the hard problem of
       | consciousness in this passage. Though I wanted to add some notes
       | of what I've learned about consciousness along the way.
        
       ___________________________________________________________________
       (page generated 2021-10-15 23:03 UTC)