[HN Gopher] Tests for consciousness in humans and beyond
___________________________________________________________________
Tests for consciousness in humans and beyond
Author : benbreen
Score : 49 points
Date : 2024-03-14 16:37 UTC (3 days ago)
(HTM) web link (www.cell.com)
(TXT) w3m dump (www.cell.com)
| passion__desire wrote:
| I have a tangential question about non-human consciousness.
|
| It is almost impossible for a human to perceive the colour red
| as the colour blue. But this wouldn't be a problem for a
| futuristic humanoid; it would just be some kind of relabelling,
| or a latent-space transformation. Now imagine an AI virus that
| causes such a reassignment of distinct colour values, say in a
| self-driving scenario, causing traffic accidents. Have people
| thought about this?
| GenerocUsername wrote:
| Not really true.
|
| You are asking whether humans could perceive red as blue, but I
| can imagine red things as blue.
|
| But if you did not perceive things as red, how would you know
| to relabel them as blue?
|
| The robot in this case would undergo the same operation when
| relabelling: it first has to perceive red, then relabel it as
| blue. There is no major difference unless you swap at the
| hardware level, which would be analogous to genetically
| engineering our eye cones to fire at altered wavelengths.
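|
| The distinction can be sketched in a few lines of toy Python
| (the classifier and labels here are hypothetical, not anything
| from a real perception stack):
|
|     import numpy as np
|
|     def classify(rgb):
|         # Toy classifier: report the dominant channel.
|         return ["red", "green", "blue"][int(np.argmax(rgb))]
|
|     pixel = np.array([0.9, 0.1, 0.1])      # roughly a red light
|
|     # "Hardware-level" tamper: swap channels before perception,
|     # analogous to rewiring the cones.
|     print(classify(pixel[[2, 1, 0]]))      # -> "blue"
|
|     # "Relabelling" tamper: perceive normally, remap the output.
|     relabel = {"red": "blue", "blue": "red", "green": "green"}
|     print(relabel[classify(pixel)])        # -> "blue"
|
| Either way the system still has to register the dominant
| channel before the swap or the relabel can apply, which is the
| point made above.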
| esafak wrote:
| Robots can operate at the level of spectra (spectrophotometry)
| rather than color (projections of spectra, colorimetry). The
| mapping from the spectral power distribution to color co-
| ordinates for a given species is well understood and not
| arbitrary. The mapping from those co-ordinates (colors) to
| their names is also not arbitrary.
|
| The only issue I can think of that would cause confusion, which
| has nothing to do with AI, is _metamerism_, which is when
| different spectra are perceived to be the same color, because
| information gets lost when the infinite-dimensional spectrum is
| projected down to three-dimensional color. In practice it is
| not a problem; most people are not even aware of the
| phenomenon.
|
| https://en.wikipedia.org/wiki/Metamerism_(color)
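|
| A minimal numerical sketch of that projection, in Python: the
| cone-like sensitivity curves below are made-up Gaussians (an
| assumption, not real CIE data), and the point is only the
| dimensionality argument, not accurate colorimetry.
|
|     import numpy as np
|
|     wl = np.linspace(400.0, 700.0, 301)        # wavelengths, nm
|
|     def bump(mu, sd):
|         return np.exp(-0.5 * ((wl - mu) / sd) ** 2)
|
|     # Three made-up cone-like sensitivity curves (3 x 301).
|     S = np.stack([bump(445, 25), bump(540, 35), bump(565, 35)])
|
|     spec_a = bump(550, 60)            # a broad, smooth spectrum
|
|     # Perturb it with a component from the null space of S (a
|     # "metameric black"): the spectrum changes, but none of the
|     # three channel responses do.
|     rng = np.random.default_rng(0)
|     noise = rng.normal(size=wl.size)
|     coeffs, *_ = np.linalg.lstsq(S.T, noise, rcond=None)
|     spec_b = spec_a + 0.05 * (noise - S.T @ coeffs)
|
|     print(S @ spec_a)   # three numbers: all the "eye" keeps
|     print(S @ spec_b)   # same three numbers, different spectrum
|
| The null-space construction is what guarantees the two spectra
| are metameric under these particular (assumed) sensitivities.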
| pixl97 wrote:
| >It is almost impossible for a human to perceive the colour red
| as the colour blue.
|
| Eh, you mean like humans seeing the color green as the color
| red? Not sure why red/blue matters at all in this case, as
| stoplights are red/green, one of the color combinations most
| prone to human failure!
|
| Furthermore, direct color encoding of safety controls tends to
| be coupled with shape or placement. Stop signs are octagons.
| Stop lights have a directional component.
|
| If you're worried about an AI virus, I'd be far more worried
| about it directly crashing the car than about induced confusion
| crashing the car.
| andsoitis wrote:
| > There is persisting uncertainty about when consciousness arises
| in human development, when it is lost due to neurological
| disorders and brain injury,
|
| Not just when it arises during development or when it gets
| reduced due to disorders and brain injury, but also how it
| fluctuates under a range of other known (and possibly unknown)
| states, such as when you're under general anesthesia[1].
|
| Besides the when, there's also the kind/degree/nature, such as
| during meditation, sleep, under the influence of substances,
| etc.
|
| Exciting field.
|
| [1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6703193/#
| TriNetra wrote:
| At the core, we are pure awareness devoid of any object
| (thoughts). We can direct this awareness, which we call the art
| of attention. We're pure awareness, and we decide to attend to
| something. However, to attend to and receive information from
| the physical plane, we need appropriate instruments. The brain
| and its sense organs are those instruments. If these
| instruments are faulty, we as awareness obviously don't receive
| enough information, and it seems our consciousness is reduced.
| In the waking state, we receive information from the physical
| senses. In the dream state, these senses are suspended, but the
| mind is active in imagining experiences in a virtual world, and
| hence awareness has an instrument. In deep sleep, even the mind
| is at rest, and hence awareness has almost no active
| instrument. Still, we as awareness do exist, and thus we know
| whether we had a good or bad sleep when we wake up.
| placebo wrote:
| I doubt that knowing we've had a good or bad sleep is related
| to the existence of awareness during deep sleep. If someone
| remained indefinitely in deep sleep, there would be no personal
| experience and therefore nothing to qualify as good or bad.
| When a sense of self returns upon waking, your body gives you
| signals from the accumulated effects: that you didn't sleep in
| a good position, or whatever other positive or negative aspect
| left a trace while you were out cold.
|
| I'm not negating the possibility that consciousness might be
| a primary aspect of existence - it's just that if that is the
| case then it is not something you have or can remember. It
| would be more accurate to say that it is something that has
| you, and as some spiritual masters would point out, it would
| be even more accurate not to say anything about it :)
| ninetyninenine wrote:
| I personally think this is a loaded concept. This is what I
| think:
|
| For the technology we have now (LLMs), there does not and will
| never exist a test that can perfectly differentiate between an
| LLM and any human.
|
| We will always have LLMs that pass the test and humans that
| fail it. The reason for this is twofold and contradictory.
|
| The first reason is that consciousness is just a loaded
| concept. It's some arbitrary category with fuzzy boundaries.
| One person's definition of consciousness includes language;
| another person's includes logical thought. It's a made-up
| collection of "features", that's it, no more and no less. It's
| a very arbitrary set of features too, mashed together with no
| rhyme or reason.
|
| The second reason is that LLMs already meet the criteria for
| what most people technically define as consciousness. The
| resistance we see nowadays is simply moving the goalposts. Five
| years ago people were in agreement that the Turing test was a
| really good test. We've surpassed that and changed our
| definitions, adding more stringent criteria for sentience. AI
| already meets the criteria for sentience the way we defined it
| from 1950-2020. Thus any test to measure sentience will always
| be a moving target.
| pixl97 wrote:
| Hell, we don't even have a good definition for intelligence
| that is accepted across different fields.
|
| The problem I see is that we keep looking for supersets of all
| these features added together and, as the article points out,
| for something that humans have....
|
| >This rationale is particularly powerful if human experience is
| limited to a small, and perhaps idiosyncratic, region in the
| space of possible states of consciousness, as may be the case.
|
| There will be a large part of humanity that wants
| consciousness/sentience to be something that humans have and
| other things don't (you already see this often with the 'has a
| soul' argument). This will leave us with a large blind spot
| regarding the behavior of self-organizing machine systems, and
| it is a shared concern among those who see AI as a potential
| existential risk.
| dr_dshiv wrote:
| The Turing test was never meant as a test for consciousness.
| Intelligence, even AGI, is one thing. But "feelingness" or
| sentience is a totally different matter.
| ben_w wrote:
| The Argument from Consciousness
|
| This argument is very well expressed in Professor Jefferson's
| Lister Oration for 1949, from which I quote. "Not until a
| machine can write a sonnet or compose a concerto because of
| thoughts and emotions felt, and not by the chance fall of
| symbols, could we agree that machine equals brain--that is,
| not only write it but know that it had written it. No
| mechanism could feel (and not merely artificially signal, an
| easy contrivance) pleasure at its successes, grief when its
| valves fuse, be warmed by flattery, be made miserable by its
| mistakes, be charmed by sex, be angry or depressed when it
| cannot get what it wants."
|
| This argument appears to be a denial of the validity of our
| test. According to the most extreme form of this view the
| only way by which one could be sure that a machine thinks is
| to be the machine and to feel oneself thinking. One could
| then describe these feelings to the world, but of course no
| one would be justified in taking any notice. Likewise
| according to this view the only way to know that a man thinks
| is to be that particular man. It is in fact the solipsist
| point of view. It may be the most logical view to hold but it
| makes communication of ideas difficult. A is liable to
| believe 'A thinks but B does not' whilst B believes 'B thinks
| but A does not'. Instead of arguing continually over this
| point it is usual to have the polite convention that everyone
| thinks.
|
| --
|
| I.--COMPUTING MACHINERY AND INTELLIGENCE
|
| A. M. TURING
|
| Mind, Volume LIX, Issue 236, October 1950, Pages 433-460,
| https://doi.org/10.1093/mind/LIX.236.433
| verisimi wrote:
| It's easily conceivable now that one could create a machine
| that impersonates a human and human emotions, and that this
| machine would be coded to write a sonnet or concerto.
|
| But, although it will give the impression (illusion) of
| feeling, it will still be running its code. Humans will have
| coded a fantastic illusionist, with no more emotional ability
| than any other tool, e.g. a hammer.
|
| > According to the most extreme form of this view the only
| way by which one could be sure that a machine thinks is to
| be the machine and to feel oneself thinking.
|
| This is true - one can only verify subjectivity in oneself.
| One might assume another living being does feel - but we could
| be wrong - e.g. psychopathy can mean that others are unaware of
| a psychopath's "feelings" or lack thereof.
|
| It does seem to me that if we were able to code an AI that
| gives the impression of feelings, composes concertos apparently
| independently, etc., that AI would have to be a sort of
| psychopath.
|
| Without getting too lost in spiritual BS, I think we are more
| than mere matter - the value in our experiences comes from our
| emotions and feelings, not mere inputs that we label as
| 'emotions' and 'feelings' as would be the case for an AI. But
| emotions etc. are subjective states, not objectively verifiable
| (even if we can see something correlating on a chart).
|
| Put simply, AI can never have a 'soul', even if it presents a
| wonderful appearance of kindliness, emotionality or whatever.
| Perhaps there will be no test in the world that will be able to
| find this out. But it still won't have feelings; it can only
| ever be an automaton.
| FrustratedMonky wrote:
| >"the value in our experiences come from our emotions and
| feelings - not mere inputs that we label as 'emotions'
| and 'feelings' as would be the case for an ai."
|
| </sarcasm> Because humans have emotions, but not just
| what we 'label as emotions' but you know, 'real'
| emotions, not like those AI emotions.
|
| So basically, you are arguing for a non-logic assumption
| that humans have souls, Machines can't, and we should
| just take that as gospel and move on.
| EMM_386 wrote:
| This particular study isn't limited to LLMs.
|
| It is about how to test consciousness in a wide variety of
| things. We have no tests.
|
| People may think their dog is conscious, but we can't test
| that. We can't even test if other people are conscious.
|
| What about invertebrates? Bacteria? Rocks, atoms, AI, your
| laptop, etc.? While somebody might say "yes, no, definitely
| not, yes, maybe" ... still, no tests.
|
| And consciousness in this particular proposal deals with
| whether or not it "'feels like' something to be the target
| system".
| lukeschlather wrote:
| > AI already meets the criteria for sentience the way we
| defined it from 1950-2020.
|
| If that is the case why hasn't Kurzweil already won the bet?
| https://longbets.org/1/
| jprival wrote:
| >Five years ago people were in agreement that the Turing test
| was a really good test
|
| The limitations of the Turing test have been discussed for much
| longer than that. Ironically its dependence on human judgement
| is arguably a weak point.
| ForHackernews wrote:
| > LLMs already meet the criteria for what most people
| technically define as consciousness.
|
| I strongly dispute this. Try "talking" to an LLM over time. It
| has no memory, no agency, no emotions, no desires, no pain, no
| sense of self.
|
| I think cats, dogs and certainly primates are far more
| conscious than these stochastic parrots. I do agree it's
| frightening how many people are fooled by a fancy autocomplete.
| imbnwa wrote:
| I think a lot of Computer Science people need to go read Kant
| whitepaint wrote:
| What should they begin with?
| imbnwa wrote:
| Reverse order of publication back to the first critique
| h0p3 wrote:
| To my eyes, including for this topic, the most important
| work: //Groundwork of the Metaphysics of Morals//.
| deadbabe wrote:
| An LLM, in its perfect form, is basically a p-zombie. There is
| no consciousness there. In its current form, it's not much more
| than a really big and fast spreadsheet :)
| FrustratedMonky wrote:
| I guess the reason this keeps coming up is 'how can you be
| sure?'
|
| I'd say a lot of humans qualify as p-zombies.
|
| The problem is, can you ever test for it? To be sure.
|
| I'm not sure that, by the time you have a duplicate human, it
| could be achieved without some inner subjective experience.
|
| Put an LLM in a loop, continually learning in a 3D world. Then
| ask what is going on inside.
| mirekrusin wrote:
| Well, you can use some tricks - i.e. hook it up to your brain,
| let it learn for a while, and tell us how it feels when we flip
| the switch to OFF.
| thfuran wrote:
| >The second reason is that LLMs already meet the criteria for
| what most people technically define as consciousness.
|
| An LLM is an inert pile of data. You could perhaps argue that
| the process of inference exhibits consciousness, but the LLM
| itself certainly doesn't. There's simply no there there.
| BriggyDwiggs42 wrote:
| This is a bad argument. A brain frozen in ice also doesn't
| demonstrate consciousness, and we don't interpret that as a
| counterclaim to "the human brain exhibits consciousness."
| Saying an LLM can be conscious implicitly means an LLM while it
| is doing things.
| thfuran wrote:
| >Saying an LLM can be conscious implicitly means an LLM while
| it is doing things.
|
| But that's not like saying a human brain is conscious while
| it does things. A brain is continuously doing a thing,
| whereas an LLM is either inert at rest or briefly doing a
| thing before returning to exactly the state it was in
| beforehand. There is no underlying process that could even
| conceivably support ongoing consciousness. If there is
| consciousness, it is present only for the span of its
| interrogation during an inference, after which it is
| immediately destroyed.
| jajag wrote:
| I think you're confusing intelligence with consciousness
| FrustratedMonky wrote:
| "confusing intelligence with consciousness"
|
| Now that LLMs are becoming indistinguishable from humans, other
| NNs are solving proofs in geometry competitions, and there are
| NNs that can fly an F-16 better than humans, all of a sudden
| the goalposts have moved again.
|
| "No, that is just 'intelligence' not 'consciousness'."
|
| 12 months ago, nobody was making this distinction.
|
| We keep needing to redefine the problem to keep humans on
| top.
| hax0ron3 wrote:
| The distinction between intelligence and consciousness has
| been discussed in philosophy of mind for many decades,
| perhaps centuries. I don't know how much it has been
| discussed in the context of LLMs, but I'm sure it's not an
| entirely new phenomenon. The debate is not just a matter of
| people trying to argue over how capable LLMs are. Many
| people pursue this topic because it is inherently
| interesting.
| FrustratedMonky wrote:
| I think that even in philosophy of mind, intelligence and
| consciousness are seen to progress together. They follow the
| same upward trend, and at some magic point of enough
| intelligence we call it 'consciousness'.
|
| This concept of having high intelligence that is not conscious,
| or low intelligence that is conscious, seems relatively new.
| shadowfoxx wrote:
| I don't think it's necessarily about keeping humans on top. I
| don't think my dog can fly an F-16, but I also think my dog is
| conscious.
| hax0ron3 wrote:
| The authors of the paper seem to define what they mean by
| consciousness quite precisely. They mean phenomenal
| consciousness:
|
| >The central goal of any C-test is to determine whether a
| target system has subjective and qualitative experience - often
| called 'phenomenal consciousness'.
|
| Phenomenal consciousness is, as far as I can tell, binary. You
| either have subjective experience or you don't. Even if you are
| subjectively aware of almost nothing, you still have subjective
| experience. For example, when you wake up in the morning your
| awareness might be quite limited for the first few seconds -
| let's say you have your eyes closed and the only thing you are
| aware of is the warmth of the blanket. But still, you are
| aware. You have subjective experience. Whereas in deep sleep
| you do not.
|
| Phenomenal consciousness is thus pretty well-defined. It's just
| that we have zero understanding of how physical systems could
| possibly give rise to it, and it might be in principle
| impossible to ever figure that out. It is possible that
| physical systems do not in fact give rise to consciousness,
| although obviously the presence vs absence of consciousness in
| humans is at least to some extent correlated with certain
| measurable physical states.
|
| I doubt that any actual test for consciousness is in principle
| possible, but the authors of this paper do at least clearly
| define what they mean when they refer to consciousness. They
| mean the presence of subjective experience, not intelligence or
| language or logical thought or having a model of oneself or any
| other things.
| naasking wrote:
| > The first reason is that consciousness is just a loaded
| concept. It's some arbitrary category with fuzzy boundaries
|
| It is right now. We might one day devise a mechanistic
| explanation for consciousness though, in which case any system
| that follows that mechanistic process would be conscious.
| yewenjie wrote:
| FWIW, Anil Seth, from the list of authors, has been a strong
| opponent of the panpsychist view (consciousness is a
| fundamental property of matter, and everything is to some
| degree conscious) recently popularized by David Chalmers et al.
| lolinder wrote:
| I'm not sure what to make of this comment. Can you elaborate on
| why you felt this was important to point out?
| witrak wrote:
| The relation between consciousness and intelligence seems to be
| a problem for everyone, including the authors of the article.
|
| Hmm... Peter Watts, Blindsight.
|
| The novel explores themes of identity, consciousness, free will,
| artificial intelligence, neurology, and game theory as well as
| evolution and biology. [Wikipedia]
|
| Especially interesting, and contains a long list of literature.
| squigz wrote:
| > The relation between consciousness and intelligence seems to
| be a problem for everyone, including the authors of the
| article.
|
| Which parts of the paper gave you the impression the authors
| struggled with this?
| htk wrote:
| If consciousness can't even be precisely defined, imagine
| devising a "test" for it.
|
| Laudable effort, but I don't see any progress resulting from it.
| sctb wrote:
| The tests themselves are what bring precision to the notion of
| consciousness. Think of a medical example in which doctors are
| meant to determine whether or not a patient receives
| anaesthesia--surely a more effective empirical toolkit would be
| valuable in that situation, no?
|
| There seems to be a tendency with the discourse around
| consciousness to slip into vague philosophizing and thus run
| into the hard problem. I feel like if it doesn't make sense to
| think of it that way then don't think of it that way.
| HarHarVeryFunny wrote:
| I find it amusing that they launch into a discussion of the
| "urgent need" for consciousness tests without ever stopping to
| define what they mean by consciousness!
|
| The word seems to be heavily overloaded and to refer to a bunch
| of unrelated phenomena, with most people disagreeing over
| exactly what it means, or more often just breezing past any
| definition of what it means and proceeding to argue about it
| regardless.
| squigz wrote:
| I find this comment amusing, as a core premise of the article
| is trying to figure out how to define and measure
| consciousness, and the problems that entails.
| narrator wrote:
| Arguing that an AI chatbot is conscious is like ancient Romans
| asking "Is a really big fire the same as lightning?" They both
| burn down big trees and can kill people. Thus, they must be the
| same! A time traveller would tell them that they don't
| understand lightning, and won't for 2000 years, and that trying
| to understand it through philosophizing will only lead to
| random conjectures, such as: the best way to avoid lightning is
| to burn down all the nearby trees, since big fires need wood to
| burn, therefore lightning needs trees to strike.
| marcosdumay wrote:
| They did understand it's different, didn't they?
|
| I mean, by the time of their empire, several Greeks already
| understood it. I have no idea how widespread that knowledge
| was, but every time I look into how science spread in those
| times, I discover the answer is "widely".
| squigz wrote:
| Does this imply the discussion and growth related to learning
| these facts are useless?
| narrator wrote:
| It's an invitation to confabulate rationalistic nonsense when
| the correct answer is to just say "we don't know the nature
| of lightning yet." That doesn't mean we can't come up with
| theories, but they have to be disregarded when they don't
| predict anything in a falsifiable way.
| rb-2 wrote:
| I've noticed that conversations about "consciousness" tend to go
| in circles because the participants are using different
| definitions of the word without realizing it.
|
| Some people use the word "conscious" almost interchangeably with
| terms like "intelligent", "creative", or "responds to stimuli".
| Then people start saying things like LLMs are conscious because
| they pass the Turing test.
|
| However, others (including the authors of this paper and myself)
| use the term "consciousness" to refer to something much more
| specific: the inner experience of perceiving the world.
|
| Here's a game you can play: describe the color red.
|
| You can give examples of things that are red (that other people
| will agree with). You can say that red is what happens when light
| of a certain wavelength enters your eyeball. You can even try
| saying things like "red is a warm color", grouping it with other
| colors and associating it with the sensation of temperature.
|
| But it is not possible to convey to another person how the
| color red appears to you. Red is a completely internal
| experience.
|
| I can hook a light sensor up to an Arduino and it can tell me
| that an apple is red and that grass is not. But almost no one
| would conclude that the Arduino is internally "experiencing"
| the color red the way they themselves do.
|
| While the paper is using this more precise definition of
| consciousness, it seems to be trying to set up a framework for
| "detecting" consciousness by comparing external observations of
| the thing in question to external observations of adult human
| beings, who are widely considered by other adult human beings to
| be conscious entities [1]. I don't see how this approach could
| ever produce meaningful results because consciousness is entirely
| an internal experience.
|
| [1] There is a philosophical idea that a person can only ever be
| sure of their own consciousness; everyone else could be mindless
| machines and you have no way of knowing
| (https://en.wikipedia.org/wiki/Solipsism). Also related is the
| dead internet theory
| (https://en.wikipedia.org/wiki/Dead_Internet_theory).
| zero-sharp wrote:
| >But it is not possible to convey to another person how the
| color red appears to you. Red is a completely internal
| experience.
|
| Let's say in the future we're able to engineer brains. Let's
| say we take a person and figure out how their brain
| fires/operates when it perceives a color and we manipulate
| another person's brain to mimic the firing. Finally, let's say
| we're able to show, in the end, that the two people have
| equivalent internal (neural) responses to the color. We've then
| "conveyed" one person's experience of perceiving the color to
| another. Why not?
|
| We don't fully understand our biology and our brain, but at the
| same time we speculate that our experience somehow can't be
| manipulated scientifically? Why?
| jimbokun wrote:
| That's the easy case.
|
| It's much trickier to figure out if software running on a
| silicon computer has the same kind of interior, subjective
| experience as us. Even when exhibiting the same outward
| behavior.
| zero-sharp wrote:
| I don't know what that means. My guess is that, if/when we
| start engineering neural structures, the consciousness
| debate will disappear.
|
| Internal subjective experience can be confirmed by the
| recipient of the modification. If we know one person
| suffers an ("internal") abnormality and we treat them by
| modifying their brain, and the abnormality disappears, then
| we have evidence that experience obeys science. Same idea
| with the discussion on "conveying the experience of color."
| It's probably more subtle because it's not a yes or no "did
| the abnormality disappear?". But that's beside the point.
| ben_w wrote:
| > There is a philosophical idea that a person can only ever be
| sure of their own consciousness; everyone else could be
| mindless machines and you have no way of knowing
|
| A while back I realised there must be at least two: me, and the
| first person who talked or wrote about it such that I could
| encounter the meme.
|
| _In principle_ all the philosophers might be stochastic
| parrots/P-zombies from that first source, but the first had to
| be there.
|
| (And to pick my own nit: technically they didn't have to exist,
| infinite monkeys on a typewriter and/or Boltzmann brain).
| jimbokun wrote:
| So just you and Descartes.
| ben_w wrote:
| No way of knowing if Descartes was simply parroting what he
| heard from another, just as you can't tell I'm not a large
| language model trained by the human who created this
| account ;P
| andoando wrote:
| I think the interesting discussion here is, as you're putting
| it, consciousness: the subjective experience of living and
| feeling. These are not requirements for intelligence or any
| physical process, and yet it is an indisputable fact that it
| exists.
|
| The only conclusion I can draw is that there is indeed a
| non-physical reality.
| jimbokun wrote:
| That is exactly correct.
|
| I would only add that we attribute consciousness to our fellow
| humans, because we perceive them to be creatures like us from
| what we can observe about their physical bodies and behaviors
| being similar to ours.
|
| With AI, it is much less intuitive to assume that creations we
| know to have arisen from very different origins than ourselves
| have the same kind of interior experiences we do, even if the
| surface behavior is the same.
| shadowfoxx wrote:
| I'm genuinely not certain how your definition of consciousness
| is distinct from 'responds to stimuli'.
| barrysteve wrote:
| The solipsist can't find reason to form agreements with others.
| Others are mindless in his view.
|
| He can't define consciousness in terms of what we agree on;
| there's nobody to agree with.
|
| So the game of describing the color red to others cannot be
| played to any meaningful end. Red is red to the solipsist.
|
| Coming up with your own interpretation of consciousness is an
| ability truly conscious people have.
|
| It can never be completely agreed upon in a philosophical
| conversation without dogma or compromise.
|
| Neither solipsism nor total agreement can be truthfully used as
| a philosophical tool to contain consciousness.
| shreezus wrote:
| Consciousness & intelligence are orthogonal. It's highly
| plausible we achieve superintelligence before we have conscious
| machines.
|
| That said, understanding consciousness is not _necessarily_ a
| prerequisite for engineering it. It may very much end up being an
| emergent phenomenon tied to sensory integration & processing, in
| which case it ends up self-assembling under the right
| circumstances. Exciting times...
| FrustratedMonky wrote:
| So my thermostat is 'intelligent', just not 'conscious'?
___________________________________________________________________
(page generated 2024-03-17 23:01 UTC)