[HN Gopher] Integrated Information Theory
       ___________________________________________________________________
        
       Integrated Information Theory
        
       Author : andsoitis
       Score  : 56 points
        Date   : 2023-06-27 14:26 UTC (1 day ago)
        
 (HTM) web link (www.scholarpedia.org)
 (TXT) w3m dump (www.scholarpedia.org)
        
       | Der_Einzige wrote:
       | For similarly silly ideas: see these articles
       | 
       | https://en.wikipedia.org/wiki/Object-oriented_ontology
       | 
       | https://en.wikipedia.org/wiki/Speculative_realism
        
         | mrtranscendence wrote:
         | I don't think IIT is a correct theory or even on the right
         | track. Nevertheless I fail to see how it's "silly". It's no
         | sillier than any other putative scientific theory of
         | consciousness -- less silly, since IIT actually makes a
         | testable prediction.
        
         | DontchaKnowit wrote:
          | I can't even find anything resembling an explanation of
          | what "speculative realism" posits in that wiki article.
         | 
         | What is it and why is it silly?
        
           | Der_Einzige wrote:
           | "Thus, all object relations, human and nonhuman, are said to
           | exist on an equal ontological footing with one another"
           | 
            | They are ideas from people who don't like
            | anthropocentrism, which Integrated Information Theory is
            | also opposed to.
           | 
           | It's worth noting that all of the people who believe in any
           | of this are philosophical wingcucks like Nick Land.
        
             | WFHRenaissance wrote:
             | LOL Chill with the Land hate.
        
               | Der_Einzige wrote:
               | He's openly fascist and a charlatan. I shouldn't be
               | surprised that people here like him.
               | 
                | Also no surprise that people influenced by him, i.e.
                | Mark Fisher, killed themselves.
        
         | goatlover wrote:
         | I don't see what makes them silly. Metaphysics is hard, and
         | speculative realism is a proposed answer to modern
         | transcendental idealism stemming from Kant, where the worry is
         | that we can't say objective things about the world independent
          | of human thought. Things like dinosaurs existing before
          | humans evolved are seen as correlated to our experiences
          | with fossils in the ground, rather than as an objective
          | truth about the universe.
         | Speculative realism is a way around that while respecting the
         | philosophical arguments of the Kantians.
        
           | mannykannot wrote:
           | Our mental experiences seem to be subjective, so what
           | prospect is there of making _any_ objective statement about
           | the world if correlations are inadequate for the purpose?
        
             | goatlover wrote:
              | I think the argument is that it can't just be
              | correlations, because then we get stuck in a framework
              | where the world merely looks as if things were going on
              | before humans existed, without our being able to say
              | that's true. Realists want to be able to say there are
              | fossils in the ground because dinosaurs really did
              | exist before us, not just that it appears that way to
              | humans. Hence the "speculative" part: how to define
              | reality in a way that isn't just correlated to our
              | experiences.
        
       | optimalsolver wrote:
       | Response from Scott Aaronson:
       | 
       | https://scottaaronson.blog/?p=1799
        
         | wzdd wrote:
          | The issue with Aaronson's response is that it comes from the
         | perspective of first accepting Chalmers' "Hard Problem of
         | consciousness". The "Hard Problem of consciousness", despite
         | the name, is actually a statement of position. Briefly, it
         | states that:
         | 
         | a) We have experiences, like being hungry, tasting a
         | strawberry, seeing blue, etc.
         | 
         | b) It's possible to imagine a being which located food when
         | hungry, ate when necessary, used colours to navigate the world,
         | etc, but did not have these conscious experiences. To put it
         | another way, what we've learnt so far about how the brain works
         | gives us great insight into how we would eat, navigate, etc,
         | but does not give us any insight into either how or why we
         | would have conscious experiences.
         | 
         | c) Therefore conscious experiences are not explicable by
         | physical brain processes.
         | 
         | This is a belief arising from an appeal to intuition and does
         | not present a testable or falsifiable proposition.
         | 
         | Integrated Information Theory (which I am not a vigorous
         | proponent of) posits that the experience of consciousness is
         | related to the level of "integration" of a system. However, if
         | you come to this while believing in the "Hard Problem", that
         | cannot possibly be true, because IIT relates consciousness to
         | physical properties of the system such as connectivity, but the
         | "Hard Problem" defines consciousness as something which does
         | not arise from any physical property of the system.
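          | 
          | To make "integration" concrete, here is a minimal sketch
          | of a phi-flavoured quantity for a toy three-node XOR
          | network. It is my own illustration, not the official IIT
          | algorithm: perturb the whole system uniformly, then take
          | the mutual information between the two halves' next
          | states, minimised over bipartitions.
          | 
          |     # Toy, phi-flavoured integration measure for a tiny
          |     # deterministic boolean network. Illustrative sketch
          |     # only; NOT the official IIT 3.0 algorithm.
          |     import itertools
          |     import math
          | 
          |     # Hypothetical network: each node is the XOR of the
          |     # other two nodes.
          |     def step(state):
          |         a, b, c = state
          |         return (b ^ c, a ^ c, a ^ b)
          | 
          |     NODES = (0, 1, 2)
          |     STATES = list(itertools.product((0, 1), repeat=3))
          | 
          |     def out_entropy(nodes):
          |         # Entropy (bits) of the next state of `nodes`,
          |         # under a uniform perturbation of the whole system.
          |         counts = {}
          |         for s in STATES:
          |             out = tuple(step(s)[i] for i in nodes)
          |             counts[out] = counts.get(out, 0) + 1
          |         n = len(STATES)
          |         return -sum(c / n * math.log2(c / n)
          |                     for c in counts.values())
          | 
          |     # Integration across a cut = mutual information between
          |     # the halves' next states; toy phi = the weakest cut.
          |     phi = min(
          |         out_entropy(part)
          |         + out_entropy(tuple(x for x in NODES if x not in part))
          |         - out_entropy(NODES)
          |         for part in itertools.combinations(NODES, 1)
          |     )
          |     print(f"toy phi = {phi:.2f} bits")  # 1.00 for this net
          | 
          | Under IIT, a positive quantity like this is supposed to
          | track consciousness; someone who accepts the "Hard Problem"
          | denies that any such physical quantity could, which is
          | exactly the disagreement here.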
        
           | jhedwards wrote:
           | I think the answer to a) and b) is actually quite obvious: we
           | are not an automaton that simply eats when the need arises.
           | The experience of hunger decouples our behavior from the need
           | to eat, and we can plan according to the strength of our
           | hunger relative to other needs.
           | 
           | We are not an automaton that simply eats a strawberry because
           | it is edible. We are decision making organisms that can
           | adjust the composition of our diet based on the chemical
           | properties of the food. We can presume that there is an
           | evolutionary advantage to being able to taste and therefore
           | select from multiple dietary options.
           | 
            | It is clear that the conscious experiences described
            | here are extremely subtle forms of information that let
            | us plan and make decisions rather than simply react
            | blindly to the world. That is a massive advantage, and it
            | is also more in line with my experience as a conscious
            | being.
        
             | Blahah wrote:
              | Plants, bacteria, and fungi can do equivalent planning
              | and decision-making about nutrient intake. By the
              | criteria you highlight, what we experience as
              | consciousness is indistinguishable from evolved
              | fecundity preservation in a complex and dynamic
              | environment.
        
           | goatlover wrote:
           | I don't think you need b) to make the argument work. You just
           | need to point out that a) isn't present in physical theories,
           | except as labels or correlations. The zombie argument is b),
           | which is just one of several thought experiments to
           | illustrate the argument being made, but it's not necessary to
           | make the argument work. Chalmers, Nagel, McGinn and Block
           | have all made arguments that don't rely on b).
           | 
              | Nagel states it most clearly: science is the view from
              | nowhere. The world doesn't feel like, taste like, or
              | look like anything on its own, because those are
              | creature-based sensations which depend on the kind of
              | sensory organs and nervous systems an animal has.
        
             | wzdd wrote:
             | The position still seems to boil down to "Reducing
             | experience to labels or correlations doesn't feel right to
             | me", which actually dovetails nicely with your Nagel quote,
             | since you can't rely on your intuition when attempting to
             | understand a system from the inside.
        
               | goatlover wrote:
               | The fact that it feels like something at all is enough of
               | a rebuttal to reducing experience to labels or
               | correlations. That's not an intuition, it's just a
               | statement of empirical fact, since empiricism relies on
               | phenomenal observations.
        
           | Analemma_ wrote:
           | Aaronson's point later in the post that it is possible to
           | construct a function which has an arbitrarily high phi but is
            | nonetheless obviously not conscious -- which wrecks IIT
            | completely, at least in its current formulation -- does not
           | depend on anything to do with the hard problem of
           | consciousness.
        
             | photonthug wrote:
             | > is nonetheless obviously not conscious
             | 
             | This argument has played out before so I'll just link to
             | that discussion. quoting from
             | https://www.scottaaronson.com/response-p1.pdf
             | 
             | > There's two responses to this. The easiest response is to
              | say that phi is merely necessary for C -- problem solved.
             | GT's response would be to challenge your intuition for
             | things being unconscious. Here's a historical analogy;
             | imagine when the Kelvin temperature scale was introduced.
             | Here Kelvin was saying that just about everything has heat
             | in it. In fact, even the coldest thing you've touched
             | actually has substantial heat in it! Think of IIT as
              | attempting to put a Kelvin-scale on our notions of C. I
              | find this "Kelvin scale for C" analogy makes the
             | panpsychism much more palatable.
             | 
             | Then, Scott's response to that:
             | 
             | > Suppose, again, that I told you that physicists since
             | Kelvin had gotten the definition of temperature all wrong,
             | and that I had a new, better definition. And, when I built
             | a Scott-thermometer that measures true temperatures, it
             | delivered the shocking result that boiling water is
             | actually colder than ice. You'd probably tell me where to
             | shove my Scott-thermometer. But wait: how do you know that
             | I'm not the Copernicus of heat, and that future generations
             | won't celebrate my breakthrough while scoffing at your
             | small-mindedness?
             | 
             | Ok, pretty dismissive, but this doesn't actually address
             | what the first quote mentions. Maybe phi's necessary but
             | not sufficient for consciousness; that would still be
             | pretty interesting. (I.e. maybe the pesky function _does_
             | have high phi, but phi is not itself consciousness, because
             | Consciousness(system) = phi(system) + Corrective_factor).
             | 
             | Or suppose we add an axiom like "feedback loops that affect
             | future trajectory" to exclude such a pesky static function
             | definition, or better still suppose IIT moves towards
             | something that generally accounts for more subtle dynamics
             | as well as structure. I can't help but think of the
             | relationship between euclidean/noneuclidean geometry here,
             | especially when "obviousness" comes up in these
             | discussions. It seems productive to play with
             | adding/discarding axioms and looking for
             | richness/consistency. And isn't lots of modern physics
             | about exploring model parameter-space (
             | https://www.quantamagazine.org/using-the-bootstrap-
             | physicist... ) to zero in on "the" model given "a" model?
              | Just as 1D Ising won't phase-transition, maybe the
              | current "1D" IIT won't cut it either but has some
              | fruitful generalization.
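              | 
              | To unpack the Ising aside, since it does real work in
              | the analogy: in the 1D Ising model the spin-spin
              | correlation falls off as tanh(beta*J)^r, so the
              | correlation length stays finite at every T > 0 and
              | diverges only as T -> 0. That is the textbook sense in
              | which 1D Ising "won't phase-transition". A quick sketch
              | of that standard result (J and the temperatures are
              | arbitrary choices for illustration):
              | 
              |     # 1D Ising chain, zero field: correlation length
              |     # xi = -1 / ln(tanh(beta * J)) is finite for every
              |     # T > 0, so no finite-temperature phase transition.
              |     import math
              | 
              |     J = 1.0  # coupling strength, in units with k_B = 1
              |     for T in (4.0, 2.0, 1.0, 0.5, 0.25):
              |         beta = 1.0 / T
              |         xi = -1.0 / math.log(math.tanh(beta * J))
              |         print(f"T = {T:5.2f}   xi = {xi:10.2f}")
              |     # xi grows as T drops, but blows up only at T = 0.
              | 
              | The hope in the analogy is that today's IIT is the "1D"
              | case, and some richer generalization exhibits the
              | interesting behaviour.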
        
               | bawolff wrote:
               | > Maybe phi's necessary but not sufficient for
               | consciousness; that would still be pretty interesting.
               | 
               | Would it be interesting? Why?
               | 
                | Coming up with conditions that are necessary but not
                | sufficient is pretty easy. Sure, they are interesting
                | if the conditions are non-intuitive and you've
                | actually proven they are necessary. I'm not sure
                | either criterion is met here.
        
               | photonthug wrote:
                | I mean... doesn't your logic here advocate throwing
                | away every "lower-bound" type of result in math, and
                | suggest that it's boring/trivial to ask what the
                | smallest LLM that speaks English is
                | ( https://arxiv.org/abs/2305.07759 ), etc.?
        
         | photonthug wrote:
          | Other related stuff: https://scottaaronson.blog/?p=1823
         | https://www.scottaaronson.com/response-p1.pdf
         | https://www.lesswrong.com/posts/5pYPdmKMzDLZHe4zx/link-scott...
          | and probably lots from Hacker News too:
         | https://news.ycombinator.com/item?id=23158502
         | 
          | That said, I'm still keen to see more/better discussion of
          | IIT, and/or more modern extensions. IIT is certainly
          | quantitative, and arguably elegant, despite its problems.
          | So it puzzles me how eager some people are to just junk it
          | rather than repair it.
        
           | mrtranscendence wrote:
           | I rarely see IIT characterized as complete junk. Personally,
           | I don't see much value in it as I don't think it answers, or
           | even grapples with, the hard problem of consciousness. Tell
           | me _how_ integrated information gives rise to subjective
            | experience and maybe I'll start buying what they're selling.
        
             | photonthug wrote:
             | I think what they are selling is not answers, but more like
             | a non-metaphysical platform to propose answers within, i.e.
             | a decent start at a scientific framework. Frameworks
             | shouldn't be confused with answers, although they might
             | represent some way of getting closer to answers. Besides
             | IIT, is there an alternative scientific/quantified
             | framework that could even _try_ to grapple with your
             | question?
             | 
             | For purposes of comparison, here's another approach that's
             | super interesting (
             | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4168033/ ). So
             | both frameworks are looking at system structure/dynamics
             | and then attempting to deduce/quantify some aspect of them
             | that (hopefully) gives us some insight into "Mind" or
             | "Consciousness". But to me, grounding such inquiries
              | directly in neuroscience seems awkward, because the
              | brain only _runs_ Mind. Uncovering a bunch of
              | implementation details about that from fMRI scans
              | probably won't give us a lot of interesting insight at
              | the right level. To an extent some
             | implementation details are important and might shed light
             | on the architecture/algorithm of Consciousness/Mind, but
             | evolved systems are also going to be cluttered up with
             | total hacks that evolution brute-forced where
             | implementation is kinda arbitrary.
             | 
             | All of which is to say, IIT being "platform agnostic" and
             | proposing stuff like a thermostat is more conscious than a
              | rock, a dog more so than a thermostat, and a human more than
             | a dog is not just a cool trick. Being able to approach this
             | kind of intuitive problem in a somewhat rigorous way seems
             | like a _necessary requirement_ for a serious scientific
              | theory of mind. I like the Ising and graph-theory
              | modeling approach, but the implementation details
              | {neuron-counting, fMRI, lesions on functional areas,
              | etc.} are probably a distraction if you're trying to
              | understand mind rather than medicine.
        
             | rvcdbn wrote:
             | You could ask the same kind of question about General
              | Relativity. Sure, mass/energy causes the curvature of
             | spacetime but tell me how it does or your theory is
             | worthless. Even without the "how" GR still makes testable
              | predictions that are borne out by experiment. And this is
             | the shape of all our physical theories. I think this is the
             | kind of theory IIT is trying to be.
        
               | mrtranscendence wrote:
                | Well, there's a reason it's called "_the_ hard
                | problem of consciousness" and not "just some
                | incidental observation about models of
                | consciousness". When
               | evaluating general relativity it's not important to ask
               | how mass/energy causes the curvature of spacetime; that's
               | not the point of general relativity. But a theory of
               | consciousness that doesn't explain the most intriguing,
               | important component of consciousness doesn't explain
               | what's important to me.
        
       | cpsempek wrote:
        | How does deja vu, that is, the re-experiencing of an
        | experience, fit into this theory? It appears that the
        | Information axiom fails to be Essential when one considers
        | deja vu.
        
         | n4r9 wrote:
         | I don't see that deja vu poses a problem. Deja vu is simply a
          | feeling of familiarity associated with an experience. It doesn't
         | involve actually _having_ an experience more than once.
        
       | transfer92 wrote:
       | I'm either not smart enough to understand it, or I'm too smart to
       | be bamboozled by it.
       | 
        | (Physics A.B. from Harvard & Ph.D. from UC Berkeley, FWIW)
        
         | akhayam wrote:
          | As someone who has dabbled in information theory (the real
         | one), I am just as confused as you. What I have observed in the
         | past decade is that calling things "information theory of
         | something" makes it somehow more palatable for a broader
         | audience.
        
       ___________________________________________________________________
       (page generated 2023-06-28 23:01 UTC)