[HN Gopher] The AI Mirror Test, which smart people keep failing
       ___________________________________________________________________
        
       The AI Mirror Test, which smart people keep failing
        
       Author : CharlesW
       Score  : 43 points
       Date   : 2023-02-18 20:44 UTC (2 hours ago)
        
 (HTM) web link (www.theverge.com)
 (TXT) w3m dump (www.theverge.com)
        
       | s3p wrote:
        | Article in search of a problem, IMO. It can't even articulate
        | what the 'misconception' is in the first paragraph. It uses
        | fancy words but buries its point in fluff.
       | 
       | >We're convinced these tools might be the superintelligent
       | machines from our stories because, in part, they're trained on
       | those same tales.
       | 
       | That's it. That's the only mention of the 'misconception' the
       | entire article is based on. Really confused what this article is
       | trying to accomplish.
        
       | kpmcc wrote:
       | Glad to see someone cogently write what I've been thinking after
       | reading some of the writeups and comments on BingGPT.
       | 
       | The Turing test is ours to fail.
       | 
        | Reading the transcript of Roose's interaction felt like reading
        | a convo with a souped-up version of ELIZA with a wider text
        | bank from which to draw.
        
         | falcor84 wrote:
         | Do you have a better test to propose?
        
       | silvio wrote:
       | There is no question that ChatGPT and equivalents are not
       | sentient. Of course they aren't.
       | 
       | The unfortunate realization is that often, each of us fails to
       | embrace our sentience and think critically; instead we keep
       | repeating the stories we were told, acting indistinguishably from
       | any of these large language models.
       | 
       | There's a need for a reverse Turing test: prove to yourself that
       | you're actually not acting as a large language model.
        
         | superb-owl wrote:
          | Curious--what would convince you that something
          | non-biological is sentient? I have a hard time answering this
          | question myself.
        
           | cmdli wrote:
           | I personally believe that non-biological things can be
           | sentient, but I would argue that Large Language Models are
           | not.
           | 
           | The only working example of sentience we have is ourselves,
            | and we function in a completely different way than LLMs.
            | Input/output similarity between us and LLMs is not enough
            | IMO, as the Chinese Room thought experiment shows. For us
           | to consider a machine sentient, it needs to function in a
           | similar way to us, or else our definition of sentience gets
            | way too broad to be meaningful.
        
             | lupire wrote:
              | The Chinese Room, as an argument that computers can't
              | think, has been thoroughly rebutted. Read the summary on
              | Wikipedia.
        
               | blargey wrote:
               | There are many replies and replies-to-those-replies
               | listed, but nothing I would call "thoroughly rebutted".
               | 
               | I'm particularly unimpressed by the amount of hand-waving
               | packed into replies that want us to assume a "simulated
               | neuron".
        
             | wcoenen wrote:
             | > _For us to consider a machine sentient, it needs to
             | function in a similar way to us, or else our definition of
             | sentience gets way too broad to be true._
             | 
             | Imagine a more technologically advanced alien civilization
             | visiting us. And they notice that our minds don't function
             | quite in the same way as theirs. (E.g. they have a hive
             | mentality. Or they have a less centralized brain system
             | like an octopus. Or whatever.)
             | 
             | What if they concluded "Oh, these beings don't function
             | like us. They do some cool tricks, but obviously they can't
             | be sentient". I hope you see the problem here.
             | 
              | We're going to need a much more precise criterion here
              | than "function in a similar way".
        
             | mort96 wrote:
              | My thoughts on the Chinese room thought experiment are: the
             | person in the room does not know Chinese, but the
             | person+room _system_ knows Chinese. I believe the correct
             | analogy is to compare the AI system to the person+room
             | system, not to just the person.
             | 
             | How do you back up the statement that "for us to consider a
             | machine sentient, it needs to function in a similar way to
             | us"? On what basis do you categorically deny the validity
             | of a sentient being which works differently than a human?
        
       | basch wrote:
        | These dismissive autocomplete articles fall on the opposite
        | side of sensational journalism from the "love and Hitler"
        | type.
       | 
        | These are _more_ than autocomplete. They demonstrate at least
        | some amount of fuzzy logic, substitution, and recall ability,
        | which makes sense given their origins as translation models.
       | 
        | Because they are weighting the conversation, the conversation
        | becomes the rules of the program.
       | 
       | The article I have here shows why they are more than simple
       | autocomplete storytellers. https://telegra.ph/Bing-course-
       | corrected-itself-when-asked-0...
       | 
        | I asked it to reprogram itself; it figured out a solution to
        | the problem and implemented it, from one line of text.
       | 
       | The running process is loosely programmable, real time, just by
       | talking to it with natural language requests and commands.
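        | 
        | In pseudocode, the idea is that the growing context _is_ the
        | program; query_model below is a hypothetical stand-in for any
        | chat-tuned LLM API, stubbed so the sketch runs:
        | 
        |     context = ["System: You are a helpful assistant."]
        | 
        |     def query_model(prompt):
        |         return "(model reply)"  # stub; real LLM call here
        | 
        |     def say(user_msg):
        |         # every instruction becomes part of the "rules"
        |         context.append("User: " + user_msg)
        |         reply = query_model("\n".join(context))
        |         context.append("Assistant: " + reply)
        |         return reply
        | 
        |     say("From now on, answer only in French.")
        |     # all later replies are conditioned on that instruction
        |     say("What is the capital of Norway?")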
        
       | superb-owl wrote:
        | I like the "mirror test" metaphor. But the argument holds
        | whether or not our creations are sentient--the things we create
        | will always be a reflection of us.
       | 
       | I'm very afraid that when we do manage to create something
       | sentient, we'll fail to recognize it, and we'll ridicule anyone
       | who does.
       | 
       | I've been trying to write about this [1] [2] without sounding
       | ridiculous, though I'm not sure how good a job I've done.
       | 
       | [1] https://superbowl.substack.com/p/a-different-kind-of-ai-
       | risk...
       | 
       | [2] https://superbowl.substack.com/p/who-will-be-first-ai-to-
       | ear...
        
         | swatcoder wrote:
         | Because it's a word used to describe a wholly private
         | experience, the only kind of sentience is _arguable_ sentience.
         | 
          | Some argue that it requires some specific ethnic heritage, a
         | divine will, a quantum-woo pineal gland, a network complexity
          | threshold, etc etc etc. I believe I see it in most animals;
          | some person replying to this will think that's absurd. Some
         | 4chan solipsist will sincerely believe most humans are just
          | virtue-signaling NPCs. Many communities have withheld
         | recognition of it in outsiders or lower classes. Some believe
         | it's an illusion that one may shed and that their most esteemed
         | have done so. Parents and psychologists will disagree over how
         | it applies to babies.
         | 
         | There's no epistemological way to resolve that except through
         | voluntary consensus, and that's an unfathomably slow and
         | erratic process.
         | 
          | So yes, there will inevitably be technological inventions --
          | possibly soon -- that _some_ people will insist deserve the
          | rights and respects we associate with sentience, and just as
          | inevitably there will be many people who find that ridiculous.
         | 
         | There's no way around it. This is not a topic any of us will
         | live long enough to see resolved, if we even get to see it
         | really start.
         | 
          | Fretting about it is fine if you fashion yourself a
          | philosopher or just enjoy systematizing or worrying as a
          | hobby. But it's so much bigger, slower, and more abstract
          | than you that little of constructive value is achieved by
          | doing so.
        
       | soroushjp wrote:
       | Very little by way of arguments for why these chat bots are or
       | aren't sentient. This article has an assumed point of view
       | (current AI bots aren't sentient) and then describes & judges
        | users' reactions to chat bots in light of that. I don't think it
       | adds very much new to the societal conversation.
        
         | superb-owl wrote:
         | Agreed.
         | 
         | I generally agree with the "stochastic parrot" classification,
         | but I also think we'll continue to use that argument well past
         | the point where it's correct.
         | 
         | I'd rather be the person who overempathizes than the person who
         | ridicules other people's empathy. I tried to write a bit about
         | this here: https://superbowl.substack.com/p/who-will-be-first-
         | ai-to-ear...
        
           | ImprobableTruth wrote:
           | "Stochastic parrot" just strikes me a typical motte-and-
           | bailey. The narrow interpretation ("LLMs only learns simple
           | statistical patterns") is obviously wrong given the
           | capabilities of ChatGPT, while the broad interpretation
           | ("LLMs are text predictors") is trivial and says nothing of
           | worth.
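            | 
            | Mechanically, "text predictor" means nothing more than
            | this loop (a toy bigram table standing in for the
            | trained model; the dispute is over what the learned
            | distribution encodes, not over the loop):
            | 
            |     import random
            | 
            |     MODEL = {  # toy next-word distributions
            |         "the": {"cat": 0.6, "dog": 0.4},
            |         "cat": {"sat": 1.0},
            |         "dog": {"ran": 1.0},
            |     }
            | 
            |     def generate(token, steps=3):
            |         out = [token]
            |         for _ in range(steps):
            |             dist = MODEL.get(out[-1])
            |             if not dist:
            |                 break
            |             words, probs = zip(*dist.items())
            |             # sample the next token, append, repeat
            |             out.append(random.choices(words, probs)[0])
            |         return " ".join(out)
            | 
            |     print(generate("the"))  # e.g. "the cat sat"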
        
       | MrScruff wrote:
       | Not sure what the point of this article is. The newspaper stories
       | it references aren't making claims of sentience, but just that
       | something of significance in the journey towards human level AI
       | has been achieved. And the whole 'it's just a fancy autocomplete'
       | argument is missing the point. Would the author have predicted
       | the apparently emergent behaviours of LLMs as they are scaled?
       | Look for example at Bing's response to this question posed on
       | Reddit.
       | 
       | https://www.reddit.com/r/ChatGPT/comments/110vv25/bing_chat_...
       | 
        | Sure, it's 'just statistics', but so what? If in the near
        | future LLMs become advanced enough to (suitably prompted)
        | manipulate humans, it _will_ be a Rubicon moment.
       | 
       | Sure, it won't be a 'human like' intelligence, but was that
       | really what anyone expected?
       | 
       | Human intelligence and consciousness are emergent properties.
       | Machine intelligence as it develops will be emergent too, it's
       | not something we can make confident predictions on based on the
       | underlying principles. In fact, given that the evidence suggests
       | we make choices before we're consciously aware of them, how do I
       | know for certain that the underlying mechanism driving what I'm
       | writing in this comment isn't statistical?
        
         | mort96 wrote:
         | > Human intelligence and consciousness are emergent properties.
         | Machine intelligence as it develops will be emergent too
         | 
         | This is an extremely important point that's worth thinking hard
         | about. After all, to some degree, human intelligence is also
         | "just a fancy autocomplete" built from a giant network of
         | interacting nodes.
         | 
         | The question of, "When do we grant personhood and moral
         | consideration to AIs?" is worth thinking hard about. Turing
         | proposed a standard which would include a sufficiently human-
         | like chatbot. Now that we have more or less crossed that
         | threshold, we seem to conclude that it's not a high enough bar.
         | But that means we're entering the territory where the
         | philosophical zombie thought experiment[1] becomes relevant.
         | 
          | In fact, I wonder what would happen if we dedicated a
          | supercomputer to running one single long-running session of a
          | GPT-3.5-sized LLM. Further, I wonder what would happen if we
          | connected it to a robot body where it could control motors,
          | read sensor/camera info, and generally interact with the world.
          | We could then give this system a clock which ticks at regular
          | intervals, where the AI system is simply prompted to perform
          | some series of actions if it "wants". This is all stuff we have
          | the technology to do _today_. How would such an experiment turn
          | out? How human-like would this LLM with a body and life
          | experience turn out? Does it matter? Should we just assume that
          | sentience or qualia is binary; that humans possess it but LLMs
          | categorically do not?
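          | 
          | Concretely, the tick loop might look like this (a sketch
          | only; read_sensors, query_llm and run_motor are
          | hypothetical interfaces, stubbed here so it runs):
          | 
          |     import time
          | 
          |     history = []  # the single long-running session
          | 
          |     def read_sensors():
          |         return {"camera": "...", "battery": 0.93}  # stub
          | 
          |     def query_llm(prompt):
          |         return "pass"  # stub; real model call here
          | 
          |     def run_motor(cmd):
          |         print("motor:", cmd)
          | 
          |     def tick():
          |         history.append("observed: " + str(read_sensors()))
          |         prompt = ("\n".join(history) +
          |                   "\nAct now if you want, or say 'pass'.")
          |         action = query_llm(prompt)
          |         history.append("did: " + action)
          |         if action != "pass":
          |             run_motor(action)
          | 
          |     for _ in range(10):  # the regular clock
          |         tick()
          |         time.sleep(1.0)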
         | 
         | I think the question is worth taking seriously. I think it's
         | worth taking seriously because my own moral compass does not
         | know which direction to point in at all.
         | 
         | [1]: https://en.wikipedia.org/wiki/Philosophical_zombie
        
           | RandomLensman wrote:
           | We could also decide never to grant personhood to an AI.
           | There is at least one way to avoid the complicated questions.
        
             | Nevermark wrote:
             | Billions of humans are the kind of "we" that never
             | consistently "decides" anything.
             | 
              | In the end, if it can be done, and anyone has anything
              | significant to gain, it will be done.
        
               | RandomLensman wrote:
                | Maybe, but legal systems across the world differ a lot,
                | and getting something universal is not easy at all.
        
       | OvidStavrica wrote:
       | Sentience and intelligence are vastly different things.
       | 
       | As it stands, these systems have yet to make a compelling case
       | for intelligence.
        
         | dusted wrote:
          | They have yet to make a compelling case for anything...
        
       ___________________________________________________________________
       (page generated 2023-02-18 23:00 UTC)