[HN Gopher] The "computers are social actors" theory no longer a...
       ___________________________________________________________________
        
       The "computers are social actors" theory no longer applies to
       desktop computers
        
       Author : vasco
       Score  : 52 points
       Date   : 2023-11-14 07:44 UTC (1 day ago)
        
 (HTM) web link (www.nature.com)
 (TXT) w3m dump (www.nature.com)
        
       | cobaltoxide wrote:
       | Abstract
       | 
       | The Computers Are Social Actors (CASA) theory is the most
       | important theoretical contribution that has shaped the field of
       | human-computer interaction. The theory states that humans
       | interact with computers as if they are human, and is the
       | cornerstone on which all social human-machine communication
       | (e.g., chatbots, robots, virtual agents) are designed. However,
       | the theory itself dates back to the early 1990s, and, since then,
       | technology and its place in society has evolved and changed
       | drastically. Here we show, via a direct replication of the
       | original study, that participants no longer interact with desktop
       | computers as if they are human. This suggests that the CASA
       | Theory may only work for emergent technology, an important
       | concept that needs to be taken into account when designing and
       | researching human-computer interaction.
        
         | vlovich123 wrote:
         | Was the original paper ever replicated? Cause if not, that's
         | too strong a conclusion. A simpler explanation could be that
         | the original paper got something wrong and wasn't replicable.
        
         | weinzierl wrote:
         | _" This suggests that the CASA Theory may only work for
         | emergent technology[..]"_
         | 
          | I think we can see this unfold in real time with large language
          | model based chatbots like ChatGPT. At first they seem almost
          | human, and the initial reaction is to treat them like that:
          | always say "Thank you!" and perhaps get angry at them for being
          | wrong. It doesn't take long, though, to realize that the bot is
          | a bot, and even if it speaks human language it behaves
          | significantly differently from a real human. Then the human
          | behavior starts to change as well, and the bot is treated
          | differently.
        
           | pixl97 wrote:
            | Except we seem to see behavior where we get different/better
            | responses from LLMs if we ask "please".
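            | 
            | A minimal sketch of how one could eyeball that claim,
            | assuming the OpenAI Python client (model name and prompts
            | here are illustrative, not from any study):
            | 
            |   # pip install openai; needs OPENAI_API_KEY in the env
            |   from openai import OpenAI
            | 
            |   client = OpenAI()
            | 
            |   def ask(prompt: str) -> str:
            |       # Single-turn chat request; returns the reply text.
            |       resp = client.chat.completions.create(
            |           model="gpt-3.5-turbo",  # illustrative choice
            |           messages=[{"role": "user", "content": prompt}],
            |       )
            |       return resp.choices[0].message.content
            | 
            |   terse = ask("Explain recursion in one paragraph.")
            |   polite = ask("Please explain recursion in one "
            |                "paragraph. Thank you!")
            |   print(terse, polite, sep="\n---\n")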
        
             | chihuahua wrote:
             | And also, we get better answers from LLMs for solving
             | captchas if we claim that the hard-to-decipher letters were
             | written on our dear grandmother's Christmas ornament. I
             | find this quite amusing.
        
             | l33t7332273 wrote:
              | This makes sense if you think about ChatGPT as an ML model
             | and not as a sentient AI. Its training data would show that
             | asking nicely elicits better answers.
        
       | kmeisthax wrote:
       | >One main reason why no direct replication has been conducted
       | before now, is that the proposed underlying psychological basis
       | for the CASA Theory suggested that a short period of time (e.g.,
       | 30 years) should have no influence on the effect. Specifically,
       | the original authors postulated that the reason we respond to
       | social computers as if they have an awareness is because our
       | brains are not evolving at the same rate as technology; our
       | brains are still adapted to our early ancestors' environment4.
       | 
       | I find this hypothesis... shockingly naive, bordering on
        | pseudoscientific. Brains are there _to learn behaviors faster
       | than evolution can_. Neuroplasticity (as well as just the general
        | experience of _being human_) means that brains can un- and re-
       | learn things as needed rather than being stuck with the same
       | static behaviors that, say, a plant might have. No surprise the
       | CASA theory got falsified, and good on the researchers for doing
       | the necessary falsification work.
        
         | amelius wrote:
         | Perhaps they were talking about test subjects that haven't yet
         | been exposed to computers.
        
           | mrdatawolf wrote:
           | "Nass and Reeves make a point of stating in their methodology
           | that all the participants 'have extensive experience with
           | computers ... they were all daily users, and many even did
           | their own programming'"
        
         | KMnO4 wrote:
         | My goodness, how did that pass peer review?
         | 
          | Postulating that evolution is strictly the result of natural
          | selection (i.e., that our ancestral chain is solely responsible
          | for what we are capable of doing) is such an archaic view.
         | 
         | We don't have to "randomly mutate and wait for the inferior
         | genes to drop out of the gene pool" to see human adaption, and
         | there are a plethora of counterexamples to remove any doubt.
        
         | patcon wrote:
         | I say this with utmost respect (and a little bit of devil's
         | advocacy heh :) -- I might suggest your critique is a little
         | too self-assured.
         | 
         | Disclaimer: Not a neuroscientist, but a biochemist *shrug*
         | 
         | We recognize faces easily due to deep wiring. Some linguistic
         | behavior is deeply rooted as well, e.g., our ability to be
         | immersed in written or visual narratives of experiences that
         | are not our own ("aesthetic illusion") is an evolved extension
         | of "play". Both of these are related to deep brain structures,
         | not just culture. (Though HOW we play is certainly also in the
         | realm of culture.)
         | 
         | Lots of conclusions about brains can be rooted in there being
         | really foundational neural structures that can't simply be
         | rewired by culture. Just because culture exists, it doesn't
          | mean that conclusions rooted in biology are irrational. And if
          | the effect was reproduced across many cultures (which perhaps
          | it was not), it would be fair to assume it was rooted in some
          | deeper system.
         | 
         | tldr- I feel you are falling for hindsight bias. Respectfully,
         | I suggest we're all best-served to navigate this world by
         | cultivating a healthy dose of humility, and I say that more for
         | all the self-assured readers upvoting you for making a dunk.
         | 
         | I'm guessing the truth is somewhere in between :)
        
           | acomar wrote:
           | except no such neural structure has ever been found. humans
           | have been using tools for longer than we've been human --
           | without solid evidence that _this_ tool is interpreted as a
           | social actor, based on real neuroscience, this kind of claim
            | rooted in an evolutionary argument _is_ pseudoscience. people
            | have been making arguments from evolution to say all kinds of
            | nonsense things since Darwin (like justifying racial
            | hierarchies). which neural structure is posited to cause us to
           | humanize our tools?
           | 
           | if anything, the historical evidence points in the opposite
           | direction -- that people objectify far more than they
           | humanize, even when the cost is measured in hundreds,
           | thousands, or millions of lives. that's merely an
           | observation, not a hypothesis or a claim about what people
           | will do or about what they are capable of doing. we ought to
           | humanize more often.
        
             | danaris wrote:
             | "We haven't yet found a specific neural structure for
             | recognizing faces" is far from evidence that no such
             | structure exists. Our understanding of the brain still has
             | massive gaps in it, and I can testify from my own
             | experience working for a psychology & neuroscience
             | department (which includes one person particularly
             | specializing in perception, from the very basic "light
             | hitting the optic nerve" stage all the way to object
             | categorization and recognition) that we still have a lot to
             | learn in this area specifically.
             | 
              | It may very well be that there _isn't_ a brain structure
             | dedicated to this, and that would be fascinating, too! But
             | to denigrate the people doing their best to understand this
             | stuff 30 years ago as "pseudoscientific" just because they
             | made an assumption about how plastic the brain was without
             | our benefit of 20/20 hindsight is very much uncalled-for.
        
               | acomar wrote:
               | > "We haven't yet found a specific neural structure for
               | recognizing faces" is far from evidence that no such
               | structure exists.
               | 
               | proving a negative is, famously, quite hard. an unsolved
               | problem, even. facial recognition has a plethora of
               | evidence beyond an argument from evolution. the notion
               | that we humanize tools is one that, as yet, lacks that
               | evidence. I urge people to be more skeptical of arguments
               | from evolution. we understand very little about our
               | evolution and it's easy to insert our own worldviews and
               | beliefs into such arguments, allowing them to state
               | virtually anything we like in a plausible envelope with
               | the shape of a scientific argument. I'm not just calling
               | the argument about humanizing tools pseudoscience -- I'm
               | applying it equally to every other argument from
               | evolution that lacks other motivating evidence.
        
               | anonymouskimmer wrote:
               | I understand your original point was about a neural
               | structure involved in humanization, and not of facial
               | recognition, but am responding to the point you let the
               | interlocutor derail this to.
               | 
               | > > "We haven't yet found a specific neural structure for
               | recognizing faces" is far from evidence that no such
               | structure exists.
               | 
               | > proving a negative is, famously, quite hard.
               | 
               | Whether structure or not, we do have very strong evidence
               | that a _mechanism_ of facial recognition exists as there
               | are people who lack this mechanism to various degrees.
               | 
               | This article posits that we have indeed discovered a
               | specific neural structure involved in facial recognition:
               | https://www.aipc.net.au/articles/the-neuroscience-of-
               | facial-...
               | 
               | > The brain has even evolved a dedicated area in the
               | neural landscape, the fusiform face area or FFA
               | (Kanwisher et al, 1997), to specialise in facial
               | recognition. This is part of a complex visual system that
               | can determine a surprising number of things about another
               | person.
        
               | Legend2440 wrote:
               | Your brain is a structure for learning structures.
               | 
               | It doesn't need to have a built-in module for recognizing
               | faces; it wires up a face-recognition system on the fly,
               | from visual data.
        
             | wolverine876 wrote:
             | > humans have been using tools for longer than we've been
             | human
             | 
              | It depends on how you define 'human'. Our line split from
             | chimpanzees 7 million years ago (mya); we walked upright 6
             | mya. Tool use began ~2.58 mya (possibly 3.3 mya, depending
             | on some uncertain evidence).
        
             | anonymouskimmer wrote:
             | https://neurosciencenews.com/empathy-human-robots-
             | psychology...
             | 
             | > They performed electroencephalography (EEG) in 15 healthy
             | adults who were observing pictures of either a human or
             | robotic hand in painful or non-painful situations, such as
             | a finger being cut by a knife. _Event-related brain
             | potentials for empathy toward humanoid robots in perceived
             | pain were similar to those for empathy toward humans in
             | pain. However, the beginning of the top-down process of
             | empathy was weaker in empathy toward robots than toward
             | humans._
             | 
             | So basically it seems we potentiate empathy toward similar
             | kinds of beings and then maybe pattern-recognize that they
             | are not similar to clamp down on the potentiated empathetic
             | response?
        
         | anonymouskimmer wrote:
         | > Brains are there to learn behaviors faster than evolution
         | can.
         | 
          | Reflexes, whether innate, enforced, or purely learned, are
         | basically impossible to modify without either conscious
         | attention or a lot of training. I presume the CASA proponents
         | were arguing that these social responses are akin to social
         | reflexes.
         | 
         | Plants "learn" too. https://theconversation.com/pavlovs-plants-
         | new-study-shows-p...
         | 
         | The mechanistic hypothesis being:
         | 
         | > Plants may lack brains and neural tissues but they do possess
         | a sophisticated calcium-based signaling network in their cells
         | similar to animals' memory processes.
        
           | fredgrott wrote:
            | It is not just that: plant cell signaling is how animals got
            | neurotransmitters through evolution. In fact, many plants
            | have neurotransmitter analogs that evolved for their own
            | signaling needs.
        
       | layer8 wrote:
       | Sounds more like this is just an exhibit of the replication
       | crisis. I don't remember people interacting with computers as if
       | they are human in the 1990s -- at least the people who actually
       | did interact with computers on a regular basis.
        
         | pixl97 wrote:
         | I mean, it might have been before your time, but people have
         | been doing this since the 60's.
         | 
         | https://en.wikipedia.org/wiki/ELIZA
        
           | layer8 wrote:
           | I'm familiar with this of course, but it's conditioned on a
           | chat program that actually tries to appear human, and the
            | credulous reactions to ELIZA came mostly from lay people who
            | otherwise hadn't used computers before. Likewise, some people
           | -- famously including some software engineers -- believe that
           | certain AI chatbots are sentient, which also doesn't falsify
           | the linked study. (This argument holds regardless of whether
           | you consider AI chatbots to be sentient or not.)
        
           | crote wrote:
           | But what does that actually _show_?
           | 
           | Humans have been communicating with other humans via text for
           | _ages_. Fooling a human that the person on the other side of
            | text communication is also a human isn't too hard,
           | especially when they are not looking for signs of deception.
           | 
           | Even if they are aware it is a computer, humans will
           | anthropomorphize literally anything. Animals are obvious, but
            | we'll even extend it to simple objects like cars. We'll talk
           | to them with zero expectation of any response or sign of
           | sentience. It's a suspension of disbelief, really. It's not
           | any different than kids playing Cops And Robbers, people
           | performing a theatre play, or playing a board game with
           | friends.
           | 
           | CASA seems to suggest that humans will view _any_ computer
            | interaction as one between two humans. That's not just
            | interacting with chat bots, that's also something like
            | _opening the Configuration Panel_. If you ask me, that's a
           | massive stretch!
        
         | Probiotic6081 wrote:
         | Did you read the article? Did you understand it? CASA doesn't
         | postulate that users in the 90s interacted with computers as if
         | they were literally other humans.
        
           | snakeyjake wrote:
           | layer8:
           | 
           | I don't remember people interacting with computers as if they
           | are human in the 1990s -- at least the people who actually
           | did interact with computers on a regular basis.
           | 
           | You:
           | 
           | CASA doesn't postulate that users in the 90s interacted with
           | computers as if they were literally other humans.
           | 
           | The article:
           | 
           | The Computers Are Social Actors (CASA) theory is the most
           | important theoretical contribution that has shaped the field
           | of human-computer interaction. The theory states that humans
           | interact with computers as if they are human, and is the
           | cornerstone on which all social human-machine communication
           | (e.g., chatbots, robots, virtual agents) are designed.
           | However, the theory itself dates back to the early 1990s,
           | and, since then, technology and its place in society has
           | evolved and changed drastically.
           | 
           | Me:
           | 
           | Confused.
        
           | epcoa wrote:
           | From the original article: "The [CASA] theory states that
           | humans interact with computers as if they are human, and is
           | the cornerstone on which all social human-machine
           | communication (e.g., chatbots, robots, virtual agents) are
           | designed. "
        
         | make3 wrote:
          | I agree. This theory sounds ludicrous to me except for people
          | who maybe have absolutely zero technical know-how.
        
           | wolverine876 wrote:
           | That describes people using computers for the first time, a
           | widespread experience when the first experiment was done.
        
             | mrdatawolf wrote:
             | True, but they did say the experiment was done with people
             | who had been using computers regularly.
        
               | wolverine876 wrote:
               | Good point. Though they still could be pretty new - not
               | like people who grew up with them.
        
             | TeMPOraL wrote:
             | That doesn't make sense. Computers aren't, and weren't even
             | back then, some kind of alien technology, unlike anything
             | else people have experienced earlier. I can't imagine those
             | people treating their calculators, vacuums or cars as
             | persons. A computer is a qualitative jump, but _not that
             | big_.
        
               | wolverine876 wrote:
               | What is that based on? And what did they use that was
               | like a computer?
        
         | wolverine876 wrote:
         | > Sounds more like this is just an exhibit of the replication
         | crisis.
         | 
         | Do you have evidence of that, or are you saying it looks the
         | same?
         | 
          | The replication crisis on HN is the replication of these
          | comments in every discussion of social science research. The
          | papers that they tried to replicate were at least based on
          | evidence, and the conclusion was not that they were completely
          | wrong but that, for many papers (not all by any measure), the
          | conclusion was replicated while the evidence wasn't quite as
          | strong as in the original.
        
           | layer8 wrote:
           | There are two explanations for the discrepancy between the
           | original study and the present one:
           | 
           | 1) The behavior of people has changed substantively since the
           | first study. (Which is what the present study seems to be
           | concluding.)
           | 
           | 2) At least one of the studies wasn't representative and/or
           | didn't actually measure what it claimed to measure. (That is,
           | an exhibit of the replication crisis.)
           | 
           | What I'm saying is that explanation 2 seems much more
           | plausible to me, given that the results of the first study
           | don't match my recollection and experience at all.
        
             | wolverine876 wrote:
              | There are other explanations too, and a study flaw, even if
              | it exists, doesn't necessarily implicate the "replication
              | crisis".
             | 
             | > the results of the first study don't match my
             | recollection and experience at all.
             | 
             | 30 years ago you had some anecdotal impressions. 30 years
             | ago the researchers had a hypothesis, focused on this exact
             | question, designed an experiment, and collected evidence.
             | (And many since have found it valid.) 30 years later, we
             | have a detailed written report of their experience, and
              | your 30-year-old memory as posted on HN.
             | 
             | It's just too easy to say, 'that's not my experience'
             | without evidence. Everything is 'debunked' on that basis.
              | Now we have 30-year-old memories, too.
        
               | mvdtnz wrote:
               | You know the replication crisis is a real thing, right?
        
         | karaterobot wrote:
         | It seems like they found that the CASA effect was real _when
         | the technology was novel_. Once you get comfortable with
         | computers, it wears off.
        
         | ordu wrote:
         | _> Sounds more like this is just an exhibit of the replication
         | crisis_
         | 
          | Yeah. In the article we can find that _"the original study
         | recruited 30 Stanford University undergraduates (10 in each of
         | the original 3 conditions, see Procedure)."_
         | 
         | I wouldn't think twice before throwing away the results and
         | forgetting about them.
         | 
         | The replication study does better, recruiting 132 participants.
         | This is much harder to ignore.
        
         | svachalek wrote:
          | From what I can understand, the original finding is along these
          | lines: when the computer said "insert floppy disc 3", people
          | instinctively responded as if it were a person giving them that
         | order and subconsciously went through all the usual social
         | processing for that response - do they have authority over me?
         | is anyone else watching? etc. Not that they thought the
         | computer actually was capable of human interaction at a
         | conscious level.
         | 
         | This is more believable than the way most people are
         | interpreting it but I still have doubts. I was an adult in that
         | era, and what I remember is computers were just so much more
          | mechanical then. It was more like operating a microwave oven
          | than using today's much more sophisticated machines. It's hard
          | to imagine those simple error codes triggering human social
          | reactions.
        
           | layer8 wrote:
           | True. Though I would argue that people nowadays are generally
           | less submissive to authority than 30 years ago, hence that
           | might not be specific to computers.
        
       | anonymouskimmer wrote:
       | An alternate hypothesis is that CASA is still correct, but that
       | people start interacting with computers in a less than
        | compassionate manner. E.g. people may start to see themselves as
        | "wealthy" compared to the computer (or whatever the general
        | mechanism behind a reduction in empathy turns out to be):
       | https://greatergood.berkeley.edu/article/item/how_money_chan...
        
         | rossdavidh wrote:
         | It's still a social actor, but I'm higher ranking, it's a
         | peasant? Interesting, but hard to test.
        
           | anonymouskimmer wrote:
           | Maybe in the future for computers that are expected to make
           | decisions directly affecting the welfare of humans.
           | 
           | I think we would _start_ to see this in people defending the
           | decision of an AI in arguments with other people (such as for
            | automated vehicles). What it ultimately turns out to be, I
            | don't know.
        
       | dj_mc_merlin wrote:
        | Seems like possibly bad timing given the advances in AI.
        | ChatGPT/GitHub Copilot can already be used to interface with a
        | computer (there are some git repos floating about if one looks);
        | I think it's only a matter of time before someone makes an
        | actually useful AI interface.
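        | 
        | A rough sketch of what such an interface might look like (this
        | is a hypothetical example assuming the OpenAI Python client; the
        | repos mentioned above may well work differently):
        | 
        |   from openai import OpenAI
        |   import subprocess
        | 
        |   client = OpenAI()  # needs OPENAI_API_KEY in the env
        | 
        |   def suggest_command(request: str) -> str:
        |       # Ask the model to turn a natural-language request
        |       # into a single shell command (illustrative prompt).
        |       resp = client.chat.completions.create(
        |           model="gpt-3.5-turbo",
        |           messages=[
        |               {"role": "system",
        |                "content": "Reply with one POSIX shell "
        |                           "command and nothing else."},
        |               {"role": "user", "content": request},
        |           ],
        |       )
        |       return resp.choices[0].message.content.strip()
        | 
        |   cmd = suggest_command("show the five largest files here")
        |   print("Suggested:", cmd)
        |   # Always confirm before executing model output.
        |   if input("Run it? [y/N] ").strip().lower() == "y":
        |       subprocess.run(cmd, shell=True)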
        
         | make3 wrote:
         | tool use is already that.
        
         | danaris wrote:
         | If you think it seems like bad timing, I suggest you read
         | closer: They very specifically say that _emerging_ technologies
         | (like the recent advances in ML systems) _are_ still subject to
         | the CASA theory.
        
           | rossdavidh wrote:
           | My 18yo daughter, not tech illiterate but not particularly
           | interested in programming, has mentioned to me that she found
           | AI-generated artwork really interesting for about a month,
           | but it quickly became boring. I wonder if it comes from a
           | "wow what kind of creative person could come up with
           | something that unusual?" initial response, which quickly
           | gives way to boredom as one realizes that there is no
           | creative person behind there?
           | 
            | Also, I wonder if the period before "emerging" technologies
            | stop seeming interesting (e.g., like another person) is
            | getting shorter over time, like so much else?
        
             | layer8 wrote:
             | I suspect it's more because most AI-generated artwork looks
             | same-ish after a while. At least that's my own perception.
        
       | rezmason wrote:
       | In 2006 I shipped my iBook to Apple to fix its screen bezel. I
       | made a Flash animation where my laptop cheekily described its
       | issue, thanking the technicians in advance, and I made it launch
       | on startup.
       | 
       | The iBook came back with a factory reset. I restored from backup
       | and stopped anthropomorphizing my computers.
        
       | russellbeattie wrote:
       | Humans seem to naturally anthropomorphize anything which exhibits
       | systematic yet unpredictable or capricious behavior: Pets,
       | machines, mountains, oceans, weather, planets, ex-wives, etc. We
       | give them names and project a sense of agency to them, because
        | they must obviously act certain ways sometimes on purpose. It's
       | the reason we have gods, fairies and superstitions. It's why we
       | name our cars and our storms. It's just what we do.
       | 
        | So to me, it makes sense that the more we understand something,
        | and the better we can predict what it does and why, the less
        | likely we are to think of it as having some sort of inner being.
        
         | anonymouskimmer wrote:
          | > So to me, it makes sense that the more we understand
          | something, and the better we can predict what it does and why,
          | the less likely we are to think of it as having some sort of
          | inner being.
         | 
         | Pretty much the opposite with respect to pets and humans we are
         | acquainted or intimate with.
         | 
         | The sheer speed with which computers change (the modern
         | automated update cycles) might be a counter to any impulse we
         | have to anthropomorphize them. Even with animals we are much
          | more likely to anthropomorphize longer-lived animals than
          | shorter-lived ones. This could even be something like a
          | survival mechanism - don't become emotionally attached to
          | something that is with you for only a brief while.
        
       | mvdtnz wrote:
       | I've never heard this theory but it just sounds incredibly stupid
       | and unrealistic to me. The kind of thing some academic nonce who
       | never stepped foot off a university campus would come up with. I
       | don't treat any computer I've ever owned like it was a "social
       | actor". Computers are tools.
        
       ___________________________________________________________________
       (page generated 2023-11-15 23:02 UTC)