[HN Gopher] AI or Ain't: Eliza
       ___________________________________________________________________
        
       AI or Ain't: Eliza
        
       Author : john-doe
       Score  : 75 points
       Date   : 2024-01-07 11:20 UTC (11 hours ago)
        
 (HTM) web link (zserge.com)
 (TXT) w3m dump (zserge.com)
        
       | jll29 wrote:
       | Here is the pointer to the original Eliza paper
       | https://dl.acm.org/doi/10.1145/365153.365168
       | 
        | Note that Weizenbaum was an AI critic: his intention was not for
        | Eliza to pass the Turing test, but to show people that a clearly
        | unintelligent program based on primitive pattern matching can
        | _appear_ to behave intelligently.
       | 
        | He failed: his own secretary wanted to be left alone with the
        | software and typed in her personal problems. Work on Eliza
        | (1963-65, paper published 1966) is still mostly misunderstood
        | today.
        
         | leethargo wrote:
          | Not only his secretary; some psychiatrists also wanted Eliza
          | as a tool to scale up their clinical work.
        
           | moffkalast wrote:
           | Say, has anyone deployed Hartford's Samantha yet?
        
         | adestefan wrote:
          | Weizenbaum even wrote an entire book, Computer Power and Human
          | Reason: From Judgment to Calculation, on how AI is a crank term
          | for computer science. The basic premise is that humans are not
          | machines, so stop using language that treats them as such. In
          | particular, computers can decide what comes next, but only a
          | human can choose what to do.
         | 
         | The book also has one of the best and most succinct
         | descriptions of Turing machines and the theoretical
         | underpinning of computer science that I have ever read. Even if
         | you're an AI maximalist you should read the third chapter of
         | the book.
        
           | stavros wrote:
            | > In particular, computers can decide what comes next, but
            | only a human can choose what to do.
           | 
           | I don't understand this, all the programs I've ever written
           | make decisions based on some factors.
           | 
           | Are you talking about free will? If so, what is free will?
        
             | aatd86 wrote:
             | Doing things because you can and not because you have to?
             | Creative endeavors in the largest sense?
        
               | stavros wrote:
               | Does a program _have to_ do things? What _can_ it do?
                | What does a human have to do, or what can a human do?
        
               | aatd86 wrote:
               | Traditionally, a program is a series of instructions. The
               | program doesn't really act on its own.
               | 
               | Now, a program which is objective driven and can infer
               | from new inputs might be something else.
               | 
               | Just like humans try to maximize the stability of their
                | structures via a reward system. (It got slightly complex
                | and faulty at times, and the tradeoff between work and
                | reward is not always in favor of work because we do not
                | control every variable; hence procrastination, for
                | example, or addiction, which is not a conscious process
                | but neurochemically induced.)
        
               | stavros wrote:
               | But what does "act on its own" mean? If I give the
               | program some randomness over its next action, is that
               | "acting on its own"? When I'm at work, I act according to
               | a series of instructions. Am I not acting on my own?
               | 
               | This is a very philosophical discussion, but if I had an
               | infinitely-powerful computer and could simulate an entire
               | universe based on a series of instructions (physical
               | laws), would the beings in that universe that created
               | societies not be "acting on their own"?
        
               | aatd86 wrote:
                | Yes, as long as the computer chooses its next set of
                | instructions in order to maximize a given value (the
                | objective), I would say that it acts on its own - an
                | instruction set that was never defined by anyone else,
                | that is.
               | 
               | If the instruction set is limited and defined by someone
               | else, I believe it doesn't.
               | 
                | I think, re. the simulated universe, that for us, they
                | wouldn't have free will because we know causality (as
                | long as we are all-knowing about the simulation). But as
                | far as they are concerned, wouldn't they have free will
                | if they know that they don't know everything, and don't
                | know whether the future they imagine is realizable?
               | 
               | If they knew with certainty that something was not
               | realizable, they wouldn't bother, but since they don't
               | know, either they try to realize a given future or they
               | don't.
               | 
               | Partial information provides choice of action, therefore
               | free will.
        
               | pixl97 wrote:
               | >Partial information provides choice of action, therefore
               | free will.
               | 
               | So how would an agent based system connected to a multi-
               | modal LLM/AI fall into this?
        
               | arrrg wrote:
                | Am I ever doing things because I can and not because I
                | have to? Also, what mechanism determines which things I
                | want to do because I can do them? And isn't that
                | mechanism then just another part of the machine?
               | 
               | Just because it feels as though I do things because I can
               | doesn't mean that is actually true.
        
               | aatd86 wrote:
               | As long as you can imagine different possible futures and
               | decide upon which one you want to try and realize, I
               | think you have choice.
               | 
               | Choice stems from uncertainty, partial knowledge. It
               | might be an illusion for an observer outside of the
               | system, but as far as a participant within the system is
                | concerned, there is choice, and therefore free will.
               | 
                | I am writing this because I can, but I don't need to
                | do it. There are futures where I don't do it and do
                | something more rewarding instead. As long as I am
                | aware of the choices, I have free will.
        
               | vidarh wrote:
               | This is the compatibilist view. But if it is an illusion,
               | then that means the "choice" is computable and a computer
               | can create the same outcomes.
        
               | moffkalast wrote:
               | Just because we have quantum RNG in our heads that
               | doesn't make us automatically better. If anything it
               | makes us worse since we don't act on reason alone.
        
               | aatd86 wrote:
               | I don't know if there is a quantum rng or just an
               | inference machine that manages to recognize patterns
               | within input data and can do some math sometimes.
        
               | vidarh wrote:
               | How do you imagine that act of choosing happens in your
               | brain in a way that isn't computable?
        
             | johnnyworker wrote:
             | snippet from the WP article on the book:
             | 
             | > Weizenbaum makes the crucial distinction between deciding
             | and choosing. Deciding is a computational activity,
             | something that can ultimately be programmed. It is the
             | capacity to choose that ultimately makes one a human being.
             | Choice, however, is the product of judgment, not
             | calculation. Comprehensive human judgment is able to
             | include non-mathematical factors such as emotions. Judgment
             | can compare apples and oranges, and can do so without
             | quantifying each fruit type and then reductively
             | quantifying each to factors necessary for mathematical
             | comparison.
             | 
             | Okay, so what is judgement? I haven't read that particular
             | book and I don't quite remember his argument from
             | interviews and lectures I saw, so this might be wrong, but
              | I'd say it's, for example, saying "this is fair" when you
              | measure the slices you cut a cake into. That is,
             | calculating that they're of equal size is pure computation;
             | but there is no way to compute that when sharing cake with
             | your friends, the slices _should_ be equal.
             | 
             | Just like you can compute how much clean drinking water an
             | average or specific person needs a day, with at least some
             | accuracy, but when it comes to the question "should there
             | be life in the universe" or "should people die of thirst",
             | no computation could answer it. You could _choose_ to write
             | a program that decides it based on a random seed or a super
             | complex algorithm taking a billion factors into account,
              | and then that program would _decide_ the question, but
              | it's essentially still something _you_ did / chose.
        
               | vidarh wrote:
               | It's basically a religious view. For a "judgement" to be
               | non-computable, it'd need to come from some factor in the
                | human brain which violates known physics _and_ can't be
               | reproduced outside a human brain.
               | 
               | It's little more than arguing for a "soul" with no
               | evidence for any effect that can't be explained by cause
               | and effect.
        
               | johnnyworker wrote:
               | > For a "judgement" to be non-computable, it'd need to
               | come from some factor in the human brain which violates
                | known physics and can't be reproduced outside a human
               | brain.
               | 
                | You say this as if we are even close to understanding,
                | much less reproducing, the human brain completely, which
               | probably would have to include the web of relations with
               | all sorts of other living things that also go into the
               | judgements we make, and the emotions we have while making
               | them. Until you actually _do_ draw the rest of the owl,
                | it's not exactly "religious" to say there's no owl.
        
               | vidarh wrote:
               | No, it's an argument from logic that applies to _any_
               | claim that any given entity can do things that are not
               | computable.
               | 
               | > Until you actually do draw the rest of the owl, it's
               | not exactly "religious" to say there's no owl.
               | 
               | The "real owl" here is to assume the human brain does
               | something non-computable, in violation of all known
               | physics and logic.
        
               | johnnyworker wrote:
               | You cannot compute what you don't understand, and even if
               | you did by accident, you wouldn't _know_ you computed it,
                | as long as you don't understand what you're trying to
               | do. That seems obvious to me.
               | 
               | And "computable" and "computable for us" are very
               | different things. It's not about the machines or
               | algorithms we _might_ make one day, provided _that_ we
                | fully understand everything that goes into our thoughts
                | and emotions with nothing left unaccounted for, and
                | everything turning out to be countable; it's about the
                | ones we are actually making, back then and today, and in
                | some cases outsource our decisions to.
        
               | vidarh wrote:
               | You're misunderstanding the terms. For something to be
               | computable is very different from whether or not we know
               | or are presently able to compute it.
               | 
               | For something to be computable, it only needs to be
               | possible to show that it is logically possible, by e.g.
               | decomposing the problem into elements we know are
               | computable _or showing an example_.
               | 
               | The existence of the human brain _absent any evidence of
               | any supernatural element_ is strong evidence that human
                | reasoning is computable, and it's a reasonable,
               | testable, falsifiable hypothesis to make: If you want to
               | counter it "all" you need to do is to show evidence of
               | _any_ state transition in even a single brain that does
               | not follow known laws of physics. Just one.
               | 
                | Alternatively, even just coherently _describing_ a
                | decision-making process for which it is possible to
                | construct a proof that it wouldn't be computable using
                | known logic.
               | 
               | Either would get you a Nobel Prize, in either physics or
               | maths. Absent that, even just a testable _hypothesis_
               | that if proven would increase the likelihood of finding
               | either of the above would be a huge step.
               | 
               | In the absence of all of that, it's pure faith to presume
               | human reasoning isn't computable.
        
             | adestefan wrote:
             | A girl would like to ask a boy to the high school dance.
             | 
              | A computer can do all the calculations to decide if it's a
              | good idea. Given the inputs of the time they have spent
              | together, the number of glances passed between them in the
              | halls between classes, whether he has a date yet, etc., the
              | probability adds up to ask.
             | 
             | So the machine decides to ask.
             | 
              | The girl feels it. Has all the time they've spent together
              | made her feel a certain way? Maybe a weird tingle each
              | time their arms touch. Is that glance in the hall this
              | morning not just an accident, but him going a little out of
              | his way for her to notice? She's asked around and knows
              | that no one else has asked him, but does he really not have
              | a date yet? Can she overcome the bit of anxiety about
              | asking a boy to the dance? Will she be able to accept the
              | risk of rejection, knowing that the chances may be high he
              | says yes?
             | 
             | Only she can choose.
        
               | holoduke wrote:
               | All the tingles, feelings, anxieties and hesitations are
               | activities triggered by little programs that work
               | autonomously and are fully deterministic. The girl is
                | fooled.
        
               | adestefan wrote:
               | HN consistently reminds me of the park bench scene in
               | Good Will Hunting.
        
           | erikerikson wrote:
            | > humans are not machines
           | 
            | Aren't we? Causal chains upon our matter produce emergent
           | behaviors using the same physics and chemistry that our
           | mechanistic creations rely upon.
           | 
            | Certainly those behaviors and results are not as repeatable
            | and predictable as our clockworks, but that is the whole
            | point of the field of AI (as opposed to the marketing
            | corruption of the term that is currently in vogue, so GAI if
            | you prefer): to produce system and algorithm structures
            | designed with architectures and patterns more like our own.
           | 
           | Perhaps you believe in the ghost in the machine hypothesis?
           | The magical soul that is more than the emergent evolving
           | pattern produced across time by DNA replicators? That this
           | undefinable, unmeasurable spirit makes us forever different?
        
         | scotty79 wrote:
         | > He failed
         | 
         | I'd say he succeeded. It just seems that people are perfectly
         | content with just appearance of intelligence.
        
         | tempodox wrote:
         | All this "AI" hype is a constant reminder to me that you cannot
         | reason anybody out of something that they _want_ to believe.
          | People's need to believe in miracles is obviously stronger
         | than all reason.
        
       | nbzso wrote:
        | So the Turing test actually tests not the technology but the
        | level of intelligence of the user. We are doomed, it seems. :)
        
         | oneeyedpigeon wrote:
         | Definitely the main flaw of the Turing test -- having two human
         | actors taking part alongside one computer is just introducing
         | two too many variables :)
        
         | masswerk wrote:
         | To me, Turing's argument had always been that the attribution
         | of intelligence (or not) doesn't make much sense: rather than
         | being a question of any substance, it merely diffuses into a
          | matter of appearance. However, as things usually go, it has
          | become the holy grail for claiming "intelligence" (which really
         | should be used in this context in quotes only).
        
         | __MatrixMan__ wrote:
         | He originally made the argument about gender, not intelligence.
         | I think he was arguing for a whole class of properties for
         | which there's no difference between authenticity and convincing
         | fakery.
         | 
         | I think the point is less that there is a truth and we're too
         | dumb to figure it out, and more that in certain circumstances
         | we'll just have to accept a lower bar for evidence about
         | whether those properties apply.
         | 
         | It reminds me of how no class of computer can solve the halting
         | problem for itself. No matter how intelligent you are, there
         | will be holes like this in your certainty about some things.
        
           | pixl97 wrote:
           | Or another way to put this, it's not a binary problem, it's a
           | probability continuum.
           | 
            | Even the definition of 'human intelligence' is a continuum
            | from the smartest to the dumbest of us, and it doesn't even
            | stop there: it descends through all animal life.
        
             | __MatrixMan__ wrote:
             | I did some research to prove you wrong, because I don't
             | think continuum is the right concept, but it turns out that
             | Turing seems to agree with you. Quoting him in "Computing
             | Machinery and Intelligence":
             | 
             | > In short, then, there might be men cleverer than any
             | given machine, but then again there might be other machines
             | cleverer again, and so on.
             | 
             | So now I think you're both wrong :) Particularly I take
             | issue with the assumption that the "cleverer" relation is
             | transitive. We've only really studied a few relations in
             | this space:
             | 
              | - pushdown automata are cleverer than finite state
              | machines
              | 
              | - Turing machines are cleverer than pushdown automata
              | 
              | - humans are cleverer than Turing machines (I'd argue for
              | this, but others would disagree)
             | 
             | Presumably there are other points which we have overlooked
             | or not yet discovered. For instance, maybe something which
             | has the "memory" quality of a pushdown automaton, but lacks
             | the "state tracking" property of a finite state machine.
             | When compared with an FSM, such a thing would not be more
             | or less clever than it, it would just be clever in an
             | orthogonal way.
             | 
             | I strongly suspect that two intelligences (of greater power
             | than the theoretical machines that we yet have) could meet
             | and discover that they each have a capability that the
             | other lacks. This would be a situation that you couldn't
             | map onto a continuum--you'd need something with branches, a
             | tree or a dag or a topological space: something on which
             | the two intelligences could be considered cousins: neither
             | possessing more capabilities than the other, but each
             | possessing different capabilities. (Unlike the FSM example,
             | they would have to share some capabilities, otherwise they
             | couldn't recognize each other as intelligent).
             | 
             | Further, I suspect that in order to adequately classify
             | both intelligences as cousins, you'd have to be cleverer
             | than both. Each of the cousin-intelligences would be able
             | to prove among themselves that theirs is the superior kind,
             | but they'd also have to doubt these proofs because the
             | unfamiliar intelligence would be capable of spooky things
             | which the familiar intelligence is not.
        
               | pixl97 wrote:
               | I mean an evolutionary tree where intelligence features
               | are added in some branches makes sense.
               | 
                | I guess part of what I was trying to address is that we
                | like to think of intelligence as what people do and are
                | the pinnacle of, discounting anything that is not
                | covered by that.
        
               | __MatrixMan__ wrote:
               | I definitely agree that defining intelligence as what
               | humans do is a problematic practice. I guess I just
                | wanted to nitpick a little.
               | 
               | There's definitely a lot of "it's not real intelligence
               | because it's not human intelligence" going around these
               | days. Doesn't seem like it's going anywhere useful
               | though.
        
         | derbOac wrote:
         | I've often felt that a better version is not whether a person
         | can guess that it's AI or a human, but whether people behave
         | and feel differently with an AI or human.
         | 
          | That's vague and covers a universe of criteria -- mood,
          | satisfaction with the conversation, actual behavior and so
          | forth -- but I also think it is a more realistic gauge of AI
          | performance. It's probably unattainable, but that's not
          | necessarily a bad thing. If it is attainable with any
          | confidence, then it's a pretty powerful AI.
         | 
         | There are probably some people who would be ok with some AI for
         | some purposes.
        
           | masswerk wrote:
            | In a sense, the question of the "intelligent machine" is
            | somewhat self-contradictory: To _us_, the question of
            | intelligence matters as a precondition or qualifying term
            | for to what extent, and with what probability and prospects,
            | we may pose an appeal to sympathy, morals and ethics. (In
            | other words, it is not about trust in any realistic
            | faculties, but about judgement - and then, to what extent we
            | may trust in this.) However, this prospect doesn't fit well
            | with our expectations towards machines, which are all about
            | repeatability and reproducible results within given
            | tolerances... (Compare ChatGPT's so-called winter depression
            | and the arising need to plead and argue with the device for
            | any complex results. As the device gains in the emotional
            | domain, its worth in the application domain radically
            | decreases.)
        
       | master-lincoln wrote:
       | > Clearly, Eliza is not an AI
       | 
       | Sadly the author doesn't elaborate on this. I thought nowadays
        | 'AI' is a synonym for 'algorithm', which would fit ELIZA.
       | 
       | Is there an accepted definition of the word AI?
        
         | perthmad wrote:
         | Any algorithm you do not understand.
        
           | mysterydip wrote:
           | So, any of my own code I haven't looked at for a few months?
           | :)
        
           | throwaway4aday wrote:
           | Including the one in your head.
        
         | maweaver wrote:
          | Seems like the modern definition is something along the lines
          | of an algorithm whose behavior depends on data which was itself
          | machine-generated, rather than hand-created by a human, like
          | Eliza's rules.
        
         | oneeyedpigeon wrote:
         | We've been misusing the term ever since observing the ghosts in
         | Pac Man.
        
       | sixothree wrote:
        | I remember this being written in BASIC for the C64. Not sure if I
       | had the real Eliza or a clone. But it was fun to look at all of
       | the canned responses and try to get it to respond with them.
        
         | smcameron wrote:
         | Probably a variant of the Eliza program in "More BASIC Computer
         | Games", see p. 56 of this PDF:
         | https://ia802707.us.archive.org/33/items/More_BASIC_Computer...
        
       | aldousd666 wrote:
        | Eliza is meant to be an illustration of the problem. In good
        | old-fashioned AI style, it illustrates the fact that you need
        | another if statement for every new kind of construct you want to
        | simulate. You simulate each thing, like say turning a verb into
        | a gerund, by writing a specific "gerundification" routine,
        | another to swap the Mes to Yous, etc. This isn't how people
        | think, nor how most modern AI works. It is totally different
        | from just looking at the world, distilling patterns from it, and
        | using those patterns as the basis for a response. To teach a
        | modern AI new stuff, you don't have to write another if
        | statement. They have similar intentions, and at some resolution
        | or distance they are trying to do similar things. However, they
        | work in totally different ways, and the new dynamic generative
        | AI strategy that learns from input, as opposed to just
        | symbolically transforming it syntactically, is a totally
        | different paradigm. I don't care whether you call it AI or not.
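        | 
        | A rough Python sketch of that one-routine-per-construct style
        | (purely hypothetical code, not taken from Eliza or any real
        | system, just to show the shape of it):
        | 
        |     def gerundify(verb):
        |         # naive "gerundification": "think" -> "thinking";
        |         # real English needs many more special cases
        |         # ("run" -> "running", "make" -> "making")
        |         return verb + "ing"
        | 
        |     def respond(sentence):
        |         # every new construct gets its own hand-written branch
        |         s = sentence.lower()
        |         if s.startswith("i want to "):
        |             verb = s[len("i want to "):].split()[0]
        |             g = gerundify(verb)
        |             return "Why is " + g + " important to you?"
        |         if "mother" in s:
        |             return "Tell me more about your family."
        |         return "Please go on."
        | 
        |     print(respond("I want to travel more"))
        |     # -> "Why is traveling important to you?"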
        
         | masswerk wrote:
          | With the small reservation that this is not how Eliza works.
          | Eliza sits on top of MAD/SLIP, which does all the heavy work
          | and provides lists and integer indexes, which is what Eliza
          | processes. This allows Eliza to work on decomposition rules,
          | which isolate keywords by position and context, transformation
          | (composition) rules to recombine elements, and links between
          | the two. Meaning, the model is much more topological than
          | this. (Arguably, this is closer to regular expressions than to
          | if-else trees.)
         | 
         | However, this isn't what Eliza is all about. It's rather about
         | the question, how little do you actually need in terms of
         | knowledge, world model, or rule sets to give the impression of
         | an "intelligent" (even sympathetic) participant in a
         | conversation (as long as you're able to constrain the
         | conversation to a setting, which doesn't require any world
         | knowledge, at all.) To a certain degree, it is also about how
         | eager we are to overestimate the capabilities of such a partner
         | in conversation, as soon as some criteria seem to be satisfied.
         | Which is arguably still relevant today.
         | 
         | The rule set, BTW, is actually small, just 3 pages in a
         | printout, achieving a surprising generality (or rather,
         | appearance thereof) for its size. Compare:
         | https://cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1...
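          | 
          | A minimal sketch of that decomposition/reassembly mechanism
          | in Python (made-up rules and reflections, nothing like the
          | original MAD/SLIP script, only to show the shape of it):
          | 
          |     import re, random
          | 
          |     # first/second-person swaps, so fragments can be echoed
          |     REFLECT = {"i": "you", "me": "you", "my": "your",
          |                "am": "are", "you": "I", "your": "my"}
          | 
          |     # keyword rules: a decomposition regex plus reassembly
          |     # templates that reuse the captured fragment
          |     RULES = [
          |         (re.compile(r".*\bmy (.+)", re.I),
          |          ["Your {0}?", "Why do you say your {0}?"]),
          |         (re.compile(r".*\bi am (.+)", re.I),
          |          ["How long have you been {0}?"]),
          |     ]
          |     FALLBACK = ["Please go on.",
          |                 "What does that suggest to you?"]
          | 
          |     def reflect(fragment):
          |         words = fragment.lower().split()
          |         return " ".join(REFLECT.get(w, w) for w in words)
          | 
          |     def respond(line):
          |         for pattern, templates in RULES:
          |             m = pattern.match(line)
          |             if m:
          |                 reply = random.choice(templates)
          |                 return reply.format(reflect(m.group(1)))
          |         return random.choice(FALLBACK)
          | 
          |     print(respond("Well, my boyfriend made me come here"))
          |     # e.g. -> "Your boyfriend made you come here?"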
        
           | pixl97 wrote:
           | >how eager we are to overestimate the capabilities of such a
           | partner in conversation, as soon as some criteria seem to be
           | satisfied. Which is arguably still relevant today.
           | 
           | Honestly, AI shouldn't be the takeaway point here, but how we
           | do the same for politics.
        
             | masswerk wrote:
             | On a not so serious note, most political careers seem to be
             | built on public utterances that seem to be generated by a
             | rule set that could fit into 3 pages, triggered by a
             | handful of keywords or trigger phrases, also known as
             | talking points. With the advent of the so-called culture
             | wars, most of this is also increasingly context-free and
              | doesn't require much world knowledge. Users, err, voters
             | will fill in the gaps eagerly, each according to their
             | respective phantasies and understanding. To the point that
             | Eliza may eventually become a worthy contestant. An
             | approval rate of 27% is certainly a good starting point...
        
       | seanwilson wrote:
       | > One of the first computer programs that successfully passed the
       | Turing test was Eliza. Created in 1966 by Joseph Weizenbaum,
       | Eliza skillfully emulated the speech patterns of a
       | psychotherapist in its conversations.
       | 
       | Why was the Turing test still relevant after this? Didn't this
       | indicate it was very flawed test? Or it was hard to come up with
       | a better test?
        
         | recursivecaveat wrote:
         | I can find no reference of an actual Turing test being done for
         | Eliza. If you look at the link from the article it is clearly
         | demonstrably failing their (different, and more difficult to
         | interpret, but still fair I thiiiink) runs today as well. Note
          | that people _constantly_ _willfully_ misinterpret what a Turing
          | test is.
          | 
          | A Turing test means you enter into two conversations. Then you
         | pick which one was with a computer. If people answer wrong 50%
         | of the time, the computer is indistinguishable, hence it
         | passes. Note that it is not "People get wrong whether their
         | single conversation is talking to an AI >50% of the time" and
         | it is definitely not "sometimes people don't realize they're
         | talking to an AI". In particular people constantly write about
         | the latter because it generates clicks.
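          | 
          | For concreteness, a toy scoring sketch of that paired setup
          | in Python (made-up verdicts and threshold, nothing from the
          | paper):
          | 
          |     def passes_paired_test(verdicts):
          |         # one bool per judge: True when the judge correctly
          |         # picked which conversation was the machine
          |         rate = sum(verdicts) / len(verdicts)
          |         # indistinguishable means judges do no better than
          |         # chance; 0.55 is an arbitrary toy threshold
          |         return rate <= 0.55
          | 
          |     votes = [True, False, False, True, True, False]
          |     print(passes_paired_test(votes))  # 3/6 correct -> True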
        
         | canjobear wrote:
         | Because Eliza didn't pass the Turing Test. It is trivially easy
         | to trip it up.
        
       | throwaway4aday wrote:
       | > Interestingly, Eliza still outperforms ChatGPT in certain
       | Turing test variations.
       | 
       | I see we have a new entry for the 2024 Lies of Omission award.
       | 
       | The article linked to plainly shows that Eliza only beats
        | ChatGPT-3.5 and is in the bottom half when ranked against a
        | variety of different system prompts. An excellent ass-covering
       | strategy that relies on the reader not checking sources.
       | 
       | An honest author would have actually quoted the article saying:
       | 
       | > GPT-4 achieved a success rate of 41 percent, second only to
       | actual humans.
       | 
       | instead of constructing a deliberately misleading paraphrase.
        
         | masswerk wrote:
          | Hum, note that this was not an argument about or against GPT,
          | but about the "unreasonable" success of a, by all standards,
          | primitive algorithm that manages to (somewhat) get away with
          | it by crafting the pretext and context of the conversation.
          | By no means, on the other hand, could I read this article and
          | understand it as claiming any superiority over modern
          | applications.
         | 
         | (Nobody with even the crudest understanding of the principles
         | of Eliza could claim this, and the article clearly demonstrates
         | a detailed understanding. Disclaimer: I wrote the JS
         | implementation linked in the article, many years ago.)
         | 
         | Edit: The question rightfully raised - and answered - by Eliza,
         | which is still relevant today in the context of GPT, is: does
         | the appearance of intelligent conversation (necessarily) hint
         | at the presence of a world model in any rudimentary form?
        
           | throwaway4aday wrote:
           | Several people in this thread appear to have misunderstood
           | due to the way this article was written.
        
         | notahacker wrote:
         | "GPT-4 achieved a success rate of 41 percent, second only to
         | actual humans" also feels like a (much bigger) lie of omission
         | looking at the original paper. GPT4's performance was in the
         | range of 6% to 41%, Eliza's 27% score sat in the upper middle
         | of that range, and considering the bots tested consisted of 8
         | GPT4 prompts, 2 GPT3.5 prompts and a naive script from the
         | 1960s, GPT4 would have had to be remarkably consistently
         | inhuman not to finish "second only to humans" with its highest
          | scoring prompt.
         | 
         | The blog appears to have been updated to specify GPT3.5, but
         | the original version was accurate.
         | 
         | The paper itself is interesting as it covers the limitations
         | (it has big methodological issues), how the GPT prompts
          | attempted to override the default ChatGPT tone, and reasons
          | why ELIZA performed surprisingly well (some thought it was so
         | uncooperative, it must be human!)
         | https://arxiv.org/pdf/2310.20216.pdf
        
           | throwuxiytayq wrote:
           | The example ELIZA responses in the paper are so laughably bad
           | and trivial to pick up, I'm not convinced the human
           | interrogators were sober/conscious/awake during the
           | experiment.
        
             | notahacker wrote:
             | tbf the human side of those conversations isn't much
             | better. I think if someone tried prompt injection hacks on
             | me I'd be tempted to be politely obtuse to troll them right
             | back.
             | 
             | Turing's version involves experts who definitely aren't in
             | the same room waving to each other, but the fundamental
              | problem is it isn't a particularly good test.
        
               | bandrami wrote:
               | Is there a name for the reverse Turing test? How can a
               | Python script convince me it's not actually a human?
        
           | pixl97 wrote:
            | Yea, it's really hard to get GPT to sound human because the
            | RLHF really wants to let you know it's not a human.
            | 
            | GPT4 + an RLHF that was trained to think it was human would
            | be a much different beast.
        
             | mewpmewp2 wrote:
              | Yeah, GPT4 is not trained to beat the Turing test, it is
              | trained to be an AI assistant.
              | 
              | Imagine you take a human and train them to be an AI
              | assistant from the time they were a baby. I imagine their
              | behaviour would also be very odd compared to average
              | people.
        
       | dboreham wrote:
       | It turns out, 50 years later, that Eliza was on the right track.
        
       | acidburnNSA wrote:
       | So was Eliza from 1966 more intelligent than Dr SBAITSO from
       | 1990?
       | 
       | https://en.wikipedia.org/wiki/Dr._Sbaitso
        
         | the_af wrote:
         | Wasn't SBAITSO pretty much the same idea as Eliza, only using
         | speech synthesis with a Soundblaster?
        
           | acidburnNSA wrote:
            | Yes. I'm surprised to hear Eliza passed a Turing test.
           | SBAITSO was fun but pretty clearly not a human.
        
             | NavinF wrote:
              | Eliza never passed a Turing test. Nobody tried to test
              | Eliza because they knew what the result would be.
        
               | acidburnNSA wrote:
               | I'm responding to the quote in TFA:
               | 
               | > One of the first computer programs that successfully
               | passed the Turing test was Eliza.
               | 
               | I haven't studied the history of it.
        
               | the_af wrote:
               | Mentioned in arstechnica to my surprise, but do note the
               | paper wasn't peer reviewed and they mention flaws in the
               | methodology.
               | 
               | I cannot believe anyone passingly familiar with ELIZA
               | would be fooled by it.
        
       | scotty79 wrote:
        | Can one see the original Eliza source code anywhere?
        
         | proamdev123 wrote:
         | There is a version of it in Emacs, called up with `M-x doctor`.
         | 
         | The source is in `doctor.el`.
         | 
         | https://www.emacswiki.org/emacs/EmacsDoctor
        
         | RugnirViking wrote:
          | We had it running in terminals when I used to work at The
          | National Museum of Computing in the UK (on machines where you
          | can just pull the full source up from floppy disk).
        
       | nsajko wrote:
       | The article is missing the interesting bits, so here are the
       | relevant Wikipedia pages:
       | 
       | https://en.wikipedia.org/wiki/ELIZA
       | 
       | https://en.wikipedia.org/wiki/ELIZA_effect
       | 
       | https://en.wikipedia.org/wiki/Joseph_Weizenbaum
        
       | drakonka wrote:
       | Many years ago I spent a lot of time on a website called
       | Personality Forge, where you would create chat bots very similar
       | to this and set them loose. They could then talk to other people
       | or each other and you could review the transcripts. At one point
       | I even entered my chat bot in The Chatterbox Challenge. It was so
       | much fun to work on this thing at the time, but when I
       | rediscovered it years later[0] I was mostly just disappointed in
       | how "fake" all that effort was.
       | 
       | Now here I am talking about life, the universe, and everything
       | with ChatGPT. It makes me both inexplicably happy/hopeful and
       | simultaneously weirdly melancholic.
       | 
       | [0] https://liza.io/its-not-so-bad.-we-have-snail-babies-
       | after-a...
        
         | bee_rider wrote:
         | The great thing about being a teenager or kid is that you don't
         | know why the grownups don't think your project is worth doing,
         | so you just do it. Even if it doesn't change the world (most
         | things don't, after all) you can still learn something and have
         | fun.
        
       ___________________________________________________________________
       (page generated 2024-01-07 23:00 UTC)