[HN Gopher] In experiment, AI successfully impersonates famous philosopher
       ___________________________________________________________________
        
       In experiment, AI successfully impersonates famous philosopher
        
       Author : RafelMri
       Score  : 31 points
       Date   : 2022-07-27 18:02 UTC (4 hours ago)
        
 (HTM) web link (www.vice.com)
 (TXT) w3m dump (www.vice.com)
        
       | earthboundkid wrote:
       | Better link:
       | http://schwitzsplinters.blogspot.com/2022/07/results-compute...
        
         | wzdd wrote:
         | From that link (which is a blog post by the researcher):
         | 
         | > there's a cleverness and tightness of expression in Dennett's
         | actual answer that's missing in the blander answers created by
         | our fine-tuned GPT-3.
         | 
          | I found this a very interesting observation. Dennett's answers
         | contain a lot of information in few words. They're also quite
         | witty. The GPT-3 responses (when they even address the
         | question) sound like talking-point answers from someone who has
         | read quite a bit in the area.
        
       | causi wrote:
       | Saying an AI's output passed as human in one of these absurdly-
       | restricted applications is like saying someone couldn't tell the
        | difference between Da Vinci and your four-year-old's drawing
        | when you flashed the picture at them for a quarter of a second.
        
         | Blahah wrote:
         | I don't think that's accurate at all. It's more like you
         | couldn't tell the difference between a previously unseen, but
         | real, Da Vinci sketch, and a fake one that was drawn to mimic
          | every aspect of his style. There's nothing similar to a
          | four-year-old here - more like an art forger with a deep
          | understanding of the artist.
        
           | robertlagrant wrote:
           | Presumably the analogy is that only hearing a phrase from a
           | philosopher is like seeing a picture for a fraction of a
           | second.
        
       | kelseyfrog wrote:
       | Not too impressive. I wouldn't really give it much thought until
       | it learns to misinterpret Foucault.
        
       | alisonatwork wrote:
        | This reminds me of a very funny website that used to exist in the
        | 1990s called Forum 2000[0]. Readers would submit questions
       | and "AIs" modeled on the personalities of famous philosophers
       | would answer, occasionally getting into their own philosophical
       | bun fights. It was all a joke by some CMU students I think. I
       | can't find the site online any more (it used to be
       | www.forum2000.org), but with a bit of luck one of the Wayback
       | Machine backups has the content.
       | 
       | [0] https://everything2.com/title/Forum+2000
        
       | xhevahir wrote:
       | This lends some credence to Thomas Nagel's remark that Dennett's
       | work is basically Gilbert Ryle plus _Scientific American_.
        
       | Der_Einzige wrote:
       | One of the demos I've delivered for my $MEGACORP$ is showing how
       | to fine-tune large scale language models to generate text on our
       | service. The demo I used was to create a "Plato" model by fine-
       | tuning GPT-2 on his text "Timaeus", followed by a "Stirner" model
       | trained on his work "The Unique and its Property".
       | 
        | I finish the demo by training "Chimera" models: first a model
        | trained on the two texts simply concatenated, and then another
        | trained on the two books interleaved, paragraph by paragraph.
        | 
        | I find the results of a "Chimera" model far more fascinating. I
        | can arbitrarily combine the texts of a set of philosophers and
        | hopefully get something totally different - something more than
        | the sum of its parts...
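        | 
        | The data prep is the only non-standard part. A minimal sketch,
        | assuming the stock HuggingFace run_clm.py example script and
        | illustrative file names (not my actual demo code):
        | 
        |     # build_chimera.py - interleave two books paragraph by
        |     # paragraph into a single training corpus
        |     from itertools import chain, zip_longest
        | 
        |     def paragraphs(path):
        |         with open(path, encoding="utf-8") as f:
        |             return [p.strip() for p in f.read().split("\n\n")
        |                     if p.strip()]
        | 
        |     plato = paragraphs("timaeus.txt")
        |     stirner = paragraphs("the_unique_and_its_property.txt")
        | 
        |     # interleave P1, S1, P2, S2, ...; zip_longest pads the
        |     # shorter book with None, which the filter drops
        |     chimera = [p for p in
        |                chain.from_iterable(zip_longest(plato, stirner))
        |                if p is not None]
        | 
        |     with open("chimera.txt", "w", encoding="utf-8") as f:
        |         f.write("\n\n".join(chimera))
        | 
        | Fine-tuning on the result is then the standard causal-LM recipe,
        | e.g.:
        | 
        |     python run_clm.py --model_name_or_path gpt2 \
        |         --train_file chimera.txt --do_train \
        |         --output_dir chimera-gpt2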
        
       | peter303 wrote:
       | I think this experiment shows that humans overestimate their
        | innate creativity. Most of our thoughts build upon earlier
        | thoughts and experiences. A sophisticated language model can
        | emulate most of our routine thoughts. The philosopher Gurdjieff
        | and Zen philosophers say we drift through life in low awareness
        | and in mental ruts. They propose mental exercises to improve this.
        
         | fleetwoodsnack wrote:
         | I mean, not really. Philosophical and legal texts are
         | particularly emulateable because of their logically necessary
         | formulaic structures and conventions.
         | 
         | And in any case, this was a test of mimicry, not of creation so
         | your conclusion is a bit of a nonsequitur, regardless.
        
         | Viliam1234 wrote:
          | > The philosopher Gurdjieff and Zen philosophers say we drift
          | through life in low awareness and in mental ruts. They propose
         | mental exercises to improve this.
         | 
          | Then it's quite ironic that those philosophers are among the
          | easiest for the computer to emulate.
        
           | elefanten wrote:
           | What's the irony? Those deepest in the rut know it the best!
        
         | Animats wrote:
         | > I think this experiment shows that humans overestimate their
         | innate creativity.
         | 
         | Yes. GPT-3 teaches us that a sizable fraction of "serious"
         | writing is just remixing previously written content. No
         | underlying insight required. This is embarrassing to humans.
         | 
         | Note where this won't work: directions on how to do something
         | real. If you used GPT-3 to generate auto repair manuals, you'd
         | get plausible manuals, but they'd be wrong.
        
           | joe_the_user wrote:
           | _GPT-3 teaches us that a sizable fraction of "serious"
           | writing is just remixing previously written content. No
           | underlying insight required. This is embarrassing to humans._
           | 
            | I think quite a few philosophers and writers would
            | acknowledge there's nothing new under the sun.
            | 
            | The way I see it, what even good writers do is usually a
            | loose weaving together of things they have already read,
            | along with some customization. It's only occasionally - very
            | roughly guesstimating, every couple of paragraphs - that this
            | loose weaving needs to become tight and exact. That's my
            | working hypothesis for why GPT-3 sounds senseless every
            | paragraph or two.
        
           | naasking wrote:
           | Yes, new writing doesn't require new insight. That could be
           | fine though, because insight is often in readers rather than
            | writers, i.e. an explanation phrased as X might enlighten
            | some set of people P0, but phrased as Y might enlighten a
            | disjoint set of people P1.
        
           | kingkawn wrote:
            | Rather than embarrassing, I think it is liberating to cast
            | aside our absurd sentimentality about our minds and see them
            | as the tools of environmental adaptation that they are.
        
           | mikkergp wrote:
           | I only think it's embarrassing to humans if you are
           | particularly attached to human exceptionalism. It makes me
           | think about how contextual information is. I think it's about
           | who you are, who they are and how you relate. Plato and
           | Aristotle are interesting because the history of human
           | thought is interesting, not because anything they said
           | individually was particularly unique. I read certain blogs
           | because of the way that person combines ideas is compatible
           | with the kinds of ideas I want to be combined, or my
           | background, or their sense of humor. I think one big problem
           | with AI in the future will be the massive explosion in signal
           | to noise ratio. Trying to differentiate between AI and Real
           | is mainly only interesting in it's significance around AI
           | (and yes, misinformation, etc.).
           | 
           | Maybe we'll go back to reading local papers or listening to
           | local bands when that's the only way to differentiate.
        
           | r_hoods_ghost wrote:
           | Then again you have copilot which is very explicitly
           | producing directions on how to do something, albeit for a
           | computer rather than a human, and fairly successfully to. Has
           | anyone produced an AI that can write IKEA furniture assembly
           | instructions yet?
           | 
           | Also "remixing" content is arguably how we learn. It's why we
           | write essays and do problems at school and university. It
           | isn't until graduate level that people start generating or
           | synthesizing new knowledge rather than restating it.
        
             | golergka wrote:
             | Copilot shines when it's used alongside correctly utilized
             | strong type systems that catch many possible failures, but
             | human languages don't have anything like it.
        
         | AshamedCaptain wrote:
         | To be honest, I have played enough of these games where they
         | ask you "was it $RANDOM_FAMOUS_PERSON who said this, or
         | Hitler?" that I know how terrible it would be to actually
         | extract any conclusions from this game. It's not that
          | $RANDOM_FAMOUS_PERSON was close in thought to Hitler (which is
          | what these popular games inevitably try to claim). It's not
          | that you suck at distinguishing people from Hitler wannabes
          | (which is also often claimed). It's just that if you select
          | the quotes
         | well enough, most speeches have large swaths of what
         | coincidentally amounts to the same writing. It's like the
         | birthday paradox of prose.
         | 
          | That does not mean that most speeches express entirely the
          | same thoughts, much less that most famous persons all give the
          | same speeches, Hitler included.
         | 
         | The same ML model would not even approach passing the Turing
         | Test. How can anyone claim it can "impersonate" a famous
         | philosopher? This is just absolutely misleading clickbait.
        
           | FabHK wrote:
           | +1 for "birthday paradox of prose" - nicely said.
        
       | FabHK wrote:
       | > This experiment was not intended to see whether training GPT-3
       | on Dennett's writing would produce some sentient machine
       | philosopher; it was also not a Turing test, Schwitzgebel said.
       | 
       | How was it not a Turing test? Because it lacks interactivity?
        
         | joe_the_user wrote:
         | Yes, lack of interactivity.
         | 
         | Computers with explicit or implicit limits on their
         | interactivity have been convincing people they're human for a
         | long time.
         | 
          | It seems like the only thing that's hard for robots is when
          | someone uses language to set up a different structure of
          | interaction - if you say, "draw a picture with such-and-such
          | shapes, then tell me what it looks like" and similar things.
        
       | kingkawn wrote:
       | I wonder if at some point the police will generate individual
       | language models based on the collected written material from a
       | person under investigation and then 'interrogate' the model for
       | insights into the subject.
        
         | bell-cot wrote:
         | That seems far more useful for judging the authorship of
         | unattested writings than for judging (say) whether some subject
         | robbed the bank.
         | 
         | But Just In Case - I am innocent, the butler did it, and my
         | heart condition would make the get-away sprint seen in the
         | security camera footage physically impossible for me.
        
       | blt wrote:
        | I got 8/10 correct. I don't read philosophy and I'd never heard
        | of Dennett before now.
       | 
       | I used the following techniques:
       | 
       | - If the question has more than one part, the answer should
       | address every part.
       | 
       | - The answer should not repeat itself or be too rambling.
       | 
        | - The answer should not contradict itself. For example, one
        | response said Jerry was "never... less than serious" but also
        | wrote a "hilarious parody".
       | 
       | - The answer should not recommend a book. This demonstrates a
       | lack of context. A lot of people might recommend books when asked
       | these questions in a web forum or in person. But if you know that
       | your answers are going into a quiz to distinguish your own
       | thoughts from AI, you would probably focus on conveying your
       | opinions and not referring to other material. (This is kind of
       | cheating, since I'm using side channel information outside the
       | question and answer texts.)
       | 
       | - Answers with a "yes, and" flavor, that make a surprising but
       | relevant connection to a new idea not mentioned in the question,
       | are more likely to be human. For example, the joke (?) about
       | having a baby in the robot question, or mentioning the use of
       | religion as a tool for social control in addition to its origin
       | as soothing people about frightening unknowns.
       | 
       | - If multiple answers repeat a theme that is tenuously connected
       | to the question, they are probably samples from GPT. For example,
       | mentioning Copernicus in the question about evolution. (This
       | would be harder if we only saw one GPT response!)
       | 
       | These techniques don't require knowledge of Dennett's work. I
       | guess that's consistent with blog readers doing about as well as
       | experts.
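        | 
        | A couple of these checks are mechanical enough to script. A
        | rough sketch in Python (thresholds are illustrative guesses,
        | not tuned values):
        | 
        |     from collections import Counter
        | 
        |     def repeats_itself(answer, n=5, threshold=2):
        |         # flags an answer that reuses the same n-word phrase
        |         words = answer.lower().split()
        |         ngrams = Counter(tuple(words[i:i + n])
        |                          for i in range(len(words) - n + 1))
        |         return any(c >= threshold for c in ngrams.values())
        | 
        |     def shared_themes(answers, min_hits=2):
        |         # finds longer words recurring across several answers
        |         # to one question (e.g. "Copernicus" in multiple
        |         # GPT samples)
        |         word_sets = [set(a.lower().split()) for a in answers]
        |         counts = Counter(w for ws in word_sets for w in ws
        |                          if len(w) > 6)
        |         return [w for w, c in counts.items() if c >= min_hits]
        | 
        | The real judgment calls (wit, self-contradiction, the "yes,
        | and" flavor) are the parts I couldn't script.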
        
       | allears wrote:
       | Impersonating a philosopher sounds easy, especially if the AI was
       | trained on her writings. It's simply a matter of stringing
       | together phrases, much like Eliza could do years ago.
        
         | ben_w wrote:
         | One of my memories of my teenage years is a fellow highschool
         | student stringing together random stereotypical nerd words. I
         | assume he was mocking me.
         | 
          | Hollywood gets most professions wrong: Maverick's stunts in
          | Top Gun could carry penalties up to death, depending on the
          | exact political situation; nobody would send miners into space
          | instead of training astronauts; and the CSI TV shows have very
          | little in common with actual crime scene investigation.
         | 
         | It's easy to impersonate any group when the audience is
         | unfamiliar with that group.
         | 
         | What is impressive is that this fools domain experts:
         | 
         | """Even knowledgeable philosophers who are experts on Dan
         | Dennett's work have substantial difficulty distinguishing the
         | answers created by this language generation program from
         | Dennett's own answers"""
        
           | earthboundkid wrote:
           | You can take the quiz yourself on Prof. Schwitzgebel's blog.
           | I got 5/10.
           | 
           | http://schwitzsplinters.blogspot.com/2022/07/results-
           | compute...
        
       | dr_dshiv wrote:
       | We trained GPT3 with the complete works of Plato. The following
       | text selections are either OG Plato or GPT3 Plato. Care to guess
       | the origin of each selection and attempt the Platonic Turing
       | test? (No cheating)
       | 
       | 1. Of the heaven which is above the heavens, what earthly poet
       | ever did or ever will sing worthily? It is such as I will
       | describe; for I must dare to speak the truth, when truth is my
       | theme. There abides the very being itself with which true
       | knowledge is concerned; the colorless, formless, intangible
       | essence, visible only to mind, the pilot of the soul. The divine
       | intelligence, being nurtured upon mind and pure knowledge, and
       | the intelligence of every soul which is capable of receiving the
       | food proper to it, rejoices at beholding reality, and once more
       | gazing upon truth, is replenished and made glad, until the
       | revolution of the worlds brings her round again to the same
       | place.
       | 
       | 2. Is not beauty wise? Imagine not to yourself a beauty which is
       | devoid of wisdom but have the wisdom to conceive beauty in her
       | full perfection. For wisdom and beauty are the same, as I dare
       | say that you yourself well know. A beautiful body is called wise
       | because of the soul which it encloses, and of which it is the
       | attendant or vessel. There is an inferior sort of wisdom which
       | means caution, and is only a craft, and is concerned with the
       | body; but the higher wisdom is the co-knowing of the soul, or is
       | philosophy, and of this the intelligent is a part. And this part
       | in another form is that which is rightly called wisdom, and is
       | the pilot of the soul, guiding her in her journey to the land of
       | the good and the beautiful. That which has no wisdom is a soul
       | which inherits the name only; this is but a part of wisdom, and
       | is entitled only to this much, that no soul in which there is not
       | this superior part ought to be called wise. That which is called
       | wisdom without this adjunct ought to be deemed only half wise,
       | and this, when joined with that, is complete.
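        | 
        | For anyone wanting to try this at home: a minimal sketch of the
        | 2022-era OpenAI fine-tuning flow (file name and data format are
        | illustrative, not necessarily what we used):
        | 
        |     import openai
        | 
        |     # training data: JSONL of {"prompt": ..., "completion": ...}
        |     # pairs built from an English translation of the dialogues
        |     f = openai.File.create(file=open("plato.jsonl", "rb"),
        |                            purpose="fine-tune")
        |     job = openai.FineTune.create(training_file=f.id,
        |                                  model="davinci")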
        
         | marginalia_nu wrote:
         | Tricky. Is it supposed to be Socrates speaking, or someone he
         | interrogates? Plato's dialogues are full of speeches from
         | people expounding theses that are themselves contradictory (and
          | directly contradict Plato). There is, for example, a Platonic
          | dialogue that is chock-full of convincing nonsense arguments,
          | almost like a satire.
         | 
          | Both of them are nailing ye olde Benjamin Jowett English.
         | 
         | The first one does seem possibly a bit at odds with some of his
         | cosmology.
         | 
         | The second one seems superficially closer to some of his
         | conclusions about the form of the good, but it also seems
         | really internally inconsistent.
         | 
         | Some strange things:
         | 
         | First he asks:
         | 
         | > Is not beauty wise?
         | 
         | Then he bluntly asserts:
         | 
         | > For wisdom and beauty are the same, as I dare say that you
         | yourself well know.
         | 
         | Plato usually isn't the one to just assert things like this. If
         | this text is Plato's, it's definitely not one of Socrates'
         | speeches.
         | 
         | > There is an inferior sort of wisdom which means caution, and
         | is only a craft, and is concerned with the body; but the higher
         | wisdom is the co-knowing of the soul, or is philosophy, and of
         | this the intelligent is a part.
         | 
          | This also feels like it contains parts from a discussion
          | about epistemology, which is out of place when the topic is
          | the forms; he typically contrasts crafts (techne) with
          | knowledge (episteme). Here he just dismisses something as a
          | craft completely out of the blue.
        
         | teraflop wrote:
         | I have only the most superficial familiarity with Plato, but
         | here's my guess:
         | 
         | #1 is Plato. It expresses a clear theme that happens to match
         | my vague understanding of what "Platonism" is, and the writing
         | is efficient and effective. The metaphor of knowledge as "food"
         | seems particularly unlikely to be something that GPT-3 would
         | express in such a concise way. The only thing that really gives
         | me pause is the part at the end about "the revolution of the
         | worlds"; I'm not sure what exactly that's referring to.
         | 
         | #2 is GPT-3. The topic meanders a lot, and there are a lot of
         | individual chunks of prose that sound weird to me, or at least
         | artless. (What value is "as I dare say you yourself know"
         | adding, for instance?) It seems to be throwing around words
         | like "beauty", "mind", "soul" and "wisdom" in ways that obscure
         | their meanings, rather than elucidating them. For example,
         | "wisdom [...] is the pilot of the soul" makes much less sense
         | than "mind, the pilot of the soul".
        
         | FabHK wrote:
         | Could you do Foucault or Derrida? (see: https://xkcd.com/451/ )
        
           | dimatura wrote:
           | I assumed from the title this would be Lacan or such. I think
           | GPT-3 would be incredibly successful at emulating
           | postmodernist logorrhea.
        
             | anothernewdude wrote:
             | That's because they use language "ironically" to convince
             | people to be nihilists.
        
         | tremon wrote:
          | Plato wrote in English? Neither text is OG Plato; both are
          | interpretations of his work.
        
       | omarhaneef wrote:
       | This confirms what I've long suspected: Daniel Dennett is a
       | robot!
        
       | grncdr wrote:
       | I was surprised when I scored 7/10 on the simplified quiz,
       | despite having never heard of Dennett (nor having read much
       | philosophy). Probably just luck, but it would be a neat
       | superpower.
        
         | jonhohle wrote:
         | You may have a future in blade running.
        
       | MasterScrat wrote:
       | For French speakers: last year we released Cedille, the largest
       | French language model.
       | 
        | Following the release, a French philosophy teacher live-streamed
        | sessions where he'd generate snippets from famous philosophers
        | with the model, then ask an (informed) audience which samples
        | were real vs. generated. They got it wrong half the time. One of
        | the recordings:
       | https://www.youtube.com/watch?v=rHLTEpr_7tM&t=5412s
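        | 
        | The model itself is on the HuggingFace hub, so sampling from it
        | takes a few lines (model ID from memory - double-check on the
        | hub):
        | 
        |     from transformers import pipeline
        | 
        |     # Cedille is a 6B-parameter French GPT-J variant
        |     generate = pipeline("text-generation",
        |                         model="Cedille/fr-boris")
        |     out = generate("Le doute est le commencement de la sagesse",
        |                    max_new_tokens=60)
        |     print(out[0]["generated_text"])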
        
       ___________________________________________________________________
       (page generated 2022-07-27 23:02 UTC)