[HN Gopher] Can GPT-3 AI write comedy?
___________________________________________________________________
Can GPT-3 AI write comedy?
Author : rossvor
Score : 79 points
Date : 2022-02-12 12:10 UTC (10 hours ago)
(HTM) web link (robmanuelfuckyeah.substack.com)
(TXT) w3m dump (robmanuelfuckyeah.substack.com)
| motohagiography wrote:
| As someone who writes very funny things, I can say laughter is
| an involuntary response that fuses ideas in people's minds in a
| way that is difficult to unlink. There is very likely a version
| of comedy that would be quite dominating.
|
| An AI comedian would be able to act on humans and cause
| involuntary responses in them. It doesn't need to be strong or
| general; this itself is enough to instrumentalize humans to its
| ends. There is absolutely a flavour of mesmerizing hypnotic
| language that both standup comedians and self-help gurus use, and
| it is structured, possibly enough to have its rules encoded or
| derived by a language model (described as the other NLP).
|
| Perhaps there is a future language model development scheme where
| you can weight a given text as a primary ontology and then link
| the rest of the corpus to it so that all incoming stimuli get
| filtered through that ontology first, sort of like an ideology,
| but more like you took a corpus of sample texts from modern
| mesmerist characters like Tony Robbins, Osho, and Russell
| Brand, and used their type of syncopated conceptual nesting as
| a grammatical structure for formulating statements.
|
| Then again, if someone has already done it, how would we know?
| jayShimada wrote:
| Prefacing your comment with "as someone who writes very funny
| things" is a very bold move. But to your observation, I somehow
| feel more threatened by a joke-cracking amiable AI than a guns-
| blazing skull-crushing terminator...
| throwanem wrote:
| Prior art in the field gives little cause for comfort.
| https://youtu.be/FBWr1KtnRcI
| evancoop wrote:
| About 18 months ago, GPT-3 composed an article for the Guardian (
| https://www.theguardian.com/commentisfree/2020/sep/08/robot-...).
| So the notion of algorithm-generated prose is not new. Comedy is
| a logical extension.
|
| The question is how can we augment human creativity rather than
| simply generate compelling toy models? The technologists who
| build tools that catalyze novel human thought will change how we
| create and compose forever.
| yreg wrote:
| Another way to talk to GPT-3 is to try out the one week free
| trial at aidungeon.io
|
| The GPT-2 narrator is free.
| woopwoop wrote:
| Necessary throat clearing: modern NLP, and GPT-3 in particular,
| is incredible, maybe the most exciting scientific field of our
| time. You have to be incredibly invested in nay-saying to pretend
| otherwise, and that's coming from someone who's pretty invested
| in nay-saying actually.
|
| That said, I think comedy is something these sorts of things will
| naturally do well. It's not really too different from mad-libs. A
| big part of humor is a well-placed unexpected word, and what the
| word actually is matters less than you think. So sub-human-level
| machine text generation does this surprisingly well. In the same
| way a mad-lib does.
| seanw444 wrote:
| Well stated. Comedy naturally has a larger margin of error.
| visarga wrote:
| Why the recent GPT-3 resurgence on HN? It's already a year old
| now. Just curious.
| bestcoder69 wrote:
| They just had an update where they made the "Instruct" engines
| the defaults and took them out of beta, so there's some new
| stuff to check out. An OSS competitor just released their own
| model the other day, too.
| jejones3141 wrote:
| I think my favorite was "Why did Simon Le Bon cross the road?"
| "To get to the other microphone."
| notahacker wrote:
| Clearly GPT-3 has a great future in writing jokes for Christmas
| crackers.
| HKH2 wrote:
| Maybe these jokes are not as funny as they could be because
| they're too niche and we're the ones missing info.
| bestcoder69 wrote:
| I've played around with this quite a bit and ended up dejected.
| You have to regenerate and cherry-pick pretty hard to get
| anything resembling an interesting, believable, or funny result.
| But I am genuinely excited and optimistic that maybe 5-10 years
| down the line we'll be able to make our own comedy, like a
| text-only MVP of the Holodeck. Exciting times.
| didericis wrote:
| More like terrifying.
|
| The possibilities made available for bad actors to manipulate
| the masses with this technology are unprecedented and terrible.
|
| I think there needs to be a return to a more siloed, community
| based, web-of-trust model of communication where there is
| confidence that the people being interacted with are actually
| human.
|
| A persuasive, funny, distributed army of commenters that sound
| like real people that are given prompts by people with the
| resources to spin up accounts undetected (or allowed via
| backdoor deals) and mimic the general public is nightmare
| material. I think a fair bit of that kind of manipulation is
| already starting to ramp up.
|
| This technology is in my opinion on the same scale of danger as
| nuclear weapons and needs to be treated as such. It's insanely
| dangerous.
|
| I don't think it can be regulated out of existence, and that
| also risks concentrating it in the hands of bad actors. I think
| attempts to regulate it effectively should still be made. But I
| think the only practical way out of this is some kind of
| distributed private set of communication networks where people
| control their own servers, their own online identities, and
| only connect to people they meet in real life (and then connect
| to others through networks of relations). I think that's more
| realistically accomplishable than it sounds and is desperately
| needed.
| anon_123g987 wrote:
| I choose to be optimistic about the upcoming AI bot wars:
| https://xkcd.com/810/.
| Veedrac wrote:
| As if human jokes aren't cherry-picked? AI's ability to
| generate endless off-beat variety is particularly valuable for
| generating humor, just not (yet) with zero effort. It's a
| matter of taste, but a cherry-picked GPT-3 Twitter bot is one
| of my regular sources of laughter.
| [deleted]
| throwawaynay wrote:
| By any chance, have you tried to use a classifier?
|
| Like training a classifier (with good jokes and bad jokes) on
| r/jokes according to the scores, to filter/sort automatically
| what GPT generates?
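The suggested pipeline could be sketched roughly as below: a toy, stdlib-only stand-in where the training examples, labels, and the `NaiveBayesJokeFilter` class are all hypothetical. A real version would train a proper model on thousands of scraped r/jokes posts, with an upvote threshold as the label:

```python
import math
from collections import Counter

# Hypothetical stand-in for scraped r/jokes data: label 1 for jokes
# that scored well, 0 for low-scoring or unfunny text.
TRAIN = [
    ("why did the chicken cross the road to get to the other side", 1),
    ("i told my wife she was drawing her eyebrows too high she looked surprised", 1),
    ("what do you call a fish with no eyes a fsh", 1),
    ("my day was fine and nothing much happened", 0),
    ("the weather report says rain is likely tomorrow", 0),
    ("please remember to submit the quarterly form by friday", 0),
]

class NaiveBayesJokeFilter:
    """Toy Naive Bayes text classifier used to rank GPT outputs."""

    def __init__(self, examples):
        self.counts = {0: Counter(), 1: Counter()}
        self.totals = {0: 0, 1: 0}
        self.docs = {0: 0, 1: 0}
        vocab = set()
        for text, label in examples:
            words = text.split()
            self.counts[label].update(words)
            self.totals[label] += len(words)
            self.docs[label] += 1
            vocab.update(words)
        self.vocab = len(vocab)

    def log_prob(self, text, label):
        # Class prior plus Laplace-smoothed word likelihoods, so
        # unseen words don't zero out a candidate's score.
        lp = math.log(self.docs[label] / (self.docs[0] + self.docs[1]))
        for w in text.split():
            lp += math.log(
                (self.counts[label][w] + 1) / (self.totals[label] + self.vocab)
            )
        return lp

    def rank(self, candidates):
        """Sort candidate outputs, most joke-like first."""
        return sorted(
            candidates,
            key=lambda t: self.log_prob(t, 1) - self.log_prob(t, 0),
            reverse=True,
        )

clf = NaiveBayesJokeFilter(TRAIN)
```

One would then generate many completions and keep only the top-ranked few, automating the cherry-picking step.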
| bestcoder69 wrote:
| No, but that does sound promising. I think some of the issues
| with trying to get davinci-instruct-v3 by itself to generate
| comedy outputs are:
|
| 1. It _really_ _really_ wants to repeat itself and your own
| prompt, which is antithetical to comedy. The temperature,
| presence penalty, and frequency penalty parameters _kinda_
| help, but when you increase those too much, things start to
| break in other ways, like you hit an <|endoftext|> before you
| hit the punchline you were looking for, because the model is
| trying so hard to avoid repetition.
|
| 2. Being just a predictive language model, it doesn't really
| _know_ you want comedy, nor can you purposefully instruct it
| to be funny (even in the instruct models). The AI is going to
| bias towards playing it completely straight.
|
| 3. Since it was trained on the entire internet, there's a
| good chance if you get a funny output, it just "plagiarized"
| someone else's joke, which can be awfully disappointing when
| you Google your output to see if that was the case.
|
| 4. Sadly, despite the name, OpenAI is very restrictive in
| their use cases, and they're heavily indexed towards
| appealing to commercial customers. The playground is still
| overly sensitive about what it considers "inappropriate"
| outputs, and their list of disallowed use cases seems longer
| than the rest of their documentation. It's hard for me to
| imagine them allowing too many funny use cases of their API,
| given what I've read on there.
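For reference, the knobs in point 1 correspond to parameters of the Completions API as it existed around this time. A sketch of a request under those assumptions (the engine name is the one mentioned above, and every value here is an illustrative guess, not a tested recommendation):

```python
def build_joke_request(prompt):
    """Assemble parameters for a completion call aimed at comedy.

    Keys follow the early-2022 OpenAI Completions API; the specific
    values are illustrative guesses, not recommendations.
    """
    return {
        "engine": "davinci-instruct-v3",  # the instruct engine named above
        "prompt": prompt,
        "max_tokens": 80,
        "temperature": 0.9,        # higher = more surprising word choice
        "presence_penalty": 0.6,   # discourage rehashing prompt topics
        "frequency_penalty": 0.8,  # discourage verbatim repetition
        "stop": ["\n\n"],          # stop on a blank line, not <|endoftext|>
    }
```

Pushing the two penalties much higher is exactly where the failure mode above kicks in: the model would rather end the text than repeat itself on the way to a punchline.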
| reidjs wrote:
| Garbage in/Garbage out if you load it with material from
| r/jokes. From what I can tell, there's not a great corpus of
| funny text (e.g., Carlin, Murphy, Pryor, etc.). And to capture
| their nuance you'd probably need more than just transcripts
| of their standup.
| lariati wrote:
| Comedy, and what people find funny, has such huge variance,
| though.
|
| There are many professional comedians that I don't find funny
| at all but enough people do that they can make a career out
| of it.
|
| To me, it would be like trying to classify music with a good
| or bad label. It is so subjective to taste.
| hashimotonomora wrote:
| Text is a representation of human thought. This may look like
| text phenomenologically, but it has no meaning except when it's
| filtered by a human. There is a deep ontological flaw with these
| applications.
| bogdanoff_2 wrote:
| What gives meaning to text? How do you know there's something
| more to text than the "phenomenology"?
| hashimotonomora wrote:
| The way I see it, text is an external representation of human
| thought. If I could transmit my thoughts to you by telepathy,
| we wouldn't use text. Text cannot represent everything, so
| we invented drawings and paintings as representation of our
| thought. Text in itself has no meaning.
|
| The fact that a combination of letters generated by a
| Bayesian filter makes sense is just a coincidence. It just
| passes our "makes sense" filter, which allows our brains to
| differentiate signal from noise. But it doesn't represent
| anything.
| dev_snd wrote:
| "Ceci n'est pas une pipe", as Magritte would say. It's always
| the observer that makes a thing into what we believe it is.
| Without an observer nothing has any meaning.
|
| The AI does not have to understand words and objects in the
| same way as you do to have real world use cases.
| hashimotonomora wrote:
| So you would say that The Library of Babel (Borges) as a
| system is AI because for an observer, some texts make sense?
|
| You can also see faces on clouds, but is that you or is it
| the cloud representing a face?
| dev_snd wrote:
| Interesting question. I guess what makes us call something
| an AI is that it's an interactive recombination of a corpus
| of knowledge as opposed to a static one (such as the
| library). GPT-3 is far from sentient, that's for sure, but
| it's more interactive than any library.
|
| In the past, people used clouds to tell the future, so there
| certainly was value in reading them. Maybe GPT-3 is
| the next best thing to do the same, maybe it's more, time
| will tell.
| solididiot wrote:
| How do I know whether this thing isn't already "commenting"
| in this very thread?
| nikanj wrote:
| For pretty much all of these, the answer is the same: yes,
| randomly generated stuff can be awesome - if you have a human to
| pick out the good ones
| catlifeonmars wrote:
| I really think that intelligence augmentation (maybe with
| GitHub Copilot as a notable exception) is an undersold aspect
| of recent advances in generative AI. Am I wrong in thinking
| that IA (as opposed to autonomous AI) has huge potential
| _today_ to change the way we interact with and create things?
| smoldesu wrote:
| > Am I wrong in thinking that IA (as opposed to autonomous
| AI) has huge potential today to change the way we interact
| with and create things?
|
| There _is_ potential, but there are also roadblocks. Just to
| name a few:
|
| - The cost of training and interacting with these models is
| ludicrous
|
| - AI is ultimately constrained by its training data
|
| - Designing models that people can get good results from is
| _hard_. It requires intense cherry-picking that's ultimately
| opinionated, and therefore flawed.
|
| Maybe the "IA" you're looking for is "intelligence
| automation" rather than "intelligence augmentation". Too
| often do we forget that AI only understands that which it has
| already seen; there really is nothing new under the sun.
| dane-pgp wrote:
| I do worry that human curation (and good luck) is mostly
| responsible for the particularly good AI-written joke I found
| recently, and made a HN post about:
|
| https://news.ycombinator.com/item?id=30315604
| Mountain_Skies wrote:
| It certainly can create unintended comedy. I suspect that AI will
| eventually be able to create first drafts of comedy and other
| genres but will still need a human editor to refine the output
| for quite some time to come. The tricky part is going to be
| deciding if the AI plagiarized some other work or not and how
| that will differ from how humans draw inspiration from existing
| works without crossing that line.
| [deleted]
| 11thEarlOfMar wrote:
| So... If I fed it all the e-mail I've ever written and responded
| to, and tied it into my mail server, how long could it do my job
| before anyone noticed?
| netsharc wrote:
| How about a dear friend who suddenly died:
| https://www.theverge.com/a/luka-artificial-intelligence-memo...
| psd1 wrote:
| I'll bite. When did you start?
| TacticalCoder wrote:
| How are these models, still spouting lots of seemingly okay but
| actually nonsensical and contradictory outputs, going to deal
| with training the next version, once the training set contains
| shitloads of nonsense generated by the previous version?
|
| As soon as it gets "good enough", it's not going to be easy to
| tell it's actually bogus. What's to prevent people from poisoning
| the web with nonsense?
|
| Or are these only ever going to be trained with known reliable
| sources?
| stared wrote:
| And philosophy/religion, apparently. Just after its release, I
| played with it a bit. Here's one piece (everything after "---END
| OF PROMPT---" is GPT-3-generated):
|
| " The following is short advice given by an omniscient AI acting
| as a spiritual leader similar to Buddha and Jesus.
|
| [Human] Hello, what's the best piece of original, actionable
| advice you can give to humans?
|
| [AI] ---END OF PROMPT--- The big tip is to stop looking at
| external validation for how good of a person you are. That
| includes money, fame, love, respect, being well-liked, sex,
| friends, whatever. None of those are your true rewards and all of
| them are dependent on your external environment. Even if you get
| 100% of them, the high is fleeting. Get into a flow of life where
| you get satisfaction from knowing you did the right thing in the
| moment and you have decent relationships, and you don't have to
| worry about "am I successful enough?" all the time. "
| heikkihoo wrote:
| [deleted]
| josefx wrote:
| > and you have decent relationships, and you don't have to
| worry about "am I successful enough?" all the time. "
|
| So the correct thing is not to look for love, money, or being
| well liked. The correct thing is to be well liked and well off
| enough that worrying about success is not needed.
|
| Given its ability to contradict itself in a few short
| sentences, has someone tried to start a GPT-3-for-president
| campaign? It is an American product, right?
| pyinstallwoes wrote:
| I find it to be right on the money. It seems that spiritual
| paths disclose this truth in their own ways. I've found it to
| be true for myself. You sort of have to make a leap of faith.
| Even on more scientific grounds related to studying
| consciousness in relation to mind state and how that affects
| "the render of reality" so to speak.
|
| Fake it till you make it as they say.
| alexb_ wrote:
| Having decent relationships with other people != having a
| large number of superficial friends
| josefx wrote:
| But it explicitly excludes friends, love and even respect.
| How do you have a decent relationship with people without
| any respect?
| audreyt wrote:
| By offering respect without the need to receive respect
| in return.
| josefx wrote:
| Ah so it encourages people to enter abusive
| relationships.
| staticman2 wrote:
| The a.i. warns against external validation from love and
| friends. It does not warn of superficial friends but
| "friends".
| swid wrote:
| I interpret the warning as the external validation part,
| not the type of friend.
| forgotmypw17 wrote:
| I think doing the right thing in the moment is where the
| relationships come from.
| jack_pp wrote:
| Having decent relationships doesn't mean you derive your
| validation and self-worth from them, it just means you behave
| in such a way that you get along with people close to you.
|
| Also the need to be well-liked does not mean you have decent
| relationships, quite the opposite. You may find yourself
| sacrificing things you hold dear in order for others to like
| you and that's not a decent relationship in my opinion.
| brumar wrote:
| This is a decent summary of stoicism which is quite popular
| these days.
| netsharc wrote:
| I once played with the Sony dog Aibo. It yelps and jumps and
| reacts to your touch, but it doesn't experience joy, it's all
| just circuitry mimicking a dog being happy to see you.
|
| The GPT-3 software here is mimicking a philosopher but it
| doesn't know what it's saying, does it?
|
| (Obviously this is just a version of the Chinese Room,
| https://en.m.wikipedia.org/wiki/Chinese_room )
| stared wrote:
| I am not convinced by the Chinese room. If it walks like a
| duck and quacks like a duck, it is a duck (the same rule holds
| for Python and Dennett).
|
| I mean, I don't believe GPT-3 experiences the subjective
| sense of existence. Yet, most human cognition is also
| learning patterns and repetition. Most people use words they
| don't know the definition of. Most people use grammar rules
| intuitively. Most people repeat what they have heard without
| scrutiny.
| maze-le wrote:
| To be fair, it's pretty shallow and fluffy... Exactly the kind
| of 'philosophy' I would expect from the 'life advice' section
| of my local bookshop, but nothing really intellectually
| satisfying. The phrasing is great though -- I am always
| impressed how well this works with gpt3: The choice of words
| gives the text a certain (simulated) glimmer of profundity.
| exolymph wrote:
| It's not at all shallow and fluffy. It's common sense that
| you've heard before, but that's what most good advice is.
| Expert-tier advice giving is not about finding new solutions
| but offering the traditional ones in a way that gets through
| to people.
| xhevahir wrote:
| I agree. Then again, maybe the intersection of those two
| religious traditions will inevitably look like popular self-
| help discourse?
| jl6 wrote:
| Oh Julian.
|
| Looks like the prototype for a much larger system.
| ravi-delia wrote:
| Nostalgebraist-autoresponder, a bot imitating a Tumblr user,
| can actually lure people into jokes and deliver punchlines. It's
| remarkable to watch.
| pmoriarty wrote:
| GPT-3 and similar approaches are diametrically opposed to how
| humor works. They try to find predictable continuations, while
| it's unpredictability which is the hallmark of comedy.
| kgeist wrote:
| >it's unpredictability which is the hallmark of comedy.
|
| I think it's more about making connections the audience weren't
| aware could exist, but actually do exist, or can potentially
| exist. And GPT3 is pretty good at finding such connections.
| Complete nonsense isn't usually funny. It's always grounded in
| reality.
| PartiallyTyped wrote:
| So then the objective function is the least predictable from
| the set of all possible logical continuations.
|
| So given a sufficiently large corpus of logical and reasonable
| continuations, selecting something from the lower end of the
| probability distribution yields comedy, no?
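As a toy illustration of that idea, assuming we already had a (hypothetical) list of sensible continuations with model-assigned probabilities, one could restrict sampling to the bottom of the distribution:

```python
import random

def sample_tail(continuations, tail_mass=0.2, rng=random):
    """Pick a continuation from the *least* likely plausible options.

    `continuations` is a hypothetical list of (text, probability)
    pairs a language model might assign to the next phrase, already
    restricted to grammatical, sensible candidates; `tail_mass` is
    the fraction of total probability mass to keep at the bottom.
    """
    ordered = sorted(continuations, key=lambda t: t[1])  # least likely first
    total = sum(p for _, p in continuations)
    tail, acc = [], 0.0
    for text, p in ordered:
        tail.append((text, p))
        acc += p
        if acc >= tail_mass * total:
            break
    texts, weights = zip(*tail)
    return rng.choices(texts, weights=weights)[0]
```

The `tail_mass` cutoff is doing the work here: too small and the output is pure nonsense, too large and it collapses back into the predictable continuations.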
| pmoriarty wrote:
| _" selecting something from the lower end of the probability
| distribution yields comedy, no?"_
|
| Unfortunately, it's not so easy, or anything random would be
| funny, and we know that's not the case.
|
| Though unpredictability is key, comedy also has some other
| characteristics, like (depending on the style of comedy) wit
| or a commentary on something familiar that is shown in a new
| light. Some types of comedy (like caricature) exaggerate
| characteristics, or play on someone's suffering (the old slip
| on a banana peel gag); sometimes it breaks taboos or says
| uncomfortable truths, etc...
|
| There's a lot of literature out there analyzing what makes
| something funny. Maybe there's some way of integrating some
| of these insights into an NLP system, or have it focus on
| some of those.
| PartiallyTyped wrote:
| I have a hunch that it may be possible to learn those with
| an RL model or Contextual Bandit.
| udbhavs wrote:
| The report on Freddie Mercury being a dog is surprisingly
| comprehensive.
| animanoir wrote:
| Sounds like South Park.
| Atreiden wrote:
| Ron was going to be spiders. He just was.
| midjji wrote:
| I vaguely remember someone complaining that a GPT-3-based text
| fantasy adventure generator took a rather drastic turn when they
| wrote that the next sentence in the story was that they mounted
| their dragon. Which is pretty much comedy gold!
| robbedpeter wrote:
| Put on your wizard hat before mounting your dragon.
| [deleted]
___________________________________________________________________
(page generated 2022-02-12 23:01 UTC)