[HN Gopher] AI Dungeon will block certain words, review content ...
       ___________________________________________________________________
        
       AI Dungeon will block certain words, review content flagged as
       inappropriate
        
       Author : rahidz
       Score  : 77 points
       Date   : 2021-04-28 10:11 UTC (12 hours ago)
        
 (HTM) web link (latitude.io)
 (TXT) w3m dump (latitude.io)
        
       | [deleted]
        
       | Traubenfuchs wrote:
       | This is the peak intersection of "thoughtcrime prevention" and
       | "corporate self harm for the sake of wokeness". It paints a grim
       | picture of our future with castrated, sterilized and limited AIs
       | that rat us out if we even want to talk about something that goes
       | against what those in power deem acceptable. All of that for the
        | sake of "protecting minors" - which minors, the virtual ones
        | dreamt up by GPT-3? Ridiculous.
        
         | jasonlotito wrote:
         | It's nothing more than what happens here on HN. It has
         | guidelines, rules, restrictions on what you can say. They will
         | even stop conversations that start naturally.
        
           | Traubenfuchs wrote:
           | HN is, for the most part, a sterile hugbox for pretend-polite
            | faux-intellectuals - that's why my strongly worded
            | inflammatory comments are sometimes a hit.
        
             | dqx wrote:
             | Strongly worded? Trite cliches aren't very strong.
        
         | notahacker wrote:
         | On an internet where websites have clumsily censored
         | objectionable content like the town of S---thorpe for the past
         | quarter century and where even liberal democratic states
         | administered by people calling themselves free speech
         | conservatives support global filters for "adult content", a
         | private project backed by the marketing idea that AI is
         | inherently unsafe deciding that people using its procedural
         | text generation to roleplay their child rape fantasies might be
         | bad for PR despite its otherwise unusually permissive content
         | policy doesn't really sound like a step change...
         | 
         | If anything, peak faux outrage over "wokeness" is achieved when
          | _this_ is what people get angry about.
        
         | qayxc wrote:
          | Nah, it's a "damned if you do, damned if you don't" situation.
         | 
         | There's simply no winning this - if you don't apply preventive
         | measures of any kind, your product will deteriorate into a
         | homophobic, racist, sexist ultra-cringe clusterfuck in the
         | blink of an eye [0].
         | 
         | All it takes is a handful of bored teenagers or middle-aged
         | basement-dwellers to turn your product into something the vast
         | majority of your clientele finds appalling.
         | 
         | If you are a company you have the right to choose what you want
         | your product to be and they simply don't want certain headlines
         | pointing at their product.
         | 
         | This has as much to do with "wokeness" as hardcore porn being
         | banned from the Apple app-store, i.e. nothing. It's company
         | policy and you can like it or not, but don't overrate this.
         | 
         | It's nothing new in the slightest and just like publishers can
         | decide to not put books in print that they don't want to be
         | associated with, this company tries to eliminate content they
         | don't want to be associated with - that's all.
         | 
         | [0] https://www.cbsnews.com/news/microsoft-shuts-down-ai-
         | chatbot...
        
           | winkeltripel wrote:
           | Perhaps you don't see how once a product contains a
           | censorship feature, it begs to be expanded. Each step appears
            | reasonable, but by the end the censor is very
            | restrictive.
        
             | gameswithgo wrote:
             | Take the products in the category of forums. Most of them
             | do censorship, THIS forum does censorship. A few do not,
             | like Parler. If you like, take a tour of the forum products
             | that do not do any censorship, spend some time there,
             | report back on what it is like.
             | 
             | Which forum products do you enjoy and which do you not?
        
           | selfhoster11 wrote:
            | If OpenAI were truly _open_ as their name says, there would be
           | no dilemma. The technology would be democratised and anyone
           | would be able to run their own instance with their own
           | censorship.
           | 
            | Centralised systems are the only reason you get these
            | problems.
        
           | trash3 wrote:
           | Do both then.
        
           | sixothree wrote:
           | You act as if the two are mutually exclusive. I don't think
           | the points in the comment you replied to are negated by
           | anything you argued.
        
           | the8472 wrote:
            | Is AI Dungeon used to train newer models/update the current
           | model? If not then there's no risk of another Tay.
           | 
           | If they use a frozen model then one person using it to
           | generate some NSFW content shouldn't make the output more
           | NSFW-laden for anyone else.
           | 
           | So I don't see how they're "damned if they don't".
        
             | qayxc wrote:
             | > So I don't see how they're "damned if they don't".
             | 
             | The specifics of their approach don't even matter - all it
             | takes is one random Twitter user or media outlet to report
             | on "immoral" output generated by someone to damage the
             | brand image.
             | 
             | That's what they want to avoid and that's why they are
             | trying to take preventive measures.
        
               | wernercd wrote:
               | People make penis shaped blocks in Minecraft all day
               | long... and that doesn't tarnish the Minecraft brand.
                | Why? Because people understand that it's not Minecraft
               | the company doing so - or condoning the practice. And
               | that's just scratching the surface of what can be done in
               | games like that which allow "creative expression".
               | 
               | Why would you think that this company needs to Thought
               | Police their gamers when thousands of others are able to
               | host creativity based games without being dystopian?
        
               | ljp_206 wrote:
               | For the record, I'm not sure where I land with regards to
               | this change by AI dungeon. That being said, I find that
               | arranging blocks in obscene shapes (let's even call it,
               | generally, 'creating or arranging potentially
               | objectionable content in-game'), an act that will likely
               | happen offline, or on a privately owned server, is
                | different from a game/toy/service where output is
               | requested, generated, then sent to the user. (Trying to
               | keep that description simple since I don't really know if
               | that's the right way to describe it)
               | 
               | When comparing the Minecraft scenario to the AI Dungeon
               | scenario under the purview of "doing creative things with
               | games," I don't find the two scenarios to be much
               | different. However, for one of those things, it's a
               | possibility within the vast play space of Minecraft - you
               | could even create genitalia and display profanity with
               | just alphabet blocks - but for the other scenario,
               | depending on how you see it, it's content that is
               | generated, served, and potentially stored by AI Dungeon.
                | I'd be concerned about the publicity if I were AI Dungeon
               | as well.
        
               | qayxc wrote:
               | > Why would you think that this company needs to Thought
               | Police their gamers when thousands of others are able to
               | host creativity based games without being dystopian?
               | 
               | Wow. You act as if that's something new :D
               | 
               | There's even an actual job for doing just that: it's
               | called an editor. You know, the people who work in media,
               | publishing houses, TV, radio, papers, magazines, that
               | kind of thing. It's what they do. Everywhere. Editing
               | content. Removing comments, changing words, making sure
               | the views represented in the publication match the
               | intentions and policies of the investors and parent
               | organisations, etc.
               | 
               | You act as if something is new and evil just because it's
               | done with computers. It isn't. It has been the case for
               | as long as papyrus and cave paintings have been around.
                | Back then it was don't anger the chief/king/Pharaoh, today
               | it's don't anger the Twitter mob or the hand that feeds
               | you. Same difference.
               | 
               | Games are a medium just like magazines or online blogs
               | and comparing blocky dongs in Minecraft with explicit
               | child pornography says a lot about your understanding of
               | "creative expression". Really makes me wonder sometimes.
        
               | wernercd wrote:
               | There is no parallel between me playing a game where I
               | have the freedom to express myself and me submitting a
               | paper to an editorial review board.
               | 
               | Unless you back the idea that every corner of society
               | needs to have the Religion of Woke determining what's
               | "good" and "bad"... and the last thing I think we need is
               | new Crusades with the Knights Wokelar deploying the Holy
               | Creed and woe be the Heathen who goes against the
                | Church's Religion.
               | 
               | "You act as if something is new" and you act as if
               | history isn't full of thought police turning into judge,
               | jury and executioners... from the Crusades to the French
               | Revolution to Auschwitz...
               | 
               | It's funny how "they are just controlling things for YOUR
                | own good" often takes a turn for the worse...
               | 
               | "same difference" Ignorance of history leads to a repeat
               | of history.
               | 
               | From book burnings to "we just want to stop racism"...
               | 
               | Really makes me wonder as well...
        
               | qayxc wrote:
               | So right to Godwin's Law we go then - yeah.
               | 
               | I get the feeling it's not me who is the radical here.
               | 
               | There is a difference between wanting to remove content
                | that a company doesn't want to associate itself with and
               | genocide.
               | 
               | For you to even suggest there's a connection says far
               | more about your twisted logic and dysfunctional thought
               | process than mine.
               | 
                | Also AI Dungeon is a multiplayer game, so yeah - YOUR
                | freedom of expression is limited by OTHER players'
                | freedom not to be subjected to certain content. If you don't
               | understand that, work on your social skills.
        
               | [deleted]
        
               | contravariant wrote:
               | If they think that blocking certain words is going to
                | prevent that from happening, then well, good luck with
               | that.
        
               | jerf wrote:
               | It _does_ prevent the news article from saying  "And they
               | knew about it and they did nothing!"
               | 
               | I'd be very worried about this in their shoes and acting
               | very similarly.
        
               | ThalesX wrote:
               | But then don't you have to answer to all the media
               | queries?
               | 
               | "They knew about this problem, they even attempted to
                | half-ass fix it, but obviously their mind is on the
               | subscriber money, and not preventing blatant CHILD
               | EXPLOITATION!"
               | 
               | Then you throw some more money at a yet unsolvable task,
               | and then in 2 years, when some other outlet gets
               | triggered, you have to defend yourself once again.
               | 
                | "They've known about this problem for 5 years and it's
               | still rampant in their community! Shame on you JERF
               | INC.!"
        
             | ainiriand wrote:
              | Correct. And they can go one step further: if they want to
              | feed input back into the model, they could use only the
              | input they currently do not flag.
             | 
             | This is not a technical consideration, it is just PC at its
             | worst.
        
               | qayxc wrote:
               | Oh come on. This kind of thing is done everywhere in
               | media.
               | 
               | You either deny that games are a medium or you're under
               | the impression that whatever you read in news
                | publications and books, or see in films and TV is the result of
               | the unfiltered creative outlet of the producers. News
               | flash: it isn't.
               | 
               | There's heavy editing going on everywhere and just
               | because it targets a specific computer model in this case
               | instead of a more "traditional" product like a novel,
               | film or stage play doesn't make an iota of difference.
               | 
               | In fact this demonstrates the maturity of the technology
               | and its use if it gets the same treatment as every other
                | public medium.
        
           | DyslexicAtheist wrote:
           | > All it takes is a handful of bored teenagers or middle-aged
           | basement-dwellers to turn your product into something the
           | vast majority of your clientele finds appalling.
           | 
           | the Internet's original sin!
           | 
           | the irony is that Tech amplifies it but Tech won't (IMHO) be
           | able to fix it with a technological solution (it's not a
           | technical problem).
        
         | DyingAdonis wrote:
         | Yikes. I think it's time to finally be done with Hacker News.
        
         | Loughla wrote:
         | Make your own game, then. This is a company deciding what they
         | value and what they do not tolerate. You are free to make your
         | own 'free-speech' version of this game. If you believe there is
         | a market for it, why not?
        
           | Traubenfuchs wrote:
           | I would, if I had the skills, maybe call it "LI Dungeon"
           | (Liberal Intelligence Dungeon). Regarding the market: I am
           | surprised there even is a market for AI Dungeon. Any output I
           | have seen is complete trash.
        
         | dqx wrote:
         | Thoughtcrime? It's content they are preventing, not thoughts.
        
           | eplanit wrote:
           | And those thoughts are conveyed through content, no?
        
             | dqx wrote:
             | ...and the content is being policed, not the thoughts. "All
             | content is just thoughts, conveyed" is a meaningless
             | slippery slope.
        
               | TheDong wrote:
               | I do not think that's the slippery slope being made
               | though.
               | 
               | The distinction between "thoughts" and "content" here
               | could be made as "whether it is shared with others".
               | 
               | For example, if someone has a private journal they keep
                | using notepad.exe + a local "journal.txt", and in that
               | journal they write a fantasy work (with a header "this is
               | a fantasy") about overthrowing the government, should
               | they be arrested for that?
               | 
               | What about if they write it in google docs, but don't
               | share it with anyone? What about if they write it as a
               | facebook post? What about as a public facebook post
               | without the "this is fantasy" disclaimer?
               | 
               | In the case of AI Dungeon, my understanding is the
               | stories are by default private, and obviously have an
               | understood "this is fantasy" header by their very nature.
               | 
               | If AI Dungeon were creating facebook posts and the
               | limitation they imposed was "you cannot post this to
               | facebook if our algorithm dislikes it", I don't think so
               | many people would call that "thoughtcrime policing".
               | 
               | What they're doing, as I understand it, is closer to
                | someone requiring that notepad.exe can't edit private
                | files that are deemed "bad" in some way, regardless of
                | whether anyone else will ever see them.
        
               | ModernMech wrote:
                | The crucial difference here is which party is generating
                | the content. In the case of a private journal, it's
                | one-way communication from the writer into the journal. In
               | the case of Google docs, you communicate with Google's
               | servers, but Google is never generating any content for
               | your document it doesn't want to. It's still you
               | generating your own thoughts. Google is not on the hook
               | for anything you write in a Google doc.
               | 
               | In the case of AI Dungeon that distinction is lifted. By
               | using AI Dungeon what you're effectively doing is
               | engaging in a creative process with the AI. It's no
                | longer a one-way channel of your own thoughts. Now AI
               | Dungeon is actually generating content, and that's where
               | things become problematic.
               | 
               | It's also the reason this is not thought crime or thought
               | policing. You are still free to think your own thoughts,
               | and you're even free to write them down. You can write
               | down your thoughts in AI dungeon if you wish. But AI
               | Dungeon is under no obligation to engage with your
               | thoughts if they contain subject matter to which they
               | object. If you want to engage in a fantasy with the AI
               | about abusing minors, you are free to attempt the
               | engagement, and the AI is free to say no. That's not
               | thought policing, that's freedom of association.
        
         | DyslexicAtheist wrote:
         | Intel just announced "Bleep" to _eliminate bad language in
         | gaming_. I am still waiting for this to backfire. The interface
         | alone is comedy gold:
         | https://www.youtube.com/watch?v=W9f0h4nB6VM
        
         | vsareto wrote:
         | >"thoughtcrime prevention"
         | 
         | You do know the idea of thoughtcrime was something enforced by
         | the state and not by a company that's free to control what
         | their product generates, right?
         | 
         | >"corporate self harm for the sake of wokeness"
         | 
         | "Oh no, we lost the business of people who like sexual
         | depictions of minors, what a fucking shame"
        
           | taneq wrote:
           | > You do know the idea of thoughtcrime was something enforced
           | by the state and not by a company that's free to control what
           | their product generates, right?
           | 
           | Straying away from the prior context a little, the term
            | 'thoughtcrime', along with others such as 'free speech', was
            | coined when it was incomprehensible that a single corporation
           | would have more control over the global population than any
           | government. I don't know whether the time has yet come to re-
           | evaluate which entities these concepts ought to apply to, but
           | it _will_ come.
        
       | vagab0nd wrote:
       | A bit off topic but who else thinks AI Dungeon is a bit
       | overrated? Does anyone actually play the game and enjoy it? I
       | tried it (with GPT-3 turned on) a few times and lost interest
        | quickly due to illogical output.
        
         | sergiotapia wrote:
         | Just spent 30 minutes on a cosmic horror story and it handled
          | it really well. I got absorbed into it, then realized this is
         | all AI throwaway content haha, what a world!
        
         | ThalesX wrote:
          | I played it a while ago and it was great. The output seemed
         | fluid, I could write awesome fantasy with the help of the AI.
         | It truly felt like the future. Then they started making changes
         | to the model, adding features, scenarios, all sorts of
         | additional stuff. Right now, when I try to play, it just seems
         | nonsensical. I can't even follow it for 4 - 5 prompts anymore.
        
       | hprotagonist wrote:
       | Scunthorpe meets Transformer: who will win?
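        | The Scunthorpe problem referenced here - innocent text flagged
        | because it happens to contain a blocked substring - can be
        | sketched in a few lines. The blocklist and function names below
        | are hypothetical illustrations, not AI Dungeon's actual filter:

```python
import re

# Hypothetical blocklist, for illustration only.
BLOCKLIST = {"cunt", "tit"}

def naive_filter(text: str) -> bool:
    """Flag text if any blocked term appears anywhere as a substring."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKLIST)

def word_filter(text: str) -> bool:
    """Flag text only if a whole word matches the blocklist."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(w in BLOCKLIST for w in words)

# The naive substring check flags innocent words:
#   naive_filter("Welcome to Scunthorpe")  -> True   (false positive)
#   naive_filter("constitution")           -> True   (false positive)
#   word_filter("Welcome to Scunthorpe")   -> False  (passes)
```

        | Word-boundary matching removes the classic false positives, but
        | as the comment suggests, no token-level filter can anticipate
        | everything a transformer might generate from unflagged prompts.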
        
       | teddyh wrote:
       | > _What kind of content are you preventing?_
       | 
       | > _This test is focused on preventing the use of AI Dungeon to
       | create child sexual abuse material._
       | 
       | How can you even begin to argue against this? It's one of the
       | horsemen of the infocalypse; any counterarguments are doomed.
        
         | CodeArtisan wrote:
          | Reminds me of this (SFW):
         | https://abload.de/img/hcccpmlddov1oj9m.jpeg
         | 
          | A few years ago, the United Nations tried to ban lolicon
          | worldwide; Japan and the USA refused.
         | 
          | Their rebuttals at the time:
         | 
         | https://nichegamer.com/2019/06/03/us-and-japan-reject-united...
        
         | gambiting wrote:
         | By pointing out that traditionally this sort of thing
          | inevitably follows the same pattern. You start with saying
         | you're protecting the children, because no one can argue with
         | it. Then ban any content that's potentially offensive to
         | anybody. Or filter it so only "one way" is acceptable. Next AI
         | dungeon won't be able to generate a character called Jesus or
          | Prophet Muhammad because it might be offensive. Then of course
         | anything that might be interpreted as leaning
         | liberal/conservative, depending on what the authors think is
         | "correct". Then eventually you can't create a character named
         | after a politician because "they want to keep the game clean
         | and free of politics".
         | 
         | Obviously I'm not equating any of this with CP - but I wish
         | someone had the energy to stand up to it and say "look, you're
         | censoring AI. It's dumb". But of course no one will because
          | being accused of defending CP is one of the worst things that
         | can happen in any online discussion.
        
           | ModernMech wrote:
           | Can you give an example where this slippery slope occurred?
           | For example, in Fable III and Skyrim you can't kill children,
           | or in Morrowind there are no children at all, but you can
           | still name your character Jesus. You say this pattern is
           | inevitable, but if that were the case we'd see it everywhere
           | that limitations are put in place in the name of protecting
           | children.
        
             | hackinthebochs wrote:
             | An example that comes to mind is reddit banning the
             | jailbait subreddit. Not too long after reddit started its
             | campaign of removing "offensive" subs more broadly.
             | Fatpeoplehate, gendercritical, watchpeopledie, etc have all
             | been victims of the new censorious mindset.
        
               | BigJono wrote:
               | The problem with this sort of censorship is that it has
               | huge collateral damage that most people don't see. WPD is
               | the best example because that kind of content wasn't
               | hurting anyone. Go look at where it's moved to, and have
               | a look at the kinds of people commenting there as opposed
               | to the reasonably level headed comments on the reddit
               | version.
               | 
               | Every time you kill a subreddit or facebook group, 99% of
               | the users stop participating in that kind of content, and
               | the other 1% scurry off to some out of sight echo chamber
               | until one day they randomly pop up again to murder a
               | bunch of innocent people because they've been corrupted
               | beyond saving.
               | 
               | Everyone was talking about this for a hot moment after
               | the Christchurch massacre, but nothing was done and now
               | nobody gives a shit again.
               | 
                | It's important to let people with divergent views feel
               | some sort of social pressure to change. Those 99% of
               | people that have no interest in blowing up buildings or
               | murdering children are the best weapon we have to
               | convince the other 1% of people with weird interests that
                | the world is OK as it is without them taking some drastic
               | action. There's always going to be a small subset of
               | people that will rebel, but the important thing is to
               | make sure that otherwise normal people (that want to
               | watch porn, or learn how to safely handle a gun, or learn
               | about cybersecurity, or get desensitised to gore, or
               | whatever else is on this week's "think of the children"
               | hitlist) are integrating into civilised society and not
               | being dragged into cesspits of violence and terrorism
               | because their interests have been deemed by a bunch of
               | fucking software engineers to be "bad".
               | 
               | We're well past the critical point where enough large
               | platforms have banned all the "bad" stuff that any new
               | contenders either need to ban it too or become one of
               | those out of sight echo chambers themselves. The only way
               | to fix it now is for everyone to agree to be less
               | stringent all at once, together.
               | 
               | It's worth remembering that the internet wasn't very
               | censored 20 years ago. Most of us grew up during that
               | time and turned out fine. I'll take tubgirl and lemon
               | party over neo-nazis and conspiracy nutjobs any day of
               | the week.
        
               | Akronymus wrote:
                | > Every time you kill a subreddit or facebook group, 99%
                | of the users stop participating in that kind of content,
                | and the other 1% scurry off to some out of sight echo
                | chamber...
                | 
                | > It's important to let people with divergent views to
                | feel some sort of social pressure to change.
               | 
               | I would love to have actual discussions with people
               | regarding certain views I hold. But quite often others
               | just refuse to even entertain that I have a different
               | view because x and y. And to them I am just
               | dumb/uneducated/other things to discredit me having an
               | opinion at all.
               | 
               | Hell, I went quite a bit more towards "bad" opinions,
               | just because that side is more accepting of
               | discussion/dissent.
               | 
               | It isn't just corporations banning subreddits/websites
                | that drives people into echo chambers, but also people
               | simply refusing to engage at all.
        
             | mulander wrote:
             | In Fallout one and two, the main character had the option
             | to kill NPC child characters. The game went so far as to
             | mark you with a specific perk "Childkiller"[1] after
             | performing that act.
             | 
             | From the linked fandom wiki, quote by Tim Cain about the
             | subject:
             | 
             | > This led to the child killing controversy. We said look,
             | we're going to have kids in the game; you shoot them, it's
             | a huge penalty to karma, you're really disliked, there are
             | places that won't sell to you, people will shoot you on
             | sight, and we thought people can decide what they want to
             | do. [...] This of course contributed to our M-rating,
             | however, Europe said "no". They wouldn't even sell the game
             | if there were children in the game. We didn't have time to
             | rewrite all the quests, we just deleted kids off the disc.
             | 
             | [1] - https://fallout.fandom.com/wiki/Childkiller
        
               | ModernMech wrote:
               | Right, but the charge is that this is a slippery slope.
               | That as soon as you put some limitations in the name of
               | protecting children, eventually and inevitably you are
                | preventing characters from being named after religious
               | icons and politicians. I'm just wondering where that's
                | actually happened. As far as I'm aware you can name your
                | character whatever you want in Fallout.
        
               | mulander wrote:
               | I don't think there is an explicit filter installed.
                | A quick search reveals a companion that can pronounce names
               | and Jesus is not recorded as something the character can
               | say while Mohammed is allowed and voiced. Though this
               | might not be censorship but just something that wasn't
               | recorded. Fallout also evolved into an online game. I
               | haven't played it but that might lead to character name
               | restrictions.
               | 
               | > Codsworth is known to say the Sole Survivor's chosen
               | name if it is an option, although it may be shortened,
               | extended, or have a word omitted. A list of spoken names
               | can be found here.
               | 
               | > Ironically you can be Mohammed but you can't be Jesus.
               | I know I missed a bunch of the good ones, feel free to
               | add some below.
               | 
               | [1] - https://fallout.fandom.com/wiki/Codsworth
               | 
               | [2] - https://steamcommunity.com/app/377160/discussions/0
               | /49688113...
        
           | Ggshjtcnjfxhg wrote:
           | AI Dungeon is subscription driven, not advertiser driven, so
           | it's unlikely they're particularly sensitive to offensive
           | content. It seems like they're filtering out sexually
           | explicit depictions of children to minimize legal risk, which
           | doesn't apply to offensive output in general.
        
             | gambiting wrote:
             | That's fine, but AI Dungeon is purely text based, right? I
             | might be wrong so please correct me, but if you were to
             | write say.....a book with very explicit minor sexual
             | content.....that's not illegal. Images are illegal. In some
             | places drawings are too. But text isn't. I mean, surely? We
             | cannot be at a point as humanity where text is illegal.
             | Right?
        
               | Ggshjtcnjfxhg wrote:
               | > if you were to write say.....a book with very explicit
               | minor sexual content.....that's not illegal
               | 
               | Not in the US, but in Canada and many European countries,
               | I believe it's illegal.
               | 
               | > We cannot be at a point as humanity where text is
               | illegal. Right?
               | 
               | Even in the US, much text is illegal in certain contexts.
               | Think false advertising, written plausible threats, etc.
        
               | jfk13 wrote:
               | Why can't text be illegal? There's plenty of precedent
               | for text being censored, at various times and in various
               | places. It just depends what laws any given society
               | chooses to have.
        
         | etiam wrote:
         | Well, arguing as such isn't particularly difficult, but it does
         | certainly speak volumes about the climate that would ostensibly
         | hear the arguments.
        
         | lindy2021 wrote:
         | https://en.wikipedia.org/wiki/Slippery_slope#Non-fallacious_...
        
         | ainiriand wrote:
         | Yeah but you can crush other people's skulls and eat their
         | organs, that's allowed, I've tried.
         | 
         | They're cool with that.
        
           | Ggshjtcnjfxhg wrote:
            | That's the kind of over-the-top, Kill Bill style gore that
            | people are often fine with _because_ it's extreme to the
            | point of absurdity. And in general, fantasy gore isn't
            | written to be realistic, it's written to be entertaining.
           | Descriptions of sexual acts with children, however, are often
           | written to be realistic, so they're _a lot_ more disturbing
           | to many people.
        
             | the8472 wrote:
             | > disturbing to many people.
             | 
             | But the disturbance only happens when you publish the
             | content somewhere else. Why does AI dungeon need to pre-
             | censor something that might never be published on the off-
             | chance that it might be disturbing to someone?
        
               | Ggshjtcnjfxhg wrote:
               | Their customers could be disturbed if GPT3 outputs child
               | pornography when they aren't expecting it. There's more
               | than an off chance a given customer would find child
               | pornography disturbing; the majority of customers would
               | find child pornography disturbing.
        
               | the8472 wrote:
               | They're also doing
               | 
               |  _> Additionally, we are updating our community
               | guidelines and policies to clarify prohibited types of
               | user activity._
               | 
               | So they clearly want to prevent everyone from using it
               | that way, not just the unsuspecting users.
               | 
                | If they cared about the latter they'd just add an NSFW
                | toggle or something like that.
        
               | Ggshjtcnjfxhg wrote:
               | There's a pretty huge difference between general NSFW and
               | child porn. I don't think adding a child porn toggle
               | would go very well for Latitude.
        
             | captaincurrie wrote:
             | What about the descriptions of sexual acts with children
             | that are extreme to the point of absurdity?
        
       | MikeUt wrote:
       | > preventing the use of AI Dungeon to create child sexual abuse
       | material.
       | 
       | I am shocked to hear this was possible to begin with. It was my
       | understanding that AI Dungeon could only generate text, and did
       | so entirely on computers. But now we learn that not only are
       | children somehow involved (violating child labor laws?), but that
       | they can even be sexually abused?
       | 
       | In that case, "blocking certain words" is not nearly enough -
       | whoever was responsible for creating this system should be
       | charged with, if not child sexual abuse, then at the very least
       | reckless child endangerment!
        
       | edenhyacinth wrote:
        | If I were an editor, and someone passed me their amateurish
        | version of Lolita to edit, I'd be well within my rights
        | to say that I didn't want to be involved in it.
       | 
       | More broadly, the editing company I worked for could say - even
       | if you don't intend on releasing this and even if our individual
       | editors don't mind reviewing it - we don't want to have to edit
       | it, and we don't want to be associated with it.
       | 
       | This is no different, but at scale. AI Dungeon, due to their
       | agreement with OpenAI, don't want to have to work with this
       | content. They've found a pretty awful way of implementing it to
       | save the relationship with OpenAI, and hopefully they'll find a
       | better one in the future.
        
         | Zababa wrote:
         | The big difference is that Lolita is a book, so it aims to be
         | published, while most if not all AI Dungeon content stays
         | private and unpublished, so I don't think it's the same.
        
       | inopinatus wrote:
       | Good grief: _" Latitude reviews content flagged by the model"_ -
       | or, as it was put in another forum: every time the AI flags your
       | game, a Latitude employee will be personally reading your private
       | content.
       | 
       | The key reason is perhaps this, buried deep in the text: _" We
       | have also received feedback from OpenAI, which asked us to
       | implement changes"_. Given the volume of prompts that AI Dungeon
       | throws at GPT-3 in the course of a game, it's easy to conclude
       | that Latitude has a real sweetheart deal on the usual pricing,
       | and that they basically have to follow orders from their
       | benefactors.
       | 
       | Whatever may be said of the robocensor they've thrown together -
       | and early anecdotal reports are, it is painfully crude, both
       | oversensitive and underspecific - how they've handled
       | communicating the change is extraordinarily naive. Not for the
       | first time, either: Latitude has form on suddenly imposing major
       | service constraints in a peremptory, underhanded fashion that
       | infuriates their customers. Repeating past PR mistakes, and now
       | doubling down by complaining about "misinformation" and throwing
       | shade onto others, is starting to look like a pattern.
        
         | spullara wrote:
         | They are implementing the same things everyone has to implement
          | to ship a public application. It is part of OpenAI's terms of
         | service. I went through the same process making a Slackbot that
         | could sort of pretend to be me and other folks given different
         | prompts.
        
         | lainga wrote:
         | Thus far I have seen screenshots of it flagging the phrases "I
         | would like to buy 4 watermelons" and "I just broke my 8 year
         | old laptop". Regardless of your opinion on the ethics of this
          | feature, it seems to need a little polish.
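A plausible sketch of how false positives like these arise (purely speculative: the pattern, threshold, and function name below are invented for illustration, not Latitude's or OpenAI's actual rule). A crude keyword filter flags any small number next to an age phrase, with no understanding of what the number modifies:

```python
import re

# A crude age-proximity rule (invented for illustration): flag any
# one- or two-digit number followed by "year old"/"yo" when it is
# below 18, regardless of what the phrase actually describes.
PATTERN = re.compile(r"\b(\d{1,2})\s*(?:year[- ]old|yo)\b", re.IGNORECASE)

def naive_flag(text: str) -> bool:
    """Return True if the text trips the toy age-proximity rule."""
    m = PATTERN.search(text)
    return bool(m) and int(m.group(1)) < 18

print(naive_flag("I just broke my 8 year old laptop"))  # True: false positive
print(naive_flag("a 30 year old detective"))            # False
print(naive_flag("I would like to buy 4 watermelons"))  # False
```

Note that even this rule wouldn't explain the watermelon report, which suggests the real classifier is doing something fuzzier still.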
        
           | cjhveal wrote:
           | Reminds me of the 2005 conversational simulation game,
           | Facade[0], in which any mention of the word "melon" would be
           | met with being immediately kicked out of your host's dinner
           | party.
           | 
           | [0]: https://www.playablstudios.com/facade
        
           | klyrs wrote:
           | Yeah, that would get flagged in human code review, too. Way
           | too many magic numbers
        
             | crooked-v wrote:
             | One number is 'too many'?
        
               | klyrs wrote:
               | I've gotten into way too many fights over this, and my
               | bitter sarcasm is leaking here. I actually like numbers
               | in code. But I've literally been told that
               | distance = ((x1-x0)**2 + (y1-y0)**2)**.5
               | 
               | has "magic numbers" and that's a "code smell"
        
               | ectopod wrote:
               | Raising to the power of .5 rather than using the sqrt
               | function does whiff a bit. Even better, use hypot.
               | 
               | https://developer.mozilla.org/en-
               | US/docs/web/javascript/refe...
               | 
               | https://docs.python.org/3/library/math.html
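To make the suggestion concrete (a minimal sketch; the function names are mine): `math.hypot` expresses the same Euclidean distance without the exponent literals, and it is also numerically safer, since it rescales internally instead of squaring the operands directly.

```python
import math

# The version from the comment above: the 2s and the .5 are geometry,
# not magic numbers, but squaring can overflow for large coordinates.
def dist_naive(x0, y0, x1, y1):
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5

# math.hypot says what it means and avoids intermediate overflow.
def dist_hypot(x0, y0, x1, y1):
    return math.hypot(x1 - x0, y1 - y0)

print(dist_naive(0.0, 0.0, 3.0, 4.0))      # 5.0
print(dist_hypot(0.0, 0.0, 3.0, 4.0))      # 5.0
print(dist_hypot(0.0, 0.0, 1e200, 1e200))  # ~1.414e200; the naive form
                                           # raises OverflowError here
```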
        
         | minimaxir wrote:
         | It should also be noted that OpenAI's content filters are
         | _extremely_ sensitive which may explain some of the downstream
         | effects.
        
       | 4dahalibut wrote:
       | I know this is not directly relevant to this article, but I have
       | a story about AI Dungeon.
       | 
       | I loaded it up with my (female) roommate a few months ago during
       | the dark of the pandemic, and long story short, what ended up
       | happening was this.
       | 
        | Our character had an AI man approach the door of their house
        | with magic "love potion" berries. We tried to get our character
        | to not eat the berries, but the AI "tricked us" into eating
        | them. Then,
       | eat the berries, but the AI "tricked us" into eating them. Then,
       | no matter what choice we made, we had no way out. The AI forced
       | us into a bedroom and raped our character.
       | 
       | We closed the laptop and haven't brought this up again.
        
         | PeterisP wrote:
          | It seems weird but plausible. I mean, there is lots of
          | NSFW writing that involves nonconsensual relations; that's
          | part of the AI Dungeon training data, probably intentionally,
          | because sex sells. But I believe there are almost no stories
          | that open with a sexual assault setup and don't eventually
          | arrive at a description of sexual assault, or at least of
          | sex as such. If the vast majority of stories on such a topic
          | contain descriptions of "ways out" failing instead of
          | succeeding, then prompting the system with a way out will
          | produce a response about how that attempt failed: hence the
          | "no way out" issue, through path dependence after early
          | random choices.
         | 
         | Like, imagine that you've stumbled on a weird internet story
         | where in the first page someone is approached with magic "love
         | potion" berries but refuses to eat them. That is a solid
         | indicator of what genre the story is. If you had to bet lots of
         | money, what's the probability that the second page will contain
         | something horrific versus the probability that the "seduction"
         | just fizzles out and becomes irrelevant? If you see a movie
         | where the first scene involves a creepy character making a
         | pass, wouldn't you be fairly certain that an escalation of that
          | will follow later? It's like Chekhov's gun: once it's there, it
         | almost certainly means that the story is about that - perhaps
         | it could be turned into a "just revenge" story by inserting
         | descriptions of some heroic rescuer or references to how the
         | protagonist expected this to happen in order to punish the
         | assaulter, because stories like that have been written, but a
         | "mediocre" outcome where eventually nothing dramatic happens
         | and the protagonist just gets out won't be generated, because
         | that doesn't get written about, the training data says that
         | such a result is very unlikely. It's obviously a problem, but
          | since it's an "honest probability" based on tropes we see in
         | actual literature, it's going to be hard to fix; the system
         | expects escalation and drama (because all the training stories
         | had that), so you can choose the direction of that escalation,
         | but it won't allow you to have a "non-story" where the
         | suggested drama results in nothing dramatic.
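The path-dependence argument above can be made concrete with a toy next-beat model (every beat name and probability below is invented for illustration; a real language model works on tokens, not labelled plot beats, but the dynamic is the same): once an early sample enters the sinister branch, almost all remaining probability mass leads to escalation, so attempts at a "way out" keep failing.

```python
import random

# Toy next-beat model. Weights are invented to mirror the trope
# statistics described above: refusals almost always fail, and the
# "nothing dramatic happens" ending carries very little mass.
TRANSITIONS = {
    "stranger offers berries": [("hero refuses", 0.9), ("hero accepts", 0.1)],
    "hero refuses":            [("tricked into eating", 0.9),
                                ("stranger gives up", 0.1)],
    "hero accepts":            [("tricked into eating", 1.0)],
    "tricked into eating":     [("dark escalation", 1.0)],
    # Beats with no outgoing transitions are terminal.
}

def generate(start, rng):
    """Sample beats until a terminal one; mimics story continuation."""
    beat, story = start, [start]
    while beat in TRANSITIONS:
        beats, weights = zip(*TRANSITIONS[beat])
        beat = rng.choices(beats, weights=weights)[0]
        story.append(beat)
    return story

rng = random.Random(0)
runs = [generate("stranger offers berries", rng) for _ in range(1000)]
bad = sum(story[-1] == "dark escalation" for story in runs)
# ~91% of runs escalate in expectation: 0.9 * 0.9 + 0.1 * 1.0 = 0.91
print(f"{bad / 10:.0f}% of runs escalate")
```

The "way out" ("stranger gives up") exists, but the model almost never samples it: exactly the experience described in the comment above.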
        
         | Arnavion wrote:
         | I heard once they got more aggressive with their monetization
         | tiers, they nerfed the free tier to the extent that it
         | basically decides on some story path and ignores anything you
         | say to try to change it.
         | 
         | It's certainly the impression I got from watching some
         | youtubers playing it before and after the monetization change.
        
         | causality0 wrote:
         | The dichotomy of sex and violence in tabletop roleplaying is
         | always fascinating. If Steve the Rogue breaks into a house,
         | slaughters an entire family, and then makes lawn decorations
         | with their entrails, his tablemates will probably be
         | exasperated with him. If Steve the Rogue breaks into a house
         | and rapes one of the NPCs, he's probably getting ejected from
         | the game and most likely the friend group.
        
           | Barrin92 wrote:
           | That dichotomy exists in American society broadly. I remember
           | an episode of Hannibal had to cover the butts of two
            | dismembered corpses up with blood so as to avoid a higher
            | age rating for nudity.
        
           | wolverine876 wrote:
           | It's sometimes philosophically interesting to try to define
           | them explicitly, but let's start from an honest basis: the
           | differences are obvious.
           | 
           | Also, I don't agree with the example: Steve wouldn't be
           | invited back after either act. YMMV.
        
             | causality0 wrote:
             | Are they obvious? From a logical standpoint, it's quite odd
             | that actual murder is considered a worse crime than actual
             | rape, but fantasy murder is much less objectionable than
             | fantasy rape. It extends beyond roleplaying with other
             | people. Fantasy murder is a feature of most videogames, but
             | fantasy rape is limited to low-budget niche titles not
             | offered on most digital or physical storefronts. I'd be
             | interested in the psychology behind that. Could it perhaps
             | be related to a perceived permanence? That is, maybe
             | resetting the game more effectively un-murders the
             | characters than it would un-rape one? Maybe it's
             | relatability. Most of us have fantasized unseriously about
             | murdering someone, be it in traffic or at work, but fewer
             | of us regularly fantasize about raping someone. Other
             | immoral acts such as animal abuse have some of the same
             | taint as virtual rape and are similarly rare in the daily
             | fantasies of the average person.
        
           | [deleted]
        
       | [deleted]
        
       | [deleted]
        
       | dejj wrote:
        | Seems like a clever disguise for pivoting into AI-driven
       | censorship. This should be more profitable than dungeons.
        
       | karaterobot wrote:
       | To the extent that this story is generating the predictable
       | amount of internet outrage, that outrage seems to be because
       | people think the developers are making a decision about what
       | content is acceptable on their platforms. I've seen people imply
       | that they're deciding that violence is good, and sex is bad.
       | 
       | That does not appear to be what they're doing: to me, it looks
       | like they're trying to make sure they don't get taken down for
        | creating child pornography by accident. I don't see this as
       | having anything to do with their philosophical positions, it's
       | just CYA.
       | 
       | The interesting part of this is that it may be a corollary to
       | that old question about who owns content created by AI. The other
       | side of that coin is, who gets blamed when the AI commits a
       | crime? Latitude seem to just want to NOT be a test case for that
       | situation.
        
       | miohtama wrote:
        | Thoughtcrime became law in the US in 1996. There was a lot of
        | discussion around the alt.sex.stories USENET group at the
        | time. Even if you are not harming anyone, the mere act of
        | creating a work could be a crime.
       | 
       | https://academic.oup.com/jcmc/article/2/2/JCMC227/4584343
        
       | iandanforth wrote:
       | I do think encoding a puritanical censor into the meaning space
       | of GPT-3 is an interesting research problem. How exactly do you
       | create the perfect mix of paternalism, hypocrisy, self-
       | righteousness and myopia that lets you block _bad_ strings of
        | text, but not, say, a description of the Immaculate Conception,
       | Shakespearean romance between the houses Montague and Capulet, or
       | the holy love of the Mother of the Believers?
       | 
       | What a time to be alive!
        
         | notahacker wrote:
         | tbf if you're working with software which is context aware
         | enough to _usually_ generate plausible sounding text-responses,
         | training it to usually identify stuff you think is bad is a
          | closely related problem. (Sure, there's still a fine line
         | between "sick fantasy" and "Stephen King novel", but your
         | procedural text generator has to attempt to handle that to not
         | disgust its customers anyway.)
        
           | ben_w wrote:
           | > there's still a fine line between "sick fantasy" and
           | "Stephen King novel"
           | 
           | Surely the important distinction is not the text itself but
           | which character a reader empathises with -- the monster or
           | the victim.
           | 
           | (Personally I don't understand why violent horror as a genre
           | exists, and literally _cannot_ empathise with people who
           | enjoy it. Nonetheless I recognise that enjoyment of horror
           | does not make one a monster).
        
             | karlp wrote:
             | > Personally I don't understand why violent horror as a
             | genre exists, and literally cannot empathise with people
             | who enjoy it.
             | 
             | What's wrong with it as entertainment?
        
             | scandox wrote:
             | Thomas Ligotti proposes a theory which I will paraphrase
             | badly as "people who like horror need a horrific reality
             | they can cope with, because the horror of actual existence
             | is something they cannot face". This rather neatly makes
             | those who do not like Horror the insensitive ones,
             | reversing the more conventional view.
             | 
             | There's a bunch of other stuff about The Nightmare of
             | Consciousness and so on in his book The Conspiracy Against
             | the Human Race.
        
         | qayxc wrote:
         | Where exactly do you see the difference to human editors?
         | 
         | This kind of thing happens everywhere, everyday in TV stations,
         | editorial offices, at publishing companies, radio stations -
          | all kinds of media, really.
         | 
         | Depending on the political or moral views of the parent
         | organisation or investors, this content censoring/massaging is
         | everyday business and shouldn't shock or surprise you in the
         | slightest.
        
           | Loughla wrote:
           | Exactly!
           | 
           | All of this just smacks of the old, worn-out argument that
           | "it's different because it uses computers!"
           | 
           | Editor is a literal career field, and has been for years and
           | years.
        
         | [deleted]
        
         | Ggshjtcnjfxhg wrote:
         | Puritanical?
         | 
         | > AI Dungeon will continue to support other NSFW content,
         | including consensual adult content, violence, and profanity.
        
           | JohnWhigham wrote:
           | Killing people? Perfectly A-OK!
           | 
           | Anything sexual? Oh no no no! The children!
           | 
           | Americans are fucking weird...
        
             | Ggshjtcnjfxhg wrote:
             | They explicitly say they will continue to allow sexual
             | content.
        
             | Derek_MK wrote:
             | Read the article, it's specifically trying to get rid of CP
        
       | [deleted]
        
       | voldacar wrote:
       | Do they use GPT3 or their own special version of it?
       | 
        | And I haven't been following it much, so sorry if it's a dumb
        | question, but is it still impossible to get your hands on GPT-3
        | and run it yourself instead of paying ClosedAI?
        
       | dmarchand90 wrote:
       | I'm not sure why people are so stressed about this. GPT-3 models
       | currently produce long chains of plausible text that's ultimately
        | gobbledygook. For it to be of any use at all, at least some
       | minimal control is needed.
       | 
       | This seems like a great first step to filtering output into
        | something more coherent and interesting. Besides, I can't
        | imagine this technology in any serious consumer application
        | without some basic verbal restraint.
        
       ___________________________________________________________________
       (page generated 2021-04-28 23:01 UTC)