 (DIR) Post #AUPld49ZYN2Uz8g4rA by simon@fedi.simonwillison.net
       2023-04-07T16:45:20Z
       
       0 likes, 0 repeats
       
        Wrote about why I think it's better to tell people "ChatGPT will lie to you", despite "lying" misleadingly implying intent and the risk of encouraging anthropomorphization. https://simonwillison.net/2023/Apr/7/chatgpt-lies/
       
 (DIR) Post #AUPloCLpPmpRaxibku by simon@fedi.simonwillison.net
       2023-04-07T16:46:24Z
       
       0 likes, 0 repeats
       
        Which of these two messages do you think is more effective?

        **ChatGPT will lie to you**

        Or

        **ChatGPT doesn’t lie, lying is too human and implies intent. It hallucinates. Actually no, hallucination still implies human-like thought. It confabulates. That’s a term used in psychiatry to describe when someone replaces a gap in one’s memory by a falsification that one believes to be true—though of course these things don’t have human minds so even confabulation is unnecessarily anthropomorphic.**
       
 (DIR) Post #AUPmEziGVkiIDXzTpg by simonzerafa@infosec.exchange
       2023-04-07T16:50:38Z
       
       0 likes, 0 repeats
       
        @simon I would simply say it's emitting words, sentences and paragraphs of text based on previous user input. It has no concept of truth or accuracy, as that isn't part of the purpose of the model 🫤🤷‍♂️
       
 (DIR) Post #AUPmQEngkyCGCCt9YO by HandgunYoga@mastodon.world
       2023-04-07T16:50:51Z
       
       0 likes, 0 repeats
       
       @simon ChatGPT is an AI that Mansplains: supreme confidence plus low accuracy.
       
 (DIR) Post #AUPmciUICqZxkBlamO by ttpphd@mastodon.social
       2023-04-07T16:50:58Z
       
       0 likes, 0 repeats
       
        @simon Sorry, but I dislike both. I prefer "there is no algorithm for truth".
       
 (DIR) Post #AUPmoGYzqxmCc7TSW8 by ellie@hachyderm.io
       2023-04-07T16:51:44Z
       
       0 likes, 0 repeats
       
       @simon I would probably just say that it’s wrong. Inanimate things can be wrong, much like a sign that says “the sky is purple”. They’re not lying, they’re not hallucinating - but they’re also not correct.
       
 (DIR) Post #AUPn3vsOXFFD5T1A3s by GavinChait@wandering.shop
       2023-04-07T16:52:19Z
       
       0 likes, 0 repeats
       
        @simon How about: you will lie to yourself, and ChatGPT will serve to confirm your biases...
       
 (DIR) Post #AUPn89b5Sn4knew2SW by bananarama@mstdn.social
       2023-04-07T17:04:05Z
       
       0 likes, 0 repeats
       
       @simon A bit more vulgar, perhaps not as effective: "ChatGPT spews bullshit"
       
 (DIR) Post #AUPnDqSRXIuLB4bcUi by isaac@hachyderm.io
       2023-04-07T17:01:01Z
       
       0 likes, 0 repeats
       
        @simon At first I thought, "If the word 'lying' is anthropomorphic, we should work to de-anthropomorphize it for this circumstance." But now I'm thinking that it's better just to use "lying" and recognize that ChatGPT is a product created by humans who made human decisions about what behaviors are acceptable to release in a product like this. Saying that "lying" is too anthropomorphic here IMO separates the product creators from the consequences of their decisions.
       
 (DIR) Post #AUPnDrVJe6neQGxQbw by bananarama@mstdn.social
       2023-04-07T17:05:06Z
       
       0 likes, 0 repeats
       
       @isaac @simon I like this!
       
 (DIR) Post #AUPnFJQ7L87NIDDY0G by simon@fedi.simonwillison.net
       2023-04-07T16:52:37Z
       
       0 likes, 0 repeats
       
       TLDR: There's a time for linguistics, and there's a time for grabbing the general public by the shoulders and shouting "It lies! The computer lies to you! Don't trust anything it says!"
       
 (DIR) Post #AUPnSEvLMu2J70zINk by erbridge@tech.lgbt
       2023-04-07T16:54:20Z
       
       0 likes, 0 repeats
       
       @simon "ChatGPT will tell you what you want to hear"
       
 (DIR) Post #AUPnfFNuxxnuJ2YUoy by zzzeek@fosstodon.org
       2023-04-07T16:54:54Z
       
       0 likes, 1 repeats
       
       @simon it lies because it refers to itself in the first person, delivers answers in authoritative phrases, and all of its training data is completely secret.
       
 (DIR) Post #AUPnqqemjJVt5YycQy by mistersql@mastodon.social
       2023-04-07T16:56:02Z
       
       0 likes, 0 repeats
       
        @simon I'm in the "circumlocutions don't help" camp. I think the bot doesn't know what it doesn't know and can't tell truth from fiction. The eagerness to deny words to describe the bots comes from religious and cultural worldview problems, which don't involve the more fundamental question: What can these bots do & not do? Do they do them well? "Do we use words that imply they are in our moral community" is a distraction. (unless someone is advocating giving them citizenship & rights I mean)
       
 (DIR) Post #AUPo2InWRWN7YGWccq by jonathanmatthews@fosstodon.org
       2023-04-07T16:57:46Z
       
       0 likes, 0 repeats
       
       @simon I don’t see the downside in popularising “ChatGPT will bullshit you”. Less anthropwossname; more succinct; more visceral, hence might get internalised where alternatives wouldn’t.
       
 (DIR) Post #AUPoDknkfMQPaa5pYW by tshirtman@mas.to
       2023-04-07T16:59:06Z
       
       0 likes, 0 repeats
       
        @simon I've been using (and seen others use) "bullshit" rather than "lie" in this context, and I think it's better because it implies less (I think) an intent to deceive, and more a way to generate things to say that achieve some objective, whether those things are correct or not.
       
 (DIR) Post #AUPoU5HsGfphl3yB0a by justinmaurer@phpc.social
       2023-04-07T17:00:36Z
       
       0 likes, 0 repeats
       
       @simon effective at what goal? The first is certainly more effective at dissuading people from using the product, but it's not more effective at communicating the truth of how the technology works.
       
 (DIR) Post #AUPovw6E1PfTN2nl7Q by NIH_LLAMAS@mastodon.social
       2023-04-07T17:01:42Z
       
       0 likes, 0 repeats
       
        @simon The most accurate word we currently have: https://undark.org/2023/04/06/chatgpt-isnt-hallucinating-its-bullshitting/
       
 (DIR) Post #AUPp8mpRnioKNo577Q by simon@fedi.simonwillison.net
       2023-04-07T17:03:25Z
       
       0 likes, 0 repeats
       
       @justinmaurer The message I want to get across is "Be careful: it will lie to you. And yet it is also incredibly useful! You need to understand that it can lie before you can learn how to use it for the things that it's truly amazing at."
       
 (DIR) Post #AUPpcBrFp2S0PJwnWy by simon@fedi.simonwillison.net
       2023-04-07T17:04:13Z
       
       0 likes, 0 repeats
       
        Also in my post: "Honestly, at this point using ChatGPT in the way that I do feels like a massively unfair competitive advantage. I’m not worried about AI taking people’s jobs: I’m worried about the impact of AI-enhanced developers like myself. It genuinely feels unethical for me *not* to help other people learn to use these tools as effectively as possible. I want everyone to be able to do what I can do with them, as safely and responsibly as possible."
       
 (DIR) Post #AUPptW3Rz6MHy9uheK by Meyerweb@mastodon.social
       2023-04-07T17:06:38Z
       
       0 likes, 0 repeats
       
       @simon The time for the latter is always.
       
 (DIR) Post #AUPqBp4m0CoWAwGHmy by misc@mastodon.social
       2023-04-07T17:11:51Z
       
       0 likes, 0 repeats
       
       @simon This is a silly strawman. You're presenting a parody of the discourse as the only alternative to your preferred language. Meta discourse can be tedious, but it can also be valuable, or at least unavoidable. I'd also add that you're engaging in it.
       
 (DIR) Post #AUPqN06rJtFba5ZEki by simon@fedi.simonwillison.net
       2023-04-07T17:13:38Z
       
       0 likes, 0 repeats
       
       @misc Sometimes silly strawman arguments are a useful way to illustrate a point
       
 (DIR) Post #AUPqkG6i0VwRrCaTia by thedansimonson@lingo.lol
       2023-04-07T17:14:18Z
       
       0 likes, 0 repeats
       
        @simon “lying” also entails some intention to deceive—knowing something to be true, saying otherwise, and the recipient believing the statement. I’d rather say it’s “incapable of truth.” It has no knowledge about the world or what the language it mimics maps to. That said, framing it as a “bullshitter” is probably the most straightforward. A bullshitter doesn’t necessarily intend to deceive, but extrapolates from its own ignorance into novel circumstances.
       
 (DIR) Post #AUPqurqqWwE1oYE6LY by simon@fedi.simonwillison.net
       2023-04-07T17:14:54Z
       
       0 likes, 0 repeats
       
       @jonathanmatthews Mainly I worry about that getting **d out in mainstream media coverage to be honest
       
 (DIR) Post #AUPr5HCn7U1jLOSEbI by kissane@mstdn.social
       2023-04-07T17:15:28Z
       
       0 likes, 1 repeats
       
        @simon I think the larger thing here is that communicating with the general public is a totally different art from communicating within a specialized field, and giving in to the pressures of insiders usually (but not always) degrades the usefulness of the communication to the public. Viz. the time public health spent, in the full crisis stage of early covid, insisting on things like distinguishing between quarantine and isolation.
       
 (DIR) Post #AUPr5J685Dt1DJLwUi by kissane@mstdn.social
       2023-04-07T17:17:41Z
       
       0 likes, 0 repeats
       
       @simon Sometimes you can put together the right combination of insider and communicator knowledge and make it work—Ed Yong is a champion at this in his science journalism—but fwiw I think it’s vital to prize clarity of risk and action in the first stage of crisis comms.
       
 (DIR) Post #AUPrQ7HlCFgYewiRii by simon@fedi.simonwillison.net
       2023-04-07T17:15:58Z
       
       0 likes, 0 repeats
       
       @kissane That's a fantastic way of describing how I feel about this
       
 (DIR) Post #AUPs3sJ4J6OLmpHnUG by teixi@mastodon.social
       2023-04-07T17:22:16Z
       
       0 likes, 0 repeats
       
        @simon A first display of public misleading: calling it `hallucination` instead of, e.g., the output limits of simple computing heuristics!
       
 (DIR) Post #AUPsHdOYXocRarPiy0 by simon@fedi.simonwillison.net
       2023-04-07T17:22:33Z
       
       0 likes, 0 repeats
       
        The ChatGPT footer currently reads "ChatGPT may produce inaccurate information about people, places, or facts". I wonder if that message would be more effective if it said "ChatGPT will lie to you about people, places and facts" instead.
       
 (DIR) Post #AUPsTP4xb4hIfTrNlw by justinmaurer@phpc.social
       2023-04-07T17:26:02Z
       
       0 likes, 0 repeats
       
       @simon That actually sounds better, and more accurate than either of the original options.
       
 (DIR) Post #AUPsyE1qjOgxNE6Njk by misc@mastodon.social
       2023-04-07T17:39:58Z
       
       0 likes, 0 repeats
       
       @simon They are certainly a useful rhetorical device for persuasion, hence their abiding popularity.
       
 (DIR) Post #AUPtBTo7NW7s4g01SK by TheSteve0@data-folks.masto.host
       2023-04-07T17:42:38Z
       
       0 likes, 0 repeats
       
        @simon Actually I would tell people that all ChatGPT does is make predictions, and often it gets those predictions wrong. Wait - I actually did tell people that 😀 https://thesteve0.blog/2023/03/28/what-chatgpt-is-not/ I think today I will try to spend some time writing about why I am excited about ChatGPT.
       
 (DIR) Post #AUPtO7W9YvawhzckIi by simonzerafa@infosec.exchange
       2023-04-07T17:43:54Z
       
       0 likes, 0 repeats
       
        @simon Given the torrents of utter drivel many of these so-called AI bots generate, there will be more jobs for people to fact-check and correct output. Or better still, write the text yourself so it can be accurate the first time? 🫤🤷‍♂️
       
 (DIR) Post #AUPteGTG2f3V6sy4GG by parsingphase@m.phase.org
       2023-04-07T17:52:09Z
       
       0 likes, 0 repeats
       
        @simon “ChatGPT has no concept of truth, and is therefore unable to tell the truth except by chance.”
       
 (DIR) Post #AUPu11cxOcCnjZMEm8 by openclimatedata@mastodon.social
       2023-04-07T17:57:27Z
       
       0 likes, 0 repeats
       
       @simon How about "produces lies", "might answer with lies", "generates lies"? - not as anthropomorphizing as "is lying" but not as complicated as the long detour?
       
 (DIR) Post #AUPuGHTQlIgLDyAzGS by mistersql@mastodon.social
       2023-04-07T18:03:05Z
       
       0 likes, 0 repeats
       
       @simon If you give it a binary question & it reliably lies, then you know the true answer to be the other answer.https://www.youtube.com/watch?v=rMz7JBRbmNo
       
 (DIR) Post #AUPuTQxOoXueOvSvcu by seanb@hachyderm.io
       2023-04-07T18:04:04Z
       
       0 likes, 0 repeats
       
       @simon "Everything ChatGPT says is a lie made up on the spot. Some of these lies happen to be correct."
       
 (DIR) Post #AUPuvbTLJQuJ7bq5p2 by ted@an.errant.cloud
       2023-04-07T17:55:35Z
       
       0 likes, 0 repeats
       
        @Meyerweb @simon Yeah. Between “It lies” and “It makes shit up” I will always prefer the latter. For me it's less about the intent requirements and more about aligning the rhetorical frames. Preserving the anthropomorphism feels right colloquially, but it also accepts the category error that this thing is an agent.
       
 (DIR) Post #AUPuvcVrRYW2Li1cO0 by ted@an.errant.cloud
       2023-04-07T18:09:40Z
       
       0 likes, 0 repeats
       
        @Meyerweb @simon Oh you know what, this is the difference highlighted in “On Bullshit”. https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit
       
 (DIR) Post #AUPuvdI4YJ52lEa7X6 by simon@fedi.simonwillison.net
       2023-04-07T18:11:12Z
       
       0 likes, 0 repeats
       
       @ted @Meyerweb My mental model of the term "bullshit" is irreversibly linked to Natasha Lyonne in Poker Face at this point
       
 (DIR) Post #AUPv6lwgiHYSn0tIdU by fool@mastodon.world
       2023-04-07T18:11:34Z
       
       0 likes, 0 repeats
       
       @simon Or, "speak your audience's language".People who believe in LLM providing reliable factual information are unlikely to be interested in linguistic, and more likely to have their attentions hijacked by catchphrase.Just the necessary, and lesser evil.
       
 (DIR) Post #AUPvOGQcdFaqFlEVn6 by wordshaper@weatherishappening.network
       2023-04-07T18:14:56Z
       
       0 likes, 0 repeats
       
       @simon That still gives too much agency to ChatGPT. Maybe "ChatGPT will make up random stuff and its relationship to reality is only accidental."
       
 (DIR) Post #AUPvcXe1Q5ABhbJD2u by yacc143@mastodon.social
       2023-04-07T18:19:49Z
       
       0 likes, 0 repeats
       
        @simon It's a lose/lose situation. Although, you could try: "The next word predictor known as ChatGPT often spits out false facts in a very convincing way, so it takes quite a bit of work to verify its output as correct or false. Not surprising, as it's optimized statistically to output human-sounding text." Another bit you might want to drop in: humans don't always agree on what the truth about certain facts is.
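
        A minimal sketch of that "next word predictor" idea: a toy bigram model in Python, far simpler than ChatGPT's actual architecture, but enough to show how fluent text can be generated purely from observed word-to-word statistics with no notion of truth. The tiny corpus here is invented for illustration:

            import random
            from collections import defaultdict

            # Tiny invented "training corpus" of fluent sentences.
            corpus = ("the sky is blue . the sky is purple . "
                      "the model is confident . the model is wrong .").split()

            # Record which words were observed to follow each word.
            follows = defaultdict(list)
            for prev, nxt in zip(corpus, corpus[1:]):
                follows[prev].append(nxt)

            # Generate by repeatedly sampling a statistically plausible
            # next word; nothing here checks whether the result is true.
            word, output = "the", ["the"]
            for _ in range(7):
                word = random.choice(follows[word])
                output.append(word)
            print(" ".join(output))  # e.g. "the sky is wrong . the model is"

        Real LLMs replace the bigram table with a neural network that scores every token in context, but the generation loop, sampling a probable continuation and repeating, has the same shape.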
       
 (DIR) Post #AUPvpbvknOdQl9po8m by KevinMarks@xoxo.zone
       2023-04-07T18:26:35Z
       
       0 likes, 0 repeats
       
        @simon Can we just say it makes shit up? This is like "did Boris Johnson lie to Parliament?" The LLMs are very similar to that dreadful windbag - they have been expensively spoon-fed all manner of writing and they regurgitate something plausible with no regard at all for its veracity.
       
 (DIR) Post #AUPypEJOuPP1N1PBi4 by apkoponen@mstdn.social
       2023-04-07T19:13:10Z
       
       0 likes, 0 repeats
       
        @simon Even using “often” instead of “may” would be a significant improvement: “ChatGPT often produces inaccurate information about people, places, or facts.” But it would definitely hurt their business.
       
 (DIR) Post #AUPzf845NdEP3OzObo by asmeurer@mastodon.social
       2023-04-07T19:22:28Z
       
       0 likes, 0 repeats
       
       @simon I genuinely don't understand. What's the problem with "anthropomorphising" things?
       
 (DIR) Post #AUQ0i8Xs9Nc5Mnv5Xc by KevinMarks@xoxo.zone
       2023-04-07T19:34:27Z
       
       0 likes, 0 repeats
       
       @simon @ted @Meyerweb Charlie Cale as Shoggoth trainer is the second series
       
 (DIR) Post #AUQ4yPUXJPcoNHNHeK by PatrickOBeirne@mastodon.ie
       2023-04-07T20:21:53Z
       
       0 likes, 0 repeats
       
        @simon just say "it makes stuff up". Some might like "an unreliable witness".
       
 (DIR) Post #AUQ67OCHat1Mv7gLi4 by bettelheim@mastodon.social
       2023-04-07T20:34:57Z
       
       0 likes, 0 repeats
       
        @simon I think you’re missing the legal implications here. Whatever language they have on that page will undoubtedly be used for reference in future lawsuits, and as a result linguistics matter a great deal.
       
 (DIR) Post #AUQ6Um9EiA9HrZPOGu by sil@mastodon.social
       2023-04-07T20:39:14Z
       
       0 likes, 0 repeats
       
       @simon those complaining about you saying “lies” because it’s anthropomorphising should also be finding everyone who says “I asked chatgpt about X and it told me Y” and complaining at them too. But of course they are not doing so and are not lobbying for “I made a function call to chatgpt with the query X and the generated return value was Y”.
       
 (DIR) Post #AUQ6g26BkMRyWFLoQa by simon@fedi.simonwillison.net
       2023-04-07T20:40:33Z
       
       0 likes, 0 repeats
       
       @bettelheim Yeah, that's a good explanation as to why the language in the ChatGPT footer is purposefully vague! https://fedi.simonwillison.net/@simon/110158685998904032
       
 (DIR) Post #AUQ87SexCcPIfWFrJg by dagnymol@pony.social
       2023-04-07T20:57:15Z
       
       0 likes, 0 repeats
       
       @simon We anthropomorphize literally everything. Weather patterns. Printers. Pieces of toast. Everything. It's impractical to think we will stop doing this because something actually starts to resemble a person.
       
 (DIR) Post #AUQ8orP35Bd5rSVed6 by overunderclover@mstdn.social
       2023-04-07T21:05:13Z
       
       0 likes, 0 repeats
       
        @simon **ChatGPT will lie to you.** It can't tell fact from fiction. It just imitates HOW ppl talk about things.
       
 (DIR) Post #AUQCB57j3AfWeFh6zA by dunzo@fosstodon.org
       2023-04-07T21:42:40Z
       
       0 likes, 0 repeats
       
        @simon I’ve been using “ChatGPT is a bullshit machine.” The difference between “bullshit” and “lying”, as brilliantly drawn by Harry Frankfurt, works here: bullshit (and ChatGPT’s output) may or may not be factual, but it _feels_ true. The feeling of truth _is_ the intent… https://bookshop.org/p/books/on-bullshit-harry-g-frankfurt/10471257?ean=9780691122946
       
 (DIR) Post #AUQEOiZ3v9h8e36vlw by corbin@defcon.social
       2023-04-07T22:07:34Z
       
       0 likes, 0 repeats
       
       @simon How about "ChatGPT is a scam" or "ChatGPT is falsely advertised"? The issue here is that OpenAI is selling this product; it's like a scene from The Jungle, and regulation is incoming in several jurisdictions.But, what should we do if OpenAI were to act ethically and pull ChatGPT? Well, we would still need to explain why large models are prone to confabulation!
       
 (DIR) Post #AUQGfvQFZJiJM1K12G by billseitz@toolsforthought.rocks
       2023-04-07T22:33:20Z
       
       0 likes, 0 repeats
       
       @simon the shitshow of CDC/FDA communications during COVID should make us wary of over-simplified claims.
       
 (DIR) Post #AUQIaMrmopp2dpgBv6 by Migueldeicaza@mastodon.social
       2023-04-07T22:54:36Z
       
       0 likes, 0 repeats
       
       @simon excellent piece Simon
       
 (DIR) Post #AUQLPJcwp23BGvA0ES by jamiemccarthy@mastodon.social
       2023-04-07T23:26:20Z
       
       0 likes, 0 repeats
       
        @simon “ChatGPT spews bullshit” probably walks that tightrope a little better. I do still love the term “spicy autocomplete.” At first it rings true, then after a bit of playing around it seems inadequate, and later, after much more investigation, it rings true again.
       
 (DIR) Post #AUQPLwaYQF6Mlbgki0 by jaredwhite@indieweb.social
       2023-04-08T00:10:14Z
       
       0 likes, 0 repeats
       
       @simon @Migueldeicaza I like "ChatGPT will bullshit you" 😁
       
 (DIR) Post #AUQS9SiJyBWuV23rOq by ratwerks@sfba.social
       2023-04-08T00:39:59Z
       
       0 likes, 0 repeats
       
        @simon Lying is a deliberate act of deception and requires knowledge of truth. ChatGPT knows nothing and so cannot lie.
       
 (DIR) Post #AUQWnIdlF3fAenBCwC by betsybookworm@cloudisland.nz
       2023-04-08T01:33:35Z
       
       0 likes, 0 repeats
       
       @simon I'd go more with saying it has no concept of truth or facts - because that's true!
       
 (DIR) Post #AUQZYghNoKLMn8b8qG by brandonhorst@techhub.social
       2023-04-08T02:04:40Z
       
       0 likes, 0 repeats
       
       @simon This article may have the highest quality-to-hacker-news-discussion-terribleness ratio that I’ve seen in a while
       
 (DIR) Post #AUQbty22lFfiXUXK1w by maegul@hachyderm.io
       2023-04-08T02:30:55Z
       
       0 likes, 0 repeats
       
        @simon so I’ve probably pestered you about this before, but on the unfair advantage point, do you have any thoughts on whether senior devs are more able to professionally leverage AI than junior devs, and therefore have a disproportionate advantage from the inclusion of AI in dev workflows?
       
 (DIR) Post #AUQcYrxSFUqdZdtSxk by glyph@mastodon.social
       2023-04-08T02:38:34Z
       
       0 likes, 0 repeats
       
        @simon @glyph okay so you clearly have a point here, particularly in the immediate future, but my issue with this oversimplified message is that it suggests an incorrect model of how the various falsities are created. If it’s “lying” then it must be trying to deceive, and if it’s trying to deceive then it must have some reason for doing that. So you just need to divine its reason, convince it *not* to lie to you, and then you can trust it!
       
 (DIR) Post #AUQcYvwBSACxurKVXM by glyph@mastodon.social
       2023-04-08T02:38:35Z
       
       0 likes, 0 repeats
       
       @simon @glyph If this seems too abstruse a concern, consider the degree of behavior modification that has been achieved by broad misunderstandings of the “algorithm” on social media sites. Algospeak and other shamanistic practices are *everywhere*, and it is a rich source of material for grifters. We’re maybe 6 months out from all the YouTube channel growth hack contrepreneurs pivoting to “how to get chatgpt to stop lying to you so it can do your job” webinars
       
 (DIR) Post #AUQd6GofbRFmJrexKC by simon@fedi.simonwillison.net
       2023-04-08T02:44:46Z
       
       0 likes, 0 repeats
       
        @maegul absolutely. This stuff compounds your existing experience, so devs with a lot of experience will be able to put it to use much more effectively. My intuition right now is that it can be a massive productivity boost for less experienced devs too though, provided they can learn how to use it well.
       
 (DIR) Post #AUQfVjBPTN8dRfYxDE by williamgunn@mastodon.social
       2023-04-08T03:11:44Z
       
       0 likes, 0 repeats
       
       @simon "Use of tobacco may promote lung cancer" vs "Roken is dodelijk"
       
 (DIR) Post #AURQUmsOx1EcSLj1XM by rysiek@mstdn.social
       2023-04-08T11:58:00Z
       
       0 likes, 0 repeats
       
        @simon > I completely agree that anthropomorphism is bad: these models are fancy matrix arithmetic, not entities with intent and opinions.

        > But in this case, I think the visceral clarity of being able to say “ChatGPT will lie to you” is a worthwhile trade.

        💯 especially if, in the case of a longer-form piece, it gets contextualized.
       
 (DIR) Post #AURdRkZcM6zVqhoz0i by AmeliaBR@front-end.social
       2023-04-08T14:22:54Z
       
       0 likes, 0 repeats
       
        @simon I personally prefer: "ChatGPT was not designed to understand or prioritize truth or accuracy." Put the agency where it belongs: on the developers. Then continue with: "Fact and fiction are just different styles of writing in its database. If you ask it a factual question, it generates text that is styled to sound like something that could be factual. But the apparent facts are generated by grabbing bits and pieces of things it has read from different sources & smashing them together."
       
 (DIR) Post #AURfFcYMY5ElPMZv0q by simon@fedi.simonwillison.net
       2023-04-08T14:40:50Z
       
       0 likes, 0 repeats
       
        @AmeliaBR I don't think "was not designed to understand or prioritize truth or accuracy" is quite right. It WAS designed for those things - but they turn out to be incredibly hard problems to solve, so despite existing efforts it still makes a lot of errors. GPT-4 does significantly better than 3.5, which is an indication that they're still putting in the work - it's still not nearly good enough yet though.
       
 (DIR) Post #AURfScrMijLg7hfTX6 by vanderwal@mastodon.social
       2023-04-08T14:44:36Z
       
       0 likes, 0 repeats
       
        @simon I think a better warning up front is: ChatGPT has no understanding of truth nor fact, but is good at mimicking good confident writing. It doesn’t understand the subject matter and cannot understand true context nor correlation.
       
 (DIR) Post #AURgdEeb9q96BoJqfQ by AmeliaBR@front-end.social
       2023-04-08T14:58:41Z
       
       0 likes, 0 repeats
       
        @simon I guess that leads to another linguistics minefield: if you tried to create software to do X, but it instead did Y, did you design it to do X or Y? But yeah, I agree that it is important to acknowledge that it may one day be possible to have more reliable results. But the current generation of programs is not there, and they have been vastly oversold as a replacement for human research, or even as a replacement for general-purpose search engines.
       
 (DIR) Post #AURhqOI2k2zhnJbdTs by simon@fedi.simonwillison.net
       2023-04-08T15:12:27Z
       
       0 likes, 0 repeats
       
       @AmeliaBR "they have been vastly oversold as a replacement for human research, or even as a replacement for general-purpose search engines"They've been massively hyped, but it's rarely the direct developers of the systems that are doing that hyping - somuch of the misguided mental models of this stuff are due to loud but uninformed commentary
       
 (DIR) Post #AURi1HP8gMk9cn7Kkq by simon@fedi.simonwillison.net
       2023-04-08T15:13:58Z
       
       0 likes, 0 repeats
       
       @vanderwal that's not accurate, but I still think it's too subtle. I don't trust people to pay attention to any warning that's longer than five words at this point, based on how much confusion I've seen around this already
       
 (DIR) Post #AURqW6ogtfxvOhHRqK by vanderwal@mastodon.social
       2023-04-08T16:49:31Z
       
       0 likes, 0 repeats
       
        @simon Fair and true, re: length. The warning needs to be up front. But the probabilistic language engines aren’t writing out of any understanding of fact nor truth, but probable next words and ideas based on observed patterns.
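
        To make "probable next words ... based on observed patterns" concrete, here is a minimal Python sketch. The candidate words and scores are invented purely for illustration (a real model assigns learned scores to tens of thousands of tokens); the point is that the sampler picks a *probable* continuation, not a *true* one:

            import math, random

            # Hypothetical model scores (logits) for the word after
            # "The capital of Australia is" - numbers are invented.
            logits = {"Sydney": 2.0, "Canberra": 1.5, "Melbourne": 0.5}

            # Softmax turns raw scores into a probability distribution.
            total = sum(math.exp(v) for v in logits.values())
            probs = {w: math.exp(v) / total for w, v in logits.items()}

            # Sample a next word in proportion to its probability. With
            # these made-up scores the wrong answer, "Sydney", is also
            # the most likely pick: plausible beats true.
            word = random.choices(list(probs), weights=list(probs.values()))[0]
            print(probs, "->", word)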
       
 (DIR) Post #AURueZPIsikoCchCNs by vanderwal@mastodon.social
       2023-04-08T17:35:37Z
       
       0 likes, 0 repeats
       
       @simon I’m also curious which part isn’t accurate?
       
 (DIR) Post #AURygEOoMKi9RYVs8m by jimluther@techhub.social
       2023-04-08T18:20:53Z
       
       0 likes, 0 repeats
       
        @simon @provuejim I use the word “guess”. I’ve been telling people ChatGPT is like a “know-it-all” human who always has an answer, even if it is wrong. Because they can’t admit they don’t know something, they throw out a guess without prefacing it with “This is only a guess, but…“.
       
 (DIR) Post #AURz7tHgSR2645MtdI by simon@fedi.simonwillison.net
       2023-04-08T18:24:12Z
       
       0 likes, 0 repeats
       
       @vanderwal no idea where the word "not" came from there (I just edited to fix it) - I meant to say "that's accurate"!
       
 (DIR) Post #AUS0u94x8WgGuBzFrc by vanderwal@mastodon.social
       2023-04-08T18:45:56Z
       
       0 likes, 0 repeats
       
       @simon I’ve been spending a lot of time and effort in my personal time to sort out what this current wave of AI is doing and where it works and fails. I thought I had missed something, not certain I haven’t.There is a lot that is decent, but so much that is troubling, problematic, and way over hyped.
       
 (DIR) Post #AUS6twPsoYvLMI3xGi by simon@fedi.simonwillison.net
       2023-04-08T19:52:53Z
       
       0 likes, 0 repeats
       
        @vanderwal +100 for overhyped. I've never encountered a technology before that confuses the positives and negatives as thoroughly as LLMs do.
       
 (DIR) Post #AUT1rHQRwQTb2HCQYi by ramiboutas@fosstodon.org
       2023-04-09T06:31:23Z
       
       0 likes, 0 repeats
       
        @simon that last message would never work. As humans we reject lies, but at the same time we accept that not everything is perfect. The truth behind both messages is the same, but one will be accepted and the other rejected.
       
 (DIR) Post #AUTMiWub5njrUqyAxk by carlton@fosstodon.org
       2023-04-09T10:25:11Z
       
       0 likes, 0 repeats
       
       Hey @simon - you prompted me to jot down my thoughts: https://noumenal.es/posts/llms-and-the-business-of-truth/Kn3/
       
 (DIR) Post #AUTfoKKAhhDjK8PS6a by benjamineskola@hachyderm.io
       2023-04-09T13:59:06Z
       
       0 likes, 0 repeats
       
        @simon I think this is a bit of a strawman. Nobody's suggesting replacing a short explanation with a long one. “ChatGPT produces incorrect answers” is accurate and concise — if the current disclaimer is flawed it’s simply that it's not emphatic enough. I don't think of the problem with ‘lying’ as a linguistic one — it's simply misrepresenting what an LLM does and why it does it. It's a barrier to understanding.
       
 (DIR) Post #AUTol3K0MVo1JFqSI4 by simon@fedi.simonwillison.net
       2023-04-09T15:32:34Z
       
       0 likes, 0 repeats
       
       @benjamineskola yes, it was a deliberate strawman to illustrate the point I was trying to make in hopefully a humorous way
       
 (DIR) Post #AUTolHej4GhPlZeCX2 by simon@fedi.simonwillison.net
       2023-04-09T15:33:36Z
       
       0 likes, 0 repeats
       
       @benjamineskola I do genuinely think that using the word "confabulation" is a bigger barrier to understanding than saying it lies
       
 (DIR) Post #AUTpFVcjhSbbm584bg by benjamineskola@hachyderm.io
       2023-04-09T15:44:53Z
       
       0 likes, 0 repeats
       
        @simon I’d agree — it’s too technical a term, I think; most people (whether software engineers or laypeople) wouldn’t be familiar with it. Unhelpful for explanation even if in some sense analogous (maybe?) to what’s happening. Really I think just saying “false” is clear enough: the idea that something on the internet can be incorrect is not unfamiliar to many people.
       
 (DIR) Post #AUTqAfXdUyJD2Y02gi by benjamineskola@hachyderm.io
       2023-04-09T15:55:11Z
       
       0 likes, 0 repeats
       
       @simon To me “lying” seems to bring with it problems of motivation. Like, OK, if ChatGPT is lying to me, why is it lying? Can I persuade it not to? People don’t lie for no reason, generally. But trying to understand LLMs is surely only made more complicated by thinking in terms of intent.(If it was clearly understood as a metaphor then that’d be a different matter. We talk about ordinary programs as if they had intent. But the hype about LLMs being intelligent has muddied the waters.)
       
 (DIR) Post #AUTvkCdivpLlbAbtw0 by simon@fedi.simonwillison.net
       2023-04-09T16:47:50Z
       
       0 likes, 0 repeats
       
       @benjamineskola that's one of the things I find so interesting about explaining to people that ChatGPT will lie to them: the obvious next question is "why would it lie to me?" - and answering that is where we can get into the really interesting details of how these things actually work
       
 (DIR) Post #AUYExZt6TzFdE8jdJo by scriptautomate@fosstodon.org
       2023-04-11T18:51:37Z
       
       0 likes, 0 repeats
       
        @simon Perhaps "fabricate" is a better word here, since it can carry multiple meanings: ChatGPT creates output (as a machine that manufactures), which may be bullshit (as a human would interpret false info). Depends on the audience. If you get too caught up in technicalities, people glaze over. Saying that something spouts lies/bullshit provides something simplified and actionable.