Post ASVdBUsidBktfjoEnA by UncivilServant@med-mastodon.com
 (DIR) Post #ASVYYWkjLBfcMhEdBw by simon@fedi.simonwillison.net
       2023-02-09T16:03:06Z
       
       0 likes, 0 repeats
       
       The biggest question for me about large language model interfaces - ChatGPT, the new Bing, Google's Bard - is this:

       How long does it take for regular users (as opposed to experts, or people who just try them once or twice) to convince themselves that these tools frequently make things up that aren't accurate?

       And assuming they figure this out, how does knowing it affect the way they use these tools?
       
 (DIR) Post #ASVYpZvamxPT8TRjN2 by simon@fedi.simonwillison.net
       2023-02-09T16:05:59Z
       
       0 likes, 0 repeats
       
       Someone must have done research on this, right? It feels pretty fundamental!
       
 (DIR) Post #ASVZ0BdFSbhyy7vN8C by tekkie@mstdn.social
       2023-02-09T16:06:05Z
       
       0 likes, 0 repeats
       
       @simon one could argue that the first impact will come from the authorities, especially if the tools end up spinning misinformation.
       
 (DIR) Post #ASVZBsSKaAl2yDOOGm by moebeus@mastodon.online
       2023-02-09T16:09:37Z
       
       0 likes, 0 repeats
       
       @simon We fall for flattery and vote for politicians who tell lies we want to hear; is it that much of a stretch to think we'll favor whatever chatbot serves us back our bias just the way we like it?
       
 (DIR) Post #ASVZZIJgbEazpsVzNo by mikesten@mastodon.social
       2023-02-09T16:12:33Z
       
       0 likes, 0 repeats
       
       @simon To be fair, it's taken _me_ a while to properly understand it, despite thinking I had a good handle on it. And - ridiculously - it was ChatGPT confidently reporting a completely made-up tally of Scrabble scores that drove the point home. I expected it to get obscure stuff wrong, but adding up a bunch of numbers?
       
 (DIR) Post #ASVZkzx6hduYM5XCz2 by apkoponen@mstdn.social
       2023-02-09T16:12:51Z
       
       0 likes, 0 repeats
       
       @simon Also, will people be less likely to realize this, if the language model caters to their own biases?
       
 (DIR) Post #ASVaACeibouQzfx5ou by simon@fedi.simonwillison.net
       2023-02-09T16:15:51Z
       
       0 likes, 0 repeats
       
       @mikesten yeah, the "wait a second, this thing is a COMPUTER and it can't even do MATH?" learning moment is a pretty powerful one!
       
 (DIR) Post #ASVaNAjT4UQmnEMEqW by johnmark@freeradical.zone
       2023-02-09T16:18:53Z
       
       0 likes, 0 repeats
       
       @simon taking it a step further - how long will it take non-experts and general bullshitters to learn how to influence and spin the chatbots?
       
 (DIR) Post #ASVb0JoLFdPWFAIttY by thomasfuchs@hachyderm.io
       2023-02-09T16:25:46Z
       
       1 likes, 0 repeats
       
       @simon People don’t figure things out like this, that’s why we should regulate AI bullshit. Same reason many places regulate hate speech from humans.

       This is harmful to society at large, but especially to groups of people who are already at the receiving end of hate from other people.

       Right now tech companies treat people like cattle in some form of demented psychological study.
       
 (DIR) Post #ASVbCchIrE8qIHsYM4 by marshallapplewhite@mastodon.world
       2023-02-09T16:27:15Z
       
       0 likes, 0 repeats
       
       @simon Given the argument I had over the weekend with a "regular user" who absolutely refused to accept my suggestion that the code ChatGPT was "helping" him with had problems, I don't think there's much hope for the future.
       
 (DIR) Post #ASVbTPwXWxsDf00v8C by simon@fedi.simonwillison.net
       2023-02-09T16:30:11Z
       
       0 likes, 0 repeats
       
       @thomasfuchs this is why I want to see research.

       If a regular person uses a chatbot and it tells them something that is very clearly false to them, how does that affect their future use of the tool?
       
 (DIR) Post #ASVbejBgJzVWWxUPYW by williamgunn@mastodon.social
       2023-02-09T16:32:16Z
       
       0 likes, 0 repeats
       
       @simon not exactly what you're looking for, but this says a little about whether people can recognize generated content when they see it in the wild, and how helpful they find it: https://arxiv.org/abs/2301.07597
       
 (DIR) Post #ASVbwXlfWAGsKGojvE by John@socks.masto.host
       2023-02-09T16:36:03Z
       
       0 likes, 0 repeats
       
       @simon My mental model for this is the arrival of inexpensive pocket calculators. I'm old enough to remember when they weren't allowed on tests, then they were allowed on tests, and it was finally recognized that they were just ubiquitous.

       I think ML-as-AI will rapidly become ubiquitous, but it will take normal people some time to figure out when they can be trusted.

       It will be an interesting era.
       
 (DIR) Post #ASVbwZTfA53Zd0PO8O by John@socks.masto.host
       2023-02-09T16:38:53Z
       
       0 likes, 0 repeats
       
       @simon it is possible that there could be a bounce back. People could say "that's just an AI talking" to mean something plausible but totally unverifiable.
       
 (DIR) Post #ASVc9o7O2U3snfIgtc by danhon@dan.mastohon.com
       2023-02-09T16:36:50Z
       
       0 likes, 0 repeats
       
       @simon people don't figure this out in the (very) large because evolution and fitness don't reward truth/accuracy.
       
 (DIR) Post #ASVco5QX6Y3C48SjuS by thomasfuchs@hachyderm.io
       2023-02-09T16:39:34Z
       
       0 likes, 0 repeats
       
       @simon They do not know that it’s false, neither do they care that they do not know.

       It’s extremely unethical and in my opinion is criminal behavior by big tech to unleash this crap _before_ they even assess potential damage to society in general and minorities especially.

       As someone else noted, it’s like the Tesla autopilot. People will die and inadvertently kill others because of this, when they e.g. look for medical advice.

       This is not some theoretical game or interesting mind exercise.
       
 (DIR) Post #ASVdBUsidBktfjoEnA by UncivilServant@med-mastodon.com
       2023-02-09T16:41:17Z
       
       0 likes, 0 repeats
       
       @simon I fully expect chatbots to turbocharge the existing hatred of experts. Instead of "ChatGPT told me something incorrect", most people will just say "hah, ChatGPT proves those experts are wrong."

       And if everyone has access to ChatGPT and maybe 1/5 of the population even knows an SME (subject-matter expert)? Maybe every field will look like medicine does now (even pre-pandemic), where virtually every non-expert believes a half-dozen impossible things they heard once without context.
       
 (DIR) Post #ASVdQXfsWsrAg5yeoK by simon@fedi.simonwillison.net
       2023-02-09T16:42:31Z
       
       0 likes, 0 repeats
       
       One argument here is that people will blindly trust any chatbot that supports their previous biases. Is that cynicism justified?

       What happens when the chatbot speaks against their biases? In particular, what if it both counters their biases AND does so in a way that is demonstrably factually incorrect?

       We are already seeing furious complaints from some corners that ChatGPT has a liberal bias - how does that affect how those complainants trust and use these tools?
       
 (DIR) Post #ASVdQcKnChxd899O8e by simon@fedi.simonwillison.net
       2023-02-09T16:46:13Z
       
       0 likes, 0 repeats
       
       Hindu nationalists are FURIOUS about ChatGPT right now: https://www.wired.com/story/chatgpt-has-been-sucked-into-indias-culture-wars/

       How will that impact their trust of systems like this in the future?
       
 (DIR) Post #ASVdesooiFEcJFzqfQ by alexhern@tech.intersects.art
       2023-02-09T16:40:04Z
       
       0 likes, 0 repeats
       
       @danhon @simon if it’s inaccurate but people-pleasing, regular users not only won’t figure it out, they’ll assume the people telling them the true facts are malicious liars
       
 (DIR) Post #ASVdetV0BOykQ5jXQ8 by simon@fedi.simonwillison.net
       2023-02-09T16:43:41Z
       
       0 likes, 0 repeats
       
       @alexhern @danhon what if it gives them a reply that angers them? Will they still believe it then?
       
 (DIR) Post #ASVdqXsMP4fApcDO4G by simon@fedi.simonwillison.net
       2023-02-09T16:44:53Z
       
       0 likes, 0 repeats
       
       @thomasfuchs my request for research here feels compatible with your desire to "assess potential damage to society in general"
       
 (DIR) Post #ASVe3Hi3otpQSRTJh2 by django@mastodon.social
       2023-02-09T16:45:53Z
       
       0 likes, 0 repeats
       
       @simon Are the models less accurate than if a person were generating answers?
       
 (DIR) Post #ASVeSLfIFcyTxs8384 by vortex_egg@ioc.exchange
       2023-02-09T16:41:26Z
       
       0 likes, 0 repeats
       
       @danhon @simon And further, there is an ongoing political project to corrupt information ecosystems.Within which I would include the slate of articles that came out after the Google Bard ad launch fiasco trying to claim that no, the incorrect information in the advertisement was actually correct and the known scientific reality was the thing that was wrong. Which surely would feed back into a generalized perception about the veracity of these systems...
       
 (DIR) Post #ASVeSMEO992ziiY4pc by vortex_egg@ioc.exchange
       2023-02-09T16:42:33Z
       
       0 likes, 0 repeats
       
       @danhon @simon Or there's also this... https://hachyderm.io/@itamarst/109831444046773659
       
 (DIR) Post #ASVeSMlMAZQ1MxyPDc by simon@fedi.simonwillison.net
       2023-02-09T16:47:44Z
       
       0 likes, 0 repeats
       
       @vortex_egg @danhon that doesn't look like the new Bing language model stuff to me - that looks like an older, even dumber implementation similar to Google's existing answer extraction
       
 (DIR) Post #ASVekU79Zj7ZNloAbY by simon@fedi.simonwillison.net
       2023-02-09T16:53:40Z
       
       0 likes, 0 repeats
       
       @django depends on the question!

       The bigger problem here is that the models are faster (can generate answers in less than a second) and output everything with an extremely confident writing style
       
 (DIR) Post #ASVezmfIszitGD4pRw by wordshaper@weatherishappening.network
       2023-02-09T17:03:03Z
       
       0 likes, 0 repeats
       
       @simon This isn't really cynicism, I think it's more an optimistic view of people.
       
 (DIR) Post #ASVfHy7z0VJ81WA97Q by toychicken@mastodon.social
       2023-02-09T17:08:32Z
       
       0 likes, 0 repeats
       
       @simon No research, but after an afternoon of 'playing' with ChatGPT, I had worked out its limitations.

       My takeaway, and note of optimism, is that people will be able to 'smell' bot-generated text quite easily. Whether they'll care is another discussion.
       
 (DIR) Post #ASVfWbpcBc8HDUYIYC by julian@social.synesthesia.co.uk
       2023-02-09T17:09:31Z
       
       0 likes, 0 repeats
       
       @simon Thank you for giving me the quote of the week for my all-company roundup tomorrow! (not public)
       
 (DIR) Post #ASVfinQwsLbUfupH3w by CodexArcanum@hachyderm.io
       2023-02-09T17:10:29Z
       
       0 likes, 0 repeats
       
       @simon In my experience, people can hold a real strong grudge against a technology they perceive as having wronged them.

       People do need to stop falling for the "liberal bias" line though. Look, it is never not an advantage to claim that X has "my enemy's bias." If you are correct, then you've accurately called out bias, good job. If you're wrong, who cares? When what you care about is winning, not facts, then you always want more bias in your favor.
       
 (DIR) Post #ASVftFt1wX63oewqnY by simon@fedi.simonwillison.net
       2023-02-09T17:11:26Z
       
       0 likes, 0 repeats
       
       @julian I'd love to hear how that goes over!
       
 (DIR) Post #ASVgFoqwcTgodJjRHE by rlitchfield@fosstodon.org
       2023-02-09T17:16:38Z
       
       0 likes, 0 repeats
       
       @simon I work in a School Division, and what some are starting to find is that there is currently a limit to how good these are and while the reach of information is very wide, the depth is not. There are frequent mistakes and downright plagiarism occurring in some of the responses that the Machine Learning system is providing.

       This technology will make things easier, but does not eliminate the need to validate the information.
       
 (DIR) Post #ASVgSNVxZglvbouBDk by adam_casto@mastodon.social
       2023-02-09T17:16:56Z
       
       0 likes, 0 repeats
       
       @simon To be fair though, they also thought a plain red cup had a liberal bias.
       
 (DIR) Post #ASVgghRkGT2DTtlLw8 by sebleier@mastodon.social
       2023-02-09T17:17:26Z
       
       0 likes, 0 repeats
       
       @simon I think we're going to see more ChatGPTs out there and my guess is that they are going to attract different people based on their biases.  People select their echo chambers in social media and we've seen the feedback loop it has produced with respect to political extremism.  I think we're about to see another feedback loop with ChatGPTs.  That is, people seeking out models that confirm their biases, which then drives them to produce biased content to feed back into it, and repeat.
       
 (DIR) Post #ASVh04rpPYVncaCdpg by simon@fedi.simonwillison.net
       2023-02-09T17:28:12Z
       
       0 likes, 0 repeats
       
       @rlitchfield are your students figuring that out? How does their usage of these tools change once they realize how inaccurate they can be?
       
 (DIR) Post #ASVhHNmGMAcNC2qqcy by jamesravey@fosstodon.org
       2023-02-09T17:40:12Z
       
       0 likes, 0 repeats
       
       @simon I think if you take journalism as a model (people are more likely to blindly trust articles that they agree with and strongly criticise material that they disagree with), I'd hypothesise that people will do the same with chatbots - only questioning what they read if they don't like what it says. This will be exacerbated by chatbots being super confident and subtly wrong: you have to look harder to find the problems, which you won't want to do if you agree with the output
       
 (DIR) Post #ASVhpsohSYWXJ1RUtU by jsmith@freeatlantis.com
       2023-02-09T17:48:50Z
       
       0 likes, 0 repeats
       
       @simon it's not related to AI. Ask how people react when confronted with facts that counter their own beliefs and you get the same result. Same problem, same answer. So it's just AI as the delivery mechanism, but it could just as easily be the media.
       
 (DIR) Post #ASVjNzrvqWqIyfmikS by simon@fedi.simonwillison.net
       2023-02-09T18:04:12Z
       
       0 likes, 0 repeats
       
       @jamesravey Journalism is an interesting comparison: people tend to develop trust in some journalists and distrust in others based on their evaluation of that individual's output.

       Some exploit this by only covering things in ways that resonate with their base.

       What happens when a chatbot clearly plays both sides? Are people protected against assuming it's all-knowing if it presents them information they agree with but then presents them with information they disagree with in a separate session?
       
 (DIR) Post #ASVjP9wLt39wVEPGdM by simon@fedi.simonwillison.net
       2023-02-09T18:05:15Z
       
       0 likes, 0 repeats
       
       @jsmith I think this is different. People use chatbots more than once. What happens to their mental model of how reliable a chatbot is if it supports their biases the first time they use it, but then contradicts their biases in a subsequent session?
       
 (DIR) Post #ASVjhvvCERraWiomR6 by simon@fedi.simonwillison.net
       2023-02-09T18:06:37Z
       
       0 likes, 0 repeats
       
       @sebleier What will happen when a right-leaning chatbot gains popularity, but then people figure out ways to trick it into supporting left wing talking points and start sharing prompts and screenshots?
       
 (DIR) Post #ASVlemeGS7Wqo3WYpk by aweiss@mas.to
       2023-02-09T18:29:38Z
       
       0 likes, 0 repeats
       
       @simon Seems like it's too early for there to be much research already, but I can't wait to see it.

       The reaction to ChatGPT's bullshit (ha ha look at the funny computer) is markedly different than the reaction when you attach a bullshit generator to a Google search box, which has represented authoritative answers for 20 years.
       
 (DIR) Post #ASVlsHWkxNr0L9Rsps by codedread@mastodon.cloud
       2023-02-09T18:32:11Z
       
       0 likes, 0 repeats
       
       @simon This is probably my biggest concern: since trust is transitive, what effects do these things have beyond their own use? Maybe more interestingly, why do these things frequently make up things that aren't accurate?
       
 (DIR) Post #ASVn7whYFqTgKyWzeS by jsmith@freeatlantis.com
       2023-02-09T18:48:08Z
       
       0 likes, 0 repeats
       
       @simon I still say its the same effect. What happened to millions of Americans when they heard friends and family voice support for Trump in 2016?
       
 (DIR) Post #ASVoBAmKTNAoorruXw by jamesravey@fosstodon.org
       2023-02-09T18:55:42Z
       
       0 likes, 0 repeats
       
       @simon yes possibly, or I wonder if you'd see some version of Knoll's law of media accuracy, where people are happy to trust the bot on stuff they don't know much about but are more distrustful when they know about the subject matter or have strong biases one way or another?
       
 (DIR) Post #ASVoSNyydMHch4zZtg by sebleier@mastodon.social
       2023-02-09T19:01:19Z
       
       0 likes, 0 repeats
       
       @simon – People are Bayesian by nature, so depending on how they prioritize truth vs. satisfying their biases, you'll see some people dock their favorite chat bot a few points if it spouts an opposing ideology.  If it gets to a certain point, you'll see a phase transition and you may see people migrate to another platform.  I see it as analogous to the recent migration of people moving from Fox News to OANN or Newsmax.
       
 (DIR) Post #ASVocrnmcXY24akogy by simon@fedi.simonwillison.net
       2023-02-09T19:01:25Z
       
       0 likes, 0 repeats
       
       @codedread I have a good understanding of why they lie so much: all they're ever doing is predicting the next word in a sequence of words based on their training data.

       They have no concept of truth - they just know statistically which words are most likely to follow "The Kennedy assassination was a conspiracy by ..." - based on the TBs of scraped data that was used to build their models.

       The fact that they get anything right at all is pretty astonishing!
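
       A minimal sketch of the mechanism described above: a toy bigram model that only ever samples a statistically likely next word, with no truth-checking step anywhere. The tiny probability table is invented purely for illustration; a real LLM learns statistics like these (over tokens, with far more context) from terabytes of scraped text.

       import random

       # Toy "language model": for each word, a probability distribution over
       # the next word. These numbers are made up for illustration; a real
       # model learns them from its training data.
       BIGRAMS = {
           "the": {"kennedy": 0.4, "moon": 0.6},
           "kennedy": {"assassination": 1.0},
           "assassination": {"was": 1.0},
           "was": {"a": 1.0},
           "a": {"conspiracy": 0.7, "tragedy": 0.3},
           "conspiracy": {"by": 1.0},
       }

       def next_word(word):
           """Sample the next word from the learned distribution.

           Note there is no truth-checking step: the model only knows which
           continuations were statistically common in its training data.
           """
           dist = BIGRAMS.get(word)
           if dist is None:
               return None  # no statistics for this word, so stop generating
           words = list(dist)
           weights = [dist[w] for w in words]
           return random.choices(words, weights=weights)[0]

       def generate(prompt, max_words=10):
           words = prompt.lower().split()
           for _ in range(max_words):
               nxt = next_word(words[-1])
               if nxt is None:
                   break
               words.append(nxt)
           return " ".join(words)

       print(generate("The Kennedy"))
       # e.g. "the kennedy assassination was a conspiracy by"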
       
 (DIR) Post #ASVpVhEDdPJvCsGQIS by alexhern@tech.intersects.art
       2023-02-09T19:12:43Z
       
       0 likes, 0 repeats
       
       @simon @danhon “they censored the AI because it was telling the truth”
       
 (DIR) Post #ASVphB2p7RGMriPNNw by SnoopJ@hachyderm.io
       2023-02-09T19:14:15Z
       
       0 likes, 0 repeats
       
       @simon it really does feel like this market is heading for a bifurcation point where people either buy the BS hook, line, and sinker, or realize that these tools have flaws just like any other overhyped market-creating technology, and the target applications become narrower in scope.

       But I can't say I'm feeling very optimistic about that second option.
       
 (DIR) Post #ASVtTTWW1A6nwI1CNc by rlitchfield@fosstodon.org
       2023-02-09T19:57:35Z
       
       0 likes, 0 repeats
       
       @simon Junior/high school students that are using these technologies to cheat are not the type to look too deeply, as all they want is a shortcut. It is the teachers that are seeing how weak some of the results are.
       
 (DIR) Post #ASVwI89DS4sEJvP00O by simon@fedi.simonwillison.net
       2023-02-09T20:29:00Z
       
       0 likes, 0 repeats
       
       @rlitchfield How does a student's opinion of the technology change over time, in particular after the second or third time they've been caught using it because it gave them facts that were obviously untrue and were marked as such?
       
 (DIR) Post #ASW0nStQlptTSGDRNg by zzzeek@fosstodon.org
       2023-02-09T21:17:28Z
       
       0 likes, 0 repeats
       
       @simon the people who did the research like @timnitGebru were fired
       
 (DIR) Post #ASW2jrRXke9uLVa4oa by glyph@mastodon.social
       2023-02-09T21:06:47Z
       
       0 likes, 0 repeats
       
       @simon have you turned on any US political news in the last 8 years? I think that the idea that there is such a thing as a consensus view of “demonstrably factually incorrect” is a statement so bold as to be unsupportable
       
 (DIR) Post #ASW2jrx5rLObvMLGzY by simon@fedi.simonwillison.net
       2023-02-09T21:39:37Z
       
       0 likes, 0 repeats
       
       @glyph My question remains: if a right-leaning person encounters replies from ChatGPT that directly counter their existing beliefs (and which they can fact-check through other sources), do they stop believing that ChatGPT is an infallible source of information?

       Even if their conclusion is "It's a conspiracy! The chatbot has been neutered!", does it still provide some level of protection for them in terms of helping them understand that these things are deeply fallible?
       
 (DIR) Post #ASW2w1Gz1xj2J6RI2K by mikesten@mastodon.social
       2023-02-09T21:40:07Z
       
       0 likes, 0 repeats
       
       @simon On the bright side, it gave us both 200 instead of giving Syl 240 and me 220. So.. I sort of owe it a beer.
       
 (DIR) Post #ASW4Rk6GVwwkGd7HbE by glyph@mastodon.social
       2023-02-09T21:58:51Z
       
       0 likes, 0 repeats
       
       @simon Their epistemic foundation is culturally authoritarian, not empirical, and I don't think they'll perceive ChatGPT itself as an agent with its own authority, more like an esoteric fountain of information to be incorporated into their (already incoherent) syncretic model of the world. So they'll poke at it until it reveals some "hidden truth" and they'll believe or not-believe its various mumblings on a case-by-case basis.
       
 (DIR) Post #ASW96Ep26qIwwxcaZs by josephholsten@mstdn.social
       2023-02-09T22:52:35Z
       
       0 likes, 0 repeats
       
       @simon permaquote literally every time I go spelunking for journal articles.
       
 (DIR) Post #ASWBel58sGoekwkE9g by Rob_Russell@mastodon.cloud
       2023-02-09T23:20:58Z
       
       0 likes, 0 repeats
       
       @simon @codedread I really wish they would remove the personification and chat UI. It should not look the same as a box where I send messages to people and people reply to me. User expectations would be better matched to the tools with a better explanation of the prompt and response.

       I really get a lot out of using GPT-3 through the playground interface (thanks to your intro, Simon) but I haven't been using the chat interfaces because it feels like the wrong tool.
       
 (DIR) Post #ASWCpP7Kyilnm2PZtA by simon@fedi.simonwillison.net
       2023-02-09T23:34:16Z
       
       0 likes, 0 repeats
       
       @Rob_Russell @codedread oh that's really interesting - I hadn't thought about how strongly the chat interface reinforces the science fiction "AI" aspect of it all.

       The playground interface never seemed to click for a lot of people
       
 (DIR) Post #ASXSuvhh2jzOiA289Y by satya@mas.to
       2023-02-10T14:09:21Z
       
       0 likes, 0 repeats
       
       @simon There is no placating the Hindu nationalists. Not sure where this will go.