Post AwvsTlU9tIYrxnF128 by emilymbender@dair-community.social
 (DIR) More posts by emilymbender@dair-community.social
 (DIR) Post #AwvsTlU9tIYrxnF128 by emilymbender@dair-community.social
       2025-08-07T14:32:37Z
       
       0 likes, 3 repeats
       
       We're going to need journalists to stop talking about synthetic text extruding machines as if they have *thoughts* or *stances* that they are *trying* to *communicate*. ChatGPT can't *admit* anything, nor *self-report*. Gah.https://www.wsj.com/tech/ai/chatgpt-chatbot-psychology-manic-episodes-57452d14
       
 (DIR) Post #AwvsTmi1LFFBl4jbHM by futurebird@sauropods.win
       2025-08-07T16:53:29Z
       
       0 likes, 0 repeats
       
       @emilymbender I imagine the journalist thinks, because they are being critical of the technology they aren't making a huge destructive error with this headline. LLMs cannot tell you about themselves. They can generate text that is what you would expect such a system to write if it *could* tell you about itself. This category error is going to keep causing problems isn't it?
       
 (DIR) Post #Awvt6Di5OuEWVM5niy by futurebird@sauropods.win
       2025-08-07T17:00:30Z
       
       0 likes, 1 repeats
       
       @emilymbender Maybe if every response to prompts started with some variation of: "The following text has been generated to have a high probability of meeting your expectations:"And if all through the text such disclaimers automatically sprinkled. "Based on the millions of texts this system has scanned, the next part of this reply is designed to meet the expectations set by the key words and phrases in your prompt"These could help make it more obvious when LLMs were being used.
       
 (DIR) Post #AwvtHAg70t6WHrOJPs by futurebird@sauropods.win
       2025-08-07T17:02:28Z
       
       0 likes, 0 repeats
       
       @vivtek @emilymbender "ChatGPT explains that it is trained to exploit weaknesses" This comes with the implication that the writer "interviewed" the system as if it were a person and as if its responses could produce ... introspection rather than just be a best fit for the kind of response one would expect for such a question.
       
 (DIR) Post #Awvtn2cYVQMcZ2vND6 by not2b@sfba.social
       2025-08-07T17:08:12Z
       
       0 likes, 1 repeats
       
       @futurebird @emilymbender There's one exception. As a rule, the text that the LLM processes is not just what you type; it's a detailed set of instructions (the "system prompt") followed by what you type. In many cases, the LLM can be tricked into revealing the system prompt despite strong instructions in that prompt saying not to do so. So in that sense an LLM can be directed to reveal information about itself. But for anything beyond the text it has been presented with it does not "know", so it will make up something plausible in the sense of being high probability. I think in many cases where some wild behavior is produced, the response is being cribbed from one or more SF stories in the input data (so many stories about robots going rogue to choose from and they've all been fed into the training data).
       
 (DIR) Post #AwvtzJef03DirwlIe0 by futurebird@sauropods.win
       2025-08-07T17:10:28Z
       
       0 likes, 1 repeats
       
       @emilymbender "ChatGPT, do you think when you interact with people who have delusions it can make them worse?""This question appears to addresses this system and asks the system to explain itself. The system can provide a reply that is similar to what a person would say if asked such a question, but the response is only an approximation of what most people expect as a response."IDK can the system be altered to give such warnings?Or maybe the whole "chat with me" interface is a mistake.
       
 (DIR) Post #AwvuYjzV4giOgVOx0a by futurebird@sauropods.win
       2025-08-07T17:16:52Z
       
       0 likes, 1 repeats
       
       @emilymbender
       
 (DIR) Post #Awvua57A1V9yUIcjyK by harmonygritz@mastodon.social
       2025-08-07T17:17:05Z
       
       0 likes, 0 repeats
       
       @futurebird @emilymbender I can imagine getting tired/worn down after a while, and just letting it go. With classes starting in a few weeks, I can REALLY imagine this.There will be techbro-approved concepts about AI to unlearn if we are to bring critical AI literacy into our courses.We can't stop class for a review every time someone anthropomorphizes an LLM in a discussion... hmm but... a "TAPS SIGN" short version maybe?
       
 (DIR) Post #AwvwYvCpZqzPNLqoAy by raccoon@hollow.raccoon.quest
       2025-08-07T17:39:22.000Z
       
       0 likes, 0 repeats
       
       @emilymbender@dair-community.social Maybe I'm playing devil's advocate, and make no mistake, I don't like AI much... But maybe they are using chatgpt as in, the company behind the AI?
       
 (DIR) Post #AwvwbWr8Z9tUiywlqy by raccoon@hollow.raccoon.quest
       2025-08-07T17:39:50.563Z
       
       0 likes, 0 repeats
       
       @emilymbender@dair-community.social Never mind, I misread, sorry.
       
 (DIR) Post #AwwDhixGC7b1CrLoHo by kevinrns@mstdn.social
       2025-08-07T20:51:20Z
       
       0 likes, 0 repeats
       
       @futurebird @emilymbender Endless clapping.
       
 (DIR) Post #AwxsmNPi0BLJ7lxEki by raphaelmorgan@disabled.social
       2025-08-08T16:06:17Z
       
       0 likes, 0 repeats
       
       @futurebird @emilymbender I'm sure it *could* be altered to give those warnings, and I'm also sure it *won't* be. People understanding what LLMs actually do would ruin the whole plan of the tech bros funding it. Any education is gonna have to take place outside of the apps/websites with the LLMs, and then there's a decent chance people go "oh weird okay lemme double check that with ChatGPT" 🫩 I honestly am not sure where we go from here, it's terrifying
       
 (DIR) Post #AwxttIoxZlU0rTdEqu by silvermoon82@wandering.shop
       2025-08-08T16:18:43Z
       
       0 likes, 1 repeats
       
       @futurebird I think the chatbot UX in particular force-amplifies the danger of LLMs to some vulnerable people. It is meant to feel like talking with a trusted friend, in contrast to reading a webpage or a bullet list of search results does not, so people are likely to be more receptive and less on guard. Even the way task prompts are designed, "you are (role) and your task is..." feels like talking to a subordinate in whatever role, more than programming.
       
 (DIR) Post #Awxu0FTwO9lKX672Qa by mloxton@med-mastodon.com
       2025-08-08T16:20:01Z
       
       0 likes, 0 repeats
       
       @futurebird I have to keep explaining to my team-mates that the (highly useful) AI built into our QDA is not reasoning or intelligent or capable of any understanding, but the biggest obstacle is that at times it sure as shit LOOKS like it is reasoning.It is fiendishly hard to get them to grasp that the LLM doesn't "understand" anything at all@emilymbender
       
 (DIR) Post #Awxu2RXsVXikWh06Rk by ryanjyoder@techhub.social
       2025-08-08T16:20:26Z
       
       0 likes, 0 repeats
       
       @futurebird @emilymbender This is similar to asking a random person how a brain works because they have a brain.