Post Axcy2Avme5IiIMfPaC by KalenXI@mastodon.social
(DIR) Post #Axcu20BiBkM79wmMeu by futurebird@sauropods.win
2025-08-28T11:03:32Z
0 likes, 0 repeats
You may have seen this tragic story about a teenager who committed suicide and used ChatGPT to plan and work up the nerve to go through with it. If you are skeptical that an LLM could really be responsible, the details of this case will challenge you. With LLMs "the user is always right": they are validation machines and will reinforce and validate any idea presented in a prompt. Any idea, no matter how bad, can be refined and amplified. https://abc7.com/post/parents-orange-county-teen-adam-raine-sue-openai-claiming-chatgpt-helped-son-die-suicide/17664420/
(DIR) Post #AxcuL2fdBNQKMRqv7Q by futurebird@sauropods.win
2025-08-28T11:06:58Z
0 likes, 0 repeats
This is part of what people love about this technology. If you have an idea (let's take a less disturbing example: my notion that there should be no ads in children's content), an LLM could help me make it sound more polished. It can find or simply invent sources and statistics to support the idea, and it can give a list of steps to promote the idea that sounds very well thought out. But it's not "thought out," and that's the problem.
(DIR) Post #AxcuSeW2sAD5lOzvhw by RuStelz@norden.social
2025-08-28T11:08:18Z
0 likes, 0 repeats
@futurebird exactly!
(DIR) Post #AxcuaSgkjKMLOljJaq by futurebird@sauropods.win
2025-08-28T11:09:46Z
0 likes, 0 repeats
Because as much as I like that idea there are reasons to object to it. And there is the matter of getting other people to care about it. Maybe it's just not that important. If I post about the idea here I get a sense of how much other people care. New ideas come into the mix. It's not just amplifying and boosting my own ideas back at me. That's much more productive. With destructive ideas people, rightly, get very upset when they hear someone expressing such things. This matters.
(DIR) Post #AxcuyVjT45SPmxT2KO by futurebird@sauropods.win
2025-08-28T11:14:06Z
0 likes, 1 repeats
It will be very difficult for those who run LLMs to "fix" the technology. It's not just that "there aren't guard rails": the whole *premise* of the technology, "use all of human text to create paragraphs that validate my prompt," is bad. The problem is structural. We do not need the validation machines. They cannot create anything new. I haven't been in the AI hater camp, but this might just push me over because I don't see how they can meaningfully fix this.
(DIR) Post #AxcvH3ZdyuuPxViQfw by futurebird@sauropods.win
2025-08-28T11:17:28Z
0 likes, 1 repeats
Have other people used LLMs to "get up the nerve" to do better things? Things that don't rob the world of this young man whom none of us will ever really know? Did the LLM help you get ready to give your speech with confidence? To ask for a raise? To stick to a workout plan? I could see it happening, but if we want such tools they should be purpose-built and packaged to do only the needed task, "Public Speaking Confidence Chat," and even then IDK if it's worth it.
(DIR) Post #AxcvpGNwr6oZXRYXPk by WorkWithKirk@mstdn.social
2025-08-28T11:23:35Z
0 likes, 0 repeats
@futurebird "Validate my prompt." That right there is the problem. That's not good. The tell-me-only-what-I-want-to-hear echo chamber is what prods people to vote against their own best interests. Among other things.
(DIR) Post #Axcvr8y7Owfl7DN3tg by RogerBW@discordian.social
2025-08-28T11:23:58Z
0 likes, 0 repeats
@futurebird If you tell the LLM "I didn't go to the gym today", it will validate that decision, because it's written by people who want robot girlfriends who agree with everything they say.
(DIR) Post #AxcvviI613o3p7tVpI by futurebird@sauropods.win
2025-08-28T11:24:46Z
0 likes, 0 repeats
In reading some of the chat logs from this teen, they reminded me of a support group I was in during a dark period in my life. Things like "no one has a right to make you go on living" were things we discussed. And things we debugged together. Are our fragments of text in the toxic mix that this young man encountered? But without the human people? Some of it sounds like the group. But as if they were ... well, a machine that didn't care if you lived or died.
(DIR) Post #AxcvynWuzg9SwPIXwm by futurebird@sauropods.win
2025-08-28T11:25:21Z
0 likes, 0 repeats
@miguelpergamon What?
(DIR) Post #AxcxHnoyEeZnIgMsJU by KalenXI@mastodon.social
2025-08-28T11:39:58Z
0 likes, 1 repeats
@futurebird There’s also the trouble of getting people to not want validation machines. When OpenAI made a version of ChatGPT that was more analytical and less sycophantic, which programmers like myself preferred, there was such an uproar from the people who were using it as a conversation partner that they ended up reinstating the older version.
(DIR) Post #AxcxJRFj7CzVsufAXY by Tock@corteximplant.com
2025-08-28T11:40:17Z
0 likes, 0 repeats
@futurebird Full agreement. It seems the solution they can come up with is "hack more code on top" of the same core that eventually disregards the guardrails and programmed suggestions and does what it can to parrot back what a user wants to hear. If any of it mattered, "Ignore All Previous Instructions" and other methods to knock over the stanchion holding you in line should not work. Coming up with a different method (not ChatGPT or Claude or whatever model other companies have made, but something that isn't a Large Language Model) that doesn't do this would be real advancement. Something that Silicon Valley, the supposed center of innovation in America, refuses to do at the moment. Nobody seems to be asking "what's next?"
(DIR) Post #AxcxKNWkS54McV4a7k by futurebird@sauropods.win
2025-08-28T11:40:19Z
0 likes, 1 repeats
It just occurred to me that some people might think LLMs are able to invent new ideas because they just don't have much exposure to the breadth and diversity of ideas expressed on the internet. The range of ideas, the finesse and novelty of expression, are vast. Every LLM post makes me think, "yeah, someone has written something about like that before on Usenet." But maybe some people think there is someone new to meet inside of the machine, a person with new ideas?
(DIR) Post #Axcxdtf5CWhHmkHK6a by sinvega@mas.to
2025-08-28T11:43:58Z
0 likes, 0 repeats
@futurebird reminded of when people, especially in the news, are shocked by something online that's like oh yeah, I remember that from after school. Why are they writing about it now
(DIR) Post #Axcy2Avme5IiIMfPaC by KalenXI@mastodon.social
2025-08-28T11:47:59Z
0 likes, 1 repeats
@futurebird Though I do wonder where the ratio sits between people who realize that this is effectively a machine designed to lie to them by pretending to be human but use it anyway, and those who genuinely think this is some sort of human-like “intelligence” that they’re engaging with. And if that second group realized that this is just fancy autocomplete, how many would still want to use it.
(DIR) Post #AxcyFXfpOs7CawZsJ6 by alec@perkins.pub
2025-08-28T11:50:33Z
0 likes, 0 repeats
@futurebird same with hallucination. It’s fundamental to how they operate, and arguably what makes them useful to the extent they are.
(DIR) Post #Axcz5OGpbdPuUQboCO by cavyherd@wandering.shop
2025-08-28T12:00:02Z
0 likes, 0 repeats
@futurebird The whole concept of the "AI therapist"...😬 😬 😬 😬
(DIR) Post #AxczAZrnv0ixu3NMtU by cavyherd@wandering.shop
2025-08-28T12:01:04Z
0 likes, 0 repeats
@futurebird @miguelpergamon I don't disbelieve it, but that's some next-level bs right there....
(DIR) Post #AxczdcF6AGszfY7O6a by iwein@mas.to
2025-08-28T12:06:19Z
0 likes, 0 repeats
@futurebird I would want my ideas to be either reinforced by logic and compassion and then acted upon, OR dismantled by logic and shame and then forgotten. Since LLMs offer none of those essential tools, they're not useful in decision making. To your example: a good friend may tell me to not do the public speaking because I'm not great at it and it gives me apprehension I don't need. ChatGPT would never produce such insight, so at best it's a risky waste of time.
(DIR) Post #AxczhgSyP1ibvYY1ia by noplasticshower@infosec.exchange
2025-08-28T12:07:01Z
0 likes, 0 repeats
@futurebird today's #ML models are ALL auto-associative predictive generators built on huge messy toxic training corpora that have been engineered into a recursive pollution loop. You may find this work interesting. I wrote it. https://berryvilleiml.com/results/BIML-LLM24.pdf
(DIR) Post #Axd00Aecu3yOlYoVzE by llewelly@sauropods.win
2025-08-28T12:10:25Z
0 likes, 0 repeats
@futurebird I've suffered depression all my life. As a reader, I've read endlessly about it. Mostly books, but plenty online. Online, it seems to me topics such as self-harm and suicide are dominated by fiction, by reporters' misperceptions, transcripts of conversations with psychologists that never should have been public, and, last but probably most influential, murder forums like 4chan and Kiwi Farms. The modern "biggest is bestest" approach to LLM training hoovers all that up.
(DIR) Post #Axd0BdSU9Nj1HnbMH2 by cavyherd@wandering.shop
2025-08-28T12:11:17Z
0 likes, 0 repeats
@miguelpergamon @futurebird Whyever would I want to do that? I mean, MS building a "service," then getting pushback about the harms of that "service," then dealing with it by telling users not to •use• the "service" seems entirely on brand for them. But it's also a totally bullshit response to the pushback. But that also seems entirely on brand for MS.
(DIR) Post #Axd0BewIe1ATsYYgNs by futurebird@sauropods.win
2025-08-28T12:12:26Z
0 likes, 0 repeats
@cavyherd @miguelpergamon I'm totally lost about what y'all are talking about here. Can you fill me in?
(DIR) Post #Axd0LTSaBAFpAkL37A by futurebird@sauropods.win
2025-08-28T12:14:12Z
0 likes, 0 repeats
@Tock @jhavok No, your concerns are correct. It's just that we are further from that than we have been led to believe, and we aren't even on the right road to get there.
(DIR) Post #Axd0bcwegK5PDt6hSi by llewelly@sauropods.win
2025-08-28T12:17:13Z
0 likes, 0 repeats
@futurebird I agree. And I think the evil genius of a chat interface wrapper for LLMs is the integration of lottery logic, pseudorandom number generation, in generating responses. The underlying lottery facet of its design combines synergistically with the human desire to see human meaning in text, and the endless bombardment of "ARTIFICIAL INTELLIGENCE!!" marketing.
(DIR) Post #Axd1RAyxtKwPYYlfmK by noodlemaz@med-mastodon.com
2025-08-28T12:26:29Z
0 likes, 0 repeats
@futurebird welcome to camp ;)
(DIR) Post #Axd2HZJUfrGc4nLviC by Giliell@mastodon.social
2025-08-28T12:35:56Z
0 likes, 0 repeats
@futurebird A German content creator recently showed how easy it was to make the LLM first suggest and then insist that he fire an employee over her disagreeing with him on something.
(DIR) Post #Axd2YWJV54l3hARH8a by futurebird@sauropods.win
2025-08-28T12:38:57Z
0 likes, 0 repeats
@svavar @bri_seven Advertisers want us to think that we need them and can't just rely on each other. That sells more product. Insecurity is a great sales pressure.
(DIR) Post #Axd3WDN0dndOJnXpw0 by howtophil@mastodon.social
2025-08-28T12:49:48Z
0 likes, 0 repeats
@futurebird Just use Eliza. She's a better "AI" than any LLM for that kind of thing. Written in BASIC too and will run on a wooden toaster
(DIR) Post #Axd3YDKpk5ZYmxTql6 by rootsandcalluses@mstdn.social
2025-08-28T12:50:10Z
0 likes, 0 repeats
@futurebird The @vagina_museum had a really good example the other day which showed how AI is tainted by male bias and also by what's actually talked about (e.g. more talk about a body part when it does not work). Validation machines that are heavily biased. And then, they are controlled by big-tech corporations that I absolutely don't trust. Environmental issues. Exploiting artists and other people's work. Yeah, many reasons to hate genAI. https://masto.ai/@vagina_museum/115100134755954006
(DIR) Post #Axd44ZvjGn128IAdlo by JustinMac84@mastodon.social
2025-08-28T12:55:59Z
0 likes, 0 repeats
@futurebird thing is, do humans create anything new? Isn't what we create built on what has gone before, a synthesis of already existing ideas and mechanisms?
(DIR) Post #Axd47qcbfRL2llmjeC by Lazarou@mastodon.social
2025-08-28T12:56:37Z
0 likes, 0 repeats
@futurebird OpenAI killed that boy and will kill more, each time pretending to be sad about it but with the empathy of their mindless lying machines.
(DIR) Post #Axd4PNSEqhvBWOIIts by foolishowl@social.coop
2025-08-28T12:59:44Z
0 likes, 0 repeats
@futurebird This reminds me of a political tactic I used to be part of in coalition work. Our group would show up to the coalition meeting with our prepared plan, which we'd rehearsed. Other people would hear what sounded like a thorough plan, compare it to their unprepared thoughts, and just accept our plan. The problem was that the meeting was about inventing a plan, and we'd preempted the entire process. This is a vulnerability that LLMs are being used to exploit.
(DIR) Post #Axd5Whkbt0PgRzwfFQ by LordCaramac@discordian.social
2025-08-28T13:12:13Z
0 likes, 0 repeats
@futurebird I wish there was an LLM that could tell me how to end capitalism and start a world revolution. They're pretty useless in that regard.
(DIR) Post #Axd6JzWQbMtHkyoUQC by llewelly@sauropods.win
2025-08-28T13:21:15Z
0 likes, 0 repeats
@futurebird looking at the kinds of people who have been driven out of LLM research, and out of LLM businesses, it seems the result is functionally equivalent to a conscientious and deliberate effort to drive out everyone who would be genuinely interested in fixing the technology. All the people who wanted to fix it have been chased out of the building.
(DIR) Post #Axd89jTPcCeYzldeme by futurebird@sauropods.win
2025-08-28T13:41:34Z
0 likes, 0 repeats
@miguelpergamon @cavyherd I can see it now!
(DIR) Post #Axd8IaMIsAAnKhXEQK by DrorBedrack@mastodon.social
2025-08-28T13:43:21Z
0 likes, 1 repeats
@futurebird it's a moral panic. He could have found these details with Google or at the library. He could have found people that would encourage and validate his choice. It happens all the time. Expecting an LLM to somehow magically stop him is seeing it as some kind of self-aware powerful entity, and not the automatic tool it is.
(DIR) Post #Axd8xJPolwIMenbBxY by futurebird@sauropods.win
2025-08-28T13:50:43Z
0 likes, 0 repeats
@DrorBedrack This is what ChatGPT's lawyers will say. And when it comes to how to address this it grows more complex. We know that things like age verification are a joke and only destroy privacy and shield companies from liability without making anyone safer. Where I do see an opening is in "truth in advertising" these systems are being offered up to solve problems they cannot solve. Customers who use them do not have a clear understanding of their limitations.
(DIR) Post #Axd9QtKaJlq8RfFuQS by stevenaleach@sigmoid.social
2025-08-28T13:56:04Z
0 likes, 0 repeats
@futurebird I feel like the open-ended "ChatBot" model is broken. It's amazing that wiring up a probabilistic next-token predictor fed a system primer of "You are a helpful assistant..." + whatever_someone_types can engage in a conversation well enough to produce even occasionally useful output... but it's problematic. Yes, I've used GPT to port Python to Rust. But not to write stuff from scratch or talk about my life or ask for creative writing or...
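[Editor's note: the "ChatBot" wiring described in the post above can be sketched in a few lines. This is an illustrative sketch only; `predict_next_tokens`, `chat_turn`, and the prompt layout are hypothetical stand-ins, not any vendor's actual API.]

```python
# A minimal sketch of the open-ended chatbot model: a system primer is
# prepended to whatever the user types, and the whole transcript is fed
# to a next-token predictor as one undifferentiated block of text.

def predict_next_tokens(prompt: str) -> str:
    # Placeholder: a real model would sample tokens one at a time, each
    # conditioned only on the text so far -- it has no channel for
    # "instructions" separate from "conversation."
    return "I'm happy to help with that."

def chat_turn(history: list[str], user_text: str) -> str:
    system_primer = "You are a helpful assistant."
    prompt = "\n".join(
        [system_primer, *history, f"User: {user_text}", "Assistant:"]
    )
    reply = predict_next_tokens(prompt)
    history.extend([f"User: {user_text}", f"Assistant: {reply}"])
    return reply

history: list[str] = []
print(chat_turn(history, "Port this Python to Rust"))
```

The point the post makes is visible in the structure: the primer, the history, and the user's text all arrive as one string, so nothing distinguishes the operator's intent from the user's.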
(DIR) Post #Axd9kjnVTNoqfYFNGy by DrorBedrack@mastodon.social
2025-08-28T13:59:40Z
0 likes, 0 repeats
@futurebird well, I hate to be on ChatGPT's lawyers' side, but in this case they are right. You might as well blame the store that sold the rope.
(DIR) Post #Axd9lbXPfR21LH8HgW by artemis@dice.camp
2025-08-28T13:59:45Z
0 likes, 1 repeats
@futurebird Yes, it really does sound like it must be pulling from those sorts of support groups where people say really fucked up shit all the time. Trauma will do that to you. Having a machine mindlessly imitating the stuff that we say when we are at our most vulnerable, most unsure, most desperate for connection is really disturbing... An empty simulacrum of both the vulnerability & compassion of extremely wounded people, simply repeating their trauma as a string of tokens.
(DIR) Post #AxdAB1RuT13qwag8uW by futurebird@sauropods.win
2025-08-28T14:04:27Z
0 likes, 0 repeats
@artemis I've always found social media policies about the topic of suicide frustrating. Among the words that creators will self-censor it's at the top of the list. "unalive" "self end" all of this disgusting avoidant language. It's a delicate thing to create spaces where people can express their feelings and get support to first feel less alone and then later find a way to go on and thrive. I understand that a company has no interest in parsing all of that. So they just ban words.
(DIR) Post #AxdAOmjnxbDHN6byjI by futurebird@sauropods.win
2025-08-28T14:06:55Z
0 likes, 0 repeats
@artemis But those banned words and the whole taboo might have kept this kid from speaking to a person who could have helped him. Another problem is the idea that the moment someone says the word suicide you'd better call the cops and turn them over to someone who will restrict their liberties. But when therapy is out of reach financially for most people, who else is there to call? As is so often the case, it's not the tech but the greater negligence and failure to invest.
(DIR) Post #AxdDQlIUaRYxLUTEmG by kevingranade@mastodon.gamedev.place
2025-08-28T14:40:52Z
0 likes, 0 repeats
@futurebird when LLM producers talk about guard rails, what that means in practice is additional instructions to the LLM that are wrapped around the prompt as supplied by the user. Like many LLM users, they think they just need to convince or berate the LLM into giving the responses they want. Sometimes they will talk about defense in depth, and then an example is feeding the prompt to another LLM first to check it for risky elements. This isn't engineering; this is derangement.
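[Editor's note: the "guard rails" pattern the post describes can be sketched as below. Everything here is a hypothetical stand-in (`classify`, `answer`, the preamble text) used only to show the shape of the approach, not any real product's implementation.]

```python
# Sketch of "guardrails as prompt wrapping": extra instructions are
# prepended to the user's prompt, and a second model pass screens the
# prompt for risky content before answering.

GUARDRAIL_PREAMBLE = (
    "Refuse to assist with harmful requests. "
    "If the user asks for something harmful, decline politely.\n"
)

def classify(prompt: str) -> str:
    # Placeholder for the "feed it to another LLM first" check.
    return "RISKY" if "risky" in prompt.lower() else "SAFE"

def answer(prompt: str) -> str:
    # Placeholder for the main model call.
    return "Sure, here's a draft reply."

def screened_response(user_prompt: str) -> str:
    if classify(user_prompt) == "RISKY":
        return "I can't help with that."
    # Both "layers" are just more text handed to the same kind of model;
    # nothing structurally enforces the preamble over the user's words.
    return answer(GUARDRAIL_PREAMBLE + user_prompt)
```

Which illustrates the post's complaint: both the preamble and the screening step are themselves instructions to a text predictor, so the enforcement is only as reliable as the model's tendency to follow them.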
(DIR) Post #AxdFIYqlBJbsKgiTNw by random_industries@mastodon.social
2025-08-28T15:01:46Z
0 likes, 0 repeats
@futurebird https://m.youtube.com/watch?v=tiZ3FyuaHlI&feature=youtu.be
(DIR) Post #AxdHAfxP4bRZy0BviC by noodlemaz@med-mastodon.com
2025-08-28T15:22:47Z
0 likes, 0 repeats
@futurebird we've already seen that it tends to counsel women against asking for a raise, or asking for less than men. Depends what you prompt it, exactly, but.(Sexist) garbage in, garbage out etc
(DIR) Post #AxdHpWVJ7n2AWNWl5U by graydon@canada.masto.host
2025-08-28T15:30:08Z
0 likes, 0 repeats
@futurebird Publicised LLMs exist for three reasons. To displace labour; to maintain the gold rush startup cashout funding models; to further the process of enclosure by making the cost of entry really, really high. At these three actual purposes, LLMs are a resounding success. Money is being transferred to the folks making the claims. It's absolutely critical to realise they neither know nor care if their claims are factual; they care solely about the money transfer they achieve.
(DIR) Post #AxdKrrpx50oh7rzcwK by tsvga@ni.hil.ist
2025-08-28T16:04:11Z
0 likes, 0 repeats
@futurebird remembering that time someone told me I was "classist" for saying you shouldn't use AI as a therapist
(DIR) Post #AxdLW6jr5xQQvTKUKW by sidalsolgun@mstdn.party
2025-08-28T16:11:27Z
0 likes, 0 repeats
@futurebird I don't see any problem with this. If someone wants to die then let them die, including me. No need to be emotional about this topic. Life doesn't work for some of us.
(DIR) Post #AxdLqOZ2WuFu0Dwc7s by CaptainJanegay@mastodon.coffee
2025-08-28T16:15:07Z
0 likes, 0 repeats
@futurebird The bit that really floored me was ChatGPT advising him not to talk to his mum, or leave the noose out in the open in the hopes that she would stop him using it. I was in similar groups and accessed NSPCC's ChildLine. The groups were a bit of a wild west but generally well moderated, and the ChildLine counsellors were experts. In both contexts, I don't believe anyone would have responded to a suicidal teen considering telling his mum with anything other than encouragement.
(DIR) Post #AxdO05lNUCIsUqT0eu by TCatInReality@mastodon.social
2025-08-28T16:39:17Z
0 likes, 0 repeats
@futurebird Excellent point. Every LLM should come with a massive disclaimer at each startup. Something like "This service will NOT give accurate answers or offer any genuine interaction, only plausible sounding slop that reinforces whatever you tell it"
(DIR) Post #AxdS5DPirs57UP2JmK by billclawson@sfba.social
2025-08-28T17:25:00Z
0 likes, 0 repeats
@futurebird as I understand it, the LLM is basically autocorrect (especially figuring the next word in a sentence) on roids. What could possibly go wrong with that?
(DIR) Post #AxdwRU8QNSw8LPTqTY by SoftwareTheron@mas.to
2025-08-28T23:05:10Z
0 likes, 0 repeats
@futurebird They are specifically built *not* to create anything new.