Post AtllSh6NKZSbmu2VaS by futurebird@sauropods.win
(DIR) Post #AtljCJDABh7r9QH1kW by grammargirl@zirk.us
2025-05-05T01:41:37Z
0 likes, 1 repeats
This story about ChatGPT causing people to have harmful delusions has mind-blowing anecdotes. It's an important, alarming read. https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
(DIR) Post #AtljfDrBAtuSwSmHui by futurebird@sauropods.win
2025-05-05T01:51:21Z
0 likes, 1 repeats
@grammargirl This is so baffling to me. I've seen a less extreme version of this from some people ... who I thought would have known better. What I find infuriating is how the industry selling LLMs has encouraged this kind of thinking with alarmist "AI might take over" and "here comes the singularity" talk. So irresponsible and dishonest. That said, if you lose someone to an AI-fueled spiritual fantasy, I think in the absence of the tech it would have just been something else.
(DIR) Post #AtlkE4rabVGZBWad4i by grammargirl@zirk.us
2025-05-05T01:57:37Z
0 likes, 1 repeats
@futurebird Maybe it would have been something else for some people, but I saw someone say that it's like everyone who is susceptible to cult thinking now has 24-7 direct access to a cult leader, one designed to reel in each person specifically. And that felt true to me.
(DIR) Post #Atll7ZbG1lmgRKPfqi by futurebird@sauropods.win
2025-05-05T02:07:41Z
0 likes, 2 repeats
@grammargirl I had a friend who thought ChatGPT was "very helpful with criticism of poetry" ... and that's true in the sense that it will look at phrases in your poem and Frankenstein together some responses with the general form and tone of what real people have said about real poems with similar patterns. Maybe that might give someone some ideas for revisions along stereotypical lines. It can scan rhythm. It could easily miss the whole point and lead you to make derivative revisions.
(DIR) Post #AtllSh6NKZSbmu2VaS by futurebird@sauropods.win
2025-05-05T02:11:30Z
0 likes, 1 repeats
@grammargirl I tried to explain why I thought it was a less-than-ideal method of getting feedback. But I bumped up against a LOT of resistance. More than made sense to me. So I decided to try it with one of my own poems. The "feedback" was very flattering, ego-stroking in tone. Which made me really uncomfortable. I have no reason to think any real person would respond in the same way. But I could see how, if it seemed like "meaningful" feedback, being told it's not wouldn't be pleasant.
(DIR) Post #AtllY4FmgFqj2W0Dz6 by futurebird@sauropods.win
2025-05-05T02:12:28Z
0 likes, 0 repeats
@grammargirl Asking an LLM about your poems isn't the same as turning to it for religion... but I think it's along the same lines.
(DIR) Post #AtllvVeqxSFWh1uGHo by janeadams@datavis.social
2025-05-05T02:16:30Z
0 likes, 1 repeats
@futurebird @grammargirl honestly I thought it was super irresponsible of the NYT to publish that Kevin Roose article too where he talked to Bing's chatbot for way too long... That was like the first mainstream example I saw of a media company giving credence to this whole "AI is sentient" insanity
(DIR) Post #AtlmfY7a8D1hebIVIe by elebertus@mastodon.social
2025-05-05T02:24:59Z
0 likes, 0 repeats
@futurebird @grammargirl agreed. Drugs, gambling, extremist political views, MLM. Having your computer validate or not entirely dismiss basically anything you say to it is emotionally gratifying, which in turn makes all the good brain juices flow.
(DIR) Post #Atln2N17aBcvsK9pBI by elebertus@mastodon.social
2025-05-05T02:29:07Z
0 likes, 0 repeats
@futurebird @grammargirl not trying to spam you, but I've introduced family and other non-tech people to ChatGPT after they asked questions about AI. I find it's helpful to frame it as what it is. First, an LLM (or AI, in this case) is a new way for you to interact with a computer, like a touch screen or keyboard. Think of its knowledge as a search engine you can ask questions of in natural spoken language. Just like a search engine, it will be biased in its responses.
(DIR) Post #AtlnGmcvABT5M9i7Zw by Moss@beige.party
2025-05-05T02:29:00Z
0 likes, 1 repeats
@futurebird @grammargirl A friend asked me for help with their writing project. While I came up with suggestions, they were asking some LLM the same things, and writing in what *it* said. I pointed out that what it produced showed that it didn’t understand what my friend was trying to say. They insisted that it must be correct, grammatically sound and so on, and I said: sure, it followed the rules, but it lost the meaning and made your points more generic and vague.
(DIR) Post #AtlrX3VgYiMQAXLCbo by ricosuave@mastodon.online
2025-05-05T03:19:28Z
0 likes, 1 repeats
@futurebird @grammargirl poetry, for me, is so much tied to emotional response. A group of words, even a single word, brings to mind a whole gamut of feelings, thoughts and perceptions. All of which AI cannot have. It can only parrot the emotions others have told it of. It is similar to the effect a song has. If I hear the first few notes of a certain song, it engenders memories of a lovely little cat I had, who was taken away far too soon. Ten seconds = tears. What can AI analyze from that?
(DIR) Post #AtmJdag8wtQ9Ug8Yhk by toiletpaper@shitposter.world
2025-05-05T08:34:31Z
0 likes, 1 repeats
@grammargirl https://sebpearce.com/bullshit/ 🤪
(DIR) Post #AtmXnCcce3GL8Yy3Um by futurebird@sauropods.win
2025-05-05T11:12:57Z
0 likes, 1 repeats
@necedema @Wyatt_H_Knott @grammargirl Do "local LLMs" use training data compiled locally only? Or do you have a copy (a snapshot) of the associative matrices used by cloud-based LLMs stored locally, so you can run LLM prompts without an internet connection?
(DIR) Post #AtmiVTgBaOWGK6NwSe by hosford42@techhub.social
2025-05-05T13:13:03Z
0 likes, 1 repeats
@futurebird @necedema @Wyatt_H_Knott @grammargirl If you have the memory & compute power on your local machine to actually run it, you can download the whole thing and run it locally, completely disconnected. It's conceivable that you could use only local training data, but good luck gathering enough local data. Also, training it would be unbelievably time-consuming if you don't use the cloud and you want anything robust. What's usually done is you fine-tune a pre-trained model on your own data, and then feed it local data and system prompts to make the responses appropriate to your use case.
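A minimal sketch of the fully offline setup hosford42 describes, using the Hugging Face transformers library. It assumes the model snapshot was already downloaded and the machine has enough memory; the model ID and prompt are illustrative:

    import os
    os.environ["HF_HUB_OFFLINE"] = "1"  # fail fast if anything tries to reach the network

    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative; any downloaded causal LM works

    # local_files_only=True forces use of the snapshot already on disk
    tok = AutoTokenizer.from_pretrained(MODEL, local_files_only=True)
    model = AutoModelForCausalLM.from_pretrained(MODEL, local_files_only=True)

    inputs = tok("What is a local LLM?", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=100)
    print(tok.decode(out[0], skip_special_tokens=True))

Fine-tuning on your own data, as described above, is a separate and much heavier step; this sketch only covers running the pre-trained snapshot disconnected.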
(DIR) Post #Atmlnv5ygzPiQYw0tU by davep@infosec.exchange
2025-05-05T13:50:02Z
0 likes, 0 repeats
@futurebird @necedema @Wyatt_H_Knott @grammargirl The latter, generally. If you look on Huggingface you can find different size models that can be run locally depending on how much GPU RAM you have etc.
(DIR) Post #Atmm4u0NzzcBy9Pus4 by futurebird@sauropods.win
2025-05-05T13:53:06Z
0 likes, 1 repeats
@hosford42 @necedema @Wyatt_H_Knott @grammargirl So, almost no one is using this tool in this way. Very few are running these things locally. Fewer still are creating their own (attributed, responsibly obtained) data sources. What that tells me is this isn't about the technology that allows this kind of recomposition of data; it's about using (exploiting) the vast sea of information online in a novel way.
(DIR) Post #AtmmqTQw5y63STWGB6 by hosford42@techhub.social
2025-05-05T14:01:41Z
0 likes, 0 repeats
@futurebird @necedema @Wyatt_H_Knott @grammargirl You are correct on all points.
(DIR) Post #Atmmx6pyVPKnP9OYlc by mlevison@agilealliance.social
2025-05-05T14:02:50Z
0 likes, 0 repeats
@futurebird @necedema @Wyatt_H_Knott @grammargirl
(DIR) Post #AtmnHnW9WyoZZCzLUm by mlevison@agilealliance.social
2025-05-05T14:06:37Z
0 likes, 0 repeats
@futurebird The latter. Almost no one has the equipment, tooling, time, skill or volume of data to train an LLM from scratch. Models like Qwen3, which run locally, require an internet's worth of data to work the way they do. (I'm not endorsing their intellectual property theft, just explaining it) @necedema @Wyatt_H_Knott @grammargirl
(DIR) Post #Atmty4IQ8ePLUZ6YYy by barrygoldman1@sauropods.win
2025-05-05T14:54:31Z
0 likes, 1 repeats
@hosford42 @futurebird @necedema @Wyatt_H_Knott @grammargirl How can you have all that training data stored locally? If stored locally, you certainly can't use it as a search engine for the wealth of material on the www. I don't understand.
(DIR) Post #AtmvMSdtN41ZuALKd6 by futurebird@sauropods.win
2025-05-05T15:37:08Z
0 likes, 1 repeats
@hosford42 @necedema @Wyatt_H_Knott @grammargirl It's like the sophomoric "hack" for keeping up with all the damned five-paragraph essays. Just search the internet for a few documents with good-sounding paragraphs. Copy and paste chunks of sentences into a word document. Then carefully read and reword it all so it's "not plagiarism." This is still plagiarism. GPT will cheerfully help with that last step now.
(DIR) Post #AtmvhfxB4lnjGmTnl2 by futurebird@sauropods.win
2025-05-05T15:40:57Z
0 likes, 0 repeats
@hosford42 @necedema @Wyatt_H_Knott @grammargirl And yet none of the English and History teachers I know are very worried about this. Because the results of this process are always C-student work. The essay has no momentum, no cohesive idea justifying its existence beyond "I needed to turn some words in." It's swill.
(DIR) Post #AtmwoGJAvFrRiJTPN2 by nazokiyoubinbou@urusai.social
2025-05-05T15:53:20Z
0 likes, 0 repeats
@futurebird @necedema @Wyatt_H_Knott @grammargirl Not sure about those other answers. A local LLM is essentially a snapshot of the last trained data. A lot of stuff doesn't actually change that much in the things they do update (otherwise they would grow infinitely), so a lot of stuff in the local LLM is still as current as the online stuff. The main thing is just that you have to run "dumbed down" versions. Most of the online stuff uses at least the equivalent of a 70B model (70 billion parameters) -- probably 100B+, even 200B+. We get "distills," which use methods to tell the model what to prioritize as the total is reduced down -- more efficient than it may sound (there's a lot of fluff in there!). It's worth noting that running locally uses a lot less power, for various reasons.
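A minimal sketch of running one of those smaller distilled/quantized snapshots on consumer hardware, using the llama-cpp-python bindings; the GGUF file name and settings are illustrative assumptions:

    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(
        model_path="./llama-3-8b-instruct.Q4_K_M.gguf",  # illustrative: a ~5 GB 4-bit quantized file
        n_ctx=2048,        # context window; larger needs more RAM
        n_gpu_layers=-1,   # offload all layers to the GPU if VRAM allows; set 0 for CPU-only
    )
    out = llm("In one sentence, what is a distilled language model?", max_tokens=64)
    print(out["choices"][0]["text"])

The 4-bit quantization is what lets a model in the 7-8B-parameter class fit in ordinary desktop memory, trading some fidelity for a far smaller footprint.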
(DIR) Post #AtmyxekPcvXO1dovzM by Wifiwits@infosec.exchange
2025-05-05T15:57:37Z
0 likes, 1 repeats
@necedema @futurebird @Wyatt_H_Knott @grammargirl As I read here recently: an organic, fair-trade, co-operatively owned, open-source oil rig… is still an oil rig.
(DIR) Post #Atn1tcGNDp4n1YXFVA by martinicat@mastodon.social
2025-05-05T16:50:17Z
0 likes, 0 repeats
@futurebird this sounds a lot harder than actually writing something🍸😹
(DIR) Post #AtnIYSyoExgvN7owBk by mzedp@mas.to
2025-05-05T13:56:54Z
0 likes, 0 repeats
@necedema @Wyatt_H_Knott @futurebird @grammargirl Have you validated your alleged tutor against a topic in which you are well versed? My experience has been that, for topics in which I am an expert, I immediately spot many errors in the outputs generated. Misunderstood concepts, mistakenly explained, but with utter confidence and a self-assured tone -- absolutely no doubt or uncertainty. So, if for topics in which I'm versed I can see it's not reliable, how could I rely on it for anything else?
(DIR) Post #AtnIYUERaJn9Fu8wCG by dpnash@c.im
2025-05-05T17:47:00Z
0 likes, 0 repeats
@mzedp @necedema @Wyatt_H_Knott @futurebird @grammargirl > Have you validated your alleged tutor against a topic in which you are well versed?
ChatGPT and related programs get things wrong in areas I know well, over and over again. They've told me about stars and planets that don't exist, how to change the engine oil in an electric car, how water won't freeze at 2 degrees above absolute zero because it's a vapor at that temperature, that a person born in the 700s was a major "7th century" figure ... over and over again, like that.
But worst of all? All delivered with a confidence that makes it very hard for people *who don't know the topic* to tell that there is a problem at all.
(DIR) Post #AtnIYV86FOJM2WBNwm by dpnash@c.im
2025-05-05T17:58:28Z
0 likes, 1 repeats
@mzedp @necedema @Wyatt_H_Knott @futurebird @grammargirl Oh, and while I'm on this topic, here's your periodic reminder of something else important:
People in my part of the world (the USA), and very likely those in similar or related cultures, have a flawed mental heuristic for quickly judging if someone or something is "intelligent": "do they create grammatically fluent language, on a wide range of topics, quickly and on demand"?
This heuristic is faulty -- it is very often badly wrong. Not going to have a long, drawn-out debate about this: it's faulty. The research is out there and not hard to find, if you really need it.
In the case of LLMs, this heuristic leads people to see intelligence where there isn't any. That's bad enough. But it also leads people to *fail*, or even *refuse*, to acknowledge intelligence where it does exist -- specifically, among people who don't talk or write very articulately.
In the specific case of the USA, this same heuristic is proving to be very dangerous indeed, with the federal government wanting to create official registries of autistic people, for example. The focus is overwhelmingly directed towards autistic people who can't or don't speak routinely, and it's *appalling*.
(DIR) Post #AtnIYbW4VB3FqbRc92 by mzedp@mas.to
2025-05-05T14:06:28Z
0 likes, 0 repeats
@necedema @Wyatt_H_Knott @futurebird @grammargirl Clearly, I'm not the only one that has experienced this. And trust me, I've tried them. I downloaded and ran Llama 1 as soon as it got leaked, and have consistently messed with various kinds of local LLMs. I stopped using them because they didn't offer me anything of value. https://mastodon.social/@LukaszOlejnik/114454277500230445
(DIR) Post #AtnKL77AQIT7haUTce by dpnash@c.im
2025-05-05T20:14:37Z
0 likes, 1 repeats
@mzedp @necedema @Wyatt_H_Knott @futurebird @grammargirl Representative example, which I did *today*, so the "you're using old tech!" excuse doesn't hold up.
I asked ChatGPT.com to calculate the mass of one curie (i.e., the amount producing a specific number of radioactive decays per second) of the commonly used radioactive isotope cobalt-60.
It produced some nicely formatted calculations that, in the end, appear to be correct. ChatGPT came up with 0.884 mg, the same as Wikipedia's 884 micrograms on its page for the curie unit.
It offered to do the same thing for another isotope. I chose cobalt-14.
This doesn't exist. And not because it's really unstable and decays fast. It literally can't exist. The atomic number of cobalt is 27, so all its isotopes, stable or otherwise, must have a higher mass number. Anything with a mass number of 14 *is not cobalt*.
I was mimicking a possible Gen Chem mixup: a student who confused carbon-14 (a well-known and scientifically important isotope) with cobalt-whatever. The sort of mistake people see (and make!) at that level all the time. Symbol C vs. Co. Very typical Gen Chem sort of confusion.
A chemistry teacher at any level would catch this and explain what happened. Wikipedia doesn't show cobalt-14 in its list of cobalt isotopes (it only lists ones that actually exist), so going there would also reveal the mistake.
ChatGPT? It just makes shit up. Invents a half-life (for an isotope that, just to remind you, *cannot exist*), and carries on like nothing strange has happened.
This is, quite literally, one of the worst possible responses to a request like this, and yet I see responses like this *all the freaking time*.
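For what it's worth, the 0.884 mg figure checks out. A minimal sketch of the arithmetic (activity A = λN, with λ = ln 2 / half-life), plus the existence check ChatGPT skipped; constants are rounded:

    import math

    CURIE = 3.7e10       # decays per second in one curie
    AVOGADRO = 6.022e23  # atoms per mole

    half_life_s = 5.27 * 365.25 * 24 * 3600    # Co-60 half-life, ~5.27 years, in seconds
    lam = math.log(2) / half_life_s            # decay constant lambda, 1/s
    atoms = CURIE / lam                        # N = A / lambda
    mass_mg = atoms / AVOGADRO * 59.93 * 1000  # Co-60 molar mass ~59.93 g/mol
    print(f"{mass_mg:.3f} mg")                 # -> 0.884 mg, matching Wikipedia

    # The sanity check ChatGPT skipped: a nuclide's mass number (protons + neutrons)
    # can never be less than its atomic number. Cobalt has Z = 27.
    def isotope_can_exist(atomic_number: int, mass_number: int) -> bool:
        return mass_number >= atomic_number

    print(isotope_can_exist(27, 60))  # True: cobalt-60 is a real nuclide
    print(isotope_can_exist(27, 14))  # False: "cobalt-14" cannot exist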
(DIR) Post #Atncy8CA8bT3WQa80W by anne_twain@theblower.au
2025-05-05T23:45:42Z
0 likes, 0 repeats
@futurebird @grammargirl I agree. There are people out there who "receive spiritual messages" in many bizarre ways. It's a phenomenon of human nature that existed way before LLMs came on the scene.
(DIR) Post #Atnf89avzNLSfP4jhI by Orb2069@mastodon.online
2025-05-05T21:19:27Z
0 likes, 0 repeats
@david_megginson @dpnash @mzedp @necedema @Wyatt_H_Knott @futurebird @grammargirl A model like the one you're talking about would most likely be some form of https://wikipedia.org/wiki/Convolutional_neural_network , and bear about as much resemblance to OpenAI's offerings as a giraffe does to a housecat.
(DIR) Post #Atnf8AayGiy7lo6HOS by dpnash@c.im
2025-05-05T23:01:13Z
0 likes, 1 repeats
@Orb2069 @david_megginson @mzedp @necedema @Wyatt_H_Knott@masto.host @futurebird @grammargirl My own take on valid uses of this stuff:
If by "AI" we mean something like "machine learning, just with a post-ChatGPT marketing change," then the answer is "oh, absolutely, assuming the ML part has been done competently and is appropriate for this type of data." And there are plenty of uses in medicine I can imagine for *certain kinds* of ML.
If by "AI" we mean "generative AI," either in general or in specific (e.g., an LLM like ChatGPT), then the answer is "hell no, absolutely not, please don't even bother asking" for most things, including everything to do with medicine, however tangential.
The one single good general use case for "generative" AI is "make something that looks like data the AI has seen before, without regard for whether it reflects anything real or accurate." Disregarding known issues* with how generative AI is built and deployed nowadays, it's fine for things like brainstorming when you get stuck writing, or seeing what different bits of text (including code) might look like, in general, in another language (or computer programming language, in the case of code). But it's absolutely terrible for any process where factual content or accuracy matters (e.g., online search, or actually writing the code you plan to use), and I put all medical uses in that category.
* Plagiarism, software license violations, massive energy demands, etc. That's an extra layer of concern that I have quarrels with, but even without these, bad factual accuracy is a dealbreaker for me in almost all scenarios I actually envision for the stuff.
(DIR) Post #AtnhnA7x76FA6FCYRU by notatempburner@mstdn.social
2025-05-06T00:11:16Z
0 likes, 0 repeats
@dpnash @mzedp @necedema @Wyatt_H_Knott @futurebird @grammargirl it seems to be an error exclusive to ChatGPT; Llama and Granite answered correctly.
(DIR) Post #AtnhnBdBWSowlOp0lM by dpnash@c.im
2025-05-06T00:31:02Z
0 likes, 0 repeats
@notatempburner @mzedp @necedema @Wyatt_H_Knott @futurebird @grammargirl I don't know what Google Gemini has under the hood for its LLM, but as of a couple days ago, Gemini made similar mistakes with other isotope names/numbers.
Hmmm… Yep. It bombs this one too.
These two genAI services (ChatGPT and Google) are *probably* the ones most people see most often. Not that it matters a whole lot, of course. Other LLMs will just make different mistakes with different facts.
(DIR) Post #AtnhnCw0fxTOo4dYkC by futurebird@sauropods.win
2025-05-06T00:39:45Z
0 likes, 0 repeats
@dpnash @notatempburner @mzedp @necedema @Wyatt_H_Knott @grammargirl I mean, the reason it fails is that it's all associative aggregation. This is a powerful tool in some ways but a useless one in others. I was horrified that teachers said LLMs could "improve their lesson plans," but I looked at the results. It was OK. So I tried it on a math lesson.
LMAO. It's so bad at math. It's one thing to say "this letter game could also have this other game as an alternate"; for a math problem, that doesn't work.
(DIR) Post #Atnj9lxjDx99lglSpk by LJ@zirk.us
2025-05-06T00:51:20Z
0 likes, 0 repeats
@david_megginson @dpnash @mzedp @necedema @Wyatt_H_Knott @futurebird @grammargirl I would definitely not be okay with a physician using generative AI to summarize medical records and/or to look for patterns. There would be no way to know if the output would be in any way accurate. Unless the doctor reviewed every piece of information that went into the AI. Which is no different than the doctor reviewing the records without AI.
(DIR) Post #Atnj9mxlVIlos5n0Wu by futurebird@sauropods.win
2025-05-06T00:55:01Z
0 likes, 0 repeats
@LJ @david_megginson @dpnash @mzedp @necedema @Wyatt_H_Knott @grammargirl Doctors already apply stereotypes enough as it is. I don't want a stereotyping machine, and I feel like that's what the current LLMs would do. For example, doctors make a whole raft of assumptions about me as a Black woman who lives in the South Bronx. Some correct, some based on "statistics," and some just bogus. They have a general idea of what I might need, often wrong in harmful ways. LLMs regress to the mean.
(DIR) Post #Atnj9u9ilFUdBCR9dY by LJ@zirk.us
2025-05-06T00:54:59Z
0 likes, 0 repeats
@david_megginson @dpnash @mzedp @necedema @Wyatt_H_Knott @futurebird @grammargirl I say this as a retired physical therapist who worked with complex patients for decades. They routinely came to me with years of medical records I had to review & synthesize. And there are nuances to narrative records that I would pick up because of my long experience that I seriously doubt any algorithm could.
(DIR) Post #AtnjMioGFzhbkrVk3s by futurebird@sauropods.win
2025-05-06T00:57:25Z
0 likes, 0 repeats
@LJ @david_megginson @dpnash @mzedp @necedema @Wyatt_H_Knott @grammargirl Or even better, maybe you look at a set of records, as I often have for my students, and I DON'T know what I'm seeing. I can't say "this student is struggling because of X and needs Y"; they don't fit any pattern, and the details don't work. LLMs kind of smooth that over and just shoehorn in an answer that sounds good. And some people do that too, but I don't like it. Just say "I don't know, really. Never seen this before."
(DIR) Post #Atnk6zdq0b4fHLhQdE by mzedp@mas.to
2025-05-06T01:05:46Z
0 likes, 0 repeats
@futurebird @LJ @david_megginson @dpnash @necedema @Wyatt_H_Knott @grammargirl The inability to ever say "I don't know" or "No, that's not right" should be a massive red flag, but everyone entranced with AI seems to be missing it.
(DIR) Post #AtnoLCK63TuMTsUiOm by Smoljaguar@spacey.space
2025-05-06T01:52:58Z
0 likes, 0 repeats
@futurebird @hosford42 @necedema @Wyatt_H_Knott @grammargirl If you're interested in more "ethically trained" LLMs, the Allen AI Institute has been doing really interesting work: https://allenai.org/