Post AwxTXNxpAz6VdJfOmu by dpnash@c.im
Post #AwvuoRcp7rJGMEHat6 by gerrymcgovern@mastodon.green
2025-08-07T11:50:15Z
0 likes, 0 repeats
Doctors Horrified After Google's Healthcare AI Makes Up a Body Part That Does Not Exist in Humans

"What you're talking about is super dangerous."

https://futurism.com/neoscope/google-healthcare-ai-makes-up-body-part

AI is an enormous pile of dangerous crap. It lies, lies, lies. Such an incredibly poor technology that's being sold as a god. When will enough people realize that AI is super-average and super-dodgy? The only thing it's really good at is destroying society.
Post #AwvuoSmQpcabwJmmVE by shironeko@fedi.tesaguri.club
2025-08-07T17:19:42.827125Z
0 likes, 0 repeats
@gerrymcgovern only the PMC types think that LLMs are great, because they lie less than them.
Post #AwxTXNxpAz6VdJfOmu by dpnash@c.im
2025-08-07T16:13:18Z
0 likes, 0 repeats
@gerrymcgovern One of the first prompts I gave ChatGPT back in 2022 came from my main hobby (amateur astronomy). I asked it to tell me something about the extrasolar planets orbiting the star [[fake star ID]].

[[fake star ID]] was something that anyone who knew how to use Wikipedia half-intelligently could verify was fake within a few minutes. I wasn't even trying to be deceptive; I genuinely wanted to see how ChatGPT would handle a request for information that I knew couldn't be in its training data.

The torrents of bullshit it produced -- paragraph after paragraph of totally confabulated data about these nonexistent planets orbiting a nonexistent star -- told me everything I needed to know about ChatGPT and its buddies, and I've never been tempted to use them for anything serious since.
Post #AwxTXP27CW88wugL7A by dpnash@c.im
2025-08-07T20:34:20Z
0 likes, 1 repeat
@gerrymcgovern Ah, and since this seems to be making the rounds: it's time for my occasional reminder of one especially unpleasant aspect of the LLM hype storm.

LLMs take advantage of a faulty heuristic many (probably most, but I don't have exact stats) people have about human intelligence: namely, that someone or something that produces fluent and grammatically coherent text about a wide range of topics, on demand, is "intelligent".

This is a very compelling heuristic and also very wrong. In the case of LLMs, it's wrong in the way we're all talking about here: it leads people to see thinking or reasoning in the statistical word salad these beasties produce. But it's even more of a problem when people invert the heuristic and turn it into "someone or something is *not* producing fluent text about a wide range of topics, therefore they are *not* intelligent". This mirror-world version causes enormous social harm to people who for whatever reason (autism, selective mutism, etc.) can't speak the way people expect they would be able to "if they were smart".