Post AxS7QKD4YoMUs6miZs by tbortels@infosec.exchange
 (DIR) Post #AxRWL0XWZNFi77x6Rc by tbortels@infosec.exchange
       2025-08-22T15:20:17Z
       
       0 likes, 0 repeats
       
Am I the only one to be constantly, continually disappointed by AI? When I actually try basically anything non-trivial, it quickly starts to sound like a n00b trying to BS their way through a technical discussion - just making things up that seem reasonable and hoping nobody checks their work. Until/unless they can solve for the hallucination issue, I don't see how this can be useful except in niche cases where reality doesn't matter (like a writing prompt) or the output can be rigorously automatically checked.
       
 (DIR) Post #AxRWL1XYqisNDWye8m by tomjennings@tldr.nettime.org
       2025-08-22T23:16:15Z
       
       0 likes, 0 repeats
       
@tbortels I posted this earlier this week and got not one comment (complaint, lol) etc. It's a master of clarity on LLMs. It's not for vs against. It's quite brilliant, in fact the only thing I've ever read like it on so-called AIs. https://www.programmablemutter.com/p/large-language-models-are-cultural
       
 (DIR) Post #AxRWwiBNGmjWBTZQW0 by tomjennings@tldr.nettime.org
       2025-08-22T23:23:05Z
       
       0 likes, 0 repeats
       
@tbortels Language is itself a closed system, self-contained. LLMs are very good at manipulating those symbols. The problem is that too many people think this is what humans do. It's not. We tie language to bodily, grounded, lived experience. The difference is irretrievably immense. LLMs can't possibly work as sources of information about the world because they have no world, only language symbols. To me, the lesson of LLMs is not that machines are intelligent; the lesson is that intelligence is not required to wield language. Chess playing was once thought of as a claim to intelligence; we now know it's a big combinatorial math problem and humans are good at fudging that, but machines are better. But people got chess and class and education and race all tangled up... just like so many do now with LLMs. And people like Altman and Zuck are shits and parasites, scam artists exploiting us all.
       
 (DIR) Post #AxRZjxUvJfU7mWKCi8 by mike805@noc.social
       2025-08-22T23:54:18Z
       
       0 likes, 0 repeats
       
@tomjennings @tbortels I don't think anyone has finished reading it yet. Interesting, but long, and it assumes an academic background in language.
       
 (DIR) Post #AxS4gnBuHEydWgOry4 by tomjennings@tldr.nettime.org
       2025-08-23T05:41:10Z
       
       0 likes, 0 repeats
       
@mike805 @tbortels Really! I don't have an academic background, and it doesn't use academic language at all. It is, however, rigorous. In fact, as I was reading it I mildly marvelled at how it *didn't* hide behind lofty language. But it's def not the language of the tech industry, either. It's analytical and requires a certain way of reading I find more associated with the sciences. I'm pretty sensitive to snobbiness, touchy to maybe reactionary in fact. I didn't get a whiff, at all. Hmmm. Gonna go back and reread.
       
 (DIR) Post #AxS7QKD4YoMUs6miZs by tbortels@infosec.exchange
       2025-08-23T06:11:46Z
       
       0 likes, 0 repeats
       
@tomjennings @mike805 It was certainly college-level - and verbose. Maybe necessarily so, given the tricky nature of cognition and how easily we see patterns in things that don't really have them - faces in the wallpaper, animals in the clouds, and what appears to be actual thought in strings of words put together based on probability rather than meaning. But my simple point stands: aside from niche cases, if they can't solve for hallucinations, the technology is self-limiting. Natural selection will teach humanity (painfully) not to trust the LLM - some faster, some not so much. Assuming the upcoming crash doesn't sour us on the technology sooner.
       
 (DIR) Post #AxS7SU84eRz4DPTcOG by mike805@noc.social
       2025-08-23T06:10:05Z
       
       0 likes, 0 repeats
       
       @tomjennings @tbortels Well it's definitely not from a tech or computer science background. It's treating LLMs as a literary and language phenomenon. Has four interesting perspectives.
       
 (DIR) Post #AxSjmmhH7h37qt33my by chocobo13@mastodon.social
       2025-08-23T13:21:38Z
       
       0 likes, 0 repeats
       
@tomjennings I find one of the interesting common themes of new tech purporting to mimic intelligence is that it keeps showing us how much our anthropocentrism falls apart under scrutiny, and that we're not as unique among animals as we like to pretend @tbortels
       
 (DIR) Post #AxSyGGIlX08RQRXBuS by tomjennings@tldr.nettime.org
       2025-08-23T16:03:51Z
       
       0 likes, 0 repeats
       
@tbortels @mike805 As most point out, they're not hallucinations, just output that is grammatically and syntactically correct but not anchored to physical reality. "Fixes" seem to be human-crafted rules directing generation of text, case by case. This "problem" isn't a problem; it's inherent in the usage. So yeah, it will never be "fixed". Because it's not actually broken! For the offered use cases, LLMs are 100% scams. Outright deceit.