Post AzOd3QLr1RsoPHgiwa by ekis@mastodon.social
 (DIR) Post #AzOd3QLr1RsoPHgiwa by ekis@mastodon.social
       2025-10-20T09:18:00Z
       
       1 like, 0 repeats
       
       "Can AI Avoid the Enshittification Trap?"No."recently vacationed in Italy...ran my itinerary past GPT5 for sightseeing suggestions & restaurant recommendations"Embarrassing."When I got home, I asked the model how it chose that restaurant... The answer was complex and impressive"No, the answer was a hallucinationAn entire article based on hallucination by the editor-at-large of Wired by Steven LevyReporting on LLMs with a fundamental misunderstanding of how they work
       
 (DIR) Post #AzOd3aVRGfUXtuFWGu by ekis@mastodon.social
       2025-10-20T09:19:14Z
       
       0 likes, 0 repeats
       
       I'm not going to get into all of the details, but fundamentally LLMs do not index information with its source; that's simply not how they work.
       
       A RAG can be used to pre-gather facts, which are then fed in along with your prompt. That's why the "sources" on some responses are sometimes totally and completely wrong.
       
       When people say it's more like an autocomplete, in this sense they are right. It's a high-dimensional tensor, and a sophisticated one, but it cannot and does not know where its training data came from.
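       
       A minimal sketch of that RAG idea, assuming a toy keyword retriever; the helper names (retrieve, build_prompt) and the example corpus are illustrative, not any particular library's API:
       
       # Retrieval-augmented generation in miniature: gather documents first,
       # then paste them (with their sources) into the prompt. Any "source" the
       # model can cite has to arrive this way; it is not looked up from training data.
       
       def retrieve(query: str, corpus: list[dict], top_k: int = 3) -> list[dict]:
           """Toy retriever: rank documents by naive keyword overlap with the query."""
           def overlap(doc):
               return len(set(query.lower().split()) & set(doc["text"].lower().split()))
           return sorted(corpus, key=overlap, reverse=True)[:top_k]
       
       def build_prompt(query: str, docs: list[dict]) -> str:
           """Prepend the retrieved passages, tagged with their sources, to the user's question."""
           context = "\n".join(f'[{d["source"]}] {d["text"]}' for d in docs)
           return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."
       
       corpus = [
           {"source": "guide.example/rome", "text": "Trattoria Da Enzo is a popular restaurant in Trastevere."},
           {"source": "guide.example/florence", "text": "The Uffizi Gallery holds Renaissance masterpieces."},
       ]
       
       prompt = build_prompt("Which restaurant should I try in Rome?",
                             retrieve("restaurant Rome", corpus))
       print(prompt)  # this assembled prompt is the only place the "sources" exist
       
       If a source ever shows up in an answer without having been injected like this, it was generated the same way as the rest of the text.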