Post AYSeLUglY3hgQLTmjo by knbrindle@creativewriting.social
(DIR) Post #AYJy8ajLFUyzGwnsJs by baldur@toot.cafe
2023-08-02T12:47:15Z
4 likes, 7 repeats
“Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’”

More accurately, AI researchers have always said that this isn’t fixable, but y’all were too obsessed with listening to con artists to pay attention. Now the con is wearing thin.

https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/
(DIR) Post #AYK9lWHqnnlIHTi46i by feld@bikeshed.party
2023-08-02T15:17:30.709467Z
0 likes, 0 repeats
https://blog.myscale.com/2023/07/17/teach-your-llm-vector-sql/
(DIR) Post #AYKE4a878cRnWCV5l2 by sj_zero@social.fbxl.net
2023-08-02T16:06:04.094817Z
1 likes, 1 repeats
I have been saying for a while that ChatGPT is a "verisimilitude engine": it has no interest in producing output that is true, only in producing output that appears to be true. Sometimes the correct answer is the most true-looking answer it can come up with; often, though, an incorrect answer is the most true-looking answer it can come up with.

A lot of people who claim it will replace software developers haven't been in the situation where it gives you provably wrong information, so you correct it, and it gives you provably wrong information again, so you correct it again, and it gives you provably wrong information yet again.

I also had a fun situation where I asked it to create a review of Beowulf in the style of Beowulf, and it created something that rhymed, which Beowulf does not. I pointed out that Beowulf does not rhyme, and it agreed that Beowulf does not rhyme. So I asked it to create a review of Beowulf in the style of Beowulf that does not rhyme, and it produced a review of Beowulf that rhymed.
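The "most true-looking answer" point can be sketched with a toy decoder. This is not a real LLM — the prompt and probability table below are invented for illustration — but it shows how always selecting the most probable (most plausible-looking) continuation can systematically prefer a famous wrong answer over a correct one:

```python
# Toy illustration (not a real model): a next-token table that scores
# continuations purely by plausibility, with no notion of truth.
# The probabilities here are invented for demonstration.
toy_next_token_probs = {
    "The capital of Australia is": {
        "Sydney": 0.55,    # more famous, so more "plausible-looking"
        "Canberra": 0.35,  # the factually correct answer
        "Melbourne": 0.10,
    },
}

def greedy_continuation(prompt: str) -> str:
    """Pick the highest-probability continuation: plausibility, not truth."""
    probs = toy_next_token_probs[prompt]
    return max(probs, key=probs.get)

print(greedy_continuation("The capital of Australia is"))  # -> Sydney
```

No amount of correcting the table's "style" changes the underlying rule: the decoder optimizes for what looks most likely, which is exactly why the wrong-but-plausible answer keeps coming back.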
(DIR) Post #AYKMvJBzdnEVy8y4bQ by adamasnemesis@social.adamasnemesis.com
2023-08-02T17:44:13Z
0 likes, 0 repeats
@baldur Yeah. The "hallucinations" emanate from the fundamentals of how LLMs work; they could perhaps be reduced substantially, but it's not realistic to believe they'll ever be eliminated. An AI without this problem would require a different paradigm from the ground up.
(DIR) Post #AYL71n2u9gYEBdkHZY by pthenq1@mastodon.la
2023-08-03T02:21:53Z
0 likes, 0 repeats
@baldur Nope... It is mathematically proven...
(DIR) Post #AYSTC08MgMOTYdewzY by thomasfuchs@hachyderm.io
2023-08-02T13:18:36Z
2 likes, 0 repeats
@baldur if “starting to” means “yelling that it's a scam since, like, two years ago”, then yes
(DIR) Post #AYSeLUglY3hgQLTmjo by knbrindle@creativewriting.social
2023-08-02T16:56:04Z
0 likes, 1 repeats
@baldur exactly this (quote from the article):

“This isn’t fixable,” said Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “It’s inherent in the mismatch between the technology and the proposed use cases.”

It’s like trying to solve the problem that cars don’t provide any nutritional value. The idea of using LLMs to provide detailed factual information is just not tenable. That’s not what they are.