Post #AUPxH8JkabXiylkL4q by eaton@phire.place
2023-04-07T18:19:26Z
0 likes, 0 repeats
A large language model may tell you things that are not true, but it will never, ever lie to you. Because a language model has no conception of truth or falsehood, only probabilities of particular words appearing together. At scale that produces incredible things, but there is no inner model of true-ness or false-ness with which it can evaluate its own outputs. Only the rate at which we accept particular outputs.
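To make the "only probabilities" point concrete, here is a minimal sketch of the final step of text generation: a softmax over per-token scores, then a weighted random draw. The scores below are invented purely for illustration; the thing to notice is that no step in the loop represents truth or falsehood, only relative likelihood.

    import math
    import random

    def softmax(logits):
        # Turn raw scores into a probability distribution over tokens.
        m = max(logits.values())
        exps = {tok: math.exp(score - m) for tok, score in logits.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    # Hypothetical scores for the next token after "The capital of
    # France is". These numbers are made up for illustration; there is
    # no field anywhere for "is this true", only bigger and smaller scores.
    logits = {"Paris": 9.2, "Lyon": 4.1, "Berlin": 3.7, "purple": 0.3}

    probs = softmax(logits)
    token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs)
    print("sampled:", token)

In this sketch "Paris" comes out most of the time, but only because it was assigned the biggest score, not because the procedure checked the answer against the world.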
Post #AUPxH9AZQDnHcaSWPI by eaton@phire.place
2023-04-07T18:21:11Z
0 likes, 0 repeats
In a sense, if an LLM deceives us we only have ourselves to blame: humanity is shouting into a well and listening to the echoes, and learning that differently-shaped wells make different kinds of echoes. That isn't bad or good, but it is not "asking the well for advice."

On the other hand, these tools are being put in the hands of people who don't actually understand the distinction, and are just being told that "Smart Wells Can Answer Your Questions."
Post #AUPxH9mV9C8RWECoWu by vortex_egg@ioc.exchange
2023-04-07T18:27:10Z
0 likes, 0 repeats
@eaton We really need to reframe the discourse around LLMs such that we stop saying that LLMs hallucinate and start saying that LLMs cause the people who use them to hallucinate.
Post #AUPxHAT2b2A9eA6mps by float13@hackers.town
2023-04-07T18:48:42Z
1 like, 0 repeats
@vortex_egg @eaton ChatGPT is just 1000 monkeys with typewriters in a trenchcoat