Subj : Re: Writing with LLMs
To   : Bob Worm
From : tenser
Date : Sat Sep 20 2025 01:44:58

On 18 Sep 2025 at 09:08p, Bob Worm pondered and said...

 BW> > alternated confidently between asserting that the observed
 BW> > behavior was intended or a bug.
 BW>
 BW> I heard that this is a result of the way the models are trained.
 BW> Essentially the model gets no "points" for saying that it doesn't know
 BW> the answer to something, whereas a confidently stated answer with some
 BW> grain of truth in it (something you'd call bluffing if a person did it)
 BW> would score at least some points. So you get what's incentivised.

Yup. That sounds about right. But the LLM isn't fishing for points
on a true/false quiz; as a tool, that kind of behavior stinks.

--- Mystic BBS v1.12 A48 (Linux/64)
 * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)