Subj : Re: ChatGPT Writing
To   : jimmylogan
From : Bob Worm
Date : Wed Dec 03 2025 17:47:11

Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57:33

Hi, jimmylogan.

 > If you ask me a question and I give you an incorrect answer, but I
 > believe that it is true, am I hallucinating? Or am I mistaken? Or is
 > my information outdated?

If a human is unsure, they'd say "I'm not sure", "it's something
like..." or "I think it's..." - possibly "I don't know". Our memories
aren't perfect, but it's unusual for us to assert with 100% confidence
that something is correct when it's not.

Apparently today's AI more or less always asserts things (correct and
incorrect alike) as confident fact, because during the training phase
confident answers get scored higher than wishy-washy ones. Show me the
incentives and I'll show you the outcome.

A colleague of mine asked ChatGPT to answer some technical questions so
he could fill in the basic parts of an RFI document before taking it to
the technical teams for completion. He asked it what OS ran on a
particular piece of kit - there are actually two correct answers, but
it offered neither and instead confidently asserted a third, totally
incorrect, option.

It's not about getting outdated code/config (even a human could do that
if not "in the know") - but when it just makes up syntax or entire
non-existent libraries, that's a different story. Just look at all the
recent scandals around people filing court documents prepared by
ChatGPT that cite legal precedents which were irrelevant to the point,
didn't contain what ChatGPT said they did, or didn't exist at all.

BobW

---
þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net