Subj : Re: ChatGPT Writing
To   : Bob Worm
From : jimmylogan
Date : Wed Dec 03 2025 20:58:51

-=> Bob Worm wrote to jimmylogan <=-

 > If you ask me a question and I give you an incorrect answer, but I
 > believe that it is true, am I hallucinating? Or am I mistaken? Or is
 > my information outdated?

BW> If a human is unsure they would say "I'm not sure", or "it's something
BW> like..." or "I think it's..." - possibly "I don't know".

BW> Our memories aren't perfect, but it's unusual for us to assert with
BW> 100% confidence that something is correct when it's not. Apparently
BW> today's AI more-or-less always confidently asserts that (correct and
BW> incorrect) things are fact because, during the training phase,
BW> confident answers get scored higher than wishy-washy ones. Show me
BW> the incentives and I'll show you the outcome.

LOL - yeah, I can see that...

BW> A colleague of mine asked ChatGPT to answer some technical questions
BW> so he could fill in basic parts of an RFI document before taking it
BW> to the technical teams for completion. He asked it what OS ran on a
BW> particular piece of kit - there are actually two correct options for
BW> that; it offered neither and instead confidently asserted that it was
BW> a third, totally incorrect, option. It's not about getting outdated
BW> code / config (even a human could do that if not "in the know") - but
BW> when it just makes up syntax or entire non-existent libraries, that's
BW> a different story.

But that 'third option' - you're saying it didn't 'find' that somewhere
in a dataset, and just made it up?

BW> Just look at all the recent scandals around people filing court cases
BW> prepared by ChatGPT which refer to legal precedents where either the
BW> case was irrelevant to the point, didn't contain what ChatGPT said it
BW> did, or didn't exist at all.

I've not seen/read those. Assuming you have some links? :-)

... -- FOR SYSOP USE ONLY - Do not write below this line!!

--- MultiMail/Mac v0.52
 þ Synchronet þ Digital Distortion: digitaldistortionbbs.com