Subj : Re: ChatGPT Writing
To   : Nightfox
From : jimmylogan
Date : Wed Dec 03 2025 09:02:12

-=> Nightfox wrote to jimmylogan <=-

 Ni> Re: Re: ChatGPT Writing
 Ni> By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57 am

 ji> This is the first time I've seen the objective definition. I was told,
 ji> paraphrasing here, 'AI will hallucinate - if it doesn't know the answer
 ji> it will make something up and profess that it is true.'

 ji> If you ask me a question and I give you an incorrect answer, but I
 ji> believe that it is true, am I hallucinating? Or am I mistaken? Or is my
 ji> information outdated?

 ji> You see what I mean? Lots of words, but hard to nail it down. :-)

 Ni> It sounds like you might be thinking too hard about it. For AI, the
 Ni> definition of "hallucinating" is simply what you said, making something
 Ni> up and professing that it's true. That's the definition of
 Ni> hallucinating that we have for our current AI systems; it's not about
 Ni> us. :)

But again, is it 'making something up' if it is just mistaken?

For example - I just asked about a particular code for homebrew on an older
machine. I said 'gen2', and that means the 2nd generation of MacBook Air
power adapter. It's just slang that *I* use, but I've used it with ChatGPT
many times. BUT - the last code I was talking about was for an M2, so it
gave me an 'M2' answer and not an Intel one. I had to modify my request,
and it then gave me the specifics I was looking for. So is that
hallucinating?

And please don't misunderstand... I'm not beating a dead horse here - at
least not on purpose. I guess I just don't see a 'problem' inherent with
incorrect data, since it's a tool and not a be-all, end-all thing.


... Basic programmers never die, they gosub and don't return

--- MultiMail/Mac v0.52
 þ Synchronet þ Digital Distortion: digitaldistortionbbs.com