Subj : Re: ChatGPT Writing
To   : jimmylogan
From : Nightfox
Date : Wed Dec 03 2025 10:03:52

Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Wed Dec 03 2025 09:02 am

ji> But again, is it 'making something up' if it is just mistaken?

In the case of AI, yes.

ji> For example - I just asked about a particular code for homebrew on an
ji> older machine. I said 'gen2' and that means the 2nd generation of MacBook
ji> Air power adapter. It's just a slang that *I* use, but I've used it with
ji> ChatGPT many times. BUT - the last code I was talking about was for an M2.
ji> So it gave me an 'M2' answer and not Intel, so I had to modify my request.
ji> It then gave me the specifics that I was looking for.
ji>
ji> So that's hallucinating?

Yes, in the case of AI.

ji> And please don't misunderstand... I'm not beating a dead horse here - at
ji> least not on purpose. I guess I don't see a 'problem' inherent with
ji> incorrect data, since it's just a tool and not a be all - end all thing.

You don't see a problem with incorrect data?  I've heard of people who are
looking for work who are using AI tools to help update their resume, as well
as tailor their resume to specific jobs.  I've also heard of cases where the
AI tools will say the person has certain skills when they don't.  So you
really need to be careful to review the output of AI tools so you can
correct things.  Sometimes people share AI-generated content without being
careful to check and correct it.

So yes, it's a problem.  People are using AI tools to generate content, and
sometimes the content it generates is wrong.  And whether or not it's
"simply mistaken", "hallucination" is the term given to AI doing that.  It's
as simple as that.  I'm surprised you don't seem to see the issue with it.

Nightfox

---
þ Synchronet þ Digital Distortion: digitaldistortionbbs.com