Subj : Re: ChatGPT Writing
To   : Nightfox
From : jimmylogan
Date : Wed Dec 03 2025 20:58:51

-=> Nightfox wrote to jimmylogan <=-

 Ni> Re: Re: ChatGPT Writing
 Ni> By: jimmylogan to Nightfox on Wed Dec 03 2025 09:02 am

 ji> But again, is it 'making something up' if it is just mistaken?

 Ni> In the case of AI, yes.

Gonna disagree with you there... If Wikipedia has some info that is wrong
and I quote it, I'm not making it up. If 'it' pulls from the same source,
it's not making it up either. To say it's making up incorrect info is akin
to saying it should be accurate 100% of the time, even with bad input.

 ji> For example - I just asked about a particular code for homebrew on an
 ji> older machine. I said 'gen2' and that means the 2nd generation of MacBook
 ji> Air power adapter. It's just a slang that *I* use, but I've used it with
 ji> ChatGPT many times. BUT - the last code I was talking about was for an M2.
 ji> So it gave me an 'M2' answer and not Intel, so I had to modify my request.
 ji> It then gave me the specifics that I was looking for.

 ji> So that's hallucinating?

 Ni> Yes, in the case of AI.

Again, gonna disagree. From what you and others are saying, if I ask what
2 + 2 is and it doesn't know, it will answer 17.648 if it wants to. Yes,
that's an extreme example, but I think it conveys my point.

In my case, today, I had been talking about Homebrew on an M1 Mac
yesterday, and today I was working with an Intel machine. I said 'gen2'
assuming it would remember that's what I call the older Intel machines,
but it went by the last machine we worked on yesterday.

Partial answer --> but on an M1, Homebrew usually uses group admin, so
it's almost always safe.

When I corrected "intel, not silicone" --> On Intel Macs, Homebrew lives in:

(More on those install paths in the note at the bottom.)

 ji> And please don't misunderstand... I'm not beating a dead horse here - at
 ji> least not on purpose. I guess I don't see a 'problem' inherent with
 ji> incorrect data, since it's just a tool and not a be all - end all thing.

 Ni> You don't see a problem with incorrect data?

Not to the point I'm not gonna use the tool. :-) I expect a spreadsheet to
perform the calculations I program, but if I make a mistake in the
programming, I expect it to fail. With LLMs, the dataset is so HUGE that
there are a TON of variables! So there's potential for errors to be made.
But if I give it a list of numbers to add up, I've yet to see any kind of
mistake. :-) Also - the 'fine print' always says that the data might not
be correct, so be careful.

 Ni> I've heard of people who are looking for work who are using AI tools
 Ni> to help update their resume, as well as tailor their resume to
 Ni> specific jobs. I've heard of cases where the AI tools will say the
 Ni> person has certain skills when they don't... So you really need to be
 Ni> careful to review the output of AI tools so you can correct things.
 Ni> Sometimes people might share AI-generated content without being
 Ni> careful to check and correct things.

I'd like to see some data on that... Anecdotal 'evidence' is not always
scientific proof. :-)

 Ni> So yes, it's a problem. People are using AI tools to generate content,
 Ni> and sometimes the content it generates is wrong. And whether or not
 Ni> it's "simply mistaken", "hallucination" is the definition given to AI
 Ni> doing that. It's as simple as that. I'm surprised you don't seem to
 Ni> see the issue with it.

If that's the definition, then okay - a 'mistake' is technically a
hallucination. Again, that won't prevent me from using it as the tool it
is designed to be. I will also not take medical or legal advice from
it. :-)
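PS on those Homebrew paths, since ChatGPT's answer got cut off above. This
is just from what I know of Homebrew's documented defaults, not from the
chat: on Apple Silicon (M1/M2) Macs it installs under /opt/homebrew, and
on Intel Macs it installs under /usr/local. Either way, you can ask
Homebrew itself where it lives:

  brew --prefix

That prints the active install prefix, which is a quick way to tell which
setup a given Mac has.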
... WARNING!! Do NOT reuse tagline. Please dispose of it properly after use.
--- MultiMail/Mac v0.52
 þ Synchronet þ Digital Distortion: digitaldistortionbbs.com