[HN Gopher] "ChatGPT said this" Is Lazy
___________________________________________________________________
"ChatGPT said this" Is Lazy
Author : ragswag
Score : 38 points
Date : 2025-10-24 15:49 UTC (7 hours ago)
(HTM) web link (terriblesoftware.org)
(TXT) w3m dump (terriblesoftware.org)
| pavel_lishin wrote:
| I've come down pretty hard on friends who, when I ask for advice
| about something, come back with a ChatGPT snippet (mostly
| D&D-related, not work-related).
|
| I know ChatGPT exists. I could have fucking copied-and-pasted my
| question myself. I'm not asking you to be the interface between
| me and it. I'm asking _you_, what _you_ think, what _your_
| thoughts and opinions are.
| einsteinx2 wrote:
| I've noticed this trend in comments across the internet. Someone
| will ask or say something, then someone else will reply with "I
| asked ChatGPT and it says..." or "According to AI..."
|
| ChatGPT is free and available to everyone, and so are a dozen
| other LLMs. If the person making the comment wanted to know what
| ChatGPT had to say, they could just ask it themselves. I guess
| people feel like they're being helpful, but I just don't get it.
|
| Though with that said, I'm happy when they at least say it's from
| an LLM. At least then I know I can ignore it. Worse is replying
| as if it's their own answer when it's really just copy-pasted
| from an LLM. Those replies are more insidious.
| minimaxir wrote:
| The irony is that the disclosure of "I asked ChatGPT and it
| says..." is done as a courtesy to let the reader be informed.
| Given the increasing backlash against that disclosure, people
| will just _stop disclosing_, which is worse for everyone.
|
| The only workaround is to just take the text as-is and call it
| out when it's wrong/bad, AI-generated or otherwise, as we did
| before 2023.
| einsteinx2 wrote:
| That's true. Unfortunately the ideal takeaway from that
| sentiment _should_ be "don't reply with copy pasted LLM
| answers", but I know that what you're saying will happen
| instead.
| StrandedKitty wrote:
| I think it's fine to not disclose it. Like, don't you find the
| "Sent from my iPhone" line that iPhones automatically add to
| emails annoying? Technicalities like that don't bring
| anything to the conversation.
|
| I think typically, the reason people are disclosing their
| usage of LLMs is that they want to offload responsibility. To me
| it's important to see them taking responsibility for their
| words. You wouldn't blame Google for bad search results,
| would you? You can only blame the entity that you can
| actually influence.
| Leherenn wrote:
| Isn't it the modern equivalent of "let me Google that for you"?
|
| My experience is that the vast majority of people do 0 research
| (AI assisted or not) before asking questions online. Questions
| that could have usually been answered in a few seconds if they
| had tried.
|
| If someone prefaces a question by saying they've done their
| research but would like validation, then yes it's in incredibly
| poor taste.
| einsteinx2 wrote:
| > Isn't it the modern equivalent of "let me Google that for
| you"?
|
| When you put it that way I guess it kind of is.
|
| > If someone prefaces a question by saying they've done their
| research but would like validation, then yes it's in
| incredibly poor taste.
|
| 100% agree with you there
| nitwit005 wrote:
| There's seemingly a difference in motive. The people sharing
| AI responses seem to be fascinated by AI generally and just
| want to share the response.
|
| The "let me Google that for you" was more trying to get
| people to look up trivial things on their own, rather than
| query some forum repeatedly.
| thousand_nights wrote:
| exactly, the "i asked chatgpt" people give off 'im helping'
| vibes but in reality they are just annoying and clogging up
| the internet with spam that nobody asked for
|
| they're more clueless than condescending
| noir_lord wrote:
| To modify a Hitchism:
|
| > What can be asserted without evidence can also be dismissed
| without evidence.
|
| Becomes
|
| > That which can be asserted without thought can be dismissed
| without thought.
|
| Since no current AI thinks but humans do, I'm just going to
| dismiss anything an AI says out of hand. You're pushing the
| cost of parsing what it said onto me and off of yourself, and
| nah, I ain't accepting that.
| globular-toast wrote:
| It must be the randomness built into LLMs that makes people
| think it's something worth sharing. I guess it's no different
| from sharing a cool Minecraft map with your friends or
| something. The difference is Minecraft is fun, reading LLM
| content is not.
| uberman wrote:
| This is an honest question. Did you try pasting your PR and the
| ChatGPT feedback into Claude and asking it for an analysis of the
| code and feedback?
| verdverm wrote:
| Careful with this idea, I had someone take a thread we were
| engaged in and feed it to an LLM, asking it to confirm his
| feelings about the conversation, only to post it back to the
| group thread. It was used to attack me personally in a public
| space.
|
| Fortunately:
|
| 1. The person was transparent about it, even posting a link to
| the chat session
|
| 2. They had to use a follow-up prompt to really engage the
| sycophancy
|
| 3. The forum admins stepped in to speak to this individual even
| before I was aware of it
|
| I actually did what you suggested, fed everything back into
| another LLM, but did so with various prompts to test things
| out. The responses were... interesting; the positive prompt
| did return something quite good. A (paraphrased) quote from it:
|
| "LLMs are a powerful rhetorical tool. Bringing one to a online
| discussion is like bringing a gun to a knife fight."
|
| That being said, how you prompt will get you wildly different
| responses from the same (other) inputs. I was able to get it to
| play sycophant to my (not actually) hurt feelings.
| pavel_lishin wrote:
| Does that particularly matter in the context of this post?
| Either way, it sounds like OP was handed homework by the
| responder, and farming _that_ out to yet another LLM seems kind
| of pointless, when OP could just ask the LLM for its opinion
| directly.
| uberman wrote:
| While LLM code feedback might be wordy and dubious, I have
| personally found that asking Claude to review a PR and the
| related feedback does provide some value. From my perspective
| anyway, Claude seems able to cut through the BS and say whether
| a recommendation is worth the squeeze, or in what contexts the
| feedback has merit versus being merely pedantic. Of course,
| your mileage may vary, as they say.
| pavel_lishin wrote:
| Sure. But again, that's not what OP's post is about.
| blitzar wrote:
| "Google said this" ... "Wikipedia said this" ... "Encyclopedia
| Britannica said this"
| spot5010 wrote:
| The scenario the author describes is bound to happen more and
| more frequently, and IMO the way to address it is by evolving the
| culture and best practices for code reviews.
|
| A simple solution would be to mandate that while posting
| conversations with AI in PR comments is fine, all actions and
| suggested changes should be human-generated.
|
| The human-generated actions can't be a lazy "Please look at the
| AI suggestion and incorporate as appropriate" or "What do you
| think about this AI suggestion?".
|
| Acceptable comments could be:
|
| - I agree with the AI for xyz reasons, please fix.
|
| - I thought about the AI's suggestions, and here are the pros
| and cons. Based on that, I feel we should make xyz changes for
| abc reasons.
|
| If these best practices are documented, and the reviewer does not
| follow them, the PR author can simply link to the best practices
| and kindly ask the reviewer to re-review.
| globular-toast wrote:
| It's kinda hilarious to watch people make themselves redundant.
| Like you're essentially saying "you don't need me, you could have
| just asked ChatGPT for a review".
|
| I wrote before about just sending me the prompt[0], but if your
| prompt is literally _my code_ then I don't need you at all.
|
| [0] https://blog.gpkb.org/posts/just-send-me-the-prompt/
___________________________________________________________________
(page generated 2025-10-24 23:00 UTC)