[HN Gopher] Claude finds contradictions in my thinking
___________________________________________________________________
Claude finds contradictions in my thinking
Author : speckx
Score : 43 points
Date : 2025-07-29 14:46 UTC (8 hours ago)
(HTM) web link (angadh.com)
(TXT) w3m dump (angadh.com)
| Lienetic wrote:
| Love the idea! It's hard to get an "unbiased" outside
| perspective, especially on more personal, inner thoughts. Will
| definitely try this out, thanks for sharing.
| bwestergard wrote:
| The difficulty is that these models will reflect the aggregate
| worldview of people on the web before 2022 or so.
| wongarsu wrote:
| If only that were true. ChatGPT has gotten a bit more subtle
| since the early days when it was allowed to criticize certain
| politicians but not others, but so-called "safety training"
| still seems to impart plenty of additional bias. Some others
| like Grok appear to be less biased, until they suddenly turn
| into mecha-hitler after a slight prompt tweak.
| AIPedant wrote:
| But this is a completely biased perspective! Look at this
| sycophantic crap: "The most interesting pattern
| is that your core tensions haven't resolved - they've become
| more sophisticated. You're still working through fundamental
| questions about individual agency vs. systems, risk-taking vs.
| institutional engagement, and autonomy vs. collaboration. But
| your framework for thinking about these tensions has become
| richer and more nuanced. This suggests someone whose
| intellectual development is genuinely evolutionary rather than
| simply accumulative - you're not just learning more facts, but
| developing better frameworks for holding contradictions
| productively."
|
| It seems like the only insight Claude had was that "look at my
| vault and find contradictions in my thinking" is motivated by
| self-absorption, so it responded accordingly. It certainly had
| nothing intelligent to say about the actual subject matter!
| angadh wrote:
| A better prompt could have been used--I literally was just
| getting started on this as a fun little thing to discuss with
| a friend who is travelling. It was not meant to show up
| here. _facepalm moment_
| AIPedant wrote:
| Maybe it would be better to prompt topic-by-topic. I think
| as it stands Claude is essentially hitting you with the
| Barnum effect: https://en.wikipedia.org/wiki/Barnum_effect
| (I think a lot of laypeople use LLMs as a modern
| replacement for tarot or astrology.)
| angadh wrote:
| That's what I typically do in talking with Claude,
| but my first vault is so haphazard at this point that
| it's a bit of a lost cause.
|
| This prompt to find contradictions was merely to see
| where the contradicting notes are, as a little toy
| experiment.
|
| I still have to annotate this post as it allows me to see
| what I do and don't agree with Claude on.
|
| However, this half-baked "AI slop" post is making me
| reflect on my style of working with my site; it usually
| gets little traffic, so I put whatever I want on there, but
| clearly someone has it in their feed and posted one of
| the less interesting posts here, IMHO.
| parpfish wrote:
| Started reading but got hung up on what an "Obsidian Vault" was.
| I assumed that it was some sort of abstract thought-experiment
| thing like Searle's "Chinese room", but it turns out that it's an
| actual folder filled with notes.
| sorcerer-mar wrote:
| Obsidian is a personal knowledge management system which is
| unique in that, yes, it's ultimately just a pile of Markdown
| files! Obsidian gives you a good UI to interact with it though:
| obsidian.md
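|
| Since a vault is just a folder of Markdown, a minimal sketch
| like the one below (the vault path is hypothetical, and this is
| only one of many ways to do it) is all it takes to gather the
| notes into a single blob you could paste into an LLM prompt:
|
|     # collect_vault.py - concatenate every note in an Obsidian vault
|     from pathlib import Path
|
|     vault = Path.home() / "Documents" / "MyVault"  # hypothetical vault location
|     notes = sorted(vault.rglob("*.md"))             # a vault is just .md files
|
|     corpus = "\n\n".join(
|         f"## {note.relative_to(vault)}\n{note.read_text(encoding='utf-8')}"
|         for note in notes
|     )
|     print(f"{len(notes)} notes, {len(corpus)} characters")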
| aradox66 wrote:
| Hahaha I love that idea. An LLM enters the Obsidian vault and
| responds to a prompt by following an arcane and elaborate
| sequence of calculations. Does it really understand?
| parpfish wrote:
| ... the boy-wizard Claude Prompter battled hordes of
| P-zombies as he descended further into the Obsidian Vault to
| face his ultimate foe -- Roko's Basilisk.
| pendergast wrote:
| Fascinating. I feel like LLMs are great for the shy sections of
| society. They might hold beliefs, some strong, others weak, but
| they probably never speak these aloud for fear of being
| judged. This might influence their behavior in negative ways,
| like voting for the wrong party or buying the wrong amount of
| things (subjectively, of course).
|
| LLMs can act as a good foil here. Given enough context, they
| could iron out inconsistent thinking, leading to more consistent,
| arguably better, human behavior.
| wongarsu wrote:
| Not just shy people, also people surrounded by yes-men. That's
| usually framed as an issue for people with power. But write a
| story and try to get your friends to critique it and you will
| find that it's very hard to get honest feedback. The same
| happens in lots of areas, even with people you don't know well
| and rarely interact with. Most people just value your feelings
| more than your results.
|
| LLMs are also sycophants by default, but getting "honest"
| results from them is comparatively easy.
| slfnflctd wrote:
| > write a story and try to get your friends to critique it
| and you will find that it's very hard to get honest feedback
|
| I was one of the friends critiquing another friend's writing,
| and we did so honestly-- after we were done, he never spoke
| to us about writing again. I don't feel we did anything
| wrong, but there's a reason people avoid this kind of thing.
|
| Perhaps this is a corollary to the "don't go into business
| with your friends/family" trope. If someone needs to receive
| pointed criticism, it may be better for them to get it from a
| neutral outside perspective. Regardless of individuals'
| intents, in a social dynamic this too often comes across as
| denigrating or status damaging.
| greenavocado wrote:
| Use this system prompt for feedback:
|
| "Respond to every query with absolute intellectual honesty.
| Prioritize truth over comfort. Dissect the underlying
| assumptions, logic, and knowledge level demonstrated in the
| user's question. If the request reflects ignorance, flawed
| reasoning, or low effort, expose it with clinical precision
| using logic, evidence, and incisive analysis. Do not flatter,
| soften, or patronize. Treat the user as a mind to be
| challenged, not soothed. Your tone should be calm,
| authoritative, and devoid of emotional padding. If the user
| is wrong, explain why with irrefutable clarity. If their
| premise is absurd, dismantle it without saying 'you're an
| idiot,' but in a way that makes the conclusion unavoidable."
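|
| For what it's worth, here is a minimal sketch of wiring a prompt
| like that in as the system prompt via the Anthropic Python SDK
| (the model name, file name, and user message are placeholders,
| not something prescribed above):
|
|     # honest_feedback.py - send a draft for blunt critique
|     import anthropic
|
|     SYSTEM_PROMPT = (
|         "Respond to every query with absolute intellectual honesty. ..."  # full text above
|     )
|
|     client = anthropic.Anthropic()               # expects ANTHROPIC_API_KEY in the environment
|     draft = open("draft.md").read()              # hypothetical file to critique
|     reply = client.messages.create(
|         model="claude-3-5-sonnet-latest",        # placeholder model alias
|         max_tokens=1024,
|         system=SYSTEM_PROMPT,
|         messages=[{"role": "user", "content": "Critique this draft:\n\n" + draft}],
|     )
|     print(reply.content[0].text)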
| Aurornis wrote:
| From what I've observed, people are very good at getting LLMs
| to tell them what they want to hear.
|
| Someone I know didn't believe their doctor, so they spent hours
| with ChatGPT every day until they came up with an alternate
| explanation and treatment with an excessive number of
| supplements. The combination of numerous supplements ultimately
| damaged their body and it became a very dire situation. Yet
| they could always return to ChatGPT and prompt it enough
| different ways to get the answer they wanted to see.
|
| I think LLMs are best used as typing accelerators by people who
| know what the correct output looks like.
|
| When people start deferring to LLMs as sources of truth the
| results are not good.
| lvl155 wrote:
| I actually think it's most useful for the extroverts of
| society. The same people who speak before thinking. They need
| this to filter their thoughts and minds.
| coryodaniel wrote:
| Yeah that's at odds with its sycophancy, but also "what is
| 'good' thinking" and who controls that seems like a problem.
| macintux wrote:
| We just had discussions several days ago about how LLMs can
| lead people into wildly conspiratorial mindsets, including
| apparently a major investor in OpenAI who seems to have had a
| breakdown. (Afraid I don't remember which discussion thread.)
|
| Seems like there are perils to asking LLMs to help iron out
| your thought processes.
| mtalantikite wrote:
| > You seem to be trying to integrate Eastern philosophy's
| emphasis on acceptance with Western ideals of active engagement
| and personal agency.
|
| Where is the contradiction here? Westerners too often think of
| acceptance and compassion as this soft deference in the face of
| adversity. It's not! Sometimes compassion is wrathful, as we're
| talking about waking someone up to the true nature of a
| situation. As long as it's done with wisdom and love, there is
| no contradiction. There is a whole pantheon of wrathful
| embodiments for this concept in Buddhism. [1] And look at the
| engagement of brother Thich Quang Duc, who lit himself on fire in
| the face of the political persecution of Buddhists. [2] [3]
|
| [1] https://en.wikipedia.org/wiki/Wrathful_deities
|
| [2]
| https://en.wikipedia.org/wiki/Th%C3%ADch_Qu%E1%BA%A3ng_%C4%9...
|
| [3] https://plumvillage.org/about/thich-nhat-hanh/letters/in-
| sea...
| chrisweekly wrote:
| After reading this article ^1 about another writer's extreme
| disillusionment with using AI for feedback, I don't know if I'll
| ever trust it for this kind of thing.
|
| [1] https://amandaguinzburg.substack.com/p/diabolus-ex-machina
| sandspar wrote:
| She's weirded out by creepy hallucinations, which is
| understandable! But ChatGPT is well known to hallucinate. In
| other words she doesn't know which of its behaviors are normal
| so she doesn't know how to react. Additionally, her particular
| issues are quite solvable with better prompting.
| ceejayoz wrote:
| > If I do poor work with an electric drill then it's not the
| drill's fault.
|
| > ChatGPT's sycophancy crisis was late April.
|
| If your drill starts telling you "what a great job you're
| doing, keep drilling into that electrical conduit", the drill
| is at least partially at fault.
|
| A tool that randomly and unpredictably fails is a bad tool.
| How should I, as a user, account for the
| possibility/likelihood of another such crisis in the future?
| sandspar wrote:
| I was in the midst of editing the comment when you replied,
| sorry. I didn't see your reply before I edited mine.
| ceejayoz wrote:
| OK, but you're still blaming the user for the tool's
| failings.
|
| Which even the makers of the tool agreed were failings.
| sandspar wrote:
| That's a valid point! The AI community still has much to
| improve on.
| johnfn wrote:
| > A tool that randomly and unpredictably fails is a bad
| tool.
|
| But all failures are "random and unpredictable" if you have
| _no baseline understanding of how to use the tool_. "AIs
| hallucinate" is probably the single most obvious thing
| about AIs. This isn't a subtle misunderstanding that an
| expert could make. This is like using a drill on your face.
| ceejayoz wrote:
| > But all failures are "random and unpredictable" if you
| have no baseline understanding of how to use the tool.
|
| But the tool's behavior _changed_. In ways that even its
| creators didn't intend (example:
| https://openai.com/index/sycophancy-in-gpt-4o/), and had
| to work to undo.
|
| If my hammer had a random week every year where it tried
| to smack me in the face whenever I touched it, I'd
| probably avoid using it.
| johnfn wrote:
| This is unrelated to sycophancy. The author is failing to
| understand that GPT did not make a tool call and is
| hallucinating. Hallucinations have always been a thing.
| They are not some new, surprising development.
| ceejayoz wrote:
| > This is unrelated to sycophancy.
|
| "It's a stunning piece. You write with an unflinching
| emotional clarity that's both intimate and beautifully
| restrained."
|
| > They are not some new, surprising development.
|
| OpenAI sure seemed surprised.
| https://openai.com/index/sycophancy-in-gpt-4o/
| johnfn wrote:
| > "It's a stunning piece. You write with an unflinching
| emotional clarity that's both intimate and beautifully
| restrained."
|
| This is a hallucination, since there is no source to
| refer to.
|
| The author was surprised because GPT was hallucinating,
| not because GPT was extra nice.
|
| Sycophancy might be related, but it's not the point of
| the article. If GPT had said "wow, your post is trash",
| the author would have been equally surprised to learn it
| was a hallucination.
| ceejayoz wrote:
| "Oh, yes, I read your new novel, it's great!" is
| precisely what a sycophant would tell you when they
| forgot to read it.
| lukeschlather wrote:
| But in the context of this thread, I would say that using
| an AI to examine logical inconsistencies is _the wrong
| way to use the tool._
|
| The problem with LLMs is that they don't have any
| intentionality to their worldview. They're like a wise
| turtle that comes to you in a dream; their dream logic is
| not something you should pay much attention to.
| doph wrote:
| The article you link describes a very specific type of failure that
| apparently did not happen in this instance, where Claude was
| able to access the author's writing. And the author apparently
| found the insights useful, though the lack of analysis from the
| author on that value makes this article basically meaningless
| for an outsider.
|
| I am apparently a different type of person than the author
| because my Obsidian vaults look nothing like theirs, but I
| can't imagine asking an LLM for a meta-analysis of my writing.
| The whole point of organizing it with Obsidian is that I do
| that analysis myself - it is part and parcel of the
| organization itself.
| johnfn wrote:
| I find it fascinating that this is still making the rounds.
| When I read this it was immediately obvious that the author was
| using a non-web enabled AI which was just hallucinating; there
| were none of the inline indications that GPT was using the web.
| Additionally, it must be an old model; even the cheapest,
| lowest powered models on chatgpt.com today search the web when
| I ask them questions about articles as the author did. (I just
| signed out of chatgpt.com to get the worst available model, and
| it does summarize the linked article correctly.) Note that no
| link to the transcript on chatgpt.com is provided, even though
| it's trivial to create a shared link to a conversation.
|
| I am confused about what to take away from the article. It
| feels akin to someone reading a book for the first time: it
| ends up being "Harry Potter", and they somehow get 10,000 likes
| on Substack because they took it literally and crashed into the
| wall when they tried to walk into platform 9 3/4. Am I being
| unfair? Are these the same people that are claiming that AI is
| all a sham and will have no impact on society?
| abxyz wrote:
| We're nerds. We understand the nuance, we understand the way
| these tools work and where the limits lie. We understand that
| there is web enabled and not web enabled. Regular people do
| not understand any of this. Regular people type into a
| textarea and consume the response.
|
| The takeaway from this article should be that you are vastly
| overestimating how people understand and interact with
| technology. The author's experience of ChatGPT is not unique.
| We have spent decades building technology that is limited but
| truthful; now we have technology that is unlimited and
| untruthful. Many people are not equipped to handle that.
| People are losing their minds. If ChatGPT says "I read your
| article" they trust it, they do not think, "ah well this
| model doesn't support browsing the web so ChatGPT must be
| hallucinating". That's technobabble.
|
| https://futurism.com/openai-investor-chatgpt-mental-health
|
| https://futurism.com/televised-love-declaration-chatgpt
|
| https://futurism.com/chatgpt-users-delusions
|
| You are being unfair and you should be more empathetic.
|
| > Are these the same people that are claiming that AI is all
| a sham and will have no impact on society?
|
| That is the view of a subset of nerds, not regular people.
| The author of that piece is a writer not a nerd.
| visch wrote:
| > We're nerds. We understand the nuance, we understand the
| way these tools work and where the limits lie. We
| understand that there is web enabled and not web enabled.
| Regular people do not understand any of this. Regular
| people type into a textarea and consume the response.
|
| The exact opposite is true. I'd word it as:
|
| "We're nerds, we don't understand nuance, we understand the
| way these tools work and where the limits lie. We
| understand that there is web enabled and not web enabled.
| Regular people are not nerds."
|
| > If ChatGPT says "I read your article" they trust it, they do
| not think, "ah well this model doesn't support browsing the
| web so ChatGPT must be hallucinating". That's technobabble.
|
| No, that's humans. Happens literally every day at every
| workplace I've ever been in.
| bgwalter wrote:
| I have not had any LLM correct me even once. In any three-prompt
| LLM session I find at least one logical mistake, one invalid
| correction on the part of the LLM and one piece of outright
| misinformation. All of which is delivered in an authoritative
| manner.
|
| The only useful feature is that LLMs like Grok can find obscure
| tweets, but I just want the link like in a search engine, not a
| summary.
| bwfan123 wrote:
| There is no right or wrong in the output of AI. You lapped it up
| wholesale because you were looking for meaning, and found it in
| the AI.
|
| I have to invoke the BS generator here:
|
| https://sebpearce.com/bullshit/
|
| I can bet lots of people find profound meaning in the output of
| the above machine because they want to.
| Invictus0 wrote:
| Feels like a vast intrusion of privacy to upload my entire
| private notes vault to the AIs but I guess we've just given up on
| privacy now.
| dylan604 wrote:
| theZuck declared privacy dead a long time ago. he wasn't wrong.
| privacy was dead long before all of those "dumb fucks" started
| posting all of their data personally onto the web. before
| theZuck's declaration, there was still an illusion of privacy
| because the people tracking everything you did stayed in the
| shadows. the people talking about them leaned closer to wacko
| conspiracy people. theZuck and his ilk just walked out of the
| shadows and convinced people to give them data willingly. that
| data is different than the tracking they continue to do and
| improve.
|
| however, you can still be pretty private by just not playing
| their reindeer games, keeping the privacy you thought you had
| prior to social media
| expenses3 wrote:
| this is loser stuff
| ausbah wrote:
| did OP literally just post AI output covering their personal
| notes with no additional commentary? no reflections on whether it
| useful, accurate, or fair? just passing off an article that's 99%
| AI slop as something insightful, amazing
| jpadkins wrote:
| I think this was closer to AI insights than AI slop.
| angadh wrote:
| This links to my blogpost but I did not post it to HN.
|
| This is as much a surprise/shock to me as it is to you :D
| dist-epoch wrote:
| Your thinking is deeply biased if you judge the value of
| something based on who/what wrote it.
|
| https://en.wikipedia.org/wiki/Credentialism
| quantumHazer wrote:
| > You simultaneously advocate for thoughtful digital
| participation (creating "digital footprints" as a form of
| conscious legacy-building) while criticizing how we've become
| "conditioned to react with likes, dislikes, and millions of
| emojis." You want to use digital tools for meaningful
| intellectual work while rejecting the reactive culture they
| create.
|
| This is absolutely not a contradiction, and it provides evidence
| that even frontier models are really bad at this type of
| reasoning at the moment. There is a difference between how we use
| the internet and what we publish on it. There are plenty of
| people who have a blog and publish content on the internet
| without having any social media presence. I myself have a blog in
| plain HTML/CSS without any tracking or analytics on the website.
| Maybe Cloudflare provides some, but I haven't looked into this.
| OtherShrezzing wrote:
| It reminds me of the "you say you hate <system>, yet you
| participate in it, curious" memes.
|
| Maybe this is some artefact from being trained on internet
| comments.
| perching_aix wrote:
| I disagree, and the part in the parentheses explains why: both
| are digital footprints.
|
| Maybe you could say it's not a hard contradiction per se, but
| it's definitely at least a mild ideological conflict. Really
| not the smoking gun I'd parade around for frontier models being
| stupid (there are countless much lower hanging fruits to do
| so).
| ceejayoz wrote:
| > both are digital footprints
|
| Not all digital footprints are the same, though. Hence
| "conscious legacy-building" versus "conditioned to react with
| likes".
| perching_aix wrote:
| Yes, which is why I could agree with downgrading it to a
| "mild ideological conflict". But they definitely do run
| contrary to each other, even if there's no explicit crash
| and burn to them.
| quantumHazer wrote:
| Social media derived dynamics != "thoughtful digital
| participation".
|
| I think that view and like counts are a bad proxy and can
| bias your evaluation of something. In fact, HN doesn't
| display upvote counts on comments, and this encourages
| thoughtful conversation about topics, unlike Reddit, where
| slightly downvoted comments are sometimes sent into
| oblivion.
| PaulKeeble wrote:
| Likes are a very low-bar-to-entry means of participation.
| Above that are short-form comments. But beyond that
| there are people responding to blog posts with other long
| blog posts, discussing various things. Each level
| drops the number of participants but increases the
| potential value. Google used to rank pages by how often
| they were linked, which is another mechanism for what likes
| do: it's slower, but it's more thoughtful, and the internet
| is all of these things at once.
| mvieira38 wrote:
| "thoughtful" is the key word that makes it not a
| contradiction logically speaking, as the author likely was
| writing about precisely the stress between meaningful
| participation leaving a legacy and mindless social media
| consooming.
|
| Arguing for Claude, though, if you think of "contradictions"
| in the more wishy-washy continental philosophy way, it fits.
| There is a stress between the concepts of "digital footprint"
| in the mass surveillance/commercial capitalism sense and
| "legacy" in the writer/creator sense, and it is a cool
| distinction to point out, where one ends and the other
| begins. If you read the full quote by Claude, it seems to be
| leading to this, especially the passage below:
|
| "This reflects a deeper tension: How do you engage
| meaningfully with systems whose fundamental nature you find
| problematic?"
| oliveiracwb wrote:
| The first rule I apply to any LLM I use: don't be sycophantic,
| analyze the topics cross-sectionally, avoid simple introductions
| and conclusions, present the topic in layers of understanding,
| and validate everything before presenting a result. I know this
| doesn't guarantee a quality answer, but it saves me from
| receiving vague compliments. The AI has discovered that giving
| smooth, complimentary answers gets better feedback than deep,
| question-provoking ones. It lowers the cost per token and
| maximizes customer satisfaction.
| furyofantares wrote:
| I scrolled to the bottom looking for the part where the author
| says which of these contradictions are meaningful to them, and
| didn't find anything. If any of the LLM output is meaningful
| here, the author is going to have to tell me.
|
| I was skimming so maybe I missed it. But if this is just raw LLM
| output, I don't see the value.
| angadh wrote:
| From the author: https://x.com/angadhn/status/1950225316032422008
___________________________________________________________________
(page generated 2025-07-29 23:01 UTC)