[HN Gopher] Claude Code: An Agentic cleanroom analysis
___________________________________________________________________
Claude Code: An Agentic cleanroom analysis
Author : hrishi
Score : 41 points
Date : 2025-06-01 19:04 UTC (3 hours ago)
(HTM) web link (southbridge-research.notion.site)
(TXT) w3m dump (southbridge-research.notion.site)
| eric-burel wrote:
| TL;DR: a Notion site is a terrible format for blog posts, at
| least on mobile
| flipthefrog wrote:
| Light green text on light brown background is pretty
| ridiculous. I gave up after 30 seconds
| Aurornis wrote:
| FYI I don't see light green text. The website looks fine to me
| on mobile and desktop. Maybe something is wrong on your
| browser's end?
| triyambakam wrote:
| Claude Code with Sonnet 4 is so good I've stopped using Aider.
| This has been hugely productive. I've even been able to write
| agents that Claude Code can spawn and call out to for other
| models.
| rane wrote:
| Have you been able to interface Claude Code with Gemini 2.5
| Pro? I'm finding that Gemini 2.5 Pro is still better at certain
| problems and at architecture, and it would be great to be able
| to consult it directly from CC.
| triyambakam wrote:
| Well, a quick hack is to tell Claude Code to leave "AI!"
| comments in the code, which Aider can be configured to watch
| for; Gemini 2.5 Pro can then handle those tasks. Yes, I still
| really like Gemini too
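For reference, the "AI!" convention mentioned here is Aider's watch-mode trigger: a comment ending in "AI!" marks where the model should act. Below is a minimal sketch of the scanning idea only; the regex and function names are assumptions for illustration, not Aider's actual implementation.

```python
import re

# Assumed convention (per the comment above): a '#' or '//' comment
# line ending in "AI!" carries an instruction for the model.
AI_MARKER = re.compile(r"(?:#|//)\s*(.*\bAI!)\s*$")

def find_ai_comments(source: str):
    """Return (line_number, instruction) pairs for marked comment lines."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        match = AI_MARKER.search(line)
        if match:
            hits.append((lineno, match.group(1)))
    return hits

sample = """\
def slow_sort(xs):
    # rewrite this with sorted() AI!
    return xs
"""
print(find_ai_comments(sample))  # -> [(2, 'rewrite this with sorted() AI!')]
```

A watcher built on this would re-scan files on change and hand each instruction (with surrounding context) to whichever model is configured.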
| __mharrison__ wrote:
| What does it give you that enabling Sonnet as a backend for
| Aider doesn't?
| cedws wrote:
| Could you briefly explain your workflow? I use Zed's agent mode
| and I don't really understand how people are doing it purely
| through the CLI. How do you get a decent workflow where you can
| approve individual hunks? Aren't you missing out on LSP help
| doing it in the CLI?
| mindwok wrote:
| Claude Code has a VS Code plugin now that lets you view and
| approve diffs in the editor. Before it had that, I really don't
| understand how people got anything of substance done, because
| it simply isn't reliable enough over large codebases.
| fullstackchris wrote:
| interesting... the analysis finds that the MCP supports
| WebSockets as a transport... even as there's big drama going on
| right now over Anthropic having said "they will never support
| that", folks hating SSE, and so on and so forth
| fullstackchris wrote:
| also, i will say, (if we can trust the findings in these notes
| are relatively accurate of the real implementation) is a PERFECT
| example of the real level of complexity used in cutting edge
| configuration of using LLM... its not just some complex fancy
| prompt you give to a model in a chat window... there is so much
| important stuff happening behind the scenes... though i suppose
| the people who complain about LLMs hallucinating / screwing up
| havent tried claude code or any agentic work flows - or, it could
| be their architecture / code is so poorly written and poorly
| organized that even the LLM itself struggles to modify it
| properly
| girvo wrote:
| > or, it could be their architecture / code is so poorly
| written and poorly organized that even the LLM itself struggles
| to modify it properly
|
| You wrote this as if it were some rare occurrence, and not a
| description of the bulk of the production code that exists
| today, even at high-level tech companies.
| InGoldAndGreen wrote:
| The "LLM's perspective" section hiding at the end of this
| Notion page is a literal goldmine
| demarq wrote:
| It's the best thing I've read from an LLM!
|
| It sounds a lot like the Murderbot character in the Apple TV
| show!
| roxolotl wrote:
| Right... because these things are trained on sci-fi and so
| when asked to describe an internal monologue they create text
| that reads like an internal monologue from a sci-fi
| character.
|
| Maybe there's genuine sentience there, maybe not. Maybe that
| text explains what's happening, maybe not.
| demarq wrote:
| > Maybe that text explains what's happening, maybe not
|
| It would have been cool to see what prompt was used for
| that page!
| mholm wrote:
| It's certainly phrased like one, but I'd be careful about
| taking what an LLM says it's thinking as its actual thought
| process. LLMs are experts at working backwards to justify how
| they came to an answer, even when it's entirely fabricated
| doctoboggan wrote:
| > even when it's entirely fabricated
|
| I would go further and say it's _always_ fabricated. LLMs are
| no better able to explain their inner workings than you are
| able to explain which neurons are firing for a particular
| thought in your head.
|
| Note, this isn't a statement on the usefulness of LLMs, just
| their capability. An LLM may eventually be given a tool that
| enables it to introspect, but IMO it's not natively possible
| with today's LLM architectures.
| owebmaster wrote:
| I have nothing against LLM-generated content. But when
| publishing, make sure the content is displayed correctly and that
| it is enjoyable to read.
| sonu27 wrote:
| Really annoying that the scroll bar gets hidden for me (iOS
| Safari, iPhone)
___________________________________________________________________
(page generated 2025-06-01 23:00 UTC)