__  __      _        _____ _ _ _
|  \/  | ___| |_ __ _|  ___(_) | |_ ___ _ __
| |\/| |/ _ \ __/ _` | |_  | | | __/ _ \ '__|
| |  | |  __/ || (_| |  _| | | | ||  __/ |
|_|  |_|\___|\__\__,_|_|   |_|_|\__\___|_|
community weblog

A specialized use of the Claude Code tool.

It seems to be very useful for decompiling other code. HN is talking about it: "The unexpected effectiveness of one-shot decompilation with Claude" (blog.chrislewis.au).

The author's previous post was on decompiling an N64 game: https://blog.chrislewis.au/using-coding-agents-to-decompile-nintendo-64-games/

But it's not a magic button for people who don't know what they're doing. As a couple of HN comments describe:

[snip]

saagarjha: It's worth noting here that the author came up with a handful of good heuristics to guide Claude and a very specific goal, and the LLM did a good job given those constraints. Most seasoned reverse engineers I know have found similar wins with those in place. What LLMs are (still?) not good at is one-shot reverse engineering for understanding by a non-expert. If that's your goal, don't blindly use an LLM. People already know that getting an LLM to write prose or code is bad, but it's worth remembering that doing this for decompilation is even harder :)

zdware: Agree with this. I'm a software engineer who has mostly not had to manage memory for most of my career. I asked Opus how hard it would be to port the script extender for Baldur's Gate 3 from Windows to the native Linux build. It outlined that it would be very difficult for someone without reverse engineering experience, and correctly pointed out that they are using different compilers, so it's not a simple mapping exercise. Its recommendation was not to try unless I was a Ghidra master and had lots of time on my hands.
posted by aleph on Dec 06, 2025 at 11:48 AM

---------------------------

On one hand, this is very, very cool. On the other hand I can't help but wince when I see sentences like "let claude run for 8+ hours unattended". Yes, the planet got destroyed, but for a beautiful moment in time we ~~generated a lot of value for shareholders~~ reverse engineered faintly-remembered N64 title "Snowboard Kids 2".
posted by phooky at 1:33 PM

---------------------------

Yep, it's insane. But then we seem to be at this point. I blame Future Shock and bad actors.
posted by aleph at 2:19 PM

---------------------------

That, and I *do* like getting better tools to take apart what needs to be taken apart. For a variety of reasons.
posted by aleph at 2:26 PM

---------------------------

Neat.

If I'm not mistaken this has pretty big implications for software piracy.
posted by storybored at 6:41 AM

---------------------------

Among other things.
posted by aleph at 6:54 AM

---------------------------

I think this type of workflow, batch processing something that's extremely repetitive but too tricky for ordinary tools, is one of the best uses of generative AI. Maybe it would be boring for an AI too, but it gets a fresh context window with a novel problem each time.
I've used the same type of workflow for similar but smaller codebase-wide audits, with a lot of success. (Regular code applied the fixes, and it was all reviewed by humans afterwards! Verifiability is super important.)
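
For the curious, the loop I mean is roughly this shape. Everything here is made up for illustration (the file names, the pytest check, and the assumption that the claude CLI's non-interactive -p mode is the runner); it's a sketch of the pattern, not anyone's actual pipeline.

    # Sketch of the batch-audit pattern: one fresh agent invocation per
    # work item, a mechanical check, and a log for the human review pass.
    # The "claude -p" call and the pytest check are assumptions for
    # illustration, not a description of any real project's setup.
    import json
    import subprocess
    from pathlib import Path

    items = Path("flagged_items.txt").read_text().splitlines()  # hypothetical work list

    def verified() -> bool:
        # Cheap mechanical success check; here, just the test suite.
        return subprocess.run(["pytest", "-q"], capture_output=True).returncode == 0

    with Path("results.jsonl").open("a") as log:
        for item in items:
            prompt = f"Audit and fix the flagged issue in {item}. Touch nothing else."
            # One process per item: fresh context window, no drift between tasks.
            run = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)
            ok = (run.returncode == 0) and verified()
            log.write(json.dumps({"item": item, "ok": ok}) + "\n")
            # Failures just get logged; humans review everything afterwards.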

The "run claude opus, the biggest model, 8+ hours a day" thing gives me pause too. This project is obviously important to the author and they'll see it through whatever resources it takes. But you'd think they'd start with a smaller model and tier up as needed.
posted by dustletter at 1:14 PM

---------------------------

Yes. Simpler models, run locally, can handle a lot of niche uses.

The other one I heard about recently was an Electrician who used it as a fancy search engine into the local electrical code. That used to be a (deliberately) expensive set of printed docs; then they allowed sets of PDF docs. He can ask something like "what is the spacing required for blah-blah type connection?" It may give it to him directly, but even if it gives him many matches he can go through them pretty fast to find what he knows he's looking for.
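
(For what it's worth, even the non-LLM version of that is pretty simple once the code book exists as PDFs: a dumb keyword pass that spits out candidate pages to eyeball. File names and the query below are made up; this is just the shortlist idea, not what he actually runs.)

    # Plain keyword search over a folder of PDFs -- the "shortlist to scan"
    # idea, with no LLM involved. Paths and query terms are invented.
    from pathlib import Path
    from pypdf import PdfReader  # pip install pypdf

    query_terms = ["spacing", "connection"]

    for pdf in Path("code_book").glob("*.pdf"):
        reader = PdfReader(pdf)
        for page_no, page in enumerate(reader.pages, start=1):
            text = (page.extract_text() or "").lower()
            if all(term in text for term in query_terms):
                print(f"{pdf.name} p.{page_no}")  # candidate pages to eyeball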

*Not* what it's being sold as but still useful sometimes.
posted by aleph at 3:43 PM

---------------------------

After thinking about it, what is neat about the author's strategy is that it leans into LLMs' known imperfections. By pre-sorting the code into easy vs. hard routines, he's giving Claude a curated selection of machine code that it has the best chance of success with. Since it's super simple to verify a success, it makes total sense to zero-shot Claude. This is like a decompilation sieve.
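
(The verify step is the whole trick, and it might look something like the sketch below: compile the candidate C, strip it to raw instruction bytes, and compare against the original routine pulled from the ROM. Compiler, flags, and file layout here are guesses for illustration; the real project presumably builds with the game's original toolchain and settings.)

    # Sketch of the byte-match check in a "decompilation sieve": a candidate
    # either reproduces the original machine code exactly or goes back in
    # the hard pile. Toolchain names and flags below are placeholders.
    import subprocess
    from pathlib import Path

    def matches(candidate_c: Path, original_bytes: Path) -> bool:
        obj = candidate_c.with_suffix(".o")
        raw = candidate_c.with_suffix(".bin")
        # Cross-compile the candidate C (flags would have to mirror the original build).
        subprocess.run(["mips-linux-gnu-gcc", "-O2", "-c", str(candidate_c), "-o", str(obj)],
                       check=True)
        # Keep only the instruction bytes from the .text section.
        subprocess.run(["mips-linux-gnu-objcopy", "-O", "binary", "--only-section=.text",
                        str(obj), str(raw)], check=True)
        # original_bytes holds the routine's bytes as extracted from the ROM.
        return raw.read_bytes() == original_bytes.read_bytes()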
posted by storybored at 4:50 PM

---------------------------

Electrician used it as a fancy search engine into the local electrical code
Yeah, every document management system I've used over the last few decades has had worthless search. That an LLM regurgitation is more useful should be a wake-up call to these companies, but probably won't be.
posted by rhamphorhynchus at 9:41 AM

---------------------------

Naah. The AI just does a sloppy/lossy pattern match on the data. The result is good enough for people who don't know how to do a precise one. Which is a lot more people than the ones who do.

Then the person who *knows* can scan the much smaller pool and winnow out the dross.

[shrug]

The Electrician said in the post that it saved him a bunch of time. If you can believe a rando on the internet. If you can believe I'm not just making it up.
posted by aleph at 11:55 AM

---------------------------

On one hand, this is very, very cool. On the other hand I can't help but wince when I see sentences like "let claude run for 8+ hours unattended". Yes, the planet got destroyed, but for a beautiful moment in time we ~~generated a lot of value for shareholders~~ reverse engineered faintly-remembered N64 title "Snowboard Kids 2".

If every watt is the end of the world, then we've abandoned thinking for theatrics. Moral reasoning requires scale, not slogans.
posted by fragmede at 4:46 AM

---------------------------