[HN Gopher] Show HN: I made a heatmap diff viewer for code reviews
___________________________________________________________________
Show HN: I made a heatmap diff viewer for code reviews
0github.com is a pull request viewer that color-codes every diff
line/token by how much human attention it probably needs. Unlike
PR-review bots, we try to flag not just by "is it a bug?" but by
"is it worth a second look?" (examples: hard-coded secret, weird
crypto mode, gnarly logic, ugly code). To try it, replace
github.com with 0github.com in any pull-request URL. Under the
hood, we split the PR into individual files, and for each file, we
ask an LLM to annotate each line with a data structure that we
parse into a colored heatmap. Examples:
https://0github.com/manaflow-ai/cmux/pull/666
https://0github.com/stack-auth/stack-auth/pull/988
https://0github.com/tinygrad/tinygrad/pull/12995
https://0github.com/simonw/datasette/pull/2548 Notice how all the
example links have a 0 prepended before github.com. This navigates
you to our custom diff viewer where we handle the same URL path
parameters as github.com. Darker yellows indicate that an area
might require more investigation. Hover on the highlights to see
the LLM's explanation. There's also a slider on the top left to
adjust the "should review" threshold. Repo (MIT license):
https://github.com/manaflow-ai/cmux
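
To make the annotation step concrete, here is a minimal sketch of
the kind of per-line data structure an LLM could return and how a
threshold slider might bucket it into heat levels. The field names
and scoring are illustrative assumptions, not the project's actual
schema (that lives under apps/www/lib/ in the repo).

  // Illustrative only: field names are assumptions, not the real
  // schema used by 0github.
  interface LineAnnotation {
    line: number;    // 1-based line number within the diff hunk
    score: number;   // 0..1, how much attention the line likely needs
    reason?: string; // short explanation shown in the hover tooltip
  }

  // Map a score to a heat bucket; the slider drops lines below
  // the "should review" threshold.
  function heatLevel(score: number, threshold: number): number {
    if (score < threshold) return 0;           // not highlighted
    return Math.min(3, Math.ceil(score * 3));  // 1..3, light to dark
  }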
Author : lawrencechen
Score : 150 points
Date : 2025-10-30 14:21 UTC (8 hours ago)
(HTM) web link (0github.com)
(TXT) w3m dump (0github.com)
| jtwaleson wrote:
| This is really useful. Might want to add a checkbox at a certain
| threshold, so that reviewers explicitly answer the concerns of
| the LLM. Also, you could start collecting stats on how "easy
| to review" team members' PRs are, e.g. they'd probably get a
| better score if they already address the concerns in the
| comments.
| timenotwasted wrote:
| This is very cool and I could see it being really useful
| especially for those giant PRs. I'd prefer it if instead of the
| slider I could just click the different heatmap colors and if
| they indicated what exactly they were for (label not threshold).
| I get the underlying premise, but at a glance it's more to
| process unless I ended up using this constantly.
| lawrencechen wrote:
| Currently, tooltips are shown when hovering on highlighted
| words. Need to make them visible on mobile though. Was wondering
| if you were thinking of another way to show the labels besides
| hovering?
| timenotwasted wrote:
| I was referring to something more akin to a legend like you
| have in the examples "(examples: hard-coded secret, weird
| crypto mode, gnarly logic)." where I could click "hard-coded
| secret" (not the best label but you get the idea) and it
| would filter on those instead of the slider.
| 383toast wrote:
| Reminds me of this one, highlighting for text
| https://github.com/mattneary/salience
| cdiamand wrote:
| This is something I have found missing in my current workflow
| when reviewing PRs, particularly in the age of large
| AI-generated PRs.
|
| I think most reviewers do this to some degree by looking at
| points of interest. It'd be cool if this could look at your prior
| reviews and try to learn your style.
|
| Is this the correct commit to look at?
| https://github.com/manaflow-ai/cmux/commit/661ea617d7b1fd392...
| lawrencechen wrote:
| https://github.com/manaflow-ai/cmux/blob/main/apps/www/lib/s...
|
| This file has most of the logic; the commit you linked to has a
| bunch of other experiments.
|
| > look at your prior reviews and try to learn your style.
|
| We're really interested in this direction too, maybe setting
| up a DSPy system to automatically fit reviews to your
| preferences.
| cdiamand wrote:
| Thank you. This is a pretty cool feature that is just
| scratching the surface of a deep need, so keep at it.
|
| Another perspective where this exact feature would be useful
| is in security review.
|
| For example - there are many static security analyzers that
| look for patterns, and they're useful when you break a
| clearly predefined rule that is well known.
|
| However, there are situations that static tools miss, where a
| highlight tool like this could help bring a reviewer's eyes
| to a high-risk area, i.e. "scrutinize this code more because
| it deals with user input and there is a chance of SQL
| injection here," etc.
|
| I think that would be very useful as well.
| austinwang115 wrote:
| This is a very interesting idea that we'll definitely look
| into.
| austinwang115 wrote:
| This keeps me from instantly LGTM-ing long PRs... now the
| heatmap guides my eyes so I know where to look.
| nzach wrote:
| I think this "'should review' threshold" is a really great idea,
| but I probably wouldn't be able to trust it enough to make it
| useful.
| wiether wrote:
| I like the idea!
|
| File `apps/client/electron/main/proxy-routing.ts` line 63
|
| Would adding a comment to explain why the downgrade is done
| have resulted in the issue not being raised?
|
| Also two suggestions on the UI
|
| - anchors on lines
|
| - anchors on files and ability to copy a filename easily
| lawrencechen wrote:
| Good suggestions! Will make it more URL-friendly.
|
| > Adding a comment to explain why the downgrade is done would
| have resulted in not raising the issue?
|
| Trying it out here with a new PR on same branch:
| https://0github.com/manaflow-ai/cmux/pull/809
|
| Will check back on it later!
|
| EDIT: seems like my comment on line 62 got highlighted. Maybe
| we should surface the ability to edit the prompt.
| skeptrune wrote:
| I feel like this is really smart. Going to have to set it up!
| austinwang115 wrote:
| Just prepend 0 in front of github in your PR link and it should
| work
| skeptrune wrote:
| Ah, I see now.
| n2d4 wrote:
| > https://0github.com/stack-auth/stack-auth/pull/988
|
| Very fun to see my own PR on Hacker News!
|
| This looks great. I'm probably gonna keep the threshold set to
| 0%, so a bit more gradient variety could be nice. Red-yellow-
| green maybe?
|
| Also, can I use this on AI-generated code before creating a PR
| somehow? I find myself spending a lot of time reviewing Codex and
| Claude Code edits in my IDE.
| lawrencechen wrote:
| Yeah we definitely want to make the gradient and colors
| configurable.
|
| What form factor would make the most sense for you? Maybe a
| CLI command that renders the diff in the terminal or as HTML?
| n2d4 wrote:
| Either would work, I think. How I do it right now is that I
| let AI edit automatically, but then check the diff in Cursor
| before I stage my Git changes. May be different for others.
| lawrencechen wrote:
| Yeah, heatmapping the diff before creating a PR would need
| tighter IDE integration. We're working on cmux for this
| purpose. It's kinda an IDE, and it lives in the same repo:
| https://github.com/manaflow-ai/cmux.
|
| After we add the heatmap diff viewer into cmux, I expect
| that I'll be spending most of my time in between the
| heatmap diff and a browser preview:
| https://github.com/manaflow-ai/cmux/raw/main/docs/assets/cmu...
| froh wrote:
| ColorBrewer has proven high-contrast gradients and also
| color-blind options.
|
| A CLI command with two options, console (color) and HTML,
| opens all doors, right?
| petralithic wrote:
| Change the domain name, you will likely get a cease and desist
| otherwise.
| ramonga wrote:
| Maybe add some caching? I clicked one of the example PRs and it
| kept loading forever...
| lawrencechen wrote:
| Shoot, we should have caching in place already. Taking a look
| now
| lawrencechen wrote:
| Getting rate limited by GitHub, gonna add caching here as
| well. A temporary workaround is to sign in manually and return
| to the example page: https://0github.com/handler/sign-in
| austinwang115 wrote:
| pushed a fix, should work now
| kburman wrote:
| It's an interesting direction, but feels pretty expensive for
| what might still be a guess at what matters.
|
| I'm not sure an LLM can really capture project-specific context
| yet from a single PR diff.
|
| Honestly, a simple data-driven heatmap showing which parts of the
| code change most often or correlate with past bugs would probably
| give reviewers more trustworthy signals.
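|
| (For illustration, a minimal sketch of such a churn heatmap in
| TypeScript, assuming plain git history is the only input; the
| past-bug correlation would need an issue tracker or commit-
| message heuristics layered on top.)
|
|   // Sketch only: counts how often each file changed recently.
|   // Assumes it runs at the repo root with `git` on PATH.
|   import { execSync } from "node:child_process";
|
|   function churnHeatmap(since = "1 year ago"): Map<string, number> {
|     const log = execSync(
|       `git log --since="${since}" --name-only --pretty=format:`,
|       { encoding: "utf8" },
|     );
|     const counts = new Map<string, number>();
|     for (const file of log.split("\n").filter(Boolean)) {
|       counts.set(file, (counts.get(file) ?? 0) + 1);
|     }
|     return counts; // higher count = hotter file in the heatmap
|   }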
| lawrencechen wrote:
| Yeah this is honestly pretty expensive to run today.
|
| > I'm not sure an LLM can really capture project-specific
| context yet from a single PR diff.
|
| We had an even more expensive approach that cloned the repo
| into a VM and prompted codex to explore the codebase and run
| code before returning the heatmap data structure. Decided
| against it for now due to latency and cost, but I think we'll
| revisit it to help the LLM get project context.
|
| Distillation should help a bit with cost, but I haven't
| experimented enough to have a definitive answer. Excited to
| play around with it though!
|
| > which parts of the code change most often or correlate with
| past bugs
|
| I can think of a way to do the correlation that would require
| LLMs. Maybe I'm missing a simpler approach? But I agree that
| conditioning on past bugs would be great.
| kburman wrote:
| For the correlation idea, you might take a look at how Sentry
| does it, they rely mostly on stack traces, error messages,
| and pattern matching to map issues back to code areas. It's
| cheap, scalable, and doesn't need an LLM in the loop, which
| could be a good baseline before layering anything heavier on
| top.
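|
| (As a toy illustration of that mapping, assuming Node-style
| stack frames and nothing Sentry-specific:)
|
|   // Sketch: count how often each file shows up in crash stack
|   // traces, as a cheap "correlates with past bugs" signal.
|   // Frame format assumed: "at fn (path/file.ts:12:3)".
|   function bugHotspots(stackTraces: string[]): Map<string, number> {
|     const counts = new Map<string, number>();
|     for (const trace of stackTraces) {
|       for (const m of trace.matchAll(/\(([^)(:]+):\d+:\d+\)/g)) {
|         counts.set(m[1], (counts.get(m[1]) ?? 0) + 1);
|       }
|     }
|     return counts; // files in many traces = likely risky areas
|   }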
|
| As for interactive reviews, one workflow I've found
| surprisingly useful is letting Claude Code simulate a
| conversation between two developers pair-programming through
| the PR. It's not perfect, but in practice the dialogue and
| clarifying questions it generates often give me more insight
| than a single-shot LLM summary. You might find it an
| interesting pattern to experiment with once you revisit the
| more context-aware approaches.
| CuriouslyC wrote:
| Gemini is better than GPT-5 variants for large context. Also,
| agents tend to be bad at gathering an optimal context set.
| The best approach is to intelligently select from the
| codebase to generate a "covering set" of everything touched
| in the PR, make a bundle, and fire it off at Gemini as a one
| shot. Because of caching, you can even fire off multiple
| queries to Gemini instructing it to evaluate the PR from
| different perspectives for cheap.
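|
| (A rough sketch of that "covering set" bundling step, assuming
| the list of touched files comes from the diff and only one
| level of relative imports gets resolved; names and paths are
| illustrative, not how 0github actually works.)
|
|   // Sketch: bundle the files touched by a PR plus their direct
|   // local imports into one prompt-sized blob.
|   import { readFileSync } from "node:fs";
|   import { dirname, resolve } from "node:path";
|
|   function coveringSet(touched: string[]): string[] {
|     const files = new Set(touched);
|     for (const file of touched) {
|       const src = readFileSync(file, "utf8");
|       // Naive: only picks up relative imports like "./foo"
|       for (const m of src.matchAll(/from\s+["'](\.[^"']+)["']/g)) {
|         files.add(resolve(dirname(file), m[1]) + ".ts");
|       }
|     }
|     return [...files];
|   }
|
|   function makeBundle(files: string[]): string {
|     return files
|       .map((f) => `// FILE: ${f}\n${readFileSync(f, "utf8")}`)
|       .join("\n\n");
|   }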
| lawrencechen wrote:
| Yeah, adding a context-gathering step is a good idea. Our
| original approach used the Codex CLI in a VM, so context
| gathering was pretty comprehensive. We switched to a more
| naive approach due to latency, but having a step using a
| smaller model (like SWE-grep) could be a nice tradeoff.
| nonethewiser wrote:
| A large portion of the lines of code I'm considering when I
| review a PR are not part of the diff. This has to be a common
| experience - think of how often you want to comment on a line
| of code or file that just isn't in the PR. It happens almost
| every PR for me. They materialize as loose comments, or comments
| on a line like "Not this line per se, but what about XYZ?" Or
| "you replaced this 3 places but I actually found 2 more it
| should be applied to."
|
| I mean these tools are fine. But let's be on the same page that
| they can only address a sub-class of problems.
| CuriouslyC wrote:
| This is not that expensive with Gemini; they give free keys
| with plenty of requests/day, so you can upload your diff + a
| bundle of the relevant part of the codebase and get this
| behavior for free, at least for a small team with ~10-20 PRs /
| day. Assuming you could run this with personal keys, anyhow.
| fluoridation wrote:
| Might just be me, but I understood "expensive" in terms of
| raw computation necessary to get the answer. Some things
| aren't really worth computing, even if it's someone else
| footing the bill.
| ivanjermakov wrote:
| Premise is amazing. Wonder if there are tools that do something
| similar by looking at diff entropy.
| cerved wrote:
| > Honestly, a simple data-driven heatmap showing which parts of
| the code change most often or correlate with past bugs would
| probably give reviewers more trustworthy signals.
|
| At first I thought this too, but now I doubt that's a good
| heuristic. That's probably where people would be careful and/or
| look anyway. If I were to guess, regressions are less likely to
| occur in "hotspots".
|
| But this is just a hunch. There are tons of well-reviewed and
| bug-reported open source projects; it would be interesting if
| someone tested it.
| mmastrac wrote:
| I tried it on a low-complexity Rust PR I worked on a few months
| back and it did a pretty good job. I'd probably change where the
| highlights live (for example x.y.z() -> x.w.z() should highlight
| y/w in a lot of cases).
|
| For the most part, it seems to draw the eye to the general area
| where you need to look closer. It found a near-invisible typo in
| a coworker's PR which was kind of interesting as well.
|
| https://0github.com/geldata/gel-rust/pull/530
|
| It seems to flag _some_ deletions as needing attention, but I
| feel like a lot of them are ignored.
|
| Is this using some sort of measure of distance between the
| expected token in this position vs the actual token?
|
| EDIT: Oh, I guess it's just an LLM prompt? I would be interested
| to see an approach where the expected token vs actual token
| generates a heatmap.
| lawrencechen wrote:
| Happy to hear!
|
| > Is this using some sort of measure of distance between the
| expected token in this position vs the actual token?
|
| The main implementation is in this file:
| https://github.com/manaflow-ai/cmux/blob/main/apps/www/lib/s...
|
| EDIT: yeah, it's just an LLM prompt haha
|
| Just a simple prompt right now, but I think we could try an
| approach where we directly see which tokens might be
| hallucinated. Gonna try to find the paper for this idea. Might
| be kinda analogous to the "distance between the expected token
| in this position vs the actual token."
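|
| (If anyone wants to experiment with that, a tiny sketch of
| turning per-token logprobs into heat values, assuming some
| model can already score each actual token in context; this is
| not what 0github does today.)
|
|   // Sketch: map per-token log probabilities to 0..1 heat values.
|   // Higher surprisal = token is "unexpected" in its position =
|   // darker highlight.
|   interface ScoredToken {
|     text: string;
|     logprob: number; // log P(token | preceding context), <= 0
|   }
|
|   function surprisalHeat(tokens: ScoredToken[]): number[] {
|     const surprisal = tokens.map((t) => -t.logprob); // in nats
|     const max = Math.max(...surprisal, 1e-9);
|     return surprisal.map((s) => s / max); // normalize to 0..1
|   }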
| rishabhaiover wrote:
| Wondering what would happen if you ran a SAST tool (a fast one)
| and shared the results with Codex alongside the code diff?
| antback wrote:
| Very, very useful. I'll give it a try. Thanks for sharing!
| fao_ wrote:
| How do I opt out of this tool? I do not want anyone reviewing my
| code or projects to use or engage with it and it is explicitly
| against the TOS of those projects. It would be nice if this tool
| screened for a robots.txt or something of the sort so that I
| could ensure that this tool never touches my projects.
| lpapez wrote:
| Don't share your code publicly then?
| smcleod wrote:
| Why does it require signing in and granting you full access to
| act as me on GitHub to use?
|
| cmux-agent requires access to your GitHub account:
|
| - Verify your GitHub identity
| - Know what resources you can access
| - Act on your behalf
| - View your email addresses
|
| I would have logged an issue for this but I see you've disabled
| logging issues on the repo. Seems a bit sus to me.
| lawrencechen wrote:
| Public repos shouldn't require being signed in.
|
| Just tested these example links in incognito and they seemed
| to work?
|
| https://0github.com/manaflow-ai/cmux/pull/666
|
| https://0github.com/stack-auth/stack-auth/pull/988
|
| https://0github.com/tinygrad/tinygrad/pull/12995
|
| https://0github.com/simonw/datasette/pull/2548
|
| > you've disabled logging issues on the repo
|
| Sorry, wasn't aware. Turning it on right now. EDIT:
| https://github.com/manaflow-ai/cmux/issues seems to be fine?
| smcleod wrote:
| It's when you first start the app that it asks you to log in
| with GitHub before you see anything else.
| tiffnami wrote:
| yoooooo this looks awesome!
___________________________________________________________________
(page generated 2025-10-30 23:00 UTC)