[HN Gopher] Launch HN: mrge.io (YC X25) - Cursor for code review
       ___________________________________________________________________
        
       Launch HN: mrge.io (YC X25) - Cursor for code review
        
       Hey HN, we're building mrge (https://www.mrge.io/home), an AI
       code review platform to help teams merge code faster with fewer
       bugs. Our early users include Better Auth, Cal.com, and n8n--
       teams that handle a lot of PRs every day. Here's a demo video:
       https://www.youtube.com/watch?v=pglEoiv0BgY

       We (Allis and Paul) are engineers who faced this problem when
       we worked together at our last startup. Code review quickly
       became our biggest bottleneck--especially as we started using
       AI to code more. We had more PRs to review, subtle AI-written
       bugs slipped through unnoticed, and we (humans) increasingly
       found ourselves rubber-stamping PRs without deeply
       understanding the changes.

       We're building mrge to help solve that. Here's how it works:

       1. Connect your GitHub repo via our GitHub app in two clicks
       (and optionally download our desktop app). GitLab support is on
       the roadmap!

       2. AI Review: When you open a PR, our AI reviews your changes
       directly in an ephemeral and secure container. It has context
       on not just that PR, but your whole codebase, so it can pick up
       patterns and leave comments directly on changed lines. Once the
       review is done, the sandbox is torn down and your code is
       deleted--we don't store it, for obvious reasons.

       3. Human-friendly review workflow: Jump into our web app (it's
       like Linear but for PRs). Changes are grouped logically (not
       alphabetically), with important diffs highlighted, visualized,
       and ready for faster human review.

       The AI reviewer works a bit like Cursor in the sense that it
       navigates your codebase using the same tools a developer
       would--like jumping to definitions or grepping through code.
       But a big challenge was that, unlike Cursor, mrge doesn't run
       in your local IDE or editor. We had to recreate something
       similar entirely in the cloud.

       Whenever you open a PR, mrge clones your repository and checks
       out your branch in a secure and isolated temporary sandbox. We
       provision this sandbox with shell access and a Language Server
       Protocol (LSP) server. The AI reviewer then reviews your code,
       navigating the codebase exactly as a human reviewer would--
       using shell commands and common editor features like "go to
       definition" or "find references". When the review finishes, we
       immediately tear down the sandbox and delete the code--we don't
       want to permanently store it, for obvious reasons.
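
       To make the lifecycle concrete, here's a rough sketch of that
       flow in Python (illustrative only; run_reviewer stands in for
       the actual agent loop, and the real sandbox also wires up an
       LSP server):

         import shutil
         import subprocess
         import tempfile

         def run_reviewer(repo_dir: str) -> list[str]:
             # Stand-in for the agent loop: just grep the checkout the
             # way the real reviewer shells out to explore it.
             out = subprocess.run(["grep", "-rn", "TODO", repo_dir],
                                  capture_output=True, text=True)
             return out.stdout.splitlines()

         def review_pull_request(clone_url: str, branch: str) -> list[str]:
             # Ephemeral sandbox: a throwaway checkout that lives only
             # for the duration of this one review.
             sandbox = tempfile.mkdtemp(prefix="mrge-review-")
             try:
                 subprocess.run(["git", "clone", "--depth=1",
                                 "--branch", branch, clone_url, sandbox],
                                check=True)
                 return run_reviewer(sandbox)
             finally:
                 # Tear down the sandbox and delete the code right away.
                 shutil.rmtree(sandbox, ignore_errors=True)
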
       We know cloud-based review isn't for everyone, especially if
       security or compliance requires local deployments. But a cloud
       approach lets us run SOTA AI models without local GPU setups,
       and provide a consistent, single AI review per PR for an entire
       team.
       The platform itself focuses entirely on making _human_ code
       reviews easier. A big inspiration came from productivity-
       focused apps like Linear or Superhuman, products that show just
       how much thoughtful design can impact everyday workflows. We
       wanted to bring that same feeling into code review.

       That's one reason we built a desktop app. It allowed us to
       deliver a more polished experience, complete with keyboard
       shortcuts and a snappy interface.

       Beyond performance, the main thing we care about is making it
       easier for humans to read and understand code. For example,
       traditional review tools sort changed files alphabetically--
       which forces reviewers to figure out for themselves the order
       in which to review changes. In mrge, files are automatically
       grouped and ordered based on logical connections, letting
       reviewers jump straight in.
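
       As a rough intuition, here's a deliberately naive sketch of
       that grouping (the real ordering uses logical connections
       between files, not just shared directories):

         from collections import defaultdict

         def group_changed_files(paths: list[str]) -> dict[str, list[str]]:
             # Toy heuristic: bucket changed files by top-level
             # directory so related changes are reviewed together
             # rather than in one flat alphabetical list.
             groups: dict[str, list[str]] = defaultdict(list)
             for path in sorted(paths):
                 module = path.split("/", 1)[0] if "/" in path else "(root)"
                 groups[module].append(path)
             return dict(groups)

         # group_changed_files(["api/auth.py", "docs/intro.md",
         #                      "api/routes.py"])
         # -> {"api": ["api/auth.py", "api/routes.py"],
         #     "docs": ["docs/intro.md"]}
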
       We think the future of coding isn't about AI replacing
       humans--it's about giving us better tools to quickly understand
       high-level changes, abstracting away more and more of the code
       itself. As code volume continues to increase, this shift will
       only become more important.

       You can sign up now (https://www.mrge.io/home). mrge is
       currently free while we're still early. Our plan for later is
       to charge closed-source projects on a per-seat basis, and to
       keep mrge free for open-source projects.

       We're very actively building and would love your honest
       feedback!
        
       Author : pomarie
       Score  : 161 points
       Date   : 2025-04-15 13:34 UTC (9 hours ago)
        
       | kerryritter wrote:
       | This looks like a cool solve for this problem. Some of the other
       | tools I tried didn't seem to contextualize the app, so the
       | comments were surface level and trite.
       | 
       | I'm on Bitbucket so will have to wait :)
        
         | pomarie wrote:
         | Thanks, really appreciate that! Yeah, giving the AI the ability
         | to fetch the context it needs was a big challenge (since larger
         | codebases can't all fit in an LLM's context window)
         | 
         | And totally hear you on Bitbucket--it's definitely on our
         | roadmap. Would love to loop back with you once we get closer on
         | that front!
        
       | bryanlarsen wrote:
       | It looks like graphite.dev has pivoted into this space too. Which
       | is annoying, because I'm interested in graphite.dev's core non-AI
       | product. Which appears to be stagnating from my perspective --
       | they still don't have gitlab support after several years.
        
         | pomarie wrote:
         | Yeah, noticed that too--what's the core graphite.dev feature
         | you're interested in? PR stacking, by chance?
         | 
         | If that's it, we actually support stacked PRs (currently in
         | beta, via CLI and native integrations). My co-founder, Allis,
         | used stacked PRs extensively at her previous company and loved
         | it, so we've built it into our workflow too. It's definitely
         | early-stage, but already quite useful.
         | 
         | Docs if you're curious: https://docs.mrge.io/overview
        
           | bryanlarsen wrote:
           | Yes, stacked PRs and a rebase-only flow. Unfortunately we're
           | a GitLab shop. Today's task is a particularly hairy review;
           | it's too bad I can't try you out.
        
             | pomarie wrote:
             | Ah, totally get it--that's frustrating. GitLab support is
             | on our roadmap, so hopefully we can help you out soon.
             | 
             | In the meantime, good luck with that hairy review--hope it
             | goes smoothly! If you're open to it, I'd love to reach out
             | directly once GitLab support is ready.
        
               | bryanlarsen wrote:
               | Email is in profile. You're welcome to add me to your
               | list.
        
       | justanotheratom wrote:
       | This is an awesome direction. A few thoughts:
       | 
       | It would be awesome if the custom rules were generalized on
       | the fly from ongoing reviewer conversations. Imagine two devs
       | quibbling about line length in a PR, and in a future PR the
       | AI reminding everyone of that convention.
       | 
       | Would this work seamlessly with AI Engineers like Devin? I
       | imagine so.
       | 
       | This will be very handy for solo devs as well, even those who
       | don't use Coding CoPilots could benefit from an AI reviewer, if
       | it does not waste their time.
       | 
       | Maybe multiple AI models could review the PR at the same
       | time, and over time we promote the ones whose feedback is
       | accepted more.
        
         | pomarie wrote:
         | These are all amazing ideas. We actually already see a lot of
         | solo devs using mrge precisely because they want something to
         | catch bugs before code goes live--they simply don't have
         | another pair of eyes.
         | 
         | And I absolutely love your idea of having multiple AI models
         | review PRs simultaneously. Benchmarking LLMs can be notoriously
         | tricky, so a "wisdom of the crowds" approach across a large
         | user base could genuinely help identify which models perform
         | best for specific codebases or even languages. We could even
         | imagine certain models emerging as specialists for particular
         | types of issues.
         | 
         | Really appreciate these suggestions!
        
         | allisonee wrote:
         | Appreciate the feedback! We currently auto-suggest custom
         | rules based on your comment history (and .cursorrules);
         | continuing to generalize rules from ongoing review
         | conversations is now on the roadmap thanks to your
         | suggestion!
         | 
         | On working with Devin: Yes, right now we're focused on code
         | review, so whatever AI IDE you use would work. In fact, it
         | might even be better with autonomous tools like Devin since we
         | focus on helping you (as a human) understand the code they've
         | written faster.
         | 
         | Interesting idea on multiple AI models--we were also
         | separately toying with the idea of having different
         | personas (security, code architecture). Will keep this one
         | in mind!
        
           | justanotheratom wrote:
           | personas sounds great!
        
         | 8organicbits wrote:
         | Line length isn't something I'd want reviewed in a PR.
         | Typically I'd set up a linter with relevant limits and defer to
         | that, ideally using pre-commit testing or directly in my IDE.
         | Line length isn't an AI feature, it's largely a solved problem.
        
           | justanotheratom wrote:
           | bad example, sorry.
        
       | mdaniel wrote:
       | I see on your website that you claim the _subprocessors_ are SOC2
       | type 2 certified, but it doesn't appear that you claim anything
       | about _your_ SOC2 status (in progress, certified, not
       | interested). I mention this because I would suspect the breach
       | risk is not that OpenAI gets popped but rather that a place which
       | gathers continuously updated mirrors of source code does. The
       | sandbox idea only protects the projects from one another, not
       | from a malicious actor injecting some bad dep into _your_ supply
       | chain
        
         | pomarie wrote:
         | That's a very good point. We actually just kicked off our own
         | SOC 2 certification process last week--I hadn't updated the
         | website yet, but I'll go ahead and do that now. Thanks for
         | raising this!
         | 
         | Appreciate the feedback around security as well; protecting
         | against supply-chain attacks is definitely top of mind for us
         | as we build this out.
        
           | mdaniel wrote:
           | I know I'm not supposed to mention website issues here, but
           | since you brought it up I wanted to bring to your attention
           | that the "fade in on scroll" isn't doing you any favors for
           | getting the information out of your head and into the heads
           | of your audience. That observation then went to 11 when I
           | scrolled back up and the entire page was solid black, not
           | even showing me the things it had previously swooshed into
           | visibility. It's your site, do what makes you happy, but I
           | just wanted to ensure you were aware of the tradeoff you were
           | making
        
             | pomarie wrote:
             | Hey, thanks again--really appreciate the heads-up! Could
             | you point me to the specific section where you're seeing
             | the fade in on scroll? Also, what browser are you using?
             | 
             | I don't remember adding that feature so it might be a bug
        
       | deveshanand18 wrote:
       | As far as I can see, this doesn't directly integrate with github
       | (we currently use coderabbit on github)? Is it on your timeline?
        
         | allisonee wrote:
         | good question! we currently support a direct integration
         | with GitHub via a GitHub app. we'll make that clearer in
         | the post.
        
       | thefourthchime wrote:
       | One personal niggle: "Code Review For The AI Era". I hate when
       | people say era in relation to AI because it reminds me of
       | Google's tasteless Gemini era thing.
        
         | allisonee wrote:
         | that makes total sense, thanks for the feedback! we debated
         | this for a bit--will keep in mind for the next design pass on
         | the site :)
        
       | _insu6 wrote:
       | I've tried something similar in the past. The concept is
       | cool, but so far the solutions I've seen haven't been very
       | useful in terms of comment quality and ability to catch bugs.
       | 
       | Hope this is the right time, as this would be a huge time-saver
       | for me
        
         | allisonee wrote:
         | We had heard the same from a few early users, but they've
         | commented that our AI is more context-aware/useful. Of
         | course, that's just anecdotal. We'd love to give you a free
         | trial (https://mrge.io/invite?=hn) and get your feedback on
         | quality/bug catching. Feel free to reach out at contact@mrge.io
         | if you have any questions too!
        
       | william_stokes wrote:
       | I was wondering if it has information about previous commits
       | with deleted code? Sometimes we make a change and later
       | realize that the previous code worked better. Would mrge be
       | able to understand that?
        
         | allisonee wrote:
         | that's a good question! today, we don't look at previous
         | commits--but that's something we'll consider for the future
         | roadmap. curious if this happens often to your team? and if
         | so, how you generally gauge "better" (on the prev commits)
        
       | ukuina wrote:
       | How does this work for large monorepos?
       | 
       | If the repo is several GB, will you clone the whole thing for
       | every review?
        
         | allisonee wrote:
         | good q! today, we'd clone the whole thing, but we're
         | actively looking into solutions for that atm (i.e. only
         | cloning the relevant subdirs)
         | 
         | for custom rules, we do handle large monorepos by allowing you
         | to add an allowlist (or exclude list) via glob patterns.
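         | 
         | roughly, the allowlist filtering looks like this (a sketch
         | with made-up globs, not our exact code):
         | 
         |   import fnmatch
         | 
         |   def in_scope(changed_paths,
         |                allow=("services/payments/*",),
         |                exclude=("*/vendor/*",)):
         |       # Keep a changed file if it matches any allowlist
         |       # glob and no exclude glob. (fnmatch's "*" also
         |       # crosses "/" separators.)
         |       return [
         |           p for p in changed_paths
         |           if any(fnmatch.fnmatch(p, g) for g in allow)
         |           and not any(fnmatch.fnmatch(p, g) for g in exclude)
         |       ]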
        
       | timfsu wrote:
       | Happy mrge user here - congrats on the launch! It's encouraged
       | our team to do more stacked PRs and made every review a bit nicer
        
         | pomarie wrote:
         | Really appreciate the feedback, really happy it's helping you
         | :)
        
         | allisonee wrote:
         | thanks Tim! So glad it's been helping your team move faster
        
       | _jayhack_ wrote:
       | If you are looking for an alternative that can also chat
       | with you in Slack, create PRs, edit/create/search tickets in
       | Linear, search the web and more, check out codegen.com
        
       | yoavz wrote:
       | Excellent product, congrats on the launch guys!
        
       | victorbjorklund wrote:
       | Would be great to have support for GitLab also (have a project
       | there that I would love to try this on and I can't switch it to
       | GitHub)
        
         | allisonee wrote:
         | On the roadmap! If you're happy to share your email for an
         | early link when we do support it, send to contact@mrge.io
        
           | victorbjorklund wrote:
           | Great! Will test it on GitHub first.
        
       | mushufasa wrote:
       | Honest initial reaction to your pitch:
       | 
       | > Cursor for code review
       | 
       | Isn't Cursor already the "cursor for code review"?
        
         | allisonee wrote:
         | appreciate the honest reaction! We'll think about this
         | more; what we were trying to get at is that Cursor is more
         | about code writing, and we're tackling the
         | review/collaboration side :) curious if anything else would
         | have immediately stuck out to you more?
        
           | mushufasa wrote:
           | I think I got the pitch meaning immediately: this is a
           | specialized ai tool for code review.
           | 
           | That said, that doesn't sound like something very useful when
            | I already use an AI code editor for code review. And
            | GitHub already supports CI/CD automations for AI code
            | review tools. Maybe I just don't see value in an extra
            | tool for this.
        
       | JofArnold wrote:
       | Congrats on the launch. Another happy user here. (Caught a really
       | sneaky issue too!)
        
         | pomarie wrote:
         | Thanks for sharing that Jof! Glad it's helpful :)
        
       | auscompgeek wrote:
       | I wanted to check this out, so I installed the GitHub app on my
       | account, with access to all my personal repos. However when I
       | went looking for one of my repos (auscompgeek/sphinxify) I
       | couldn't find it. It looks like I can only see the first 100
       | repos in the dashboard? I have a lot of forks under my account...
        
         | allisonee wrote:
         | sorry about that! we're looking into this now--if you go
         | back to
         | https://github.com/apps/mrge-io-dev/installations/select_tar...
         | and just add the repos you want to use us with under the
         | "select repositories" section, that should unblock you
         | until we fix it in the next hour or so.
        
           | allisonee wrote:
           | just to follow up--the fix for this is landing! thanks for
           | surfacing
        
         | pomarie wrote:
         | Quick update - we've merged a fix which should be live in ~15
         | mins! Thanks for reporting this :)
        
       | tomasen9987 wrote:
       | This looks interesting!
        
       | Arindam1729 wrote:
       | I've used CodeRabbit for Code Review. It does pretty cool work.
       | 
       | How different is it from that?
        
         | pomarie wrote:
         | Great question!
         | 
         | We've heard from users who've tried both that our AI
         | reviewer tends to catch more meaningful issues with less
         | noise, but that's really something you should try for
         | yourself and find out! (The great thing is that it's really
         | easy to start using.)
         | 
         | Beyond the AI agent itself (which is somewhat similar to
         | CodeRabbit), our biggest differentiation comes from the human
         | review experience we've built. Our goal was to create a Linear-
         | like review workflow designed to help human reviewers
         | understand and merge code faster.
        
       | mw3155 wrote:
       | in the demo video i see that you can apply a recommended code
       | change with one click. how do you make sure that the code still
       | works after the AI changes?
       | 
       | also, i tried some other ai review tools before. one big issue
       | was always that they are too nice and even miss obvious bad
       | changes. did you encounter these problems? did you mitigate this
       | via prompting techniques or finetuning?
        
         | pomarie wrote:
         | Great questions!
         | 
         | For applying code changes with one-click: we keep suggestions
         | deliberately conservative (usually obvious one-line fixes like
         | typos) precisely to minimize risks of breaking things. Of
         | course, you should confirm suggestions first.
         | 
         | Regarding AI reviewers being "too nice" and missing obvious
         | mistakes--yes, that's a common issue and not easy to solve!
         | We've approached it partly via prompt-tuning, and partly by
         | equipping the AI with additional tools to better spot genuine
         | mistakes without nitpicking unnecessarily. Lastly, we've added
         | functionality allowing human reviewers to give immediate
         | feedback directly to the AI--so it can continuously learn to
         | pay attention to what's important to your team.
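         | 
         | (For the curious: one way one-click fixes like these can
         | work is GitHub's native "suggested changes" blocks; a
         | review comment whose body contains a ```suggestion block
         | gets a one-click "Commit suggestion" button. A minimal
         | sketch of posting one via the REST API, with token, repo,
         | and commit values as placeholders:)
         | 
         |   import requests
         | 
         |   def post_suggestion(token, owner, repo, pr_number,
         |                       commit_sha, path, line, fixed_line):
         |       # The ```suggestion block in the comment body is what
         |       # GitHub renders as an applyable one-click fix.
         |       body = f"Typo fix:\n```suggestion\n{fixed_line}\n```"
         |       resp = requests.post(
         |           f"https://api.github.com/repos/{owner}/{repo}"
         |           f"/pulls/{pr_number}/comments",
         |           headers={"Authorization": f"Bearer {token}",
         |                    "Accept": "application/vnd.github+json"},
         |           json={"body": body, "commit_id": commit_sha,
         |                 "path": path, "line": line, "side": "RIGHT"},
         |           timeout=30,
         |       )
         |       resp.raise_for_status()
         |       return resp.json()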
        
           | mw3155 wrote:
            | thanks for answering! will definitely check out the tool
            | when I have the chance. best of luck building this!
        
       | bilekas wrote:
       | > We know cloud-based review isn't for everyone, especially if
       | security or compliance requires local deployments. But a cloud
       | approach lets us run SOTA AI models without local GPU setups, and
       | provide a consistent, single AI review per PR for an entire team.
       | 
        | I feel like that's being brushed over a bit too briefly. Is
        | your target market not primarily larger teams, who are most
        | likely to have some security and privacy concerns?
        | 
        | I guess, is there something on the roadmap to maybe offer
        | something later?
        
         | pomarie wrote:
         | Definitely--larger teams do typically have more stringent
         | security and privacy requirements, especially if they're
         | already using self-hosted GitHub. Self-hosted or hybrid
         | deployment is definitely on our radar, and as we grow, it's
         | likely we'll offer a self-hosted version specifically to
         | support those larger teams.
         | 
         | If that's something your team might need, I'd love to chat more
         | and keep you posted as we explore this!
        
       | KyleForness wrote:
       | happy user here--our team moved from coderabbit to mrge, and
       | everyone seems to love how much more useful the AI comments are
        
         | pomarie wrote:
         | Really happy to hear mrge is useful! :) Thanks for sharing
        
         | allisonee wrote:
         | thanks for the feedback! Glad that our ai reviewer has been
         | useful to your team!
        
       | landkittipak wrote:
       | This looks incredible!
        
       | nikolayasdf123 wrote:
       | why not GitHub Copilot?
        
         | pomarie wrote:
         | Great question!
         | 
          | We've heard from users who've tried both that our AI
          | reviewer tends to catch more meaningful issues with less
          | noise, but that's really something you should try for
          | yourself and find out! (The great thing is that it's really
          | easy to start using.)
         | 
         | Beyond the AI agent itself (which is somewhat similar to
         | Copilot), our biggest differentiation comes from the human
         | review experience we've built. Our goal was to create a Linear-
         | like review workflow designed to help human reviewers
         | understand and merge code faster.
        
       | jFriedensreich wrote:
        | Great that AI is seemingly reviving the stalled PR/review
        | space. I just hope that human and local workflows will not be
        | an afterthought or even made harder by these tools. It's also
        | a great chance for stacked PRs and jujutsu to shake up the
        | market.
        
         | pomarie wrote:
         | Definitely! As AIs write a lot more code, I think that the
         | PR/review space is going to become way more important.
         | 
          | If you're interested in stacked PRs, you should definitely
          | check them out on mrge; we natively support them (in beta
          | atm): https://docs.mrge.io/ai-review/overview
        
           | jFriedensreich wrote:
           | The beta setting of stacked PRs seems to have no effect for
           | me. Reading the mention of a cli in the docs for PR stacks
            | gives me shivers. Please don't say you are implementing
            | it like graphite, which is the absolute worst way to do
            | it and makes graphite useless for every sapling and
            | jujutsu user that would need it most. You can also reach
            | me at mrge@ntr.io; would be happy to chat!
        
       | dyeje wrote:
       | I've been evaluating AI code review vendors for my org. We've
       | trialed a couple so far. For me, taking the workflow out of
       | GitHub is a deal breaker. I'm trying to speed things along, not
       | upend my whole team's workflow. What's your take on that?
        
         | pomarie wrote:
         | Yeah, that's a totally legit point!
         | 
         | The good news with mrge is that it works just like any other AI
         | code reviewer out there (CodeRabbit, Copilot for PRs, etc.).
         | All AI-generated review comments sync directly back to GitHub,
         | and interacting with the platform itself is entirely optional.
         | In fact, several people in this thread mentioned they switched
         | from Copilot or CodeRabbit because they found mrge's reviews
         | more accurate.
         | 
         | If you prefer, you never need to leave GitHub at all.
        
         | berrazuriz wrote:
         | maybe blar.io works. Worth a try
        
       | alexchantavy wrote:
        | Been using this for
        | https://github.com/cartography-cncf/cartography and am very
        | happy, thanks for building this.
       | 
       | Automated review tools like this are especially important for an
       | open source project because you have to maintain a quality bar to
       | keep yourself sane but if you're too picky then no one from the
       | community will want to contribute. AI tools are like linters and
       | have no feelings, so they will give the feedback that you as a
       | reviewer may have been hesitant to give, and that's awesome.
       | 
       | Oh, and on the product itself, I think it's super cool that it
       | comes up with rules on its own to check for based on conventions
       | and patterns that you've enforced over time. E.g. we use it to
       | make sure that all function calls that pull from an upstream API
       | are decorated with our standard error handler.
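        | 
        | (To make that concrete: a toy version of that kind of rule,
        | with a hypothetical decorator and client name, could be a
        | small AST pass like this:)
        | 
        |   import ast
        | 
        |   DECORATOR = "standard_error_handler"  # hypothetical name
        | 
        |   def decorator_names(fn: ast.FunctionDef) -> set[str]:
        |       names = set()
        |       for d in fn.decorator_list:
        |           target = d.func if isinstance(d, ast.Call) else d
        |           if isinstance(target, ast.Name):
        |               names.add(target.id)
        |       return names
        | 
        |   def undecorated_upstream_calls(source: str) -> list[str]:
        |       # Flag functions that reference the (hypothetical)
        |       # upstream_api client without the standard decorator.
        |       flagged = []
        |       for node in ast.walk(ast.parse(source)):
        |           if not isinstance(node, ast.FunctionDef):
        |               continue
        |           calls_upstream = any(
        |               isinstance(n, ast.Name) and n.id == "upstream_api"
        |               for n in ast.walk(node)
        |           )
        |           if calls_upstream and DECORATOR not in decorator_names(node):
        |               flagged.append(node.name)
        |       return flagged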
        
         | pomarie wrote:
         | Thanks for sharing that Alex! Definitely love having an AI be
         | the strict reviewer so that the human doesn't have to
        
       | eqvinox wrote:
       | Threw a random PR at it... of the 11 issues it flagged, only 1
       | was appropriate, and that one was also caught by pylint :(
       | 
       | (mixture of 400 lines of C and 100 lines of Python)
       | 
       | It also didn't flag the one SNAFU that really broke things (which
       | to be fair wasn't caught by human review either, it showed in an
       | ASAN fault in tests)
        
         | allisonee wrote:
         | sorry to hear that it didn't catch all the issues! if you
         | downvote/upvote or reply directly to the bot comment @mrge-io
         | <feedback>, we can improve it for your team.
         | 
          | We take all of these into consideration when improving our
          | AI, and your direct reply will fine-tune comments for your
          | repository only.
        
           | eqvinox wrote:
            | That's good to know, but--assuming my sample of size 1
            | isn't a bad outlier, I should really try a few
            | more--there's
           | another problem: I don't think we'd be willing to sink time
           | into tuning a currently-free subscription service that can be
           | yanked at any time. And I'm in a position to say it is highly
           | unlikely that we'd pay for the service.
           | 
           | (We already have problems with our human review being too
           | superficial; we've recently come to a consensus that we're
           | letting too much technical debt slip in, in the sense of
           | unnoticed design problems.)
           | 
           | Now the funny part is that I'm talking about a FOSS project
           | with nVidia involvement ;D
           | 
           | But also: this being a FOSS project, people have opened AI-
           | generated PRs. _Poor_ AI-generated PRs. This is indirectly
           | hurting the prospects of your product (by reputation). Might
           | I suggest adding an AI generated PR detector, if possible?
            | (It's not in our guidelines yet but I expect we'll be
            | prohibiting AI-generated contributions soon.)
        
             | allisonee wrote:
              | totally get where you're coming from--many big open
              | source repos have also been using it for a while and
              | have seen some false positives, but have generally felt
              | that the overall quality was worth it. would love to
              | have you continue trying it out, but also understand
              | that maintaining a FOSS project is a ton of work!
             | 
             | if you have specific feedback on the pr--feel free to email
             | at contact@mrge.io and i'll take a look personally and see
             | if we can adjust anything for your repo.
             | 
              | nice idea on the fully AI-generated PRs! something on
              | our roadmap is to better highlight PRs or chunks that
              | were likely auto-generated. stay tuned!
        
       | manmal wrote:
       | Is that the four letter domain PG recently tweeted about?
       | Congrats!
        
         | pomarie wrote:
         | It's possible! What was the tweet?
        
       | gslepak wrote:
       | Looked at it, but as a security person, I have to recommend
       | against it as it requires permissions to act on behalf of
       | repository maintainers. That is asking for trouble, and
       | represents a backdoor into every project that signs up for it.
        
         | allisonee wrote:
          | thanks for bringing this up, and totally understand the
          | concern. we are committed to security, and we never write
          | to or access your code without your action--the only reason
          | that permission is necessary is so that you can merge or
          | one-click commit the AI's suggestions directly from the
          | comments it has posted.
        
       | mmmeff wrote:
        | Any plans to support GitHub Enterprise on different URLs?
        | Would love to give this a try with my team.
        
       | axelb78 wrote:
       | Looks awesome!
        
       | ggarnhart wrote:
        | Heyo, your launch video is unlisted on YouTube. Maybe
        | intentional, but you might benefit from having it be public
        | :)
        
       | LinearEntropy wrote:
       | The call to action button says "Get Started for Free", while the
       | pricing page lists $20/month.
       | 
       | Clicking the get started button immediately wants me to sign up
       | with github.
       | 
       | Could you explain on the pricing page (or just to me) what the
       | 'free' is? I'm assuming a trial of 1 month or 1 PR?
       | 
        | I'm somewhat hesitant to add any AI tooling to my workflows;
        | however, this is one of the use cases that makes sense to me.
        | I'm definitely interested in trying it out; I just think it's
        | odd that this isn't explained anywhere I could find.
        
         | allisonee wrote:
          | thanks for bringing this up! we're currently free
          | (unlimited PRs) and will soon bill $20-$30 per active user
          | (i.e. one who has committed a PR) per month.
         | 
         | We'll try to make this clearer!
        
       | thuanao wrote:
       | It's been useful at our company. My only gripe is I'd like to run
       | it locally. I don't want the feedback _after_ I open a PR.
        
       | dimal wrote:
       | Looks interesting. I'm a bit confused about how it knows the
       | codebase and the custom rules interface. I generally have coding
       | standards docs in the repo. Can it simply be made aware of those
       | docs instead of requiring me to maintain two sets of instructions
       | (one written one for humans, and one in the mrge interface for
       | AI)? I could imagine that without being highly aware of a team's
       | standards, the usefulness of its review would be pretty poor.
       | Getting general "best practices" type stuff wouldn't be helpful.
        
       | frabona wrote:
       | This is super well done - love the approach with cloud-based LSP
       | and the focus on making reviews actually faster for humans.
        
       | Nijikokun wrote:
        | the biggest issue i've had with things like this is that AI
        | doesn't understand context very well. anything beyond a
        | context window creates hallucinations: it starts making up
        | things that may exist in one location and tries to apply them
        | to a completely unrelated scenario. would be curious if this
        | does understand the connected pieces appropriately and
        | catches things that break those connections--otherwise it's
        | just another linter?
        
       ___________________________________________________________________
       (page generated 2025-04-15 23:00 UTC)