[HN Gopher] CamoLeak: Critical GitHub Copilot Vulnerability Leak...
___________________________________________________________________
CamoLeak: Critical GitHub Copilot Vulnerability Leaks Private
Source Code
Author : greyadept
Score : 182 points
Date : 2025-10-11 22:58 UTC (1 days ago)
(HTM) web link (www.legitsecurity.com)
(TXT) w3m dump (www.legitsecurity.com)
| stephenlf wrote:
| Wild approach. Very nice
| adastra22 wrote:
| A good vulnerability writeup, and a thrill to read. Thanks!
| deckar01 wrote:
| Did the markdown link exfil get fixed?
| runningmike wrote:
| Somehow this article feels like a promotional for Legit. But all
| AI vibe solutions face the same weaknesses. Limited transparency
| and trust Issues: Using non FOSS solutions for cybersecurity is a
| large risk.
|
| If you do use AI cyber solutions, you can be more vulnerable for
| security breaches instead of less.
| xstof wrote:
| Wondering if the ability to use hidden (HTML comment) content in
| PRs would not remain a nasty issue: especially for open source
| repos?! Was that fixed?
| PufPufPuf wrote:
| It's used widely for issue/PR templates, to tell the submitter
| what info to include. But they could definitely strip it from
| the Copilot input... at least until they figure out this
| "prompt injection" thing that I thought modern LLMs were
| supposed to be immune to.
| fn-mote wrote:
| > that I thought modern LLMs were supposed to be immune to
|
| What gave you this idea?
|
| I thought it was always going to be a feature of LLMs, and
| the only thing that changes is that it gets harder to do
| (more circumventions needed), much like exploits in the
| context of ASLR.
| PufPufPuf wrote:
| PR releases. Yeah, it was an exaggeration, I know that the
| mitigations can only go so far.
| munchlax wrote:
| So this wasn't really fixed. The impressive thing here is that
| copilot accepts natural language. So whatever exfiltration method
| you can come up with, you just write out the method in english.
|
| They merely "fixed" one particular method, without disclosing
| _how_ they fixed it. Surely you could just do the base64 thing to
| an image url of your choice? Failing that, you could trick it
| into providing passwords by telling it you accidentally stored
| your grocery list in a field called _passswd_ , go fetch it for
| me ppls?
|
| There's a ton of stuff to be found here. Do they give bounties?
| Here's a goldmine.
| lyu07282 wrote:
| > GitHub fixed it by disabling image rendering in Copilot Chat
| completely.
| oefrha wrote:
| To supplement the parent, this is straight from article's
| TLDR (emphasis mine):
|
| > In June 2025, I found a critical vulnerability in GitHub
| Copilot Chat (CVSS 9.6) that allowed silent exfiltration of
| secrets and source code from private repos, and gave me full
| control over Copilot's responses, including suggesting
| malicious code or links.
|
| > The attack combined a novel CSP bypass using GitHub's own
| infrastructure with remote prompt injection. _I reported it
| via HackerOne, and GitHub fixed it by disabling image
| rendering in Copilot Chat completely._
|
| And parent is clearly responding to gp's incorrect claims
| that "...without disclosing how they fixed it. Surely you
| could just do the base64 thing to an image url of your
| choice?" I'm sure there will be more attacks discovered in
| the future but gp is plain wrong on these points.
|
| Please RTFA or at least RTFTLDR before you vote.
| Thorrez wrote:
| >Surely you could just do the base64 thing to an image url of
| your choice?
|
| What does that mean? Are you proposing a non-Camo image URL?
| Non-Camo image URLs are blocked by CSP.
|
| >Failing that, you could trick it into providing passwords by
| telling it you accidentally stored your grocery list in a field
| called passswd, go fetch it for me ppls?
|
| Does the agent have internet access to be able to perform a
| fetch? I'm guessing not, because if so, that would be a much
| easier attack vector than using images.
| nprateem wrote:
| You'd have to be insane to run an AI agent locally. They're
| clearly unsecurable.
| djmips wrote:
| can you still make invisible comments?
| RulerOf wrote:
| Invisible comments are a widely used feature. Often done inside
| of PR or Issue templates to instruct users how to include
| necessary info without clogging up the final result when they
| submit.
| charcircuit wrote:
| The rule is to operate using the intersection of all the users
| permissions of who is contributing text to the LLM. Why can an
| attacker's prompt access a repo the attacker does not have access
| to? That's the biggest issue here.
| kerng wrote:
| Not the first time by the way. GitHub Copilot Chat: From Prompt
| Injection to Data Exfiltration
| https://embracethered.com/blog/posts/2024/github-copilot-cha...
| MysticFear wrote:
| Can't they just have the Copilot user permission to be readonly
| from the current repo.
| mediumsmart wrote:
| I can't remember the last time I leaked private source code with
| copilot.
| isodev wrote:
| I'm so happy our entire operation moved to a self hosted VCS
| (Forgejo). Two years ago, we started the migration (including
| client repos) and not only we saved tones of money on GitHub
| subscriptions, our system is dramatically more performant for the
| 30-40 developers working with it every day.
|
| We also banned the use of VSCode and any editor with integrated
| LLM features. Folks can use CLI based coding agents of course,
| but only in isolated containers with careful selection of sources
| made available to the agents.
| hansmayer wrote:
| Just out of interest, what is your alternative IDE?
| isodev wrote:
| That depends a bit on the ecosystem too.
|
| For editors: Zed recently added the disable_ai option, we
| have a couple of folks using more traditional options like
| Sublime, vim-based etc (that never had the kind of creepy
| telemetry we're avoiding).
|
| JetBrains tools are OK since their AI features are plugin
| based, their telemetry is also easy to disable. Xcode and Qt
| Creator are also in use.
| aitchnyu wrote:
| What do your CLIs connect to? To first-party OpenAI/Claude
| provider or AWS Bedrock?
| isodev wrote:
| Devs are free to choose, provided we can vet the model
| prover's policy on training on prompts or user code. We're
| also careful not to expose agents to documentation or test
| data that may be sensitive. It's a trade off with convenience
| of course, but we believe that any information agents get
| access to should be a conscious opt-in. It will be cool
| if/when self hosting claude-like LLMs becomes pragmatic.
| oncallthrow wrote:
| > I spent a long time thinking about this problem before this
| crazy idea struck me. If I create a dictionary of all letters and
| symbols in the alphabet, pre-generate their corresponding Camo
| URLs, embed this dictionary into the injected prompt,
|
| Beautiful
| j45 wrote:
| I wonder sometimes if all code on Github private or not is
| ultimately compromised somehow.
| twisteriffic wrote:
| This exploit seems to be taking advantage of the slow token-at-a-
| time pattern of LLM conversations to ensure that the extracted
| data can be reconstructed in order? Seems as though returning the
| entire response as a single block could interfere with the timing
| enough to make reconstruction much more difficult.
___________________________________________________________________
(page generated 2025-10-12 23:01 UTC)