Copilot Leaked Information and Misrouted Responses to Other Users
Posted about 2 months ago
Source: docs.cloud.google.com
Key topics: GitHub Copilot, LLM Security, Data Privacy
GitHub Copilot experienced a vulnerability that misrouted some model responses to other users, prompting concerns about running LLMs locally.
Snapshot generated from the HN discussion
Discussion activity: light (1 comment, at the start of the thread)
Key moments
- Story posted: Nov 6, 2025 at 5:22 PM EST (about 2 months ago)
- First comment: Nov 6, 2025 at 5:22 PM EST (0s after posting)
- Latest activity: Nov 6, 2025 at 5:22 PM EST
ID: 45841238 · Type: story · Last synced: 11/17/2025, 7:55:55 AM
=======================================================================
We're writing to inform you that your GitHub Copilot usage between August 10, 2025 and September 23, 2025 was affected by a vulnerability that caused a small percentage of model responses to be misrouted to another user.
Your trust is essential to us, and we want to remain as transparent as possible about events like these. GitHub itself did not experience a compromise as a result of this event.
### What happened
On September 23, 2025, we received multiple reports from GitHub users that some of their prompts received out-of-context responses. We immediately began investigating the reports and learned that certain responses generated by the Sonnet 3.7 and Sonnet 4 models provided by one of our upstream providers, Google Cloud Platform (GCP), could be mismatched between users. This behavior occurred due to a bug in Google's proxy infrastructure that affected how requests were processed.
As a result, between August 10, 2025 and September 23, 2025, certain responses (approximately 0.00092% of GitHub Copilot responses served by GCP for Sonnet models 3.7 and 4 in the affected timeframe, or roughly 9 in every million) intended for one user were misrouted to another user. Google mitigated the issue on September 26, 2025 and disclosed it via a public security bulletin: https://docs.cloud.google.com/support/bulletins#gcp-2025-059.
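Neither GitHub's notice nor the bulletin details the proxy internals, but one classic way such a bug arises is when a proxy pairs streamed responses with waiting requests by arrival order rather than by a unique request ID. A minimal hypothetical sketch of that failure mode (this is not Google's actual code):

```python
# Hypothetical sketch, NOT Google's actual proxy code. Shows how a proxy
# that pairs responses with requesters by FIFO order (instead of a unique
# request ID) misroutes responses when the upstream completes requests
# out of order.
import queue

pending = queue.Queue()  # user IDs awaiting a response, in arrival order

def submit(user_id: str) -> None:
    # Remember who asked. A correct design would key on a request ID.
    pending.put(user_id)

def on_model_response(response: str) -> None:
    # BUG: assumes the upstream answers strictly in arrival order.
    user_id = pending.get()
    print(f"delivering to {user_id}: {response!r}")

# Two users submit concurrently; the upstream finishes B's request first.
submit("user-A")
submit("user-B")
on_model_response("response meant for user-B")  # delivered to user-A
on_model_response("response meant for user-A")  # delivered to user-B
```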
We are writing to inform you that responses to one or more of your prompts were misrouted and sent to another user. At the bottom of this email, you will find an appendix of the prompt information owned by your account that was affected by this issue.
### What information was involved
In affected cases, a user could have received a model response that originated from another user's prompt. There is no indication of targeted or malicious activity, and GitHub systems themselves were not compromised. We've assessed that a malicious actor was not able to trigger or otherwise control the outcome of this vulnerability.
### What GitHub is doing
GitHub learned of the issue on September 23, 2025 at 19:45 UTC and immediately began investigating. Upon confirming the source of the issue, we reported our findings to Google on the same day at 21:00 UTC. By 21:37 UTC, GitHub had completely disabled the GCP endpoints used for Copilot to prevent further misrouting. We worked with Google throughout their investigation, verified there were no additional occurrences, and re-enabled GCP traffic on September 29, 2025 at 10:44 UTC following confirmation of Google's fix.
We then began working to identify which customers could have been affected.
Through the available telemetry, we have identified when the impacted prompt was sent, which client was used, the client request ID, and the user ID associated with the prompt author.
We are unable to identify which user(s) received the misrouted response(s), as we do not log model responses.
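For illustration only, one entry of that appendix might look like the record below; the field names and values are invented, and only the four data points listed above are confirmed by GitHub:

```python
# Hypothetical shape of one appendix entry. Field names and values are
# illustrative; only the four underlying data points are confirmed.
affected_prompt = {
    "sent_at": "2025-09-12T14:03:27Z",  # when the impacted prompt was sent
    "client": "vscode-copilot-chat",    # which client was used (name is made up)
    "client_request_id": "3f9c2b7e-0d41-4a88-9c6e-1b2a3c4d5e6f",  # made-up ID
    "user_id": 12345678,                # user ID of the prompt author
}
print(affected_prompt)
```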
### What you can do
There is no action required on your part. We've identified the affected prompt(s) and included below the client request ID, when the prompt was sent, which client was used, and the user ID associated with the prompt author. This data may assist you in finding the impacted prompt if you or your organization log this information. GitHub does not log user prompts or model responses. GitHub is committed to transparency during events like these and is sharing as much detail as is available to enable you to investigate.
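If your organization does capture Copilot client telemetry, the client request ID is the natural join key for locating the affected prompt. A minimal sketch, assuming JSON-lines logs with a `client_request_id` field (the log path, field name, and IDs below are all assumptions, not part of GitHub's notice):

```python
# Minimal sketch: scan JSON-lines logs for the affected client request IDs.
# Adapt the path and field name to however your organization logs Copilot
# client telemetry; both are assumptions here.
import json

AFFECTED_IDS = {"3f9c2b7e-0d41-4a88-9c6e-1b2a3c4d5e6f"}  # from the appendix

def find_affected(log_path: str) -> list[dict]:
    matches = []
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip lines that are not JSON records
            if record.get("client_request_id") in AFFECTED_IDS:
                matches.append(record)
    return matches

if __name__ == "__main__":
    for hit in find_affected("copilot-client.log"):  # assumed log path
        print(hit)
```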
GitHub Support does not have any additional logging or data about these prompts. However, if you have questions or would like to discuss this further, please contact them using this link.
=======================================================================