GitHub Status
Notification options:
- Email: whenever GitHub creates, updates, or resolves an incident.
- Text message: whenever GitHub creates or resolves an incident.
- Slack: incident updates and maintenance status messages.
- Webhooks: whenever GitHub creates, updates, or resolves an incident, or changes a component status (a receiver sketch follows below).
Visit our support site.
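A minimal sketch of a webhook receiver for the notifications described above, assuming deliveries arrive as a JSON POST carrying an "incident" or "component" object (the usual Statuspage shape); the exact payload fields and the port used here are assumptions, so inspect a real delivery before relying on them.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON body of the delivery.
        length = int(self.headers.get("Content-Length", 0))
        try:
            event = json.loads(self.rfile.read(length) or b"{}")
        except json.JSONDecodeError:
            self.send_response(400)
            self.end_headers()
            return

        # Assumed payload shapes: an "incident" object for incident events,
        # a "component" object for component status changes.
        if "incident" in event:
            incident = event["incident"]
            print(f"incident: {incident.get('name')} -> {incident.get('status')}")
        elif "component" in event:
            component = event["component"]
            print(f"component: {component.get('name')} -> {component.get('status')}")

        # Acknowledge the delivery so it is not retried.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StatusWebhookHandler).serve_forever()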
All Systems Operational
About This Site
Check GitHub Enterprise Cloud status by region:
- Australia: au.githubstatus.com
- EU: eu.githubstatus.com
- Japan: jp.githubstatus.com
- US: us.githubstatus.com
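As a quick way to script the regional check above, the sketch below polls each regional page, assuming each one is a standard Atlassian Statuspage exposing the public /api/v2/status.json endpoint (an assumption worth verifying against the pages themselves).

import json
import urllib.request

# Regional GitHub Enterprise Cloud status pages listed above.
REGIONS = {
    "Australia": "https://au.githubstatus.com",
    "EU": "https://eu.githubstatus.com",
    "Japan": "https://jp.githubstatus.com",
    "US": "https://us.githubstatus.com",
}

def fetch_status(base_url: str) -> str:
    """Return the overall status description, e.g. 'All Systems Operational'."""
    with urllib.request.urlopen(f"{base_url}/api/v2/status.json", timeout=10) as resp:
        payload = json.load(resp)
    return payload["status"]["description"]

if __name__ == "__main__":
    for region, url in REGIONS.items():
        try:
            print(f"{region}: {fetch_status(url)}")
        except Exception as exc:  # network errors, unexpected payloads, etc.
            print(f"{region}: could not fetch status ({exc})")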
Current status by component:
- Git Operations: Operational
- Webhooks: Operational
- Visit www.githubstatus.com for more information: Operational
- API Requests: Operational
- Issues: Operational
- Pull Requests: Operational
- Actions: Operational
- Packages: Operational
- Pages: Operational
- Codespaces: Operational
- Copilot: Operational
Status legend: Operational, Degraded Performance, Partial Outage, Major Outage, Maintenance.
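The component table above can also be read programmatically. A minimal sketch, assuming www.githubstatus.com exposes the standard Statuspage /api/v2/components.json endpoint (component names and status strings come from that payload):

import json
import urllib.request

COMPONENTS_URL = "https://www.githubstatus.com/api/v2/components.json"

def list_components() -> list[tuple[str, str]]:
    """Return (name, status) pairs, e.g. ('Git Operations', 'operational')."""
    with urllib.request.urlopen(COMPONENTS_URL, timeout=10) as resp:
        payload = json.load(resp)
    return [(c["name"], c["status"]) for c in payload.get("components", [])]

if __name__ == "__main__":
    for name, status in list_components():
        # Status values such as 'degraded_performance' read better with spaces.
        print(f"{name}: {status.replace('_', ' ')}")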
Past Incidents
Dec 25, 2025
No incidents reported today.
Dec 24, 2025
No incidents reported.
Dec 23, 2025
Resolved -
This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Dec 23, 10:32 UTC
Update -
Issues and Pull Requests are operating normally.
Dec 23, 10:32 UTC
Update -
We are seeing recovery in search indexing for Issues and Pull Requests. The queue has returned to normal processing times, and we continue to monitor service health. We'll post another update by 11:00 UTC.
Dec 23, 10:29 UTC
Update -
We're experiencing delays in search indexing for Issues and Pull Requests. Search results may show data up to three minutes old due to elevated processing times in our indexing pipeline. We're working to restore normal performance. We'll post another update by 10:30 UTC.
Dec 23, 09:58 UTC
Investigating -
We are investigating reports of degraded performance for Issues and Pull Requests
Dec 23, 09:56 UTC
Resolved -
This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Dec 23, 00:17 UTC
Update -
All services are at healthy levels. We're finalizing the change to prevent future degradations from the same source.
Dec 23, 00:06 UTC
Update -
We're investigating elevated traffic affecting GitHub services, primarily impacting logged-out users with some increased latency on Issues. We're preparing additional mitigations to prevent further spikes.
Dec 22, 23:32 UTC
Update -
We are experiencing elevated traffic affecting some GitHub services, primarily impacting logged-out users. We're actively investigating the full scope and working to restore normal service. We'll post another update by 23:45 UTC.
Dec 22, 22:57 UTC
Update -
Issues is experiencing degraded performance. We are continuing to investigate.
Dec 22, 22:48 UTC
Investigating -
We are investigating reports of impacted performance for some GitHub services.
Dec 22, 22:31 UTC
Dec 22, 2025
Dec 21, 2025
No incidents reported.
Dec 20, 2025
No incidents reported.
Dec 19, 2025
No incidents reported.
Dec 18, 2025
Resolved -
On December 18, 2025, between 16:25 UTC and 19:09 UTC, the service underlying Copilot policies was degraded, and users, organizations, and enterprises were not able to update any policies related to Copilot. No other GitHub services, including other Copilot services, were impacted. This was due to a database migration causing schema drift.
We mitigated the incident by synchronizing the schema. We have hardened the service to make sure schema drift does not cause any further incidents, and will investigate improvements in our deployment pipeline to shorten time to mitigation in the future.
Dec 18, 19:09 UTC
Update -
Copilot is operating normally.
Dec 18, 19:09 UTC
Update -
We have observed full recovery with updating Copilot policy settings, and are validating that there is no further impact.
Dec 18, 19:05 UTC
Update -
Copilot is experiencing degraded performance. We are continuing to investigate.
Dec 18, 18:43 UTC
Update -
We have identified the source of this regression and are preparing a fix for deployment. We will update again in one hour.
Dec 18, 18:10 UTC
Update -
We are seeing an increase in errors when updating policies on the user and organization Copilot policy settings pages.
Dec 18, 17:36 UTC
Investigating -
We are investigating reports of impacted performance for some GitHub services.
Dec 18, 17:36 UTC
Resolved -
On December 18th, 2025, from 08:15 UTC to 17:11 UTC, some GitHub Actions runners experienced intermittent timeouts for GitHub API calls, which led to failures during runner setup and workflow execution. This was caused by network packet loss between runners in the West US region and one of GitHub’s edge sites. Approximately 1.5% of jobs on larger and standard hosted runners in the West US region were impacted, or 0.28% of all Actions jobs during this period.
By 17:11 UTC, all traffic was routed away from the affected edge site, mitigating the timeouts. We are working to improve early detection of cross-cloud connectivity issues and faster mitigation paths to reduce the impact of similar issues in the future.
Dec 18, 17:41 UTC
Update -
We are observing recovery with requests from GitHub-hosted Actions runners and will continue to monitor.
Dec 18, 17:29 UTC
Update -
Since approximately 8:00 UTC, we have observed intermittent failures on GitHub-hosted Actions runners. The failures have been observed both during runner setup and during workflow execution. We are continuing to investigate.
Self-hosted runners are not impacted.
Dec 18, 16:35 UTC
Investigating -
We are investigating reports of degraded performance for Actions
Dec 18, 16:33 UTC
Dec 17, 2025
No incidents reported.
Dec 16, 2025
Resolved -
From 11:50-12:25 UTC, Copilot Coding Agent was unable to process new agent requests. This affected all users creating new jobs during this timeframe, while existing jobs remained unaffected. The cause was a change to the Actions configuration on which Copilot Coding Agent runs, which caused setup of the Actions runner to fail; the issue was resolved by rolling back this change.
In the short term, we intend to tighten our alerting criteria so that we are alerted more quickly when an incident occurs; in the long term, we intend to harden our runner configuration to be more resilient against errors.
Dec 16, 12:00 UTC
Dec 15, 2025
Resolved -
On December 15, 2025, between 15:15 UTC and 18:22 UTC, Copilot Code Review experienced a service degradation that caused 46.97% of pull request review requests to fail, requiring users to re-request a review. Impacted users saw the error message: “Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.” The remaining requests completed successfully.
The degradation was caused by elevated response times in an internal, model-backed dependency, which led to request timeouts and backpressure in the review processing pipeline, resulting in sustained queue growth and failed review completion.
We mitigated the issue by temporarily bypassing fix suggestions to reduce latency, increasing worker capacity to drain the backlog, and rolling out a model configuration change that reduced end-to-end latency. Queue depth and request success rates returned to normal and remained stable through peak traffic.
Following the incident, we increased baseline worker capacity, added instrumentation for worker utilization and queue health, and are improving automatic load-shedding, fallback behavior, and alerting to reduce time to detection and mitigation for similar issues.
Dec 15, 18:22 UTC
Update -
We have seen recovery for Copilot Code Review requests and are investigating long-term availability and scaling strategies
Dec 15, 18:21 UTC
Investigating -
We are investigating reports of impacted performance for some GitHub services.
Dec 15, 17:43 UTC
Resolved -
On Dec 15th, 2025, between 14:00 UTC and 15:45 UTC, the Copilot service was degraded for the Grok Code Fast 1 model. On average, 4% of requests to this model failed due to an issue with our upstream provider. No other models were impacted.
The issue was resolved after the upstream provider fixed the problem that caused the disruption. GitHub will continue to enhance our monitoring and alerting systems to reduce the time it takes to detect and mitigate similar issues in the future.
Dec 15, 15:45 UTC
Update -
We are continuing to work with our provider on resolving the incident with Grok Code Fast 1. Users can expect some requests to intermittently fail until all issues are resolved.
Dec 15, 15:06 UTC
Update -
We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
Other models are available and working as expected.
Dec 15, 14:13 UTC
Investigating -
We are investigating reports of degraded performance for Copilot
Dec 15, 14:12 UTC
Dec 14, 2025
No incidents reported.
Dec 13, 2025
No incidents reported.
Dec 12, 2025
No incidents reported.
Dec 11, 2025
Resolved -
Between 13:25 UTC and 18:35 UTC on Dec 11th, GitHub experienced an increase in scraper activity on public parts of our website. This scraper activity caused a low-priority web request pool to grow and eventually exceed total capacity, resulting in users experiencing 500 errors. In particular, this affected Login, Logout, and Signup routes, along with less than 1% of requests from within Actions jobs. At the peak of the incident, 7.6% of login requests were impacted, which was the most significant impact of this scraping attack.
Our mitigation strategy identified the scraping activity and blocked it. We also increased the capacity of the impacted web request pool, and we moved key user login routes to higher-priority queues.
Going forward, we're working to identify this particular scraper activity more proactively and to shorten mitigation times.
Dec 11, 20:05 UTC
Update -
We see signs of full recovery and will post a more in-depth update soon.
Dec 11, 20:05 UTC
Update -
We are continuing to monitor and continue to see signs of recovery. We will update when we are confident that we are in full recovery.
Dec 11, 19:58 UTC
Update -
We've applied a mitigation to fix intermittent failures in anonymous requests and downloads from GitHub, including Login, Signup, Logout, and some requests from within Actions jobs. We are seeing improvements in telemetry, but we will continue to monitor for full recovery.
Dec 11, 19:04 UTC
Update -
We currently have ~7% of users experiencing errors when attempting to sign up, log in, or log out. We are deploying a change to mitigate these failures.
Dec 11, 18:47 UTC
Investigating -
We are investigating reports of impacted performance for some GitHub services.
Dec 11, 18:40 UTC
Resolved -
Between 13:25 UTC and 18:35 UTC on December 11th, GitHub experienced elevated traffic to portions of GitHub.com that exceeded previously provisioned capacity for specific request types. As a result, users encountered intermittent 500 errors. Impact was most pronounced on Login, Logout, and Signup pages, peaking at 7.6% of login requests. Additionally, fewer than 1% of requests originating from GitHub Actions jobs were affected.
This incident was driven by the same underlying factors as the previously reported disruption to Login and Signup flows.
Our immediate response focused on identifying and mitigating the source of the traffic increase. We increased available capacity for web request handling to relieve pressure on constrained pools. To reduce recurrence risk, we also re-routed critical authentication endpoints to a different traffic pool, ensuring sufficient isolation and headroom for login-related traffic.
Going forward, we're working to identify these large changes in traffic volume more proactively and to improve our time to mitigation.
Dec 11, 17:53 UTC
Update -
Git Operations is operating normally.
Dec 11, 17:20 UTC
Update -
We believe the affected users are primarily those signing up or signing in, along with logged-out usage. We are continuing to investigate the root cause and are pursuing multiple mitigation angles.
Dec 11, 17:19 UTC
Update -
We are experiencing intermittent web request failures across multiple services, including login and authentication. Our teams are actively investigating the cause and working on mitigation.
Dec 11, 16:41 UTC
Update -
Codespaces, Copilot, Git Operations, Packages, Pages, Pull Requests and Webhooks are experiencing degraded performance. We are continuing to investigate.
Dec 11, 16:09 UTC
Update -
API Requests and Actions are experiencing degraded performance. We are continuing to investigate.
Dec 11, 16:01 UTC
Investigating -
We are investigating reports of degraded performance for Issues
Dec 11, 15:47 UTC