HTTP/2 200
accept-ranges: bytes
vary: Accept-Encoding
content-encoding: gzip
content-type: application/json
access-control-allow-origin: *
content-security-policy-report-only: require-trusted-types-for 'script'; report-uri https://csp.withgoogle.com/csp/cloud-status
cross-origin-resource-policy: cross-origin
cross-origin-opener-policy: same-origin; report-to="cloud-status"
report-to: {"group":"cloud-status","max_age":2592000,"endpoints":[{"url":"https://csp.withgoogle.com/csp/report-to/cloud-status"}]}
content-length: 35153
date: Fri, 26 Dec 2025 19:48:51 GMT
pragma: no-cache
expires: Fri, 01 Jan 1990 00:00:00 GMT
cache-control: no-cache, must-revalidate
last-modified: Fri, 26 Dec 2025 19:46:51 GMT
x-content-type-options: nosniff
server: sffe
x-xss-protection: 0
alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
[{"id":"8cY8jdUpEGGbsSMSQk7J","number":"15787347096705530732","begin":"2025-07-18T14:42:00+00:00","created":"2025-07-18T15:54:23+00:00","end":"2025-07-18T16:47:00+00:00","modified":"2025-07-23T09:26:58+00:00","external_desc":"We are investigating elevated error rates with multiple products in us-east1","updates":[{"created":"2025-07-22T13:42:49+00:00","modified":"2025-07-23T09:26:58+00:00","when":"2025-07-22T13:42:49+00:00","text":"## \\# Incident Report\n## \\#\\# Summary\nOn Friday, 18 July 2025 07:50 US/Pacific, several Google Cloud Platform (GCP) and Google Workspace (GWS) products experienced elevated latencies and error rates in the us-east1 region for a duration of up to 1 hour and 57 minutes.\n**GCP Impact Duration:** 18 July 2025 07:50 \\- 09:47 US/Pacific : 1 hour 57 minutes\n**GWS Impact Duration:** 18 July 2025 07:50 \\- 08:40 US/Pacific : 50 minutes\nWe sincerely apologize for this incident, which does not reflect the level of quality and reliability we strive to offer. We are taking immediate steps to improve the platform’s performance and availability.\n##\n## \\#\\# Root Cause\nThe service interruption was triggered by a procedural error during a planned hardware replacement in our datacenter. An incorrect physical disconnection was made to the active network switch serving our control plane, rather than the redundant unit scheduled for removal. The redundant unit had been properly de-configured as part of the procedure, and the combination of these two events led to partitioning of the network control plane. Our network is designed to withstand this type of control plane failure by failing open, continuing operation.\nHowever, an operational topology change while the network control plane was in a failed open state caused our network fabric's topology information to become stale. This led to packet loss and service disruption until services were moved away from the fabric and control plane connectivity was restored.\n## \\#\\# Remediation and Prevention\nGoogle engineers were alerted to the outage by our monitoring system on 18 July 2025 07:06 US/Pacific and immediately started an investigation. The following timeline details the remediation and restoration efforts:\n* **07:39 US/Pacific**: The underlying root cause (device disconnect) was identified and onsite technicians were engaged to reconnect the control plane device and restore control plane connectivity. At that moment, network failure open mechanisms worked as expected and no impact was observed.\n* **07:50 US/Pacific**: A topology change led to traffic being routed suboptimally, due to the network being in a fail open state. This caused congestion on the subset of links, packet loss, and latency to customer traffic. 
Engineers made a decision to move traffic away from the affected fabric, which mitigated the impact for the majority of the services.\n* **08:40 US/Pacific**: Engineers mitigated Workspace impact by shifting traffic away from the affected region.\n* **09:47 US/Pacific**: Onsite technicians reconnected the device, control plane connectivity was fully restored and all services were back to stable state.\nGoogle is committed to preventing a repeat of the issue in the future, and is completing the following actions:\n* Pause non-critical workflows until safety controls are implemented (complete).\n* Strengthen safety controls for hardware upgrade workflows by end of Q3 2025\\.\n* Design and implement a mechanism to prevent control plane partitioning in case of dual failure of upstream routers by end of Q4 2025\\.\n## \\#\\# Detailed Description of Impact\n\\#\\#\\# GCP Impact:\nMultiple products in us-east1 were affected by the loss of network connectivity, with the most significant impacts seen in us-east1-b. Other regions were not affected.\nThe outage caused a range of issues for customers with zonal resources in the region, including packet loss across VPC networks, increased error rates and latency, service unavailable (503) errors, and slow or stuck operations up to loss of networking connectivity. While regional products were briefly impacted, they recovered quickly by failing over to unaffected zones.\nA small number (0.1%) of Persistent Disks in us-east1-b were unavailable for the duration of the outage: these disks became available once the outage was mitigated, with no customer data loss.\n\\#\\#\\# GWS Impact:\nA small subset of Workspace users, primarily around the Southeast US, experienced varying degrees of unavailability and increased delays across multiple products, including Gmail, Google Meet, Google Drive, Google Chat, Google Calendar, Google Groups, Google Doc/Editors, and Google Voice.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-07-18T22:08:16+00:00","modified":"2025-07-22T13:42:49+00:00","when":"2025-07-18T22:08:16+00:00","text":"# Mini Incident Report\nWe apologize for the inconvenience this service disruption/outage may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. If you have experienced impact outside of what is listed below, please reach out to Google Cloud Support using https://cloud.google.com/support or to Google Workspace Support using help article https://support.google.com/a/answer/1047213.\n(All Times US/Pacific)\n**GCP Impact start and end time:** 18 July 2025 08:10 - 09:47\n**Duration:** 1 hour 37 minutes\n**GWS Impact start and end time:** 18 July 2025 08:10 - 08:40\n**Duration:** 30 minutes\n**Regions/Zones:** us-east1\n**Description:**\nOn Friday, 18 July 2025 08:10 US/Pacific multiple GCP and GWS products experienced elevated latencies and error rates in the us-east1 region for a duration of up to 1 hour and 37 minutes.\nBased on the preliminary analysis, the root cause of the issue is a procedural error during a planned hardware maintenance in one of our data centers in the us-east1 region. 
Our engineering team mitigated the issue by draining traffic away from the clusters and then restoring the affected hardware.\nGoogle will be completing a full incident report in the following days that will provide a full root cause and preventive actions.\n**Customer Impact:**\nThe affected GCP and GWS products experienced elevated latencies and errors rates in the us-east1 region.\n**Affected Products:**\n**GCP :**\nAlloyDB for PostgreSQL, Apigee, Artifact Registry, Cloud Armor, Cloud Billing, Cloud Build, Cloud External Key Manager, Cloud Filestore, Cloud HSM, Cloud Key Management Service, Cloud Load Balancing, Cloud Monitoring, Cloud Run, Cloud Spanner, Cloud Storage for Firebase, Cloud Workflows, Database Migration Service, Dialogflow CX, Dialogflow ES, Google BigQuery, Google Cloud Dataflow, Google Cloud Dataproc, Google Cloud Storage, Google Cloud Support, Google Cloud Tasks, Google Compute Engine, Hybrid Connectivity, Media CDN, Network Telemetry, Private Service Connect, Secret Manager, Service Directory, Vertex AI Online Prediction, Virtual Private Cloud (VPC)\n**Workspace :**\nGmail, Google Meet, Google Drive, Google Chat, Google Calendar, Google Groups, Google Doc/Editors, Google Voice\n**Google SecOps:**\nGoogle SecOps SOAR \u0026 Google SecOps","status":"AVAILABLE","affected_locations":[]},{"created":"2025-07-18T18:03:11+00:00","modified":"2025-07-18T22:08:16+00:00","when":"2025-07-18T18:03:11+00:00","text":"The issue has been resolved for all affected products as of 2025-07-18 09:47 US/Pacific.\nFrom preliminary analysis, during a routine maintenance of our network in us-east1-b, we experienced elevated packet loss, causing service disruption in the zone.\nWe will publish a full Incident Report with root cause once we have completed our internal investigations.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-07-18T17:32:00+00:00","modified":"2025-07-18T18:03:11+00:00","when":"2025-07-18T17:32:00+00:00","text":"Our engineers have successfully recovered the network control plane in the affected us-east1 zones.\nWe're seeing multiple services reporting full recovery, and product engineers continue to validate the remaining services.\nWe'll provide another update with more details by 11:00 AM US/Pacific, July 18, 2025.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"South Carolina (us-east1)","id":"us-east1"}]},{"created":"2025-07-18T16:58:34+00:00","modified":"2025-07-18T17:32:00+00:00","when":"2025-07-18T16:58:34+00:00","text":"Our engineers have successfully recovered the network control plane in the affected us-east1 zones. We're seeing multiple services reporting full recovery, and product engineers are now validating the remaining services.\nWe'll provide another update with more details by 10:30 AM US/Pacific, July 18, 2025.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"South Carolina (us-east1)","id":"us-east1"}]},{"created":"2025-07-18T16:29:02+00:00","modified":"2025-07-18T16:58:34+00:00","when":"2025-07-18T16:29:02+00:00","text":"Our engineers have confirmed that us-east1-b is partially affected. 
All other zones in us-east1 are currently operating normally.\nOur engineers have recovered the failed hardware and are currently recovering the network control plane in the affected zones.\nWe'll provide another update by 10:00 AM US/Pacific, July 18, 2025.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"South Carolina (us-east1)","id":"us-east1"}]},{"created":"2025-07-18T15:54:23+00:00","modified":"2025-07-18T16:29:02+00:00","when":"2025-07-18T15:54:23+00:00","text":"We're currently experiencing elevated latency and error rates for several Cloud services in the us-east1 region, beginning at 7:06 AM PDT today, July 18, 2025. Our initial investigation points to a hardware infrastructure failure as the likely cause.\nWe apologize for any disruption this may be causing. We'll provide an update with more details by 9:15 AM PDT today.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"South Carolina (us-east1)","id":"us-east1"}]}],"most_recent_update":{"created":"2025-07-22T13:42:49+00:00","modified":"2025-07-23T09:26:58+00:00","when":"2025-07-22T13:42:49+00:00","text":"## \\# Incident Report\n## \\#\\# Summary\nOn Friday, 18 July 2025 07:50 US/Pacific, several Google Cloud Platform (GCP) and Google Workspace (GWS) products experienced elevated latencies and error rates in the us-east1 region for a duration of up to 1 hour and 57 minutes.\n**GCP Impact Duration:** 18 July 2025 07:50 \\- 09:47 US/Pacific : 1 hour 57 minutes\n**GWS Impact Duration:** 18 July 2025 07:50 \\- 08:40 US/Pacific : 50 minutes\nWe sincerely apologize for this incident, which does not reflect the level of quality and reliability we strive to offer. We are taking immediate steps to improve the platform’s performance and availability.\n##\n## \\#\\# Root Cause\nThe service interruption was triggered by a procedural error during a planned hardware replacement in our datacenter. An incorrect physical disconnection was made to the active network switch serving our control plane, rather than the redundant unit scheduled for removal. The redundant unit had been properly de-configured as part of the procedure, and the combination of these two events led to partitioning of the network control plane. Our network is designed to withstand this type of control plane failure by failing open, continuing operation.\nHowever, an operational topology change while the network control plane was in a failed open state caused our network fabric's topology information to become stale. This led to packet loss and service disruption until services were moved away from the fabric and control plane connectivity was restored.\n## \\#\\# Remediation and Prevention\nGoogle engineers were alerted to the outage by our monitoring system on 18 July 2025 07:06 US/Pacific and immediately started an investigation. The following timeline details the remediation and restoration efforts:\n* **07:39 US/Pacific**: The underlying root cause (device disconnect) was identified and onsite technicians were engaged to reconnect the control plane device and restore control plane connectivity. At that moment, network failure open mechanisms worked as expected and no impact was observed.\n* **07:50 US/Pacific**: A topology change led to traffic being routed suboptimally, due to the network being in a fail open state. This caused congestion on the subset of links, packet loss, and latency to customer traffic. 
Engineers made a decision to move traffic away from the affected fabric, which mitigated the impact for the majority of the services.\n* **08:40 US/Pacific**: Engineers mitigated Workspace impact by shifting traffic away from the affected region.\n* **09:47 US/Pacific**: Onsite technicians reconnected the device, control plane connectivity was fully restored and all services were back to stable state.\nGoogle is committed to preventing a repeat of the issue in the future, and is completing the following actions:\n* Pause non-critical workflows until safety controls are implemented (complete).\n* Strengthen safety controls for hardware upgrade workflows by end of Q3 2025\\.\n* Design and implement a mechanism to prevent control plane partitioning in case of dual failure of upstream routers by end of Q4 2025\\.\n## \\#\\# Detailed Description of Impact\n\\#\\#\\# GCP Impact:\nMultiple products in us-east1 were affected by the loss of network connectivity, with the most significant impacts seen in us-east1-b. Other regions were not affected.\nThe outage caused a range of issues for customers with zonal resources in the region, including packet loss across VPC networks, increased error rates and latency, service unavailable (503) errors, and slow or stuck operations up to loss of networking connectivity. While regional products were briefly impacted, they recovered quickly by failing over to unaffected zones.\nA small number (0.1%) of Persistent Disks in us-east1-b were unavailable for the duration of the outage: these disks became available once the outage was mitigated, with no customer data loss.\n\\#\\#\\# GWS Impact:\nA small subset of Workspace users, primarily around the Southeast US, experienced varying degrees of unavailability and increased delays across multiple products, including Gmail, Google Meet, Google Drive, Google Chat, Google Calendar, Google Groups, Google Doc/Editors, and Google Voice.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_DISRUPTION","severity":"medium","service_key":"zall","service_name":"Multiple Products","affected_products":[{"title":"AlloyDB for PostgreSQL","id":"fPovtKbaWN9UTepMm3kJ"},{"title":"Apigee","id":"9Y13BNFy4fJydvjdsN3X"},{"title":"Artifact Registry","id":"QbBuuiRdsLpMr9WmGwm5"},{"title":"Certificate Authority Service","id":"PvdE3tt1VdxKXzSyd8WF"},{"title":"Cloud Armor","id":"Kakg69gTC3xFyeJCY2va"},{"title":"Cloud Billing","id":"oLCqDYkE9NFWQVgctQTL"},{"title":"Cloud Build","id":"fw8GzBdZdqy4THau7e1y"},{"title":"Cloud External Key Manager","id":"GXALzYBgpi3XpsLLxLgu"},{"title":"Cloud Firestore","id":"CETSkT92V21G6A1x28me"},{"title":"Cloud HSM","id":"R3HPPUbVeFrApLaqQB4B"},{"title":"Cloud Key Management Service","id":"67cSySTL7dwJZo9JWUGU"},{"title":"Cloud Load Balancing","id":"ix7u9beT8ivBdjApTif3"},{"title":"Cloud Memorystore","id":"LGPLu3M5pcUAKU1z6eP3"},{"title":"Cloud Monitoring","id":"3zaaDb7antc73BM1UAVT"},{"title":"Cloud Run","id":"9D7d2iNBQWN24zc1VamE"},{"title":"Cloud Spanner","id":"EcNGGUgBtBLrtm4mWvqC"},{"title":"Cloud Storage for Firebase","id":"aY6Fbgy6TV4YWoutjhfe"},{"title":"Cloud Workflows","id":"C4P62W9Xc2zZ1Sk52bbw"},{"title":"Database Migration Service","id":"vY4CRgRFNbqUXWWyYGFS"},{"title":"Dataproc Metastore","id":"PXZh68NPz9auRyo4tVfy"},{"title":"Dialogflow CX","id":"BnCicQdHSdxaCv8Ya6Vm"},{"title":"Eventarc","id":"YaFawoMaXnqgY4keUBnW"},{"title":"Google App Engine","id":"kchyUtnkMHJWaAva8aYc"},{"title":"Google BigQuery","id":"9CcrhHUcFevXPSVaSxkf"},{"title":"Google Cloud 
Bigtable","id":"LfZSuE3xdQU46YMFV5fy"},{"title":"Google Cloud Console","id":"Wdsr1n5vyDvCt78qEifm"},{"title":"Google Cloud Dataflow","id":"T9bFoXPqG8w8g1YbWTKY"},{"title":"Google Cloud Dataproc","id":"yjXrEg3Yvy26BauMwr69"},{"title":"Google Cloud Pub/Sub","id":"dFjdLh2v6zuES6t9ADCB"},{"title":"Google Cloud SQL","id":"hV87iK5DcEXKgWU2kDri"},{"title":"Google Cloud Storage","id":"UwaYoXQ5bHYHG6EdiPB8"},{"title":"Google Cloud Support","id":"bGThzF7oEGP5jcuDdMuk"},{"title":"Google Cloud Tasks","id":"tMWyzhyKK4rAzAf7x62h"},{"title":"Google Compute Engine","id":"L3ggmi3Jy4xJmgodFA9K"},{"title":"Google Kubernetes Engine","id":"LCSbT57h59oR4W98NHuz"},{"title":"Hybrid Connectivity","id":"5x6CGnZvSHQZ26KtxpK1"},{"title":"Identity and Access Management","id":"adnGEDEt9zWzs8uF1oKA"},{"title":"Media CDN","id":"FK8WX6iZ3FuQL6qUwski"},{"title":"Memorystore for Memcached","id":"paC6vmsvnjCHsBkp4Wva"},{"title":"Memorystore for Redis","id":"3yFciKa9NQH7pmbnUYUs"},{"title":"Memorystore for Redis Cluster","id":"pAQRwuhqRn7Y1E2we8ds"},{"title":"Persistent Disk","id":"SzESm2Ux129pjDGKWD68"},{"title":"Private Service Connect","id":"fbzQRKqPfxZ2DUScMGV2"},{"title":"Secret Manager","id":"kzGfErQK3HzkFhptoeHH"},{"title":"Service Directory","id":"vmq8TsEZwitKYM6V9BaM"},{"title":"Vertex AI Online Prediction","id":"sdXM79fz1FS6ekNpu37K"},{"title":"Virtual Private Cloud (VPC)","id":"BSGtCUnz6ZmyajsjgTKv"}],"uri":"incidents/8cY8jdUpEGGbsSMSQk7J","currently_affected_locations":[],"previously_affected_locations":[{"title":"South Carolina (us-east1)","id":"us-east1"}]},{"id":"ow5i3PPK96RduMcb1SsW","number":"12995900318995415150","begin":"2025-06-12T17:51:00+00:00","created":"2025-06-12T18:46:38+00:00","end":"2025-06-13T01:18:00+00:00","modified":"2025-07-19T03:34:44+00:00","external_desc":"Multiple GCP products are experiencing Service issues.","updates":[{"created":"2025-06-13T23:45:21+00:00","modified":"2025-06-13T23:48:18+00:00","when":"2025-06-13T23:45:21+00:00","text":"# Incident Report\n## **Summary**\n*Google Cloud, Google Workspace and Google Security Operations products experienced increased 503 errors in external API requests, impacting customers.*\n***We deeply apologize for the impact this outage has had. Google Cloud customers and their users trust their businesses to Google, and we will do better. We apologize for the impact this has had not only on our customers’ businesses and their users but also on the trust of our systems. We are committed to making improvements to help avoid outages like this moving forward.***\n### **What happened?**\nGoogle and Google Cloud APIs are served through our Google API management and control planes. Distributed regionally, these management and control planes are responsible for ensuring each API request that comes in is authorized, has the policy and appropriate checks (like quota) to meet their endpoints. The core binary that is part of this policy check system is known as Service Control. Service Control is a regional service that has a regional datastore that it reads quota and policy information from. This datastore metadata gets replicated almost instantly globally to manage quota policies for Google Cloud and our customers.\nOn May 29, 2025, a new feature was added to Service Control for additional quota policy checks. This code change and binary release went through our region by region rollout, but the code path that failed was never exercised during this rollout due to needing a policy change that would trigger the code. 
As a safety precaution, this code change came with a red-button to turn off that particular policy serving path. The issue with this change was that it did not have appropriate error handling nor was it feature flag protected. Without the appropriate error handling, the null pointer caused the binary to crash. Feature flags are used to gradually enable the feature region by region per project, starting with internal projects, to enable us to catch issues. If this had been flag protected, the issue would have been caught in staging.\nOn June 12, 2025 at \\~10:45am PDT, a policy change was inserted into the regional Spanner tables that Service Control uses for policies. Given the global nature of quota management, this metadata was replicated globally within seconds. This policy data contained unintended blank fields. Service Control, then regionally exercised quota checks on policies in each regional datastore. This pulled in blank fields for this respective policy change and exercised the code path that hit the null pointer causing the binaries to go into a crash loop. This occurred globally given each regional deployment.\nWithin 2 minutes, our Site Reliability Engineering team was triaging the incident. Within 10 minutes, the root cause was identified and the red-button (to disable the serving path) was being put in place. The red-button was ready to roll out \\~25 minutes from the start of the incident. Within 40 minutes of the incident, the red-button rollout was completed, and we started seeing recovery across regions, starting with the smaller ones first.\nWithin some of our larger regions, such as us-central-1, as Service Control tasks restarted, it created a herd effect on the underlying infrastructure it depends on (i.e. that Spanner table), overloading the infrastructure. Service Control did not have the appropriate randomized exponential backoff implemented to avoid this. It took up to \\~2h 40 mins to fully resolve in us-central-1 as we throttled task creation to minimize the impact on the underlying infrastructure and routed traffic to multi-regional databases to reduce the load. At that point, Service Control and API serving was fully recovered across all regions. Corresponding Google and Google Cloud products started recovering with some taking longer depending upon their architecture.\n### **What is our immediate path forward?**\nImmediately upon recovery, we froze all changes to the Service Control stack and manual policy pushes until we can completely remediate the system.\n### **How did we communicate?**\nWe posted our first incident report to Cloud Service Health about \\~1h after the start of the crashes, due to the Cloud Service Health infrastructure being down due to this outage. For some customers, the monitoring infrastructure they had running on Google Cloud was also failing, leaving them without a signal of the incident or an understanding of the impact to their business and/or infrastructure. We will address this going forward.\n### **What’s our approach moving forward?**\nBeyond freezing the system as mentioned above, we will prioritize and safely complete the following:\n* We will modularize Service Control’s architecture, so the functionality is isolated and fails open. Thus, if a corresponding check fails, Service Control can still serve API requests.\n* We will audit all systems that consume globally replicated data. Regardless of the business need for near instantaneous consistency of the data globally (i.e. 
quota management settings are global), data replication needs to be propagated incrementally with sufficient time to validate and detect issues.\n* We will enforce all changes to critical binaries to be feature flag protected and disabled by default.\n* We will improve our static analysis and testing practices to correctly handle errors and if need be fail open.\n* We will audit and ensure our systems employ randomized exponential backoff.\n* We will improve our external communications, both automated and human, so our customers get the information they need asap to react to issues, manage their systems and help their customers.\n* We'll ensure our monitoring and communication infrastructure remains operational to serve customers even when Google Cloud and our primary monitoring products are down, ensuring business continuity.\n-------","status":"AVAILABLE","affected_locations":[]},{"created":"2025-06-13T06:34:31+00:00","modified":"2025-06-13T23:45:21+00:00","when":"2025-06-13T06:34:31+00:00","text":"# Mini Incident Report\nWe are deeply sorry for the impact to all of our users and their customers that this service disruption/outage caused. Businesses large and small trust Google Cloud with your workloads and we will do better. In the coming days, we will publish a full incident report of the root cause, detailed timeline and robust remediation steps we will be taking. Given the size and impact of this incident, we would like to provide some information below.\nPlease note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. If you have experienced impact outside of what is listed below, please reach out to Google Cloud Support using https://cloud.google.com/support or to Google Workspace Support using help article https://support.google.com/a/answer/1047213.\n**(All Times US/Pacific)**\n**Incident Start:** 12 June, 2025 10:49\n**All regions except us-central1 mitigated:** 12 June, 2025 12:48\n**Incident End:** 12 June, 2025 13:49\n**Duration:** 3 hours\n**Regions/Zones:** Global\n**Description:**\nMultiple Google Cloud and Google Workspace products experienced increased 503 errors in external API requests, impacting customers.\nFrom our initial analysis, the issue occurred due to an invalid automated quota update to our API management system which was distributed globally, causing external API requests to be rejected. To recover we bypassed the offending quota check, which allowed recovery in most regions within 2 hours. However, the quota policy database in us-central1 became overloaded, resulting in much longer recovery in that region. Several products had moderate residual impact (e.g. backlogs) for up to an hour after the primary issue was mitigated and a small number recovering after that.\nGoogle will complete a full Incident Report in the following days that will provide a detailed root cause.\n**Customer Impact:**\nCustomers had intermittent API and user-interface access issues to the impacted services. 
Existing streaming and IaaS resources were not impacted.\n**Additional details:**\nThis incident should not have happened, and we will take the following measures to prevent future recurrence:\n* Prevent our API management platform from failing due to invalid or corrupt data.\n* Prevent metadata from propagating globally without appropriate protection, testing and monitoring in place.\n* Improve system error handling and comprehensive testing for handling of invalid data.\n**Affected Services and Features:**\n**Google Cloud Products:**\n* Identity and Access Management\n* Cloud Build\n* Cloud Key Management Service\n* Google Cloud Storage\n* Cloud Monitoring\n* Google Cloud Dataproc\n* Cloud Security Command Center\n* Artifact Registry\n* Cloud Workflows\n* Cloud Healthcare\n* Resource Manager API\n* Dataproc Metastore\n* Cloud Run\n* VMWare engine\n* Dataplex\n* Migrate to Virtual Machines\n* Google BigQuery\n* Contact Center AI Platform\n* Google Cloud Deploy\n* Media CDN\n* Colab Enterprise\n* Vertex Gemini API\n* Cloud Data Fusion\n* Cloud Asset Inventory\n* Datastream\n* Integration Connectors\n* Apigee\n* Google Cloud NetApp Volumes\n* Google Cloud Bigtable\n* Looker (Google Cloud core)\n* Looker Studio\n* Google Cloud Functions\n* Cloud Load Balancing\n* Traffic Director\n* Document AI\n* AutoML Translation\n* Pub/Sub Lite\n* API Gateway\n* Agent Assist\n* AlloyDB for PostgreSQL\n* Cloud Firestore\n* Cloud Logging\n* Cloud Shell\n* Cloud Memorystore\n* Cloud Spanner\n* Contact Center Insights\n* Database Migration Service\n* Dialogflow CX\n* Dialogflow ES\n* Google App Engine\n* Google Cloud Composer\n* Google Cloud Console\n* Google Cloud DNS\n* Google Cloud Pub/Sub\n* Google Cloud SQL\n* Google Compute Engine\n* Identity Platform\n* Managed Service for Apache Kafka\n* Memorystore for Memcached\n* Memorystore for Redis\n* Memorystore for Redis Cluster\n* Persistent Disk\n* Personalized Service Health\n* Speech-to-Text\n* Text-to-Speech\n* Vertex AI Search\n* Retail API\n* Vertex AI Feature Store\n* BigQuery Data Transfer Service\n* Google Cloud Marketplace\n* Cloud NAT\n* Hybrid Connectivity\n* Cloud Vision\n* Network Connectivity Center\n* Cloud Workstations\n* Google Security Operations\n**Google Workspace Products:**\n* AppSheet\n* Gmail\n* Google Calendar\n* Google Drive\n* Google Chat\n* Google Voice\n* Google Docs\n* Google Meet\n* Google Cloud Search\n* Google Tasks","status":"AVAILABLE","affected_locations":[]},{"created":"2025-06-13T01:27:32+00:00","modified":"2025-06-13T06:34:31+00:00","when":"2025-06-13T01:27:32+00:00","text":"Vertex AI Online Prediction is full recovered as of 18:18 PDT.\nAll the services are fully recovered from the service issue\nWe will publish analysis of this incident once we have completed our internal investigation.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-06-13T00:59:00+00:00","modified":"2025-06-13T01:27:32+00:00","when":"2025-06-13T00:59:00+00:00","text":"**Vertex AI Online Prediction:**\nThe issue causing elevated 5xx errors with some Model Garden models was fully resolved as of 17:05 PDT. Vertex AI serving is now back to normal in all regions except europe-west1 and asia-southeast1. 
Engineers are actively working to restore normal serving capacity in these two regions.\nThe ETA for restoring normal serving capacity in europe-west1 and asia-southeast1 is 19:45 PDT.\nWe will provide an update by Thursday, 2025-06-12 19:45 PDT with current details.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-06-13T00:33:34+00:00","modified":"2025-06-13T00:59:00+00:00","when":"2025-06-13T00:33:34+00:00","text":"The impact on Personalized Service Health is now resolved and the updates should be reflected without any issues.\nThe issue with Google Cloud Dataflow is fully resolved as of 17:10 PDT\nThe only remaining impact is on Vertex AI Online Prediction as follows:\n**Vertex AI Online Prediction:** Customers may continue to experience elevated 5xx errors with some of the models available in the Model Garden. 
We are seeing gradual decrease in error rates as our engineers perform appropriate mitigation actions.\nThe ETA for full resolution of these 5xx errors is 22:00 PDT\nWe will provide an update by Thursday, 2025-06-12 22:00 PDT with current details.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-06-13T00:06:24+00:00","modified":"2025-06-13T00:33:34+00:00","when":"2025-06-13T00:06:24+00:00","text":"The following Google Cloud products are still experiencing residual impact:\n**Google Cloud Dataflow:** Dataflow backlog has cleared up in all regions except us-central1. Customers may experience delays with Dataflow operations in us-central1 as the backlog clears up gradually. We do not have an ETA for Cloud Dataflow recovery in us-central1.\n**Vertex AI Online Prediction:** Customers may continue to experience elevated 5xx errors with some of the models available in the Model Garden. We are seeing gradual decrease in error rates as our engineers perform appropriate mitigation actions. 
The ETA for full resolution of these 5xx errors is 22:00 PDT\n**Personalized Service Health:** Updates on the Personalized Service Health are delayed and we recommend customers to continue using Cloud Service Health dashboard for updates.\nWe will provide an update by Thursday, 2025-06-12 17:45 PDT with current details.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-06-12T23:13:50+00:00","modified":"2025-06-13T00:06:24+00:00","when":"2025-06-12T23:13:50+00:00","text":"The following Google Cloud products are still experiencing residual impact:\n**Google Cloud Dataflow:** Customers may experience delays with Dataflow operations as the backlog is clearing up gradually.\n**Vertex AI Online Prediction:** Customers may continue to experience elevated 5xx errors with some of the models available in the Model Garden.\n**Personalized Service Health:** Updates on the Personalized Service Health are delayed and we recommend customers to continue using Cloud Service Health dashboard for updates.\nWe currently do not have an ETA for full mitigation of the above services.\nWe will provide an update by Thursday, 2025-06-12 17:00 PDT with current details.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Johannesburg (africa-south1)","id":"africa-south1"},{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo 
(asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Stockholm (europe-north2)","id":"europe-north2"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-06-12T22:16:06+00:00","modified":"2025-06-12T23:13:50+00:00","when":"2025-06-12T22:16:06+00:00","text":"Most of the Google Cloud products are fully recovered as of 13:45 PDT.\nThere is some residual impact for the products currently marked as affected on the dashboard. 
Please continue to monitor the services and the dashboard for individual product recoveries.\nWe will provide an update by Thursday, 2025-06-12 16:00 PDT with current details.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Johannesburg (africa-south1)","id":"africa-south1"},{"title":"Multi-region: asia","id":"asia"},{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Multi-region: eu","id":"eu"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Stockholm (europe-north2)","id":"europe-north2"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-06-12T21:23:42+00:00","modified":"2025-06-12T22:16:06+00:00","when":"2025-06-12T21:23:42+00:00","text":"Most of the Google Cloud products have confirmed full service recovery.\nA few services are still seeing some residual impact and the respective engineering teams are actively working on recovery of those services.\nWe expect the recovery to complete in less than an hour.\nWe will provide an update by Thursday, 2025-06-12 15:00 PDT with current details.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Johannesburg (africa-south1)","id":"africa-south1"},{"title":"Multi-region: asia","id":"asia"},{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo 
(asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Multi-region: eu","id":"eu"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Stockholm (europe-north2)","id":"europe-north2"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-06-12T21:00:07+00:00","modified":"2025-06-12T21:23:42+00:00","when":"2025-06-12T21:00:07+00:00","text":"We have implemented mitigation for the issue in us-central1 and multi-region/us and we are seeing signs of recovery.\nWe have received confirmation from our internal monitoring and customers that the Google Cloud products are also seeing recovery in multiple regions and are also seeing signs of some recovery in us-central1 and mutli-region/us.\nWe expect the recovery to complete in less than an hour.\nWe will provide an update by Thursday, 2025-06-12 14:30 PDT with current details.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Johannesburg (africa-south1)","id":"africa-south1"},{"title":"Multi-region: asia","id":"asia"},{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore 
(asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Multi-region: asia1","id":"asia1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Multi-region: eu","id":"eu"},{"title":"Multi-region: eur3","id":"eur3"},{"title":"Multi-region: eur4","id":"eur4"},{"title":"Multi-region: eur5","id":"eur5"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Stockholm (europe-north2)","id":"europe-north2"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Multi-region: nam-eur-asia1","id":"nam-eur-asia1"},{"title":"Multi-region: nam10","id":"nam10"},{"title":"Multi-region: nam11","id":"nam11"},{"title":"Multi-region: nam12","id":"nam12"},{"title":"Multi-region: nam13","id":"nam13"},{"title":"Multi-region: nam3","id":"nam3"},{"title":"Multi-region: nam5","id":"nam5"},{"title":"Multi-region: nam6","id":"nam6"},{"title":"Multi-region: nam7","id":"nam7"},{"title":"Multi-region: nam8","id":"nam8"},{"title":"Multi-region: nam9","id":"nam9"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-06-12T20:16:22+00:00","modified":"2025-06-12T21:00:07+00:00","when":"2025-06-12T20:16:22+00:00","text":"We have identified the root cause and applied appropriate mitigations.\nOur infrastructure has recovered in all regions except us-central1.\nGoogle Cloud products that rely on the affected infrastructure are seeing recovery in multiple locations.\nOur engineers are aware of the customers still experiencing issues on us-central1 and multi-region/us and are actively working on full recovery.\nWe do not have an ETA for full recovery.\nWe will provide an update by Thursday, 2025-06-12 14:00 PDT with current details.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Johannesburg (africa-south1)","id":"africa-south1"},{"title":"Multi-region: asia","id":"asia"},{"title":"Taiwan 
(asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Multi-region: asia1","id":"asia1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Multi-region: eu","id":"eu"},{"title":"Multi-region: eur3","id":"eur3"},{"title":"Multi-region: eur4","id":"eur4"},{"title":"Multi-region: eur5","id":"eur5"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Stockholm (europe-north2)","id":"europe-north2"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Multi-region: nam-eur-asia1","id":"nam-eur-asia1"},{"title":"Multi-region: nam10","id":"nam10"},{"title":"Multi-region: nam11","id":"nam11"},{"title":"Multi-region: nam12","id":"nam12"},{"title":"Multi-region: nam13","id":"nam13"},{"title":"Multi-region: nam3","id":"nam3"},{"title":"Multi-region: nam5","id":"nam5"},{"title":"Multi-region: nam6","id":"nam6"},{"title":"Multi-region: nam7","id":"nam7"},{"title":"Multi-region: nam8","id":"nam8"},{"title":"Multi-region: nam9","id":"nam9"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-06-12T19:41:55+00:00","modified":"2025-06-12T20:16:22+00:00","when":"2025-06-12T19:41:55+00:00","text":"Our engineers have identified the root cause and have applied appropriate mitigations.\nWhile our engineers have confirmed that the underlying dependency is recovered in all locations except us-central1, ***we are aware that customers are still experiencing varying degrees of impact on individual google cloud 
products***. All the respective engineering teams are actively engaged and working on service recovery. We do not have an ETA for full service recovery.\nWe will provide an update by Thursday, 2025-06-12 13:30 PDT with current details.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Johannesburg (africa-south1)","id":"africa-south1"},{"title":"Multi-region: asia","id":"asia"},{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Multi-region: asia1","id":"asia1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Multi-region: eu","id":"eu"},{"title":"Multi-region: eur3","id":"eur3"},{"title":"Multi-region: eur4","id":"eur4"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Stockholm (europe-north2)","id":"europe-north2"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Multi-region: nam5","id":"nam5"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-06-12T19:30:44+00:00","modified":"2025-06-12T19:41:55+00:00","when":"2025-06-12T19:30:44+00:00","text":"All locations except us-central1 have fully recovered. us-central1 is mostly recovered. 
We do not have an ETA for full recovery in us-central1.\nWe will provide an update by Thursday, 2025-06-12 13:00 PDT with current details.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Johannesburg (africa-south1)","id":"africa-south1"},{"title":"Multi-region: asia","id":"asia"},{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Multi-region: asia1","id":"asia1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Multi-region: eu","id":"eu"},{"title":"Multi-region: eur3","id":"eur3"},{"title":"Multi-region: eur4","id":"eur4"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Stockholm (europe-north2)","id":"europe-north2"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Multi-region: nam5","id":"nam5"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-06-12T19:09:08+00:00","modified":"2025-06-12T19:30:44+00:00","when":"2025-06-12T19:09:08+00:00","text":"Our engineers are continuing to mitigate the issue and we have confirmation that the issue is recovered in some locations.\nWe do not have an ETA on full mitigation at this point.\nWe will provide an update by Thursday, 2025-06-12 12:45 PDT with current details.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Johannesburg (africa-south1)","id":"africa-south1"},{"title":"Multi-region: asia","id":"asia"},{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong 
(asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Multi-region: asia1","id":"asia1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Multi-region: eu","id":"eu"},{"title":"Multi-region: eur4","id":"eur4"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Stockholm (europe-north2)","id":"europe-north2"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-06-12T18:59:31+00:00","modified":"2025-06-12T19:15:51+00:00","when":"2025-06-12T18:59:31+00:00","text":"**Summary:**\nMultiple GCP products are experiencing Service issues with API requests\n**Description**\nWe are experiencing service issues with multiple GCP products beginning at Thursday, 2025-06-12 10:51 PDT.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Thursday, 2025-06-12 12:15 PDT with current details.\nWe apologize to all who are affected by the disruption.\n**Symptoms:**\nMultiple GCP products are experiencing varying level of service impacts with API requests.\n**Workaround:**\nNone at this time.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Johannesburg (africa-south1)","id":"africa-south1"},{"title":"Multi-region: asia","id":"asia"},{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul 
(asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Multi-region: asia1","id":"asia1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Multi-region: eu","id":"eu"},{"title":"Multi-region: eur4","id":"eur4"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Stockholm (europe-north2)","id":"europe-north2"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-06-12T18:46:38+00:00","modified":"2025-06-12T18:59:31+00:00","when":"2025-06-12T18:46:38+00:00","text":"**Summary:**\nMultiple GCP products are experiencing Service issues\n**Description**\nWe are experiencing service issues with multiple GCP products beginning at Thursday, 2025-06-12 10:51 PDT.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Thursday, 2025-06-12 12:15 PDT with current details.\nWe apologize to all who are affected by the disruption.\n**Symptoms:**\nMultiple GCP products are experiencing varying level of service impacts.\n**Workaround:**\nNone at this time.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Johannesburg (africa-south1)","id":"africa-south1"},{"title":"Multi-region: asia","id":"asia"},{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore 
(asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Multi-region: asia1","id":"asia1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Multi-region: eu","id":"eu"},{"title":"Multi-region: eur4","id":"eur4"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Stockholm (europe-north2)","id":"europe-north2"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]}],"most_recent_update":{"created":"2025-06-13T23:45:21+00:00","modified":"2025-06-13T23:48:18+00:00","when":"2025-06-13T23:45:21+00:00","text":"# Incident Report\n## **Summary**\n*Google Cloud, Google Workspace and Google Security Operations products experienced increased 503 errors in external API requests, impacting customers.*\n***We deeply apologize for the impact this outage has had. Google Cloud customers and their users trust their businesses to Google, and we will do better. We apologize for the impact this has had not only on our customers’ businesses and their users but also on the trust of our systems. We are committed to making improvements to help avoid outages like this moving forward.***\n### **What happened?**\nGoogle and Google Cloud APIs are served through our Google API management and control planes. Distributed regionally, these management and control planes are responsible for ensuring each API request that comes in is authorized, has the policy and appropriate checks (like quota) to meet their endpoints. The core binary that is part of this policy check system is known as Service Control. Service Control is a regional service that has a regional datastore that it reads quota and policy information from. 
This datastore metadata gets replicated almost instantly globally to manage quota policies for Google Cloud and our customers.\nOn May 29, 2025, a new feature was added to Service Control for additional quota policy checks. This code change and binary release went through our region by region rollout, but the code path that failed was never exercised during this rollout due to needing a policy change that would trigger the code. As a safety precaution, this code change came with a red-button to turn off that particular policy serving path. The issue with this change was that it did not have appropriate error handling, nor was it feature flag protected. Without the appropriate error handling, the null pointer caused the binary to crash. Feature flags are used to gradually enable the feature region by region per project, starting with internal projects, to enable us to catch issues. If this had been flag protected, the issue would have been caught in staging.\nOn June 12, 2025 at \~10:45am PDT, a policy change was inserted into the regional Spanner tables that Service Control uses for policies. Given the global nature of quota management, this metadata was replicated globally within seconds. This policy data contained unintended blank fields. Service Control then regionally exercised quota checks on policies in each regional datastore. This pulled in blank fields for this respective policy change and exercised the code path that hit the null pointer causing the binaries to go into a crash loop. This occurred globally given each regional deployment.\nWithin 2 minutes, our Site Reliability Engineering team was triaging the incident. Within 10 minutes, the root cause was identified and the red-button (to disable the serving path) was being put in place. The red-button was ready to roll out \~25 minutes from the start of the incident. Within 40 minutes of the incident, the red-button rollout was completed, and we started seeing recovery across regions, starting with the smaller ones first.\nWithin some of our larger regions, such as us-central1, as Service Control tasks restarted, it created a herd effect on the underlying infrastructure it depends on (i.e. that Spanner table), overloading the infrastructure. Service Control did not have the appropriate randomized exponential backoff implemented to avoid this. It took up to \~2h 40 mins to fully resolve in us-central1 as we throttled task creation to minimize the impact on the underlying infrastructure and routed traffic to multi-regional databases to reduce the load. At that point, Service Control and API serving were fully recovered across all regions. Corresponding Google and Google Cloud products started recovering with some taking longer depending upon their architecture.\n### **What is our immediate path forward?**\nImmediately upon recovery, we froze all changes to the Service Control stack and manual policy pushes until we can completely remediate the system.\n### **How did we communicate?**\nWe posted our first incident report to Cloud Service Health about \~1h after the start of the crashes, due to the Cloud Service Health infrastructure being down due to this outage. For some customers, the monitoring infrastructure they had running on Google Cloud was also failing, leaving them without a signal of the incident or an understanding of the impact to their business and/or infrastructure. 
We will address this going forward.\n### **What’s our approach moving forward?**\nBeyond freezing the system as mentioned above, we will prioritize and safely complete the following:\n* We will modularize Service Control’s architecture, so the functionality is isolated and fails open. Thus, if a corresponding check fails, Service Control can still serve API requests.\n* We will audit all systems that consume globally replicated data. Regardless of the business need for near instantaneous consistency of the data globally (i.e. quota management settings are global), data replication needs to be propagated incrementally with sufficient time to validate and detect issues.\n* We will enforce all changes to critical binaries to be feature flag protected and disabled by default.\n* We will improve our static analysis and testing practices to correctly handle errors and if need be fail open.\n* We will audit and ensure our systems employ randomized exponential backoff.\n* We will improve our external communications, both automated and human, so our customers get the information they need asap to react to issues, manage their systems and help their customers.\n* We'll ensure our monitoring and communication infrastructure remains operational to serve customers even when Google Cloud and our primary monitoring products are down, ensuring business continuity.\n-------","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_OUTAGE","severity":"high","service_key":"zall","service_name":"Multiple Products","affected_products":[{"title":"API Gateway","id":"VzyLPL7CtWQqJ9WeKAjp"},{"title":"Agent Assist","id":"eUntUKqUrHdbBLNcVVXq"},{"title":"AlloyDB for PostgreSQL","id":"fPovtKbaWN9UTepMm3kJ"},{"title":"Apigee","id":"9Y13BNFy4fJydvjdsN3X"},{"title":"Apigee Edge Public Cloud","id":"SumcdgBT6GQBzp1vmdXu"},{"title":"Apigee Hybrid","id":"6gaft97Gv5hGQAJg6D3J"},{"title":"Cloud Data Fusion","id":"rLKDHeeaBiXTeutF1air"},{"title":"Cloud Firestore","id":"CETSkT92V21G6A1x28me"},{"title":"Cloud Logging","id":"PuCJ6W2ovoDhLcyvZ1xa"},{"title":"Cloud Memorystore","id":"LGPLu3M5pcUAKU1z6eP3"},{"title":"Cloud Monitoring","id":"3zaaDb7antc73BM1UAVT"},{"title":"Cloud Run","id":"9D7d2iNBQWN24zc1VamE"},{"title":"Cloud Security Command Center","id":"csyyfUYy88hkeqbv23Mc"},{"title":"Cloud Shell","id":"wF3PG44o1RzTnUW5dycy"},{"title":"Cloud Spanner","id":"EcNGGUgBtBLrtm4mWvqC"},{"title":"Cloud Workstations","id":"5UUXCiH1vfFHXmbDixrB"},{"title":"Contact Center AI Platform","id":"eSAGSSEKoxh8tTJucdYg"},{"title":"Contact Center Insights","id":"WYJx5eWkh8ZrCSQUcP4i"},{"title":"Data Catalog","id":"TFedVRYgKGRGMSJrUpup"},{"title":"Database Migration Service","id":"vY4CRgRFNbqUXWWyYGFS"},{"title":"Dataform","id":"JSShQKADMU3uXYNbCRCh"},{"title":"Dataplex","id":"Xx5qm9U2ovrN11z2Gd9Q"},{"title":"Dataproc Metastore","id":"PXZh68NPz9auRyo4tVfy"},{"title":"Datastream","id":"ibJgP4CNKnFojHHw8L3s"},{"title":"Dialogflow CX","id":"BnCicQdHSdxaCv8Ya6Vm"},{"title":"Dialogflow ES","id":"sQqrYvhjMT5crPHKWJFY"},{"title":"Google App Engine","id":"kchyUtnkMHJWaAva8aYc"},{"title":"Google BigQuery","id":"9CcrhHUcFevXPSVaSxkf"},{"title":"Google Cloud Bigtable","id":"LfZSuE3xdQU46YMFV5fy"},{"title":"Google Cloud Composer","id":"YxkG5FfcC42cQmvBCk4j"},{"title":"Google Cloud Console","id":"Wdsr1n5vyDvCt78qEifm"},{"title":"Google Cloud DNS","id":"TUZUsWSJUVJGW97Jq2sH"},{"title":"Google Cloud Dataflow","id":"T9bFoXPqG8w8g1YbWTKY"},{"title":"Google Cloud Dataproc","id":"yjXrEg3Yvy26BauMwr69"},{"title":"Google Cloud 
Pub/Sub","id":"dFjdLh2v6zuES6t9ADCB"},{"title":"Google Cloud SQL","id":"hV87iK5DcEXKgWU2kDri"},{"title":"Google Cloud Storage","id":"UwaYoXQ5bHYHG6EdiPB8"},{"title":"Google Compute Engine","id":"L3ggmi3Jy4xJmgodFA9K"},{"title":"Identity Platform","id":"LE1X2BHYANNsHtG1NM1M"},{"title":"Identity and Access Management","id":"adnGEDEt9zWzs8uF1oKA"},{"title":"Looker Studio","id":"kEYNqRYFXXHxP9QeFJ1d"},{"title":"Managed Service for Apache Kafka","id":"QMZ3IpyG3Ooxotv7JOKV"},{"title":"Memorystore for Memcached","id":"paC6vmsvnjCHsBkp4Wva"},{"title":"Memorystore for Redis","id":"3yFciKa9NQH7pmbnUYUs"},{"title":"Memorystore for Redis Cluster","id":"pAQRwuhqRn7Y1E2we8ds"},{"title":"Persistent Disk","id":"SzESm2Ux129pjDGKWD68"},{"title":"Personalized Service Health","id":"jY8GKegoC5RUVERU7vUG"},{"title":"Pub/Sub Lite","id":"5DWkcStmv4dFHRHLaRXb"},{"title":"Speech-to-Text","id":"5f5oET9B3whnSFHfwy4d"},{"title":"Text-to-Speech","id":"2Xt4Wt8rVvbz3UPsHBvx"},{"title":"Vertex AI Online Prediction","id":"sdXM79fz1FS6ekNpu37K"},{"title":"Vertex AI Search","id":"vNncXxtSVvqyhvSkQ6PJ"},{"title":"Vertex Gemini API","id":"Z0FZJAMvEB4j3NbCJs6B"},{"title":"Vertex Imagen API","id":"zeBmbgdSyHGTvPAiXwVS"},{"title":"reCAPTCHA Enterprise","id":"BubghYKyn8WLY5wnSjZL"}],"uri":"incidents/ow5i3PPK96RduMcb1SsW","currently_affected_locations":[],"previously_affected_locations":[{"title":"Johannesburg (africa-south1)","id":"africa-south1"},{"title":"Multi-region: asia","id":"asia"},{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Multi-region: asia1","id":"asia1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Multi-region: eu","id":"eu"},{"title":"Multi-region: eur3","id":"eur3"},{"title":"Multi-region: eur4","id":"eur4"},{"title":"Multi-region: eur5","id":"eur5"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Stockholm (europe-north2)","id":"europe-north2"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Multi-region: nam-eur-asia1","id":"nam-eur-asia1"},{"title":"Multi-region: nam10","id":"nam10"},{"title":"Multi-region: nam11","id":"nam11"},{"title":"Multi-region: nam12","id":"nam12"},{"title":"Multi-region: nam13","id":"nam13"},{"title":"Multi-region: 
nam3","id":"nam3"},{"title":"Multi-region: nam5","id":"nam5"},{"title":"Multi-region: nam6","id":"nam6"},{"title":"Multi-region: nam7","id":"nam7"},{"title":"Multi-region: nam8","id":"nam8"},{"title":"Multi-region: nam9","id":"nam9"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"id":"SXRPpPwx2RZ5VHjTwFLx","number":"13271141640052664026","begin":"2025-05-20T03:23:00+00:00","created":"2025-05-20T11:07:41+00:00","end":"2025-05-20T12:05:00+00:00","modified":"2025-05-27T23:06:21+00:00","external_desc":"Google Compute Engine (GCE) issue impacting multiple dependent GCP services across zones","updates":[{"created":"2025-05-27T23:06:21+00:00","modified":"2025-05-27T23:06:22+00:00","when":"2025-05-27T23:06:21+00:00","text":"# Incident Report\n## Summary\nOn 19 May 2025, Google Compute Engine (GCE) encountered problems affecting Spot VM termination globally, and performance degradation and timeouts of reservation consumption / VM creation in us-central1 and us-east4 for a duration of 8 hours, 42 minutes. Consequently, multiple other Google Cloud Platform (GCP) products relying on GCE also experienced increased latencies and timeouts.\nTo our customers who were impacted during this disruption, we sincerely apologize. This is not the level of quality and reliability we strive to offer you, and we are taking immediate steps to improve the platform’s performance and availability.\n## Root Cause\nA recently deployed configuration change to a Google Compute Engine (GCE) component mistakenly disabled a feature flag that controlled how VM instance states are reported to other components. Safety checks intended to ensure gradual rollout of this type of change failed to be triggered, resulting in an unplanned rapid rollout of the change.\nThis caused Spot VMs to be stuck in an unexpected state. Consequently, Spot VMs that had initiated their standard termination process due to preemption began to accumulate as they failed to complete termination, creating a backlog that degraded performance for all VM types in some regions.\n## Remediation and Prevention\nGoogle engineers were alerted to the outage via internal monitoring on 19 May 2025, at 21:08 US/Pacific, and immediately started an investigation. Once the nature and scope of the issue became clear, Google engineers initiated a rollback of the change on 20 May 2025 at 03:29 US/Pacific.\nThe rollback completed at 03:55 US/Pacific, mitigating the impact.\nGoogle is committed to preventing a repeat of this issue in the future and is completing the following actions:\n* Google Cloud employs a robust and well-defined methodology for production updates, including a phased rollout approach as standard practice to avoid rapid global changes. 
This phased approach is meant to ensure that changes are introduced into production gradually and as safely as possible, however, in this case, the safety checks were not enforced. We have paused further feature flag rollouts for the affected system, while we undertake a comprehensive audit of safety checks and fix any exposed gaps that led to the unplanned rapid rollout of this change.\n* We will review and address scalability issues encountered by GCE during the incident.\n* We will improve monitoring coverage of Spot VM deletion workflows.\nGoogle is committed to quickly and continually improving our technology and operations to prevent service disruptions. We appreciate your patience and apologize again for the impact to your organization. We thank you for your business.\n## Detailed Description of Impact\nCustomers experienced increased latency for VM control plane operations in us-central1 and us-east4. VM control plane operations include creating, modifying, or deleting VMs. For some customers, Spot VM instances became stuck while terminating. Customers were not billed for Spot VM instances in this state. Furthermore, running virtual machines and the data plane were not impacted.\nVM control plane latency in the us-central1 and us-east4 regions began increasing at the start of the incident (19 May 2025 20:23 US/Pacific), and peaked around 20 May 2025 03:40 US/Pacific. At peak, median latency went from seconds to minutes, and tail latency went from minutes to hours. Several other regions experienced increased tail latency during the outage, but most operations in these regions completed as normal. Once mitigations took effect, median and tail latencies started falling and returned to normal by 05:15 US/Pacific.\nCustomers may have experienced similar latency increases in products which create, modify, failover or delete VM instances: GCE, GKE, Dataflow, Cloud SQL, Google Cloud Dataproc, Google App Engine, Cloud Deploy, Memorystore, Redis, Cloud Filestore, among others.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-05-20T18:58:49+00:00","modified":"2025-05-27T23:06:21+00:00","when":"2025-05-20T18:58:49+00:00","text":"## \\# Mini Incident Report\nWe apologize for the inconvenience this service disruption may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. 
If you have experienced impact outside of what is listed below, please reach out to Google Cloud Support using [***https://cloud.google.com/support***](https://cloud.google.com/support).\n(All Times US/Pacific)\n**Incident Start:** 19 May 2025 20:23:00\n**Incident End:** 20 May 2025 05:05:00\n**Duration:** 8 hours, 42 minutes\n**Affected Services and Features:**\nGoogle Compute Engine, Google Kubernetes Engine, Cloud Dataflow, Cloud SQL, AlloyDb for PostgreSQL, Cloud Composer, Cloud Build, Cloud Dataproc, Google App Engine, Migrate to Virtual Machines, Vertex GenAI, Cloud Deploy and Memorystore for Redis.\n**Regions/Zones:**\n***VM creation issues:***\n* us-central1 all zones\n* us-east4 all zones\n***VM termination issues:***\nasia-east1, asia-northeast1, asia-south1, asia-southeast1, australia-southeast1, europe-central2, europe-north1, europe-west1, europe-west12, europe-west2, europe-west3, europe-west4, me-central2, southamerica-east1, us-central1, us-east1, us-east4, us-east5, us-west1, us-west2, us-west4\n**Note:** VM terminate operations, products dependent on VM creations and terminations may have seen impact outside the above zones.\n**Description:**\nGoogle Compute Engine (GCE) encountered problems affecting VM creation, termination, and reservation consumption. Consequently, multiple Google Cloud products experienced increased latencies and timeouts during create, update, and terminate operations.\nPreliminary analysis indicates that a recent configuration change negatively impacted GCE handling of routine spot virtual machine (VM) terminations. As a result of this problem, GCE Control Plane services became overloaded causing disruptions for VM Instance creation, termination, and reservation consumption.\nThe issue was mitigated by changing the configuration to the previous state, thereby resolving the impact on all affected products.\nGoogle will complete a full Incident Report in the following days that will provide a full root cause.\n**Customer Impact:**\n**Google Compute Engine:** Customers may have observed elevated latency or timeouts for VM Instance operations like creation, reservation consumption, etc.\n**Google Kubernetes Engine:** Customers may have observed latency while performing operations like creating or deleting clusters, adding or resizing nodepools, etc.\n**Google Cloud Dataproc:** Customers may have observed elevated latency while performing operations like creating or deleting clusters, and scale up and scale down operations, etc.\n**Google Cloud Dataflow:** Customers may have observed elevated latency for start-up / scaleup / shut-downs for Dataflow jobs.\n**Cloud Filestore:** Customers may have observed create instance failures.\n**Cloud Build:** Customers using private pools may have observed elevated latency in build completion or sporadic build failures due to workers failing to start.\n**Cloud SQL:** Customers may have observed failures or elevated latency for instance creation, resizing and high-availability update operations. As a workaround, for failure in the create operations, customers can retry by deleting the failed instances and re-attempt the operation.\n**Cloud Composer:** Customers may have experienced failures in new Composer environment creation and in upgrade of Composer/Airflow versions, as well as delays in up-scaling of new airflow-workers and in KubernetesPodOperator tasks.\n**AlloyDB for PostgreSQL:** Customers may have experienced failures in instance creation operations. 
In addition, a small number of instance update operations may also see failures.\n**Google App Engine:** Customers may have experienced failures in insert/update/create/delete operations.\n**Migrate to Virtual Machines:** Customers may have experienced timeouts or errors.\n**Vertex GenAI:** Customers may have experienced issues in creating cluster operations.\n**Cloud Deploy:** Customers may have experienced Cloud Deploy operations (e.g. Render, Deploy, Verify, etc.) as “in progress” for a long time or failed to start.\n**Memorystore for Redis:** Customers may have experienced increased latency or timeouts for some CreateCluster operations.\n------","status":"AVAILABLE","affected_locations":[]},{"created":"2025-05-20T12:17:55+00:00","modified":"2025-05-20T18:58:49+00:00","when":"2025-05-20T12:17:55+00:00","text":"The issue with multiple dependent GCP services has been resolved for all affected users as of Tuesday, 2025-05-20 05:05 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Oregon (us-west1)","id":"us-west1"}]},{"created":"2025-05-20T12:05:43+00:00","modified":"2025-05-20T12:17:55+00:00","when":"2025-05-20T12:05:43+00:00","text":"Description:\nWe are experiencing an issue with multiple dependent GCP services beginning on Monday, 2025-05-19 20:23 US/Pacific.\nOur engineering team has deployed a mitigation and are seeing improvement across all affected zones. Most of the impacted products have been mitigated and the work towards full mitigation is ongoing.\nWe will provide more information by Tuesday, 2025-05-20 05:30 US/Pacific.\nDiagnosis:\nGoogle Cloud Dataproc: Customers may experience elevated latency while performing operations like creating or deleting clusters, and scale up and scale down operations, etc.\nGoogle Compute Engine: Now Mitigated\nCustomers might experience increased latency or timeouts when performing VM instance operations, including creation and reservation consumption.\nGoogle Kubernetes Engine: Now Mitigated\nCustomers may experience latency while performing operations like creating or deleting clusters, adding or resizing nodepools, etc.\nGoogle Cloud Dataflow: Now Mitigated\nCustomers may experience elevated latency for start-up / scaleup / shut-downs for Dataflow jobs..\nCloud Filestore: Now Mitigated\nCustomers may experience create instance failures.\nCloud Build: Now Mitigated\nCustomers may experience elevated latency in build completion or sporadic build failures due to workers failing to start. Default pools (including the legacy \"global\" region) and private pools are both impacted.\nCloud SQL: Now Mitigated\nCustomers may experience failures or elevated latency for instance creation, resizing and high-availability update operations. As a workaround, for failure in the create operations, customers can retry by deleting the failed instances and re-attempt the operation.\nCloud Composer: Now Mitigated\nCustomers may experience failures in creation of new Composer environments and in upgrade of Composer/Airflow versions, as well as delays in up-scaling of new airflow-workers and in KubernetesPodOperator tasks.\nAlloyDB for PostgreSQL: Now Mitigated\nCustomers may experience failures in instance creation operations. 
In addition, a small number of instance update operations may also see failures.\nGoogle App Engine (Google App Engine Flexible): Customers may experience failures in insert/update/create/delete operations.\nMigrate to Virtual Machines: Now Mitigated\nCustomers may experience timeouts or errors.\nVertex GenAI: Now Mitigated\nCustomers may experience issues in creating cluster operations.\nCloud Deploy: Now Mitigated\nCustomers may see Cloud Deploy operations (e.g. Render, Deploy, Verify, etc.) as “in progress” for a long time or fail to start.\nMemorystore for Redis: Now Mitigated\nCustomers may experience increased latency or timeouts for some CreateCluster operations.\nWorkaround:\nCustomers who are experiencing impact are advised to use alternate zones.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Oregon (us-west1)","id":"us-west1"}]},{"created":"2025-05-20T11:28:15+00:00","modified":"2025-05-20T12:05:43+00:00","when":"2025-05-20T11:28:15+00:00","text":"Description:\nWe are experiencing an issue with Google Compute Engine, Google Kubernetes Engine, Cloud Dataflow, Cloud SQL, AlloyDB for PostgreSQL, Cloud Composer, Cloud Build, Cloud Dataproc, Cloud Filestore, Google App Engine (Google App Engine Flexible) beginning on Monday, 2025-05-19 20:23 US/Pacific.\nOur engineering team has deployed a mitigation and are seeing improvement across all affected zones.\nWe will provide more information by Tuesday, 2025-05-20 05:00 US/Pacific.\nDiagnosis:\nGoogle Compute Engine: Customers might experience increased latency or timeouts when performing VM instance operations, including creation and reservation consumption.\nGoogle Kubernetes Engine: Customers may experience latency while performing operations like creating or deleting clusters, adding or resizing nodepools, etc.\nGoogle Cloud Dataproc: Customers may experience elevated latency while performing operations like creating or deleting clusters, and scale up and scale down operations, etc.\nGoogle Cloud Dataflow: Customers may experience elevated latency for start-up / scaleup / shut-downs for Dataflow jobs..\nCloud Filestore: Customers may experience create instance failures.\nCloud Build: Customers may experience elevated latency in build completion or sporadic build failures due to workers failing to start. Default pools (including the legacy \"global\" region) and private pools are both impacted.\nCloud SQL: Customers may experience failures or elevated latency for instance creation, resizing and high-availability update operations. As a workaround, for failure in the create operations, customers can retry by deleting the failed instances and re-attempt the operation.\nCloud Composer: Customers may experience failures in creation of new Composer environments and in upgrade of Composer/Airflow versions, as well as delays in up-scaling of new airflow-workers and in KubernetesPodOperator tasks.\nAlloyDB for PostgreSQL: Customers may experience failures in instance creation operations. 
In addition, a small number of instance update operations may also see failures.\nGoogle App Engine (Google App Engine Flexible): Customers may experience failures in insert/update/create/delete operations.\nWorkaround:\nCustomers who are experiencing impact are advised to use alternate zones.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Oregon (us-west1)","id":"us-west1"}]},{"created":"2025-05-20T11:07:41+00:00","modified":"2025-05-20T11:28:15+00:00","when":"2025-05-20T11:07:41+00:00","text":"Description:\nWe are experiencing an issue with Google Compute Engine, Google Kubernetes Engine, Cloud Dataflow, Cloud SQL, AlloyDB for PostgreSQL, Cloud Composer, Cloud Build, Cloud Dataproc, Cloud Filestore, Google App Engine (Google App Engine Flexible) beginning on Monday, 2025-05-19 20:23 US/Pacific.\nMitigation work is currently underway by our engineering team.\nWe do not have an ETA for mitigation at this point.\nWe will provide more information by Tuesday, 2025-05-20 04:30 US/Pacific.\nDiagnosis:\nGoogle Compute Engine: Customers might experience increased latency or timeouts when performing VM instance operations, including creation and reservation consumption.\nGoogle Kubernetes Engine: Customers may experience latency while performing operations like creating or deleting clusters, adding or resizing nodepools, etc.\nGoogle Cloud Dataproc: Customers may experience elevated latency while performing operations like creating or deleting clusters, and scale up and scale down operations, etc.\nGoogle Cloud Dataflow: Customers may experience elevated latency for start-up / scaleup / shut-downs for Dataflow jobs..\nCloud Filestore: Customers may experience create instance failures.\nCloud Build: Customers may experience elevated latency in build completion or sporadic build failures due to workers failing to start. Default pools (including the legacy \"global\" region) and private pools are both impacted.\nCloud SQL: Customers may experience failures or elevated latency for instance creation, resizing and high-availability update operations. As a workaround, for failure in the create operations, customers can retry by deleting the failed instances and re-attempt the operation.\nCloud Composer: Customers may experience failures in creation of new Composer environments and in upgrade of Composer/Airflow versions, as well as delays in up-scaling of new airflow-workers and in KubernetesPodOperator tasks.\nAlloyDB for PostgreSQL: Customers may experience failures in instance creation operations. 
In addition, a small number of instance update operations may also see failures.\nGoogle App Engine (Google App Engine Flexible): Customers may experience failures in insert/update/create/delete operations.\nWorkaround:\nCustomers who are experiencing impact are advised to use alternate zones.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Oregon (us-west1)","id":"us-west1"}]}],"most_recent_update":{"created":"2025-05-27T23:06:21+00:00","modified":"2025-05-27T23:06:22+00:00","when":"2025-05-27T23:06:21+00:00","text":"# Incident Report\n## Summary\nOn 19 May 2025, Google Compute Engine (GCE) encountered problems affecting Spot VM termination globally, and performance degradation and timeouts of reservation consumption / VM creation in us-central1 and us-east4 for a duration of 8 hours, 42 minutes. Consequently, multiple other Google Cloud Platform (GCP) products relying on GCE also experienced increased latencies and timeouts.\nTo our customers who were impacted during this disruption, we sincerely apologize. This is not the level of quality and reliability we strive to offer you, and we are taking immediate steps to improve the platform’s performance and availability.\n## Root Cause\nA recently deployed configuration change to a Google Compute Engine (GCE) component mistakenly disabled a feature flag that controlled how VM instance states are reported to other components. Safety checks intended to ensure gradual rollout of this type of change failed to be triggered, resulting in an unplanned rapid rollout of the change.\nThis caused Spot VMs to be stuck in an unexpected state. Consequently, Spot VMs that had initiated their standard termination process due to preemption began to accumulate as they failed to complete termination, creating a backlog that degraded performance for all VM types in some regions.\n## Remediation and Prevention\nGoogle engineers were alerted to the outage via internal monitoring on 19 May 2025, at 21:08 US/Pacific, and immediately started an investigation. Once the nature and scope of the issue became clear, Google engineers initiated a rollback of the change on 20 May 2025 at 03:29 US/Pacific.\nThe rollback completed at 03:55 US/Pacific, mitigating the impact.\nGoogle is committed to preventing a repeat of this issue in the future and is completing the following actions:\n* Google Cloud employs a robust and well-defined methodology for production updates, including a phased rollout approach as standard practice to avoid rapid global changes. This phased approach is meant to ensure that changes are introduced into production gradually and as safely as possible, however, in this case, the safety checks were not enforced. We have paused further feature flag rollouts for the affected system, while we undertake a comprehensive audit of safety checks and fix any exposed gaps that led to the unplanned rapid rollout of this change.\n* We will review and address scalability issues encountered by GCE during the incident.\n* We will improve monitoring coverage of Spot VM deletion workflows.\nGoogle is committed to quickly and continually improving our technology and operations to prevent service disruptions. We appreciate your patience and apologize again for the impact to your organization. 
We thank you for your business.\n## Detailed Description of Impact\nCustomers experienced increased latency for VM control plane operations in us-central1 and us-east4. VM control plane operations include creating, modifying, or deleting VMs. For some customers, Spot VM instances became stuck while terminating. Customers were not billed for Spot VM instances in this state. Furthermore, running virtual machines and the data plane were not impacted.\nVM control plane latency in the us-central1 and us-east4 regions began increasing at the start of the incident (19 May 2025 20:23 US/Pacific), and peaked around 20 May 2025 03:40 US/Pacific. At peak, median latency went from seconds to minutes, and tail latency went from minutes to hours. Several other regions experienced increased tail latency during the outage, but most operations in these regions completed as normal. Once mitigations took effect, median and tail latencies started falling and returned to normal by 05:15 US/Pacific.\nCustomers may have experienced similar latency increases in products which create, modify, failover or delete VM instances: GCE, GKE, Dataflow, Cloud SQL, Google Cloud Dataproc, Google App Engine, Cloud Deploy, Memorystore, Redis, Cloud Filestore, among others.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_DISRUPTION","severity":"medium","service_key":"zall","service_name":"Multiple Products","affected_products":[{"title":"AlloyDB for PostgreSQL","id":"fPovtKbaWN9UTepMm3kJ"},{"title":"Cloud Build","id":"fw8GzBdZdqy4THau7e1y"},{"title":"Cloud Filestore","id":"jog4nyYkquiLeSK5s26q"},{"title":"Colab Enterprise","id":"7Nbc1kZUvPLiihodettN"},{"title":"Google App Engine","id":"kchyUtnkMHJWaAva8aYc"},{"title":"Google Cloud Composer","id":"YxkG5FfcC42cQmvBCk4j"},{"title":"Google Cloud Dataflow","id":"T9bFoXPqG8w8g1YbWTKY"},{"title":"Google Cloud Dataproc","id":"yjXrEg3Yvy26BauMwr69"},{"title":"Google Cloud Deploy","id":"6z5SnvJrJMJQSdJmUQjH"},{"title":"Google Cloud SQL","id":"hV87iK5DcEXKgWU2kDri"},{"title":"Google Compute Engine","id":"L3ggmi3Jy4xJmgodFA9K"},{"title":"Google Kubernetes Engine","id":"LCSbT57h59oR4W98NHuz"},{"title":"Managed Service for Apache Kafka","id":"QMZ3IpyG3Ooxotv7JOKV"},{"title":"Migrate to Virtual Machines","id":"EwEFrihT41NLB9mhyWhz"}],"uri":"incidents/SXRPpPwx2RZ5VHjTwFLx","currently_affected_locations":[],"previously_affected_locations":[{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Oregon (us-west1)","id":"us-west1"}]},{"id":"N3Dw7nbJ7rk7qwrtwh7X","number":"6284910072052476183","begin":"2025-03-29T19:53:00+00:00","created":"2025-03-30T01:30:30+00:00","end":"2025-03-30T02:15:00+00:00","modified":"2025-04-11T16:10:00+00:00","external_desc":"Customers are experiencing connectivity issues with multiple Google Cloud services in zone us-east5-c","updates":[{"created":"2025-04-11T16:10:00+00:00","modified":"2025-04-11T16:10:00+00:00","when":"2025-04-11T16:10:00+00:00","text":"# Incident Report\n## Summary:\nOn Saturday, 29 March 2025, multiple Google Cloud Services in the us-east5-c zone experienced degraded service or unavailability for a duration of 6 hours and 10 minutes. To our Google Cloud customers whose services were impacted during this disruption, we sincerely apologize. 
This is not the level of quality and reliability we strive to offer you, and we are taking immediate steps to improve the platform’s performance and availability.\n## Root Cause:\nThe root cause of the service disruption was a loss of utility power in the affected zone. This power outage triggered a cascading failure within the uninterruptible power supply (UPS) system responsible for maintaining power to the zone during such events. The UPS system, which relies on batteries to bridge the gap between utility power loss and generator power activation, experienced a critical battery failure.\nThis failure rendered the UPS unable to perform its core function of ensuring continuous power to the system. As a direct consequence of the UPS failure, virtual machine instances within the affected zone lost power and went offline, resulting in service downtime for customers. The power outage and subsequent UPS failure also triggered a series of secondary issues, including packet loss within the us-east5-c zone, which impacted network communication and performance. Additionally, a limited number of storage disks within the zone became unavailable during the outage.\n## Remediation and Prevention:\nGoogle engineers were alerted to the incident from our internal monitoring alerts at 12:54 US/Pacific on Saturday, 29 March and immediately started an investigation.\nGoogle engineers diverted traffic away from the impacted location to partially mitigate impact for some services that did not have zonal resource dependencies. Engineers bypassed the failed UPS and restored power via generator by 14:49 US/Pacific on Saturday, 29 March. The majority of Google Cloud services recovered shortly thereafter. A few services experienced longer restoration times as manual actions were required in some cases to complete full recovery.\nGoogle is committed to preventing a repeat of this issue in the future and is completing the following actions:\n* Harden cluster power failure and recovery path to achieve a predictable and faster time-to-serving after power is restored.\n* Audit systems that did not automatically failover and close any gaps that prevented this function.\n* Work with our uninterruptible power supply (UPS) vendor to understand and remediate issues in the battery backup system.\nGoogle is committed to quickly and continually improving our technology and operations to prevent service disruptions. We appreciate your patience and apologize again for the impact to your organization. We thank you for your business.\n## Detailed Description of Impact:\nCustomers experienced degraded service or unavailability for multiple Google Cloud products in the us-east5-c zone of varying impact and severity as noted below:\n**AlloyDB for PostgreSQL:** A few clusters experienced transient unavailability during the failover. Two impacted clusters did not failover automatically and required manual intervention from Google engineers to do the failover.\n**BigQuery:** A few customers in the impacted region experienced brief unavailability of the product between 12:57 US/Pacific until 13:19 US/Pacific.\n**Cloud Bigtable:** The outage resulted in increased errors and latency for a few customers between 12:47 US/Pacific to 19:37 US/Pacific.\n**Cloud Composer:** External streaming jobs for a few customers experienced increased latency for a period of 16 minutes.\n**Cloud Dataflow:** Streaming and batch jobs saw brief periods of performance degradation. 
17% of streaming jobs experienced degradation from 12:52 US/Pacific to 13:08 US/Pacific, while 14% of batch jobs experienced degradation from 15:42 US/Pacific to 16:00 US/Pacific.\n**Cloud Filestore:** All basic, high scale and zonal instances in us-east5-c were unavailable and all enterprise and regional instances in us-east5 were operating in degraded mode from 12:54 to 18:47 US/Pacific on Saturday, 29 March 2025\\.\n**Cloud Firestore:** Limited impact of approximately 2 minutes where customers experienced elevated unavailability and latency, as jobs were being rerouted automatically.\n**Cloud Identity and Access Management:** A few customers experienced slight latency or errors while retrying for a short period of time.\n**Cloud Interconnect:** All us-east5 attachments connected to zone1 were unavailable for a duration of 2 hours, 7 minutes.\n**Cloud Key Management Service:** Customers experienced 5XX errors for a brief period of time (less than 4 mins). Google engineers rerouted the traffic to healthy cells shortly after the power loss to mitigate the impact.\n**Cloud Kubernetes Engine:** Customers experienced terminations of their nodes in us-east5-c. Some zonal clusters in us-east5-c experienced loss of connectivity to their control plane. No impact was observed for nodes or control planes outside of us-east5-c.\n**Cloud NAT:** Transient control plane outage affecting new VM creation processes and/or dynamic port allocation.\n**Cloud Router:** Cloud Router was unavailable for up to 30 seconds while leadership shifted to other clusters. This downtime was within the thresholds of most customer's graceful restart configuration (60 seconds).\n**Cloud SQL:** Based on monitoring data, 318 zonal instances experienced 3h of downtime in the us-east5-c zone. All external high-availability instances successfully failed out of the impacted zone.\n**Cloud Spanner:** Customers in the us-east5 region may have seen a few minutes of errors or latency increase during the few minutes after 12:52 US/Pacific when the cluster first failed.\n**Cloud VPN:** A few legacy customers experienced loss of connectivity of their sessions up to 5 mins.\n**Compute Engine:** Customers experienced instance unavailability and inability to manage instances in us-east5-c from 12:54 to 18:30 US/Pacific on Saturday, 29 March 2025\\.\n**Managed Service for Apache Kafka:** CreateCluster and some UpdateCluster commands (those that increased capacity config) had a 100% error rate in the region, with the symptom being INTERNAL errors or timeouts. Based on our monitoring, the impact was limited to one customer who attempted to use these methods during the incident.\n**Memorystore for Redis:** High availability instances failed over to healthy zones during the incident. 12 instances required manual intervention to bring back provisioned capacity. All instances were recovered by 19:28 US/Pacific.\n**Persistent Disk:** Customers experienced very high I/O latency, including stalled I/O operations or errors in some disks in us-east5-c from 12:54 US/Pacific to 20:45 US/Pacific on Saturday, 29 March 2025\\. Other products using PD or communicating with impacted PD devices experienced service issues with varied symptoms.\n**Secret Manager:** Customers experienced 5XX errors for a brief period of time (less than 4 mins). 
Google engineers rerouted the traffic to healthy cells shortly after the power loss to mitigate the impact.\n**Virtual Private Cloud:** Virtual machine instances running in the us-east5-c zone were unable to reach the network. Services were partially unavailable from the impacted zone. Customers wherever applicable were able to fail over workloads to different Cloud zones.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-04-01T08:53:47+00:00","modified":"2025-04-11T16:10:00+00:00","when":"2025-04-01T08:53:47+00:00","text":"# Mini Incident Report\nWe apologize for the inconvenience this outage may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. If you have experienced impact outside of what is listed below, please reach out to Google Cloud Support using https://cloud.google.com/support\n(All Times US/Pacific)\n**Incident Start:** 29 March 2025 12:53\n**Incident End:** 29 March 2025 19:12\n**Duration:** 6 hours, 19 minutes\n**Affected Services and Features:**\n- AlloyDB for PostgreSQL\n- BigQuery\n- Cloud Bigtable\n- Cloud Composer\n- Cloud Dataflow\n- Cloud Filestore\n- Cloud Firestore\n- Cloud Identity and Access Management\n- Cloud Interconnect\n- Cloud Key Management Service\n- Cloud Kubernetes Engine\n- Cloud NAT\n- Cloud Router\n- Cloud SQL\n- Cloud Spanner\n- Cloud VPN\n- Compute Engine\n- Managed Service for Apache Kafka\n- Memorystore for Redis\n- Persistent Disk\n- Secret Manager\n- Virtual Private Cloud\n**Regions/Zones:** us-east5-c\n**Description:**\nMultiple Google Cloud products were impacted in us-east5-c, with some zonal resources unavailable, for a duration of 6 hours and 19 minutes.\nThe root cause of the issue was a utility power outage in the zone and a subsequent failure of batteries within the uninterruptible power supply (UPS) system supporting a portion of the impacted zone. This failure prevented the UPS from operating correctly, thereby preventing a power source transfer to generators during the utility power outage. As a result, some Compute Engine instances in the zone experienced downtime. The incident also caused some packet loss within the us-east5-c zone, as well as some capacity constraints for Google Kubernetes Engine in other zones of us-east5. Additionally, a small number of Persistent Disks were unavailable during the outage.\nGoogle engineers diverted traffic away from the impacted location to partially mitigate impact for some services that did not have zonal resource dependencies. Engineers bypassed the failed UPS and restored power via generator, allowing the underlying infrastructure to come back online. Impact to all affected Cloud services was mitigated by 29 March 2025 at 19:12 US/Pacific.\nGoogle will complete a full Incident Report in the following days that will provide a detailed root cause analysis.\n**Customer Impact:**\nCustomers experienced degraded service or zonal unavailability for multiple Google Cloud products in us-east5-c.\n**Additional details:**\nThe us-east5-c zone has transitioned back to primary power without further impact as of 30 March 2025 at 17:30 US/Pacific.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-03-30T02:43:53+00:00","modified":"2025-04-01T08:55:14+00:00","when":"2025-03-30T02:43:53+00:00","text":"Currently, the us-east5-c zone is stable on an alternate power source. 
All previously impacted products are mitigated as of 19:12 US/Pacific.\nA small number of Persistent Disks remain in recovery and are actively being worked on. Customers still experiencing issues attaching Persistent Disks should open a support case.\nOur engineers continue to monitor service stability prior to transitioning back to primary power.\nWe will provide continuing updates via PSH by Sunday, 2025-03-30 01:30 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-03-30T01:30:30+00:00","modified":"2025-04-01T08:39:26+00:00","when":"2025-03-30T01:30:30+00:00","text":"Our engineers are actively working on recovery following a power event in the affected zone. Full recovery is currently expected to take several hours.\nThe impacted services include Cloud Interconnect, Virtual Private Cloud (VPC), Google Compute Engine, Persistent Disk, AlloyDB for PostgreSQL, Cloud Dataproc, Cloud Dataflow, Cloud Filestore, Identity and Access Management, Cloud SQL, Google Kubernetes Engine, Cloud Composer, BigQuery, Cloud Bigtable and more.\nWe have determined that no other zones (a, b) in the us-east5 region are impacted.\nWe will provide an update by Saturday, 2025-03-29 20:00 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Columbus (us-east5)","id":"us-east5"}]}],"most_recent_update":{"created":"2025-04-11T16:10:00+00:00","modified":"2025-04-11T16:10:00+00:00","when":"2025-04-11T16:10:00+00:00","text":"# Incident Report\n## Summary:\nOn Saturday, 29 March 2025, multiple Google Cloud Services in the us-east5-c zone experienced degraded service or unavailability for a duration of 6 hours and 10 minutes. To our Google Cloud customers whose services were impacted during this disruption, we sincerely apologize. This is not the level of quality and reliability we strive to offer you, and we are taking immediate steps to improve the platform’s performance and availability.\n## Root Cause:\nThe root cause of the service disruption was a loss of utility power in the affected zone. This power outage triggered a cascading failure within the uninterruptible power supply (UPS) system responsible for maintaining power to the zone during such events. The UPS system, which relies on batteries to bridge the gap between utility power loss and generator power activation, experienced a critical battery failure.\nThis failure rendered the UPS unable to perform its core function of ensuring continuous power to the system. As a direct consequence of the UPS failure, virtual machine instances within the affected zone lost power and went offline, resulting in service downtime for customers. The power outage and subsequent UPS failure also triggered a series of secondary issues, including packet loss within the us-east5-c zone, which impacted network communication and performance. Additionally, a limited number of storage disks within the zone became unavailable during the outage.\n## Remediation and Prevention:\nGoogle engineers were alerted to the incident from our internal monitoring alerts at 12:54 US/Pacific on Saturday, 29 March and immediately started an investigation.\nGoogle engineers diverted traffic away from the impacted location to partially mitigate impact for some services that did not have zonal resource dependencies. 
Engineers bypassed the failed UPS and restored power via generator by 14:49 US/Pacific on Saturday, 29 March. The majority of Google Cloud services recovered shortly thereafter. A few services experienced longer restoration times as manual actions were required in some cases to complete full recovery.\nGoogle is committed to preventing a repeat of this issue in the future and is completing the following actions:\n* Harden cluster power failure and recovery path to achieve a predictable and faster time-to-serving after power is restored.\n* Audit systems that did not automatically failover and close any gaps that prevented this function.\n* Work with our uninterruptible power supply (UPS) vendor to understand and remediate issues in the battery backup system.\nGoogle is committed to quickly and continually improving our technology and operations to prevent service disruptions. We appreciate your patience and apologize again for the impact to your organization. We thank you for your business.\n## Detailed Description of Impact:\nCustomers experienced degraded service or unavailability for multiple Google Cloud products in the us-east5-c zone of varying impact and severity as noted below:\n**AlloyDB for PostgreSQL:** A few clusters experienced transient unavailability during the failover. Two impacted clusters did not failover automatically and required manual intervention from Google engineers to do the failover.\n**BigQuery:** A few customers in the impacted region experienced brief unavailability of the product between 12:57 US/Pacific until 13:19 US/Pacific.\n**Cloud Bigtable:** The outage resulted in increased errors and latency for a few customers between 12:47 US/Pacific to 19:37 US/Pacific.\n**Cloud Composer:** External streaming jobs for a few customers experienced increased latency for a period of 16 minutes.\n**Cloud Dataflow:** Streaming and batch jobs saw brief periods of performance degradation. 17% of streaming jobs experienced degradation from 12:52 US/Pacific to 13:08 US/Pacific, while 14% of batch jobs experienced degradation from 15:42 US/Pacific to 16:00 US/Pacific.\n**Cloud Filestore:** All basic, high scale and zonal instances in us-east5-c were unavailable and all enterprise and regional instances in us-east5 were operating in degraded mode from 12:54 to 18:47 US/Pacific on Saturday, 29 March 2025\\.\n**Cloud Firestore:** Limited impact of approximately 2 minutes where customers experienced elevated unavailability and latency, as jobs were being rerouted automatically.\n**Cloud Identity and Access Management:** A few customers experienced slight latency or errors while retrying for a short period of time.\n**Cloud Interconnect:** All us-east5 attachments connected to zone1 were unavailable for a duration of 2 hours, 7 minutes.\n**Cloud Key Management Service:** Customers experienced 5XX errors for a brief period of time (less than 4 mins). Google engineers rerouted the traffic to healthy cells shortly after the power loss to mitigate the impact.\n**Cloud Kubernetes Engine:** Customers experienced terminations of their nodes in us-east5-c. Some zonal clusters in us-east5-c experienced loss of connectivity to their control plane. No impact was observed for nodes or control planes outside of us-east5-c.\n**Cloud NAT:** Transient control plane outage affecting new VM creation processes and/or dynamic port allocation.\n**Cloud Router:** Cloud Router was unavailable for up to 30 seconds while leadership shifted to other clusters. 
This downtime was within the thresholds of most customer's graceful restart configuration (60 seconds).\n**Cloud SQL:** Based on monitoring data, 318 zonal instances experienced 3h of downtime in the us-east5-c zone. All external high-availability instances successfully failed out of the impacted zone.\n**Cloud Spanner:** Customers in the us-east5 region may have seen a few minutes of errors or latency increase during the few minutes after 12:52 US/Pacific when the cluster first failed.\n**Cloud VPN:** A few legacy customers experienced loss of connectivity of their sessions up to 5 mins.\n**Compute Engine:** Customers experienced instance unavailability and inability to manage instances in us-east5-c from 12:54 to 18:30 US/Pacific on Saturday, 29 March 2025\\.\n**Managed Service for Apache Kafka:** CreateCluster and some UpdateCluster commands (those that increased capacity config) had a 100% error rate in the region, with the symptom being INTERNAL errors or timeouts. Based on our monitoring, the impact was limited to one customer who attempted to use these methods during the incident.\n**Memorystore for Redis:** High availability instances failed over to healthy zones during the incident. 12 instances required manual intervention to bring back provisioned capacity. All instances were recovered by 19:28 US/Pacific.\n**Persistent Disk:** Customers experienced very high I/O latency, including stalled I/O operations or errors in some disks in us-east5-c from 12:54 US/Pacific to 20:45 US/Pacific on Saturday, 29 March 2025\\. Other products using PD or communicating with impacted PD devices experienced service issues with varied symptoms.\n**Secret Manager:** Customers experienced 5XX errors for a brief period of time (less than 4 mins). Google engineers rerouted the traffic to healthy cells shortly after the power loss to mitigate the impact.\n**Virtual Private Cloud:** Virtual machine instances running in the us-east5-c zone were unable to reach the network. Services were partially unavailable from the impacted zone. 
Customers wherever applicable were able to fail over workloads to different Cloud zones.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_OUTAGE","severity":"high","service_key":"zall","service_name":"Multiple Products","affected_products":[{"title":"AlloyDB for PostgreSQL","id":"fPovtKbaWN9UTepMm3kJ"},{"title":"Cloud Firestore","id":"CETSkT92V21G6A1x28me"},{"title":"Google BigQuery","id":"9CcrhHUcFevXPSVaSxkf"},{"title":"Google Cloud Bigtable","id":"LfZSuE3xdQU46YMFV5fy"},{"title":"Google Cloud Composer","id":"YxkG5FfcC42cQmvBCk4j"},{"title":"Google Cloud Dataflow","id":"T9bFoXPqG8w8g1YbWTKY"},{"title":"Google Cloud Dataproc","id":"yjXrEg3Yvy26BauMwr69"},{"title":"Google Cloud SQL","id":"hV87iK5DcEXKgWU2kDri"},{"title":"Google Compute Engine","id":"L3ggmi3Jy4xJmgodFA9K"},{"title":"Google Kubernetes Engine","id":"LCSbT57h59oR4W98NHuz"},{"title":"Hybrid Connectivity","id":"5x6CGnZvSHQZ26KtxpK1"},{"title":"Identity and Access Management","id":"adnGEDEt9zWzs8uF1oKA"},{"title":"Persistent Disk","id":"SzESm2Ux129pjDGKWD68"},{"title":"Virtual Private Cloud (VPC)","id":"BSGtCUnz6ZmyajsjgTKv"}],"uri":"incidents/N3Dw7nbJ7rk7qwrtwh7X","currently_affected_locations":[],"previously_affected_locations":[{"title":"Columbus (us-east5)","id":"us-east5"}]},{"id":"hdknJ5aWh8KCAimNhTHe","number":"12620528886240411317","begin":"2025-03-04T20:04:19+00:00","created":"2025-03-04T21:33:01+00:00","end":"2025-03-04T21:40:41+00:00","modified":"2025-03-04T21:40:43+00:00","external_desc":"Apigee customers may experience unable to login to Admin UI portal.","updates":[{"created":"2025-03-04T21:40:41+00:00","modified":"2025-03-04T21:40:44+00:00","when":"2025-03-04T21:40:41+00:00","text":"The issue with Apigee Hybrid, Apigee Edge Public Cloud, Apigee has been resolved for all affected users as of Tuesday, 2025-03-04 13:30 US/Pacific.\nFrom preliminary analysis root cause appears to be due certificate expiration.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-03-04T21:32:59+00:00","modified":"2025-03-04T21:40:43+00:00","when":"2025-03-04T21:32:59+00:00","text":"Summary: Apigee customers may experience unable to login to Admin UI portal.\nDescription: We are experiencing an issue with Apigee beginning at Tuesday, 2025-03-04 01:25 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Tuesday, 2025-03-04 14:15 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Portals Admin UI unable to log in to portals\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid 
(europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]}],"most_recent_update":{"created":"2025-03-04T21:40:41+00:00","modified":"2025-03-04T21:40:44+00:00","when":"2025-03-04T21:40:41+00:00","text":"The issue with Apigee Hybrid, Apigee Edge Public Cloud, Apigee has been resolved for all affected users as of Tuesday, 2025-03-04 13:30 US/Pacific.\nFrom preliminary analysis root cause appears to be due certificate expiration.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_INFORMATION","severity":"low","service_key":"zall","service_name":"Multiple Products","affected_products":[{"title":"Apigee","id":"9Y13BNFy4fJydvjdsN3X"},{"title":"Apigee Edge Public Cloud","id":"SumcdgBT6GQBzp1vmdXu"},{"title":"Apigee Hybrid","id":"6gaft97Gv5hGQAJg6D3J"}],"uri":"incidents/hdknJ5aWh8KCAimNhTHe","currently_affected_locations":[],"previously_affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan 
(europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"id":"32iSTecJmvVhCPRvCuWX","number":"2482435240037512913","begin":"2025-02-19T09:14:00+00:00","created":"2025-02-19T15:14:17+00:00","end":"2025-02-20T07:56:52+00:00","modified":"2025-02-20T07:56:53+00:00","external_desc":"Cloud Asset Inventory customers' queries may not return result","updates":[{"created":"2025-02-20T07:56:52+00:00","modified":"2025-02-20T07:56:55+00:00","when":"2025-02-20T07:56:52+00:00","text":"The issue with Cloud Asset Inventory has been resolved for all affected users as of Wednesday, 2025-02-19 23:20 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-02-20T06:11:34+00:00","modified":"2025-02-20T07:56:53+00:00","when":"2025-02-20T06:11:34+00:00","text":"Summary: Cloud Asset Inventory customers' queries may not return result\nDescription: Mitigation work is currently underway by our engineering team.\nWe do not have an ETA for mitigation at this point.\nWe will provide more information by Thursday, 2025-02-20 03:00 US/Pacific.\nDiagnosis: Cloud Asset Inventory customers impacted by this issue may see empty result from their QueryAssets queries.\nWorkaround: None at this time","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Global","id":"global"}]},{"created":"2025-02-19T22:05:03+00:00","modified":"2025-02-20T06:11:34+00:00","when":"2025-02-19T22:05:03+00:00","text":"Summary: Cloud Asset Inventory customers' queries may not return result\nDescription: Mitigation work is currently underway by our engineering team.\nWe do not have an ETA for mitigation at this point.\nWe will provide more information by Wednesday, 2025-02-19 22:00 US/Pacific.\nDiagnosis: Cloud Asset Inventory customers impacted by this issue may see empty result from their QueryAssets queries.\nWorkaround: None at this time","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Global","id":"global"}]},{"created":"2025-02-19T20:07:29+00:00","modified":"2025-02-19T22:05:03+00:00","when":"2025-02-19T20:07:29+00:00","text":"Summary: Cloud Asset Inventory customers' queries may not return result\nDescription: Mitigation work is currently underway by our engineering team.\nWe do not have an ETA for mitigation at this point.\nWe will provide more information by Wednesday, 2025-02-19 14:30 US/Pacific.\nDiagnosis: Cloud Asset Inventory customers impacted by this issue may see empty result from their QueryAssets queries.\nWorkaround: None at this 
time","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Global","id":"global"}]},{"created":"2025-02-19T15:14:04+00:00","modified":"2025-02-19T20:07:29+00:00","when":"2025-02-19T15:14:04+00:00","text":"Summary: Cloud Asset Inventory customers' queries may not return result\nDescription: We are experiencing an issue with Cloud Asset Inventory beginning at Wednesday, 2025-02-19 01:14 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Wednesday, 2025-02-19 12:00 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Cloud Asset Inventory customers impacted by this issue may see empty result from their QueryAssets queries.\nWorkaround: None at this time","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Global","id":"global"}]}],"most_recent_update":{"created":"2025-02-20T07:56:52+00:00","modified":"2025-02-20T07:56:55+00:00","when":"2025-02-20T07:56:52+00:00","text":"The issue with Cloud Asset Inventory has been resolved for all affected users as of Wednesday, 2025-02-19 23:20 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_INFORMATION","severity":"low","service_key":"r1uH3MduHyHHH4P6z18G","service_name":"Cloud Asset Inventory","affected_products":[{"title":"Cloud Asset Inventory","id":"r1uH3MduHyHHH4P6z18G"}],"uri":"incidents/32iSTecJmvVhCPRvCuWX","currently_affected_locations":[],"previously_affected_locations":[{"title":"Global","id":"global"}]},{"id":"r23WwsX2tpSN7RyFs83c","number":"7947296736152757302","begin":"2025-02-18T22:42:52+00:00","created":"2025-02-19T00:29:35+00:00","end":"2025-02-19T02:41:56+00:00","modified":"2025-02-19T11:03:46+00:00","external_desc":"SIEM Dashboards and SOAR Advanced Dashboards for Google Security Operations (SecOps) were unavailable.","updates":[{"created":"2025-02-19T11:03:46+00:00","modified":"2025-02-19T11:03:46+00:00","when":"2025-02-19T11:03:46+00:00","text":"# Mini Incident Report\nWe apologize for the inconvenience this service disruption/outage may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. If you have experienced impact outside of what is listed below, please reach out to Google Cloud Support using https://cloud.google.com/support.\n(All Times US/Pacific)\nIncident Start: 18 February 2025, 14:42\nIncident End: 18 February 2025, 18:40\nDuration: 3 hours 58 minutes\nAffected Services and Features:\n* Chronicle Security\n* Chronicle SOAR\nRegions/Zones: Global\nDescription:\nStarting on 18 February 2025 14:42 US/Pacific, Chronicle Security and Chronicle SOAR experienced an issue where customers encountered the error message \"An error occurred while loading dashboards\" when attempting to access dashboards in Google Cloud Security for a duration of 3 hours 58 minutes. 
From preliminary analysis, the root cause of the issue is related to a revocation of associated internal looker licenses - all revoked licenses were restored to mitigate an issue.\nCustomer Impact:\nCustomers encountered an error message \"An error occurred while loading dashboards\" when attempting to access certain security dashboards in Google Cloud.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-02-19T02:41:56+00:00","modified":"2025-02-19T11:03:46+00:00","when":"2025-02-19T02:41:56+00:00","text":"The issue with Chronicle Security, Chronicle SOAR has been resolved for all affected users as of Tuesday, 2025-02-18 18:35 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-02-19T01:59:24+00:00","modified":"2025-02-19T02:41:58+00:00","when":"2025-02-19T01:59:24+00:00","text":"Summary: SIEM Dashboards and SOAR Advanced Dashboards for Google Security Operations (SecOps) are unavailable.\nDescription: Our engineering team has identified the root cause and we are now in the process of restoring the services incrementally.\nServices are being restored one by one and we anticipate they will become fully functional by Tuesday, 2025-02-18 19:00.\nWe will provide an update by Tuesday, 2025-02-18 19:30 US/Pacific with current details.\nDiagnosis: Customers impacted by this issue would encounter an error message \"An error occurred while loading dashboards\" when attempting to access SIEM Dashboards and SOAR Advanced Dashboards in Google SecOps\nWorkaround: None at this time.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Multi-region: europe","id":"europe"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"}]},{"created":"2025-02-19T00:29:33+00:00","modified":"2025-02-19T01:59:24+00:00","when":"2025-02-19T00:29:33+00:00","text":"Summary: SIEM Dashboards and SOAR Advanced Dashboards for Google Security Operations (SecOps) are unavailable.\nDescription: SIEM Dashboards and SOAR Advanced Dashboards for SecOps are unavailable beginning on Tuesday, 2025-02-18 14:42 US/Pacific.\nAs a result, Dashboards to visualize data trends are getting errors while loading.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Tuesday, 2025-02-18 18:00 US/Pacific with current details.\nDiagnosis: Customers impacted by this issue would encounter an error message \"An error occurred while loading dashboards\" when attempting to access SIEM Dashboards and SOAR 
Advanced Dashboards in Google SecOps\nWorkaround: None at this time.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Multi-region: europe","id":"europe"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"}]}],"most_recent_update":{"created":"2025-02-19T11:03:46+00:00","modified":"2025-02-19T11:03:46+00:00","when":"2025-02-19T11:03:46+00:00","text":"# Mini Incident Report\nWe apologize for the inconvenience this service disruption/outage may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. If you have experienced impact outside of what is listed below, please reach out to Google Cloud Support using https://cloud.google.com/support.\n(All Times US/Pacific)\nIncident Start: 18 February 2025, 14:42\nIncident End: 18 February 2025, 18:40\nDuration: 3 hours 58 minutes\nAffected Services and Features:\n* Chronicle Security\n* Chronicle SOAR\nRegions/Zones: Global\nDescription:\nStarting on 18 February 2025 14:42 US/Pacific, Chronicle Security and Chronicle SOAR experienced an issue where customers encountered the error message \"An error occurred while loading dashboards\" when attempting to access dashboards in Google Cloud Security for a duration of 3 hours 58 minutes. 
From preliminary analysis, the root cause of the issue is related to a revocation of associated internal looker licenses - all revoked licenses were restored to mitigate an issue.\nCustomer Impact:\nCustomers encountered an error message \"An error occurred while loading dashboards\" when attempting to access certain security dashboards in Google Cloud.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_DISRUPTION","severity":"medium","service_key":"zall","service_name":"Multiple Products","affected_products":[{"title":"Chronicle SOAR","id":"GTT16Lf72XZKWArC9VxA"},{"title":"Chronicle Security","id":"FHwvkSZ6RzzDYAvDZXMM"}],"uri":"incidents/r23WwsX2tpSN7RyFs83c","currently_affected_locations":[],"previously_affected_locations":[{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Multi-region: europe","id":"europe"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Multi-region: us","id":"us"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"}]},{"id":"YzMELUzpd8rYgwYt714D","number":"14132312074564100052","begin":"2025-02-16T12:48:58+00:00","created":"2025-02-16T15:33:10+00:00","end":"2025-02-16T15:49:43+00:00","modified":"2025-02-17T09:23:03+00:00","external_desc":"Vertex AI Search for commerce customers may observe 100% error rate for certain IPs.","updates":[{"created":"2025-02-16T15:49:43+00:00","modified":"2025-02-17T09:23:03+00:00","when":"2025-02-16T15:49:43+00:00","text":"The issue with Vertex AI Search, Recommendation AI has been resolved for all affected users as of Sunday, 2025-02-16 07:21 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-02-16T15:32:51+00:00","modified":"2025-02-16T15:49:54+00:00","when":"2025-02-16T15:32:51+00:00","text":"Summary: Vertex AI Search for commerce customers may observe 100% error rate for certain IPs.\nDescription: We are experiencing an issue with Vertex AI Search, Recommendation AI beginning on Sunday, 2025-02-16 04:48 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Sunday, 2025-02-16 08:15 US/Pacific with current details.\nDiagnosis: Vertex AI Search for commerce customers may observe 100% error rate for certain IPs\nWorkaround: None at this time.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"Multi-region: eu","id":"eu"},{"title":"Global","id":"global"},{"title":"Multi-region: us","id":"us"}]}],"most_recent_update":{"created":"2025-02-16T15:49:43+00:00","modified":"2025-02-17T09:23:03+00:00","when":"2025-02-16T15:49:43+00:00","text":"The issue with Vertex AI Search, Recommendation 
AI has been resolved for all affected users as of Sunday, 2025-02-16 07:21 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_DISRUPTION","severity":"medium","service_key":"zall","service_name":"Multiple Products","affected_products":[{"title":"Recommendation AI","id":"jWSoZzR1kkyiDi9C5GMM"},{"title":"Vertex AI Search","id":"vNncXxtSVvqyhvSkQ6PJ"}],"uri":"incidents/YzMELUzpd8rYgwYt714D","currently_affected_locations":[],"previously_affected_locations":[{"title":"Multi-region: eu","id":"eu"},{"title":"Global","id":"global"},{"title":"Multi-region: us","id":"us"}]},{"id":"eya5zBxFRFhNqBhUXe6Q","number":"4053479891814018787","begin":"2025-02-13T20:13:52+00:00","created":"2025-02-14T04:23:18+00:00","end":"2025-02-14T05:39:53+00:00","modified":"2025-02-14T05:39:59+00:00","external_desc":"Chronicle Security experienced issues related to feed based ingestion in europe-west2","updates":[{"created":"2025-02-14T05:39:53+00:00","modified":"2025-02-14T05:40:00+00:00","when":"2025-02-14T05:39:53+00:00","text":"The issue with Chronicle Security has been resolved for all affected users as of Thursday, 2025-02-13 21:38 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-02-14T04:23:16+00:00","modified":"2025-02-14T05:39:59+00:00","when":"2025-02-14T04:23:16+00:00","text":"Summary: Chronicle Security is experiencing issues related to feed based ingestion in europe-west2\nDescription: We are experiencing an issue with Chronicle Security beginning on Thursday, 2025-02-13 12:13 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Thursday, 2025-02-13 23:30 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Customers using feed based ingestion may see delays in detection and recent logs are unavailable in searches. 
Data will be available once the incident is mitigated.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"London (europe-west2)","id":"europe-west2"}]}],"most_recent_update":{"created":"2025-02-14T05:39:53+00:00","modified":"2025-02-14T05:40:00+00:00","when":"2025-02-14T05:39:53+00:00","text":"The issue with Chronicle Security has been resolved for all affected users as of Thursday, 2025-02-13 21:38 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_INFORMATION","severity":"low","service_key":"FHwvkSZ6RzzDYAvDZXMM","service_name":"Chronicle Security","affected_products":[{"title":"Chronicle Security","id":"FHwvkSZ6RzzDYAvDZXMM"}],"uri":"incidents/eya5zBxFRFhNqBhUXe6Q","currently_affected_locations":[],"previously_affected_locations":[{"title":"London (europe-west2)","id":"europe-west2"}]},{"id":"3C3D9dLK9dkx8kRdc72a","number":"7627331540311700362","begin":"2025-02-02T02:00:00+00:00","created":"2025-02-03T07:49:36+00:00","end":"2025-02-03T09:54:55+00:00","modified":"2025-02-03T09:54:59+00:00","external_desc":"Chronicle Security users experiencing an issue in us-multiregions","updates":[{"created":"2025-02-03T09:54:55+00:00","modified":"2025-02-03T09:55:04+00:00","when":"2025-02-03T09:54:55+00:00","text":"The issue with Chronicle Security has been resolved for all affected users as of Monday, 2025-02-03 01:49 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-02-03T09:15:25+00:00","modified":"2025-02-03T09:54:59+00:00","when":"2025-02-03T09:15:25+00:00","text":"Summary: Chronicle Security users experiencing an issue in us-multiregions\nDescription: We are experiencing an issue with Chronicle Security. 
Our engineering team is working with a mitigation strategy.\nWe will provide more information by Monday, 2025-02-03 04:00 US/Pacific.\nDiagnosis: Impacted customers will observe the Risk Analytics Dashboard details are not up-to-date while using Chronicle security in us-multiregions.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Multi-region: us","id":"us"}]},{"created":"2025-02-03T07:49:33+00:00","modified":"2025-02-03T09:15:25+00:00","when":"2025-02-03T07:49:33+00:00","text":"Summary: Chronicle Security users experiencing an issue in us-multiregions\nDescription: We are experiencing an issue with Chronicle Security beginning at Sunday, 2025-02-02 18:00 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Monday, 2025-02-03 02:00 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Impacted customers will observe the Risk Analytics Dashboard details are not up-to-date while using Chronicle security in us-multiregions.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Multi-region: us","id":"us"}]}],"most_recent_update":{"created":"2025-02-03T09:54:55+00:00","modified":"2025-02-03T09:55:04+00:00","when":"2025-02-03T09:54:55+00:00","text":"The issue with Chronicle Security has been resolved for all affected users as of Monday, 2025-02-03 01:49 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_INFORMATION","severity":"low","service_key":"FHwvkSZ6RzzDYAvDZXMM","service_name":"Chronicle Security","affected_products":[{"title":"Chronicle Security","id":"FHwvkSZ6RzzDYAvDZXMM"}],"uri":"incidents/3C3D9dLK9dkx8kRdc72a","currently_affected_locations":[],"previously_affected_locations":[{"title":"Multi-region: us","id":"us"}]},{"id":"n8QFYMxUxe65sum9P1gk","number":"6338697417315324530","begin":"2025-01-31T00:53:00+00:00","created":"2025-01-31T09:05:14+00:00","end":"2025-01-31T14:13:00+00:00","modified":"2025-01-31T20:14:05+00:00","external_desc":"Chronicle Security - WORKSPACE_ACTIVITY data ingestion observed delays in asia-southeast1 \u0026 asia-south1 regions","updates":[{"created":"2025-01-31T20:13:45+00:00","modified":"2025-01-31T20:14:05+00:00","when":"2025-01-31T20:13:45+00:00","text":"## \\# Mini Incident Report\nWe apologize for the inconvenience this service disruption/outage may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. 
If you have experienced impact outside of what is listed below, please reach out to Google Cloud Support using [***https://cloud.google.com/support***](https://cloud.google.com/support).\n(All Times US/Pacific)\n**Incident Start:** 30 January 2025, 16:53\n**Incident End:** 31 January 2025 06:13\n**Duration:** 13 hours, 20 minutes\n**Affected Services and Features:**\nChronicle Security\n**Regions/Zones:**\nasia-south1, asia-southeast1\n**Description:**\nChronicle Security WORKSPACE\\_ACTIVITY data ingestion observed delays in asia-southeast1 and asia-south1 regions for 13 hours, 20 minutes due to network connectivity issues between the trans-Pacific regions.\nWhile the network connectivity issues are being fixed, the data ingestion delays were mitigated by increasing the Network Quality of Service (QoS) for this traffic to ensure timely processing.\n**Customer Impact:**\n* Chronicle Security customers observed WORKSPACE\\_ACTIVITY data was delayed in the Secops instance. The impacted regions processed only around one third of the actual traffic.\n* Rule detections which depend on WORKSPACE\\_ACTIVITY data may have been delayed.\n* Search on WORKSPACE\\_ACTIVITY data may have shown fewer events due to ingestion delays.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-31T14:14:33+00:00","modified":"2025-01-31T20:13:45+00:00","when":"2025-01-31T14:14:33+00:00","text":"The issue with Chronicle Security has been resolved for all affected users as of Friday, 2025-01-31 06:13 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-31T13:50:26+00:00","modified":"2025-01-31T14:14:37+00:00","when":"2025-01-31T13:50:26+00:00","text":"Summary: Chronicle Security - WORKSPACE_ACTIVITY data ingestion observed delays in asia-southeast1 \u0026 asia-south1 regions\nDescription: We are experiencing an issue with Chronicle Security beginning on Thursday, 2025-01-30 16:53 US/Pacific.\nMitigation work is currently underway by our engineering team.\nWe do not have an ETA for mitigation at this point.\nWe will provide more information by Friday, 2025-01-31 07:00 US/Pacific.\nDiagnosis: Customers impacted by this issue will see that WORKSPACE_ACTIVITY data is delayed in the Secops instance. Rule detections which depend on this data will be delayed. Search on this data will show fewer events.\nWorkaround: None at this time.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"}]},{"created":"2025-01-31T10:33:27+00:00","modified":"2025-01-31T13:50:26+00:00","when":"2025-01-31T10:33:27+00:00","text":"Summary: Chronicle Security - WORKSPACE_ACTIVITY data ingestion observed delays in asia-southeast1 \u0026 asia-south1 regions\nDescription: We are experiencing an issue with Chronicle Security beginning on Thursday, 2025-01-30 16:53 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Friday, 2025-01-31 12:00 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Customers impacted by this issue will see that WORKSPACE_ACTIVITY data is delayed in the Secops instance. Rule detections which depend on this data will be delayed. 
Search on this data will show fewer events.\nWorkaround: None at this time.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"}]},{"created":"2025-01-31T09:05:12+00:00","modified":"2025-01-31T10:33:27+00:00","when":"2025-01-31T09:05:12+00:00","text":"Summary: Chronicle Security - WORKSPACE_ACTIVITY data ingestion observed delays in asia-southeast1 \u0026 asia-south1 regions\nDescription: We are experiencing an issue with Chronicle Security beginning on Thursday, 2025-01-30 16:53 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Friday, 2025-01-31 03:00 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Customers impacted by this issue will see that WORKSPACE_ACTIVITY data is delayed in the Secops instance. Rule detections which depend on this data will be delayed. Search on this data will show fewer events.\nWorkaround: None at this time.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"}]}],"most_recent_update":{"created":"2025-01-31T20:13:45+00:00","modified":"2025-01-31T20:14:05+00:00","when":"2025-01-31T20:13:45+00:00","text":"## \\# Mini Incident Report\nWe apologize for the inconvenience this service disruption/outage may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. If you have experienced impact outside of what is listed below, please reach out to Google Cloud Support using [***https://cloud.google.com/support***](https://cloud.google.com/support).\n(All Times US/Pacific)\n**Incident Start:** 30 January 2025, 16:53\n**Incident End:** 31 January 2025 06:13\n**Duration:** 13 hours, 20 minutes\n**Affected Services and Features:**\nChronicle Security\n**Regions/Zones:**\nasia-south1, asia-southeast1\n**Description:**\nChronicle Security WORKSPACE\\_ACTIVITY data ingestion observed delays in asia-southeast1 and asia-south1 regions for 13 hours, 20 minutes due to network connectivity issues between the trans-Pacific regions.\nWhile the network connectivity issues are being fixed, the data ingestion delays were mitigated by increasing the Network Quality of Service (QoS) for this traffic to ensure timely processing.\n**Customer Impact:**\n* Chronicle Security customers observed WORKSPACE\\_ACTIVITY data was delayed in the Secops instance. 
The impacted regions processed only around one third of the actual traffic.\n* Rule detections which depend on WORKSPACE\\_ACTIVITY data may have been delayed.\n* Search on WORKSPACE\\_ACTIVITY data may have shown fewer events due to ingestion delays.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_DISRUPTION","severity":"medium","service_key":"FHwvkSZ6RzzDYAvDZXMM","service_name":"Chronicle Security","affected_products":[{"title":"Chronicle Security","id":"FHwvkSZ6RzzDYAvDZXMM"}],"uri":"incidents/n8QFYMxUxe65sum9P1gk","currently_affected_locations":[],"previously_affected_locations":[{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"}]},{"id":"jPyjAMj7j3NksnWVMTRt","number":"11727232941456060067","begin":"2025-01-28T12:50:17+00:00","created":"2025-01-29T00:40:41+00:00","end":"2025-01-29T04:54:18+00:00","modified":"2025-01-29T04:54:21+00:00","external_desc":"Cloud Translation experiencing elevated latency and error rates","updates":[{"created":"2025-01-29T04:54:18+00:00","modified":"2025-01-29T04:54:22+00:00","when":"2025-01-29T04:54:18+00:00","text":"The issue with Cloud Translation has been resolved for all affected users as of Tuesday, 2025-01-28 20:53 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-29T02:54:20+00:00","modified":"2025-01-29T04:54:21+00:00","when":"2025-01-29T02:54:20+00:00","text":"Summary: Cloud Translation experiencing elevated latency and error rates\nDescription: Mitigation work is currently underway by our engineering team. We are seeing signs of recovery and continue to monitor for further impact.\nWe do not have an ETA for mitigation at this point.\nWe will provide more information by Tuesday, 2025-01-28 23:00 US/Pacific.\nDiagnosis: Customers may experience elevated latency, error rates including 'resource exhausted' or service unavailable.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Global","id":"global"}]},{"created":"2025-01-29T00:40:38+00:00","modified":"2025-01-29T02:54:20+00:00","when":"2025-01-29T00:40:38+00:00","text":"Summary: Cloud Translation experiencing elevated latency and error rates\nDescription: Mitigation work is currently underway by our engineering team.\nWe do not have an ETA for mitigation at this point.\nWe will provide more information by Tuesday, 2025-01-28 19:00 US/Pacific.\nDiagnosis: Customers may experience elevated latency, error rates including 'resource exhausted' or service unavailable.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Global","id":"global"}]}],"most_recent_update":{"created":"2025-01-29T04:54:18+00:00","modified":"2025-01-29T04:54:22+00:00","when":"2025-01-29T04:54:18+00:00","text":"The issue with Cloud Translation has been resolved for all affected users as of Tuesday, 2025-01-28 20:53 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_INFORMATION","severity":"low","service_key":"zall","service_name":"Multiple Products","affected_products":[{"title":"Cloud Machine Learning","id":"z9PfKanGZYvYNUbnKzRJ"},{"title":"Cloud 
Translation","id":"gCLTpLXcWqCKtcyUyHCF"}],"uri":"incidents/jPyjAMj7j3NksnWVMTRt","currently_affected_locations":[],"previously_affected_locations":[{"title":"Global","id":"global"}]},{"id":"uqPLSADLwLztWWcLCPfz","number":"10659277538773394775","begin":"2025-01-28T12:50:17+00:00","created":"2025-01-28T18:33:21+00:00","end":"2025-01-28T22:16:39+00:00","modified":"2025-01-28T22:16:41+00:00","external_desc":"Cloud Translation experienced elevated latency and error rates","updates":[{"created":"2025-01-28T22:16:39+00:00","modified":"2025-01-28T22:16:42+00:00","when":"2025-01-28T22:16:39+00:00","text":"The issue with Cloud Translation has been resolved for all affected users as of Tuesday, 2025-01-28 14:00 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-28T21:36:09+00:00","modified":"2025-01-28T22:16:41+00:00","when":"2025-01-28T21:36:09+00:00","text":"Summary: Cloud Translation experiencing elevated latency and error rates\nDescription: Our engineering team has mitigated the issue and are showing signs of\nrecovery. We will continue to monitor the service for stability.\nWe will provide more information by Tuesday, 2025-01-28 15:00 US/Pacific.\nDiagnosis: Customers may experience elevated latency, error rates including 'resource exhausted' or service unavailable.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Global","id":"global"}]},{"created":"2025-01-28T20:03:36+00:00","modified":"2025-01-28T21:36:09+00:00","when":"2025-01-28T20:03:36+00:00","text":"Summary: Cloud Translation experiencing elevated latency and error rates\nDescription: Mitigation work is currently underway by our engineering team. We are seeing signs of recovery and continue to monitor for further impact.\nWe do not have an ETA for mitigation at this point.\nWe will provide more information by Tuesday, 2025-01-28 13:00 US/Pacific.\nDiagnosis: Customers may experience elevated latency, error rates including 'resource exhausted' or service unavailable.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Global","id":"global"}]},{"created":"2025-01-28T18:44:44+00:00","modified":"2025-01-28T20:03:36+00:00","when":"2025-01-28T18:44:44+00:00","text":"Summary: Cloud Translation experiencing elevated latency and error rates\nDescription: Mitigation work is currently underway by our engineering team. 
We are seeing signs of recovery and continue to monitor for further impact.\nWe do not have an ETA for mitigation at this point.\nWe will provide more information by Tuesday, 2025-01-28 12:00 US/Pacific.\nDiagnosis: Customers may experience elevated latency, error rates including 'resource exhausted' or service unavailable.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Global","id":"global"}]},{"created":"2025-01-28T18:33:10+00:00","modified":"2025-01-28T18:55:00+00:00","when":"2025-01-28T18:33:10+00:00","text":"Summary: We are experiencing an issue with Cloud Translation\nDescription: We are experiencing an issue with Cloud Translation beginning at Tuesday, 2025-01-28 04:50 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Tuesday, 2025-01-28 11:08 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: None at this time.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Global","id":"global"}]}],"most_recent_update":{"created":"2025-01-28T22:16:39+00:00","modified":"2025-01-28T22:16:42+00:00","when":"2025-01-28T22:16:39+00:00","text":"The issue with Cloud Translation has been resolved for all affected users as of Tuesday, 2025-01-28 14:00 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_INFORMATION","severity":"low","service_key":"zall","service_name":"Multiple Products","affected_products":[{"title":"Cloud Machine Learning","id":"z9PfKanGZYvYNUbnKzRJ"},{"title":"Cloud Translation","id":"gCLTpLXcWqCKtcyUyHCF"}],"uri":"incidents/uqPLSADLwLztWWcLCPfz","currently_affected_locations":[],"previously_affected_locations":[{"title":"Global","id":"global"}]},{"id":"qB1du5LQfSHCJjWR88Fi","number":"12777587082307720292","begin":"2025-01-24T17:30:00+00:00","created":"2025-01-24T18:41:26+00:00","end":"2025-01-24T19:20:00+00:00","modified":"2025-01-28T05:06:14+00:00","external_desc":"Appsheet is unavailable in us-east4 and europe-west4","updates":[{"created":"2025-01-28T04:11:19+00:00","modified":"2025-01-28T05:06:14+00:00","when":"2025-01-28T04:11:19+00:00","text":"# Incident Report\n## Summary\nOn Friday, 24 January 2025, AppSheet customers were unable to load AppSheet apps with the app editor or the app load page due to ‘500’ errors and timeouts. Around 60% of the requests were impacted in us-east4 and europe-west4 for a duration of 1 hour and 50 minutes.\nWe sincerely apologize to our Google Cloud customers for the disruption you experienced.\n## Root Cause\nA database schema migration in production triggered a cascading incident. The migration caused failures and timeouts on the primary database, disrupting most AppSheet operations and preventing apps loading for users in us-east4 and europe-west4. The sustained outage occurred due to a surge of retries, overloading the secondary authentication database and rendering it completely unresponsive for requests to the affected regions. The authentication database is responsible for storing user authentication tokens.\nTraffic was migrated to the us-central1 and us-west1 regions, after which issues pertaining to user auth tokens were resolved.\nHowever, this triggered an increase in load on our service for validating users’ Workspace license entitlements, due to that information no longer being available in cache. 
The request rate went up significantly, triggering aggressive load shedding, resulting in elevated latency for 95% of the traffic. This further aggravated latency after traffic migration to us-central1 and us-west1 was performed.\n## Remediation and Prevention\nGoogle engineers were alerted to the outage via an automated alert on 24 January 2025 09:42 US/Pacific and immediately started an investigation. To mitigate the impact, engineers redirected the traffic from us-east4 and europe-west4, to us-central1 and us-west1.\nThe resultant load shedding that occurred on the licensing server recovered by 11:20 US/Pacific, once we restored our authentication database and gradually reverted traffic to us-east4 and europe-west4.\nGoogle is committed to preventing a repeat of this issue in the future and is completing the following actions: - Improve alerting and monitoring of license server traffic to reduce impact on latencies when traffic migration happens. - Gradually reduce dependency on licensing servers to avoid failures arising from either increased traffic, or unavailability of licensing servers. - We are reviewing measures to increase the stability of our authentication database, to ensure optimal handling of any surge in requests.\n## Detailed Description of Impact\nOn Friday, 24 January 2025, from 09:30 to 11:20 US/Pacific, approximately 60% of the AppSheet requests in us-east4 and europe-west4 may have failed. - Affected customers were unable to load AppSheet apps with the app editor or the app load page. - Affected customers experienced elevated ‘500’ errors and timeouts. - Some customers may also have observed intermittent latency.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-24T22:26:38+00:00","modified":"2025-01-28T04:11:19+00:00","when":"2025-01-24T22:26:38+00:00","text":"# Mini Incident Report\nWe apologize for the inconvenience this service disruption/outage may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. If you have experienced impact outside of what is listed below, please reach out to Google Cloud Support using https://cloud.google.com/support or to Google Workspace Support using help article https://support.google.com/a/answer/1047213.\n**(All Times US/Pacific)**\n**Incident Start:** 24 January, 2025 09:30\n**Incident End:** 24 January, 2025 11:20\n**Duration:** 1 hour, 50 minutes\n**Affected Services and Features:**\nAppSheet\n**Regions/Zones:**\nus-east4 and europe-west4\n**Description:**\nAppSheet experienced availability issues in us-east4 and europe-west4 for a total duration of 1 hour, 50 minutes.\nFrom our preliminary analysis, a schema migration initiated on a backend database caused errors and request timeouts. This led to increased retries from clients, causing a secondary database that manages authentication for us-east4 and europe-west4 to become overloaded. The overload on the authentication database subsequently impacted another dependency, the licensing server, which also became overloaded.\nWhile the original database issues caused by the schema migration were fully resolved, the authentication and licensing servers continued to observe issues. Google engineers mitigated the overloaded authentication database by shifting traffic away from us-east4 and europe-west4 regions, resolving the issue. 
The licensing server recovered by 11:20 US/Pacific due to organic traffic reduction.\nGoogle will complete a full Incident Report in the following days that will provide a full root cause.\n**Customer Impact:**\n* Affected customers were unable to load AppSheet apps with the app editor or the app load page.\n* Affected customers experienced elevated 500 errors and timeouts.\n* Some customers may also have observed intermittent latency.\n---","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-24T19:55:37+00:00","modified":"2025-01-24T22:26:38+00:00","when":"2025-01-24T19:55:37+00:00","text":"The issue with AppSheet has been resolved for all affected users as of Friday, 2025-01-24 11:20 US/Pacific.\nPreliminary investigation narrowed down the trigger of the issue to be a schema migration to our backend database which caused all requests to temporarily fail with 500 errors and timeouts. This caused a large amount of traffic in us-east4 and europe-west4. The issue was fully mitigated once the migration completed.\nWe will publish an analysis of this incident once we have completed our internal investigation.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-24T19:26:02+00:00","modified":"2025-01-24T19:55:39+00:00","when":"2025-01-24T19:26:02+00:00","text":"Summary: Appsheet is unavailable in us-east4 and europe-west4\nDescription: Mitigation work is currently underway by our engineering team. We are showing signs of recovery and some users may observe elevated latency while we work towards full recovery.\nWe will provide more information by Friday, 2025-01-24 12:30 US/Pacific.\nDiagnosis: Customers impacted by this issue are unable to load apps via either editor or app load page. Customers may also observe intermittent latency.\nWorkaround: None at this time.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Northern Virginia (us-east4)","id":"us-east4"}]},{"created":"2025-01-24T18:41:24+00:00","modified":"2025-01-24T19:26:02+00:00","when":"2025-01-24T18:41:24+00:00","text":"Summary: Appsheet is unavailable in us-east4 and europe-west4\nDescription: Mitigation work is currently underway by our engineering team.\nWe do not have an ETA for mitigation at this point.\nWe will provide more information by Friday, 2025-01-24 11:30 US/Pacific.\nDiagnosis: Customers impacted by this issue are unable to load apps via either editor or app load page.\nWorkaround: None at this time.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Northern Virginia (us-east4)","id":"us-east4"}]}],"most_recent_update":{"created":"2025-01-28T04:11:19+00:00","modified":"2025-01-28T05:06:14+00:00","when":"2025-01-28T04:11:19+00:00","text":"# Incident Report\n## Summary\nOn Friday, 24 January 2025, AppSheet customers were unable to load AppSheet apps with the app editor or the app load page due to ‘500’ errors and timeouts. Around 60% of the requests were impacted in us-east4 and europe-west4 for a duration of 1 hour and 50 minutes.\nWe sincerely apologize to our Google Cloud customers for the disruption you experienced.\n## Root Cause\nA database schema migration in production triggered a cascading incident. The migration caused failures and timeouts on the primary database, disrupting most AppSheet operations and preventing apps loading for users in us-east4 and europe-west4. 
The sustained outage occurred due to a surge of retries, overloading the secondary authentication database and rendering it completely unresponsive for requests to the affected regions. The authentication database is responsible for storing user authentication tokens.\nTraffic was migrated to the us-central1 and us-west1 regions, after which issues pertaining to user auth tokens were resolved.\nHowever, this triggered an increase in load on our service for validating users’ Workspace license entitlements, due to that information no longer being available in cache. The request rate went up significantly, triggering aggressive load shedding, resulting in elevated latency for 95% of the traffic. This further aggravated latency after traffic migration to us-central1 and us-west1 was performed.\n## Remediation and Prevention\nGoogle engineers were alerted to the outage via an automated alert on 24 January 2025 09:42 US/Pacific and immediately started an investigation. To mitigate the impact, engineers redirected the traffic from us-east4 and europe-west4, to us-central1 and us-west1.\nThe resultant load shedding that occurred on the licensing server recovered by 11:20 US/Pacific, once we restored our authentication database and gradually reverted traffic to us-east4 and europe-west4.\nGoogle is committed to preventing a repeat of this issue in the future and is completing the following actions: - Improve alerting and monitoring of license server traffic to reduce impact on latencies when traffic migration happens. - Gradually reduce dependency on licensing servers to avoid failures arising from either increased traffic, or unavailability of licensing servers. - We are reviewing measures to increase the stability of our authentication database, to ensure optimal handling of any surge in requests.\n## Detailed Description of Impact\nOn Friday, 24 January 2025, from 09:30 to 11:20 US/Pacific, approximately 60% of the AppSheet requests in us-east4 and europe-west4 may have failed. - Affected customers were unable to load AppSheet apps with the app editor or the app load page. - Affected customers experienced elevated ‘500’ errors and timeouts. 
- Some customers may also have observed intermittent latency.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_OUTAGE","severity":"high","service_key":"FWjKi5U7KX4FUUPThHAJ","service_name":"AppSheet","affected_products":[{"title":"AppSheet","id":"FWjKi5U7KX4FUUPThHAJ"}],"uri":"incidents/qB1du5LQfSHCJjWR88Fi","currently_affected_locations":[],"previously_affected_locations":[{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Northern Virginia (us-east4)","id":"us-east4"}]},{"id":"GsmUFt8Pb4XU3vaTwHS6","number":"2677708848074508580","begin":"2025-01-15T20:30:00+00:00","created":"2025-01-15T22:04:33+00:00","end":"2025-01-15T23:43:08+00:00","modified":"2025-01-15T23:43:10+00:00","external_desc":"Chronicle Security experienced elevated timeout errors across US regions.","updates":[{"created":"2025-01-15T23:43:08+00:00","modified":"2025-01-15T23:43:11+00:00","when":"2025-01-15T23:43:08+00:00","text":"The issue with Chronicle Security has been resolved for all affected users as of Wednesday, 2025-01-15 15:30 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-15T22:04:26+00:00","modified":"2025-01-15T23:43:10+00:00","when":"2025-01-15T22:04:26+00:00","text":"Summary: Chronicle Security is experiencing elevated timeout errors across US regions.\nDescription: We are experiencing an issue with Chronicle Security beginning on Wednesday, 2025-01-15 12:30 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Wednesday, 2025-01-15 16:30 US/Pacific with current details.\nDiagnosis: Customers are experiencing timeouts in views and APIs related to rules and detections. Some UI pages related to rules and detection may not load completely.\nWorkaround: None at this time","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Multi-region: us","id":"us"}]}],"most_recent_update":{"created":"2025-01-15T23:43:08+00:00","modified":"2025-01-15T23:43:11+00:00","when":"2025-01-15T23:43:08+00:00","text":"The issue with Chronicle Security has been resolved for all affected users as of Wednesday, 2025-01-15 15:30 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_INFORMATION","severity":"low","service_key":"FHwvkSZ6RzzDYAvDZXMM","service_name":"Chronicle Security","affected_products":[{"title":"Chronicle Security","id":"FHwvkSZ6RzzDYAvDZXMM"}],"uri":"incidents/GsmUFt8Pb4XU3vaTwHS6","currently_affected_locations":[],"previously_affected_locations":[{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Multi-region: us","id":"us"}]},{"id":"2np3yFXF8LegQkKGdPuB","number":"3222613190846435204","begin":"2025-01-10T19:36:15+00:00","created":"2025-01-10T20:38:19+00:00","end":"2025-01-13T14:16:11+00:00","modified":"2025-01-13T14:16:20+00:00","external_desc":"Vertex Gemini API customers are experiencing elevated errors on Gemini 1.5 Flash 002 model","updates":[{"created":"2025-01-13T14:16:11+00:00","modified":"2025-01-13T14:16:26+00:00","when":"2025-01-13T14:16:11+00:00","text":"The issue with Vertex Gemini API has been resolved for all affected projects as of Monday, 2025-01-13 06:16 US/Pacific.\nWe thank you for your patience while we worked on resolving the 
issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-10T21:43:06+00:00","modified":"2025-01-13T14:16:20+00:00","when":"2025-01-10T21:43:06+00:00","text":"Summary: Vertex Gemini API customers are experiencing elevated errors on Gemini 1.5 Flash 002 model\nDescription: Mitigation work is currently underway by our engineering team.\nWe do not have an ETA for mitigation at this point.\nWe will provide more information by Monday, 2025-01-13 11:00 US/Pacific.\nDiagnosis: Affected customers would encounter 5XX or 429 errors.\nWorkaround: We recommend customers to use other regions where feasible.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"}]},{"created":"2025-01-10T20:38:16+00:00","modified":"2025-01-10T21:43:06+00:00","when":"2025-01-10T20:38:16+00:00","text":"Summary: Vertex Gemini API customers are experiencing elevated errors on Gemini 1.5 Flash 002 model\nDescription: We are experiencing an issue with Vertex Gemini API beginning at Friday, 2025-01-10 11:36 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Friday, 2025-01-10 13:45 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Affected customers will see 5XX or 429 errors.\nWorkaround: We recommend customers to use other regions where feasible.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"}]}],"most_recent_update":{"created":"2025-01-13T14:16:11+00:00","modified":"2025-01-13T14:16:26+00:00","when":"2025-01-13T14:16:11+00:00","text":"The issue with Vertex Gemini API has been resolved for all affected projects as of Monday, 2025-01-13 06:16 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_INFORMATION","severity":"low","service_key":"Z0FZJAMvEB4j3NbCJs6B","service_name":"Vertex Gemini API","affected_products":[{"title":"Vertex Gemini API","id":"Z0FZJAMvEB4j3NbCJs6B"}],"uri":"incidents/2np3yFXF8LegQkKGdPuB","currently_affected_locations":[],"previously_affected_locations":[{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"}]},{"id":"mZ5kwvWe9KSwbPdn1P61","number":"5322604246658881111","begin":"2025-01-10T06:20:03+00:00","created":"2025-01-10T07:44:18+00:00","end":"2025-01-10T08:46:58+00:00","modified":"2025-01-10T08:47:00+00:00","external_desc":"Data Ingestion feeds errors while using SecOps","updates":[{"created":"2025-01-10T08:46:58+00:00","modified":"2025-01-10T08:47:01+00:00","when":"2025-01-10T08:46:58+00:00","text":"The issue with Chronicle Security has been resolved for all affected users as of Friday, 2025-01-10 00:30 US/Pacific.\nWe thank you for your patience while we worked on 
resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-10T08:27:49+00:00","modified":"2025-01-10T08:47:00+00:00","when":"2025-01-10T08:27:49+00:00","text":"Summary: Data Ingestion feeds errors while using SecOps\nDescription: Customers ingesting data via feeds will experience errors and delays while using SecOps beginning at Thursday, 2025-01-09 16:20 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Friday, 2025-01-10 01:30 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Customers impacted by this issue may observe DNS errors and delay in ingestion of their data.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Multi-region: europe","id":"europe"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Multi-region: us","id":"us"}]},{"created":"2025-01-10T07:49:44+00:00","modified":"2025-01-10T08:27:49+00:00","when":"2025-01-10T07:49:44+00:00","text":"Summary: Data Ingestion feeds errors while using SecOps\nDescription: Customers ingesting data via feeds will experience errors and delays while using SecOps beginning at Thursday, 2025-01-09 16:20 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Friday, 2025-01-10 00:30 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Customers impacted by this issue may observe DNS errors and delay in ingestion of their data.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Multi-region: europe","id":"europe"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Multi-region: us","id":"us"}]}],"most_recent_update":{"created":"2025-01-10T08:46:58+00:00","modified":"2025-01-10T08:47:01+00:00","when":"2025-01-10T08:46:58+00:00","text":"The issue with Chronicle Security has been resolved for all affected users as of Friday, 
2025-01-10 00:30 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_INFORMATION","severity":"low","service_key":"FHwvkSZ6RzzDYAvDZXMM","service_name":"Chronicle Security","affected_products":[{"title":"Chronicle Security","id":"FHwvkSZ6RzzDYAvDZXMM"}],"uri":"incidents/mZ5kwvWe9KSwbPdn1P61","currently_affected_locations":[],"previously_affected_locations":[{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Multi-region: europe","id":"europe"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Multi-region: us","id":"us"}]},{"id":"285CgEr7WAXjNeEC78gX","number":"10826049100390990553","begin":"2025-01-08T17:36:53+00:00","created":"2025-01-08T18:50:26+00:00","end":"2025-01-08T19:43:11+00:00","modified":"2025-01-08T21:14:23+00:00","external_desc":"We are experiencing an issue with Chronicle Security","updates":[{"created":"2025-01-08T21:06:47+00:00","modified":"2025-01-08T21:14:23+00:00","when":"2025-01-08T21:06:47+00:00","text":"# Mini Incident Report\nWe apologize for the inconvenience this service disruption/outage may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. If you have experienced impact outside of what is listed below, please reach out to Google Cloud Support using https://cloud.google.com/support.\n(All Times US/Pacific)\n**Incident Start:** 3 January 2025 12:14\n**Incident End:** 8 January 2025 11:08\n**Duration:** 5 days, 10 hours, 55 minutes\n**Affected Services and Features:**\nGoogle SecOps (Chronicle Security) - SOAR Permissions\n**Regions/Zones:**\neurope, europe-west12, europe-west2, europe-west3, europe-west6, europe-west9, asia-northeast1, asia-south1, asia-southeast1, australia-southeast1, me-central1, me-central2, me-west1, northamerica-northeast2, southamerica-east1\n**Description:**\nGoogle SecOps (Chronicle Security) experienced an increase in permission errors for non-admin users accessing SOAR cases. 
From preliminary analysis, the issue was due to a software defect introduced by a recent service update that had been rolled out to non-US regions.\nThe issue was fully mitigated once the affected service update was rolled back, restoring service for all affected users.\n**Customer Impact:**\n* When a non-admin user attempted to access the SOAR cases view, they received a 403 error.\n---","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-08T19:43:11+00:00","modified":"2025-01-08T21:06:47+00:00","when":"2025-01-08T19:43:11+00:00","text":"The issue with Chronicle Security has been resolved for all affected users as of Wednesday, 2025-01-08 11:06 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-08T18:55:04+00:00","modified":"2025-01-08T19:43:14+00:00","when":"2025-01-08T18:55:04+00:00","text":"Summary: We are experiencing an issue with Chronicle Security\nDescription: Mitigation work is currently underway by our engineering team.\nThe mitigation is expected to complete by Wednesday, 2025-01-08 11:17 US/Pacific.\nWe will provide more information by Wednesday, 2025-01-08 12:00 US/Pacific.\nDiagnosis: Some non-admin users are facing an issue where they receive a 403 Forbidden when logging in to Cases view.\nWorkaround: None at this time.","status":"SERVICE_DISRUPTION","affected_locations":[{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Multi-region: europe","id":"europe"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"}]},{"created":"2025-01-08T18:50:20+00:00","modified":"2025-01-08T18:55:07+00:00","when":"2025-01-08T18:50:20+00:00","text":"Summary: We are experiencing an issue with Chronicle Security.\nDescription: Mitigation work is currently underway by our engineering team.\nThe mitigation is expected to complete by Wednesday, 2025-01-08 11:17 US/Pacific.\nWe will provide more information by Wednesday, 2025-01-08 12:00 US/Pacific.\nDiagnosis: Some non-admin users are facing an issue where they receive a 403 Forbidden when logging in to Cases view.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Multi-region: europe","id":"europe"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam 
(me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"}]}],"most_recent_update":{"created":"2025-01-08T21:06:47+00:00","modified":"2025-01-08T21:14:23+00:00","when":"2025-01-08T21:06:47+00:00","text":"# Mini Incident Report\nWe apologize for the inconvenience this service disruption/outage may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. If you have experienced impact outside of what is listed below, please reach out to Google Cloud Support using https://cloud.google.com/support.\n(All Times US/Pacific)\n**Incident Start:** 3 January 2025 12:14\n**Incident End:** 8 January 2025 11:08\n**Duration:** 5 days, 10 hours, 55 minutes\n**Affected Services and Features:**\nGoogle SecOps (Chronicle Security) - SOAR Permissions\n**Regions/Zones:**\neurope, europe-west12, europe-west2, europe-west3, europe-west6, europe-west9, asia-northeast1, asia-south1, asia-southeast1, australia-southeast1, me-central1, me-central2, me-west1, northamerica-northeast2, southamerica-east1\n**Description:**\nGoogle SecOps (Chronicle Security) experienced an increase in permission errors for non-admin users accessing SOAR cases. From preliminary analysis, the issue was due to a software defect introduced by a recent service update that had been rolled out to non-US regions.\nThe issue was fully mitigated once the affected service update was rolled back, restoring service for all affected users.\n**Customer Impact:**\n* When a non-admin user attempted to access the SOAR cases view, they received a 403 error.\n---","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_DISRUPTION","severity":"medium","service_key":"FHwvkSZ6RzzDYAvDZXMM","service_name":"Chronicle Security","affected_products":[{"title":"Chronicle Security","id":"FHwvkSZ6RzzDYAvDZXMM"}],"uri":"incidents/285CgEr7WAXjNeEC78gX","currently_affected_locations":[],"previously_affected_locations":[{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Multi-region: europe","id":"europe"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"}]},{"id":"ghMho2Gka33Exr9UNavz","number":"7928343550133267122","begin":"2025-01-08T14:54:00+00:00","created":"2025-01-08T15:56:01+00:00","end":"2025-01-08T16:07:00+00:00","modified":"2025-01-10T19:23:00+00:00","external_desc":"Multiple regions completely blocked for subscribe for Pubsub","updates":[{"created":"2025-01-10T19:23:00+00:00","modified":"2025-01-10T19:23:00+00:00","when":"2025-01-10T19:23:00+00:00","text":"# Incident Report\n## Summary\nOn Wednesday, 8 
January 2025 06:54 to 08:07 US/Pacific, Google Cloud Pub/Sub experienced a service outage in multiple regions resulting in customers unable to publish or subscribe to the messages for a duration of 1 hour and 13 minutes.\nThis outage also resulted in an increased backlog which was identified at 8 January 2025 09:07 US/Pacific for a small subset of customer subscriptions using message ordering[1], which extended beyond the unavailability time window. These subscriptions were repaired and mitigated by 8 January 2025 23:09 US/Pacific.\nWe deeply regret the disruption this outage caused for our Google Cloud customers. This is not the level of quality and reliability we strive to offer you, and we are taking immediate steps to improve the platform’s availability.\n## Root Cause\nCloud Pub/Sub uses a regional database for the metadata state of its storage system, including information about published messages and the order in which those messages were published for ordered delivery. The regional metadata database is on the critical path of most of the Cloud Pub/Sub data plane operations. From 8 January 2025 06:54 to 07:30 US/Pacific, a bad service configuration change, which unintentionally over-restricted the permission to access this database, was rolled out to multiple regions. The issue did not surface in our pre-production environment due to a mismatch in the configuration between the two environments. In addition, the change was mistakenly rolled out to multiple regions within a short time period and did not follow the standard rollout process. This change prevented Cloud Pub/Sub from accessing the regional metadata store, leading to publish, subscribe, and backlog metrics failures and unavailability impact, which was mitigated on 8 January 2025 08:07 US/Pacific.\nThough the configuration change was rolled back and mitigated on 8 January 2025 08:07 US/Pacific, the database unavailability during the issue exposed a latent bug in the way Cloud Pub/Sub enforces ordered delivery for subscriptions with ordering enabled. In particular, when the database was unavailable for an extended period of time, the metadata pertaining to ordering became inconsistent with the metadata about published messages. This inconsistency prevented the delivery of a subset of messages until the subscriptions were repaired, and they received all backlogged messages in the proper order. Mitigation was completed by 8 January 2025 23:09 US/Pacific. Note that this did not impact ordering or guaranteed delivery.\n## Remediation and Prevention\nGoogle engineers were alerted to the outage via internal telemetry on 8 January 2025 07:03 US/Pacific, 9 minutes after impact started. The config change that caused the issue was identified and rollback completed by 8 January 2025 08:07 US/Pacific. At 8 January 2025 09:07 US/Pacific, Google engineers were alerted via internal telemetry to the fact that a small subset of ordered subscriptions were unable to consume their backlog and root caused the metadata inconsistency at 8 January 2025 12:20 US/Pacific. 
Google engineers worked on identifying and repairing all impacted ordered subscriptions, which was completed by 8 January 2025 23:09 US/Pacific.\nGoogle is committed to preventing a repeat of this issue in the future and is completing the following actions:\n* Our engineering team is working on implementing stronger enforcement of parity between pre-production and production environments in order to ensure the impact of configuration changes can be caught before changes move to production. ETA: 31 January 2025.\n* We are reviewing our change management process to ensure that future configuration changes roll out in a progressive fashion aligned with the priority of the change. ETA: 31 January 2025.\n* We are working on implementing additional monitoring that proactively detects ordering metadata inconsistency. ETA: 31 March 2025.\n* We are implementing a fix to the Cloud Pub/Sub ordering metadata management bug, which led to undelivered, ordered messages. ETA: 30 June 2025.\n## Detailed Description of Impact\nOn Wednesday 8 January 2025 from 06:54 to 08:07 US/Pacific Google Cloud Pub/Sub, Cloud Logging, and BigQuery Data Transfer Service experienced a service outage in europe-west10, asia-south1, europe-west1, us-central1, asia-southeast2, us-east1, us-east5, asia-south2, us-south1, me-central1 regions.\nCustomers publishing from other regions may have also experienced the issue if the message storage policies [2] are set to store and process the messages in the above-mentioned regions.\n#### Google Cloud Pub/Sub : Customers were unable to publish or subscribe to the messages in the impacted regions. Publishing the messages from other regions may also have been impacted, if they have any of the impacted regions in their message storage policies. Backlog metrics might have been stale or missing.\n#### Google BigQuery Data Transfer Service : Customers experienced failures with data transfers runs failing to publish to Pub/Sub for a duration of 20 minutes.\n#### Cloud Logging : All Cloud Logs customers exporting logs to Cloud Pub/Sub experienced a delay in the log export for a duration of 26 minutes.\n**Appendix:**\n* [1] https://cloud.google.com/pubsub/docs/ordering\n* [2] https://cloud.google.com/pubsub/docs/resource-location-restriction#message_storage_policy_overview","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-08T21:23:47+00:00","modified":"2025-01-10T19:23:00+00:00","when":"2025-01-08T21:23:47+00:00","text":"# Mini Incident Report\nWe apologize for the inconvenience this service disruption/outage may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. 
If you have experienced impact outside of what is listed below, please reach out to Google Cloud Support using https://cloud.google.com/support\n(All Times US/Pacific)\n**Incident Start:** 8 January 2025 6:54\n**Incident End:** 8 January 2025 8:07\n**Duration:** 1 hour, 13 minutes\n**Affected Services and Features:**\n* Google Cloud Pub/Sub\n* Cloud Logging\n* BigQuery Data Transfer Service\n**Regions/Zones:** europe-west10, asia-south1, europe-west1, us-central1, asia-southeast2, us-east1, us-east5, asia-south2, us-south1, me-central1\nCustomers publishing from other regions may have also experienced the issue if the message storage policies [1] are set to store and process the messages in the above-mentioned regions.\n**Description:**\nGoogle Cloud Pub/Sub experienced a service outage in multiple regions for a duration of 1 hour and 13 minutes resulting in customers unable to publish or subscribe to the messages.\nFrom preliminary analysis, the root cause of the issue was a configuration change which was rolled back to restore the service. Google will complete a full Incident Report in the following days that will provide a full root cause.\n**Customer Impact:**\n* Google Cloud Pub/Sub : Customers were unable to publish or subscribe to the messages in the impacted regions. Publishing the messages from other regions may also have been impacted, if they have any of the impacted regions in their message storage policies. Backlog stats metric might be stale or missing.\n* Google BigQuery Data Transfer Service : Customers experienced failures with data transfers runs.\n* Cloud Logging : All Cloud Logs customers exporting logs to Cloud Pub/Sub experienced a delay in the log export for a duration of 26 minutes.\n**Reference(s):**\n[1] https://cloud.google.com/pubsub/docs/resource-location-restriction#message_storage_policy_overview","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-08T16:12:26+00:00","modified":"2025-01-08T21:23:47+00:00","when":"2025-01-08T16:12:26+00:00","text":"The issue with Google Cloud Pub/Sub has been resolved for all affected projects as of Wednesday, 2025-01-08 08:07 US/Pacific.\nWe will publish an analysis of this incident once we have completed our internal investigation.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-08T16:03:42+00:00","modified":"2025-01-08T16:12:27+00:00","when":"2025-01-08T16:03:42+00:00","text":"Summary: Multiple regions completely blocked for subscribe for Pubsub\nDescription: We are experiencing an issue with Google Cloud Pub/Sub, across multiple regions affecting publish and subscribe.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Wednesday, 2025-01-08 09:15 US/Pacific with current details.\nDiagnosis: Customers in the impacted regions are unable to subscribe to messages\nWorkaround: None at this time.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas 
(us-south1)","id":"us-south1"}]},{"created":"2025-01-08T15:56:00+00:00","modified":"2025-01-08T16:03:45+00:00","when":"2025-01-08T15:56:00+00:00","text":"Summary: Multiple regions completely blocked for publish for Pubsub\nDescription: We are experiencing an issue with Google Cloud Pub/Sub, across multiple regions affecting publish and subscribe.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Wednesday, 2025-01-08 09:00 US/Pacific with current details.\nDiagnosis: Customers in the impacted regions are unable to publish messages\nWorkaround: None at this time.","status":"SERVICE_OUTAGE","affected_locations":[{"title":"Johannesburg (africa-south1)","id":"africa-south1"},{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]}],"most_recent_update":{"created":"2025-01-10T19:23:00+00:00","modified":"2025-01-10T19:23:00+00:00","when":"2025-01-10T19:23:00+00:00","text":"# Incident Report\n## Summary\nOn Wednesday, 8 January 2025 06:54 to 08:07 US/Pacific, Google Cloud Pub/Sub experienced a service outage in multiple regions resulting in customers unable to publish or subscribe to the messages for a duration of 1 hour and 13 minutes.\nThis outage also resulted in an increased backlog which was identified at 8 January 2025 09:07 
US/Pacific for a small subset of customer subscriptions using message ordering[1], which extended beyond the unavailability time window. These subscriptions were repaired and mitigated by 8 January 2025 23:09 US/Pacific.\nWe deeply regret the disruption this outage caused for our Google Cloud customers. This is not the level of quality and reliability we strive to offer you, and we are taking immediate steps to improve the platform’s availability.\n## Root Cause\nCloud Pub/Sub uses a regional database for the metadata state of its storage system, including information about published messages and the order in which those messages were published for ordered delivery. The regional metadata database is on the critical path of most of the Cloud Pub/Sub data plane operations. From 8 January 2025 06:54 to 07:30 US/Pacific, a bad service configuration change, which unintentionally over-restricted the permission to access this database, was rolled out to multiple regions. The issue did not surface in our pre-production environment due to a mismatch in the configuration between the two environments. In addition, the change was mistakenly rolled out to multiple regions within a short time period and did not follow the standard rollout process. This change prevented Cloud Pub/Sub from accessing the regional metadata store, leading to publish, subscribe, and backlog metrics failures and unavailability impact, which was mitigated on 8 January 2025 08:07 US/Pacific.\nThough the configuration change was rolled back and mitigated on 8 January 2025 08:07 US/Pacific, the database unavailability during the issue exposed a latent bug in the way Cloud Pub/Sub enforces ordered delivery for subscriptions with ordering enabled. In particular, when the database was unavailable for an extended period of time, the metadata pertaining to ordering became inconsistent with the metadata about published messages. This inconsistency prevented the delivery of a subset of messages until the subscriptions were repaired, and they received all backlogged messages in the proper order. Mitigation was completed by 8 January 2025 23:09 US/Pacific. Note that this did not impact ordering or guaranteed delivery.\n## Remediation and Prevention\nGoogle engineers were alerted to the outage via internal telemetry on 8 January 2025 07:03 US/Pacific, 9 minutes after impact started. The config change that caused the issue was identified and rollback completed by 8 January 2025 08:07 US/Pacific. At 8 January 2025 09:07 US/Pacific, Google engineers were alerted via internal telemetry to the fact that a small subset of ordered subscriptions were unable to consume their backlog and root caused the metadata inconsistency at 8 January 2025 12:20 US/Pacific. Google engineers worked on identifying and repairing all impacted ordered subscriptions, which was completed by 8 January 2025 23:09 US/Pacific.\nGoogle is committed to preventing a repeat of this issue in the future and is completing the following actions:\n* Our engineering team is working on implementing stronger enforcement of parity between pre-production and production environments in order to ensure the impact of configuration changes can be caught before changes move to production. ETA: 31 January 2025.\n* We are reviewing our change management process to ensure that future configuration changes roll out in a progressive fashion aligned with the priority of the change. 
ETA: 31 January 2025.\n* We are working on implementing additional monitoring that proactively detects ordering metadata inconsistency. ETA: 31 March 2025.\n* We are implementing a fix to the Cloud Pub/Sub ordering metadata management bug, which led to undelivered, ordered messages. ETA: 30 June 2025.\n## Detailed Description of Impact\nOn Wednesday 8 January 2025 from 06:54 to 08:07 US/Pacific Google Cloud Pub/Sub, Cloud Logging, and BigQuery Data Transfer Service experienced a service outage in europe-west10, asia-south1, europe-west1, us-central1, asia-southeast2, us-east1, us-east5, asia-south2, us-south1, me-central1 regions.\nCustomers publishing from other regions may have also experienced the issue if the message storage policies [2] are set to store and process the messages in the above-mentioned regions.\n#### Google Cloud Pub/Sub : Customers were unable to publish or subscribe to the messages in the impacted regions. Publishing the messages from other regions may also have been impacted, if they have any of the impacted regions in their message storage policies. Backlog metrics might have been stale or missing.\n#### Google BigQuery Data Transfer Service : Customers experienced failures with data transfers runs failing to publish to Pub/Sub for a duration of 20 minutes.\n#### Cloud Logging : All Cloud Logs customers exporting logs to Cloud Pub/Sub experienced a delay in the log export for a duration of 26 minutes.\n**Appendix:**\n* [1] https://cloud.google.com/pubsub/docs/ordering\n* [2] https://cloud.google.com/pubsub/docs/resource-location-restriction#message_storage_policy_overview","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_OUTAGE","severity":"high","service_key":"dFjdLh2v6zuES6t9ADCB","service_name":"Google Cloud Pub/Sub","affected_products":[{"title":"Google Cloud Pub/Sub","id":"dFjdLh2v6zuES6t9ADCB"}],"uri":"incidents/ghMho2Gka33Exr9UNavz","currently_affected_locations":[],"previously_affected_locations":[{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Berlin (europe-west10)","id":"europe-west10"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"}]},{"id":"HY6cMoMSTAUFjcPDnQGq","number":"3310612079465271339","begin":"2025-01-07T21:28:20+00:00","created":"2025-01-07T22:51:07+00:00","end":"2025-01-08T05:18:48+00:00","modified":"2025-01-08T05:18:50+00:00","external_desc":"Elevated rate of ‘500’ errors observed on Gemini 1.5 Flash and Gemini 1.5 Pro 002","updates":[{"created":"2025-01-08T05:18:48+00:00","modified":"2025-01-08T05:18:51+00:00","when":"2025-01-08T05:18:48+00:00","text":"The issue with Vertex Gemini API has been resolved for all affected users as of Tuesday, 2025-01-07 21:18 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-08T04:24:40+00:00","modified":"2025-01-08T05:18:50+00:00","when":"2025-01-08T04:24:40+00:00","text":"Summary: Elevated rate of ‘500’ errors observed on Gemini 1.5 Flash and Gemini 1.5 Pro 002\nDescription: We are experiencing an intermittent issue with Vertex Gemini API.\nGemini 1.5 Flash - Steady progress is being made with the recovery 
process, while the Engineering team continues to ensure that recurrence of errors are prevented.\nGemini 1.5 Pro 002 - Engineering is working on the final instance where mitigation is being applied. Error rates have reduced to ~1-3%, and are expected to return to normalcy post complete mitigation.\nWe will provide an update by Tuesday, 2025-01-07 23:30 US/Pacific with current details.\nDiagnosis: Customers impacted by this issue may see intermittent ‘500’ error messages.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-01-08T03:29:12+00:00","modified":"2025-01-08T04:24:40+00:00","when":"2025-01-08T03:29:12+00:00","text":"Summary: Elevated rate of ‘500’ errors observed on Gemini 1.5 Flash and Gemini 1.5 Pro 002\nDescription: We are experiencing an intermittent issue with Vertex Gemini API.\nGemini 1.5 Flash - Steady progress is being made with the recovery process, while the Engineering team continues to ensure that recurrence of errors are prevented.\nGemini 1.5 Pro 002 - Engineering is working on the final instance where mitigation is being applied. 
Error rates have reduced to ~1-3%, and are expected to return to normalcy post complete mitigation.\nWe will provide an update by Tuesday, 2025-01-07 21:00 US/Pacific with current details.\nDiagnosis: Customers impacted by this issue may see intermittent ‘500’ error messages.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-01-08T02:17:22+00:00","modified":"2025-01-08T03:29:12+00:00","when":"2025-01-08T02:17:22+00:00","text":"Summary: Elevated rate of ‘500’ errors observed on Gemini 1.5 Flash and Gemini 1.5 Pro 002\nDescription: We are experiencing an intermittent issue with Vertex Gemini API.\nGemini 1.5 Flash - Our engineering team continues to work steadily towards mitigation.\nGemini 1.5 Pro 002 - A majority of the mitigation efforts have been successfully completed and a drastic improvement has been confirmed in the success rates for requests.\nWe will provide an update by Tuesday, 2025-01-07 19:30 US/Pacific with current details.\nDiagnosis: Customers impacted by this issue may see intermittent ‘500’ error messages.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt 
(europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-01-08T00:47:09+00:00","modified":"2025-01-08T02:17:22+00:00","when":"2025-01-08T00:47:09+00:00","text":"Summary: Elevated rate of ‘500’ errors observed on Gemini 1.5 Flash and Gemini 1.5 Pro 002\nDescription: We are experiencing an intermittent issue with Vertex Gemini API.\nOur engineering team has identified a potential mitigation strategy and is in the process of executing the same.\nSubsequently, a drop in the number of ‘500’ errors has been observed.\nWe will provide an update by Tuesday, 2025-01-07 18:00 US/Pacific with current details.\nDiagnosis: Customers impacted by this issue may see intermittent ‘500’ error messages.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-01-07T23:30:49+00:00","modified":"2025-01-08T00:47:09+00:00","when":"2025-01-07T23:30:49+00:00","text":"Summary: Elevated rate of ‘500’ errors observed on Gemini 1.5 Flash and Gemini 1.5 Pro 002\nDescription: We are experiencing an intermittent issue with Vertex 
Gemini API.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Tuesday, 2025-01-07 16:30 US/Pacific with current details.\nDiagnosis: Customers impacted by this issue may see intermittent ‘500’ error messages.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-01-07T22:51:05+00:00","modified":"2025-01-07T23:30:49+00:00","when":"2025-01-07T22:51:05+00:00","text":"Summary: Elevated rate of ‘500’ errors observed on Gemini 1.5 Flash\nDescription: We are experiencing an intermittent issue with Vertex Gemini API.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Tuesday, 2025-01-07 16:00 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Customers impacted by this issue may see intermittent ‘500’ error messages.\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris 
(europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]}],"most_recent_update":{"created":"2025-01-08T05:18:48+00:00","modified":"2025-01-08T05:18:51+00:00","when":"2025-01-08T05:18:48+00:00","text":"The issue with Vertex Gemini API has been resolved for all affected users as of Tuesday, 2025-01-07 21:18 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_INFORMATION","severity":"low","service_key":"Z0FZJAMvEB4j3NbCJs6B","service_name":"Vertex Gemini API","affected_products":[{"title":"Vertex Gemini API","id":"Z0FZJAMvEB4j3NbCJs6B"}],"uri":"incidents/HY6cMoMSTAUFjcPDnQGq","currently_affected_locations":[],"previously_affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"id":"cvAsjCVK2TcZk3M46WSM","number":"3092468524730147922","begin":"2025-01-07T20:51:40+00:00","created":"2025-01-07T21:15:14+00:00","end":"2025-01-07T23:30:08+00:00","modified":"2025-01-07T23:30:11+00:00","external_desc":"Some Apigee X customers experienced issues with logging into the integrated developer portals using SAML.","updates":[{"created":"2025-01-07T23:30:08+00:00","modified":"2025-01-07T23:30:12+00:00","when":"2025-01-07T23:30:08+00:00","text":"The issue with Apigee has been resolved for all affected users as of Tuesday, 
2025-01-07 15:22 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-07T22:45:43+00:00","modified":"2025-01-07T23:30:11+00:00","when":"2025-01-07T22:45:43+00:00","text":"Summary: Some Apigee X customers are experiencing issues with logging into the integrated developer portals using SAML.\nDescription: We are experiencing an intermittent issue with Apigee X beginning at Tuesday, 2025-01-07 12:51 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Tuesday, 2025-01-07 17:00 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Affected customers are unable to login to the integrated developer portals using SAML authentication\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-01-07T21:44:47+00:00","modified":"2025-01-07T22:45:43+00:00","when":"2025-01-07T21:44:47+00:00","text":"Summary: Some Apigee X customers are experiencing issues with logging into the integrated developer portals using SAML.\nDescription: We are experiencing an intermittent issue with Apigee X beginning at Tuesday, 2025-01-07 12:51 US/Pacific.\nOur engineering team continues to 
investigate the issue.\nWe will provide an update by Tuesday, 2025-01-07 15:00 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Affected customers are unable to login to the integrated developer portals using SAML authentication\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"created":"2025-01-07T21:15:11+00:00","modified":"2025-01-07T21:44:50+00:00","when":"2025-01-07T21:15:11+00:00","text":"Summary: Some Apigee and Apigee Edge Public customers are experiencing issues with logging into the integrated developer portals using SAML.\nDescription: We are experiencing an intermittent issue with Apigee Edge Public Cloud, Apigee beginning at Tuesday, 2025-01-07 12:51 US/Pacific.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Tuesday, 2025-01-07 14:03 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Affected customers are unable to login to the integrated developer portals using SAML authentication\nWorkaround: None at this time.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo 
(asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland (europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Global","id":"global"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]}],"most_recent_update":{"created":"2025-01-07T23:30:08+00:00","modified":"2025-01-07T23:30:12+00:00","when":"2025-01-07T23:30:08+00:00","text":"The issue with Apigee has been resolved for all affected users as of Tuesday, 2025-01-07 15:22 US/Pacific.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_INFORMATION","severity":"low","service_key":"9Y13BNFy4fJydvjdsN3X","service_name":"Apigee","affected_products":[{"title":"Apigee","id":"9Y13BNFy4fJydvjdsN3X"}],"uri":"incidents/cvAsjCVK2TcZk3M46WSM","currently_affected_locations":[],"previously_affected_locations":[{"title":"Taiwan (asia-east1)","id":"asia-east1"},{"title":"Hong Kong (asia-east2)","id":"asia-east2"},{"title":"Tokyo (asia-northeast1)","id":"asia-northeast1"},{"title":"Osaka (asia-northeast2)","id":"asia-northeast2"},{"title":"Seoul (asia-northeast3)","id":"asia-northeast3"},{"title":"Mumbai (asia-south1)","id":"asia-south1"},{"title":"Delhi (asia-south2)","id":"asia-south2"},{"title":"Singapore (asia-southeast1)","id":"asia-southeast1"},{"title":"Jakarta (asia-southeast2)","id":"asia-southeast2"},{"title":"Sydney (australia-southeast1)","id":"australia-southeast1"},{"title":"Melbourne (australia-southeast2)","id":"australia-southeast2"},{"title":"Warsaw (europe-central2)","id":"europe-central2"},{"title":"Finland 
(europe-north1)","id":"europe-north1"},{"title":"Madrid (europe-southwest1)","id":"europe-southwest1"},{"title":"Belgium (europe-west1)","id":"europe-west1"},{"title":"Turin (europe-west12)","id":"europe-west12"},{"title":"London (europe-west2)","id":"europe-west2"},{"title":"Frankfurt (europe-west3)","id":"europe-west3"},{"title":"Netherlands (europe-west4)","id":"europe-west4"},{"title":"Zurich (europe-west6)","id":"europe-west6"},{"title":"Milan (europe-west8)","id":"europe-west8"},{"title":"Paris (europe-west9)","id":"europe-west9"},{"title":"Doha (me-central1)","id":"me-central1"},{"title":"Dammam (me-central2)","id":"me-central2"},{"title":"Tel Aviv (me-west1)","id":"me-west1"},{"title":"Montréal (northamerica-northeast1)","id":"northamerica-northeast1"},{"title":"Toronto (northamerica-northeast2)","id":"northamerica-northeast2"},{"title":"Mexico (northamerica-south1)","id":"northamerica-south1"},{"title":"São Paulo (southamerica-east1)","id":"southamerica-east1"},{"title":"Santiago (southamerica-west1)","id":"southamerica-west1"},{"title":"Iowa (us-central1)","id":"us-central1"},{"title":"South Carolina (us-east1)","id":"us-east1"},{"title":"Northern Virginia (us-east4)","id":"us-east4"},{"title":"Columbus (us-east5)","id":"us-east5"},{"title":"Dallas (us-south1)","id":"us-south1"},{"title":"Oregon (us-west1)","id":"us-west1"},{"title":"Los Angeles (us-west2)","id":"us-west2"},{"title":"Salt Lake City (us-west3)","id":"us-west3"},{"title":"Las Vegas (us-west4)","id":"us-west4"}]},{"id":"LgA6CQF3F5F29nBorBL1","number":"9220967981318000539","begin":"2025-01-06T19:01:50+00:00","created":"2025-01-06T20:12:27+00:00","end":"2025-01-06T20:24:24+00:00","modified":"2025-01-06T20:24:25+00:00","external_desc":"Cloud Console customers trying to get support via chat may experience higher latency or errors.","updates":[{"created":"2025-01-06T20:24:24+00:00","modified":"2025-01-06T20:24:26+00:00","when":"2025-01-06T20:24:24+00:00","text":"The issue with Google Cloud Console, Google Cloud Support has been resolved for all affected users as of Monday, 2025-01-06 11:45 US/Pacific.\nOnly a few chat instances were impacted during the incident.\nWe thank you for your patience while we worked on resolving the issue.","status":"AVAILABLE","affected_locations":[]},{"created":"2025-01-06T20:12:24+00:00","modified":"2025-01-06T20:24:25+00:00","when":"2025-01-06T20:12:24+00:00","text":"Summary: Cloud Console customers trying to get support via chat may experience higher latency or errors.\nDescription: We are experiencing an issue with Google Cloud Console and Cloud Support.\nOur engineering team continues to investigate the issue.\nWe will provide an update by Monday, 2025-01-06 13:00 US/Pacific with current details.\nWe apologize to all who are affected by the disruption.\nDiagnosis: Users reaching out to support may experience higher latency responses or no response at all\nWorkaround: Users can create support cases instead of interacting through chat.","status":"SERVICE_INFORMATION","affected_locations":[{"title":"Global","id":"global"}]}],"most_recent_update":{"created":"2025-01-06T20:24:24+00:00","modified":"2025-01-06T20:24:26+00:00","when":"2025-01-06T20:24:24+00:00","text":"The issue with Google Cloud Console, Google Cloud Support has been resolved for all affected users as of Monday, 2025-01-06 11:45 US/Pacific.\nOnly a few chat instances were impacted during the incident.\nWe thank you for your patience while we worked on resolving the 
issue.","status":"AVAILABLE","affected_locations":[]},"status_impact":"SERVICE_INFORMATION","severity":"low","service_key":"zall","service_name":"Multiple Products","affected_products":[{"title":"Google Cloud Console","id":"Wdsr1n5vyDvCt78qEifm"},{"title":"Google Cloud Support","id":"bGThzF7oEGP5jcuDdMuk"}],"uri":"incidents/LgA6CQF3F5F29nBorBL1","currently_affected_locations":[],"previously_affected_locations":[{"title":"Global","id":"global"}]}]