CVE-2026-2473: CWE-340 Generation of Predictable Numbers or Identifiers in Google Cloud Vertex AI Experiments
Predictable bucket naming in Vertex AI Experiments in Google Cloud Vertex AI from version 1.21.0 up to (but not including) 1.133.0 on Google Cloud Platform allows an unauthenticated remote attacker to achieve cross-tenant remote code execution, model theft, and poisoning via pre-creating predictably named Cloud Storage buckets (Bucket Squatting). This vulnerability was patched and no customer action is needed.
AI Analysis
Technical Summary
CVE-2026-2473 is a vulnerability categorized under CWE-340 (Generation of Predictable Numbers or Identifiers) affecting Google Cloud Vertex AI Experiments from version 1.21.0 up to, but not including, 1.133.0. The core issue is predictable naming of the Cloud Storage buckets used by Vertex AI Experiments. An unauthenticated remote attacker can exploit this by pre-creating buckets with those predictable names, a technique known as bucket squatting, and thereby hijack the storage resources that other tenants' experiment workflows expect to own. The consequences include cross-tenant remote code execution, since attacker-controlled bucket contents can run in the context of a victim's AI experiments; theft of AI models, which may embody sensitive intellectual property or data; and model poisoning through injection of malicious data or code, undermining the integrity and reliability of AI outputs. Exploitation requires no privileges and no authentication, though user interaction is needed to trigger it, and it affects the confidentiality, integrity, and availability of AI workloads and data. Google has released patches, and no customer action is required once the environment is updated. The CVSS 4.0 vector indicates a network attack vector, low attack complexity, no privileges required, passive user interaction, and high impact on confidentiality, integrity, and availability.
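The weakness can be illustrated with a minimal sketch. The actual bucket-naming pattern used by Vertex AI Experiments is not disclosed in the advisory, so `vertex-ai-experiments-{project}` below is a hypothetical stand-in: a name derived entirely from guessable inputs can be precomputed and squatted by an attacker, while a name carrying a cryptographically random suffix cannot.

```python
import secrets

def predictable_bucket_name(project_id: str) -> str:
    # Hypothetical vulnerable pattern: derived entirely from guessable
    # inputs, so anyone who knows the project ID can precompute it and
    # pre-create (squat) the bucket before the victim does.
    return f"vertex-ai-experiments-{project_id}"

def hardened_bucket_name(project_id: str) -> str:
    # Appending a cryptographically random suffix makes the name
    # infeasible to guess ahead of time, defeating bucket squatting.
    return f"vertex-ai-experiments-{project_id}-{secrets.token_hex(8)}"

# An attacker can reproduce the predictable name exactly...
assert predictable_bucket_name("victim-proj") == "vertex-ai-experiments-victim-proj"
# ...but hardened names are not reproducible, even for the same project.
assert hardened_bucket_name("victim-proj") != hardened_bucket_name("victim-proj")
```

The fix Google shipped presumably follows the same principle: ensuring bucket names contain entropy an outside party cannot predict.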
Potential Impact
The impact of CVE-2026-2473 is significant for organizations relying on Google Cloud Vertex AI Experiments for their AI and machine learning workloads. Successful exploitation can lead to unauthorized remote code execution across tenants, resulting in potential full compromise of AI experiment environments. This can cause theft of proprietary AI models, leading to intellectual property loss and competitive disadvantage. Model poisoning can degrade AI model accuracy and reliability, causing erroneous business decisions or automated actions. The cross-tenant nature of the attack increases risk in multi-tenant cloud environments, potentially affecting multiple customers simultaneously. The vulnerability also threatens data confidentiality and availability, as attackers can manipulate or deny access to AI experiment data. Organizations in sectors heavily dependent on AI, such as finance, healthcare, autonomous systems, and technology, face heightened risks. The ease of exploitation (no privileges needed) and the broad scope of affected versions amplify the threat. Although no known exploits in the wild have been reported, the potential damage warrants urgent remediation.
Mitigation Recommendations
To mitigate CVE-2026-2473, organizations should immediately verify their Google Cloud Vertex AI Experiments versions and upgrade to version 1.133.0 or later where the vulnerability is patched. Implement strict monitoring and alerting for unusual Cloud Storage bucket creation activities, especially buckets with names matching predictable patterns used by Vertex AI Experiments. Employ Google Cloud IAM policies to restrict bucket creation permissions to trusted users and service accounts only. Use Google Cloud’s security tools to audit access logs and detect anomalous behavior related to AI experiment resources. Consider isolating AI workloads in dedicated projects or environments to limit cross-tenant exposure. Regularly review and update security configurations for AI and cloud storage services. Engage with Google Cloud support for guidance on best practices and incident response in case of suspected exploitation. Finally, educate development and security teams about the risks of predictable resource naming and enforce secure naming conventions.
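Checking whether a given SDK version falls in the affected range (1.21.0 up to, but not including, 1.133.0) can be sketched with a naive dotted-version comparison; this assumes plain MAJOR.MINOR.PATCH strings, as the advisory's version numbers use, and is not a substitute for a proper version-parsing library.

```python
def is_patched(version: str, fixed: str = "1.133.0") -> bool:
    """Return True if `version` is at or above the first fixed release.

    Naive numeric-tuple comparison; sufficient for plain
    MAJOR.MINOR.PATCH strings like "1.132.5".
    """
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(version) >= to_tuple(fixed)

assert not is_patched("1.21.0")    # first affected release
assert not is_patched("1.132.5")   # still inside the affected range
assert is_patched("1.133.0")       # first fixed release
```

In practice the installed version would come from the environment (e.g. the package's reported version string) rather than a hard-coded literal.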
Affected Countries
United States, India, Germany, Japan, United Kingdom, Canada, Australia, France, South Korea, Netherlands, Singapore
Technical Details
- Data Version: 5.2
- Assigner Short Name: GoogleCloud
- Date Reserved: 2026-02-13T15:41:59.549Z
- CVSS Version: 4.0
- State: PUBLISHED
Threat ID: 6998c9e1be58cf853bab6ab7
Added to database: 2/20/2026, 8:53:53 PM
Last enriched: 2/28/2026, 1:22:04 PM
Last updated: 4/7/2026, 1:38:17 PM