CVE-2026-2473: CWE-340 Generation of Predictable Numbers or Identifiers in Google Cloud Vertex AI Experiments

Severity: High
Tags: Vulnerability, CVE-2026-2473, CWE-340
Published: Fri Feb 20 2026 (02/20/2026, 19:39:51 UTC)
Source: CVE Database V5
Vendor/Project: Google Cloud
Product: Vertex AI Experiments

Description

CVE-2026-2473 is a high-severity vulnerability in Google Cloud Vertex AI Experiments versions 1.21.0 up to but not including 1.133.0. It involves predictable bucket naming that allows unauthenticated remote attackers to pre-create Cloud Storage buckets, leading to bucket squatting. Exploiting this flaw can result in cross-tenant remote code execution, model theft, and poisoning attacks. The vulnerability stems from CWE-340, which concerns the generation of predictable numbers or identifiers. No authentication is required, but user interaction is needed to trigger the attack. Google has patched this issue, and no customer action is currently required.
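
The weakness class here (CWE-340) is easiest to see in code. The sketch below is hypothetical — the actual naming scheme Vertex AI Experiments used is not disclosed in this advisory — but it shows why a bucket name derived only from guessable inputs can be claimed in advance by anyone, while a high-entropy suffix cannot. Because Cloud Storage bucket names are globally unique, whoever creates a name first owns it.

```python
import secrets

def predictable_bucket_name(project_id: str) -> str:
    # Hypothetical deterministic scheme (illustrative only): anyone who
    # knows or guesses the project ID can compute this name and create
    # the bucket first, because bucket names are globally unique.
    return f"{project_id}-vertex-experiments"

def unpredictable_bucket_name(project_id: str) -> str:
    # A cryptographically random suffix (64 bits here) makes the name
    # infeasible to guess, defeating pre-creation/squatting.
    return f"{project_id}-vertex-experiments-{secrets.token_hex(8)}"

print(predictable_bucket_name("acme-ml-prod"))    # acme-ml-prod-vertex-experiments
print(unpredictable_bucket_name("acme-ml-prod"))  # ...-experiments-9f2c41a07b3d5e88 (random)
```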

AI-Powered Analysis

AI analysis last updated: 02/20/2026, 20:55:59 UTC

Technical Analysis

CVE-2026-2473 is a vulnerability identified in Google Cloud's Vertex AI Experiments product, specifically affecting versions from 1.21.0 up to but not including 1.133.0. The root cause is the generation of predictable Cloud Storage bucket names used by Vertex AI Experiments, which allows an unauthenticated remote attacker to preemptively create these buckets (a technique known as bucket squatting). By controlling these buckets, attackers can execute cross-tenant remote code execution, steal AI models, or poison models used by other tenants. This vulnerability is categorized under CWE-340, indicating weak or predictable identifier generation. The attack vector is network-based with no privileges required, but user interaction is necessary to trigger the exploit. The vulnerability affects the confidentiality, integrity, and availability of AI workloads and data stored or processed via Vertex AI Experiments. Google has addressed this vulnerability in versions 1.133.0 and later, and no active exploits have been reported in the wild. The CVSS 4.0 score of 7.7 reflects the high impact and relative ease of exploitation given the lack of authentication requirements. This vulnerability highlights the critical importance of secure resource naming and access controls in cloud AI services.
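
Google's patch requires no customer action, but teams that create or reuse buckets programmatically can apply a generic defensive pattern against squatting: verify that a bucket actually belongs to your project before writing to it, rather than trusting the name alone. Below is a minimal sketch using the google-cloud-storage client; the bucket name and project number are placeholders, and this is not Google's fix, just a hardening habit.

```python
from google.api_core.exceptions import Forbidden, NotFound
from google.cloud import storage  # pip install google-cloud-storage

def bucket_is_ours(client: storage.Client, bucket_name: str,
                   expected_project_number: int) -> bool:
    """Return True only if the bucket exists and belongs to our project."""
    try:
        bucket = client.get_bucket(bucket_name)  # fetches bucket metadata
    except NotFound:
        return False  # name is free; create it ourselves (with a random suffix)
    except Forbidden:
        return False  # bucket exists but we cannot read it: a squatting signal
    return bucket.project_number == expected_project_number

client = storage.Client()
if not bucket_is_ours(client, "acme-ml-prod-vertex-experiments", 123456789012):
    raise RuntimeError("Bucket missing or owned elsewhere; refusing to write.")
```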

Potential Impact

The impact of CVE-2026-2473 is significant for organizations using Google Cloud Vertex AI Experiments. Successful exploitation can lead to unauthorized remote code execution across tenants, allowing attackers to execute arbitrary code within the cloud environment. This compromises the confidentiality of AI models, enabling theft of proprietary or sensitive intellectual property. Additionally, attackers can poison models, degrading the integrity and reliability of AI-driven decisions and outputs. Availability can also be affected if attackers disrupt AI workflows or corrupt stored data. The cross-tenant nature of the attack increases the risk of widespread damage in multi-tenant cloud environments. Organizations relying on Vertex AI for critical AI workloads, especially those in regulated industries or handling sensitive data, face risks of data breaches, intellectual property loss, and operational disruption. The vulnerability's exploitation could undermine trust in cloud AI services and result in financial and reputational damage.

Mitigation Recommendations

To mitigate CVE-2026-2473, organizations should:

- Verify the Vertex AI Experiments version in use and upgrade to 1.133.0 or later, where the vulnerability is patched.
- Audit Cloud Storage bucket naming policies to ensure names are unpredictable; avoid patterns an outsider could derive.
- Enforce strict IAM policies on bucket creation and access, limiting who can create or control buckets.
- Monitor and alert on unusual bucket-creation activity and access patterns to detect squatting attempts early (a logging sketch follows this list).
- Use Google Cloud security tooling such as Security Command Center to surface misconfigurations or suspicious activity related to AI experiments and storage buckets.
- Isolate AI workloads in dedicated projects or VPCs to reduce the cross-tenant attack surface.
- Regularly review and update security configurations in line with Google Cloud best practices for AI and storage services.
- Educate development and operations teams about the risks of predictable resource naming, and enforce secure coding and deployment standards.
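
As one concrete way to implement the monitoring recommendation above, the sketch below queries Admin Activity audit logs for bucket-creation events. It assumes the google-cloud-logging client library; the project ID is a placeholder, and a production deployment would feed these events into an alerting pipeline rather than printing them.

```python
from google.cloud import logging  # pip install google-cloud-logging

PROJECT_ID = "acme-ml-prod"  # placeholder

# Admin Activity audit logs record every storage.buckets.create call;
# reviewing creators and name patterns helps surface squatting attempts.
FILTER = (
    f'logName="projects/{PROJECT_ID}/logs/cloudaudit.googleapis.com%2Factivity" '
    'AND protoPayload.methodName="storage.buckets.create"'
)

client = logging.Client(project=PROJECT_ID)
for entry in client.list_entries(filter_=FILTER, order_by=logging.DESCENDING):
    audit = entry.payload  # AuditLog record (a dict in recent client versions)
    print(entry.timestamp,
          audit.get("authenticationInfo", {}).get("principalEmail"),
          audit.get("resourceName"))
```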

Technical Details

Data Version: 5.2
Assigner Short Name: GoogleCloud
Date Reserved: 2026-02-13T15:41:59.549Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 6998c9e1be58cf853bab6ab7

Added to database: 2/20/2026, 8:53:53 PM

Last enriched: 2/20/2026, 8:55:59 PM

Last updated: 2/20/2026, 8:57:02 PM
