CVE-2026-2472: CWE-79 Improper Neutralization of Input During Web Page Generation (XSS or 'Cross-site Scripting') in Google Cloud Vertex AI SDK for Python
Stored Cross-Site Scripting (XSS) in the _genai/_evals_visualization component of the Google Cloud Vertex AI SDK for Python (google-cloud-aiplatform), versions 1.98.0 up to (but not including) 1.131.0, allows an unauthenticated remote attacker to execute arbitrary JavaScript in a victim's Jupyter or Colab environment by injecting script escape sequences into model evaluation results or dataset JSON data.
AI Analysis
Technical Summary
CVE-2026-2472 is a stored Cross-Site Scripting (XSS) vulnerability classified under CWE-79, found in the _genai/_evals_visualization component of the Google Cloud Vertex AI SDK for Python. This SDK is widely used for managing AI and machine learning workflows, including model evaluation and visualization in environments such as Jupyter notebooks and Google Colab. The vulnerability exists in versions from 1.98.0 up to but not including 1.131.0. It allows an unauthenticated remote attacker to inject malicious JavaScript code by embedding script escape sequences into model evaluation results or dataset JSON data. When these results are rendered in a victim's interactive environment, the malicious script executes with the privileges of the user running the notebook. This can lead to theft of sensitive data, session hijacking, or further compromise of the environment. The vulnerability is notable because it does not require authentication to exploit, though user interaction is necessary to trigger the payload. The CVSS 4.0 score of 8.6 reflects the high impact on confidentiality, integrity, and availability, combined with the ease of remote exploitation. No patches or fixes are linked yet, and no known exploits have been reported in the wild. The vulnerability highlights the risks of insufficient input sanitization in web page generation components within AI development tools, especially those integrated into collaborative notebook environments.
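The bug class described above can be illustrated with a minimal sketch. The `render_eval_cell` function below is hypothetical and not the SDK's actual code; it simply shows how interpolating an attacker-controlled string from evaluation-result JSON into HTML without escaping lets embedded markup reach the browser, while routing the value through `html.escape` neutralizes it:

```python
import html
import json

def render_eval_cell(value: str, escape: bool = True) -> str:
    """Build an HTML table cell from an evaluation-result string.

    Hypothetical illustration of the bug class: interpolating untrusted
    JSON-derived strings into HTML without escaping lets embedded markup
    (e.g. a <script> tag) reach the browser's parser.
    """
    if escape:
        value = html.escape(value)  # converts <, >, &, and quotes to entities
    return f"<td>{value}</td>"

# Attacker-controlled "evaluation result" carrying a script payload.
payload = json.loads('{"response": "<script>alert(1)</script>"}')

unsafe = render_eval_cell(payload["response"], escape=False)
safe = render_eval_cell(payload["response"])

assert "<script>" in unsafe          # payload survives as live markup
assert "<script>" not in safe        # rendered as inert text instead
assert "&lt;script&gt;" in safe
```

The same principle applies to any notebook component that builds HTML from untrusted model output: escape at the point of rendering, not after.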
Potential Impact
The impact of CVE-2026-2472 is significant for organizations leveraging Google Cloud Vertex AI SDK for Python in their AI/ML workflows, particularly those using Jupyter or Colab environments for model evaluation and visualization. Successful exploitation can lead to arbitrary JavaScript execution in the victim's environment, potentially resulting in data theft, unauthorized access to sensitive AI model data, credential compromise, or manipulation of evaluation results. This undermines the confidentiality and integrity of AI workflows and can disrupt availability if malicious scripts perform destructive actions. Since the vulnerability is exploitable without authentication, attackers can target exposed environments broadly, increasing the attack surface. Organizations relying on collaborative notebooks for AI development are especially vulnerable, as these environments often contain sensitive datasets and credentials. The absence of known exploits in the wild suggests a window of opportunity for proactive mitigation, but the high CVSS score indicates that the threat could be severe if weaponized. The vulnerability could also damage trust in AI model outputs if attackers manipulate evaluation data.
Mitigation Recommendations
To mitigate CVE-2026-2472, organizations should upgrade the Google Cloud Vertex AI SDK for Python to version 1.131.0 or later, the first release outside the affected range. Until upgrades are complete, implement strict input validation and sanitization on all model evaluation results and dataset JSON data before rendering them in notebooks. Where the hosting environment allows it, apply Content Security Policy (CSP) headers to restrict execution of unauthorized scripts. Limit exposure of Jupyter and Colab notebooks to trusted users and networks, and monitor for suspicious activity or unexpected script execution. Educate users about the risks of opening untrusted notebooks or datasets. Additionally, consider isolating AI development environments to reduce the blast radius of a successful XSS attack. Regularly audit third-party AI SDK components for security updates and vulnerabilities, and deploy runtime detection tools that can identify anomalous script execution within notebook environments.
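A quick way to triage exposure is to compare the installed package version against the affected range. The helper below is a sketch that handles simple `X.Y.Z` release strings only (pre-release suffixes would need extra parsing); in practice the installed version could be obtained with `importlib.metadata.version("google-cloud-aiplatform")`:

```python
def parse_version(v: str) -> tuple:
    # Parse a simple "X.Y.Z" release string into a comparable int tuple.
    return tuple(int(part) for part in v.split(".")[:3])

def is_vulnerable(installed: str) -> bool:
    """Return True if the installed google-cloud-aiplatform version falls
    in the affected range [1.98.0, 1.131.0) reported for CVE-2026-2472."""
    v = parse_version(installed)
    return parse_version("1.98.0") <= v < parse_version("1.131.0")

# Checks against the advisory's boundaries.
assert is_vulnerable("1.98.0")       # first affected release
assert is_vulnerable("1.130.5")      # inside the range
assert not is_vulnerable("1.131.0")  # first fixed release
assert not is_vulnerable("1.97.2")   # predates the affected range
```

Tuple comparison is used deliberately: naive string comparison would order "1.131.0" before "1.98.0" and misclassify patched installs.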
Affected Countries
United States, India, Germany, United Kingdom, Canada, Australia, Japan, France, South Korea, Netherlands, Singapore
Technical Details
- Data Version: 5.2
- Assigner Short Name: GoogleCloud
- Date Reserved: 2026-02-13T15:38:12.195Z
- CVSS Version: 4.0
- State: PUBLISHED