
CVE-2026-25580: CWE-918: Server-Side Request Forgery (SSRF) in pydantic pydantic-ai

Severity: High
Tags: Vulnerability, CVE-2026-25580, CWE-918
Published: Fri Feb 06 2026 (02/06/2026, 21:01:38 UTC)
Source: CVE Database V5
Vendor/Project: pydantic
Product: pydantic-ai

Description

CVE-2026-25580 is a high-severity Server-Side Request Forgery (SSRF) vulnerability in pydantic-ai versions from 0.0.26 up to but not including 1.56.0. The vulnerability arises from the URL download functionality that processes message history from untrusted external sources, allowing attackers to craft malicious URLs. Exploiting this flaw enables an attacker to make the server perform HTTP requests to internal network resources, potentially exposing sensitive internal services or cloud credentials. No authentication or user interaction is required, and the vulnerability affects applications that accept external message history. The issue is fixed in pydantic-ai version 1.56.0.

AI-Powered Analysis

AI analysis last updated: 02/06/2026, 21:29:39 UTC

Technical Analysis

CVE-2026-25580 is a Server-Side Request Forgery (SSRF) vulnerability classified under CWE-918 affecting the pydantic-ai Python agent framework, specifically versions from 0.0.26 up to but not including 1.56.0. Pydantic AI facilitates building applications and workflows leveraging Generative AI, including functionality to download URLs as part of processing message histories. The vulnerability exists because the URL download feature does not properly validate or restrict URLs received from untrusted external sources. An attacker who can supply message history data to the application can embed malicious URLs that cause the server to initiate HTTP requests to internal network addresses or cloud metadata endpoints. This can lead to unauthorized access to internal services, sensitive data leakage, or exposure of cloud credentials. The vulnerability does not require any authentication or user interaction, making it easier to exploit remotely. The scope is limited to applications that accept message history from external users, but given the growing adoption of AI frameworks, the attack surface is significant. The vulnerability was published on February 6, 2026, with a CVSS v3.1 score of 8.6, reflecting high severity primarily due to the potential for complete confidentiality compromise without affecting integrity or availability. The issue is resolved in pydantic-ai version 1.56.0, and users are advised to upgrade to this or later versions to remediate the risk.
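To make the vulnerability class concrete, the sketch below shows the generic SSRF-prone pattern the analysis describes: a server-side handler that downloads any URL embedded in externally supplied message history without validating it. The function and field names (`render_history`, `attachment_url`) are hypothetical illustrations, not pydantic-ai's actual API.

```python
import urllib.request

def render_history(messages: list[dict]) -> list[bytes]:
    """Naive handler that fetches any URL found in externally
    supplied message history -- the classic SSRF-prone pattern.
    (Illustrative only; names are hypothetical.)"""
    blobs = []
    for msg in messages:
        url = msg.get("attachment_url")
        if url:
            # No validation: the server fetches whatever the client
            # supplied, including internal or metadata addresses.
            blobs.append(urllib.request.urlopen(url).read())
    return blobs

# An attacker-supplied history could point the server at the
# cloud instance metadata service instead of a real attachment:
malicious_history = [
    {"role": "user",
     "attachment_url": "http://169.254.169.254/latest/meta-data/"},
]
```

Because the request originates from the server, it can reach addresses the attacker could never contact directly, which is why egress controls and URL validation (below) matter even behind a perimeter firewall.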

Potential Impact

For European organizations, this SSRF vulnerability poses a significant risk to confidentiality, especially for those deploying AI applications using pydantic-ai that accept external message histories. Exploitation can lead to unauthorized internal network reconnaissance, access to sensitive internal services, and exposure of cloud credentials, which may result in further compromise of cloud resources or internal systems. This can affect sectors heavily reliant on cloud infrastructure and AI, such as finance, healthcare, and critical infrastructure. The vulnerability does not impact integrity or availability directly but can be a stepping stone for more severe attacks. Given the ease of exploitation without authentication or user interaction, attackers can remotely leverage this flaw to bypass perimeter defenses. The impact is heightened in environments where internal services are trusted implicitly or where cloud metadata services are accessible internally. Organizations failing to patch may face data breaches, regulatory penalties under GDPR for data exposure, and reputational damage.

Mitigation Recommendations

European organizations should immediately upgrade pydantic-ai to version 1.56.0 or later to eliminate the vulnerability. In addition to patching, organizations should implement strict input validation and sanitization for any URLs or external data accepted from untrusted sources, especially in AI workflows. Network segmentation should be enforced to limit server access to internal resources and cloud metadata endpoints, reducing the impact of SSRF exploitation. Employing egress filtering and web application firewalls (WAFs) with SSRF detection rules can help detect and block malicious outbound requests. Monitoring and logging HTTP requests initiated by AI applications can provide early detection of exploitation attempts. Organizations should also review and harden cloud instance metadata service access policies, such as using IMDSv2 in AWS, to prevent unauthorized credential access. Finally, conducting regular security assessments and penetration testing focused on SSRF and AI application inputs will help identify residual risks.
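One way to implement the strict URL validation recommended above is to resolve each candidate URL and reject anything that maps to a private, loopback, or link-local address before fetching it. This is a minimal stdlib-only sketch, not an exhaustive defense: it does not, for example, protect against DNS rebinding between the check and the actual request, or against HTTP redirects to internal hosts.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_url_safe(url: str) -> bool:
    """Basic SSRF guard: allow only http(s) URLs whose host
    resolves exclusively to public, globally routable addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        # Check every address the hostname maps to; an attacker may
        # register a DNS name that points at an internal range.
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback or addr.is_link_local
                or addr.is_reserved or addr.is_multicast):
            return False
    return True
```

The `is_link_local` check is what blocks the cloud metadata endpoint 169.254.169.254; pairing this with IMDSv2 and egress filtering provides defense in depth rather than relying on any single control.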


Technical Details

Data Version
5.2
Assigner Short Name
GitHub_M
Date Reserved
2026-02-03T01:02:46.715Z
Cvss Version
3.1
State
PUBLISHED

Threat ID: 698659ddf9fa50a62f342a24

Added to database: 2/6/2026, 9:15:09 PM

Last enriched: 2/6/2026, 9:29:39 PM

Last updated: 2/6/2026, 10:35:36 PM



