CVE-2026-25580: CWE-918: Server-Side Request Forgery (SSRF) in pydantic pydantic-ai
Pydantic AI is a Python agent framework for building applications and workflows with Generative AI. From 0.0.26 to before 1.56.0, a Server-Side Request Forgery (SSRF) vulnerability exists in Pydantic AI's URL download functionality. When applications accept message history from untrusted sources, attackers can include malicious URLs that cause the server to make HTTP requests to internal network resources, potentially accessing internal services or cloud credentials. This vulnerability only affects applications that accept message history from external users. This vulnerability is fixed in 1.56.0.
AI Analysis
Technical Summary
CVE-2026-25580 is a Server-Side Request Forgery (SSRF) vulnerability in the pydantic-ai Python agent framework, specifically in its URL download functionality. It affects versions from 0.0.26 up to but not including 1.56.0. Pydantic-ai is used to build applications and workflows leveraging Generative AI, and such applications often process message histories that may come from external, untrusted sources. The flaw allows an attacker to embed malicious URLs in message history input, which the vulnerable server then fetches. This can trigger unauthorized HTTP requests to services on the internal network or to cloud metadata endpoints, potentially exposing sensitive information such as internal APIs, databases, or cloud credentials. Exploitation requires no authentication or user interaction, making remote attacks straightforward. The CVSS 3.1 score of 8.6 reflects high severity, driven by confidentiality impact; integrity and availability are not affected. The vulnerability is fixed in pydantic-ai 1.56.0, where the URL download functionality has been hardened against SSRF. Because only applications that accept message history from untrusted external users are affected, the actual risk depends on deployment context. No exploits in the wild have been reported yet, but the potential confidentiality impact is significant, especially in cloud environments where internal metadata services are reachable via SSRF. Organizations using pydantic-ai in AI-driven workflows should audit their usage and update promptly.
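The cloud-metadata exposure described above comes down to the downloader fetching whatever host an attacker-supplied URL names. A minimal stdlib-only sketch (the helper name and checks are illustrative, not part of pydantic-ai's API) shows how literal internal addresses such as the cloud metadata endpoint can be recognized:

```python
import ipaddress
from urllib.parse import urlparse

def looks_internal(url: str) -> bool:
    """Heuristic check: does this URL target a private, loopback,
    or link-local address (e.g. a cloud metadata endpoint)?"""
    host = urlparse(url).hostname or ""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Not a literal IP; a hostname needs DNS resolution to judge.
        return False
    return addr.is_private or addr.is_loopback or addr.is_link_local

# The AWS/GCP instance metadata endpoint is link-local:
print(looks_internal("http://169.254.169.254/latest/meta-data/"))  # True
print(looks_internal("http://127.0.0.1:8080/admin"))               # True
print(looks_internal("https://example.com/file.png"))              # False (hostname, not a literal IP)
```

Note that a check like this is only a partial defense: hostnames that resolve to internal addresses pass through unexamined, which is why the fix in 1.56.0 and the mitigations below matter.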
Potential Impact
For European organizations, the SSRF vulnerability in pydantic-ai poses a significant risk to confidentiality, particularly for those deploying AI applications that process external message histories and rely on internal network services or cloud infrastructure. Exploitation can lead to unauthorized access to internal APIs, databases, or cloud metadata services, potentially resulting in data leakage, credential compromise, and lateral movement within corporate networks. This is especially critical for sectors with sensitive data such as finance, healthcare, and government. The vulnerability does not affect integrity or availability directly but can be a stepping stone for further attacks. Given the widespread adoption of Python frameworks and increasing use of AI-driven applications in Europe, organizations that have integrated pydantic-ai without strict input validation or network segmentation are at heightened risk. The impact is amplified in cloud environments where SSRF can expose cloud credentials, enabling attackers to escalate privileges or exfiltrate data. The absence of required authentication or user interaction lowers the barrier for exploitation, increasing the threat surface.
Mitigation Recommendations
1. Upgrade pydantic-ai to version 1.56.0 or later immediately to apply the official fix for the SSRF vulnerability.
2. Implement strict input validation and sanitization on all message history inputs, especially those originating from untrusted external sources, and enforce a URL allowlist.
3. Employ network segmentation and firewall rules to restrict server outbound HTTP requests to trusted destinations only, preventing unauthorized internal network access.
4. Use cloud provider security controls to limit access to metadata services, such as enabling Instance Metadata Service Version 2 (IMDSv2) on AWS or equivalent protections on other platforms.
5. Monitor logs for unusual outbound HTTP requests from AI application servers to detect potential exploitation attempts.
6. Conduct security reviews and penetration testing focused on SSRF vectors in AI workflows.
7. Educate developers and DevOps teams about SSRF risks in AI frameworks and enforce secure coding practices.
8. If upgrading immediately is not possible, consider disabling or restricting the URL download functionality in pydantic-ai, or isolating the service in a network environment with no access to sensitive internal resources.
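Recommendations 2 and 3 above can be combined in application code as a pre-download gate. A sketch of an allowlist validator (the allowed hosts are placeholders; adapt the policy to your deployment) that also resolves the hostname and rejects private, loopback, link-local, or reserved destinations:

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}
ALLOWED_HOSTS = {"files.example.com", "cdn.example.com"}  # hypothetical allowlist

def validate_url(url: str) -> bool:
    """Return True only if the URL is safe to fetch under the allowlist policy."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False
    host = parsed.hostname or ""
    if host not in ALLOWED_HOSTS:
        return False
    # Resolve even allowlisted names and check every resulting address,
    # in case DNS is attacker-influenced.
    try:
        infos = socket.getaddrinfo(host, parsed.port or 80, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True
```

Validating at check time still leaves a time-of-check/time-of-use gap (DNS rebinding); a stricter design pins the resolved IP and connects to it directly at fetch time.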
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Ireland
Technical Details
- Data Version: 5.2
- Assigner Short Name: GitHub_M
- Date Reserved: 2026-02-03T01:02:46.715Z
- CVSS Version: 3.1
- State: PUBLISHED
Threat ID: 698659ddf9fa50a62f342a24
Added to database: 2/6/2026, 9:15:09 PM
Last enriched: 2/14/2026, 12:10:12 PM
Last updated: 3/22/2026, 7:52:33 AM