
CVE-2025-62615: CWE-918: Server-Side Request Forgery (SSRF) in Significant-Gravitas AutoGPT

Critical
Tags: Vulnerability, CVE-2025-62615, CWE-918
Published: Wed Feb 04 2026 (02/04/2026, 22:28:37 UTC)
Source: CVE Database V5
Vendor/Project: Significant-Gravitas
Product: AutoGPT

Description

CVE-2025-62615 is a critical Server-Side Request Forgery (SSRF) vulnerability in Significant-Gravitas AutoGPT versions prior to autogpt-platform-beta-v0.6.34. The vulnerability arises from the use of urllib.request.urlopen without proper input validation in the RSSFeedBlock component, allowing attackers to make unauthorized requests from the server. This flaw can lead to high confidentiality and integrity impacts without requiring authentication or user interaction. Although no known exploits are currently in the wild, the high CVSS score of 9.3 indicates severe risk. The issue has been patched in autogpt-platform-beta-v0.6.34.

AI-Powered Analysis

Last updated: 02/04/2026, 22:59:53 UTC

Technical Analysis

CVE-2025-62615 is a critical Server-Side Request Forgery (SSRF) vulnerability identified in Significant-Gravitas AutoGPT, an AI platform designed to automate complex workflows via continuous AI agents. The vulnerability exists in versions prior to autogpt-platform-beta-v0.6.34 within the RSSFeedBlock component, where the Python standard-library function urllib.request.urlopen is used directly to fetch URLs without proper input validation or filtering. This lack of sanitization allows an attacker to supply malicious URLs that the server will fetch, potentially enabling unauthorized internal network scanning, access to internal services, or exfiltration of sensitive data. SSRF vulnerabilities are particularly dangerous because they can bypass perimeter defenses by leveraging the vulnerable server as a proxy. The CVSS 4.0 score of 9.3 reflects the vulnerability's ease of exploitation (no authentication or user interaction required) and its severe impact on confidentiality and integrity. Although no public exploits have been reported yet, the vulnerability's presence in an AI automation platform that may be integrated into various enterprise workflows increases the risk profile. The vendor has addressed the issue in autogpt-platform-beta-v0.6.34 by implementing proper input validation and filtering to prevent malicious URL requests. Organizations using affected versions should urgently apply this patch to mitigate the risk.
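To make the flaw concrete, the following is a minimal, hypothetical sketch of the vulnerable pattern (the function name and structure are illustrative assumptions, not the actual RSSFeedBlock source): a user-controlled feed URL is passed straight to urllib.request.urlopen, so the server will fetch whatever target the attacker names.

```python
# Hypothetical illustration of the vulnerable pattern (not the actual
# AutoGPT RSSFeedBlock source): a user-controlled URL is fetched with
# urllib.request.urlopen and no validation, so the server makes the
# request on the attacker's behalf.
import urllib.request

def fetch_feed(feed_url: str) -> bytes:
    # No scheme allow-listing, no hostname/IP filtering -- a classic SSRF sink.
    with urllib.request.urlopen(feed_url, timeout=10) as resp:
        return resp.read()

# URLs an attacker could supply instead of a real RSS feed:
#   http://169.254.169.254/latest/meta-data/   (cloud instance metadata)
#   http://localhost:8080/admin                (internal-only service)
#   file:///etc/passwd                         (local file read via the file: handler)
```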

Potential Impact

For European organizations, the impact of this SSRF vulnerability can be significant. AutoGPT is used to automate AI-driven workflows, which may include sensitive data processing and integration with internal systems. Exploitation could allow attackers to access internal network resources, bypass firewalls, and retrieve confidential information or manipulate internal services. This could lead to data breaches, disruption of AI-driven business processes, and potential lateral movement within corporate networks. Given the critical CVSS score and the lack of required authentication, attackers could exploit this vulnerability remotely and at scale. The risk is heightened for organizations heavily invested in AI automation and those with complex internal network architectures. Additionally, the potential for SSRF to be a pivot point for further attacks makes this vulnerability a serious threat to operational continuity and data security in European enterprises.

Mitigation Recommendations

1. Immediately upgrade AutoGPT installations to autogpt-platform-beta-v0.6.34 or later, where the vulnerability is patched.
2. Implement strict network egress filtering on servers running AutoGPT to restrict outbound HTTP/HTTPS requests to trusted domains and IP ranges only.
3. Employ Web Application Firewalls (WAFs) with rules designed to detect and block SSRF attack patterns, especially targeting URL input parameters.
4. Conduct thorough input validation and sanitization on any user-supplied URLs or external resource references within custom workflows or extensions (see the sketch after this list).
5. Monitor logs for unusual outbound requests originating from AutoGPT servers, focusing on internal IP ranges or sensitive endpoints.
6. Segment networks to limit the ability of compromised servers to reach critical internal services.
7. Educate development and security teams about SSRF risks and secure coding practices to prevent similar vulnerabilities in future AI automation components.
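As a starting point for recommendation 4, here is a minimal sketch of pre-request URL validation, assuming a Python service; it is an illustrative guard, not the vendor's actual patch, and it should be combined with egress filtering because DNS answers can change between the check and the fetch (DNS rebinding).

```python
# Minimal sketch of pre-request URL validation (an illustrative guard, not
# the vendor's actual patch). Resolve the hostname and reject anything that
# is not plain http/https or that maps to a loopback, private, link-local,
# reserved, or multicast address.
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def validate_outbound_url(url: str) -> None:
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"Blocked scheme: {parsed.scheme!r}")
    if not parsed.hostname:
        raise ValueError("URL has no hostname")
    # Check every address the hostname resolves to, not just the first one.
    for family, _, _, _, sockaddr in socket.getaddrinfo(parsed.hostname, None):
        addr = ipaddress.ip_address(sockaddr[0])
        if (addr.is_loopback or addr.is_private or addr.is_link_local
                or addr.is_reserved or addr.is_multicast):
            raise ValueError(f"Blocked internal address: {addr}")

# Usage: call validate_outbound_url(feed_url) immediately before fetching,
# and disable or re-validate redirects so a public URL cannot bounce the
# request back to an internal target.
```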


Technical Details

Data Version: 5.2
Assigner Short Name: GitHub_M
Date Reserved: 2025-10-16T19:24:37.269Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 6983cbf5f9fa50a62fb2103d

Added to database: 2/4/2026, 10:45:09 PM

Last enriched: 2/4/2026, 10:59:53 PM

Last updated: 2/5/2026, 2:11:27 AM


