
CVE-2025-62616: CWE-918: Server-Side Request Forgery (SSRF) in Significant-Gravitas AutoGPT

Critical
Tags: vulnerability, CVE-2025-62616, CWE-918
Published: Wed Feb 04 2026 (02/04/2026, 22:28:40 UTC)
Source: CVE Database V5
Vendor/Project: Significant-Gravitas
Product: AutoGPT

Description

CVE-2025-62616 is a critical Server-Side Request Forgery (SSRF) vulnerability in Significant-Gravitas AutoGPT versions prior to autogpt-platform-beta-v0.6.34. The flaw exists in the SendDiscordFileBlock component, where the aiohttp.ClientSession().get method is used without proper URL input validation, allowing attackers to induce the server to make arbitrary HTTP requests. This vulnerability can lead to high confidentiality and integrity impacts, including unauthorized internal network access and data exposure, without requiring authentication or user interaction. The issue has been patched in version 0.6.34.

AI-Powered Analysis

Last updated: 02/04/2026, 22:59:40 UTC

Technical Analysis

CVE-2025-62616 identifies a Server-Side Request Forgery (SSRF) vulnerability in Significant-Gravitas AutoGPT, a platform designed to automate complex workflows using continuous AI agents. The vulnerability resides in the SendDiscordFileBlock feature, which uses the aiohttp.ClientSession().get method to fetch URLs without filtering or validating the input URL parameter. This lack of input sanitization allows an attacker to craft malicious URLs that the server will fetch, potentially accessing internal resources, metadata services, or other protected network endpoints inaccessible externally. The vulnerability affects all versions prior to autogpt-platform-beta-v0.6.34, where the issue has been patched. The CVSS 4.0 score of 9.3 reflects the ease of exploitation (no authentication or user interaction required), network-level attack vector, and the high impact on confidentiality and integrity. Exploiting this SSRF can lead to unauthorized data access, internal network reconnaissance, and possibly further exploitation chains. Although no known exploits are reported in the wild yet, the critical nature and widespread use of AutoGPT in AI-driven automation make this a significant threat. The vulnerability highlights the risks of integrating third-party libraries without proper input validation in AI platforms that interact with external services.
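To make the risk concrete, the sketch below shows why an unvalidated URL parameter is dangerous in this class of flaw. It is a self-contained stdlib illustration, not AutoGPT's actual code (which the advisory says calls aiohttp.ClientSession().get); the payload URLs are representative examples, not observed attack traffic. Each one parses cleanly yet points the server at an address only it can reach.

```python
import ipaddress
from urllib.parse import urlsplit

# Example URLs an attacker could hand to an unvalidated fetch; none are
# reachable from outside the perimeter, but the server itself can reach all.
payloads = [
    "http://169.254.169.254/latest/meta-data/",  # cloud metadata service
    "http://127.0.0.1:8000/admin",               # loopback-only admin panel
    "http://10.0.0.5/",                          # RFC 1918 internal host
]

for url in payloads:
    addr = ipaddress.ip_address(urlsplit(url).hostname)
    internal = addr.is_private or addr.is_link_local or addr.is_loopback
    print(f"{url} -> internal target: {internal}")
```

Because the fetch originates from the server, responses to these requests (metadata credentials, admin pages, internal API output) flow back through the application to the attacker, which is the confidentiality impact the CVSS score reflects.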

Potential Impact

For European organizations, this SSRF vulnerability poses a severe risk, especially for those leveraging AutoGPT to automate workflows involving external APIs or internal network resources. Exploitation could allow attackers to bypass perimeter defenses, access sensitive internal services, exfiltrate confidential data, or pivot within the network. Sectors such as finance, healthcare, manufacturing, and critical infrastructure that increasingly adopt AI automation are particularly vulnerable. The confidentiality and integrity of data processed or accessed by AutoGPT agents could be compromised, leading to regulatory non-compliance under GDPR and potential operational disruptions. The vulnerability's ease of exploitation without authentication increases the attack surface, potentially enabling widespread attacks if vulnerable instances are exposed to the internet. Additionally, the SSRF could be used to target cloud metadata services, leading to credential theft and further compromise. The lack of known exploits currently provides a window for proactive mitigation but also suggests that attackers may develop exploits soon given the criticality.

Mitigation Recommendations

Immediate upgrade to autogpt-platform-beta-v0.6.34 or later is the primary mitigation step to eliminate the vulnerability. Organizations should audit their AutoGPT deployments to identify and remediate any instances running vulnerable versions. Implement strict input validation and sanitization on all URL inputs used by AutoGPT components, especially those invoking HTTP client libraries. Network segmentation should be enforced to restrict AutoGPT servers' access to only necessary external and internal resources, minimizing the impact of potential SSRF exploitation. Employ web application firewalls (WAFs) with SSRF detection capabilities to monitor and block suspicious outbound requests. Conduct regular security assessments and penetration testing focusing on AI automation platforms. Additionally, monitor logs for unusual outbound HTTP requests originating from AutoGPT services. Educate developers and DevOps teams about secure coding practices related to SSRF and third-party library usage. Finally, consider implementing runtime application self-protection (RASP) solutions to detect and prevent SSRF attacks dynamically.
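The URL-validation recommendation can be sketched as follows. This is a hypothetical helper, not the upstream 0.6.34 patch: it accepts only http(s) URLs, resolves the hostname, and rejects any URL whose resolved addresses fall in private, loopback, link-local, reserved, or multicast ranges.

```python
import ipaddress
import socket
from urllib.parse import urlsplit

def is_safe_url(url: str) -> bool:
    """Return True only if url uses http(s) and resolves solely to public IPs."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https") or not parts.hostname:
        return False
    try:
        # Resolve every address the hostname maps to: an attacker-controlled
        # DNS name may resolve to an internal IP even if the name looks benign.
        infos = socket.getaddrinfo(parts.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback or addr.is_link_local
                or addr.is_reserved or addr.is_multicast):
            return False
    return True
```

Note that check-then-fetch validation alone is vulnerable to DNS rebinding (the name can resolve differently at fetch time), so a robust deployment should also pin the validated IP when connecting or route outbound requests through an egress proxy with an allowlist.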


Technical Details

Data Version
5.2
Assigner Short Name
GitHub_M
Date Reserved
2025-10-16T19:24:37.269Z
Cvss Version
4.0
State
PUBLISHED

Threat ID: 6983cbf5f9fa50a62fb21040

Added to database: 2/4/2026, 10:45:09 PM

Last enriched: 2/4/2026, 10:59:40 PM

Last updated: 2/5/2026, 2:06:52 AM


