
CVE-2026-26020: CWE-285: Improper Authorization in Significant-Gravitas AutoGPT

Severity: Critical
Published: Thu Feb 12 2026 (02/12/2026, 20:52:15 UTC)
Source: CVE Database V5
Vendor/Project: Significant-Gravitas
Product: AutoGPT

Description

AutoGPT is a platform that allows users to create, deploy, and manage continuous artificial intelligence agents that automate complex workflows. Prior to 0.6.48, an authenticated user could achieve Remote Code Execution (RCE) on the backend server by embedding a disabled block inside a graph. The BlockInstallationBlock — a development tool capable of writing and importing arbitrary Python code — was marked disabled=True, but graph validation did not enforce this flag. This allowed any authenticated user to bypass the restriction by including the block as a node in a graph, rather than calling the block's execution endpoint directly (which did enforce the flag). This vulnerability is fixed in 0.6.48.
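The flaw described above can be sketched as an authorization check applied at one entry point but not the other. This is a minimal illustrative model, not AutoGPT's actual code; the class and function names are hypothetical.

```python
# Hypothetical sketch of CWE-285 as described in this advisory.
# Names (Block, execute_block_endpoint, validate_graph_*) are illustrative
# and do not match AutoGPT's real identifiers.

class Block:
    def __init__(self, name: str, disabled: bool = False):
        self.name = name
        self.disabled = disabled

    def execute(self) -> str:
        return f"executed {self.name}"


def execute_block_endpoint(block: Block) -> str:
    # The direct execution endpoint enforced the flag ...
    if block.disabled:
        raise PermissionError(f"block {block.name} is disabled")
    return block.execute()


def validate_graph_vulnerable(nodes: list[Block]) -> bool:
    # ... but graph validation only checked structure, never `disabled`,
    # so a disabled block embedded as a graph node would still run.
    return len(nodes) > 0


def validate_graph_fixed(nodes: list[Block]) -> bool:
    # The fix pattern: reject graphs referencing any disabled block,
    # so both entry points enforce the same authorization decision.
    if any(node.disabled for node in nodes):
        raise PermissionError("graph contains a disabled block")
    return len(nodes) > 0
```

The design lesson is that an authorization decision (here, `disabled=True`) must be enforced at every path that can reach execution, not only at the most obvious endpoint.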

AI-Powered Analysis

AILast updated: 02/12/2026, 21:18:37 UTC

Technical Analysis

CVE-2026-26020 is a critical security vulnerability found in Significant-Gravitas AutoGPT, a platform designed to create and manage continuous AI agents automating complex workflows. The vulnerability exists in versions prior to 0.6.48 and stems from improper authorization (CWE-285) related to the BlockInstallationBlock, a development tool block capable of writing and importing arbitrary Python code. Although this block is marked as disabled (disabled=True) to prevent execution, the graph validation process does not enforce this flag when the block is embedded as a node within a graph. Consequently, any authenticated user can bypass the intended restriction by including this disabled block in a graph, leading to remote code execution (RCE) on the backend server.

This bypass does not require direct invocation of the block's execution endpoint, which correctly enforces the disabled flag. The vulnerability requires the attacker to have authenticated access but does not require additional user interaction. The CVSS 4.0 score of 9.4 reflects the vulnerability's critical nature, with network attack vector, low attack complexity, no privileges needed beyond authentication, and no user interaction.

The impact includes full compromise of confidentiality, integrity, and availability of the affected system, as arbitrary Python code execution can lead to data theft, system manipulation, or denial of service. The vulnerability was publicly disclosed on February 12, 2026, and is fixed in AutoGPT version 0.6.48. No known exploits are currently in the wild, but the severity and ease of exploitation make it a high-risk issue.

Potential Impact

For European organizations, this vulnerability poses a significant risk, especially those leveraging AutoGPT for automating AI-driven workflows in critical sectors such as finance, healthcare, manufacturing, and government services. Successful exploitation could lead to unauthorized execution of arbitrary code on backend servers, resulting in data breaches, disruption of automated processes, and potential lateral movement within networks. The compromise of AI automation platforms could undermine trust in AI-driven decision-making and operational continuity. Given the criticality of the flaw and the widespread adoption of AI automation tools, organizations face risks to confidentiality, integrity, and availability of their systems and data. The requirement for authentication reduces the attack surface but does not eliminate risk, as insider threats or compromised credentials could be leveraged. The absence of known exploits currently provides a window for proactive mitigation, but the critical severity demands urgent attention.

Mitigation Recommendations

1. Immediately upgrade all AutoGPT deployments to version 0.6.48 or later, where the vulnerability is patched.
2. Restrict authenticated user permissions to the minimum necessary, applying the principle of least privilege to reduce the risk of exploitation by unauthorized users.
3. Implement strict access controls and multi-factor authentication (MFA) to protect user accounts that can access AutoGPT.
4. Monitor graph configurations and audit logs for unusual inclusion of disabled blocks or unexpected graph structures that could indicate exploitation attempts.
5. Conduct regular security assessments and code reviews of AI automation workflows to detect and remediate insecure configurations.
6. Isolate AutoGPT backend servers within segmented network zones to limit potential lateral movement in case of compromise.
7. Educate administrators and developers about the risks of embedding disabled or development blocks in production graphs.
8. Establish incident response plans specific to AI automation platform compromises to enable rapid containment and recovery.
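Recommendation 4 (monitoring graph configurations) could be approached with a simple audit pass over exported graph definitions. The sketch below assumes a hypothetical JSON export schema with a top-level `"nodes"` list and a `"block_id"` key per node; AutoGPT's actual export format may differ, so the keys here are illustrative assumptions.

```python
# Hypothetical audit helper: flag graph nodes that reference blocks on a
# disallow list (e.g. development-only blocks that should never appear in
# production graphs). The JSON schema used here is assumed, not AutoGPT's
# documented format.
import json

DISALLOWED_BLOCKS = {"BlockInstallationBlock"}


def find_suspicious_nodes(graph_json: str) -> list[dict]:
    """Return the nodes in an exported graph that reference disallowed blocks."""
    graph = json.loads(graph_json)
    return [
        node
        for node in graph.get("nodes", [])
        if node.get("block_id") in DISALLOWED_BLOCKS
    ]
```

Running such a check in CI or on a schedule against stored graphs gives defenders a detection signal even on patched versions, since a disallowed block appearing in any graph is worth investigating.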


Technical Details

Data Version
5.2
Assigner Short Name
GitHub_M
Date Reserved
2026-02-09T21:36:29.554Z
Cvss Version
4.0
State
PUBLISHED

Threat ID: 698e404cc9e1ff5ad81500e3

Added to database: 2/12/2026, 9:04:12 PM

Last enriched: 2/12/2026, 9:18:37 PM

Last updated: 2/12/2026, 11:16:52 PM



