
CVE-2026-34450: CWE-276: Incorrect Default Permissions in anthropics anthropic-sdk-python

Severity: Medium
Tags: CVE-2026-34450, CWE-276, CWE-732
Published: Tue Mar 31 2026 (03/31/2026, 21:32:53 UTC)
Source: CVE Database V5
Vendor/Project: anthropics
Product: anthropic-sdk-python

Description

CVE-2026-34450 is a medium-severity vulnerability in anthropic-sdk-python versions 0.86.0 up to but not including 0.87.0. The local filesystem memory tool created files with overly permissive default permissions (mode 0o666), making them world-readable and potentially world-writable in permissive environments such as Docker containers. This allows local attackers on shared hosts to read sensitive persisted agent state and, in containerized deployments, to modify memory files, potentially influencing the behavior of AI models. Both the synchronous and asynchronous memory implementations are affected. The issue does not require user interaction but does require local or container access with low privileges. It has been patched in version 0.87.0.

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 03/31/2026, 22:09:42 UTC

Technical Analysis

CVE-2026-34450 affects the Claude SDK for Python (anthropic-sdk-python), specifically versions from 0.86.0 up to but not including 0.87.0. The issue arises in the local filesystem memory tool component, which creates memory files with default permissions of 0o666. This setting makes the files readable and writable by all users on the system wherever the process umask does not restrict permissions further. In containerized environments, such as Docker base images that often ship with permissive umasks, these files can become world-writable.

This misconfiguration allows a local attacker on a shared host to read sensitive persisted agent state stored in these files. Furthermore, in containerized deployments, an attacker with access to the container could modify the memory files, potentially influencing the behavior of AI models that rely on this persisted state. Both the synchronous and asynchronous memory tool implementations in the SDK are affected, indicating a broad impact within the SDK's memory management features.

The vulnerability is classified under CWE-276 (Incorrect Default Permissions) and CWE-732 (Incorrect Permission Assignment for Critical Resource). Exploitation requires local or container access with low privileges; no user interaction or authentication is needed. The CVSS score of 4.8 (medium) reflects a limited but meaningful impact on confidentiality and integrity. The issue was patched in version 0.87.0 of the SDK, which corrects the file permission settings to prevent unauthorized access to or modification of memory files.
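The interaction between the 0o666 creation mode and the process umask described above can be demonstrated with a short sketch. The function names below are illustrative, not the SDK's actual API: the "insecure" variant mirrors the pre-0.87.0 behavior (a plain `open()`, which requests mode 0o666), while the "hardened" variant requests 0o600 explicitly at creation time so the result is owner-only regardless of umask.

```python
import os
import stat
import tempfile

def write_memory_file_insecure(path: str, data: str) -> None:
    # Illustrative pre-0.87.0 behavior: plain open() creates files with
    # mode 0o666 masked by the process umask. With umask 0o000 (seen in
    # some permissive container setups) the file ends up world-readable
    # and world-writable.
    with open(path, "w") as f:
        f.write(data)

def write_memory_file_hardened(path: str, data: str) -> None:
    # Hardened variant: request 0o600 at creation time via os.open, so
    # the file is owner read/write only, independent of the umask.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(data)

if __name__ == "__main__":
    old_umask = os.umask(0o000)  # simulate a permissive container umask
    try:
        with tempfile.TemporaryDirectory() as d:
            insecure = os.path.join(d, "insecure.json")
            hardened = os.path.join(d, "hardened.json")
            write_memory_file_insecure(insecure, "{}")
            write_memory_file_hardened(hardened, "{}")
            print(oct(stat.S_IMODE(os.stat(insecure).st_mode)))  # 0o666
            print(oct(stat.S_IMODE(os.stat(hardened).st_mode)))  # 0o600
    finally:
        os.umask(old_umask)
```

On a POSIX system the insecure path reports mode 0o666 under a zero umask, while the hardened path stays at 0o600, which is the essence of the fix shipped in 0.87.0.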

Potential Impact

The primary impact of this vulnerability is unauthorized disclosure and potential tampering with sensitive persisted agent state data used by the Claude SDK. For organizations deploying the anthropic-sdk-python in shared hosting environments, this vulnerability could allow local attackers to read confidential information stored in memory files, leading to information leakage. In containerized environments, the risk is elevated as attackers could modify these files, potentially manipulating AI model behavior, which could lead to incorrect or malicious outputs from AI-driven applications. This could undermine trust in AI services, cause data integrity issues, and potentially lead to further exploitation if the AI model behavior is critical to business processes. The scope is limited to local or container access, so remote exploitation is not feasible without prior access. However, given the increasing use of containerized AI deployments and shared cloud environments, the vulnerability poses a tangible risk to confidentiality and integrity of AI workloads. Organizations relying on this SDK for AI applications should consider the sensitivity of the persisted state and the deployment environment to assess risk.

Mitigation Recommendations

To mitigate this vulnerability, organizations should upgrade the anthropic-sdk-python to version 0.87.0 or later, where the file permission issue has been fixed. Until the upgrade is applied, administrators should manually enforce stricter file permissions on the memory files created by the SDK, ensuring they are not world-readable or writable (e.g., setting permissions to 0o600 or more restrictive). In containerized environments, explicitly set restrictive umask values or use container security policies to prevent overly permissive file permissions. Additionally, restrict local and container access to trusted users only, minimizing the risk of local attackers exploiting this vulnerability. Monitoring and auditing file permission changes and access to memory files can help detect potential exploitation attempts. Finally, review deployment architectures to avoid shared hosting scenarios where untrusted users have local access, and consider isolating AI workloads in dedicated containers or VMs with strict access controls.
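For the interim step of manually enforcing stricter permissions, a small audit script can locate and tighten any memory files already created with permissive modes. This is a minimal sketch, not part of the SDK: the directory to scan is an assumption, so point it at wherever the memory tool persists files in your deployment.

```python
import stat
from pathlib import Path

# Any access granted to "other" users: o+r, o+w, o+x.
WORLD_BITS = stat.S_IROTH | stat.S_IWOTH | stat.S_IXOTH

def tighten_permissions(root: str) -> list[str]:
    """Find regular files under `root` that grant any world access
    and chmod them to 0o600 (owner read/write only).

    Returns the list of paths that were fixed. `root` is a deployment-
    specific assumption; it should be the memory tool's storage directory.
    """
    fixed = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        mode = stat.S_IMODE(path.stat().st_mode)
        if mode & WORLD_BITS:
            path.chmod(0o600)
            fixed.append(str(path))
    return fixed
```

Running this periodically (or once before upgrading to 0.87.0) closes the window during which files created under a permissive umask remain world-accessible; it can also double as a detection aid, since a non-empty return value after the upgrade would indicate something else is still creating permissive files.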


Technical Details

Data Version: 5.2
Assigner Short Name: GitHub_M
Date Reserved: 2026-03-27T18:18:14.895Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 69cc424fe6bfc5ba1d44f4af

Added to database: 3/31/2026, 9:53:19 PM

Last enriched: 3/31/2026, 10:09:42 PM

Last updated: 4/1/2026, 3:52:45 AM

