
CVE-2026-34450: CWE-276: Incorrect Default Permissions in anthropics anthropic-sdk-python

Severity: Medium
Tags: vulnerability, cve-2026-34450, cwe-276, cwe-732
Published: Tue Mar 31 2026 (03/31/2026, 21:32:53 UTC)
Source: CVE Database V5
Vendor/Project: anthropics
Product: anthropic-sdk-python

Description

The Claude SDK for Python provides access to the Claude API from Python applications. In versions 0.86.0 and later, prior to 0.87.0, the local filesystem memory tool in the Anthropic Python SDK created memory files with mode 0o666, leaving them world-readable on systems with a standard umask and world-writable in environments with a permissive umask, such as many Docker base images. A local attacker on a shared host could read persisted agent state, and in containerized deployments could modify memory files to influence subsequent model behavior. Both the synchronous and asynchronous memory tool implementations were affected. This issue has been patched in version 0.87.0.

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 04/08/2026, 04:14:09 UTC

Technical Analysis

CVE-2026-34450 describes an incorrect default permissions vulnerability (CWE-276) in the anthropic-sdk-python package, specifically in the local filesystem memory tool. Versions from 0.86.0 up to, but not including, 0.87.0 created memory files with mode 0o666, making them world-readable and potentially world-writable depending on the environment's umask. This could allow local attackers on shared hosts to access sensitive persisted agent state or, in containerized deployments with permissive umasks, to modify memory files and influence subsequent model behavior. Both synchronous and asynchronous implementations were impacted. The vulnerability was patched in version 0.87.0.
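The interaction between the requested mode and the process umask described above can be demonstrated directly. This is a minimal sketch, not code from the SDK: it shows why a file created with mode 0o666 ends up world-readable under a standard 0o022 umask and fully world-accessible under a 0o000 umask, while requesting 0o600 stays owner-only in both cases.

```python
import os
import stat
import tempfile

def create_with_mode(path: str, mode: int) -> int:
    """Create a file requesting `mode`; the kernel clears any umask bits."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, mode)
    os.close(fd)
    return stat.S_IMODE(os.stat(path).st_mode)

tmp = tempfile.mkdtemp()

# Standard umask 0o022: requesting 0o666 yields 0o644 --
# world-readable, matching the disclosure risk described above.
os.umask(0o022)
print(oct(create_with_mode(os.path.join(tmp, "standard"), 0o666)))  # 0o644

# Permissive umask 0o000 (common in minimal container images):
# 0o666 survives intact -- world-readable AND world-writable.
os.umask(0o000)
print(oct(create_with_mode(os.path.join(tmp, "permissive"), 0o666)))  # 0o666

# Requesting 0o600 keeps the file owner-only under either umask.
print(oct(create_with_mode(os.path.join(tmp, "private"), 0o600)))  # 0o600
```

This is why the fix is to request a restrictive mode at creation time rather than rely on the deployment environment's umask.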

Potential Impact

Local attackers on shared hosts could read sensitive persisted agent state due to world-readable file permissions. In containerized environments with permissive umasks, attackers could modify memory files, potentially influencing the behavior of the AI model using the SDK. This elevates the risk of information disclosure and unauthorized modification of agent state, but requires local access or container environment conditions.

Mitigation Recommendations

Upgrade the anthropic-sdk-python package to version 0.87.0 or later, where the issue has been patched. The update corrects the file permission settings for newly created memory files. Note that upgrading does not retroactively change the permissions of memory files already on disk; audit any existing memory files and restrict them to owner-only access (mode 0o600).
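The two remediation steps above can be sketched as follows. This is an illustrative helper, not part of the SDK: the `memory_dir` argument is a placeholder for wherever a given deployment persists agent memory, and the version comparison is deliberately simplistic (for production code, prefer `packaging.version.parse`).

```python
import os
import stat
from importlib import metadata

def sdk_is_patched(minimum: str = "0.87.0") -> bool:
    """True if the installed `anthropic` package is at or above `minimum`.

    Simplistic numeric comparison; pre-release suffixes are not handled.
    """
    try:
        installed = metadata.version("anthropic")
    except metadata.PackageNotFoundError:
        return False
    parse = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return parse(installed) >= parse(minimum)

def restrict_memory_files(memory_dir: str) -> None:
    """Chmod any group/world-accessible files under memory_dir to 0o600."""
    for root, _dirs, files in os.walk(memory_dir):
        for name in files:
            path = os.path.join(root, name)
            mode = stat.S_IMODE(os.stat(path).st_mode)
            if mode & 0o077:  # any group/other permission bits set
                os.chmod(path, 0o600)
```

Running `restrict_memory_files` once after upgrading closes the window for files created by a vulnerable version.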


Technical Details

Data Version: 5.2
Assigner Short Name: GitHub_M
Date Reserved: 2026-03-27T18:18:14.895Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 69cc424fe6bfc5ba1d44f4af

Added to database: 3/31/2026, 9:53:19 PM

Last enriched: 4/8/2026, 4:14:09 AM

Last updated: 5/15/2026, 7:29:43 AM

Views: 92

Community Reviews

0 reviews


