
CVE-2025-7707: CWE-377 Insecure Temporary File in run-llama run-llama/llama_index

Severity: High
Tags: Vulnerability, CVE-2025-7707, CWE-377
Published: Mon Oct 13 2025 (10/13/2025, 16:15:08 UTC)
Source: CVE Database V5
Vendor/Project: run-llama
Product: run-llama/llama_index

Description

The llama_index library, version 0.12.33, sets the NLTK data directory to a subdirectory of the codebase by default, and this directory is world-writable in multi-user environments. The flaw stems from using a shared cache directory instead of a user-specific one: any local user can overwrite, delete, or corrupt the NLTK data files, enabling denial of service, data tampering, or potential privilege escalation.
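The advisory does not show the vulnerable default path, but NLTK honours the standard `NLTK_DATA` environment variable when resolving its data directory. As a sketch of the user-specific configuration the advisory calls for, a deployment could point NLTK at a private per-user directory before llama_index (or nltk) is imported; the `~/.cache/nltk_data` location here is an illustrative choice, not a path mandated by the library:

```python
import os
from pathlib import Path

# Illustrative per-user data directory (any owner-only path works).
user_nltk_dir = Path.home() / ".cache" / "nltk_data"
user_nltk_dir.mkdir(parents=True, exist_ok=True)
os.chmod(user_nltk_dir, 0o700)  # owner-only: removes world-writable access

# Must be set before llama_index / nltk is imported, so the library
# resolves its data path to the private directory instead of a
# shared, world-writable subdirectory of the codebase.
os.environ["NLTK_DATA"] = str(user_nltk_dir)
```

Setting the variable in the process environment (or the service unit) before interpreter start achieves the same effect without code changes.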

AI-Powered Analysis

Last updated: 10/13/2025, 16:27:58 UTC

Technical Analysis

The vulnerability identified as CVE-2025-7707 affects the run-llama/llama_index library, specifically version 0.12.33, where the NLTK (Natural Language Toolkit) data directory is configured by default to a subdirectory within the codebase that is world-writable. In multi-user environments, this configuration flaw allows any local user to modify, delete, or corrupt the NLTK data files stored in this shared directory. The root cause is the use of a shared cache directory rather than isolating data storage per user, which violates secure temporary file handling best practices (CWE-377).

This insecure setup enables several attack vectors: denial of service by deleting or corrupting essential data files, data tampering that could affect the integrity of AI models or processing, and potential privilege escalation if the corrupted data influences higher-privileged processes. The vulnerability requires only low privileges and no user interaction, making it relatively easy to exploit on systems where multiple users have access. Although no exploits have been reported in the wild yet, the vulnerability's presence in a widely used AI indexing library poses a significant risk.

The CVSS 3.0 score of 7.1 reflects a high severity level: attack vector local (AV:L), low attack complexity (AC:L), low privileges required (PR:L), no user interaction (UI:N), unchanged scope (S:U), no confidentiality impact (C:N), high integrity impact (I:H), and high availability impact (A:H). The lack of vendor patches at the time of reporting necessitates immediate configuration changes and monitoring.
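The world-writable condition described above is straightforward to audit. A minimal standard-library sketch (the `is_world_writable` helper and the demo directory are illustrative, not part of llama_index):

```python
import os
import stat
import tempfile

def is_world_writable(path: str) -> bool:
    """Return True if `path` grants write permission to all users,
    the condition that makes a shared cache directory unsafe (CWE-377)."""
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

# Demo: an open directory, as the vulnerable default effectively
# produces in multi-user environments.
demo = tempfile.mkdtemp()
os.chmod(demo, 0o777)
print(is_world_writable(demo))  # True

# After hardening to owner-only permissions:
os.chmod(demo, 0o700)
print(is_world_writable(demo))  # False
```

Running such a check against the configured NLTK data path at service startup gives an early warning before the directory can be abused.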

Potential Impact

For European organizations, especially those involved in AI research, development, and deployment using the run-llama/llama_index library, this vulnerability poses a serious risk. The ability for local users to manipulate shared NLTK data can disrupt AI workflows, leading to denial of service conditions that halt critical processing tasks. Data tampering could compromise the integrity of AI models, resulting in incorrect outputs or decisions, which is particularly concerning in sensitive sectors such as healthcare, finance, and critical infrastructure. Privilege escalation potential increases the risk of broader system compromise, especially in shared computing environments like universities, research labs, and cloud-based multi-tenant platforms. The vulnerability undermines trust in AI data pipelines and could lead to operational downtime, financial losses, and reputational damage. Given the high integrity and availability impacts, organizations must prioritize remediation to maintain secure and reliable AI operations.

Mitigation Recommendations

To mitigate CVE-2025-7707, organizations should immediately audit their use of the run-llama/llama_index library and verify the configuration of the NLTK data directory. Specifically, they should:

1. Reconfigure the NLTK data directory to reside within user-specific directories that are not world-writable, with strict file system permissions that prevent unauthorized modification.
2. Implement access controls and monitoring in shared environments to detect unauthorized changes to NLTK data files.
3. Isolate AI workloads in containerized or virtualized environments to limit local user access and reduce the attack surface.
4. Regularly back up NLTK data files and AI model data to enable quick recovery from tampering or deletion.
5. Monitor for vendor updates and apply patches promptly once released.
6. Educate system administrators and developers on secure temporary file handling best practices to prevent similar issues.
7. Deploy file integrity monitoring tools to alert on unexpected changes to critical AI data directories.

These steps go beyond generic advice by focusing on configuration hardening, environment isolation, and proactive monitoring tailored to this vulnerability.
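The file-integrity monitoring recommendation can be sketched with a simple hash baseline. The helper names below (`snapshot_hashes`, `detect_tampering`) are hypothetical; a production deployment would typically use a dedicated FIM tool rather than this minimal approach:

```python
import hashlib
from pathlib import Path

def snapshot_hashes(data_dir: str) -> dict:
    """Record a SHA-256 hash for every file under the NLTK data directory."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(data_dir).rglob("*")
        if p.is_file()
    }

def detect_tampering(baseline: dict, current: dict) -> list:
    """Return files modified or deleted since the baseline, plus new files."""
    # current.get(f) is None for deleted files, so they register as changed.
    changed = [f for f, h in baseline.items() if current.get(f) != h]
    added = [f for f in current if f not in baseline]
    return sorted(changed + added)
```

A typical use is to record a baseline at deploy time, re-run `snapshot_hashes` on a schedule, and alert whenever `detect_tampering` returns a non-empty list.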


Technical Details

Data Version
5.1
Assigner Short Name
@huntr_ai
Date Reserved
2025-07-16T12:49:24.649Z
Cvss Version
3.0
State
PUBLISHED

Threat ID: 68ed28784a0d14fc5ab516ed

Added to database: 10/13/2025, 4:27:36 PM

Last enriched: 10/13/2025, 4:27:58 PM

Last updated: 10/13/2025, 8:14:29 PM


