CVE-2025-7647: CWE-378 Creation of Temporary File With Insecure Permissions in run-llama run-llama/llama_index
The llama-index-core package, up to version 0.12.44, contains a vulnerability in the `get_cache_dir()` function where a predictable, hardcoded directory path `/tmp/llama_index` is used on Linux systems without proper security controls. This vulnerability allows attackers on multi-user systems to steal proprietary models, poison cached embeddings, or conduct symlink attacks. The issue affects all Linux deployments where multiple users share the same system. The vulnerability is classified under CWE-378, CWE-379, CWE-377, and CWE-367, indicating insecure temporary file creation and potential race conditions.
AI Analysis
Technical Summary
CVE-2025-7647 is a high-severity vulnerability affecting the run-llama project's llama-index-core package up to version 0.12.44. The vulnerability arises from the use of a hardcoded, predictable temporary directory path `/tmp/llama_index` on Linux systems within the `get_cache_dir()` function. This directory is used for caching purposes but lacks proper security controls such as secure permissions or randomized naming. Because the directory is shared and predictable on multi-user Linux systems, attackers with local access can exploit this to perform several malicious actions. These include stealing proprietary machine learning models stored in the cache, poisoning cached embeddings to manipulate downstream AI model behavior, or conducting symlink attacks that can lead to unauthorized file access or modification.

The vulnerability is categorized under CWE-378 (Creation of Temporary File With Insecure Permissions), CWE-379 (Creation of Temporary File in Directory with Insecure Permissions), CWE-377 (Insecure Temporary File), and CWE-367 (Time-of-check Time-of-use Race Condition), indicating that the issue also involves potential race conditions and insecure file handling practices. The CVSS v3.0 base score is 7.3, reflecting high severity, with an attack vector of local access (AV:L), low attack complexity (AC:L), low privileges required (PR:L), no user interaction (UI:N), unchanged scope (S:U), high impact on confidentiality and integrity, and low impact on availability.

No known exploits are currently reported in the wild, but the vulnerability poses a significant risk in shared Linux environments where multiple users have access to the same system and can potentially interfere with each other's cached data or models. The lack of a patch link suggests that a fix may not yet be publicly available, emphasizing the need for immediate mitigation steps.
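The core issue, a fixed, world-visible path under `/tmp`, can be contrasted with a hardened approach. The sketch below is illustrative only: `get_user_cache_dir` mirrors the role of the package's `get_cache_dir()` but is not its actual implementation. It derives a per-user directory from `XDG_CACHE_HOME`, enforces owner-only permissions, and rejects a directory pre-created by another user.

```python
import os
import stat

def get_user_cache_dir(app_name: str = "llama_index") -> str:
    """Return a per-user cache directory with owner-only permissions.

    Hypothetical hardened replacement for the vulnerable pattern:
    instead of the shared, predictable /tmp/llama_index, use the
    per-user XDG cache location and refuse unsafe ownership.
    """
    base = os.environ.get(
        "XDG_CACHE_HOME",
        os.path.join(os.path.expanduser("~"), ".cache"),
    )
    cache_dir = os.path.join(base, app_name)
    # mode=0o700: readable, writable, and searchable by the owner only
    os.makedirs(cache_dir, mode=0o700, exist_ok=True)
    st = os.stat(cache_dir)
    # makedirs does not re-apply the mode if the directory already
    # existed, so verify and tighten it explicitly
    if stat.S_IMODE(st.st_mode) != 0o700:
        os.chmod(cache_dir, 0o700)
    # reject a directory owned by someone else (e.g. pre-planted
    # by another local user to capture cached models)
    if st.st_uid != os.getuid():
        raise RuntimeError(f"{cache_dir} is not owned by the current user")
    return cache_dir
```

Placing the cache under the user's home directory removes the cross-user attack surface entirely, since other local users cannot traverse a `0o700` directory.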
Potential Impact
For European organizations, especially those deploying AI/ML workloads on shared Linux infrastructure, this vulnerability can lead to serious confidentiality and integrity breaches. Proprietary AI models and embeddings, which often represent valuable intellectual property and competitive advantage, could be stolen or tampered with, resulting in loss of trade secrets or corrupted AI outputs. Poisoned embeddings can degrade the quality and reliability of AI-driven services, potentially causing erroneous decisions or outputs in critical applications such as finance, healthcare, or autonomous systems. Symlink attacks could further escalate to unauthorized access or modification of sensitive files beyond the cache directory. Organizations operating multi-user Linux environments, such as research institutions, cloud service providers, or enterprises with shared compute clusters, are particularly at risk. The impact extends to compliance concerns under GDPR if personal data processed by AI models is exposed or manipulated. Additionally, the disruption of AI services due to corrupted caches could affect business continuity and reputation.
Mitigation Recommendations
To mitigate this vulnerability, European organizations should immediately audit their use of the run-llama/llama_index package and identify any Linux deployments where multiple users share the same system. Until an official patch is released, organizations should implement the following specific measures:
1. Configure the cache directory to use a unique, non-predictable path per user or process, avoiding shared global temporary directories.
2. Enforce strict filesystem permissions on the cache directory and files, restricting access to the owning user or process only.
3. Employ secure temporary file creation functions that atomically create files with safe permissions to prevent race conditions and symlink attacks.
4. Consider running AI workloads in isolated containers or virtual machines to segregate user environments and reduce the risk of cross-user attacks.
5. Monitor filesystem access logs for suspicious activity around the `/tmp/llama_index` directory.
6. Engage with the vendor or open-source maintainers to track patch releases and apply updates promptly once available.
7. Review and harden overall Linux system security policies, including user privilege management and access controls, to minimize local attack vectors.
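Recommendation 3 (atomic, symlink-resistant file creation) can be sketched in Python. This is a minimal illustration, not vendor-provided code; `write_cache_entry_atomically` is a hypothetical helper showing the write-to-temp-then-rename pattern built on `tempfile.mkstemp`.

```python
import os
import tempfile

def write_cache_entry_atomically(cache_dir: str, name: str, data: bytes) -> str:
    """Write a cache file without following symlinks or racing.

    Hypothetical helper: mkstemp creates the file with mode 0o600
    and open(2) O_EXCL semantics, so an attacker cannot pre-plant a
    symlink at the path being written. The temporary file is then
    atomically renamed into place (same filesystem, so rename(2)
    is atomic on POSIX systems).
    """
    final_path = os.path.join(cache_dir, name)
    fd, tmp_path = tempfile.mkstemp(dir=cache_dir)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    os.rename(tmp_path, final_path)
    return final_path
```

Because the temporary file is created in the destination directory with exclusive-create semantics, a time-of-check/time-of-use window (CWE-367) between "check path" and "write path" never opens.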
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Italy, Spain
Technical Details
- Data Version: 5.1
- Assigner Short Name: @huntr_ai
- Date Reserved: 2025-07-14T16:44:34.096Z
- CVSS Version: 3.0
- State: PUBLISHED
Threat ID: 68d813d8c38eb2a1b8713fdc
Added to database: 9/27/2025, 4:42:00 PM
Last enriched: 9/27/2025, 4:42:29 PM
Last updated: 9/28/2025, 12:09:51 AM
Views: 8
Related Threats
- CVE-2025-11089: SQL Injection in kidaze CourseSelectionSystem (Medium)
- CVE-2025-11049: Improper Authorization in Portabilis i-Educar (Medium)
- CVE-2025-3193: Prototype Pollution in algoliasearch-helper (Medium)
- CVE-2025-10954: Improper Validation of Syntactic Correctness of Input in github.com/nyaruka/phonenumbers (Medium)
- CVE-2025-11051: Cross-Site Request Forgery in SourceCodester Pet Grooming Management Software (Medium)