
CVE-2025-61784: CWE-22: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') in hiyouga LLaMA-Factory

Severity: High
Tags: CVE-2025-61784, CWE-22, CWE-918
Published: Tue Oct 07 2025 (10/07/2025, 19:01:40 UTC)
Source: CVE Database V5
Vendor/Project: hiyouga
Product: LLaMA-Factory

Description

LLaMA-Factory is a tuning library for large language models. Prior to version 0.9.4, a Server-Side Request Forgery (SSRF) vulnerability in the chat API allows any authenticated user to force the server to make arbitrary HTTP requests to internal and external networks. This can lead to the exposure of sensitive internal services, reconnaissance of the internal network, or interaction with third-party services. The same mechanism also allows for a Local File Inclusion (LFI) vulnerability, enabling users to read arbitrary files from the server's filesystem. The vulnerability exists in the `_process_request` function within `src/llamafactory/api/chat.py`. This function is responsible for processing incoming multimodal content, including images, videos, and audio provided via URLs. The function checks whether the provided URL is a base64 data URI or a local file path (`os.path.isfile`). If neither is true, it falls back to treating the URL as a web URI and makes a direct HTTP GET request using `requests.get(url, stream=True).raw` without any validation or sanitization of the URL. Version 0.9.4 fixes the underlying issue.

AI-Powered Analysis

Last updated: 10/07/2025, 19:30:23 UTC

Technical Analysis

CVE-2025-61784 affects hiyouga's LLaMA-Factory, a tuning library for large language models, specifically versions prior to 0.9.4. The vulnerability is rooted in the _process_request function within src/llamafactory/api/chat.py, which processes multimodal content URLs (images, videos, audio). The function attempts to identify if the input is a base64 data URI or a local file path using os.path.isfile. If neither condition is met, it treats the input as a web URI and performs an HTTP GET request using requests.get(url, stream=True).raw without validating or sanitizing the URL. This lack of validation enables an authenticated user to perform SSRF attacks, forcing the server to make arbitrary HTTP requests to internal or external networks, potentially exposing sensitive internal services or enabling reconnaissance. Additionally, the same mechanism allows Local File Inclusion (LFI), where attackers can read arbitrary files from the server filesystem, risking exposure of sensitive data or configuration files. The vulnerability is classified under CWE-22 (Path Traversal) and CWE-918 (Server-Side Request Forgery). The CVSS v3.1 score is 7.6 (AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L), indicating network attack vector, low attack complexity, requiring privileges but no user interaction, with high confidentiality impact and low integrity and availability impacts. No known exploits are reported in the wild as of the publication date (October 7, 2025). The issue is resolved in LLaMA-Factory version 0.9.4.

Potential Impact

For European organizations, this vulnerability poses significant risks, especially for those deploying LLaMA-Factory in production environments for AI model tuning. The SSRF capability can be leveraged to access internal services that are otherwise protected by network segmentation or firewalls, potentially exposing sensitive internal APIs, databases, or management interfaces. This can lead to data leakage, unauthorized access, or pivoting within the network. The LFI aspect allows attackers to read arbitrary files on the server, risking exposure of credentials, configuration files, or proprietary data. Given the increasing adoption of AI tooling in sectors such as finance, healthcare, and government across Europe, exploitation could lead to breaches of personal data protected under GDPR, causing regulatory and reputational damage. The requirement for authentication limits exposure but does not eliminate risk, especially if credentials are compromised or insider threats exist. The vulnerability also impacts service availability and integrity to a lesser extent, as attackers could disrupt operations or manipulate data by leveraging internal access gained through SSRF. Overall, the threat could facilitate advanced persistent threats and lateral movement within affected networks.

Mitigation Recommendations

European organizations should immediately upgrade all instances of LLaMA-Factory to version 0.9.4 or later, where the vulnerability is patched. Until upgrades are complete, restrict access to the chat API to trusted users only and enforce strong authentication and authorization controls. Implement network-level segmentation and firewall rules to limit the server's ability to make outbound HTTP requests to internal services, reducing SSRF impact. Employ application-layer filtering or web application firewalls (WAFs) to detect and block suspicious URL patterns or unexpected requests. Conduct thorough audits of server file permissions to minimize sensitive file exposure in case of LFI exploitation. Monitor logs for unusual access patterns or unexpected file reads. Additionally, consider deploying runtime application self-protection (RASP) solutions to detect and block exploitation attempts in real time. Educate developers and DevOps teams on secure coding practices, especially validating and sanitizing all user-supplied inputs that influence network requests or file access.
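As an interim compensating control alongside the recommendations above, a pre-fetch URL check can reject non-HTTP schemes and hosts that resolve to internal addresses. This is an illustrative sketch (the function name and policy are ours, not taken from the 0.9.4 patch); note that DNS-rebinding-resistant deployments should also pin the resolved address at fetch time:

```python
import ipaddress
import socket
from urllib.parse import urlparse


def is_safe_url(url: str) -> bool:
    """Reject URLs that are not http(s) or whose host resolves to a
    private, loopback, link-local, or reserved address."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```

Such a check would block common SSRF targets like `http://127.0.0.1:8080/` or the cloud metadata endpoint `http://169.254.169.254/`, and rejects `file://` URIs outright.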


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2025-09-30T19:43:49.902Z
Cvss Version
3.1
State
PUBLISHED

Threat ID: 68e566d0a677756fc99d8dd1

Added to database: 10/7/2025, 7:15:28 PM

Last enriched: 10/7/2025, 7:30:23 PM

Last updated: 10/7/2025, 8:20:28 PM


