
CVE-2025-67729: CWE-502: Deserialization of Untrusted Data in InternLM lmdeploy

Severity: High
Published: Fri Dec 26 2025 (12/26/2025, 21:54:10 UTC)
Source: CVE Database V5
Vendor/Project: InternLM
Product: lmdeploy

Description

CVE-2025-67729 is a high-severity deserialization vulnerability in InternLM's lmdeploy toolkit in versions prior to 0.11.1. The flaw arises because lmdeploy calls torch.load() without the weights_only=True parameter when loading model checkpoint files, allowing maliciously crafted .bin or .pt files to execute arbitrary code on the host system. Exploitation requires user interaction to load a malicious model file but does not require prior authentication. This vulnerability impacts confidentiality, integrity, and availability due to potential remote code execution. Although no known exploits are currently reported in the wild, the high CVSS score (8.8) warrants prompt remediation.
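The snippet below is a minimal sketch of the unsafe and patched call patterns described above; the checkpoint path is illustrative, not lmdeploy's actual code.

```python
import torch

# Create a benign checkpoint so the snippet is self-contained
# (the path is illustrative).
torch.save({"weight": torch.zeros(2, 2)}, "model.bin")

# Vulnerable pattern (lmdeploy < 0.11.1): without weights_only=True,
# torch.load() runs the full pickle unpickler, so a crafted checkpoint
# can execute arbitrary code during loading.
state_dict = torch.load("model.bin")

# Patched pattern (lmdeploy >= 0.11.1): weights_only=True restricts
# deserialization to tensors and primitive containers, rejecting
# arbitrary Python objects.
state_dict = torch.load("model.bin", weights_only=True)
```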

AI-Powered Analysis

Last updated: 12/26/2025, 22:24:44 UTC

Technical Analysis

The vulnerability identified as CVE-2025-67729 affects InternLM's lmdeploy toolkit, a software suite designed to compress, deploy, and serve large language models (LLMs). The core issue is an insecure deserialization flaw (CWE-502) stemming from the use of the PyTorch function torch.load() without the weights_only=True parameter when loading model checkpoint files (.bin or .pt). By default, torch.load() can deserialize arbitrary Python objects, which, if untrusted, can lead to arbitrary code execution. By omitting weights_only=True, lmdeploy inadvertently allows execution of malicious payloads embedded within model files. An attacker can craft a malicious model checkpoint that, when loaded by a victim using a vulnerable lmdeploy version (<0.11.1), triggers execution of arbitrary code with the privileges of the user running lmdeploy. This can lead to full system compromise, data theft, or disruption of services. The vulnerability requires user interaction to load the malicious model file but does not require prior authentication, making it exploitable in scenarios where users download or receive untrusted model files. The vulnerability has a CVSS 3.1 base score of 8.8, reflecting its high impact on confidentiality, integrity, and availability, combined with low attack complexity and no privileges required. The issue was publicly disclosed on December 26, 2025, and has been patched in lmdeploy version 0.11.1. No known exploits have been reported in the wild yet, but the risk remains significant given the widespread adoption of LLM deployment tools and the critical nature of AI infrastructure.
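To make the mechanism concrete, the harmless, self-contained snippet below uses the standard pickle module (which torch.load() builds on) to show how an object's __reduce__ method makes the unpickler execute code at deserialization time; a malicious checkpoint embeds the same construct. The Payload class and echo command are illustrative only.

```python
import pickle


class Payload:
    """Illustrative only: __reduce__ tells the unpickler to call
    os.system(...) while reconstructing the object."""

    def __reduce__(self):
        import os
        return (os.system, ("echo code ran during deserialization",))


blob = pickle.dumps(Payload())
pickle.loads(blob)  # the echo runs here, during deserialization
```

With weights_only=True, PyTorch's restricted unpickler rejects this kind of object instead of reconstructing it, which is why the 0.11.1 patch closes the hole.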

Potential Impact

For European organizations, the impact of this vulnerability is substantial. Organizations leveraging lmdeploy for AI model deployment, including research institutions, AI startups, and enterprises integrating LLMs into their products or services, face risks of remote code execution leading to data breaches, intellectual property theft, and operational disruption. Compromise could allow attackers to manipulate AI models, exfiltrate sensitive data, or pivot to other internal systems. Given the increasing reliance on AI technologies across sectors such as finance, healthcare, manufacturing, and government, exploitation could have cascading effects on service availability and trust. Additionally, regulatory frameworks like GDPR impose strict data protection requirements, and a breach resulting from this vulnerability could lead to significant legal and financial penalties. The need for user interaction limits automated exploitation but does not eliminate risk, especially in environments where model files are frequently shared or downloaded from external sources.

Mitigation Recommendations

To mitigate this vulnerability, European organizations should:

1) Immediately upgrade lmdeploy to version 0.11.1 or later, where the vulnerability is patched by enforcing weights_only=True during model loading.
2) Implement strict validation and integrity checks on all model files before loading, including cryptographic signatures or hashes from trusted sources (see the verification sketch after this list).
3) Restrict model file loading to isolated or sandboxed environments to limit potential damage from malicious payloads.
4) Educate users and developers about the risks of loading untrusted model files and enforce policies that prohibit using models from unverified sources.
5) Monitor systems running lmdeploy for unusual behavior indicative of exploitation attempts, such as unexpected process spawning or network activity.
6) Employ network segmentation and least-privilege principles to reduce the attack surface and limit lateral movement if a compromise occurs.
7) Maintain up-to-date backups of critical AI models and system configurations to enable recovery in case of compromise.
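A minimal sketch of recommendation 2 under stated assumptions: the helper name load_checkpoint_verified and the expected_sha256 argument are hypothetical, illustrating SHA-256 verification of a checkpoint against a digest obtained from a trusted source before loading.

```python
import hashlib

import torch


def load_checkpoint_verified(path: str, expected_sha256: str):
    """Hypothetical helper: refuse to deserialize a checkpoint whose
    SHA-256 digest does not match one published by a trusted source."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash the file in 1 MiB chunks to avoid reading it all into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise ValueError(f"checkpoint digest mismatch for {path}")
    # Defense in depth: keep the restricted unpickler even after the
    # integrity check passes.
    return torch.load(path, weights_only=True)
```

Note that an integrity check alone does not make an untrusted file safe; it only confirms the file is the one a trusted party published, which is why the sketch still passes weights_only=True.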


Technical Details

Data Version: 5.2
Assigner Short Name: GitHub_M
Date Reserved: 2025-12-10T20:04:28.290Z
CVSS Version: 3.1
State: PUBLISHED

Threat ID: 694f079233784cecd499b7f7

Added to database: 12/26/2025, 10:09:22 PM

Last enriched: 12/26/2025, 10:24:44 PM

Last updated: 12/27/2025, 1:07:19 AM

