CVE-2025-13708: CWE-502: Deserialization of Untrusted Data in Tencent NeuralNLP-NeuralClassifier
Tencent NeuralNLP-NeuralClassifier _load_checkpoint Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Tencent NeuralNLP-NeuralClassifier. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the _load_checkpoint function. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of root. Was ZDI-CAN-27184.
AI Analysis
Technical Summary
CVE-2025-13708 identifies a high-severity vulnerability in Tencent's NeuralNLP-NeuralClassifier, specifically within the _load_checkpoint function responsible for loading serialized model checkpoints. The flaw arises from deserialization of untrusted data without proper validation, classified under CWE-502. It enables attackers to execute arbitrary code on affected systems by crafting malicious serialized data that, when deserialized, triggers attacker-controlled payloads. Exploitation requires user interaction, such as opening a malicious checkpoint file or visiting a malicious webpage that delivers one. The vulnerability is particularly severe because the resulting code execution occurs with root privileges, potentially allowing full system compromise. The CVSS v3.0 score of 7.8 (High) reflects a local attack vector with user interaction required, low attack complexity, no privileges required, and high impact on confidentiality, integrity, and availability. No patches are currently available and no exploitation in the wild is known; the vulnerability was disclosed through Trend Micro's Zero Day Initiative (ZDI-CAN-27184). Given the increasing adoption of AI and NLP tools in enterprise environments, this vulnerability could be leveraged to infiltrate critical systems, steal sensitive data, or disrupt operations.
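PyTorch-based projects such as NeuralClassifier typically load checkpoints with torch.load, which uses Python's pickle format under the hood; the exact loading call in _load_checkpoint is an assumption here, but the underlying CWE-502 pattern can be sketched with the standard pickle module alone. Unpickling invokes an object's __reduce__ result, so merely loading a crafted file runs attacker-chosen code:

```python
import pickle

# CWE-502 in miniature: during deserialization, pickle calls the
# callable returned by __reduce__, so loading a crafted blob executes code.
class Payload:
    def __reduce__(self):
        # Harmless callable for demonstration; a real attacker would
        # substitute os.system, subprocess.call, etc.
        return (print, ("code executed during unpickling",))

blob = pickle.dumps(Payload())  # what a malicious "checkpoint" contains

# The victim side: loading the data is enough to trigger execution.
pickle.loads(blob)  # prints "code executed during unpickling"
```

This is why a checkpoint file must be treated as executable content, not passive data: no bug in the consuming application is needed beyond the act of deserializing it.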
Potential Impact
For European organizations, the impact of this vulnerability can be substantial, especially for those integrating Tencent NeuralNLP-NeuralClassifier into their AI workflows or products. Successful exploitation could lead to full system compromise with root-level access, enabling attackers to steal confidential data, manipulate AI models, disrupt services, or establish persistent backdoors. This risk is amplified in sectors relying heavily on AI for decision-making, such as finance, healthcare, and critical infrastructure. The requirement for user interaction limits mass exploitation, but targeted spear-phishing or supply-chain attacks (for example, poisoned pre-trained checkpoints shared through model repositories) could still be effective. Additionally, compromised AI models could lead to integrity issues, undermining trust in automated processes. The lack of current patches means organizations must rely on mitigations to reduce exposure. The reputational damage and regulatory consequences under GDPR for data breaches caused by such an exploit could also be significant.
Mitigation Recommendations
1. Immediately audit and restrict the sources from which serialized checkpoint data can be loaded, ensuring only trusted and verified files are accepted.
2. Implement strict input validation and sandboxing around the deserialization process to prevent execution of malicious payloads.
3. Employ application whitelisting and privilege separation to limit the impact of potential code execution.
4. Educate users about the risks of opening untrusted files or visiting suspicious websites to reduce the likelihood of user interaction-based exploitation.
5. Monitor system logs and network traffic for unusual activity related to checkpoint loading or unexpected process spawning.
6. Engage with Tencent for updates and patches, and plan for timely deployment once available.
7. Consider isolating AI model processing environments from critical infrastructure to contain potential breaches.
8. Use endpoint detection and response (EDR) tools to detect and respond to exploitation attempts quickly.
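Recommendations 1 and 2 can be combined in code: pin the SHA-256 digest of each approved checkpoint and verify it before loading, and ask PyTorch to refuse arbitrary object deserialization. A minimal sketch under stated assumptions: the function and digest names are placeholders, and weights_only=True is the restricted-unpickler option available in PyTorch 1.13 and later (it limits loading to tensors and primitive types):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large checkpoints do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def load_trusted_checkpoint(path: str, expected_sha256: str):
    """Refuse to deserialize a checkpoint whose digest is not on the allowlist."""
    p = Path(path)
    digest = sha256_of(p)
    if digest != expected_sha256:
        raise ValueError(f"checkpoint digest mismatch: {digest}")
    # torch imported only after the integrity check passes;
    # weights_only=True (PyTorch >= 1.13) restricts unpickling to
    # tensors and primitives instead of arbitrary Python objects.
    import torch
    return torch.load(p, map_location="cpu", weights_only=True)
```

Note that weights_only=True is a mitigation, not a substitute for source control: it blocks the generic pickle code-execution primitive but older PyTorch versions silently ignore unknown keyword arguments' absence, so the integrity check should be the primary gate.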
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Italy, Spain
Technical Details
- Data Version: 5.2
- Assigner Short Name: zdi
- Date Reserved: 2025-11-25T21:52:38.662Z
- CVSS Version: 3.0
- State: PUBLISHED
Threat ID: 694b0d93d69af40f312d3866
Added to database: 12/23/2025, 9:45:55 PM
Last enriched: 12/30/2025, 11:55:21 PM
Last updated: 2/7/2026, 11:08:31 AM
Related Threats
- CVE-2026-2083: SQL Injection in code-projects Social Networking Site (Medium)
- CVE-2026-2082: OS Command Injection in D-Link DIR-823X (Medium)
- CVE-2026-2080: Command Injection in UTT HiPER 810 (High)
- CVE-2026-2079: Improper Authorization in yeqifu warehouse (Medium)
- CVE-2026-1675: CWE-1188 Initialization of a Resource with an Insecure Default in brstefanovic Advanced Country Blocker (Medium)