
CVE-2025-10965: Deserialization in LazyAGI LazyLLM

Severity: Medium
Tags: Vulnerability, CVE-2025-10965
Published: Thu Sep 25 2025 (09/25/2025, 20:02:07 UTC)
Source: CVE Database V5
Vendor/Project: LazyAGI
Product: LazyLLM

Description

A security vulnerability has been detected in LazyAGI LazyLLM up to version 0.6.1. Affected by this issue is the function lazyllm_call of the file lazyllm/components/deploy/relay/server.py. Manipulation of the input leads to unsafe deserialization. The attack can be launched remotely. The exploit has been disclosed publicly and may be used.

AI-Powered Analysis

Last updated: 10/03/2025, 00:40:39 UTC

Technical Analysis

CVE-2025-10965 is a security vulnerability in the LazyAGI project's LazyLLM product, affecting versions 0.6.0 and 0.6.1. The flaw resides in the function lazyllm_call within the file lazyllm/components/deploy/relay/server.py and involves unsafe deserialization, a common security weakness in which untrusted data is deserialized without proper validation or sanitization. This can allow an attacker to craft serialized input that executes arbitrary code or causes unexpected behavior on the remote host. The vulnerability is exploitable over the network without user interaction, making it a remote code execution risk vector. The CVSS 4.0 base score is 5.3, indicating medium severity. The vector details show a network-based attack (AV:N) with low attack complexity (AC:L), low privileges required (PR:L), no user interaction (UI:N), and low impact on confidentiality, integrity, and availability (VC:L, VI:L, VA:L); exploit maturity is rated proof-of-concept (E:P). Although no exploitation in the wild is currently known, the vulnerability has been publicly disclosed, which increases the risk of exploitation, and the absence of patch or mitigation links suggests that fixes may not yet be available or widely distributed. Environments running the affected versions in production, or exposing LazyLLM to untrusted networks, should address this issue promptly, as crafted serialized payloads could lead to unauthorized code execution or system compromise.
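The danger of deserializing untrusted input can be illustrated with a minimal Python sketch. This is a generic demonstration of how pickle-style deserialization executes attacker-chosen callables; it is not the actual lazyllm_call implementation, which is not reproduced here, and the Malicious class is purely illustrative:

```python
import pickle

# Illustrative only: pickle lets an object dictate, via __reduce__, a
# callable that the *loader* will invoke during deserialization.
class Malicious:
    def __reduce__(self):
        # On unpickling, pickle calls print(...) with this argument.
        # A real attacker would substitute os.system or similar.
        return (print, ("arbitrary code ran during deserialization",))

payload = pickle.dumps(Malicious())

# A service that blindly deserializes network input runs the attacker's
# callable before any application logic ever sees the resulting object.
obj = pickle.loads(payload)  # triggers the embedded call
```

The key point is that the code runs as a side effect of loading, so validating the resulting object afterward is already too late.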

Potential Impact

For European organizations, the impact of this vulnerability can be significant, particularly for those leveraging LazyLLM in AI or automation workflows. Exploitation could lead to unauthorized access, data leakage, or disruption of AI services, affecting confidentiality, integrity, and availability of critical systems. Organizations in sectors such as finance, healthcare, manufacturing, and government that rely on AI-driven decision-making or automation could face operational disruptions or data breaches. The remote exploitability without user interaction increases the risk of automated attacks or worm-like propagation within networks. Additionally, the medium severity score indicates that while the vulnerability is not the most critical, it still poses a tangible threat that could be leveraged as a foothold for further lateral movement or privilege escalation within enterprise environments. Given the increasing adoption of AI frameworks in Europe, this vulnerability could undermine trust in AI deployments and lead to regulatory scrutiny under GDPR if personal data is compromised.

Mitigation Recommendations

To mitigate this vulnerability, European organizations should:
1) Immediately identify and inventory all instances of LazyLLM versions 0.6.0 and 0.6.1 in their environments.
2) Apply vendor patches or updates as soon as they become available; if no official patch exists, consider disabling or isolating affected components until a fix is released.
3) Implement network segmentation and firewall rules to restrict access to LazyLLM services, limiting exposure to untrusted networks.
4) Employ input validation and sanitization on all serialized data inputs to prevent malicious payloads from being processed.
5) Monitor network traffic and logs for unusual deserialization activity or anomalies indicative of exploitation attempts.
6) Use application-layer security controls such as Web Application Firewalls (WAFs) with custom rules to detect and block suspicious serialized payloads.
7) Conduct security assessments and penetration testing focusing on deserialization vulnerabilities in AI frameworks.
8) Educate development and operations teams about secure coding practices around serialization and deserialization to prevent similar issues in future releases.
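Where pickle cannot be avoided entirely, the input-validation recommendation can be approximated with an allow-list unpickler. The sketch below is generic Python, not LazyLLM-specific; the ALLOWED entries are placeholders that a real deployment would tailor to the object graphs it legitimately exchanges:

```python
import io
import os
import pickle

# Allow-list of (module, name) pairs that may be resolved during
# deserialization. Placeholder entries; tailor to your actual payloads.
ALLOWED = {
    ("builtins", "dict"),
    ("builtins", "list"),
    ("builtins", "str"),
}

class RestrictedUnpickler(pickle.Unpickler):
    """Unpickler that refuses to resolve any global not on the allow-list."""
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked deserialization of {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Benign payloads built from plain containers still round-trip:
ok = safe_loads(pickle.dumps({"a": [1, 2]}))

# A payload smuggling in a dangerous callable is rejected before the
# callable is ever invoked:
class Malicious:
    def __reduce__(self):
        return (os.system, ("echo pwned",))

try:
    safe_loads(pickle.dumps(Malicious()))
    blocked = ""
except pickle.UnpicklingError as exc:
    blocked = str(exc)
```

This pattern follows the "restricting globals" guidance in the Python pickle documentation; it reduces but does not eliminate risk, so avoiding pickle for untrusted input (e.g. using JSON) remains the stronger fix.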


Technical Details

Data Version
5.1
Assigner Short Name
VulDB
Date Reserved
2025-09-25T10:11:23.733Z
Cvss Version
4.0
State
PUBLISHED

Threat ID: 68d5da069e21be37e937d04c

Added to database: 9/26/2025, 12:10:46 AM

Last enriched: 10/3/2025, 12:40:39 AM

Last updated: 11/6/2025, 1:00:21 PM

Views: 29


