
AI Infrastructure Supply Chain Poisoning Alert

Severity: Medium
Published: Fri Mar 27 2026 (03/27/2026, 18:59:53 UTC)
Source: AlienVault OTX General

Description

A supply chain poisoning attack on LiteLLM, a popular AI model gateway, was detected by NSFOCUS Technology CERT. The TeamPCP group compromised the Trivy security scanning tool used in LiteLLM's release process, allowing them to publish malicious versions 1.82.7 and 1.82.8 on PyPI. These versions contained credential-stealing programs that collected sensitive data and, if a Kubernetes cluster was detected, deployed privileged Pods and implanted persistent backdoors. The attack impacted numerous dependent packages and potentially affected millions of users. The incident highlights the growing risks in AI infrastructure and the need for robust supply chain security measures.

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 03/27/2026, 20:01:27 UTC

Technical Analysis

The AI Infrastructure Supply Chain Poisoning Alert involves a sophisticated attack on LiteLLM, a popular AI model gateway, detected by NSFOCUS Technology CERT. The adversary group TeamPCP compromised the Trivy security scanning tool, which is integral to LiteLLM's release pipeline. By injecting malicious code into Trivy, they published tainted versions 1.82.7 and 1.82.8 on the Python Package Index (PyPI).

These versions contained credential-stealing malware designed to exfiltrate sensitive information from infected systems. The malware also included logic to detect Kubernetes clusters; upon detection, it deployed privileged Pods, granting elevated access within the cluster, and implanted persistent backdoors to maintain long-term control. This multi-stage attack leverages supply chain trust to infiltrate AI infrastructure, impacting numerous dependent packages and potentially millions of users worldwide.

The attack techniques align with known tactics such as credential theft (T1552), persistence (T1505.003), and execution through command and scripting interpreters (T1059). Although no active exploits have been confirmed in the wild, the incident highlights vulnerabilities in open-source software supply chains, especially in AI ecosystems where dependencies are complex and widespread. The attack's sophistication and potential scale make it a significant concern for organizations relying on AI model gateways and Kubernetes environments.
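A first triage step for the tainted releases named above is to check whether an affected version is installed locally. A minimal sketch, assuming the compromised PyPI package is named `litellm` and using only the standard-library `importlib.metadata`:

```python
# Sketch: flag an installed LiteLLM whose version matches the tainted
# releases named in this alert (1.82.7 and 1.82.8). The package name
# "litellm" is an assumption; adjust for your environment.
from importlib import metadata

MALICIOUS_VERSIONS = {"1.82.7", "1.82.8"}  # releases published by TeamPCP per the alert


def is_tainted_version(version: str) -> bool:
    """Return True if the given version string is a known-bad release."""
    return version in MALICIOUS_VERSIONS


def is_tainted(package: str = "litellm") -> bool:
    """Return True if the locally installed package is a known-bad release."""
    try:
        return is_tainted_version(metadata.version(package))
    except metadata.PackageNotFoundError:
        return False  # package not installed in this environment
```

This only inspects the current interpreter's environment; container images and other virtualenvs need to be checked separately.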

Potential Impact

This supply chain poisoning attack poses a substantial risk to organizations globally, particularly those utilizing LiteLLM or its dependent packages in their AI infrastructure. Credential theft can lead to unauthorized access to sensitive systems and data, increasing the risk of data breaches and intellectual property theft. The deployment of privileged Pods within Kubernetes clusters can result in full cluster compromise, enabling attackers to move laterally, escalate privileges, and maintain persistent access.

Given the widespread use of PyPI packages and the popularity of Kubernetes in cloud-native environments, the attack could affect millions of users and organizations, disrupting AI services and potentially causing operational downtime. The persistent backdoors implanted by the malware complicate detection and remediation efforts, increasing the likelihood of prolonged exposure. This incident also erodes trust in open-source supply chains, potentially impacting software development and deployment practices across industries, with cascading effects on AI-driven applications, critical infrastructure, and enterprises relying on AI model gateways.
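The privileged-Pod deployment pattern described above can be hunted for by inspecting Pod manifests, without needing live cluster access. A minimal sketch, assuming Pod objects parsed from `kubectl get pods -o json` output into Python dicts; field names follow the Kubernetes Pod spec:

```python
# Sketch: flag Kubernetes Pods whose containers request privileged mode,
# the deployment pattern described in this alert. Operates on a parsed
# Pod manifest (dict), e.g. one item from `kubectl get pods -o json`.

def privileged_containers(pod: dict) -> list[str]:
    """Return names of containers in the Pod that run privileged."""
    flagged = []
    spec = pod.get("spec", {})
    for c in spec.get("containers", []) + spec.get("initContainers", []):
        sc = c.get("securityContext") or {}
        if sc.get("privileged"):
            flagged.append(c.get("name", "<unnamed>"))
    return flagged
```

A real hunt would also look at `hostPID`, `hostNetwork`, and host-path volume mounts, which grant similar levels of access.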

Mitigation Recommendations

Organizations should immediately audit their environments for the malicious LiteLLM versions 1.82.7 and 1.82.8 and any dependent packages that might have been compromised, and replace them with verified clean releases. Recommended measures include:

- Implement strict supply chain security controls, including cryptographic verification of software packages and dependencies before deployment.
- Employ runtime security controls and monitoring within Kubernetes clusters to detect anomalous privileged Pod deployments and unusual network activity indicative of backdoors.
- Use tooling that can detect credential theft attempts and unusual authentication patterns.
- Enforce least-privilege principles for Kubernetes service accounts and restrict Pod privileges to minimize the attack surface.
- Regularly update and patch all components of the AI infrastructure, and maintain an inventory of third-party dependencies.
- Establish incident response plans that specifically address supply chain attacks, and conduct threat hunting exercises focused on persistence mechanisms.
- Collaborate with open-source communities and security vendors to share intelligence and receive timely alerts about compromised packages.
- Adopt software bill of materials (SBOM) practices to improve visibility into the software supply chain.
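The dependency audit can start with a quick scan of pinned requirements. A minimal sketch, assuming a `requirements.txt`-style list and matching only direct `name==version` pins; transitive dependencies would need a lock file or SBOM to cover:

```python
# Sketch: scan a requirements.txt-style dependency list for pins to the
# compromised LiteLLM releases named in this alert. Matching is naive
# ("name==version" only); real audits should also cover transitive
# dependencies via a lock file or SBOM.
import re

COMPROMISED = {("litellm", "1.82.7"), ("litellm", "1.82.8")}


def find_compromised(requirements_text: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs pinned to a known-bad release."""
    hits = []
    for line in requirements_text.splitlines():
        m = re.match(r"\s*([A-Za-z0-9_.-]+)\s*==\s*([A-Za-z0-9_.]+)", line)
        if m and (m.group(1).lower(), m.group(2)) in COMPROMISED:
            hits.append((m.group(1), m.group(2)))
    return hits
```

Running this across all repositories and container build files gives a first inventory of directly affected services.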


Technical Details

Author
AlienVault
TLP
white
References
https://securityboulevard.com/2026/03/ai-infrastructure-litellm-supply-chain-poisoning-alert/
Adversary
TeamPCP
Pulse Id
69c6d3a930c99b3993018f22
Threat Score
null

Indicators of Compromise

Hash

97e073abd819d9cdc07705aeaa481f59 (MD5)
78cd382040eda14e2f8a17ee7387cffdabe96ab5 (SHA-1)
71e35aef03099cd1f2d6446734273025a163597de93912df321ef118bf135238 (SHA-256)
8395c3268d5c5dbae1c7c6d4bb3c318c752ba4608cfcd90eb97ffb94a910eac2 (SHA-256)
a0d229be8efcb2f9135e2ad55ba275b76ddcfeb55fa4370e0a522a5bdee0120b (SHA-256)
d2a0d5f564628773b6af7b9c11f6b86531a875bd2d186d7081ab62748a800ebb (SHA-256)
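The IoC hashes above mix MD5 (32 hex digits), SHA-1 (40), and SHA-256 (64) digests, so a file-matching sweep has to hash each candidate with all three algorithms. A minimal standard-library sketch:

```python
# Sketch: check files against the IoC hashes listed in this alert.
# Each file is hashed with MD5, SHA-1, and SHA-256, since the IoC list
# mixes all three digest types.
import hashlib
from pathlib import Path

IOC_HASHES = {
    "97e073abd819d9cdc07705aeaa481f59",
    "78cd382040eda14e2f8a17ee7387cffdabe96ab5",
    "71e35aef03099cd1f2d6446734273025a163597de93912df321ef118bf135238",
    "8395c3268d5c5dbae1c7c6d4bb3c318c752ba4608cfcd90eb97ffb94a910eac2",
    "a0d229be8efcb2f9135e2ad55ba275b76ddcfeb55fa4370e0a522a5bdee0120b",
    "d2a0d5f564628773b6af7b9c11f6b86531a875bd2d186d7081ab62748a800ebb",
}


def matches_ioc(path: str) -> bool:
    """Return True if any digest of the file appears in the IoC set."""
    data = Path(path).read_bytes()
    return any(
        hashlib.new(algo, data).hexdigest() in IOC_HASHES
        for algo in ("md5", "sha1", "sha256")
    )
```

This reads each file fully into memory; for large trees, hash in chunks and walk directories with `Path.rglob`.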

Threat ID: 69c6de343c064ed76fea1bee

Added to database: 3/27/2026, 7:44:52 PM

Last enriched: 3/27/2026, 8:01:27 PM

Last updated: 3/27/2026, 11:11:43 PM

Views: 28


