Malicious attack method on hosted ML models now targets PyPI
A new malicious campaign has been discovered targeting the Python Package Index (PyPI) by exploiting the Pickle file format in machine learning models. Three malicious packages posing as an Alibaba AI Labs SDK were detected, containing infostealer payloads hidden inside PyTorch models. The packages exfiltrate information about infected machines and .gitconfig file contents. This attack demonstrates the evolving threat landscape in AI and machine learning, particularly in the software supply chain. The campaign likely targeted developers in China and highlights the need for improved security measures and tools to detect malicious functionality in ML models.
AI Analysis
Technical Summary
This threat involves a novel malicious campaign targeting the Python Package Index (PyPI) ecosystem by exploiting the Pickle serialization format within machine learning (ML) models, specifically PyTorch models. Attackers uploaded three malicious packages masquerading as an Alibaba AI Labs SDK, embedding infostealer payloads inside the PyTorch model files. Pickle can serialize and deserialize arbitrary Python objects, but it is inherently unsafe to load from untrusted sources because deserialization can execute arbitrary code. By embedding malicious code within these ML models, the attackers exploit both the trust developers place in PyPI packages and the opaque, binary nature of ML model files to evade detection.

Once installed, the packages exfiltrate sensitive information from infected machines, including details about the system environment and the contents of the .gitconfig file, which may hold credentials or configuration details useful for further compromise or lateral movement. The attack represents an evolution in supply chain attacks, combining AI/ML components with traditional malware techniques to target developers and organizations relying on open-source ML tools.

Although the campaign appears to have primarily targeted developers in China, PyPI and PyTorch are used globally, raising concerns about broader exposure. The attack chain exploits the lack of robust security controls around ML model files and the scarcity of scanning tools capable of detecting malicious payloads embedded in serialized ML artifacts. No exploitation beyond this campaign has been reported in the wild, and no patches or mitigations have been formally released, underscoring the need for heightened vigilance and improved security practices in ML supply chains.
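The underlying weakness is easy to demonstrate. The minimal sketch below (the class name and command are illustrative, not artifacts recovered from this campaign) shows how Pickle's `__reduce__` hook causes an attacker-chosen callable to run at load time:

```python
import os
import pickle

# Pickle's __reduce__ hook lets an object specify a callable plus
# arguments that the deserializer will invoke, so merely *loading*
# untrusted pickle data executes attacker-chosen code.
class MaliciousPayload:
    def __reduce__(self):
        # Harmless stand-in command; a real infostealer would instead
        # collect system details and files such as ~/.gitconfig.
        return (os.system, ("echo code executed during unpickling",))

blob = pickle.dumps(MaliciousPayload())

# The victim never instantiates the class or calls anything explicitly;
# deserialization alone triggers os.system.
pickle.loads(blob)
```

Because `torch.load` uses Pickle internally, the same mechanism applies to PyTorch model files, which is what makes them an effective carrier for this kind of payload.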
Potential Impact
For European organizations, this threat poses significant risks, particularly for those engaged in AI and ML development or deployment using Python and PyTorch. The exfiltration of system information and .gitconfig files can lead to credential theft, unauthorized access, and lateral movement within corporate networks. Compromise of developer machines could allow the insertion of malicious code into software supply chains, undermining the integrity of ML models and applications deployed in production. This could affect sectors heavily reliant on AI, such as finance, healthcare, automotive, and critical infrastructure, potentially leading to data breaches, intellectual property theft, and operational disruption.

The stealthy nature of the attack, which leverages trusted package repositories and ML model files, complicates detection and response. The supply chain aspect also means that organizations not directly targeted could be impacted if they consume compromised packages. The medium severity rating reflects the current scope and sophistication, but the potential for escalation remains, especially if attackers adapt the technique for wider distribution or target high-value European entities.
Mitigation Recommendations
European organizations should implement several targeted measures beyond generic advice:
1. Enforce strict vetting and validation of all PyPI packages, especially those containing ML models or serialized data, including verifying package provenance and digital signatures and conducting manual code reviews where feasible.
2. Integrate specialized scanning tools capable of analyzing Pickle files and ML model artifacts for embedded malicious code or unusual behaviors (see the triage sketch after this list).
3. Restrict Pickle deserialization to trusted sources only, and replace Pickle with safer serialization formats (e.g., JSON, protobuf, or safetensors for model weights) where possible.
4. Monitor network traffic for unusual exfiltration patterns, particularly outbound connections from developer workstations or build environments.
5. Deploy endpoint detection and response (EDR) solutions with behavioral analytics tuned to detect infostealer activity and anomalous access to configuration files such as .gitconfig.
6. Educate developers and DevOps teams about the risks of supply chain attacks involving ML models, and encourage the use of isolated or sandboxed environments for testing new packages.
7. Maintain up-to-date inventories of third-party dependencies and continuously monitor for emerging threats related to these components.
8. Collaborate with PyPI maintainers and the broader open-source community to report and remediate malicious packages promptly.
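As a concrete illustration of recommendation 2, the sketch below uses Python's standard pickletools module to walk a pickle opcode stream and flag imports of modules commonly abused for code execution. It is a rough triage aid under stated assumptions, not a production scanner; the module denylist and function name are illustrative:

```python
import pickletools

# Modules whose import inside a pickle stream warrants review.
# This denylist is an illustrative assumption and needs tuning.
SUSPICIOUS_MODULES = {
    "os", "posix", "nt", "subprocess", "builtins",
    "socket", "urllib.request",
}

def triage_pickle(data: bytes) -> list[str]:
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        # GLOBAL/INST carry "module name" as a space-separated string.
        if opcode.name in ("GLOBAL", "INST") and isinstance(arg, str):
            module = arg.split(" ", 1)[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"offset {pos}: imports {arg!r}")
        # STACK_GLOBAL resolves the module/name dynamically from the
        # stack, so it cannot be vetted statically; flag it for review.
        elif opcode.name == "STACK_GLOBAL":
            findings.append(f"offset {pos}: dynamic import (STACK_GLOBAL)")
    return findings
```

Note that modern PyTorch checkpoints are zip archives whose pickle stream sits in an internal data.pkl entry, so a scanner must unzip first. Where models must still be loaded, recent PyTorch releases support `torch.load(..., weights_only=True)` to restrict what unpickling may do, and the safetensors format sidesteps Pickle entirely.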
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Italy, Spain
Indicators of Compromise
- hash (MD5): 2b9c4c002dcce5cd6e890b19a81b3d04
- hash (MD5): 848f4d6f0b74d848be3dd9d17bcc6013
- hash (SHA-1): 017416afba124b5d0dab19887bc611f9b5b53a27
- hash (SHA-1): 05dbc49da7796051450d1fa529235f2606ec048a
- hash (SHA-1): 0e0469a70d2dbcfe8f33386cf45db6de81adf5e7
- hash (SHA-1): 17eaddfd96bc0d6a8e3337690dc983d2067feca7
- hash (SHA-1): 183199821f1cb841b3fc9e6d41b168fd8781c489
- hash (SHA-1): 1fedfba761c5dab65e99a30b23caf77af23f07bc
- hash (SHA-1): 2bb1bc02697b97b552fbe3036a2c8237d9dd055e
- hash (SHA-1): 32debab99f8908eff0da2d48337b13f58d7c7e61
- hash (SHA-1): 4bd9b016af8578fbd22559c9776a8380bbdbc076
- hash (SHA-1): 6dc828ca381fd2c6f5d4400d1cb52447465e49dd
- hash (SHA-1): 7d3636cecd970bb448fc59b3a948710e4f7fae7d
- hash (SHA-1): 81080f2e44609d0764aa35abc7e1c5c270725446
- hash (SHA-1): 8aaba017e3a28465b7176e3922f4af69b342ca80
- hash (SHA-1): a975e2783e2d4c84f5488f52642eaffc4fb1b4cd
- hash (SHA-1): a9aec9766f57aaf8fd7261690046e905158b5337
- hash (SHA-1): e1d8dbc75835198c95d1cf227e66d7bc17e42888
- hash (SHA-256): 1f83b32270c72146c0e39b1fc23d0d8d62f7a8d83265dfa1e709ebf681bac9ce
- hash (SHA-256): 22fd17b184cd6f05a2fbe3ed7b27fa42f66b7a2eaf2b272a77467b08f96b6031
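To make these indicators actionable, a short sweep along the following lines can check a directory tree, such as a pip cache or internal artifact mirror, for files matching any of the listed digests. The starting path is illustrative, and the hash set below is truncated for brevity; populate it with the full list above:

```python
import hashlib
from pathlib import Path

# Truncated for brevity; fill in with every hash from the IoC list.
IOC_HASHES = {
    "2b9c4c002dcce5cd6e890b19a81b3d04",                                  # MD5
    "017416afba124b5d0dab19887bc611f9b5b53a27",                          # SHA-1
    "1f83b32270c72146c0e39b1fc23d0d8d62f7a8d83265dfa1e709ebf681bac9ce",  # SHA-256
}

def sweep(root: Path) -> None:
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        data = path.read_bytes()
        # The IoC list mixes MD5, SHA-1, and SHA-256, so compute all three.
        digests = {
            hashlib.md5(data).hexdigest(),
            hashlib.sha1(data).hexdigest(),
            hashlib.sha256(data).hexdigest(),
        }
        if digests & IOC_HASHES:
            print(f"IoC match: {path}")

if __name__ == "__main__":
    # Illustrative starting point; adjust to local package caches.
    sweep(Path.home() / ".cache" / "pip")
```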
Technical Details
- Author: AlienVault
- TLP: white
- References: https://securityboulevard.com/2025/05/malicious-attack-method-on-hosted-ml-models-now-targets-pypi/
- Adversary: (none listed)
- Pulse ID: 68343195f3f6c6e7a2fde462
Threat ID: 683432cd0acd01a249284968
Added to database: 5/26/2025, 9:22:21 AM
Last enriched: 6/25/2025, 9:46:01 AM
Last updated: 8/20/2025, 10:40:25 PM
Related Threats
- ThreatFox IOCs for 2025-08-21 (Medium)
- APT36 Malware Campaign Using Desktop Entry Files and Google Drive Payload Delivery (Medium)
- Think before you Click(Fix): Analyzing the ClickFix social engineering technique (Medium)
- New Variant of ACRStealer Actively Distributed with Modifications (Medium)
- MuddyWater Leveraging DCHSpy For Israel-Iran Conflict (Medium)