Skip to main content
Press slash or control plus K to focus the search. Use the arrow keys to navigate results and press enter to open a threat.
Reconnecting to live updates…

CVE-2025-64320: CWE-1427 Improper Neutralization of Input Used for LLM Prompting in Salesforce Agentforce Vibes Extension

0
Medium
VulnerabilityCVE-2025-64320cvecve-2025-64320cwe-1427
Published: Tue Nov 04 2025 (11/04/2025, 18:27:32 UTC)
Source: CVE Database V5
Vendor/Project: Salesforce
Product: Agentforce Vibes Extension

Description

Improper Neutralization of Input Used for LLM Prompting vulnerability in Salesforce Agentforce Vibes Extension allows Code Injection.This issue affects Agentforce Vibes Extension: before 3.2.0.

AI-Powered Analysis

AILast updated: 11/11/2025, 20:12:37 UTC

Technical Analysis

CVE-2025-64320 is a vulnerability classified under CWE-1427, indicating improper neutralization of input used for LLM prompting within the Salesforce Agentforce Vibes Extension. This extension, used to enhance Salesforce agent capabilities through integration with large language models, improperly sanitizes or validates input that is fed into LLM prompts. This flaw allows an attacker to inject malicious code or commands into the prompt, which the LLM may then execute or use in generating responses, effectively enabling code injection attacks. The vulnerability affects all versions before 3.2.0 of the Agentforce Vibes Extension. Exploitation requires no authentication or user interaction and can be performed remotely, increasing the attack surface. The CVSS v3.1 base score is 6.5, reflecting medium severity with network attack vector, low attack complexity, and no privileges required. The impact primarily affects confidentiality and integrity, as attackers could manipulate LLM outputs or extract sensitive information. Availability is not impacted. No public exploits or active exploitation have been reported yet. The vulnerability was reserved and published in late 2025, indicating recent discovery. The lack of patch links suggests a patch may be forthcoming or in development. This vulnerability highlights the emerging risks associated with integrating AI/LLM technologies into enterprise software without robust input validation.

Potential Impact

For European organizations, this vulnerability poses a risk of unauthorized code injection via the Salesforce Agentforce Vibes Extension, potentially leading to leakage or manipulation of sensitive customer or operational data handled within Salesforce environments. Confidentiality breaches could expose personal data subject to GDPR regulations, resulting in legal and financial repercussions. Integrity impacts could undermine trust in automated agent responses or workflows relying on LLM outputs, affecting customer service quality and decision-making. Since the vulnerability does not require authentication or user interaction, attackers can exploit it remotely, increasing the likelihood of attacks especially against externally facing Salesforce instances. Although availability is not affected, the reputational damage and compliance risks are significant. Organizations heavily reliant on Salesforce for customer relationship management and service automation are particularly vulnerable. The absence of known exploits provides a window for proactive mitigation before widespread exploitation occurs.

Mitigation Recommendations

1. Immediately monitor Salesforce and Agentforce Vibes Extension vendor communications for official patches or updates addressing CVE-2025-64320 and apply them promptly once available. 2. Until patches are released, implement strict input validation and sanitization on all data fed into LLM prompts within the extension, employing allowlists and escaping potentially dangerous characters or commands. 3. Restrict network access to Salesforce environments using the Agentforce Vibes Extension, limiting exposure to trusted IP ranges and enforcing strong access controls. 4. Conduct thorough security reviews and penetration testing focused on AI/LLM integration points to identify and remediate similar input handling flaws. 5. Enable detailed logging and monitoring of LLM prompt inputs and outputs to detect anomalous or suspicious activity indicative of exploitation attempts. 6. Educate development and security teams on secure coding practices related to AI/LLM prompt handling to prevent future vulnerabilities. 7. Review and update incident response plans to include scenarios involving AI/LLM-based code injection attacks. 8. Consider isolating or sandboxing LLM interactions to limit potential damage from injected code execution.

Need more detailed analysis?Get Pro

Technical Details

Data Version
5.2
Assigner Short Name
Salesforce
Date Reserved
2025-10-30T15:17:24.110Z
Cvss Version
null
State
PUBLISHED

Threat ID: 690a47346d939959c8021a92

Added to database: 11/4/2025, 6:34:28 PM

Last enriched: 11/11/2025, 8:12:37 PM

Last updated: 12/20/2025, 5:16:03 PM

Views: 39

Community Reviews

0 reviews

Crowdsource mitigation strategies, share intel context, and vote on the most helpful responses. Sign in to add your voice and help keep defenders ahead.

Sort by
Loading community insights…

Want to contribute mitigation steps or threat intel context? Sign in or create an account to join the community discussion.

Actions

PRO

Updates to AI analysis require Pro Console access. Upgrade inside Console → Billing.

Please log in to the Console to use AI analysis features.

Need enhanced features?

Contact root@offseq.com for Pro access with improved analysis and higher rate limits.

Latest Threats