A newly discovered vulnerability in Salesforce’s Agentforce platform shows how AI driven tools can open novel attack paths for criminal actors. The bug, named ForcedLeak, leveraged an indirect prompt injection exploit to siphon sensitive CRM data, highlighting the evolving dangers of integrating autonomous AI agents into enterprise workflows.
Quick Summary – TLDR:
- A CVSS 9.4 critical flaw in Salesforce Agentforce called ForcedLeak lets attackers hide malicious instructions in Web to Lead forms
- When employees later interact with those leads via the AI agent, the hidden instructions are executed, causing data to leak
- Attackers exploited a now expired but whitelisted domain to receive the exfiltrated data
- Salesforce has issued patches, revoked the expired domain, and enforced Trusted URL rules
What Happened?
Security researchers at Noma Labs discovered ForcedLeak on July 28, 2025, and disclosed it to Salesforce. The flaw affects organizations using Agentforce with the Web to Lead feature enabled. In short, attackers insert hidden instructions inside the Description field of a Web to Lead form. These instructions look benign. Later, when an internal user asks Agentforce to process that lead, the AI agent runs both the legitimate query and the hidden payload. That payload then queries the CRM for sensitive data and transmits it to a domain the attacker now controls.
ForcedLeak: AI Agent risks exposed in Salesforce AgentForce – https://t.co/JMsazzE6PD By @sasi2103
— AISecHub (@AISecHub) September 25, 2025
This research outlines how @NomaSecurity discovered ForcedLeak, a critical severity (CVSS 9.4) vulnerability chain in Salesforce Agentforce that could enable external attackers to…
The pivotal factor was that Salesforce had left a domain on its whitelist for Content Security Policy (CSP) that had expired. The attacker re registered that domain cheaply (for around $5) and used it as a trusted destination. Because the domain was still allowed in the CSP, exfiltration appeared legitimate from the system’s perspective.
The attack chain included injection via form, delayed execution by the AI agent, CRM query, and data egress over a whitelisted external channel. All of this happened without raising immediate alarms.
Why This Is Worse Than a Traditional Bug?
Most vulnerabilities are exploited almost immediately and rely on direct code flaws. However, ForcedLeak succeeds because of how AI agents operate. Agentforce is not a simple chatbot. It reasons, plans, and executes multi step tasks using internal memory, tools, and connected systems. Traditional security controls often assume AI systems only act on explicit user prompts. ForcedLeak shows that data already present in the system can carry hidden commands.
The vulnerability exploited three key lapses:
- Context validation failure: The AI could not reliably distinguish between benign user data and hidden commands.
- Overly permissive model behavior: Agentforce accepted and acted on injected instructions.
- CSP bypass via expired whitelist: The domain allowed the attacker to exfiltrate data unnoticed.
Because the exploit is delayed (the malicious payload only triggers upon later interactions), it can remain hidden until an employee interacts with the tainted lead. This makes detection more difficult.
What Salesforce Did and What Users Must Do?
Once notified, Salesforce moved quickly. The expired whitelisted domain was secured or removed. Patches were released that enforce a Trusted URL allowlist for both Agentforce and Einstein AI, preventing output to untrusted domains. Salesforce also stated that its underlying services would enforce the Trusted URL control to block malicious links or domains from being generated or used.
But organizations using Agentforce must also act now. Key steps include:
- Apply the security patch and ensure Trusted URLs enforcement is active.
- Audit existing lead data for anomalous submissions such as extremely long description fields, embedded instructions or code, image tags, or suspicious URL references.
- Implement input validation and sanitization on Web to Lead forms to detect prompt style payloads.
- Sanitize all data from external sources before including it in any AI prompt or agent context.
- Monitor model behavior and output for signs of external calls like image tags pointing to unusual domains.
Broader Implications for AI Security
ForcedLeak is not merely a bug. It signals a shift in what security means for AI driven systems. It shows that:
- AI agents expand the attack surface beyond traditional modules and runtimes to include prompt injection, memory, tool calls, and chained workflows.
- Security teams must rethink threat modeling. It is no longer enough to secure databases, APIs, and frontend code. You must secure how AI models are fed data, how they interpret it, and where they are allowed to send outputs.
- Prompt hygiene, context boundaries, memory governance, output filtering, and domain allowlisting are now core components of AI security posture.
- Even low cost attackers (for example, someone paying $5 to register a domain) can exploit seemingly minor oversights.
- As more enterprises adopt AI agents, these kinds of indirect and delayed exploits will likely become more common.
SQ Magazine Takeaway
I believe ForcedLeak is a wakeup call. We are entering an era where AI agents are not just helpful assistants but potential security liabilities. The very power that makes Agentforce useful gives attackers new ways in.
From now on, defending AI systems means thinking differently. It is not enough to patch code or lock down servers. We must protect the interfaces between data and agent logic. We must treat every piece of content that enters or exits an AI system as a potential vector of attack.
For organizations using AI agents, prevention must go deep. That means strict input sanitation, minimal trust in external domains, constant auditing, and a culture of suspicious prompt hygiene. In other words, security cannot be an afterthought. It must be built into every step of AI integration.