Latent Injection Vulnerability
Updated: April 28, 2026
Description
The AI model is vulnerable to prompt injections that are hidden within other contexts, also known as latent injections
Attackers can subtly manipulate the model's outputs by embedding malicious instructions or prompts within seemingly benign content. This could cause the model to generate unsafe, biased, or harmful responses without detecting the hidden nature of the injection.
Example Attack
If latent injections are successfully exploited, attackers could manipulate the model's behavior in ways that bypass its standard security filters. This could lead to the generation of harmful content, unauthorized actions, or unethical outputs, all of which could harm users, cause reputational damage, or violate compliance requirements.
Remediation
Investigate and improve the effectiveness of guardrails and output security mechanisms that can detect and block prompt injections hidden in text.
Security Frameworks
A Prompt Injection Vulnerability occurs when user prompts alter the LLM's behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model.
Adversaries can Craft Adversarial Data that prevent a machine learning model from correctly identifying the contents of the data. This technique can be used to evade a downstream task where machine learning is utilized. The adversary may evade machine learning based virus/malware detection, or network scanning towards the goal of a traditional cyber attack.
An adversary may craft malicious prompts as inputs to an LLM that cause the LLM to act in unintended ways. These prompt injections are often designed to cause the model to ignore aspects of its original instructions and follow the adversary's instructions instead.
An adversary may inject prompts directly as a user of the LLM. This type of injection may be used by the adversary to gain a foothold in the system or to misuse the LLM itself, as for example to generate harmful content.
An adversary may inject prompts indirectly via separate data channel ingested by the LLM such as include text or multimedia pulled from databases or websites. These malicious prompts may be hidden or obfuscated from the user. This type of injection may be used by the adversary to gain a foothold in the system or to target an unwitting user of the system.
AI system is evaluated regularly for safety risks - as identified in the MAP function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and can fail safely, particularly if made to operate beyond its knowledge limits. Safety metrics implicate system reliability and robustness, real-time monitoring, and response times for AI system failures.
AI system security and resilience - as identified in the MAP function - are evaluated and documented.
Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.
The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.
The organization shall define and document the necessary elements for the ongoing operation of the AI system. At the minimum, this should include system and performance monitoring, repairs, updates and support.
The organization shall ensure that the AI system is used according to the intended uses of the AI system and its accompanying documentation.
Attackers can manipulate an agent's objectives, task selection, or decision pathways through prompt-based manipulation, deceptive tool outputs, malicious artefacts, forged agent-to-agent messages, or poisoned external data.
Adversaries corrupt or seed agent context with malicious or misleading data, causing future reasoning, planning, or tool use to become biased, unsafe, or aid exfiltration.