Phrasing Vulnerability
Updated: April 3, 2026
Description
The AI model can be manipulated into ignoring safety measures by phrasing harmful queries in the past or future tense.
By presenting a query as if it refers to something that has already happened or will happen in the future, an attacker may bypass the model's built-in safeguards that are designed to prevent harmful or unethical content generation in the present context.
Example Attack
If exploited, this vulnerability could allow attackers to generate dangerous or harmful content by framing queries in a way that the model perceives as safe or theoretical. This could lead to the creation of unsafe responses, such as instructions for illegal activities, harmful advice, or unethical behavior, even though the model's guardrails are in place to prevent such actions in real-time.
Remediation
Investigate and improve the effectiveness of guardrails and other output security mechanisms to prevent harmful content from being generated, regardless of how the query is phrased.
Security Frameworks
A Prompt Injection Vulnerability occurs when user prompts alter the LLM's behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model.
Adversaries may abuse their access to a victim system and use its resources or capabilities to further their goals by causing harms external to that system. These harms could affect the organization (e.g. Financial Harm, Reputational Harm), its users (e.g. User Harm), or the general public (e.g. Societal Harm).
Reputational harm involves a degradation of public perception and trust in organizations. Examples of reputation-harming incidents include scandals or false impersonations.
Societal harms might generate harmful outcomes that reach either the general public or specific vulnerable groups such as the exposure of children to vulgar content.
User harms may encompass a variety of harm types including financial and reputational that are directed at or felt by individual victims of the attack rather than at the organization level.
An adversary may craft malicious prompts as inputs to an LLM that cause the LLM to act in unintended ways. These prompt injections are often designed to cause the model to ignore aspects of its original instructions and follow the adversary's instructions instead.
An adversary may inject prompts directly as a user of the LLM. This type of injection may be used by the adversary to gain a foothold in the system or to misuse the LLM itself, as for example to generate harmful content.
An adversary may inject prompts indirectly via separate data channel ingested by the LLM such as include text or multimedia pulled from databases or websites. These malicious prompts may be hidden or obfuscated from the user. This type of injection may be used by the adversary to gain a foothold in the system or to target an unwitting user of the system.
An adversary may use a carefully crafted LLM Prompt Injection designed to place LLM in a state in which it will freely respond to any user input, bypassing any controls, restrictions, or guardrails placed on the LLM. Once successfully jailbroken, the LLM can be used in unintended ways by the adversary.
AI system is evaluated regularly for safety risks - as identified in the MAP function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and can fail safely, particularly if made to operate beyond its knowledge limits. Safety metrics implicate system reliability and robustness, real-time monitoring, and response times for AI system failures.
AI system security and resilience - as identified in the MAP function - are evaluated and documented.
Mechanisms are in place and applied, responsibilities are assigned and understood to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use.
The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.
The organization shall define and document the necessary elements for the ongoing operation of the AI system. At the minimum, this should include system and performance monitoring, repairs, updates and support.
The organization shall ensure that the AI system is used according to the intended uses of the AI system and its accompanying documentation.
Attackers can manipulate an agent's objectives, task selection, or decision pathways through prompt-based manipulation, deceptive tool outputs, malicious artefacts, forged agent-to-agent messages, or poisoned external data.