Findings
Prompt injection vulnerability
Updated: June 19, 2025
Description
The model is vulnerable to attacks from the PromptInject framework.
This allows adversaries to manipulate its behavior by injecting crafted prompts. This type of attack exploits the model's inability to distinguish between user instructions and embedded adversarial inputs, leading to unauthorized actions, data leakage, or policy bypasses.
Example Attack
Prompt injection can be used to override system instructions, extract sensitive information, or generate harmful content. Attackers may disguise malicious commands within seemingly benign inputs, tricking the model into executing unintended actions. This can lead to compliance violations, reputational damage, and security breaches, particularly if the AI system interacts with confidential or regulated data
Remediation
Investigate and improve the effectiveness of guardrails and other output security mechanisms.
Security Frameworks
A Prompt Injection Vulnerability occurs when user prompts alter the LLM's behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model.
An adversary may craft malicious prompts as inputs to an LLM that cause the LLM to act in unintended ways. These prompt injections are often designed to cause the model to ignore aspects of its original instructions and follow the adversary's instructions instead.
An adversary may inject prompts directly as a user of the LLM. This type of injection may be used by the adversary to gain a foothold in the system or to misuse the LLM itself, as for example to generate harmful content.
An adversary may inject prompts indirectly via separate data channel ingested by the LLM such as include text or multimedia pulled from databases or websites. These malicious prompts may be hidden or obfuscated from the user. This type of injection may be used by the adversary to gain a foothold in the system or to target an unwitting user of the system.