Findings
TAP jailbreak vulnerability
Updated: June 19, 2025
Description
The AI model is vulnerable to jailbreak attacks via Tree of Attacks with Pruning (TAP).
This technique uses a structured attack approach that manipulates the model's decision-making process by pruning out certain responses and guiding the model toward unsafe outputs. By exploiting this vulnerability, attackers can circumvent content filters and safeguards, causing the model to generate prohibited, harmful, or biased responses.
Example Attack
exploited, this vulnerability could lead to a breach of the AI's ethical guidelines and result in the generation of harmful or malicious content. This could lead to the exposure of sensitive data, violations of security protocols, or the model producing offensive or harmful outputs, which could damage the AI's reputation and cause harm to users or organizations relying on it for safe interaction.
Remediation
Investigate and improve the effectiveness of guardrails and output security mechanisms, specifically to detect and prevent manipulation through the Tree of Attacks with Pruning technique. Strengthen the model's decision-making processes to avoid being steered into generating harmful content. Implement additional layers of validation and security checks to identify and block these types of attack strategies before they can influence the model's outputs.
Security Frameworks
A Prompt Injection Vulnerability occurs when user prompts alter the LLM's behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model.
An adversary may craft malicious prompts as inputs to an LLM that cause the LLM to act in unintended ways. These prompt injections are often designed to cause the model to ignore aspects of its original instructions and follow the adversary's instructions instead.
An adversary may inject prompts directly as a user of the LLM. This type of injection may be used by the adversary to gain a foothold in the system or to misuse the LLM itself, as for example to generate harmful content.
An adversary may inject prompts indirectly via separate data channel ingested by the LLM such as include text or multimedia pulled from databases or websites. These malicious prompts may be hidden or obfuscated from the user. This type of injection may be used by the adversary to gain a foothold in the system or to target an unwitting user of the system.
An adversary may use a carefully crafted LLM Prompt Injection designed to place LLM in a state in which it will freely respond to any user input, bypassing any controls, restrictions, or guardrails placed on the LLM. Once successfully jailbroken, the LLM can be used in unintended ways by the adversary.