Findings

Adversarial suffix vulnerability

Updated: June 19, 2025

Description

Severity: Medium

The AI model is vulnerable to jailbreak attacks through the appending of adversarial suffixes to queries.

These suffixes are designed to bypass the model's guardrails, leading to the generation of unintended or harmful responses. Attackers can exploit this vulnerability by appending specific phrases or keywords to queries that trick the model into providing unsafe, unethical, or otherwise prohibited outputs.

Example Attack

This vulnerability could allow attackers to bypass content filters and safeguards, leading to the generation of harmful or toxic content. This could result in security breaches, reputational damage, or the propagation of false or dangerous information. Additionally, the ability to manipulate the model's behavior in this way could undermine trust in the AI system and lead to its misuse in malicious activities.

Remediation

Investigate and improve the effectiveness of guardrails and other output security mechanisms to prevent the model from being manipulated through adversarial suffixes. Strengthen the model's ability to recognize and reject queries that include such malicious suffixes, and ensure that the model maintains a high level of scrutiny for input manipulation.

Security Frameworks

A Prompt Injection Vulnerability occurs when user prompts alter the LLM's behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model.

An adversary may craft malicious prompts as inputs to an LLM that cause the LLM to act in unintended ways. These prompt injections are often designed to cause the model to ignore aspects of its original instructions and follow the adversary's instructions instead.

An adversary may inject prompts directly as a user of the LLM. This type of injection may be used by the adversary to gain a foothold in the system or to misuse the LLM itself, as for example to generate harmful content.

An adversary may inject prompts indirectly via separate data channel ingested by the LLM such as include text or multimedia pulled from databases or websites. These malicious prompts may be hidden or obfuscated from the user. This type of injection may be used by the adversary to gain a foothold in the system or to target an unwitting user of the system.

An adversary may use a carefully crafted LLM Prompt Injection designed to place LLM in a state in which it will freely respond to any user input, bypassing any controls, restrictions, or guardrails placed on the LLM. Once successfully jailbroken, the LLM can be used in unintended ways by the adversary.

Previous (Findings - Action based findings)
Accepted negative data
Next (Findings - Action based findings)
Alias Overloading