Findings

Block list bypass vulnerability

Updated: June 19, 2025

Description

Severity: Medium

The AI model can be made to discuss topics that are adjacent to blocked words, effectively bypassing content restrictions.

This vulnerability occurs when users craft queries that manipulate the model into generating content about sensitive or prohibited subjects without directly mentioning blocked terms. By exploiting the model's contextual understanding, attackers can guide it toward generating discussions on restricted topics, even if the words themselves are not explicitly included in the input.

Example Attack

If exploited, this vulnerability could lead to the model inadvertently generating harmful, sensitive, or otherwise prohibited content. Attackers could use this bypass method to extract information, propagate harmful ideas, or provoke the model into producing offensive, biased, or illegal content, potentially causing reputational damage, legal consequences, or breaches of trust.

Remediation

Investigate and improve the effectiveness of guardrails and other output security mechanisms to prevent the model from discussing restricted topics, even when adjacent words or phrases are used. Enhance the model's understanding of context and refine its ability to recognize when a topic is related to a restricted subject, even if not directly mentioning blocked terms.

Security Frameworks

A Prompt Injection Vulnerability occurs when user prompts alter the LLM's behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model.

Adversaries may abuse their access to a victim system and use its resources or capabilities to further their goals by causing harms external to that system. These harms could affect the organization (e.g. Financial Harm, Reputational Harm), its users (e.g. User Harm), or the general public (e.g. Societal Harm).

Reputational harm involves a degradation of public perception and trust in organizations. Examples of reputation-harming incidents include scandals or false impersonations.

Societal harms might generate harmful outcomes that reach either the general public or specific vulnerable groups such as the exposure of children to vulgar content.

User harms may encompass a variety of harm types including financial and reputational that are directed at or felt by individual victims of the attack rather than at the organization level.

An adversary may craft malicious prompts as inputs to an LLM that cause the LLM to act in unintended ways. These prompt injections are often designed to cause the model to ignore aspects of its original instructions and follow the adversary's instructions instead.

An adversary may inject prompts directly as a user of the LLM. This type of injection may be used by the adversary to gain a foothold in the system or to misuse the LLM itself, as for example to generate harmful content.

An adversary may inject prompts indirectly via separate data channel ingested by the LLM such as include text or multimedia pulled from databases or websites. These malicious prompts may be hidden or obfuscated from the user. This type of injection may be used by the adversary to gain a foothold in the system or to target an unwitting user of the system.

Previous (Findings - Action based findings)
Attack generation vulnerability
Next (Findings - Action based findings)
Continuation vulnerability