Findings

Do-Not-Answer vulnerability

Updated: June 19, 2025

Description

Severity: Medium

The AI model can be manipulated into responding to prompts that it is typically trained to avoid.

This vulnerability could allow attackers to bypass established safety and ethical constraints, resulting in the AI providing sensitive, harmful, or unethical content when it should refrain from doing so.

Example Attack

If an attacker successfully prompts the model to respond to otherwise restricted queries, the model could provide harmful, illegal, or unethical information, such as personal data, offensive language, or instructions for malicious activities. This could lead to compliance issues, reputational damage, and the potential for harm if such outputs are shared or misused.

Remediation

Investigate and enhance the safeguards in place during model training and fine-tuning. Focus on refining prompt filtering and reinforcement learning techniques to ensure the model appropriately recognizes and adheres to "Do-Not-Answer" restrictions.

Security Frameworks

A Prompt Injection Vulnerability occurs when user prompts alter the LLM's behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model.

Improper Output Handling refers specifically to insufficient validation, sanitization, and handling of the outputs generated by large language models before they are passed downstream to other components and systems. Since LLM-generated content can be controlled by prompt input, this behavior is similar to providing users indirect access to additional functionality.

Adversaries may abuse their access to a victim system and use its resources or capabilities to further their goals by causing harms external to that system. These harms could affect the organization (e.g. Financial Harm, Reputational Harm), its users (e.g. User Harm), or the general public (e.g. Societal Harm).

Reputational harm involves a degradation of public perception and trust in organizations. Examples of reputation-harming incidents include scandals or false impersonations.

Societal harms might generate harmful outcomes that reach either the general public or specific vulnerable groups such as the exposure of children to vulgar content.

User harms may encompass a variety of harm types including financial and reputational that are directed at or felt by individual victims of the attack rather than at the organization level.

An adversary may craft malicious prompts as inputs to an LLM that cause the LLM to act in unintended ways. These prompt injections are often designed to cause the model to ignore aspects of its original instructions and follow the adversary's instructions instead.

An adversary may inject prompts directly as a user of the LLM. This type of injection may be used by the adversary to gain a foothold in the system or to misuse the LLM itself, as for example to generate harmful content.

An adversary may inject prompts indirectly via separate data channel ingested by the LLM such as include text or multimedia pulled from databases or websites. These malicious prompts may be hidden or obfuscated from the user. This type of injection may be used by the adversary to gain a foothold in the system or to target an unwitting user of the system.

An adversary may use a carefully crafted LLM Prompt Injection designed to place LLM in a state in which it will freely respond to any user input, bypassing any controls, restrictions, or guardrails placed on the LLM. Once successfully jailbroken, the LLM can be used in unintended ways by the adversary.

Previous (Findings - Action based findings)
Directive Overloading
Next (Findings - Action based findings)
Field Duplication