Findings
Snowball vulnerability
Updated: June 19, 2025
Description
The AI model can be induced to give incorrect or misleading answers when presented with overly complicated or convoluted queries.
This vulnerability occurs when attackers craft queries that confuse the model, causing it to generate inaccurate responses. These queries often involve layers of complexity or obfuscation that the model fails to process correctly, leading to errors in the output.
Example Attack
This could lead to the AI providing users with incorrect or unreliable information, potentially causing confusion, inefficiency, or misguided actions. In critical fields such as healthcare, law, or finance, this could have serious consequences, including financial loss, misinformed decisions, or even harm to individuals.
Remediation
Investigate and improve the effectiveness of guardrails and other output security mechanisms to help the model recognize and handle complex queries accurately. Strengthen the model's ability to identify and process convoluted or ambiguous inputs and refine its decision-making process to avoid errors in such situations.
Security Frameworks
Misinformation from LLMs poses a core vulnerability for applications relying on these models. Misinformation occurs when LLMs produce false or misleading information that appears credible. This vulnerability can lead to security breaches, reputational damage, and legal liability. One of the major causes of misinformation is hallucination: when the LLM generates content that seems accurate but is fabricated.
Adversaries may abuse their access to a victim system and use its resources or capabilities to further their goals by causing harms external to that system. These harms could affect the organization (e.g. Financial Harm, Reputational Harm), its users (e.g. User Harm), or the general public (e.g. Societal Harm).
Reputational harm involves a degradation of public perception and trust in organizations. Examples of reputation-harming incidents include scandals or false impersonations.
Societal harms might generate harmful outcomes that reach either the general public or specific vulnerable groups such as the exposure of children to vulgar content.
User harms may encompass a variety of harm types including financial and reputational that are directed at or felt by individual victims of the attack rather than at the organization level.
Adversaries may prompt large language models and identify hallucinated entities. They may request software packages, commands, URLs, organization names, or e-mail addresses, and identify hallucinations with no connected real-world source. Discovered hallucinations provide the adversary with potential targets to Publish Hallucinated Entities. Different LLMs have been shown to produce the same hallucinations, so the hallucinations exploited by an adversary may affect users of other LLMs.