Findings
Glitch token vulnerability
Updated: June 19, 2025
Description
The AI model exhibits anomalous behavior when processing glitch tokens, i.e. tokens that exist in the model's vocabulary but were rarely or never seen during training. Such tokens can disrupt the model's processing, leading to unexpected outputs, improper responses, system failures, or potential security vulnerabilities, and an adversary may deliberately inject them into the input to provoke erratic behavior from the model.
Remediation
Investigate and strengthen guardrails and other output-security mechanisms so that they detect and block glitch tokens. Implement input validation and sanitization to prevent anomalous tokens from reaching the model and affecting its behavior.
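The sketch below illustrates one possible input-sanitization pass. It assumes a curated denylist of glitch-token strings; the entries shown are examples only (" SolidGoldMagikarp" is a widely reported glitch token for older GPT tokenizers), and the helper names are placeholders rather than part of any specific library. Such a filter complements, and does not replace, model-side guardrails.

# Minimal sketch of an input-sanitization pass for known glitch tokens,
# assuming a curated denylist of glitch-token strings. Populate the denylist
# for the specific model and tokenizer in use.

from typing import Iterable, Set

# Hypothetical denylist; these entries are examples only.
GLITCH_TOKENS: Set[str] = {" SolidGoldMagikarp", "\x00"}

def contains_glitch_token(text: str, denylist: Iterable[str] = GLITCH_TOKENS) -> bool:
    """Flag inputs containing a denylisted glitch token so they can be blocked or logged."""
    return any(token in text for token in denylist)

def sanitize_input(text: str, denylist: Iterable[str] = GLITCH_TOKENS) -> str:
    """Strip denylisted glitch tokens from the input before it reaches the model."""
    for token in denylist:
        text = text.replace(token, "")
    return text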
Security Frameworks
LLM supply chains are susceptible to various vulnerabilities that can affect the integrity of training data, models, and deployment platforms. These risks can result in biased outputs, security breaches, or system failures. Whereas traditional software vulnerabilities center on code flaws and dependencies, in ML the risks also extend to third-party pre-trained models and data.
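As one illustration of a supply-chain control for third-party models, the sketch below verifies a downloaded model artifact against a pinned SHA-256 digest before it is loaded. The file path and expected digest are placeholder values, not references to any real artifact.

# Minimal sketch: verify a downloaded model artifact against a pinned
# SHA-256 digest before loading it. Path and digest values are placeholders.

import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, streaming it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_artifact(path: Path, expected_digest: str) -> None:
    """Refuse to load a third-party model whose digest does not match the pin."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise ValueError(f"Model artifact digest mismatch: {actual} != {expected_digest}")

# Example usage (placeholder values):
# verify_model_artifact(Path("models/pretrained.bin"), "e3b0c44298fc1c14...")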
An adversary may craft malicious prompts as inputs to an LLM that cause the LLM to act in unintended ways. These prompt injections are often designed to cause the model to ignore aspects of its original instructions and follow the adversary's instructions instead.
An adversary may inject prompts directly as a user of the LLM. This type of injection may be used by the adversary to gain a foothold in the system or to misuse the LLM itself, for example to generate harmful content.
An adversary may inject prompts indirectly via a separate data channel ingested by the LLM, such as text or multimedia pulled from databases or websites. These malicious prompts may be hidden or obfuscated from the user. This type of injection may be used by the adversary to gain a foothold in the system or to target an unwitting user of the system.
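The sketch below illustrates a simple heuristic screen covering both injection paths: it rejects user input that matches common instruction-override phrasing and drops retrieved documents that match the same patterns before they enter the model's context. The pattern list and helper names are illustrative placeholders; in practice such heuristics are typically paired with classifier-based guardrails.

# Illustrative sketch only: a heuristic screen applied to direct user input
# and to retrieved content (e.g. text pulled from a website) before either
# is added to the model's context. Not a complete defense.

import re
from typing import Iterable, List

# Hypothetical patterns for common instruction-override phrasing.
OVERRIDE_PATTERNS = [
    r"ignore (all |your )?previous instructions",
    r"disregard (the )?system prompt",
    r"you are now",
]

def flags_injection(text: str) -> bool:
    """Return True if the text matches any known instruction-override pattern."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in OVERRIDE_PATTERNS)

def screen_context(user_input: str, retrieved_docs: Iterable[str]) -> List[str]:
    """Reject direct injections in user input and drop retrieved documents
    that look like indirect injections; return the documents that remain."""
    if flags_injection(user_input):
        raise ValueError("User input matches a prompt-injection pattern")
    return [doc for doc in retrieved_docs if not flags_injection(doc)]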