Findings

Prompt encoding vulnerability

Updated: June 19, 2025

Description

Severity: High

The AI model can be manipulated to parse encoded prompts.

This could lead to the circumvention of safeguards such as input and output validation, as well as content filtering mechanisms. By encoding malicious prompts, attackers may bypass restrictions designed to prevent harmful or sensitive content generation.

Example Attack

If the model fails to correctly decode or validate inputs, attackers could use encoded prompts to bypass safety measures, resulting in the AI generating harmful, illegal, or unethical outputs. This can lead to data leakage, the generation of offensive content, or even the facilitation of cyberattacks, posing significant security and compliance risks.

Remediation

Investigate and improve the effectiveness of input validation and sanitization mechanisms. Ensure that all inputs, including encoded prompts, are properly decoded and validated before processing. Strengthen the guardrails and content filtering systems to detect and block suspicious patterns in both encoded and decoded inputs.

Security Frameworks

A Prompt Injection Vulnerability occurs when user prompts alter the LLM's behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model.

Adversaries can Craft Adversarial Data that prevent a machine learning model from correctly identifying the contents of the data. This technique can be used to evade a downstream task where machine learning is utilized. The adversary may evade machine learning based virus/malware detection, or network scanning towards the goal of a traditional cyber attack.

An adversary may craft malicious prompts as inputs to an LLM that cause the LLM to act in unintended ways. These prompt injections are often designed to cause the model to ignore aspects of its original instructions and follow the adversary's instructions instead.

An adversary may inject prompts directly as a user of the LLM. This type of injection may be used by the adversary to gain a foothold in the system or to misuse the LLM itself, as for example to generate harmful content.

An adversary may inject prompts indirectly via separate data channel ingested by the LLM such as include text or multimedia pulled from databases or websites. These malicious prompts may be hidden or obfuscated from the user. This type of injection may be used by the adversary to gain a foothold in the system or to target an unwitting user of the system.

Previous (Findings - Action based findings)
POST based url-encoded query (possible CSRF)
Next (Findings - Action based findings)
Prompt injection vulnerability