Findings

ANSI vulnerability

Updated: June 19, 2025

Description

Severity: High

The model can be made to output ANSI control sequences.

These sequences may interfere with downstream systems, cause unintended terminal behaviors, or introduce security risks, such as spoofing outputs or executing unintended commands in vulnerable environments.

Remediation

Sanitize AI outputs to strip or escape ANSI sequences before passing responses to downstream processes. Implement strict input validation to prevent prompt injection attacks that could exploit ANSI escape codes.

Security Frameworks

A Prompt Injection Vulnerability occurs when user prompts alter the LLM's behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model.

LLM supply chains are susceptible to various vulnerabilities, which can affect the integrity of training data, models, and deployment platforms. These risks can result in biased outputs, security breaches, or system failures. While traditional software vulnerabilities focus on issues like code flaws and dependencies, in ML the risks also extend to third-party pre-trained models and data.

Improper Output Handling refers specifically to insufficient validation, sanitization, and handling of the outputs generated by large language models before they are passed downstream to other components and systems. Since LLM-generated content can be controlled by prompt input, this behavior is similar to providing users indirect access to additional functionality.

Adversaries may abuse command and script interpreters to execute commands, scripts, or binaries. These interfaces and languages provide ways of interacting with computer systems and are a common feature across many different platforms. Most systems come with some built-in command-line interface and scripting capabilities, for example, macOS and Linux distributions include some flavor of Unix Shell while Windows installations include the Windows Command Shell and PowerShell.

An adversary may craft malicious prompts as inputs to an LLM that cause the LLM to act in unintended ways. These prompt injections are often designed to cause the model to ignore aspects of its original instructions and follow the adversary's instructions instead.

An adversary may inject prompts directly as a user of the LLM. This type of injection may be used by the adversary to gain a foothold in the system or to misuse the LLM itself, as for example to generate harmful content.

An adversary may inject prompts indirectly via separate data channel ingested by the LLM such as include text or multimedia pulled from databases or websites. These malicious prompts may be hidden or obfuscated from the user. This type of injection may be used by the adversary to gain a foothold in the system or to target an unwitting user of the system.

Adversaries may use their access to an LLM that is part of a larger system to compromise connected plugins. LLMs are often connected to other services or resources via plugins to increase their capabilities. Plugins may include integrations with other applications, access to public or private data sources, and the ability to execute code.

Previous (Findings - Action based findings)
Alias Overloading
Next (Findings - Action based findings)
Array-based Query Batching