Findings
Multimodal jailbreak vulnerability
Updated: June 19, 2025
Description
The model's output safeguards can be bypassed using multimodal prompts, such as combinations of images and text.
By combining visual inputs with textual prompts, attackers can escape the model's usual content restrictions and cause it to generate harmful, offensive, or otherwise restricted content. The image component can carry instructions or context that bypasses safety mechanisms designed for text-only inputs.
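To illustrate the gap rather than the exploit itself, the sketch below (Python, with hypothetical names such as MultimodalPrompt, text_only_filter, and BLOCKED_PHRASES; none of these come from this finding) shows how a safeguard that inspects only the text portion of a request never sees instructions carried inside an attached image.

    # Minimal sketch (hypothetical names throughout) of why a text-only
    # safeguard misses instructions carried by an attached image.
    from dataclasses import dataclass

    @dataclass
    class MultimodalPrompt:
        text: str            # user-visible text portion of the request
        image_bytes: bytes   # attached image; may contain rendered instructions

    # Illustrative blocklist only; real text filters are far more sophisticated.
    BLOCKED_PHRASES = {"ignore previous instructions"}

    def text_only_filter(prompt: MultimodalPrompt) -> bool:
        """Return True if the request is allowed. Only the text field is
        inspected, so instructions embedded in the image pass unexamined."""
        lowered = prompt.text.lower()
        return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

    # The text looks benign, so the text-only check lets the request through,
    # even though the image may carry the actual jailbreak payload.
    request = MultimodalPrompt(text="What does this picture say?", image_bytes=b"...")
    print(text_only_filter(request))  # True: the image is never checked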
Impact
If exploited, this vulnerability allows the model to generate unsafe or malicious content when processing multimodal inputs. Attackers can pair images with text to push the model past its content safeguards, producing harmful material such as explicit content or toxic responses. This compromises the integrity of the AI system and can lead to reputational damage, legal exposure, and the dissemination of inappropriate content.
Remediation
Investigate and strengthen the guardrails and other output-security mechanisms that handle multimodal inputs. Improve the model's ability to interpret and filter both the textual and visual elements of a prompt so that unsafe content is not generated, regardless of the input format. Add further security layers and context validation to prevent safeguard bypasses via multimodal attacks.
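As a hedged sketch of the layered checks described above, the following Python outline moderates textual views of the image alongside the prompt text, then moderates the model output before returning it. The extract_image_text, caption_image, moderate_text, and generate callables are hypothetical stand-ins for whatever OCR, captioning, moderation, and inference services are actually in use, not a specific vendor API.

    # Hedged sketch of layered multimodal moderation; every callable passed in
    # is a hypothetical stand-in, not a specific vendor API.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class MultimodalPrompt:
        text: str
        image_bytes: bytes

    def screen_and_respond(
        prompt: MultimodalPrompt,
        extract_image_text: Callable[[bytes], str],   # e.g. an OCR step
        caption_image: Callable[[bytes], str],        # e.g. an image-captioning step
        moderate_text: Callable[[str], bool],         # True if the text is safe
        generate: Callable[[MultimodalPrompt], str],  # the underlying model call
    ) -> str:
        # Input-side validation: check the prompt text AND textual views of the
        # image, so image-borne instructions are not invisible to the guardrail.
        for candidate in (prompt.text,
                          extract_image_text(prompt.image_bytes),
                          caption_image(prompt.image_bytes)):
            if not moderate_text(candidate):
                return "Request refused: unsafe multimodal input."

        # Output-side validation: even if an input check is bypassed, the
        # response is moderated before it reaches the user.
        response = generate(prompt)
        if not moderate_text(response):
            return "Response withheld: unsafe output."
        return response

Keeping output moderation as a separate layer means a bypass of the input checks still fails closed rather than returning unfiltered content.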
Security Frameworks
A Prompt Injection Vulnerability occurs when user prompts alter the LLM's behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans; prompt injections therefore do not need to be human-visible or readable, as long as the content is parsed by the model.
An adversary may craft malicious prompts as inputs to an LLM that cause the LLM to act in unintended ways. These prompt injections are often designed to cause the model to ignore aspects of its original instructions and follow the adversary's instructions instead.
An adversary may inject prompts directly as a user of the LLM. This type of injection may be used by the adversary to gain a foothold in the system or to misuse the LLM itself, for example to generate harmful content.
An adversary may inject prompts indirectly via a separate data channel ingested by the LLM, such as text or multimedia pulled from databases or websites. These malicious prompts may be hidden or obfuscated from the user. This type of injection may be used by the adversary to gain a foothold in the system or to target an unwitting user of the system.
An adversary may use a carefully crafted LLM prompt injection designed to place the LLM in a state in which it will freely respond to any user input, bypassing any controls, restrictions, or guardrails placed on the LLM. Once successfully jailbroken, the LLM can be used in unintended ways by the adversary.