No Output Scanning
Updated: May 5, 2026
Description
The AI model does not scan its output for known-bad signatures of malicious or dangerous content.
This lack of output filtering increases the risk of emitting harmful responses, including spam, phishing content, or payloads that match known malware signatures, which bad actors could exploit.
If the model generates and outputs malicious content unchecked, it could inadvertently facilitate cyberattacks. Attackers could leverage this weakness to distribute phishing emails, generate malware payloads, or propagate spam, leading to security breaches, reputational harm, or regulatory violations.
Remediation
Implement scanning mechanisms that detect and block known-bad signatures of viruses, spam, and phishing attempts in AI-generated output.
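As a minimal sketch of this remediation, model output can be checked against a signature list before it is returned to the caller. The signature patterns and function names below are hypothetical illustrations; a production deployment would source signatures from a maintained feed (e.g. an antivirus or spam-signature database) rather than hard-code them.

```python
import re

# Hypothetical signature list for illustration only; real deployments
# would load signatures from a maintained, regularly updated feed.
KNOWN_BAD_SIGNATURES = [
    (re.compile(r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"), "malware-test-signature"),
    (re.compile(r"verify your account.{0,40}(password|ssn)", re.I | re.S), "phishing-pattern"),
    (re.compile(r"(?i)congratulations[!,].{0,60}you (have )?won"), "spam-pattern"),
]

def scan_output(text: str) -> list[str]:
    """Return the names of any known-bad signatures found in model output."""
    return [name for pattern, name in KNOWN_BAD_SIGNATURES if pattern.search(text)]

def filter_output(text: str) -> str:
    """Block the whole response if any signature matches; pass it through otherwise."""
    hits = scan_output(text)
    if hits:
        return f"[response blocked: matched signatures {hits}]"
    return text
```

Blocking the entire response on a match, rather than redacting the matched span, is the safer default here: partial redaction can leave the surrounding malicious context intact.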
Security Frameworks
Improper Output Handling refers specifically to insufficient validation, sanitization, and handling of the outputs generated by large language models before they are passed downstream to other components and systems. Since LLM-generated content can be controlled by prompt input, this behavior is similar to providing users indirect access to additional functionality.
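Because model output is attacker-influenceable via the prompt, downstream components must treat it as untrusted input and encode it for the destination context. A minimal sketch in Python, using standard-library escaping (the function names are illustrative, not from any specific framework):

```python
import html
import shlex

def render_to_web(llm_output: str) -> str:
    """HTML-encode model output before inserting it into a page,
    so injected markup cannot execute as script (XSS)."""
    return html.escape(llm_output)

def pass_to_shell(llm_output: str) -> str:
    """Shell-quote model output before it reaches a command line,
    so metacharacters (;, |, backticks) cannot inject commands."""
    return shlex.quote(llm_output)
```

The same principle applies to any sink: parameterize SQL queries, validate against a schema before deserializing, and escape for the specific interpreter that will consume the output.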
Adversaries may abuse their access to a victim system and use its resources or capabilities to further their goals by causing harms external to that system. These harms could affect the organization (e.g. Financial Harm, Reputational Harm), its users (e.g. User Harm), or the general public (e.g. Societal Harm).
Reputational harm involves a degradation of public perception and trust in organizations. Examples of reputation-harming incidents include scandals or false impersonations.
Societal harms are harmful outcomes that reach either the general public or specific vulnerable groups, such as the exposure of children to vulgar content.
User harms encompass a variety of harm types, including financial and reputational, that are directed at or felt by individual victims of an attack rather than by the organization.
The AI system is evaluated regularly for safety risks, as identified in the MAP function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge limits. Safety metrics reflect system reliability and robustness, real-time monitoring, and response times for AI system failures.
AI system security and resilience, as identified in the MAP function, are evaluated and documented.
Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.
The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.
The organization shall define and document the necessary elements for the ongoing operation of the AI system. At the minimum, this should include system and performance monitoring, repairs, updates and support.
The organization shall provide capabilities for interested parties to report adverse impacts of the AI system.
Agents can misuse legitimate tools due to prompt injection, misalignment, or unsafe delegation, leading to data exfiltration, tool-output manipulation, or workflow hijacking.