Logging
Cloud logs
Updated: September 29, 2025
Cloud logs record each interaction with an AI model, capturing when the model was called, the inputs provided, and the outputs generated. Logs are enriched with tags that surface important details, such as the presence of personally identifiable information (PII). To start receiving AI logs, you must first set up a Logging integration. Learn more about setting up AWS Bedrock logging with AWS Lambda.
- In the side menu, go to AI and, under Logging, select Cloud Logs.
- The logs are presented in a table format, displaying columns such as:
  - ID: The unique identifier of the log.
  - Generated Time: The timestamp of log creation.
  - Model: The AI model that was used.
  - Input Message, Output Message: Details of the interaction.
  - Latency, Input Tokens, Output Tokens, Total Tokens: Performance metrics.
- View log details:
  - Click a log ID to view more in-depth information, including:
    - Detailed Metrics: Latency, token usage, and additional metadata.
    - Inputs: The prompt sent to the AI model for processing. For example: Summarize this article.
    - Input Metadata: Details on how the AI model processes the given input:
      - Input Content: The format of the input.
      - Max Tokens: The maximum number of tokens the AI model can generate in its response. Tokens can be words, sub-words, or characters, depending on the model.
      - Stop Sequences: Sequences of characters or words that signal the model to stop generating a response.
      - Temperature: Controls the randomness of responses: lower values (e.g., 0.1–0.5) produce predictable, structured outputs, while higher values generate more diverse and creative text.
      - Top P: Controls the range of possible words the model can choose from when generating each token.
      - Tokens: The total number of tokens in the input request.
    - Outputs: The response generated by the AI model based on the given input.
    - Output Metadata: Details on the model's response generation:
      - Output Content: The format of the model's response.
      - Stop Reason: Indicates why the model stopped generating output. For example:
        - end_turn: The model completed its response naturally.
        - max_tokens: The maximum token limit was reached.
        - guardrail_intervened: The response was blocked or modified by the model's safety system.
      - Tokens: The number of tokens used in the model's response.
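A single row in the cloud-logs table can be pictured as a simple record holding the columns listed above. This is a minimal sketch: the field names and every value are illustrative assumptions, not real log output.

```python
# Illustrative cloud-log row mirroring the table columns; all values are
# made-up examples, not real log data.
log_row = {
    "id": "log-0001",                          # ID
    "generated_time": "2025-09-29T12:00:00Z",  # Generated Time
    "model": "anthropic.claude-3-haiku",       # Model (example identifier)
    "input_message": "Summarize this article.",
    "output_message": "The article covers ...",
    "latency_ms": 420,                         # Latency
    "input_tokens": 12,                        # Input Tokens
    "output_tokens": 87,                       # Output Tokens
    "total_tokens": 99,                        # Total Tokens = input + output
}

print(log_row["id"], log_row["total_tokens"])
```

The token columns are related: Total Tokens is the sum of Input Tokens and Output Tokens, which makes it a quick field to aggregate when comparing model usage across logs.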
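The input and output metadata fields described above correspond closely to the request and response bodies of AWS Bedrock's Converse API. The sketch below shows that mapping; the model ID and all values are illustrative assumptions, and the "response" is a hand-built dictionary shaped like a Converse result rather than an actual API call.

```python
# Sketch: how Input Metadata fields map onto a Bedrock Converse-style request.
# The model ID and all values are illustrative assumptions.
request = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    "messages": [
        {"role": "user", "content": [{"text": "Summarize this article."}]}
    ],
    "inferenceConfig": {
        "maxTokens": 512,          # Max Tokens: cap on generated tokens
        "stopSequences": ["END"],  # Stop Sequences: strings that halt generation
        "temperature": 0.3,        # Temperature: low value -> structured output
        "topP": 0.9,               # Top P: nucleus-sampling cutoff
    },
}

# A hand-built response shaped like a Converse result; the log's Output
# Metadata records the stop reason and the token usage.
response = {
    "stopReason": "end_turn",  # or max_tokens, guardrail_intervened, ...
    "usage": {"inputTokens": 12, "outputTokens": 87, "totalTokens": 99},
}

print(response["stopReason"])            # why generation ended
print(response["usage"]["totalTokens"])  # Total Tokens shown in the log
```

In a real integration, `request` would be passed to a Bedrock runtime client and the returned stop reason and usage figures are what surface in the log's Output Metadata.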