Use AWS Bedrock for Tero’s AI classification. Your data stays in your AWS environment.
Bedrock is only available for self-hosted Tero deployments.

Prerequisites

  • AWS account with Bedrock enabled
  • Access to Claude models in Bedrock
  • AWS credentials (access key/secret or IAM role)
Configuring AWS IAM and Bedrock access is outside the scope of this document. See the AWS Bedrock documentation.
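
If you want to sanity-check access before running Tero, the AWS CLI (v2) can list the Claude models visible to your credentials. This is an optional check outside the Tero flow; adjust the region to one where you have enabled Bedrock:

  # Optional check: list the Claude models your credentials can see in Bedrock.
  aws bedrock list-foundation-models \
    --region us-east-1 \
    --query "modelSummaries[?contains(modelId, 'claude')].modelId"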

Connect

1. Run tero

Open your terminal and run:
tero
This opens the Tero TUI, which guides you through setup.
2. Log in to Tero

The TUI opens your browser to create an account or log in. Complete the flow in your browser, then confirm the code shown in your terminal. The TUI logs you in automatically.
3. Select AWS Bedrock

The TUI asks which integration you want to connect. Select AWS Bedrock.
4. Configure AWS credentials

Provide your AWS credentials and region. The TUI validates access to Bedrock and confirms the connection.
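
The values the TUI asks for are standard AWS credentials. If you keep them in the conventional AWS environment variables (a common setup, not a Tero requirement; the values below are placeholders), they look like this:

  # Standard AWS credential environment variables; values are placeholders.
  export AWS_ACCESS_KEY_ID="AKIA..."
  export AWS_SECRET_ACCESS_KEY="..."
  export AWS_REGION="us-east-1"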
5. Done

Tero is now configured to use AWS Bedrock for classification.

How Tero uses AI

Tero sends telemetry samples to Bedrock for classification. This powers:
  • Log event classification: Understanding what each log event represents and whether it’s valuable
  • Service enrichment: Adding context about what services do based on their telemetry patterns
  • Waste identification: Determining which telemetry is debug noise, health checks, or otherwise low-value
See How Tero evaluates waste for the full classification process.

What gets sent

Tero sends sample log lines, metric names, and span names. These samples include:
  • Log message bodies and attributes
  • Service names and descriptions
  • Metric and span metadata
Samples are processed in memory and discarded. They are not stored by Tero or the AI provider.

Usage scales with diversity, not volume

AI usage depends on how many unique event types you have, not how much telemetry you generate. If you have 1 billion logs but they all match 1,000 distinct event types, Tero only needs to classify those 1,000 events. The other 999,999,000 logs match existing classifications and don’t require AI calls. Rough estimates:
Environment             Unique event types    Initial analysis
Small (10 services)     ~500-2,000            ~50K-200K tokens
Medium (50 services)    ~2,000-10,000         ~200K-1M tokens
Large (200+ services)   ~10,000-50,000        ~1M-5M tokens
After initial analysis, ongoing usage is minimal. New event types are rare once your telemetry patterns are understood.
These are rough estimates; actual usage depends on telemetry diversity, not volume. A service that emits verbose, free-form log messages creates more unique event types than one that emits structured logs.
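
As a back-of-envelope check against the table above (assuming a purely illustrative ~100 tokens per classified event type, which is not a documented constant):

  # Hypothetical estimate: unique event types x assumed tokens per classification.
  UNIQUE_EVENTS=2000        # low end of a "Medium" environment
  TOKENS_PER_EVENT=100      # illustrative assumption, not a documented figure
  echo "$((UNIQUE_EVENTS * TOKENS_PER_EVENT)) tokens"   # 200000, i.e. ~200K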

Data handling

With Bedrock, your telemetry samples never leave your AWS environment. Data is processed by Bedrock in your account, subject to your AWS agreements and compliance controls.

Fallback

You can configure another provider as a fallback in case Bedrock is unavailable. Fallback triggers on unrecoverable errors (API unavailable, authentication failures). Rate limits do not trigger fallback.

Requirements

Requirement    Details
Model          Claude Sonnet (via Bedrock)
Credentials    AWS access key/secret or IAM role
Region         Any region with Bedrock and Claude access
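
IAM setup itself is covered by the AWS documentation, but for orientation only, a minimal policy for model invocation might look like the sketch below. The policy name is hypothetical, and you should scope Resource to the Claude model ARNs you actually use:

  # Sketch only: minimal Bedrock invocation permissions for Tero's classification
  # calls. The policy name is hypothetical; narrow "Resource" to your model ARNs.
  aws iam create-policy \
    --policy-name tero-bedrock-invoke \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": [
          "bedrock:InvokeModel",
          "bedrock:InvokeModelWithResponseStream"
        ],
        "Resource": "*"
      }]
    }'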