Use your own OpenAI API key for Tero’s AI classification. We recommend Anthropic for slightly better accuracy on telemetry classification, but OpenAI works well and is a good choice if you have existing agreements or prefer it.

Connect

1. Run tero

Open your terminal and run:
tero
This opens the Tero TUI, which guides you through setup.
Don’t have the CLI installed? See the quickstart.
2. Log in to Tero

The TUI opens your browser to create an account or log in. Complete the flow in your browser, then confirm the code shown in your terminal. The TUI logs you in automatically.
3. Select OpenAI

The TUI asks which integration you want to connect. Select OpenAI.
4. Create an API key

Go to the OpenAI Platform and create an API key. Copy the key.
5. Paste your API key

Paste the API key into the TUI. Tero validates the key and confirms the connection.
6. Done

Tero is now configured to use your OpenAI API key for classification.

How Tero uses AI

Tero sends telemetry samples to your configured AI provider for classification. This powers:
  • Log event classification: Understanding what each log event represents and whether it’s valuable
  • Service enrichment: Adding context about what services do based on their telemetry patterns
  • Waste identification: Determining which telemetry is debug noise, health checks, or otherwise low-value
See How Tero evaluates waste for the full classification process.

What gets sent

Tero sends sample log lines, metric names, and span names. These samples include:
  • Log message bodies and attributes
  • Service names and descriptions
  • Metric and span metadata
Samples are processed in memory and discarded. They are not stored by Tero or the AI provider.
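To make the list above concrete, here is a sketch of what one classification sample could contain. The field names and values are invented for illustration; they are not Tero's actual wire format or schema.

```python
# Hypothetical classification sample -- field names are invented for this
# sketch and do not reflect Tero's actual schema.
sample = {
    "service": "checkout-api",                     # service name
    "log_body": "GET /healthz 200 2ms",            # sample log message body
    "attributes": {                                # log attributes
        "http.route": "/healthz",
        "http.status_code": 200,
    },
    "metric_names": ["http.server.request.duration"],  # metric metadata
    "span_names": ["GET /healthz"],                    # span metadata
}
```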

Usage scales with diversity, not volume

AI usage depends on how many unique event types you have, not how much telemetry you generate. If you have 1 billion logs but they all match 1,000 distinct event types, Tero only needs to classify those 1,000 events. The other 999,999,000 logs match existing classifications and don’t require AI calls. Rough estimates:
Environment            Unique event types    Initial analysis
Small (10 services)    ~500-2,000            ~50K-200K tokens
Medium (50 services)   ~2,000-10,000         ~200K-1M tokens
Large (200+ services)  ~10,000-50,000        ~1M-5M tokens
After initial analysis, ongoing usage is minimal. New event types are rare once your telemetry patterns are understood.
These are rough estimates. Actual usage depends on telemetry diversity, not volume. A service with verbose, free-form logging creates more unique event types than one emitting structured logs.
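The diversity-not-volume behavior can be sketched as template fingerprinting: each log line is reduced to an event-type key, and only lines producing a key not seen before need an AI call. This is a simplified illustration, not Tero's actual fingerprinting algorithm.

```python
import re

def fingerprint(log_line: str) -> str:
    """Reduce a log line to its template by masking variable parts.
    A deliberately simple stand-in for real event-type derivation."""
    return re.sub(r"\d+", "<num>", log_line)  # mask numeric values

seen: set[str] = set()
logs = [
    "user 42 logged in",
    "user 7 logged in",
    "user 9001 logged in",
    "payment failed for order 13",
]

ai_calls = 0
for line in logs:
    key = fingerprint(line)
    if key not in seen:   # only a novel event type needs classification
        seen.add(key)
        ai_calls += 1     # classify once, then reuse the result
# 4 log lines, but only 2 unique event types, so only 2 AI calls
```

The same logic is why 1 billion logs matching 1,000 templates cost roughly 1,000 classifications: every repeat hits the cache of already-classified event types.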

Data handling

OpenAI’s API does not use customer data to train models by default. See OpenAI’s API data usage policy for details.

Fallback

You can configure Anthropic as a fallback in case OpenAI is unavailable. Fallback triggers on unrecoverable errors (API unavailable, authentication failures). Rate limits do not trigger fallback.
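The fallback policy described above can be sketched as follows. The exception and function names are hypothetical placeholders, not Tero's implementation; the point is which error classes switch providers and which do not.

```python
# Sketch of the fallback policy: unrecoverable errors switch providers,
# rate limits do not. Names here are illustrative placeholders.

class Unavailable(Exception): ...   # API unreachable
class AuthFailure(Exception): ...   # authentication failed
class RateLimited(Exception): ...   # provider rate limit hit

def classify(sample, primary, fallback):
    try:
        return primary(sample)
    except (Unavailable, AuthFailure):
        # Unrecoverable with this provider: use the fallback provider.
        return fallback(sample)
    except RateLimited:
        # Rate limits do NOT trigger fallback; re-raise so the request
        # can be retried against the primary provider later.
        raise
```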

API key

Requirement    Details
Model          GPT-4 or later
Permissions    Default API key permissions
Rate limits    Standard limits are sufficient