Connect via the CLI
1. Run `tero`

Open your terminal and run `tero`. This opens the Tero TUI, which guides you through setup.
Don’t have the CLI installed? See the quickstart.
2. Log in to Tero
The TUI opens your browser to create an account or log in. Complete the flow in your browser, then confirm the code shown in your terminal. The TUI logs you in automatically.
3. Select OpenAI
The TUI asks which integration you want to connect. Select OpenAI.
4. Create an API key
Go to the OpenAI Platform and create an API key. Copy the key.
5. Paste your API key
Paste the API key into the TUI. Tero validates the key and confirms the connection.
6. Done
Tero is now configured to use your OpenAI API key for classification.
How Tero uses AI
Tero sends telemetry samples for classification. This powers:

- Log event classification: Understanding what each log event represents and whether it’s valuable
- Service enrichment: Adding context about what services do based on their telemetry patterns
- Waste identification: Determining which telemetry is debug noise, health checks, or otherwise low-value
What gets sent
Tero sends sample log lines, metric names, and span names. These samples include:

- Log message bodies and attributes
- Service names and descriptions
- Metric and span metadata
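As a rough illustration, a classification sample might carry the kinds of fields listed above. The field names below are hypothetical, not Tero’s actual wire format:

```python
# Hypothetical shape of a telemetry sample sent for classification.
# Field names are illustrative assumptions, not Tero's actual format.
sample = {
    "log_body": "connection refused to db-primary:5432",
    "log_attributes": {"level": "error", "retry_count": 3},
    "service": {"name": "checkout", "description": "Handles payment flows"},
    "metric_names": ["http.server.duration"],
    "span_names": ["POST /checkout"],
}

# Only samples like this are sent, not the full telemetry stream.
print(sorted(sample.keys()))
```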
Usage scales with diversity, not volume
AI usage depends on how many unique event types you have, not how much telemetry you generate. If you have 1 billion logs but they all match 1,000 distinct event types, Tero only needs to classify those 1,000 events. The other 999,999,000 logs match existing classifications and don’t require AI calls.

Rough estimates:

| Environment | Unique event types | Initial analysis |
|---|---|---|
| Small (10 services) | ~500-2,000 | ~50K-200K tokens |
| Medium (50 services) | ~2,000-10,000 | ~200K-1M tokens |
| Large (200+ services) | ~10,000-50,000 | ~1M-5M tokens |
These are rough estimates; actual usage depends on telemetry diversity, not volume. A service with verbose, free-form logging creates more unique event types than one with structured logs.
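The dedupe-then-classify idea can be sketched as follows. This is a minimal sketch: the template regexes and the per-event token figure are assumptions for illustration, not Tero’s actual pipeline:

```python
import re
from collections import Counter

def to_template(log_line: str) -> str:
    """Collapse variable parts (hex IDs, numbers) so repeated events match."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", log_line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

logs = [
    "user 42 logged in",
    "user 7 logged in",
    "health check ok",
    "health check ok",
    "request 0xdeadbeef timed out after 30s",
    "request 0xcafebabe timed out after 30s",
]

templates = Counter(to_template(line) for line in logs)

# Six log lines, but only three unique event types need an AI call.
unique_event_types = len(templates)
print(unique_event_types)  # 3

# Classification cost scales with unique types, not total volume.
TOKENS_PER_EVENT = 100  # hypothetical per-event figure
print(unique_event_types * TOKENS_PER_EVENT)  # 300
```

Doubling the log volume with more repeats of the same lines leaves both numbers unchanged, which is the scaling property described above.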
Data handling
OpenAI’s API does not use customer data to train models by default. See OpenAI’s API data usage policy for details.

Fallback
You can configure Anthropic as a fallback in case OpenAI is unavailable. Fallback triggers on unrecoverable errors (API unavailable, authentication failures). Rate limits do not trigger fallback.

API key
| Requirement | Details |
|---|---|
| Model | GPT-4 or later |
| Permissions | Default API key permissions |
| Rate limits | Standard limits are sufficient |
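The fallback policy described above can be sketched as follows. The exception names and provider callables are assumptions for illustration, not Tero’s actual API:

```python
# Illustrative sketch of the fallback policy: unrecoverable errors trigger
# the fallback provider; rate limits propagate to the caller instead.
class AuthError(Exception): ...
class ServiceUnavailableError(Exception): ...
class RateLimitError(Exception): ...

def classify_with_fallback(sample, primary, fallback):
    """Try the primary provider; fall back only on unrecoverable errors."""
    try:
        return primary(sample)
    except (AuthError, ServiceUnavailableError):
        # Unrecoverable: API unavailable or auth failure -> use the fallback.
        return fallback(sample)
    # RateLimitError deliberately propagates: rate limits do not trigger
    # fallback, so the caller can back off and retry the primary.

def flaky_primary(sample):
    raise ServiceUnavailableError("OpenAI API unreachable")

def anthropic_fallback(sample):
    return "classified-by-fallback"

print(classify_with_fallback("sample log line", flaky_primary, anthropic_fallback))
# classified-by-fallback
```

Keeping rate limits out of the fallback path avoids shifting a burst of load onto the secondary provider when the primary is merely throttling.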