1. Connect to your observability stack
Tero connects to your observability stack via API: Datadog, Splunk, your collectors. Read-only by default. No agents, no code changes, no infrastructure changes. Your setup stays exactly as it is.

2. Build the Master Catalog from your data
Tero analyzes your telemetry and builds the Master Catalog. Billions of raw logs compress into thousands of distinct log events. Each one is understood: what it represents, which service produces it, whether anyone queries it. Services, dependencies, and failure scenarios get connected into a graph. This is how Tero knows what’s waste and what matters.
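The compression step can be sketched as a log-templating pass: mask the variable parts of each line so identical events collapse into one catalog entry. This is a minimal illustration, not Tero's actual implementation; the masking rules and token names are assumptions.

```python
import re
from collections import Counter

# Illustrative masking rules; a real cataloger would use many more.
MASKS = [
    (re.compile(r"\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b"), "<uuid>"),
    (re.compile(r"\b\d+\b"), "<num>"),
]

def template(line: str) -> str:
    """Collapse a raw log line into its template by masking variable parts."""
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line

def build_catalog(lines):
    """Count how many raw lines map onto each distinct template."""
    return Counter(template(line) for line in lines)

logs = [
    "user 42 logged in from 10.0.0.1",
    "user 7 logged in from 10.0.0.2",
    "payment 99 failed for order 123",
]
catalog = build_catalog(logs)
print(catalog.most_common())  # three raw lines, two distinct templates
```

The same idea scales from three lines to billions: the number of distinct templates grows far more slowly than the raw volume, which is what makes a catalog of thousands of events tractable.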

3. Review and approve policies
Tero generates policies: specific statements about what’s wrong and how to fix it. This field duplicates that one. This log is a health check firing 86,400 times a day. This debug statement shipped to production. Policies are organized by category: duplicate fields, health checks, accidental debug statements, verbose payloads, PII leakage. You review with examples from your actual data. Approve what you agree with.
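As a sketch of how one such policy can be detected: a log template firing at a near-constant once-per-second rate (86,400 times a day) is a strong health-check signal. The probe intervals and tolerance below are illustrative assumptions, not Tero's detection logic.

```python
SECONDS_PER_DAY = 86_400
PROBE_PERIODS_S = (1, 10, 30, 60)  # common health-check intervals (assumed)

def looks_like_health_check(daily_count: int, tolerance: float = 0.05) -> bool:
    """Flag daily counts sitting within `tolerance` of a fixed-interval probe rate."""
    for period in PROBE_PERIODS_S:
        expected = SECONDS_PER_DAY // period
        if abs(daily_count - expected) <= expected * tolerance:
            return True
    return False

print(looks_like_health_check(86_150))  # ~1/sec, looks like a probe
print(looks_like_health_check(1_337))   # irregular, looks like organic traffic
```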

4. Enforce

Approved policies get enforced. You choose where:

Configure your provider
Enforce policies in your provider by configuring exclusion filters, routing rules, or transformations directly via API. No deployment required. Fully reversible.
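To illustrate the provider-side path, here is a minimal sketch of building such an exclusion filter for a Datadog logs index. The index name, query, and field layout are assumptions modeled on Datadog's Logs Indexes API; applying the result is one PUT to `/api/v1/logs/config/indexes/{name}`, and removing the entry reverts it.

```python
def add_exclusion_filter(index: dict, name: str, query: str,
                         sample_rate: float = 1.0) -> dict:
    """Return a copy of an index config with one more exclusion filter.

    sample_rate 1.0 drops every matching log; lower values sample instead.
    """
    updated = dict(index)
    updated["exclusion_filters"] = list(index.get("exclusion_filters", [])) + [{
        "name": name,
        "is_enabled": True,
        "filter": {"query": query, "sample_rate": sample_rate},
    }]
    return updated

# Hypothetical current config for an index called "main".
current = {"name": "main", "exclusion_filters": []}
proposed = add_exclusion_filter(
    current, "drop-healthz", '@http.url_details.path:"/healthz"')
print(proposed["exclusion_filters"][0]["name"])
```

Because the change is a config entry rather than a code deploy, rolling it back is just removing the entry and PUTting the previous body.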
Fix it in your code
Enforce policies by opening PRs to fix instrumentation at the source, or create tickets for engineers to handle on their schedule.
Execute at the edge
Enforce policies at the edge before data leaves your network. Drop-in replacements for collectors you already run.
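For the edge path, a sketch of what one such rule can look like in an OpenTelemetry Collector pipeline, using the filter processor to drop health-check logs before export. The receiver/exporter names and the match pattern are illustrative, not a prescribed Tero configuration.

```yaml
processors:
  filter/drop-health-checks:
    error_mode: ignore
    logs:
      log_record:
        # Drop any log record whose body matches a health-check probe.
        - 'IsMatch(body, "GET /healthz")'

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [filter/drop-health-checks]
      exporters: [otlp]
```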
Tero monitors for regressions: if a problem resurfaces, it catches it. Problems you fix stay fixed.