Tero discovers metrics from your integrations and builds semantic understanding of what each one measures and whether it’s worth keeping.

Example

Here’s a raw metric:
name: http_request_duration_seconds
type: histogram
tags:
  service: checkout
  method: POST
  endpoint: /api/orders
  status_code: 200
  instance_id: i-0a1b2c3d4e5f
Tero turns it into a metric with context:
name: http_request_duration_seconds
service: checkout
type: histogram
description: Measures HTTP request latency for the checkout service.

tags:
  - name: method
    status: keep
  - name: endpoint
    status: keep
  - name: status_code
    status: keep
  - name: instance_id
    status: drop
    reason: High cardinality, no query value. Use service tag instead.
Tero identifies which tags are useful and which are causing cardinality problems.
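To see why a tag like instance_id gets flagged, note that cardinality multiplies across tags. With illustrative figures for the checkout service (not numbers from Tero):

methods (5) × endpoints (20) × status codes (10) = 1,000 series
× instances (500)                                = 500,000 series

Dropping instance_id preserves every query you would realistically write, by method, endpoint, or status, while cutting the series count by 500×.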

Exploring metrics

Open any metric to see what Tero learned: description, type, tags, and classification reasoning. Browse by service or search across all metrics. See which tags are flagged as problems, which services produce the metric, and how it connects to the broader system.

Using in chat

Reference a metric with @ to focus your questions:
@http_request_duration_seconds why is p99 latency elevated?
@kafka_consumer_lag which services are affected?
@error_rate what changed in the last hour?
Tero pulls in the metric’s context: what it measures, which service owns it, and related log events. Your question gets answered with that full picture.

Improving context

Tero infers what each metric measures, but you can refine its understanding:
  • Edit descriptions to clarify what the metric represents
  • Reclassify metrics that Tero misjudged
  • Confirm tag recommendations to normalize or drop problematic tags
Every refinement improves how Tero uses metrics in future answers.
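As a sketch, here’s what the example metric might look like after you edit its description and confirm the drop recommendation. The fields mirror the example above; this is illustrative, not a documented schema:

name: http_request_duration_seconds
service: checkout
type: histogram
description: End-to-end HTTP request latency for the checkout service, in seconds.  # edited by hand
tags:
  - name: instance_id
    status: drop  # recommendation confirmed
    reason: High cardinality, no query value. Use service tag instead.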