Low risk: Fields that agents, collectors, and SDKs add by default. Fields you never actually query.

Why it happens

Every tool in your observability stack adds metadata. Kubernetes adds pod UIDs. The OTel SDK adds its version. Collectors add process information. Each tool assumes you might need it; most of it, you don't. These fields exist on every log, get indexed, and take up storage. Nobody ever searches them.

Example

Kubernetes assigns internal UIDs to every resource. You already have the human-readable names (k8s.pod.name, k8s.deployment.name), so the UIDs just take up space.
{
  "@timestamp": "2024-01-15T10:30:00Z",
  "severity_text": "ERROR",
  "service.name": "checkout-api",
  "k8s.pod.name": "checkout-api-7d8f9",
  "k8s.pod.uid": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "k8s.replicaset.name": "checkout-api-7d8f9",
  "k8s.replicaset.uid": "f9e8d7c6-b5a4-3210-fedc-ba0987654321",
  "k8s.deployment.name": "checkout-api",
  "k8s.deployment.uid": "12345678-abcd-efgh-ijkl-mnopqrstuvwx",
  "message": "Connection timeout"
}
Tero generates the following policy:
id: remove-k8s-uids
name: Remove Kubernetes UIDs
description: Drop internal Kubernetes identifiers. Pod and deployment names are sufficient for debugging.
log:
  match:
    - resource_attribute: k8s.pod.uid
      exists: true
  transform:
    remove:
      - resource_attribute: k8s.pod.uid
      - resource_attribute: k8s.replicaset.uid
      - resource_attribute: k8s.deployment.uid
      - resource_attribute: k8s.statefulset.uid
      - resource_attribute: k8s.daemonset.uid
      - resource_attribute: k8s.job.uid
      - resource_attribute: k8s.cronjob.uid
This category often makes sense to apply org-wide. If you’re not querying k8s.pod.uid on one service, you’re probably not querying it anywhere.

Enforce at edge

Drop unused fields before data leaves your network. Immediate savings, no code changes.
These fields are added by infrastructure tooling, not your application code. Fixing at the source would mean reconfiguring agents across your entire fleet. It’s simpler to drop them at the edge.
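If your edge is an OpenTelemetry Collector, the same drop can be expressed with the collector's resource processor. A minimal sketch, assuming an OTLP-in/OTLP-out logs pipeline; the processor name and pipeline wiring here are illustrative, not Tero-generated output:
processors:
  resource/remove-k8s-uids:
    attributes:
      - key: k8s.pod.uid
        action: delete
      - key: k8s.replicaset.uid
        action: delete
      - key: k8s.deployment.uid
        action: delete
      - key: k8s.statefulset.uid
        action: delete
      - key: k8s.daemonset.uid
        action: delete
      - key: k8s.job.uid
        action: delete
      - key: k8s.cronjob.uid
        action: delete

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [resource/remove-k8s-uids]
      exporters: [otlp]
Because the resource processor edits resource attributes, one entry covers every log flowing through the pipeline, regardless of which service emitted it.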

How it works

Tero checks every signal in your context graph: query history, dashboards, alerts, saved searches. Fields that exist on logs but never appear in any of those signals are flagged as unused. This isn't guessing: if k8s.pod.uid appears in even one dashboard or alert, it won't be flagged. Tero only surfaces fields with zero usage across all signals.
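As a rough illustration of that rule, here is a conceptual sketch in Python. It is not Tero's implementation, and every name in it is made up:
# Conceptual sketch of the zero-usage check described above.
def find_unused_fields(observed_fields, usage_signals):
    # observed_fields: set of field names seen on ingested logs.
    # usage_signals: one set of referenced field names per signal source
    # (query history, dashboards, alerts, saved searches).
    referenced = set().union(*usage_signals) if usage_signals else set()
    # A single reference in any signal keeps a field; only fields with
    # zero usage across all signals are returned.
    return observed_fields - referenced

unused = find_unused_fields(
    {"service.name", "k8s.pod.name", "k8s.pod.uid"},
    [{"service.name"}, {"k8s.pod.name"}, set()],
)
# unused == {"k8s.pod.uid"}
The key property is the union across signals: a field survives if anything anywhere references it, which is why the check errs on the side of keeping fields.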