Health check endpoints generate a steady stream of logs: Kubernetes probes hitting /health, load balancers pinging /ready, synthetic monitors verifying uptime. This is infrastructure checking whether your service is alive, not real user traffic.

Every production service needs health checks, and each check generates a log. Multiply by pods, by check frequency, by services, and even a small cluster can generate millions of health check logs per day that tell you nothing you don't already know.

Only successful probes are dropped. Failed health checks have debugging value: a 503 to /health tells you when and why your service was unhealthy. Those stay.

Example

{
  "@timestamp": "2024-01-15T10:30:00Z",
  "service.name": "checkout-api",
  "http.method": "GET",
  "http.target": "/health",
  "http.status_code": 200,
  "http.user_agent": "kube-probe/1.28"
}
Tero generates a scoped policy for each service where this pattern exists:
id: drop-health-checks-checkout-api
name: Drop health check logs from checkout-api
description: Drop Kubernetes probe requests to health endpoints.
log:
  match:
    - resource_attribute: service.name
      exact: checkout-api
    - log_attribute: http.target
      regex: "^/(health|ready|live|ping)"
    - log_attribute: http.status_code
      exact: "200"
  keep: none
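To make the match semantics concrete, here is a minimal Python sketch of evaluating a policy like the one above against a log record. The policy structure, the `matches` helper, and the flat record shape are illustrative assumptions, not Tero's actual implementation:

```python
import re

# Hypothetical in-memory form of the policy above (assumed shape, for illustration).
POLICY = {
    "service.name": {"exact": "checkout-api"},
    "http.target": {"regex": r"^/(health|ready|live|ping)"},
    "http.status_code": {"exact": "200"},
}

def matches(policy: dict, record: dict) -> bool:
    """Return True only when every condition in the policy matches the record."""
    for attr, cond in policy.items():
        value = str(record.get(attr, ""))
        if "exact" in cond and value != cond["exact"]:
            return False
        if "regex" in cond and not re.match(cond["regex"], value):
            return False
    return True

# The example log from above matches every condition, so with `keep: none`
# it would be dropped.
record = {
    "service.name": "checkout-api",
    "http.target": "/health",
    "http.status_code": 200,
}
print(matches(POLICY, record))  # True: drop this log
```

Because the status code condition is part of the match, a 503 to the same endpoint fails the `exact: "200"` check and is kept.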

Enforce at edge

Drop health check logs at the edge, before they reach your logging provider, for an immediate volume reduction.
Health checks are infrastructure noise generated by Kubernetes and load balancers, not by your application logic, so the edge is the right place to drop them.

How it works

Tero identifies health check logs by looking at request patterns: common paths (/health, /ready, /live, /ping, /healthz), known probe user agents (kube-probe, ELB-HealthChecker), and successful status codes.
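A sketch of that identification heuristic in Python. The paths, user-agent prefixes, and status-code rule come from the text above; the function name and record shape are illustrative assumptions:

```python
# Signals described above: common health endpoints and known probe user agents.
HEALTH_PATHS = ("/health", "/ready", "/live", "/ping", "/healthz")
PROBE_AGENTS = ("kube-probe", "ELB-HealthChecker")

def is_droppable_health_check(log: dict) -> bool:
    """Heuristic classifier for health check logs (illustrative sketch).

    Only successful checks are candidates for dropping; failed checks
    are always kept for their debugging value.
    """
    status = int(log.get("http.status_code", 0))
    if not 200 <= status < 300:
        return False  # failed health checks stay

    path = log.get("http.target", "")
    agent = log.get("http.user_agent", "")
    is_health_path = path.startswith(HEALTH_PATHS)
    is_probe_agent = agent.startswith(PROBE_AGENTS)
    return is_health_path or is_probe_agent
```

Either signal alone is enough here: a kube-probe request is dropped even if it hits a nonstandard path, and a /healthz request is dropped even without a recognized user agent, as long as the response was successful.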