Kubernetes probing /health, load balancers pinging /ready, synthetic monitors verifying uptime: infrastructure checking whether your service is alive, not real user traffic.
Every production service needs health checks. Each check generates a log. Multiply by pods, by check frequency, by services. A small cluster can generate millions of health check logs per day. They tell you nothing you don’t already know.
Only successful probes are dropped. Failed health checks still have debugging value: a 503 to /health tells you when and why your service was unhealthy. Those stay.
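The keep/drop rule boils down to a simple predicate. A minimal sketch (illustrative only, not Tero's actual implementation; the log field names are assumptions):

```python
# Drop rule sketch: successful health checks are dropped,
# failed ones are kept for debugging. Field names are illustrative.

HEALTH_PATHS = {"/health", "/ready", "/live", "/ping", "/healthz"}

def should_drop(log: dict) -> bool:
    is_health_check = log.get("path") in HEALTH_PATHS
    is_success = 200 <= log.get("status", 0) < 300
    # Drop only when both hold; a 503 to /health survives.
    return is_health_check and is_success

print(should_drop({"path": "/health", "status": 200}))  # → True (dropped)
print(should_drop({"path": "/health", "status": 503}))  # → False (kept)
```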
Example
- Before
- After
Recommended enforcement
Enforce at edge
Drop health check logs before they ever reach your logging provider. Immediate volume reduction.
How it works
Tero identifies health check logs by looking at request patterns: common paths (/health, /ready, /live, /ping, /healthz), known probe user agents (kube-probe, ELB-HealthChecker), and successful status codes.
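Those three signals can be sketched as a single classifier. A hypothetical illustration, not the actual implementation; in particular, how the path and user-agent signals combine is an assumption here:

```python
# Illustrative classifier combining the three signals described above:
# common probe paths, known probe user agents, and a success status code.

HEALTH_PATHS = {"/health", "/ready", "/live", "/ping", "/healthz"}
PROBE_AGENTS = ("kube-probe", "ELB-HealthChecker")

def is_droppable_health_check(path: str, user_agent: str, status: int) -> bool:
    probe_path = path in HEALTH_PATHS
    probe_agent = user_agent.startswith(PROBE_AGENTS)
    # Either signal marks it as a probe; only successful probes qualify.
    return (probe_path or probe_agent) and 200 <= status < 300
```

Matching on the user agent as well as the path catches probes aimed at non-standard endpoints, while the status-code check ensures failed probes are never dropped.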