Low risk

Logs from health check endpoints: Kubernetes probes hitting /health, load balancers pinging /ready, synthetic monitors verifying uptime. They fire constantly and tell you nothing you don’t already know.

Why it happens

Every production service needs health checks. Kubernetes needs to know if your pod is alive. Load balancers need to know if your instance can take traffic. Monitoring systems need to verify uptime. Each check generates a log. Multiply by pods, by check frequency, by services, and a small cluster generates millions of health check logs per day: 200 pods with liveness and readiness probes firing every ten seconds produce roughly 3.5 million log lines.

Example

{
  "@timestamp": "2024-01-15T10:30:00Z",
  "service.name": "checkout-api",
  "http.method": "GET",
  "http.target": "/health",
  "http.status_code": 200,
  "http.user_agent": "kube-probe/1.28"
}
Tero generates a scoped policy for each service where this pattern exists:
id: drop-health-checks-checkout-api
name: Drop health check logs from checkout-api
description: Drop successful Kubernetes probe requests to health endpoints.
log:
  match:
    - resource_attribute: service.name
      exact: checkout-api
    - log_attribute: http.target
      regex: "^/(health|ready|live|ping)"
    - log_attribute: http.status_code
      exact: "200"
  keep: none
Health check patterns are usually consistent across services. You can expand the scope to apply org-wide.
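
A minimal sketch of that broader scope, using the same policy schema but with the service.name match removed (the id and name below are illustrative, not Tero-generated output):
id: drop-health-checks-org-wide
name: Drop health check logs org-wide
description: Drop successful probe requests to health endpoints across all services.
log:
  match:
    - log_attribute: http.target
      regex: "^/(health|ready|live|ping)"
    - log_attribute: http.status_code
      exact: "200"
  keep: none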

Enforce at edge

Drop health check logs before they reach your provider. Immediate volume reduction.
Health checks are infrastructure noise. They’re generated by Kubernetes and load balancers, not your application logic. Dropping them at the edge is the simplest fix.
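
If your edge happens to be an OpenTelemetry Collector rather than Tero's own enforcement, a roughly equivalent drop rule can be written with the filter processor. This is a sketch of the general idea, not the configuration Tero ships, and the exact OTTL syntax depends on your collector version:
processors:
  filter/drop-health-checks:
    error_mode: ignore
    logs:
      log_record:
        # Drop successful requests to common health check paths.
        # Note: http.status_code may arrive as a string in some pipelines.
        - 'IsMatch(attributes["http.target"], "^/(health|ready|live|ping)") and attributes["http.status_code"] == 200'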

How it works

Tero identifies health check logs by looking at request patterns: common health check paths (/health, /ready, /live, /ping), known probe user agents (kube-probe, ELB-HealthChecker), and request frequency. A log that matches these patterns and never appears in any dashboard or alert is flagged. Failed health checks are not flagged; those have debugging value.
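
If the paths in your fleet are less predictable than the defaults, the same match schema shown above can key on the probe identity instead. Whether Tero generates this variant automatically is an assumption; the values are illustrative:
log:
  match:
    - log_attribute: http.user_agent
      regex: "^(kube-probe|ELB-HealthChecker)"
    - log_attribute: http.status_code
      exact: "200"
  keep: none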