A service running at DEBUG log level in production. Someone enabled verbose logging to investigate an issue, then forgot to turn it off. Now you’re getting 10x the logs you need. Log level changes often happen through environment variables or config files that don’t go through normal code review. There’s no reminder to switch back. This is common.

Example

A service emitting thousands of DEBUG logs per minute:
{"severity_text": "DEBUG", "body": "Entering getUserById", "service.name": "user-service"}
{"severity_text": "DEBUG", "body": "Cache miss for user 12345", "service.name": "user-service"}
{"severity_text": "DEBUG", "body": "Querying database", "service.name": "user-service"}
{"severity_text": "DEBUG", "body": "Query took 3ms", "service.name": "user-service"}
{"severity_text": "DEBUG", "body": "Exiting getUserById", "service.name": "user-service"}
This pattern repeats for every request, for days.
This isn’t a policy you deploy to filter logs; it’s a configuration problem that needs to be fixed at the source. Start with tickets. If a service has had debug logging on for 48+ hours with no response, edge blocking can get the owning team’s attention. But the real fix is changing the configuration.

How it works

This isn’t a per-log-event analysis. Tero looks at service-level patterns: a sudden spike in DEBUG logs that persists beyond a configurable threshold, 24 hours by default. You can configure thresholds to escalate: notify at 24 hours, block at 48. DEBUG logs during an active incident are normal; DEBUG logs that persist for days are a forgotten configuration.
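The escalation logic above can be sketched as a small state function. This is a minimal illustration, not Tero’s actual implementation or API: the function name, the state labels, and the way the spike start time is tracked are all assumptions. It only shows how the notify-at-24h, block-at-48h thresholds would map onto a sustained DEBUG spike.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds mirroring the defaults described above.
NOTIFY_AFTER = timedelta(hours=24)
BLOCK_AFTER = timedelta(hours=48)

def escalation_state(spike_started_at: datetime, now: datetime) -> str:
    """Return the escalation step for a sustained DEBUG spike.

    `spike_started_at` is when the service's DEBUG volume first rose
    above its baseline. Names and states here are illustrative only.
    """
    elapsed = now - spike_started_at
    if elapsed >= BLOCK_AFTER:
        return "block"    # 48+ hours with no response: block at the edge
    if elapsed >= NOTIFY_AFTER:
        return "notify"   # 24+ hours: open a ticket for the owning team
    return "ok"           # short-lived spikes (e.g. an active incident)

start = datetime(2024, 1, 1)
print(escalation_state(start, start + timedelta(hours=6)))   # ok
print(escalation_state(start, start + timedelta(hours=30)))  # notify
print(escalation_state(start, start + timedelta(hours=50)))  # block
```

The key design point is that the input is the duration of the spike, not any individual log line: a six-hour burst of DEBUG during an incident never escalates, while the same volume sustained for two days does.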