Medium risk

A service set to DEBUG log level for troubleshooting that was never switched back. You’re getting 10x the logs you need because someone forgot to turn off verbose logging.

Why it happens

An engineer enables DEBUG logging to investigate an issue. The issue gets resolved. The engineer moves on. The log level stays at DEBUG. Days pass. Weeks pass. The service keeps emitting massive volumes of logs nobody reads. This is surprisingly common. Log level changes often happen through environment variables or config files that don’t go through normal code review.
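To see how such a change slips past review, here is a minimal Go sketch, assuming a hypothetical service whose level comes from a LOG_LEVEL environment variable; the variable name and handler setup are illustrative, not taken from any particular codebase:

```go
package main

import (
	"log/slog"
	"os"
)

// Hypothetical example: the log level comes from an environment variable,
// so flipping it to DEBUG during an incident never shows up in code review,
// and nothing prompts anyone to flip it back afterwards.
func newLogger() *slog.Logger {
	level := slog.LevelInfo
	if os.Getenv("LOG_LEVEL") == "DEBUG" { // set during troubleshooting...
		level = slog.LevelDebug // ...and easy to forget once the incident closes
	}
	return slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{Level: level}))
}

func main() {
	logger := newLogger()
	logger.Debug("Entering getUserById", "service.name", "user-service")
}
```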

Example

A service emitting thousands of DEBUG logs per minute:
{"severity_text": "DEBUG", "body": "Entering getUserById", "service.name": "user-service"}
{"severity_text": "DEBUG", "body": "Cache miss for user 12345", "service.name": "user-service"}
{"severity_text": "DEBUG", "body": "Querying database", "service.name": "user-service"}
{"severity_text": "DEBUG", "body": "Query took 3ms", "service.name": "user-service"}
{"severity_text": "DEBUG", "body": "Exiting getUserById", "service.name": "user-service"}
This isn’t a policy you deploy. It’s a configuration change the service owner needs to make.

Tero detects this by noticing a sudden spike in DEBUG logs that persists beyond a reasonable troubleshooting window. If a service has been emitting DEBUG logs for days, something’s probably wrong.

Start by notifying the service owner, and escalate if the issue persists. Most teams begin with a ticket; if a service has had debug mode on for 48+ hours with no response, temporary edge blocking gets attention without permanent consequences.
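If you want to automate that escalation ladder, one possible encoding is sketched below; the 48-hour threshold comes from the text above, while the 24-hour ticket step and the stage names are assumptions:

```go
package main

import "time"

// Illustrative escalation ladder. Only the 48-hour edge-blocking threshold is
// taken from the guidance above; everything else is an assumption.
type Action int

const (
	NotifyOwner Action = iota // first contact, no ticket yet
	OpenTicket                // issue persists, make it trackable
	EdgeBlock                 // 48+ hours with no response: temporary block at the edge
)

func NextAction(debugOnFor time.Duration, ownerResponded bool) Action {
	switch {
	case debugOnFor >= 48*time.Hour && !ownerResponded:
		return EdgeBlock
	case debugOnFor >= 24*time.Hour: // assumed intermediate step
		return OpenTicket
	default:
		return NotifyOwner
	}
}
```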

How it works

Tero tracks log volume patterns per service in your Master Catalog. When a service’s DEBUG log volume spikes and stays elevated for more than a few hours, it’s flagged. The key signal is time. DEBUG logs during an active incident are normal. DEBUG logs that persist for days after the incident ended are a forgotten configuration.
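A rough sketch of that time-based check, using hypothetical names (ServiceStats, ForgottenDebug), a 6-hour window, and a 10x-over-baseline spike threshold rather than Tero’s actual internals:

```go
package main

import "time"

// Hypothetical sketch of the time-based signal: a DEBUG spike only becomes a
// finding once it has persisted past a troubleshooting window.
const debugWindow = 6 * time.Hour // "more than a few hours"; exact value is an assumption

type ServiceStats struct {
	BaselineDebugPerMin float64   // long-term DEBUG rate from the catalog baseline
	CurrentDebugPerMin  float64   // rate over the most recent interval
	ElevatedSince       time.Time // zero if the rate is at baseline
}

// ForgottenDebug returns true when DEBUG volume is well above baseline and has
// stayed that way longer than a reasonable troubleshooting window.
func ForgottenDebug(s ServiceStats, now time.Time) bool {
	spiking := s.CurrentDebugPerMin > 10*s.BaselineDebugPerMin // 10x threshold is illustrative
	if !spiking || s.ElevatedSince.IsZero() {
		return false
	}
	return now.Sub(s.ElevatedSince) > debugWindow
}
```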