Zero risk

Logs that aren’t logs: binary data, corrupted output, unparseable strings. They convey nothing and cost you money.

Why it happens

Applications crash mid-write. Binary protocols get routed to text log pipelines. Encoding mismatches produce garbage. A process dumps core and the output ends up in your logs. Sometimes it’s a misconfigured logger, sometimes it’s a bug, sometimes it’s just bad luck. These aren’t edge cases. In a large enough system, something is always emitting garbage somewhere.

Example

{
  "@timestamp": "2024-01-15T10:30:00Z",
  "service.name": "image-processor",
  "body": "\u0089PNG\r\n\u001a\n\u0000\u0000\u0000\rIHDR..."
}
Tero generates a policy to drop this specific log event:
id: drop-png-binary-image-processor
name: Drop PNG binary data from image-processor
description: PNG image data routed to log pipeline. Not parseable, not queryable.
log:
  match:
    - resource_attribute: service.name
      exact: image-processor
    - log_field: body
      regex: "^\\x89PNG\\r\\n"
  keep: none
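To make the match semantics concrete, here is a minimal sketch of how a policy like the one above could be evaluated against a log event. The policy structure mirrors the example; the evaluation function itself is an assumption for illustration, not Tero's implementation.

```python
import re

# Hypothetical evaluator for the policy shown above. Field names
# (resource_attribute, log_field, exact, regex) come from the example;
# the matching logic is an illustrative assumption.
POLICY = {
    "match": [
        {"resource_attribute": "service.name", "exact": "image-processor"},
        {"log_field": "body", "regex": r"^\x89PNG\r\n"},
    ],
    "keep": "none",
}

def matches(policy: dict, event: dict) -> bool:
    """Return True only if every match condition applies to the event."""
    for cond in policy["match"]:
        if "resource_attribute" in cond:
            # Attribute conditions compare for exact equality.
            if event.get(cond["resource_attribute"]) != cond["exact"]:
                return False
        elif "log_field" in cond:
            # Field conditions are anchored regex matches.
            value = event.get(cond["log_field"], "")
            if not re.match(cond["regex"], value):
                return False
    return True

event = {
    "@timestamp": "2024-01-15T10:30:00Z",
    "service.name": "image-processor",
    "body": "\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR...",
}
print(matches(POLICY, event))  # True: this event would be dropped
```

Because all conditions must hold, the policy only drops PNG payloads from `image-processor`; the same bytes in another service's logs are untouched.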

Enforce at edge

Drop malformed logs before they reach your provider. No point paying to store garbage.
There’s no “fix at source” for most malformed data. It’s usually a symptom of something breaking, not a logging decision someone made.

How it works

Tero evaluates each log event in your context graph. It looks at the content structure and whether the log is parseable. When Tero finds malformed data (binary content, unparseable strings, corrupted output), it creates a rule for that specific log event. You approve the rule, and Tero generates a policy that matches the exact pattern it found.
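Tero's parseability analysis is internal to the product, but the general idea can be illustrated with a rough heuristic: text logs almost never contain NUL bytes, and a high ratio of non-printable characters is a strong signal the payload is binary or corrupted. This sketch (the function name and threshold are assumptions, not Tero's detector) shows one such check.

```python
# Illustrative heuristic for flagging likely-malformed log bodies.
# Not Tero's actual detection logic; threshold chosen arbitrarily.
def looks_malformed(body: str, threshold: float = 0.3) -> bool:
    """Flag bodies that are likely binary or corrupted rather than text."""
    if not body:
        return False
    if "\x00" in body:  # NUL bytes never belong in text logs
        return True
    # Count control/non-printable characters, allowing common whitespace.
    non_printable = sum(
        1 for ch in body if not ch.isprintable() and ch not in "\t\n\r"
    )
    return non_printable / len(body) > threshold

print(looks_malformed("\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR..."))  # True
print(looks_malformed("GET /api/users 200 12ms"))                  # False
```

A flagged event would then become a candidate rule, awaiting your approval before a drop policy is generated.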