Low risk: requests from crawlers, scrapers, and automated scanners. Googlebot indexing your pages, security scanners probing your endpoints, Slack unfurling your links. Real traffic, but not your users.
Bots are a fact of life. Search engines crawl your site. Social platforms fetch previews. Security tools scan for vulnerabilities. Monitoring services check uptime.

Each request generates logs. For public-facing services, bot traffic can be 30-50% of total volume. You’re paying to store logs from Googlebot, not your customers.
Tero generates a scoped policy for each service where bot traffic is detected:
```yaml
id: drop-bot-traffic-marketing-site
name: Drop bot traffic from marketing-site
description: Drop requests from known crawlers and scrapers.
log:
  match:
    - resource_attribute: service.name
      exact: marketing-site
    - log_attribute: http.user_agent
      regex: "(Googlebot|bingbot|Slackbot|AhrefsBot|facebookexternalhit)"
  keep: none
```
Bot traffic patterns are consistent across services. You can expand the scope to apply org-wide.
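For example, dropping the `service.name` matcher widens the policy to every service. This is a sketch that assumes a match list without a `resource_attribute` entry applies org-wide; the `id` and `name` below are hypothetical, and the exact mechanism for widening scope may differ:

```yaml
id: drop-bot-traffic-org-wide
name: Drop bot traffic org-wide
description: Drop requests from known crawlers and scrapers across all services.
log:
  match:
    # No service.name matcher: assumed to apply to every service.
    - log_attribute: http.user_agent
      regex: "(Googlebot|bingbot|Slackbot|AhrefsBot|facebookexternalhit)"
  keep: none
```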
Bot traffic is external noise. Your application didn’t decide to log these requests; they just happened. Dropping them at the edge is the simplest fix.
Tero identifies bot traffic through multiple signals:
- **Known user agents:** Bots usually self-identify. Googlebot, bingbot, Slackbot, and hundreds of others announce themselves in the user agent string.
- **Missing browser signals:** Real browsers send cookies, referrers, and consistent header patterns. Requests missing these signals are likely automated.
- **Request patterns:** Bots often crawl systematically: sequential paths, predictable timing, no session continuity.
A request that matches bot patterns and comes from a public-facing endpoint is flagged. Internal service traffic is not flagged, even if it lacks browser signals.
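To make the combination concrete, here is a minimal Python sketch of how signals like these could be combined. It is an illustration, not Tero’s implementation: every name and threshold in it is hypothetical, and the request-pattern signal is omitted because it requires per-session state.

```python
import re

# Illustrative only: these names and thresholds are hypothetical,
# not part of Tero.
KNOWN_BOT_UA = re.compile(
    r"Googlebot|bingbot|Slackbot|AhrefsBot|facebookexternalhit",
    re.IGNORECASE,
)
BROWSER_HEADERS = {"cookie", "referer", "accept-language"}

def looks_like_bot(headers: dict[str, str], public_endpoint: bool) -> bool:
    """Flag a request as likely bot traffic."""
    if not public_endpoint:
        # Internal service traffic is never flagged, even without
        # browser signals.
        return False
    if KNOWN_BOT_UA.search(headers.get("user-agent", "")):
        return True  # self-identified crawler
    # Real browsers send cookies, referrers, and consistent headers;
    # a request missing most of them is likely automated.
    present = {name.lower() for name in headers}
    return len(BROWSER_HEADERS - present) >= 2
```

Under these assumptions, `looks_like_bot({"user-agent": "Googlebot/2.1"}, public_endpoint=True)` returns `True`, while the same headers on an internal endpoint return `False`.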