Security starts with architecture. We designed Tero to minimize risk at every integration point.
You start with read-only API access to your observability platform. We analyze telemetry samples to build a semantic catalog: schemas, field types, volume patterns, quality classifications. You see the analysis, review recommendations, and decide what to implement yourself.

When you’re ready, grant write access. We configure exclusion rules and sampling policies directly in your observability platform instead of you implementing them manually. You scope these permissions however you need and revoke them anytime.

The edge proxy (optional) runs in your infrastructure. You deploy it, control it, monitor it. Rules sync from the control plane over encrypted connections, but edge operates independently. If the control plane goes offline, edge continues with cached rules.

We store the semantic catalog and your quality rules in the control plane. We don’t store telemetry content. Not log messages, not metric values, not trace data. Your telemetry stays in your observability platform or gets filtered at the edge before reaching your vendor.

This architecture separates the control plane (intelligence and optimization) from the data plane (processing and filtering). The control plane never sits in your telemetry path. Even with write access, we configure your systems through APIs, not by intercepting data.
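To make the write-access model concrete, here is a minimal Go sketch of the idea: the control plane creates an exclusion rule by calling the observability vendor’s management API, never by sitting inline on the telemetry stream. The endpoint, payload fields, and token variable are hypothetical placeholders, not any particular vendor’s API.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// ExclusionRule is a hypothetical payload shape; real observability
// vendors each define their own schema for exclusion and sampling rules.
type ExclusionRule struct {
	Name   string `json:"name"`
	Query  string `json:"query"`  // which telemetry the rule matches
	Action string `json:"action"` // e.g. "exclude" or "sample:0.1"
}

func main() {
	rule := ExclusionRule{
		Name:   "drop-debug-healthchecks",
		Query:  `level:debug AND path:"/healthz"`,
		Action: "exclude",
	}
	body, _ := json.Marshal(rule)

	// Hypothetical vendor management API. The point is that the control
	// plane only ever calls configuration endpoints like this one; it is
	// never on the telemetry path itself.
	req, err := http.NewRequest(http.MethodPost,
		"https://api.example-observability-vendor.com/v1/exclusion-rules",
		bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("VENDOR_WRITE_SCOPED_TOKEN"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("vendor API responded:", resp.Status)
}
```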
The control plane runs on Google Cloud Platform in a multi-zone configuration. If one zone fails, traffic shifts to another automatically.

Everything is encrypted. Data in transit uses TLS 1.3. Data at rest uses AES-256. This includes the database, backups, and any temporary storage. Encryption keys are managed through GCP’s Key Management Service with automatic rotation.

Network access is restricted. The control plane runs in a private VPC. Only specific services are exposed publicly, and those sit behind load balancers with DDoS protection. Internal services communicate over private networks.

We monitor infrastructure continuously using GCP Security Command Center. This detects anomalies, flags misconfigurations, and alerts on potential threats. All access to infrastructure is logged.

Database backups run daily and are encrypted. We retain backups for 30 days, stored in geographically separate regions from the primary database. Recovery is tested quarterly.
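As a small illustration of the transport-encryption posture, the sketch below shows a Go HTTPS server that refuses anything older than TLS 1.3. The certificate paths and port are placeholders, and this is not Tero’s actual server code.

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	srv := &http.Server{
		Addr:    ":8443",
		Handler: mux,
		TLSConfig: &tls.Config{
			// Reject any handshake below TLS 1.3 for data in transit.
			MinVersion: tls.VersionTLS13,
		},
	}

	// Certificate and key paths are placeholders; in practice certificates
	// are provisioned and rotated by the platform.
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}
```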
Authentication requires SSO with multi-factor authentication. No shared accounts. No password-only access.

Access is role-based. Engineers can access development and staging environments. Production access is limited to operations staff and requires justification. Production access is time-limited and expires automatically.

Every production action is logged: who accessed what, when, and why. These logs are immutable and retained for compliance purposes.

When employees leave, access is revoked immediately across all systems. We maintain a formal offboarding checklist that covers infrastructure, repositories, and third-party services.
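The sketch below expresses the access model in code terms: a hypothetical time-limited production grant plus the audit fields recorded for every action. The struct names and values are invented for illustration, not Tero’s internal tooling.

```go
package main

import (
	"fmt"
	"time"
)

// AccessGrant is a hypothetical record of an approved production access
// request. Grants carry an expiry so access lapses automatically.
type AccessGrant struct {
	Engineer      string
	Role          string // e.g. "ops-prod-readonly"
	Justification string
	GrantedAt     time.Time
	ExpiresAt     time.Time
}

// AuditEntry captures who accessed what, when, and why. In production
// these entries go to an append-only, immutable log store.
type AuditEntry struct {
	Actor     string
	Action    string
	Resource  string
	Reason    string
	Timestamp time.Time
}

func (g AccessGrant) Valid(now time.Time) bool {
	return now.Before(g.ExpiresAt)
}

func main() {
	grant := AccessGrant{
		Engineer:      "alice@example.com",
		Role:          "ops-prod-readonly",
		Justification: "INC-1234: investigate elevated error rate",
		GrantedAt:     time.Now(),
		ExpiresAt:     time.Now().Add(4 * time.Hour), // expires automatically
	}

	if grant.Valid(time.Now()) {
		entry := AuditEntry{
			Actor:     grant.Engineer,
			Action:    "read",
			Resource:  "prod/database/metrics",
			Reason:    grant.Justification,
			Timestamp: time.Now(),
		}
		fmt.Printf("audit: %+v\n", entry)
	} else {
		fmt.Println("access denied: grant expired")
	}
}
```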
Every code change requires review before merging. No one can push directly to production branches. Automated tests must pass before code can be deployed.

We scan dependencies for known vulnerabilities. When a vulnerability is identified, we evaluate severity and patch within 7 days for critical issues, 30 days for high-severity issues.

Secrets never live in code or configuration files. API keys, database credentials, and other secrets are stored in Doppler and GCP Secret Manager. Applications retrieve secrets at runtime.

All API endpoints implement rate limiting, authentication, and input validation. We assume all input is malicious until proven otherwise.
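As an illustration of that endpoint hardening, here is a hedged Go sketch of a handler that rate limits, authenticates, and validates input before doing anything else. The fixed-window limiter, static bearer token, and `/v1/rules` route are simplifications for the example, not Tero’s implementation.

```go
package main

import (
	"encoding/json"
	"net/http"
	"os"
	"sync"
	"time"
)

// limiter is a simple fixed-window rate limiter; real deployments would
// use a proper token bucket or gateway-level limiting.
type limiter struct {
	mu     sync.Mutex
	window time.Time
	count  int
	max    int
}

func (l *limiter) allow() bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	now := time.Now()
	if now.Sub(l.window) > time.Minute {
		l.window, l.count = now, 0
	}
	l.count++
	return l.count <= l.max
}

type createRuleRequest struct {
	Name  string `json:"name"`
	Query string `json:"query"`
}

func main() {
	lim := &limiter{max: 100}           // 100 requests/minute, illustrative only
	apiToken := os.Getenv("API_TOKEN")  // single static token for brevity

	http.HandleFunc("/v1/rules", func(w http.ResponseWriter, r *http.Request) {
		// 1. Rate limiting.
		if !lim.allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		// 2. Authentication.
		if r.Header.Get("Authorization") != "Bearer "+apiToken {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		// 3. Input validation: treat the body as hostile until proven otherwise.
		var req createRuleRequest
		if err := json.NewDecoder(http.MaxBytesReader(w, r.Body, 1<<16)).Decode(&req); err != nil {
			http.Error(w, "invalid body", http.StatusBadRequest)
			return
		}
		if req.Name == "" || len(req.Name) > 128 || req.Query == "" {
			http.Error(w, "invalid rule", http.StatusBadRequest)
			return
		}
		w.WriteHeader(http.StatusCreated)
	})

	http.ListenAndServe(":8080", nil)
}
```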
The edge proxy runs in your infrastructure, not ours. You control where it deploys and how it’s configured.

Edge applies quality rules locally. It processes telemetry at line rate and decides what to keep or drop based on rules synced from the control plane. Only rule execution results leave your infrastructure. No log content, no metric values, no trace data.

Edge is designed to fail open. If it encounters an error it can’t handle, it passes data through unfiltered. This reduces many classes of failures, but putting any tool in your data path has operational risk. The complete mitigation is an upstream failover system that bypasses edge if it fails. We support this architecture but don’t provide the failover component yet.
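Here is a minimal Go sketch of what fail-open rule evaluation can look like: if a rule errors or panics, the event is forwarded unfiltered instead of being dropped or blocking the pipeline. The `Event` and `Rule` types are illustrative, not the edge proxy’s real internals.

```go
package main

import (
	"fmt"
	"log"
)

// Event is a stand-in for a telemetry record (log line, metric, span).
type Event struct {
	Raw string
}

// Rule is a hypothetical quality rule: return true to keep the event.
type Rule func(Event) (keep bool, err error)

// applyRules is fail-open: if rule evaluation errors or panics, the
// event is forwarded unfiltered rather than dropped.
func applyRules(e Event, rules []Rule) (keep bool) {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("rule panicked, failing open: %v", r)
			keep = true
		}
	}()
	for _, rule := range rules {
		k, err := rule(e)
		if err != nil {
			log.Printf("rule error, failing open: %v", err)
			return true
		}
		if !k {
			return false // confidently matched a drop rule
		}
	}
	return true
}

func main() {
	rules := []Rule{
		func(e Event) (bool, error) {
			// Drop noisy health checks; keep everything else.
			return e.Raw != "GET /healthz 200", nil
		},
	}
	for _, e := range []Event{{Raw: "GET /healthz 200"}, {Raw: "payment failed: timeout"}} {
		fmt.Printf("keep=%v event=%q\n", applyRules(e, rules), e.Raw)
	}
}
```

The asymmetry is deliberate: passing an extra event through is cheap, while dropping or blocking telemetry on an internal error is not.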
Evaluating edge deployment? The edge proxy is open source, built by the creators of Vector.dev with the same reliability focus. Fail-open design means processing errors won’t block your telemetry. For additional resilience, we can help you architect an upstream failover system.
Rule sync happens over encrypted connections. Edge pulls rules from the control plane on startup and caches them locally. If the control plane becomes unreachable, edge continues using cached rules until connectivity is restored.

You control the deployment architecture. Run edge as a sidecar to your agent, in your telemetry pipeline, or at your network boundary. Choose what makes sense for your security and reliability requirements.
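The sketch below shows the sync-with-cache pattern in Go: try the control plane, refresh the local cache on success, and fall back to cached rules when the control plane is unreachable. The URL, cache path, and rule format are placeholders, not the actual edge implementation.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// RuleSet is a hypothetical shape for synced quality rules.
type RuleSet struct {
	Version int      `json:"version"`
	Rules   []string `json:"rules"`
}

const cachePath = "/var/lib/edge/rules.json" // illustrative location

// syncRules tries the control plane first; on any failure it falls back
// to the locally cached copy so edge keeps running independently.
func syncRules(controlPlaneURL string) (RuleSet, error) {
	var rs RuleSet

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(controlPlaneURL + "/v1/rules") // TLS in practice
	if err == nil {
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			body, _ := io.ReadAll(resp.Body)
			if json.Unmarshal(body, &rs) == nil {
				_ = os.WriteFile(cachePath, body, 0o600) // refresh local cache
				return rs, nil
			}
		}
	}

	// Control plane unreachable or response unusable: use cached rules.
	cached, cerr := os.ReadFile(cachePath)
	if cerr != nil {
		return rs, fmt.Errorf("no control plane and no cache: %w", cerr)
	}
	if uerr := json.Unmarshal(cached, &rs); uerr != nil {
		return rs, fmt.Errorf("cache corrupt: %w", uerr)
	}
	return rs, nil
}

func main() {
	rs, err := syncRules("https://control-plane.example.com")
	if err != nil {
		fmt.Println("sync failed:", err)
		return
	}
	fmt.Printf("running with rule set v%d (%d rules)\n", rs.Version, len(rs.Rules))
}
```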
We use AI to classify telemetry and identify quality patterns. By default, this uses Anthropic Claude.

Telemetry samples are sent to the AI provider, processed in memory, and the classification results are returned. The samples themselves are not persisted in our database or the AI provider’s systems. This is enforced through API contracts with providers.

If you prefer to use your own AI infrastructure, you can configure Tero to use AWS Bedrock, Azure OpenAI, or other providers with your own API keys. This means samples never leave your approved systems.

If you self-host the control plane, you control the entire AI pipeline. Use your infrastructure, your models, your keys.
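To illustrate the provider-pluggable design, here is a hedged Go sketch of a classifier interface where samples are processed in memory and only classification results are returned for storage. The types and the stand-in provider are hypothetical; a real implementation would wrap the Claude, Bedrock, or Azure OpenAI client with your own keys.

```go
package main

import (
	"context"
	"fmt"
)

// Classification is the only thing that gets persisted: field names and
// labels, never the sampled telemetry content itself.
type Classification struct {
	Field string
	Label string // e.g. "pii:email", "noise:healthcheck", "high-value"
}

// Classifier abstracts the AI provider. Implementations backed by
// different providers satisfy the same interface.
type Classifier interface {
	Classify(ctx context.Context, samples []string) ([]Classification, error)
}

// fakeClassifier stands in for a real provider client in this sketch.
type fakeClassifier struct{}

func (fakeClassifier) Classify(_ context.Context, samples []string) ([]Classification, error) {
	// Samples live only in memory for the duration of this call.
	out := make([]Classification, 0, len(samples))
	for range samples {
		out = append(out, Classification{Field: "message", Label: "noise:healthcheck"})
	}
	return out, nil
}

func catalogSamples(ctx context.Context, c Classifier, samples []string) ([]Classification, error) {
	results, err := c.Classify(ctx, samples)
	if err != nil {
		return nil, err
	}
	// Only results would be written to the semantic catalog; samples go out
	// of scope here and are never written to disk or the database.
	return results, nil
}

func main() {
	results, err := catalogSamples(context.Background(), fakeClassifier{},
		[]string{`{"level":"debug","msg":"GET /healthz 200"}`})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", results)
}
```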
When you self-host the control plane, everything runs in your infrastructure. The database, the application, the AI classification: all of it.

Your data never leaves your network. We provide the software, you run it. Your security controls apply. Your compliance boundary is maintained.

We provide updates, documentation, and support. You manage the infrastructure, monitoring, and incident response. This gives you complete control over security posture.

Self-hosting makes sense for organizations with strict data residency requirements, regulatory constraints, or security policies that prohibit external data processing.
Need to keep data in your infrastructure? Self-host the control plane. Your data never leaves your network. We provide the software, you run it with your security controls.
We monitor for security incidents through automated alerts, anomaly detection, and application logging. If an incident is detected, our security team is notified immediately.

We have a documented incident response plan that covers identification, containment, eradication, and recovery. Every incident gets a post-mortem with action items to prevent recurrence.

If an incident affects customer data, we notify affected customers within 24 hours. We provide details about what happened, what data was affected, and what we’re doing to prevent it from happening again.

We’ve never had a data breach. That doesn’t mean we’re perfect or invulnerable. It means we take security seriously and remain vigilant.
We work with security researchers through responsible disclosure. Report the issue privately, give us time to fix it, and we’ll credit you publicly once the vulnerability is patched.