Security
We take security seriously. This page explains how we protect the data we handle, where it runs, and how we respond to threats.
Looking for security details? Jump to the Reference section at the bottom for our checklist and available documents.

What We Protect

Security starts with architecture. We designed Tero to minimize risk at every integration point.
Tero Architecture
You start with read-only API access to your observability platform. We analyze telemetry samples to build a semantic catalog: schemas, field types, volume patterns, quality classifications. You see the analysis, review recommendations, and decide what to implement yourself.

When you’re ready, grant write access. We configure exclusion rules and sampling policies directly in your observability platform instead of you implementing them manually. You scope these permissions however you need and revoke them anytime.

The edge proxy (optional) runs in your infrastructure. You deploy it, control it, monitor it. Rules sync from the control plane over encrypted connections, but edge operates independently. If the control plane goes offline, edge continues with cached rules.

We store the semantic catalog and your quality rules in the control plane. We don’t store telemetry content. Not log messages, not metric values, not trace data. Your telemetry stays in your observability platform or gets filtered at the edge before reaching your vendor.

This architecture separates the control plane (intelligence and optimization) from the data plane (processing and filtering). The control plane never sits in your telemetry path. Even with write access, we configure your systems through APIs, not by intercepting data.
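The distinction above can be made concrete: the catalog is structured metadata about telemetry, never the telemetry itself. A minimal Python sketch of what a catalog entry might hold (the schema, field names, and classification labels here are hypothetical, not Tero's actual data model):

```python
from dataclasses import dataclass, field

@dataclass
class CatalogField:
    """Metadata about one telemetry field -- no field values are stored."""
    name: str
    field_type: str   # e.g. "string", "float", "timestamp"
    quality: str      # e.g. "high-signal", "redundant", "noise"

@dataclass
class CatalogEntry:
    """One schema in the semantic catalog: shape and volume, not content."""
    schema_name: str
    fields: list[CatalogField] = field(default_factory=list)
    events_per_minute: int = 0

entry = CatalogEntry(
    schema_name="checkout-service.logs",
    fields=[
        CatalogField("request_id", "string", "high-signal"),
        CatalogField("debug_dump", "string", "noise"),
    ],
    events_per_minute=12_000,
)

# The catalog records that a noisy field exists -- not what it contains.
noisy = [f.name for f in entry.fields if f.quality == "noise"]
```

Notice that nothing in the entry could reconstruct a log line: that is the property that lets the control plane stay out of the data path.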

Infrastructure Security

The control plane runs on Google Cloud Platform in a multi-zone configuration. If one zone fails, traffic shifts to another automatically.

Everything is encrypted. Data in transit uses TLS 1.3. Data at rest uses AES-256. This includes the database, backups, and any temporary storage. Encryption keys are managed through GCP’s Key Management Service with automatic rotation.

Network access is restricted. The control plane runs in a private VPC. Only specific services are exposed publicly, and those sit behind load balancers with DDoS protection. Internal services communicate over private networks.

We monitor infrastructure continuously using GCP Security Command Center. This detects anomalies, flags misconfigurations, and alerts on potential threats. All access to infrastructure is logged.

Database backups run daily and are encrypted. We retain backups for 30 days, stored in geographically separate regions from the primary database. Recovery is tested quarterly.
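To illustrate the in-transit guarantee, a client can refuse to negotiate anything below TLS 1.3. This is a generic Python sketch using the standard library, not Tero's actual client code:

```python
import ssl

# Build a client-side context that refuses any protocol below TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Certificate verification stays on (the secure default): a connection
# to a server that can't do TLS 1.3, or can't prove its identity, fails.
verification_on = ctx.verify_mode == ssl.CERT_REQUIRED
```

Pinning the minimum version in the context, rather than per connection, means no code path can accidentally downgrade.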

Access Controls

Authentication requires SSO with multi-factor authentication. No shared accounts. No password-only access.

Access is role-based. Engineers can access development and staging environments. Production access is limited to operations staff, requires justification, and is time-limited: it expires automatically.

Every production action is logged: who accessed what, when, and why. These logs are immutable and retained for compliance purposes.

When employees leave, access is revoked immediately across all systems. We maintain a formal offboarding checklist that covers infrastructure, repositories, and third-party services.

Application Security

Every code change requires review before merging. No one can push directly to production branches. Automated tests must pass before code can be deployed.

We scan dependencies for known vulnerabilities. When a vulnerability is identified, we evaluate severity and patch within 7 days for critical issues, 30 days for high-severity issues.

Secrets never live in code or configuration files. API keys, database credentials, and other secrets are stored in Doppler and GCP Secret Manager. Applications retrieve secrets at runtime.

All API endpoints implement rate limiting, authentication, and input validation. We assume all input is malicious until proven otherwise.
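Rate limiting of the kind mentioned above is commonly implemented as a token bucket: each request spends a token, and tokens refill at a fixed rate up to a burst capacity. A minimal Python sketch (the rate and capacity are arbitrary example values, not Tero's limits):

```python
import time

class TokenBucket:
    """Per-client token bucket. Each request spends one token; tokens
    refill continuously at `rate` per second, capped at `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate                  # tokens added per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Refill based on elapsed time, then try to spend one token."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=3)
results = [bucket.allow() for _ in range(4)]  # burst of 4 against capacity 3
```

With capacity 3, a burst of four back-to-back requests admits the first three and rejects the fourth; steady traffic at or below one request per second is never rejected.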

Edge Security

The edge proxy runs in your infrastructure, not ours. You control where it deploys and how it’s configured.

Edge applies quality rules locally. It processes telemetry at line rate and decides what to keep or drop based on rules synced from the control plane. Only rule execution results leave your infrastructure. No log content, no metric values, no trace data.

Edge is designed to fail open. If it encounters an error it can’t handle, it passes data through unfiltered. This mitigates many failure modes, but putting any tool in your data path carries operational risk. The complete mitigation is an upstream failover system that bypasses edge if it fails. We support this architecture but don’t provide the failover component yet.
Evaluating edge deployment? The edge proxy is open source, built by the creators of Vector.dev with the same focus on reliability. Its fail-open design means processing errors won’t block your telemetry. For additional resilience, we can help you architect an upstream failover system.
Rule sync happens over encrypted connections. Edge pulls rules from the control plane on startup and caches them locally. If the control plane becomes unreachable, edge continues using cached rules until connectivity is restored.

You control the deployment architecture. Run edge as a sidecar to your agent, in your telemetry pipeline, or at your network boundary. Choose what makes sense for your security and reliability requirements.
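The two behaviors described in this section, cached-rule fallback and fail-open processing, can be sketched together. This is an illustrative Python model of the logic, not the edge proxy's actual code; the rule name and record fields are made up:

```python
def sync_rules(fetch, cache: dict) -> dict:
    """Pull fresh rules from the control plane; on any failure,
    keep operating on whatever is already cached."""
    try:
        cache.update(fetch())
    except Exception:
        pass  # control plane unreachable: cached rules stay in effect
    return cache

def process(record: dict, rules: dict) -> bool:
    """Return True to keep a record. Fail open: if any rule raises,
    the record passes through unfiltered rather than being blocked."""
    try:
        return all(predicate(record) for predicate in rules.values())
    except Exception:
        return True  # never drop telemetry because a rule misbehaved

# A cached rule from a previous successful sync (hypothetical example).
cache = {"drop_debug": lambda r: r.get("level") != "debug"}

def unreachable():
    raise ConnectionError("control plane offline")

rules = sync_rules(unreachable, cache)        # sync fails; cache survives
kept = process({"level": "error"}, rules)     # passes the cached rule
dropped = process({"level": "debug"}, rules)  # filtered locally
```

Both failure paths degrade toward passing data through: a lost control plane freezes the rule set, and a broken rule disables only itself.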

AI Security

We use AI to classify telemetry and identify quality patterns. By default, this uses Anthropic Claude. Telemetry samples are sent to the AI provider, processed in memory, and the classification results are returned. The samples themselves are not persisted in our database or the AI provider’s systems. This is enforced through API contracts with providers.

If you prefer to use your own AI infrastructure, you can configure Tero to use AWS Bedrock, Azure OpenAI, or other providers with your own API keys. This means samples never leave your approved systems.

If you self-host the control plane, you control the entire AI pipeline. Use your infrastructure, your models, your keys.

Self-Hosted Security

When you self-host the control plane, everything runs in your infrastructure. The database, the application, the AI classification—all of it. Your data never leaves your network.

We provide the software, you run it. Your security controls apply. Your compliance boundary is maintained. We provide updates, documentation, and support. You manage the infrastructure, monitoring, and incident response. This gives you complete control over security posture.

Self-hosting makes sense for organizations with strict data residency requirements, regulatory constraints, or security policies that prohibit external data processing.
Need to keep data in your infrastructure? Self-host the control plane. Your data never leaves your network. We provide the software, you run it with your security controls.

Incident Response

We monitor for security incidents through automated alerts, anomaly detection, and application logging. If an incident is detected, our security team is notified immediately.

We have a documented incident response plan that covers identification, containment, eradication, and recovery. Every incident gets a post-mortem with action items to prevent recurrence.

If an incident affects customer data, we notify affected customers within 24 hours. We provide details about what happened, what data was affected, and what we’re doing to prevent it from happening again.

We’ve never had a data breach. That doesn’t mean we’re perfect or invulnerable. It means we take security seriously and remain vigilant.

Vulnerability Reporting

We work with security researchers through responsible disclosure. Report the issue privately, give us time to fix it, and we’ll credit you publicly once the vulnerability is patched.

Report a Security Vulnerability

Found a security issue? Email . We’ll acknowledge within 24 hours and keep you updated throughout the investigation and fix.

Reference

Checklist

Encryption & Data Protection

Control | Status | Implementation
Data encrypted in transit | Implemented | TLS 1.3 for all connections
Data encrypted at rest | Implemented | AES-256 for database, backups, and temporary storage
Encryption key management | Implemented | GCP KMS with automatic rotation
Database encryption | Implemented | PostgreSQL encrypted connections required, private IP only
Backup encryption | Implemented | AES-256, geo-redundant storage, 30-day retention

Authentication & Access Control

Control | Status | Implementation
Multi-factor authentication | Required | SSO via WorkOS with MFA enforcement
Single sign-on (SSO) | Supported | SAML 2.0, OpenID Connect
Role-based access control | Implemented | Least privilege, per-resource permissions
Session management | Implemented | 24-hour expiration, secure token storage
API key security | Implemented | Scoped permissions, rotation supported
Production access controls | Implemented | Time-limited, justification required, audit logged
Password requirements | Enforced | Minimum 12 characters, complexity requirements

Infrastructure Security

Control | Status | Implementation
Cloud provider | GCP | Google Cloud Platform, us-central1 region
Multi-zone deployment | Implemented | Automatic failover between availability zones
Network isolation | Implemented | Private VPC, restricted access
DDoS protection | Implemented | Cloud Armor with rate limiting
Container security | Implemented | Immutable infrastructure, automatic patching
Vulnerability scanning | Automated | Dependency checks, container image scanning
Infrastructure as code | Implemented | Version controlled, peer reviewed

Monitoring & Incident Response

Control | Status | Implementation
Security monitoring | Active | GCP Security Command Center, real-time alerts
Application monitoring | Implemented | Error tracking, performance monitoring
Audit logging | Implemented | Infrastructure and application changes logged
Failed authentication tracking | Implemented | Suspicious activity detection
Incident response plan | Documented | Response procedures, escalation paths
Vulnerability disclosure | Active | Responsible disclosure, 24-hour acknowledgment

Application Security

Control | Status | Implementation
Code review | Required | Peer review for all changes
Automated testing | Implemented | Tests required before deployment
Dependency scanning | Automated | Vulnerability alerts, prompt patching
Secrets management | Implemented | Doppler and GCP Secret Manager
Input validation | Implemented | All API endpoints validated
Rate limiting | Implemented | Per-endpoint and per-user limits
Security headers | Implemented | HSTS, CSP, X-Frame-Options

Edge Security

Control | Status | Implementation
Customer infrastructure | Your Control | Edge runs in your environment
Fail-open design | By Design | Never blocks observability data on error
Local rule processing | Implemented | No telemetry content sent to control plane
Encrypted communication | Implemented | TLS 1.3 for all control plane sync
Deployment flexibility | Supported | Sidecar, pipeline, or boundary deployment

Self-Hosted Security

Control | Status | Implementation
Self-hosted control plane | Available | Complete infrastructure control
Custom AI providers | Supported | Use your AWS Bedrock, Azure OpenAI, or other providers
Air-gapped deployment | Contact Us | Available for enterprise requirements
Network isolation | Your Control | Deploy within your security boundary
See the complete Checklist across all trust areas.

Documents

Document | Status | Description
SOC 2 Type 2 Report | 2026 | Independent audit of security, availability, and confidentiality controls.
Penetration Test Report | Q1 2025 | Third-party security assessment of control plane and edge components.
See all available Documents across all trust areas.

Questions?

Contact Security Team

Need architecture diagrams? Security questionnaire responses? Want to discuss deployment options? Email .