Privacy
Privacy isn’t a feature we added. It’s built into how Tero works.
Looking for privacy details? Jump to the Reference section at the bottom for our checklist and available documents.

Metadata, Not Content

The fundamental privacy decision we made was architectural: build a semantic catalog of telemetry metadata, not a warehouse of log content. When you connect Tero to your observability platform, we analyze log samples to understand structure and patterns. We extract the schema: what fields exist, what types they are, what semantic meaning they carry. This becomes the catalog entry for that log event type.

The content itself, the actual error messages, the user IDs in your logs, the request parameters, we don't store. We process samples in memory during classification, extract the metadata, and discard the content. This isn't a limitation; it's how the product works. We can tell you "this log event appears 2 million times per hour and costs $8,000 per month" without ever seeing what those 2 million logs say. The metadata is enough to deliver value.

For privacy, this changes everything. We're not asking you to trust us with your customer data or business secrets. We're not storing PII from your logs. We're building a catalog of patterns, not a database of content.
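To make the idea concrete, here is a minimal sketch in Python of reducing a log sample to a schema-only catalog entry. The function and field names are made up for illustration; this is not Tero's actual implementation, just the shape of the idea.

```python
# Illustrative only: reduce a log sample to schema metadata in memory.
# Field and function names are hypothetical, not Tero's real code.

def classify_sample(raw_log: dict) -> dict:
    """Return a catalog entry describing the sample's structure, not its content."""
    return {
        "event_type": raw_log.get("event", "unknown"),
        # Record each field's name and type, never its value.
        "fields": {name: type(value).__name__ for name, value in raw_log.items()},
    }

sample = {"event": "checkout.failed", "user_id": "u_4821", "message": "card declined"}
print(classify_sample(sample))
# {'event_type': 'checkout.failed', 'fields': {'event': 'str', 'user_id': 'str', 'message': 'str'}}
```

The sample dictionary goes out of scope after classification; only the returned metadata would ever be persisted.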

Your Control

You control what data you share with Tero at every step. Each step is optional; a configuration sketch follows the steps below.

1. Start with read-only access

Connect Tero to your observability platform with read-only API permissions. We analyze your telemetry, build a semantic catalog, and show you where the waste is. No infrastructure changes. No risk to production systems.

2. Grant write access when ready

If you want Tero to configure exclusion rules or adjust sampling instead of implementing recommendations yourself, grant write permissions. Scope them however you need. Revoke them anytime.

3. Deploy edge for maximum savings

The edge proxy runs in your infrastructure and filters telemetry before it leaves your network. Most customers start API-only and add edge after they trust the control plane.

4. Self-host for complete control

Run the entire control plane in your infrastructure. Your data never leaves your network. Your compliance boundary. We provide the software; you run it.
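To see how the progression fits together, here is an illustrative sketch of connection settings for the four levels. The option names are assumptions made for explanation, not Tero's actual configuration.

```python
# Hypothetical settings showing the progressive trust levels; names are illustrative.
connection = {
    "platform": "your-observability-platform",
    "api_permissions": "read_only",     # step 1: analyze and catalog, nothing else
    # "api_permissions": "read_write",  # step 2: let Tero apply exclusion/sampling rules
    "edge_proxy_enabled": False,        # step 3: filter telemetry before it leaves your network
    "deployment": "tero_hosted",        # step 4: "self_hosted" keeps all data in your network
}
```

Each setting can be tightened or loosened independently, which is the point of the model: you decide how far to go, and when.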

How AI Processing Works

We use AI to classify telemetry and understand quality. This requires sending log samples to an AI provider. By default, we use Anthropic Claude. Sample logs are sent to Anthropic, processed in memory for classification, and the results are returned. The samples themselves are not persisted in Anthropic’s systems or ours. This is enforced through API contracts.
Need to use your approved AI provider? Configure Tero to use AWS Bedrock, Azure OpenAI, or other providers with your API keys and compliance agreements. This works with both Tero-hosted and self-hosted deployments.
If you self-host the control plane, you also control the entire AI pipeline: your models, your infrastructure, your security boundary. The key point: AI classification happens on samples, not your full dataset, and samples are processed transiently, not stored.
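As an illustration of the provider swap, here is a sketch using assumed option names; the source only states that Anthropic Claude is the default and that providers such as AWS Bedrock or Azure OpenAI can be configured instead.

```python
# Hypothetical provider configuration; option names are assumptions, not Tero's documented API.
ai_provider = {
    "provider": "aws_bedrock",               # default is Anthropic Claude; Azure OpenAI also supported
    "credentials": "YOUR_PROVIDER_API_KEY",  # your keys, your compliance agreements
    "region": "eu-west-1",                   # your data residency requirement
    "persist_samples": False,                # samples are processed transiently, never stored
}
```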

What Actually Gets Stored

To be specific about what we store:

Account data includes your name, email, and company name, provided when you create an account. We also store authentication tokens from your SSO provider (WorkOS) for single sign-on. For self-service customers, billing information is processed through Stripe; we don't store credit card numbers, Stripe does. Enterprise customers typically pay via invoice.

Telemetry metadata includes schemas (field names and types), volume metrics (events per hour), quality classifications (valuable, waste, mixed), and the quality rules you define. This is what enables the semantic catalog and quality management.

Usage data includes which features you use, actions you take in the product, and where errors occur: standard product analytics to improve the experience.
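For concreteness, a stored catalog entry might look like the following. The field names are assumptions for illustration, but the categories mirror what's described above: schema, volume, and classification are kept; log content never is.

```python
# Hypothetical shape of a persisted catalog entry; field names are assumed.
stored_entry = {
    "event_type": "payment.retry",
    "schema": {"user_id": "string", "message": "string", "latency_ms": "number"},
    "events_per_hour": 2_000_000,
    "quality": "waste",  # valuable | waste | mixed
}
# Not stored: the actual messages, user IDs, or request parameters in those events.
```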
Data residency requirements? Self-host the control plane to keep all data in your chosen region and infrastructure. Your data never leaves your network. See Compliance for details.
We keep data while it’s useful. When you delete your account or workspace, we delete the data within 30 days. Backups are retained for 30 days, then permanently deleted.

Privacy by Design

Privacy wasn’t bolted on after building the product. The semantic catalog architecture means we don’t need log content to deliver value. The progressive trust model means you control access at every step. The self-hosted option means you can run everything in your network if requirements demand it. This is privacy by design. The product works this way because we built it to respect data boundaries from the start.

Reference

Checklist

What We Collect

| Data Type | What We Collect | What We Don't Collect |
| --- | --- | --- |
| Account information | Name, email, company name | Government IDs, social security numbers |
| Authentication | SSO tokens, MFA settings | Passwords (handled by your SSO provider) |
| Telemetry metadata | Schemas, field types, volume patterns, quality classifications | Log content, metric values, trace data |
| Usage data | Features used, actions taken in the product | Individual browsing behavior |
| Billing information | Payment details via Stripe | Credit card numbers (stored by Stripe) |

Who We Share Data With

| Service | What We Share | Why |
| --- | --- | --- |
| Google Cloud Platform | Control plane data, backups | Infrastructure hosting |
| Anthropic (default) | Telemetry samples, not persisted | AI classification |
| WorkOS | User email, authentication tokens | SSO and authentication |
| Stripe | Billing information (self-service only) | Payment processing for self-service customers |
| Self-hosted | Nothing (runs in your infrastructure) | Complete data control |
See Sub-Processors for complete details.

Your Rights

| Right | How to Exercise |
| --- | --- |
| Access your data | Email for JSON export |
| Correct your data | Update in account settings or email |
| Delete your data | Email (deleted within 30 days) |
| Export your data | Request machine-readable export |
| Object to processing | Email to discuss concerns |
| Restrict processing | Request limits on specific uses |
We respond to all requests within 30 days. GDPR (EU) and CCPA (California) rights are fully supported. Workspace deletion removes all associated telemetry metadata, quality rules, and configurations.

Data Retention

| Data Type | Retention Period |
| --- | --- |
| Account data | While your account is active |
| Telemetry metadata | While your workspace is active |
| Quality rules | While your workspace is active |
| Usage analytics | 2 years |
| Backups | 30 days, then permanently deleted |
When you delete your account or workspace, data is removed from active systems within 30 days. Backup copies are deleted when backups expire.

Data Location

Tero-hosted: United States (GCP us-central1)
Self-hosted: Your chosen region and infrastructure
Edge proxy: Always runs in your infrastructure

Privacy Practices

| Practice | Implementation |
| --- | --- |
| Data minimization | Store metadata only, not log content or metric values |
| Purpose limitation | Data used only for documented purposes |
| Privacy by design | Architecture built to minimize data collection |
| Transparency | Clear documentation of what we collect and why |
| User control | Progressive access model, self-hosted option available |
See the complete Checklist across all trust areas.

Documents

| Document | Status | Description |
| --- | --- | --- |
| Data Processing Agreement (DPA) | Available | Standard DPA with Standard Contractual Clauses for GDPR compliance. |
See all available Documents across all trust areas.

Questions?

Contact Privacy Team

Privacy questions? Data processing agreement? GDPR documentation? Email .