OpenSRE

OpenSRE: Build Your Own AI SRE Agents

The open-source framework for AI SRE agents, and the training and evaluation environment they need to improve. Connect the 40+ tools you already run, define your own workflows, and investigate incidents on your own infrastructure.


Quickstart · Docs · FAQ · Security


Why OpenSRE?

When something breaks in production, the evidence is scattered across logs, metrics, traces, runbooks, and Slack threads. OpenSRE is an open-source framework for AI SRE agents that resolve production incidents, built to run on your own infrastructure.

SWE-bench¹ gave coding agents scalable training data and clear feedback. Production incident response still lacks an equivalent.

Distributed failures are slower, noisier, and harder to simulate and evaluate than local code tasks, which is why AI SRE, and AI for production debugging more broadly, remains unsolved.

OpenSRE is building that missing layer:

an open reinforcement learning environment for agentic infrastructure incident response, with end-to-end tests and synthetic incident simulations for realistic production failures

We do that by:

  • building easy-to-deploy, customizable AI SRE agents for production incident investigation and response
  • running scored synthetic RCA suites that check root-cause accuracy, required evidence, and adversarial red herrings (tests/synthetic)
  • running real-world end-to-end tests across cloud-backed scenarios including Kubernetes, EC2, CloudWatch, Lambda, ECS Fargate, and Flink (tests/e2e)
  • keeping semantic test-catalog naming so e2e vs synthetic and local vs cloud boundaries stay obvious (tests/README.md)

Our mission is to build AI SRE agents on top of this, scale it to thousands of realistic infrastructure failure scenarios, and establish OpenSRE as the benchmark and training ground for AI SRE.

1 https://arxiv.org/abs/2310.06770


Install

# macOS / Linux (install script)
curl -fsSL https://raw.githubusercontent.com/Tracer-Cloud/opensre/main/install.sh | bash

# macOS / Linux (Homebrew)
brew install Tracer-Cloud/opensre/opensre

# Windows (PowerShell)
irm https://raw.githubusercontent.com/Tracer-Cloud/opensre/main/install.ps1 | iex

Quick Start

# One-time setup: configure your LLM provider and integrations
opensre onboard

# Run an investigation against a bundled sample alert
opensre investigate -i tests/e2e/kubernetes/fixtures/datadog_k8s_alert.json

# Upgrade to the latest release
opensre update
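The investigate command takes an alert payload as JSON. If you want to try it on something other than the bundled fixture, a minimal sketch is below; the field names are illustrative assumptions, not the required schema — match whatever your monitoring tool actually emits (see the fixture file above for a real example).

```shell
# Write a minimal, made-up alert payload to a temp file.
# Field names here are hypothetical, not a documented schema.
cat > /tmp/example_alert.json <<'EOF'
{
  "title": "High error rate on checkout-service",
  "source": "datadog",
  "severity": "critical",
  "tags": ["env:prod", "service:checkout"]
}
EOF

# Then point the investigator at it:
# opensre investigate -i /tmp/example_alert.json
```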

Development

New to OpenSRE? See SETUP.md for detailed platform-specific setup instructions, including Windows setup, environment configuration, and more.

git clone https://github.com/Tracer-Cloud/opensre
cd opensre
make install
# run opensre onboard to configure your local LLM provider
# and optionally validate/save Grafana, Datadog, Honeycomb, Coralogix, Slack, AWS, GitHub MCP, and Sentry integrations
opensre onboard
opensre investigate -i tests/e2e/kubernetes/fixtures/datadog_k8s_alert.json

How OpenSRE Works


Investigation Workflow

When an alert fires, OpenSRE automatically:

  1. Fetches the alert context and correlated logs, metrics, and traces
  2. Reasons across your connected systems to identify anomalies
  3. Generates a structured investigation report with probable root cause
  4. Suggests next steps and, optionally, executes remediation actions
  5. Posts a summary directly to Slack or PagerDuty - no context switching needed

Benchmark

Generate the benchmark report:

make benchmark

Capabilities

  • 🔍 Structured incident investigation: correlated root-cause analysis across all your signals
  • 📋 Runbook-aware reasoning: OpenSRE reads your runbooks and applies them automatically
  • 🔮 Predictive failure detection: catch emerging issues before they page you
  • 🔗 Evidence-backed root cause: every conclusion is linked to the data behind it
  • 🤖 Full LLM flexibility: bring your own model (Anthropic, OpenAI, Ollama, Gemini, OpenRouter, NVIDIA NIM)

Integrations

OpenSRE connects to 40+ tools and services across the modern cloud stack, from LLM providers and observability platforms to infrastructure, databases, and incident management.

| Category | Integrations | Roadmap |
| --- | --- | --- |
| AI / LLM Providers | Anthropic · OpenAI · Ollama · Google Gemini · OpenRouter · NVIDIA NIM · Bedrock | |
| Observability | Grafana (Loki · Mimir · Tempo) · Datadog · Honeycomb · Coralogix · CloudWatch · Sentry · Elasticsearch | Splunk · New Relic · Victoria Logs |
| Infrastructure | Kubernetes · AWS (S3 · Lambda · EKS · EC2 · Bedrock) · GCP · Azure | Helm · ArgoCD |
| Database | MongoDB · ClickHouse | PostgreSQL · MySQL · MariaDB · MongoDB Atlas · Azure SQL · RDS · Snowflake |
| Data Platform | Apache Airflow · Apache Kafka · Apache Spark · Prefect | RabbitMQ |
| Dev Tools | GitHub · GitHub MCP · Bitbucket | GitLab |
| Incident Management | PagerDuty · Opsgenie · Jira | ServiceNow · incident.io · Alertmanager · Linear · Trello |
| Communication | Slack · Google Docs | Discord · Teams · WhatsApp · Confluence · Notion |
| Agent Deployment | Vercel · LangSmith · EC2 · ECS | Railway |
| Protocols | MCP · ACP · OpenClaw | |

Contributing

OpenSRE is community-built. Every integration, improvement, and bug fix makes it better for thousands of engineers. We actively review PRs and welcome contributors of all experience levels.

Join our Discord

Beginner-friendly issues are labeled "good first issue". Ways to contribute:

  • 🐛 Report bugs or missing edge cases
  • 🔌 Add a new tool integration
  • 📖 Improve documentation or runbook examples
  • ⭐ Star the repo - it helps other engineers find OpenSRE

See CONTRIBUTING.md for the full guide.

Thanks go to these amazing people:

davincios · VaibhavUpreti · aliya-tracer · arnetracer · kylie-tracer · paultracer · zeel2104 · iamkalio · w3joe · yeoreums · anandgupta1202 · rrajan94 · vrk7 · cerencamkiran · edgarmb14 · lukegimza · ebrahim-sameh · shoaib050326 · venturevd · shriyashsoni · Devesh36 · KindaJayant · overcastbulb · Yashkapure06 · Davda-James · Abhinnavverma · devankitjuneja · ramandagar · mvanhorn · abhishek-marathe04 · yashksaini-coder · haliaeetusvocifer · Bahtya · mayankbharati-ops · harshareddy832 · sundaram2021 · micheal000010000-hub · ljivesh · gautamjain1503 · mudittt

Security

OpenSRE is designed with production environments in mind:

  • No storing of raw log data beyond the investigation session
  • All LLM calls use structured, auditable prompts
  • Log transcripts are kept locally - never sent externally by default

See SECURITY.md for responsible disclosure.


Telemetry

opensre collects anonymous usage statistics with PostHog to help us understand adoption and demonstrate traction to sponsors and investors who fund the project. What we collect:

  • command name, success/failure, and rough runtime
  • CLI version, Python version, OS family, and machine architecture
  • a small amount of command-specific metadata, such as which subcommand ran
  • for opensre onboard and opensre investigate, optionally the selected model/provider and whether the command used flags such as --interactive or --input

A randomly generated anonymous ID is created on first run and stored in ~/.config/opensre/. We never collect alert contents, file contents, hostnames, credentials, or any personally identifiable information.

Telemetry is automatically disabled in GitHub Actions and pytest runs.

To opt out locally, set the environment variable before running:

export OPENSRE_NO_TELEMETRY=1

The legacy alias OPENSRE_ANALYTICS_DISABLED=1 also still works.
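For scripts that want to confirm the opt-out is in effect before running opensre, here is a quick shell check covering both variables. This mirrors the two documented variables; the CLI's internal check may differ.

```shell
# Disable telemetry for this shell session.
export OPENSRE_NO_TELEMETRY=1

# Honor either the current variable or the legacy alias.
if [ "${OPENSRE_NO_TELEMETRY:-0}" = "1" ] || [ "${OPENSRE_ANALYTICS_DISABLED:-0}" = "1" ]; then
  echo "telemetry disabled"
else
  echo "telemetry enabled"
fi
```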

To inspect the payload locally without sending anything, use:

export OPENSRE_TELEMETRY_DEBUG=1

License

Apache 2.0 - see LICENSE for details.

Citations

1 https://arxiv.org/abs/2310.06770
