Firebolt

Software Development

Palo Alto, California · 39,932 followers

Firebolt is the Analytical Database built for AI agents, sub-second analytics, and efficient ELT.

About us

Traditional warehouses and lakehouses force you to choose between performance, cost, and simplicity. Firebolt delivers all three on a single platform—with the best price-performance in the market so you can ship faster without compromises. We're built for companies that need their data platforms to do more — run AI workloads, power sub-second customer-facing analytics at scale, or execute ELT jobs efficiently at a fraction of the cost.

Website
https://www.firebolt.io
Industry
Software Development
Company size
51-200 employees
Headquarters
Palo Alto, California
Type
Privately Held
Founded
2019


Updates

  • We’re hosting a happy hour for the data infrastructure ecosystem during the upcoming Iceberg summit in San Francisco. We look forward to spending time with the builders of the world's most demanding data platforms. More details and a registration link in the comments!

  • Now out: The Firebolt Engineering Digest, January 2026.
    SDK & Integrations:
    ✓ MCP Server: new RAG-based search tool (#57)
    ✓ Go SDK: default parameters at the connection level (#176)
    ✓ Multipart compression support for Parquet/Avro
    4.29 Features:
    ✓ Iceberg write support (preview)
    ✓ Cross-database queries (database.schema.table syntax)
    ✓ Late materialization: 50x reduction in data scanned
    ✓ ALTER TABLE SET PRIMARY INDEX (no rewrites)
    Customer Story:
    ✓ MerchJar: queries down from 2 minutes to sub-second, single platform for OLAP + OLTP
    https://okt.to/7SQn5g
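The late-materialization item in the digest refers to a general columnar technique: evaluate the filter on only the column it needs, then fetch the wide payload columns just for the surviving rows. A minimal sketch of the idea (table shape and names are illustrative, not Firebolt's implementation):

```python
# Late materialization sketch: scan the small filter column first, then
# materialize wide columns only for matching row ids.

def filter_row_ids(column, predicate):
    """Scan a single column and return the ids of matching rows."""
    return [i for i, value in enumerate(column) if predicate(value)]

def materialize(columns, row_ids):
    """Fetch the remaining columns only for the surviving row ids."""
    return [{name: col[i] for name, col in columns.items()} for i in row_ids]

# Columnar table: the filter touches only "ts"; "payload" is the wide column.
table = {
    "ts": [1, 5, 9, 12, 3],
    "payload": ["a" * 100, "b" * 100, "c" * 100, "d" * 100, "e" * 100],
}

ids = filter_row_ids(table["ts"], lambda ts: ts >= 9)   # scans 5 small values
rows = materialize({"payload": table["payload"]}, ids)  # fetches only 2 wide values
```

The claimed 50x reduction comes from exactly this asymmetry: when the filter is selective, the expensive wide-column reads shrink to the matching rows.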

  • The database trade-offs you've accepted don't have to exist. Fast OR flexible. Cloud OR self-hosted. Performance OR cost. We built Firebolt to eliminate these choices:
    Fast by Default
    → Vectorized query engine with a mature planner and optimized shuffle
    → Sub-second analytics on terabytes without careful tuning
    → Shared computations reused across queries
    Control When You Need It
    → Fine-grained control over data layout, caching, and query plans
    → Rich observability for effective tuning
    → Decoupled metadata, storage, and compute, so you can test changes safely in isolated environments
    Deploy Anywhere
    → Fully managed SaaS on AWS
    → Self-hosted with Firebolt Core (forever free)
    → Same efficiency, Iceberg support, and Postgres SQL compatibility
    Postgres-compliant SQL. Apache Iceberg support. ACID transactions with snapshot isolation. Scale to zero when idle; provision new clusters in seconds.
    One analytical database. Many workloads. Zero compromises.
    Learn more: https://okt.to/M2Gc0e
    #DataEngineering #Analytics #OpenSource


  • 🚀 Firebolt v4.29 Integration Update
    Now that Firebolt 4.29 has rolled out to all customers, we're excited to officially support two new integrations.
    Now Available (4.29):
    - ThoughtSpot
    - Amazon QuickSight
    Both are powered by pg_fire and production-ready.
    📚 Docs: https://lnkd.in/gbr5A5tb https://lnkd.in/gUk4GrvV
    In addition, the following integrations are in preview (requires PackDB 4.31):
    - dbt Cloud
    - dbt Core (Postgres adapter)
    - Lightdash
    DM us or reach out to your account team for early access to 4.31.
    📚 Docs: https://lnkd.in/gCxY55Ej https://lnkd.in/gM_uVrvz
    #firebolt #analyticaldatabase #engineers

  • Most databases cache query results. That's it. User changes one filter value? Full re-execution. Different date range? Start from scratch. Add a WHERE clause? Recompute everything.
    Firebolt caches at multiple levels:
    -> Hash tables from joins, so similar queries reuse the expensive part
    -> Intermediate subplans, not just final results
    -> Full result sets when appropriate
    The interesting part is automatic invalidation: when underlying data changes, stale cache entries are dropped. No manual REFRESH MATERIALIZED VIEW commands. No wondering whether your cache is showing old data.
    Your users run the same dashboard query 500 times a day with different date filters. Why recompute the same joins 500 times? Cache the join. Recompute the filter. Done.
    Check out this blog on caching: https://okt.to/px4yk5
    #firebolt #analyticaldatabase #database #caching
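The caching-plus-invalidation scheme described above can be modeled in a few lines: cache entries are keyed by a subplan fingerprint plus the data version they were computed against, so a write that bumps the version makes stale entries unreachable. This is a toy model of the idea, not Firebolt's implementation; all names are hypothetical.

```python
# Subplan cache with automatic invalidation: a write bumps the data version,
# so entries computed against the old version are never served again.

class SubplanCache:
    def __init__(self):
        self.data_version = 0
        self._entries = {}

    def get_or_compute(self, fingerprint, compute):
        key = (fingerprint, self.data_version)
        if key not in self._entries:
            self._entries[key] = compute()
        return self._entries[key]

    def on_write(self):
        # Underlying data changed: bump the version; old entries go stale.
        self.data_version += 1

cache = SubplanCache()
calls = []

def expensive_join():
    calls.append(1)
    return {"joined": True}

cache.get_or_compute("join#abc", expensive_join)   # computes
cache.get_or_compute("join#abc", expensive_join)   # cache hit, no recompute
cache.on_write()
cache.get_or_compute("join#abc", expensive_join)   # recomputes after the write
```

"Cache the join, recompute the filter" then corresponds to fingerprinting the join subplan while the cheap filter stays outside the cached unit.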

  • Your query optimizer is guessing. Ours learns.
    Most databases plan queries using static cost models: educated guesses about data distribution and selectivity. When the guesses are wrong, you get slow queries that should be fast. The standard fix? Manually tune cardinality estimates. Add optimizer hints. Collect statistics. Rinse, repeat.
    Firebolt's History-Based Optimizer takes a different approach:
    -> Stores metrics from actual query execution
    -> Uses normalized fingerprints to recognize similar subplans
    -> Adjusts cost models based on real performance data
    -> Gets smarter automatically as you run queries
    No manual tuning. No hints. The optimizer learns what works for your workload and adapts.
    Set it once. Let it learn. Watch queries improve: https://okt.to/UpSbqE
    #firebolt #queryoptimization #databases

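A toy sketch of the history-based idea: normalize queries into fingerprints by stripping literals, record observed row counts after execution, and let later estimates prefer history over the static guess. Firebolt's real fingerprinting operates on normalized subplans, not raw SQL text; this only illustrates the mechanism.

```python
# History-based cardinality estimation: observed execution metrics override
# the static cost model's guess for queries with the same fingerprint.
import re

def fingerprint(sql):
    """Replace literals with '?' so similar queries share a fingerprint."""
    return re.sub(r"\b\d+\b|'[^']*'", "?", sql.lower())

class HistoryBasedEstimator:
    def __init__(self, static_estimate):
        self.static_estimate = static_estimate  # the cost model's guess
        self.history = {}                       # fingerprint -> observed rows

    def estimate_rows(self, sql):
        return self.history.get(fingerprint(sql), self.static_estimate)

    def record(self, sql, actual_rows):
        self.history[fingerprint(sql)] = actual_rows

est = HistoryBasedEstimator(static_estimate=1000)
q1 = "SELECT * FROM events WHERE user_id = 42"
q2 = "SELECT * FROM events WHERE user_id = 7"

est.estimate_rows(q1)          # no history yet: falls back to 1000
est.record(q1, actual_rows=3)
est.estimate_rows(q2)          # same fingerprint as q1: now estimates 3
```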
  • Customer clicks "refresh" on an analytics dashboard and sees data from 30 seconds ago, even though their write just succeeded. That's eventual consistency: fine for batch analytics, terrible for customer-facing dashboards.
    Most analytical databases make you choose between:
    1. Strong consistency that kills concurrency, or
    2. Fast queries with stale data
    Firebolt doesn't make you choose:
    -> Snapshot isolation across distributed queries
    -> Multi-statement transactions with rollback
    -> Low-latency performance at high throughput
    Your operational database supports transactions. Your analytics platform should too.
    #firebolt #realtimeanalytics #ACID

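Snapshot isolation, as described above, is commonly built on multi-versioning: each key stores versions tagged with the commit timestamp that wrote them, and a query reads as of the snapshot taken when it started, so concurrent writes never surface mid-query. A minimal single-node sketch (illustrative only; a distributed implementation is far richer):

```python
# MVCC sketch of snapshot isolation: reads see the latest version committed
# at or before the query's snapshot timestamp.

class VersionedStore:
    def __init__(self):
        self.commit_ts = 0
        self._versions = {}  # key -> list of (commit_ts, value)

    def write(self, key, value):
        self.commit_ts += 1
        self._versions.setdefault(key, []).append((self.commit_ts, value))

    def snapshot(self):
        return self.commit_ts  # a query reads as of this timestamp

    def read(self, key, as_of):
        versions = [(ts, v) for ts, v in self._versions.get(key, []) if ts <= as_of]
        return versions[-1][1] if versions else None

store = VersionedStore()
store.write("revenue", 100)
snap = store.snapshot()        # dashboard query starts here
store.write("revenue", 250)    # a write commits while the query runs

store.read("revenue", as_of=snap)              # the in-flight query still sees 100
store.read("revenue", as_of=store.snapshot())  # a new query sees 250
```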
  • AI agents are powerful. They're also unpredictable cost drivers. An agent generating queries in bursts. Speculative queries running overnight. Context retrieval happening constantly. Traditional warehouses either can't handle the concurrency or make it prohibitively expensive.
    Firebolt gives you the controls to keep AI workloads sustainable:
    → Scale-to-zero engines that stop automatically when idle
    → Per-engine spend caps and query timeout limits
    → Workload isolation, so agents don't interfere with dashboards or batch jobs
    → Granular engine sizing: scale in small steps, not expensive jumps
    You get the performance AI agents demand without the budget surprises, because building with AI shouldn't mean choosing between speed and economics.
    Check out this whitepaper to learn more: https://okt.to/UJT1B8

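The two cost controls named above, idle auto-stop and a spend cap, compose into a simple control loop. The sketch below is a toy model; the class, parameter names, and cost units are hypothetical, not Firebolt's API.

```python
# Toy engine controller: stop after an idle timeout (scale to zero) and
# reject queries once a per-engine spend cap would be exceeded.

class Engine:
    def __init__(self, idle_timeout_s, spend_cap, cost_per_query):
        self.idle_timeout_s = idle_timeout_s
        self.spend_cap = spend_cap
        self.cost_per_query = cost_per_query
        self.spent = 0.0
        self.running = False
        self.last_query_at = 0.0

    def submit(self, now):
        if self.spent + self.cost_per_query > self.spend_cap:
            return "rejected: spend cap reached"
        self.running = True            # cold start if scaled to zero
        self.last_query_at = now
        self.spent += self.cost_per_query
        return "ok"

    def tick(self, now):
        if self.running and now - self.last_query_at >= self.idle_timeout_s:
            self.running = False       # scale to zero: no idle spend

engine = Engine(idle_timeout_s=60, spend_cap=1.0, cost_per_query=0.4)
engine.submit(now=0)     # "ok"
engine.tick(now=120)     # idle past the timeout: engine stops
engine.submit(now=130)   # "ok", restarts on demand
```

A bursty agent then pays only while it is actually querying, and a runaway one hits the cap instead of the budget.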
  • Firebolt v4.29 puts AI directly in your workflow. The Firebolt Agent is now in your workspace:
    → Chat mode for exploratory questions
    → In-editor SQL generation
    → Automatic error detection and fixes
    → Code improvement suggestions
    No context switching. No copy-pasting between tools.
    Also shipping in 4.29:
    - Dynamic primary indexes (ALTER TABLE support, no rebuild needed)
    - Enhanced compression with codec chaining
    - Top-k query optimization with late materialization
    Every release focuses on the same goal: make you faster.
    Explore v4.29 features: https://okt.to/WvVm1X
    #DeveloperTools #AIAssisted #DataEngineering

Funding

Firebolt: 3 total rounds

Last Round

Series C

US$ 100.0M

See more info on Crunchbase