Temporal Data Integrity with XTDB: Preserving Timelines in SQL Queries

There is a temptation, when describing a system like XTDB, to reach for grand metaphors. But the value is quieter than that. It is the simplicity of asking temporal questions in SQL rather than building temporal logic in application code. The particular calm that comes from an audit trail that cannot be edited, because the database will not permit it. And when your data arrives from five different systems, each with its own notion of when things happened, it is the confidence of knowing those timelines are preserved, not flattened into the moment they arrived. Other systems let you query the present. XTDB lets you query the truth, which includes the present, but is not limited to it.
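The "temporal questions in SQL" point can be made concrete. A minimal sketch, assuming XTDB's SQL:2011-style temporal syntax; the `trades` table and its columns are hypothetical, not taken from XTDB's docs:

```sql
-- Valid time: what was true in the real world as of 1 March?
SELECT trade_id, quantity
FROM trades
  FOR VALID_TIME AS OF TIMESTAMP '2024-03-01T00:00:00Z';

-- System time: what did the database itself record as of 1 March?
-- This is the immutable audit trail the article refers to.
SELECT trade_id, quantity
FROM trades
  FOR SYSTEM_TIME AS OF TIMESTAMP '2024-03-01T00:00:00Z';
```

The distinction between the two clauses is exactly the "five different timelines" problem: valid time preserves each source system's notion of when things happened, while system time records when the database learned about them.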
More Relevant Posts
-
Today we got our extension merged into the DuckDB Community Hub. 🎉 Small milestone, but it closes a loop we've been working on for a while.

What it does: PFC-JSONL is a compressed log format built around a pipeline that a lot of teams already use: Fluent Bit collecting logs → stored somewhere → queried later.

The problem: once logs are compressed, you either decompress everything to query them, or you keep them uncompressed and eat the storage cost. Neither is great at scale.

PFC-JSONL solves this by storing logs in compressed blocks, each with a built-in timestamp index. When you query a specific time window, only the relevant blocks are touched — everything else stays compressed on disk. The Fluent Bit integration ships logs directly into rotating .pfc archives. The DuckDB extension lets you query those archives with standard SQL — no decompression step, no intermediate files.

Now that it's on the Community Hub, getting started is one line: INSTALL pfc FROM community

Free for personal and open-source use.
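As a sketch of the query side: `INSTALL ... FROM community` and `LOAD` are standard DuckDB statements, but the reader function name `read_pfc`, the column names, and the archive path below are assumptions for illustration, not taken from the extension's actual documentation:

```sql
INSTALL pfc FROM community;
LOAD pfc;

-- Query a one-hour window; per the post, only blocks whose
-- timestamp index overlaps the range need to be decompressed.
SELECT ts, message
FROM read_pfc('logs/app.pfc')   -- hypothetical reader function
WHERE ts BETWEEN TIMESTAMP '2024-06-01 00:00:00'
             AND TIMESTAMP '2024-06-01 01:00:00';
```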
-
Reading about perfect hash functions. The GNU tool gperf can emit C code for a perfect hash function from an input file of keywords. I cloned a repository of sample data, then applied gperf to a CSV file in it. https://lnkd.in/e-c5Qxbe gperf Datasets/Bank\ Rate\ history\ and\ data\ \ Bank\ of\ England\ Database.csv >> ./Bank\ Rate\ history\ and\ data\ \ Bank\ of\ England\ Database.cc
-
Annotating the specificity of your TCRs is now as simple as asking your favourite LLM. Here's what happens under the hood: 1) Claude formats your TCRs. 2) Claude sends the TCRs to an ImmuneWatch DETECT server, matching your TCRs against our TCR-epitope database. 3) ImmuneWatch DETECT sends back annotations and tells Claude how to interpret them (e.g. the advised threshold score for positive hits). 4) Claude reports back to you. We've just opened our beta program for a Claude-DETECT integration. Sign-up in comments.
-
This is now my default way of using DETECT. I'm surprised how much of the nitty-gritty bioinformatics work I used to spend time on can be replaced by this kind of LLM integration. Now I just take a screenshot, paste it into a chat window and ask it to extract and annotate the TCRs.
-
I was reading this paper yesterday, which talks about why OLTP systems failed for business data processing, and one of the lines in the paper is: "The main user interface device at the time RDBMSs were architected was the dumb terminal, and vendors imagined operators inputting queries through an interactive terminal prompt". The author called terminal prompting dumb and said users would use interactive websites instead of terminal prompts. Interestingly, this doesn't hold true anymore. We are all moving back to terminals :)
-
I've had ambitious plans before. This one is different. For years I've been thinking about the way data flows through different tiers, analyzing each aspect deeply. Local disk has always been a constraint I've wanted to be rid of, but not in a complex, messy way: in a way that makes sense and flows naturally. In the new major version of TidesDB we introduce "Object Store Mode", which enables pretty much infinite scale using smart tiering at the engine level, though that's not all. Effective tiering means keeping what's important near and what's not a bit further away. S3 compatible, of course! Slides can be found here: https://lnkd.in/eWDn_3jE https://lnkd.in/eY4r86PG #tidesdb #lsmtree #release #video #objectstorage #scale #cloudnative #mariadb #sql #databases
TidesDB - Optional Object Storage Mode
https://www.youtube.com/
-
Generating sequential numbers per group in a database is a recipe for disaster, or so I thought. Conventional wisdom says it's a surefire way to introduce race conditions and locking issues, but what if I told you there's a way to make it work? I've been digging into this problem, particularly with GBase databases, and stumbled upon a discussion on https://lnkd.in/geD5-PiE that got me thinking. The traditional approach of using SELECT NVL
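One common way to make per-group numbering safe is a counter row per group, claimed with a single atomic UPDATE so that concurrent writers serialize on a row lock instead of racing a SELECT MAX. A generic SQL sketch, run inside one transaction; table and column names are hypothetical, and GBase-specific syntax may differ:

```sql
-- One counter row per group
CREATE TABLE group_seq (
  group_id INT PRIMARY KEY,
  next_val INT NOT NULL
);

-- Inside a transaction: the UPDATE takes a row lock, so
-- concurrent callers for the same group queue up here.
UPDATE group_seq
   SET next_val = next_val + 1
 WHERE group_id = 42;

-- Still in the same transaction: the value this caller claimed.
SELECT next_val - 1 AS claimed_seq
  FROM group_seq
 WHERE group_id = 42;
```

The design choice is to pay one short row-level lock per claim rather than risk two transactions reading the same MAX and inserting duplicate numbers.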
-
Built a high-throughput Event Ingestion Server in Rust. Benchmarked two approaches: • Naive: 1 INSERT per request → 1,783 req/sec | p99: 194ms • Batched: Channel + Bulk INSERTs → 20,128 req/sec | p99: 27ms. That's 11x more throughput and 7x better tail latency. Same server. Same database. One architectural change. Check the repo here: https://lnkd.in/gt7GnN4b and give it a star.
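On the database side, the change boils down to single-row vs multi-row INSERTs. A sketch with a hypothetical `events` table and sample values (the repo's actual schema isn't shown in the post):

```sql
-- Naive: one statement, and one round trip, per incoming request
INSERT INTO events (user_id, payload) VALUES (1, 'click');

-- Batched: the channel drains queued events and the writer issues
-- one multi-row statement, amortizing round-trip and per-statement
-- overhead (parsing, planning, fsync) across the whole batch
INSERT INTO events (user_id, payload) VALUES
  (1, 'click'),
  (2, 'view'),
  (3, 'click');
```

The throughput gain comes from the amortization; the tail-latency gain comes from requests only waiting for the current batch to flush rather than contending for individual round trips.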
-
🔉 Let’s take a look at the new Success Story using our SDAC: https://lnkd.in/dNVc375K 💡 A developer at KryptoByte Technology PTY LTD was looking for ways to migrate his small application projects from Firebird databases to Microsoft SQL Server. At the same time, he wanted to establish fast and reliable data access without using ODBC drivers. With SDAC, they improved the speed and performance of their products by about 50%. ✅ Download a free 60-day trial of our SDAC: https://lnkd.in/dQUb5dGV #Delphi #SDAC #SuccessStory #Devart
-
I repeated the MariaDB sysbench benchmarks, this time using the same charset for all versions, and the results are impressive. MariaDB continues to do a great job of avoiding performance regressions. https://lnkd.in/g4H4NcYj
Starter Packs Glasgow • 6K followers • 3w
We agree, and we listed PBXT here: https://github.com/Vettabase/awesome-innovative-databases