đŸ”„ The DB Grill đŸ”„

Where database blog posts get flame-broiled to perfection

Transaction performance đŸ‘‰đŸ» retry with backoff
Originally from dev.to/feed/franckpachot
August 5, 2025 ‱ Roasted by Dr. Cornelius "By The Book" Fitzgerald

Ah, a communiqué from the digital trenches, attempting to clarify why their particular brand of schemaless alchemy sometimes, shall we say, falters under the merest whisper of concurrency. One might almost infer from this elaborate apology that the initial issue wasn't a "myth" but rather an inconvenient truth rearing its ugly head. To suggest that a benchmark, however flawed in its execution, created a myth about slow transactions rather than merely exposing an architectural impedance mismatch is, frankly, adorable.

The core premise? That the benchmark developers, PostgreSQL experts no less, somehow missed the fundamental tenets of MongoDB's lock-free optimistic concurrency control because they were... experts in a system that adheres to established relational theory. One almost pities them. Clearly, these experts never delved into Stonebraker's seminal work on database system architecture, nor, it seems, digested the foundational principles of transactional integrity that have been well understood since the 1970s.

Let's dissect this, shall we? We're told MongoDB uses OCC, which requires applications to manage transient errors differently. Ah, yes, the classic industry move: redefine a fundamental database responsibility as an "application concern." So, now the humble application developer, who merely wishes to persist a datum, must become a de facto distributed systems engineer, meticulously implementing retry logic that, as demonstrated, must incorporate exponential backoff and jitter to avoid self-inflicted denial-of-service attacks upon their own precious database. Marvelous! One can only imagine the sheer joy of debugging an issue where the database is effectively performing a DDoS on itself because the application didn't correctly implement a core concurrency strategy that the database ought to be handling internally. This isn't innovation; it's an abdication of responsibility.
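Lest the prescription sound abstract, here is the sort of boilerplate every application is now expected to carry: a minimal sketch of retry with exponential backoff and full jitter. The exception class and tuning constants below are illustrative, not any particular driver's API.

```python
import random
import time

class TransientTransactionError(Exception):
    """Stand-in for whatever 'please retry' signal the driver raises,
    e.g. a WriteConflict labeled as transient."""

def run_with_backoff(txn_fn, max_attempts=5, base=0.05, cap=2.0):
    """Run txn_fn, retrying transient conflicts with exponential
    backoff plus full jitter so retries do not stampede in lockstep."""
    for attempt in range(max_attempts):
        try:
            return txn_fn()
        except TransientTransactionError:
            if attempt == max_attempts - 1:
                raise  # give up; the conflict is the caller's problem now
            delay = min(cap, base * (2 ** attempt))  # exponential backoff
            time.sleep(random.uniform(0, delay))     # full jitter
```

All of this, mind you, so that an application may persist a datum.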

The article then provides a stunningly obvious solution involving delays, as if this were some profound, newly discovered wisdom. My dear colleagues, this is Database Concurrency 101! The concept of backing off on contention is not novel; it's a staple of any distributed system designed with even a modicum of foresight. The very notion that a 'demo' from seven years ago, for a feature as critical as transactions, somehow overlooked this fundamental aspect speaks volumes, not about the benchmarkers, but about the initial design philosophy. When the "I" in ACID—Isolation—becomes a conditional feature dependent on the client's retry implementation, you're not building a robust transaction system; you're constructing a house of cards.

And then, the glorious semantic acrobatics to differentiate their "locks" from traditional SQL "locks."

What is called "lock" here is more similar to what SQL databases call "latch" or "lightweight locks", which are short duration and do not span multiple database calls.

Precious. So, when your system aborts with a "WriteConflict" because "transaction isolation (the 'I' in 'ACID') is not possible," it's not a lock, it's... a "latch." A "lightweight" failure, perhaps? This is an eloquent, if desperate, attempt to rebrand a persistent inconsistency as a transient inconvenience. A write conflict on a stale snapshot read is precisely the situation a serializable isolation level exists to handle, and proper relational databases handle it directly, with pessimistic locking or multi-version concurrency control (MVCC), rather than shunting the error handling onto the application layer for every single transaction.
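To make the renaming exercise concrete, here is a hedged sketch of how that "latch" greets the application in practice, using pymongo with illustrative database and collection names; the TransientTransactionError label is the driver's documented retry signal.

```python
from pymongo import MongoClient
from pymongo.errors import PyMongoError

client = MongoClient()
accounts = client.bank.accounts

def transfer(session):
    # Two writes that must be atomic; a concurrent writer on either
    # document aborts the WHOLE transaction with a WriteConflict.
    with session.start_transaction():
        accounts.update_one({"_id": "a"}, {"$inc": {"balance": -100}}, session=session)
        accounts.update_one({"_id": "b"}, {"$inc": {"balance": +100}}, session=session)

with client.start_session() as session:
    try:
        transfer(session)
    except PyMongoError as exc:
        if exc.has_error_label("TransientTransactionError"):
            pass  # cue the application-side backoff dance sketched above
        else:
            raise
```

A "lightweight" failure indeed: the entire unit of work evaporates, and the application is expected to notice.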

The comparison with PostgreSQL is equally enlightening. PostgreSQL, with its quaint notion of a "single-writer instance," can simply wait because it's designed for consistency and atomicity within a well-defined transaction model. But our friends in the document-oriented paradigm must avoid this because, gasp, it "cannot scale horizontally" and would require "a distributed wait queue." This is a classic example of the CAP theorem being twisted into a justification for sacrificing the 'C' (Consistency) on the altar of unbridled 'P' (Partition Tolerance) and 'A' (Availability), only to then stumble over the very definition of consistency itself. They choose OCC for "horizontal scalability," then boast of "consistent cross shard reads," only to reveal that true transactional consistency requires the application to manually compensate for conflicts. One almost hears Codd weeping.
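For contrast, a small sketch of the "quaint" behavior, using psycopg2 with placeholder connection settings and an assumed accounts table: under PostgreSQL's default READ COMMITTED level, the second writer simply queues on the row lock instead of hurling an error back at the application.

```python
import threading
import time

import psycopg2

def first_writer():
    conn = psycopg2.connect(dbname="demo")
    with conn, conn.cursor() as cur:      # `with conn:` commits on exit
        cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
        time.sleep(2)                     # hold the row lock for a while

t = threading.Thread(target=first_writer)
t.start()
time.sleep(0.5)                           # let the first writer take the lock

conn = psycopg2.connect(dbname="demo")
with conn, conn.cursor() as cur:
    # Blocks here until the first writer commits, then proceeds.
    # No WriteConflict, no retry loop, no jitter arithmetic.
    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 1")
t.join()
```

Waiting, it turns out, is a perfectly serviceable conflict-resolution strategy when one's architecture permits it.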

And finally, the advice on data modeling: "avoid hotspots," "fail fast," and the pearl of wisdom that "the data model should allow critical transactions to be single-document." In other words: don't normalize your data, avoid relational integrity, and stick to simple CRUD operations if you want your 'transactional' system to behave predictably. And the ultimate denunciation of any real-world complexity:

no real application will perform business transaction like this: reserving a flight seat, recording payment, and incrementing an audit counter all in one database transaction.

Oh, if only the world were so simple! The very essence of enterprise applications for the past four decades has revolved around the robust, atomic, and isolated handling of such multi-step business processes within a single logical unit of work. To suggest that these complex, real-world transactions should be fragmented into a series of semi-consistent, loosely coupled operations managed by external services and application-level eventual consistency is not progress; it's a regression to the dark ages of file-based systems.
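And since the scenario is deemed fantastical, here is the whole "impossible" business transaction as one atomic unit of work, sketched with psycopg2 against an illustrative schema of my own invention:

```python
import psycopg2

conn = psycopg2.connect(dbname="demo")
with conn, conn.cursor() as cur:   # commit on success, roll back on any exception
    # 1. Reserve the seat, but only if it is still free.
    cur.execute(
        "UPDATE seats SET passenger_id = %s "
        "WHERE flight = %s AND seat = %s AND passenger_id IS NULL",
        (42, "AF123", "12A"),
    )
    if cur.rowcount != 1:
        raise RuntimeError("seat already taken")   # aborts all three steps
    # 2. Record the payment.
    cur.execute(
        "INSERT INTO payments (passenger_id, amount) VALUES (%s, %s)",
        (42, 199.00),
    )
    # 3. Increment the audit counter.
    cur.execute("UPDATE counters SET bookings = bookings + 1 WHERE id = 'audit'")
```

Three steps, one commit, zero compensating workflows. Positively unrealistic.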

One can only hope that, after another seven years of such "innovations," the industry might rediscover the quaint, old-fashioned notion of a database system that reliably manages its own data integrity without requiring its users to possess PhDs in distributed algorithms. Perhaps then it might even find time to dust off a copy of Ullman or Date. A professor can dream, can't he?