🔥 The DB Grill 🔥

Where database blog posts get flame-broiled to perfection

WiredTigerHS.wt: MongoDB MVCC Durable History Store
Originally from dev.to/feed/franckpachot
September 28, 2025 • Roasted by Patricia "Penny Pincher" Goldman

Ah, another wonderfully thorough technical deep-dive. I always appreciate when vendors take the time to explain, in excruciating detail, all the innovative ways they've found to spend my money. It’s so transparent of them. The sheer volume of command-line gymnastics and hexadecimal dumps here is a testament to their commitment to simplicity and ease of use. I can already see the line item on the invoice: “‘wt’ utility whisperer,” $450/hour, 200-hour minimum.

I must commend the elegance of the Multi-Version Concurrency Control implementation. It’s truly a marvel of modern engineering. They’ve managed to provide “lock-free read consistency” by simply keeping uncommitted changes in memory. Brilliant! Why bother with the messy business of writing to disk when you can just require your customers to buy enough RAM to park a 747? It’s a bold strategy, betting the success of our critical transactions on our willingness to perpetually expand our hardware budget. I'm sure the folks in procurement will be thrilled.
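For those of us who like to see exactly what we're being billed for, here is roughly what that arrangement looks like from the client side. A minimal pymongo sketch of my own devising, assuming a hypothetical replica set at localhost:27017 and a made-up `ledger.accounts` collection; transactions require a replica set, so the 747's worth of RAM comes in triplicate.

```python
# A napkin sketch of snapshot isolation from the client side.
# Assumes a hypothetical replica set "rs0" on localhost and a
# made-up ledger.accounts collection -- not from the article.
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
accounts = client.ledger.accounts

with client.start_session() as session:
    # Everything inside this transaction reads from one snapshot.
    with session.start_transaction(
        read_concern=ReadConcern("snapshot"),
        write_concern=WriteConcern("majority"),
    ):
        accounts.update_one(
            {"_id": "acme"}, {"$inc": {"balance": -100}}, session=session
        )
        # Until commit, this uncommitted change lives in the cache
        # (the aforementioned 747 parking lot); readers outside this
        # session still see the old version of the document.
```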

But the real stroke of genius, the part that truly brings a tear to a CFO’s eye, is the “durable history store.” Let me see if I have this right.

Each entry contains MVCC metadata and the full previous BSON document, representing a full before-image of the collection's document, even if only a single field changed.

My goodness, that's just… so generous. They’re not just storing the change, they’re storing the entire record all over again. For free, I'm sure. Let’s do some quick math on the back of this cocktail napkin, shall we?

If we have one million updates a day on documents like that (call it 10 kilobytes apiece), that’s… let me see… an extra 10 gigabytes of storage per day just for the "before-images." At scale, my storage bill will have more zeros than their last funding round. The ROI on this is just staggering, truly. We'll achieve peak bankruptcy in record time.
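Here is the napkin, transcribed into Python so the auditors can follow along. The inputs are my own hypothetical figures, not anyone's published numbers.

```python
# Cocktail-napkin math: history store growth from full before-images.
# Both inputs are hypothetical placeholders of mine.
AVG_DOC_BYTES = 10 * 1024      # call it 10 KB per document
UPDATES_PER_DAY = 1_000_000    # one million updates a day

daily_bytes = AVG_DOC_BYTES * UPDATES_PER_DAY
daily_gb = daily_bytes / 1024**3

print(f"History store growth: {daily_gb:.1f} GB/day")   # ~9.8 GB/day
print(f"Per quarter: {daily_gb * 90 / 1024:.2f} TB")    # ~0.86 TB
```

Nearly ten gigabytes a day, and that's before anyone whispers the words "indexes," "oplog," or "replica set," each of which multiplies the bill again.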

And I love the subtle digs at the competition. They've solved the "table bloat found in PostgreSQL" by creating a system where the history file bloats instead. It’s not a bug, it’s a feature! Why bother with a free, well-understood process like VACUUM when you can just buy more and more high-performance storage? It’s the gift that keeps on giving—to the hardware vendor.
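And if you'd like to watch the bloat accrue in real time (the viewing is free; the storage is not), serverStatus surfaces WiredTiger's internal counters. A sketch of mine that simply fishes for anything mentioning the history store, since the exact statistic names vary across MongoDB versions:

```python
# Peek at WiredTiger's history store counters via serverStatus.
# Stat names differ by MongoDB version, so we search for them
# rather than hard-coding any particular key.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
wt = client.admin.command("serverStatus")["wiredTiger"]

for section, counters in wt.items():
    if isinstance(counters, dict):
        for name, value in counters.items():
            if "history store" in name:
                print(f"{section}: {name} = {value}")
```

As I understand it, WiredTiger trims history that no active reader can still see, which is a polite way of saying the size of this file depends on your slowest transaction. Do pass that along to the analytics team.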

Then there's this little gem, tucked away at the end:

However, the trade-off is that long-running transactions may abort if they cannot fit into memory.

Oh, a trade-off! How quaint. So my end-of-quarter financial consolidation report, which is by definition a long-running transaction, might just… give up? Because it ran out of room in the in-memory playpen the database vendor designed? That’s not a trade-off; that’s a business continuity risk they're asking me to subsidize with CAPEX.
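And what does "may abort" mean in practice? It means somebody writes a retry loop, and somebody else pays for the compute it burns. A minimal sketch of the standard transient-error dance in pymongo; the report itself, run_report, is of course hypothetical.

```python
# The price of "may abort": retrying the quarter-end transaction.
# run_report() and the finance.ledger collection are hypothetical.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")

def run_report(session):
    # Stand-in for the actual consolidation queries.
    return list(client.finance.ledger.find({}, session=session))

MAX_ATTEMPTS = 5
for attempt in range(1, MAX_ATTEMPTS + 1):
    try:
        with client.start_session() as session:
            with session.start_transaction():
                report = run_report(session)
        break  # committed -- stop retrying
    except PyMongoError as exc:
        # Aborted because the snapshot no longer fit in cache, or we
        # outlived transactionLifetimeLimitSeconds (default: 60s).
        if exc.has_error_label("TransientTransactionError") and attempt < MAX_ATTEMPTS:
            continue  # start over from zero -- all prior work is lost
        raise
```

Note the economics of that loop: an abort at minute fifty-nine costs fifty-nine minutes of compute, and then you pay for them all over again.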

Let’s calculate the "true cost" of this marvel, shall we?
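In Python, naturally, since everything must be code these days. Every figure below is a hypothetical placeholder of mine, not a line from anyone's actual price list.

```python
# The "true cost" napkin. All numbers are hypothetical placeholders.
base_cost_X = 1_000_000               # the "$X" on the vendor's quote
consultants_and_migration = 500_000   # the extra $500k that always appears
storage_bill = 250_000                # annual storage, before before-images
ram_blank_check = float("inf")        # the hardware team's line item

tco = base_cost_X + consultants_and_migration + storage_bill * 2 + ram_blank_check
print(f"Projected TCO: ${tco:,.0f}")  # prints $inf, which feels about right
```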

So the total cost of ownership isn't $X, it's more like $X + $500k + (Storage Bill * 2) + a blank check for the hardware team. The five-year TCO looks less like a projection and more like a ransom note.

Honestly, sometimes I feel like the entire database industry is just a competition to see who can come up with the most convoluted way to store a byte of data. They talk about MVCC and B-trees, and all I hear is the gentle, rhythmic sound of a cash register. Sigh. Back to the spreadsheets. Someone has to figure out how to pay for all this innovation.