đŸ”„ The DB Grill đŸ”„

Where database blog posts get flame-broiled to perfection

Postgres 18.0 vs sysbench on a 32-core server
Originally from smalldatum.blogspot.com/feeds/posts/default
October 13, 2025 ‱ Roasted by Dr. Cornelius "By The Book" Fitzgerald

Ah, another missive from the practitioners' corner. One must applaud the sheer enthusiasm. It’s quite charming, really, to see them get so excited about incremental gains in raw throughput. It reminds me of an undergraduate’s first successful make command—the unbridled joy, the glorious feeling of accomplishment.

I must say, the commitment to scientific rigor is truly... aspirational.

"One concern is changes in daily temperature because I don't have a climate-controlled server room."

My goodness. To not only conduct an experiment with uncontrolled thermal variables but to admit it in writing—the bravery is simply breathtaking. And then to compound it with OS updates mid-stream! It’s a bold new paradigm for research: stochastic benchmarking. Clearly they've never read Stonebraker's seminal work on performance analysis, where the concept of a controlled environment is, shall we say, rather foundational. But why let a century of established scientific method get in the way of a good blog post?

It's wonderful to see such a deep, exhaustive analysis of Queries Per Second. The charts, the relative percentages, the meticulous tracking of version numbers—it's all very... thorough. With so much focus on the raw speed of the engine, it's a wonder they have time for trivialities like, oh, I don't know, data integrity. I scanned the document twice, and I couldn't find a single mention of transaction isolation levels. Not a whisper about whether these blistering speeds are achieved by playing fast and loose with the ‘I’ in ACID. Perhaps they've innovated past the need for serializability. How progressive.

And the sheer number of configuration flags they're tweaking! io_method=sync, io_method=worker, io_method=io_uring. It is a masterclass in knob-fiddling. The hours spent optimizing these implementation-specific details must be immense. One can’t help but feel this energy could have been better spent, perhaps by reading a paper or two. Pondering Codd's Rule 8—physical data independence—might lead one to realize that an elegant relational model shouldn't require the end-user to have an intimate knowledge of the kernel's I/O scheduling subsystem. But I digress; that's just fussy old theory.
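For the reader fortunate enough never to have fiddled such knobs: the three values the author cycles through belong to a single setting in postgresql.conf, which selects Postgres 18's new asynchronous I/O machinery. A minimal sketch follows; the io_workers line is merely an illustrative companion, as the post does not disclose its full configuration, and none of this is a tuning recommendation.

```
# postgresql.conf — the Postgres 18 I/O knob the post benchmarks.
# One of: sync (classic synchronous reads), worker (background I/O
# worker processes), io_uring (Linux io_uring, where available).
# Changing it requires a server restart.
io_method = worker

# Illustrative only — size of the I/O worker pool when io_method = worker.
io_workers = 3
```

One restart per flag flip, times three flags, times every benchmark step. The hours do add up, as I said.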

The myopic focus on a single, solitary machine is also a lovely touch. It’s all very impressive in this hermetically sealed world of one workstation. I suppose once they discover the existence of a network, Brewer's CAP theorem will come as a rather startling revelation. One can almost picture the wide-eyed astonishment. “You mean we have to choose between consistency and availability in the face of partitions? But... my QPS numbers!” It’s adorable, really.

All of this frantic activity—chasing a 3% regression here, celebrating a 2x improvement there—it all seems to be in service of a goal that is, at best, a footnote in a proper paper. The industry’s obsession with these microbenchmarks is a fascinating sociological phenomenon. They have produced pages of numbers, yet what have we actually learned about the fundamental nature of data management? Very little. But the numbers, you see, they go up.
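And since the genre lives and dies by relative percentages, the arithmetic behind all those charts is, one regrets to report, roughly this profound. A trivial sketch, with QPS numbers invented purely for illustration:

```python
# Relative throughput, as in the post's charts: new QPS divided by base QPS.
# All numbers below are invented for illustration.
def relative_qps(base_qps: float, new_qps: float) -> float:
    """Return new/base: 0.97 is the dreaded 3% regression, 2.0 the 2x win."""
    return new_qps / base_qps

print(relative_qps(1000.0, 970.0))   # the 3% regression being chased
print(relative_qps(1000.0, 2000.0))  # the 2x improvement being celebrated
```

One division per bar on the chart. Groundbreaking stuff.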

Still, one shouldn't discourage them. It's a fine effort, for what it is. Keep tweaking those configuration files, my dear boy. It's important work you're doing. Perhaps next time, try leaving a window open to see how humidity affects mutex contention. The results could be groundbreaking.