🔥 The DB Grill 🔥

Where database blog posts get flame-broiled to perfection

The insert benchmark on a small server : Postgres 12.22 through 18.1
Originally from smalldatum.blogspot.com/feeds/posts/default
December 10, 2025 • Roasted by Marcus "Zero Trust" Williams

Alright, let's take a look at this... phew. I just read your little performance analysis, and I have to say, it’s adorable. "Postgres continues to be boring in a good way." You know what else is "boring in a good way"? A server that's been unplugged from the network, encased in concrete, and dropped to the bottom of the Mariana Trench. That's the only kind of boring I trust. Your kind of "boring" is what we in the business call "complacent," and it's the little welcome mat you roll out for every threat actor from here to St. Petersburg.

You kick things off by telling me performance has been stable. Stable. You're benchmarking a dozen different point releases, compiled from source, on a hodge-podge setup, and you call the result stable. I call it a flat-lined EKG. You haven't proven stability; you've just proven you're not measuring anything that matters, like, say, the attack surface you've lovingly cultivated.

Let’s talk about your "lab."

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU... SMT disabled... Storage is one NVMe device... The OS is Ubuntu 24.04.

An ASUS ExpertCenter? Is that what you found in the Best Buy bargain bin? You're running what I assume is supposed to be an enterprise-grade database benchmark on a souped-up streaming box. SMT is disabled—great, you closed one side-channel attack vector, only a few dozen more to go on that consumer-grade chip. And you're running it all on Ubuntu 24.04, the freshest of fresh meat, practically steaming for any zero-day exploit that's been waiting for a wide-eyed early adopter like you. You might as well have put a sign on it that says "FREE KERNEL EXPLOITS, PLEASE FORM AN ORDERLY QUEUE."

And my God, the build process. You compiled Postgres from source. Who audited that toolchain? Where are the SBOMs? You just pulled down a tarball and ran make, didn't you? You've created a beautiful, bespoke, artisanal binary that is accountable to no one and has an unverifiable provenance. It's a supply chain attack waiting to happen. Every single one of those custom flags is a potential deviation from a hardened build, a new way for a buffer to overflow just right. You haven't built a benchmark; you've built a bomb.
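Since we're on the subject: the bare minimum before running make is verifying the tarball against a published checksum. A minimal sketch of the mechanics (the filename is illustrative, and this demo fabricates its own stand-in file and checksum locally; in a real flow both the tarball and its SHA-256 come from postgresql.org):

```shell
# Stand-in for the real download -- in practice you'd fetch the tarball
# and its published SHA-256 from postgresql.org, not generate your own.
echo "not actually postgres" > postgresql-18.1.tar.gz
sha256sum postgresql-18.1.tar.gz > postgresql-18.1.tar.gz.sha256

# The step you should never skip: fails loudly if the bytes don't match.
sha256sum -c postgresql-18.1.tar.gz.sha256
```

It's one line of diligence. It won't give you an SBOM, but it's the difference between "unverifiable provenance" and "at least I got the bytes the project published."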

Then we get to the configuration files. Named conf.diff.cx10a_c8r32. Chef's kiss. Nothing says "reproducible and auditable" like a filename that looks like a CAPTCHA. How is anyone supposed to track changes? This is a compliance nightmare. I can just see myself explaining this to an auditor. "Yes, the critical security settings for our production database are based on a file named after what appears to be a license plate from the planet Cybertron."

But my favorite part, the real gem, is this: io_method='io_uring'. Oh, you absolute daredevil. You strapped the most notorious CVE-generating engine in the modern Linux kernel directly to your database's I/O subsystem for a little performance boost. Did you enjoy the speed bump on your way to a full system compromise? io_uring has had more holes poked in it than a cheese grater. You've widened your kernel attack surface so much you could land a 747 on it. But hey, at least your point queries are 20% faster while a rootkit is siphoning off all your data. Priorities.
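And for the record, dialing this exposure back takes about four lines across two files. A sketch, assuming the io_method GUC from the post and the kernel.io_uring_disabled sysctl available on recent kernels (the file paths and the 'worker' fallback value are illustrative):

```ini
# postgresql.conf -- fall back to worker-based async I/O instead of io_uring
io_method = 'worker'

# /etc/sysctl.d/99-no-io-uring.conf -- belt and suspenders at the kernel level
# (2 = io_uring disabled for all processes; 1 = unprivileged only)
kernel.io_uring_disabled = 2
```

But that would cost you your precious benchmark numbers, wouldn't it.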

Let's look at the "benchmark" itself. One client, one table. This isn't a workload; it's a sterile laboratory environment with no bearing on reality. Your little qr100 and qp500 steps with their three connections are sad pantomimes of a real production load. In the real world, you'd have hundreds of concurrent connections, lock contention, autovacuum kicking in at the worst possible moment, and a connection pooler quietly weeping in the corner.

You talk about performance regressions in write-only steps since Postgres 15 and hand-wave it away as "likely from two issues -- vacuum and get_actual_variable_range." Likely. That's the kind of rigorous analysis that gets a CISO fired. That's not a performance regression; that's the system screaming at you that something is fundamentally unstable under load, a canary in the coal mine for a denial-of-service vulnerability. While you're chasing a 10% throughput drop, an attacker is figuring out how to trigger that exact condition to lock up your entire database.

You're so focused on the microseconds you're saving on index creation that you've completely ignored the gaping security holes you're standing on. Every "improvement" you've listed is a change, and every change is a potential vulnerability. That 13% speed-up in index creation? Probably a new race condition. The 22% better point-query performance? Almost certainly a new information leak via a timing attack.

Honestly, this whole thing reads like a "How-To" guide for failing a SOC 2 audit. Custom binaries, unauditable configs, consumer hardware, bleeding-edge kernels, and a total, blissful ignorance of the security implications of every single choice made.

It's a cute science project, really. Keep up the good work. It gives people like me job security. Just, for the love of God, don't ever let this methodology get anywhere near a production system.