Where database blog posts get flame-broiled to perfection
Alright, settle down, whippersnappers. I had to put down my green-screen terminal and my cup of lukewarm Sanka to read this... benchmark. Another one. I swear, you kids spend more time running sysbench than you do actually shipping code that works. I've seen more performance charts in the last five years than I saw reels of tape in the entire 1980s, and let me tell you, we had a lot of tapes. Had a whole library for 'em. Anyway, you wanted my two cents? Fine. Here's what ol' Rick thinks of your "progress."
Oh, would you look at that! You've discovered io_uring. It's just so revolutionary. It's a brand-new way to... talk to the disk without waiting. How novel. You know what we called that back in my day? An I/O channel. Our IBM System/370 had dedicated hardware to offload I/O back when your parents were worried about the Cold War. We'd submit a job with some JCL, the mainframe would chew on it, and the channel processor would handle all that tedious disk chatter. Now you've reinvented it in software and you're acting like you just split the atom. Congratulations, you're finally catching up to 1985. We did it better in DB2, by the way.
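And for you youngsters who've never seen a channel program, here's roughly what the fuss boils down to, sketched with liburing. This is my own illustration, not whatever the benchmark actually runs: the file name, buffer size, and queue depth are made up. You describe the read, hand it to the kernel, wander off, and come back for the completion slip. We called that asynchronous I/O about forty years ago.

```c
/* Minimal io_uring sketch (link with -luring). The file name and
 * queue size are placeholders for illustration only. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <liburing.h>

int main(void) {
    struct io_uring ring;
    if (io_uring_queue_init(8, &ring, 0) < 0) {       /* small submission/completion queues */
        perror("io_uring_queue_init");
        return 1;
    }

    int fd = open("datafile.bin", O_RDONLY);          /* stand-in for your table file */
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof buf, 0);  /* describe the read: fd, buffer, offset 0 */
    io_uring_submit(&ring);                           /* hand it to the kernel; no blocking read() */

    /* ...the application is free to do useful work here... */

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);                   /* collect the completion when you get around to it */
    printf("read returned %d bytes\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    close(fd);
    return 0;
}
```

Submit, go do something else, pick up the result later. The channel processor did the same job in silicon; now it's a kernel ring buffer. Progress, apparently.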
I'm just tickled by this whole section on write performance. After all that fiddling with a dozen config files on a server with enough cores to run a small nation's power grid, you found that basic writes are getting slower. Bravo, a stunning achievement. CPU overhead is up, storage reads are up... it's a masterpiece of modern engineering. You've managed to optimize for the sexy read queries that look great on a PowerPoint slide, while the actual work of, you know, storing the data, is getting bogged down. Back in my day, if you introduced a regression that slowed down the CICS transactions for the payroll system, you'd be updating your resume from a payphone.
Let's talk about this gem right here:
"for Postgres 17.7 there might be a large regression on the scan test... But the scan test can be prone to variance... and I don't expect to spend time debugging this."

Now that's the spirit! When you find a problem, just call it "variance" and move on. What a luxury. I once spent three straight days with a hex editor and a 500-page core dump printout to find one bad pointer in a COBOL program that was causing a rounding error. We didn't have the option of saying, "eh, it's probably just cosmic rays." We had to fix it, because if we didn't, real physical checks wouldn't get printed. You kids and your "ephemeral workloads."
The sheer complexity of your setup is something to behold. All these different config files: x10c, x10cw8, x10cw16... all to figure out if you need 3, 8, or 16 "workers" to read a file efficiently. It's like watching a team of rocket scientists argue over how many hamsters should power the treadmill. We had VSAM. You defined the file, you defined the keys, and it worked. You didn't need to spend a week performing quantum physics on the config file to get a 5% boost on a query that nobody runs. You're so deep in the weeds tweaking knobs you've forgotten what the garden is for.
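For the record, here's my best guess at what all that alphabet soup boils down to: the asynchronous I/O knobs Postgres 18 ships, io_method and io_workers. Whether x10c, x10cw8, and x10cw16 set exactly these values is my reading of the tea leaves, not gospel, so treat the snippet as a hedged sketch of the knob-twiddling, not the benchmark's actual configs.

```
# postgresql.conf -- a guess at the knobs behind x10c / x10cw8 / x10cw16
io_method  = worker     # 'sync', 'worker', or 'io_uring'
io_workers = 8          # presumably the 'w8' part; 'w16' would be 16, and 3 is the default
```

Two lines. That's the mountain a week of benchmarking was spent climbing.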
And the big payoff for all this work? Improvements of "1.05X and 1.25X" on some aggregate queries. My goodness, break out the champagne. You've tweaked and compiled ten different versions across six major releases, burned who knows how many CPU-hours on a 48-core monster, and you've eked out a 25% gain on a subset of reads. I got a bigger performance boost in '88 when we upgraded the tape drive and the weekly backup finished on Saturday instead of Sunday morning. You're measuring progress in inches while the goalpost moves by miles.
Honestly, it's exhausting. Every decade it's the same thing. New hardware, new buzzwords, same old problems. Now if you'll excuse me, I've got some perfectly good IMS databases that have been running without a reboot since you were in diapers. They just... work. What a concept.