Where database blog posts get flame-broiled to perfection
Well now, isn't this just a precious little blog post. Took a break from rewinding the backup tapes and adjusting the air conditioning for the server room (you know, a room that could actually house more than a hamster) to read this groundbreaking research. It warms my cynical old heart to see the kids these days discovering the magic of... running a script and plotting a graph.
It's just delightful how you've managed to compare these modern marvels on a machine that has less processing power than the terminal I used to submit my COBOL batch jobs in '89. An "ExpertCenter"? Back in my day, we called that a calculator, and we didn't brag about its "8 cores." We bragged about not causing a city-wide brownout when we powered on the mainframe.
And I have to applaud the sheer, unmitigated audacity of this little gem:
"For both Postgres and MySQL fsync on commit is disabled to avoid turning this into an fsync benchmark."
Chef's kiss. That's a work of art, sonny. Disabling fsync to benchmark a database is like timing a sprinter by having them run downhill with a hurricane at their back. It's a fantastic way to produce a completely meaningless number. You might as well just write your data to /dev/null and declare victory. We used to call this "lying," but I see the industry has rebranded it as "performance tuning." We had a word for data that wasn't safely on disk: gone. We learned that lesson the hard way, usually at 3 AM while frantically trying to restore from a finicky reel-to-reel tape that had a bad block. You kids with your "eventual consistency" seem to be speed-running that lesson.
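For the youngsters who've never felt the difference, here's a minimal sketch, nothing but Python's standard library, of what that "optimization" buys you. The post doesn't say which knobs got flipped; on Postgres it's presumably something like fsync = off or synchronous_commit = off, and on the MySQL side innodb_flush_log_at_trx_commit = 0. The filenames and the 4 KB record below are made up for illustration. Appending a record and calling it committed is cheap; waiting for the hardware to actually acknowledge it is not:

```python
import os
import time

RECORD = b"x" * 4096  # a made-up 4 KB "commit" record
COMMITS = 200

def bench(path: str, durable: bool) -> float:
    """Time COMMITS appended records; fsync after each one only if durable."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    start = time.perf_counter()
    for _ in range(COMMITS):
        os.write(fd, RECORD)
        if durable:
            os.fsync(fd)  # don't return until the disk actually has it
    elapsed = time.perf_counter() - start
    os.close(fd)
    return elapsed

fast = bench("wal_no_fsync.bin", durable=False)   # the benchmark's way
slow = bench("wal_with_fsync.bin", durable=True)  # the way that survives a yanked power cord
print(f"no fsync:   {COMMITS / fast:12.0f} commits/sec (until the lights flicker)")
print(f"with fsync: {COMMITS / slow:12.0f} commits/sec (the number that matters)")
```

Run that on any box you like and watch the gap. The first number is the one on every chart in this post; the second is the one your users live with.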
I'm particularly impressed by your penetrating analysis. "Modern Postgres is faster than old Postgres." Astonishing. Someone alert the media. Who knew that years of development from thousands of engineers would result in... improvements? It's a shocking revelation.
And the miserable MySQL mess? Finding that "performance has mostly been dropping from MySQL 5.6 to 8.4" is just beautiful. It's a classic case of progress-by-putrefaction. They keep adding shiny new gewgaws (JSON support, "document stores," probably an AI chatbot to tell you how great it is) and in the process, they forget how to do the one thing a database is supposed to do: be fast and not lose data. You've just scientifically proven that adding more chrome to the bumper makes the car slower. We figured that out with DB2 on MVS around 1985, but it's nice to see you've caught up.
Your use of partitioning is also quite innovative. I remember doing something similar when we split our VSAM files across multiple DASD volumes to reduce head contention. We did it with a few dozen lines of JCL that looked like an angry cat walked across the keyboard, not some fancy-pants PARTITION BY clause. It's adorable that you think you've discovered something new.
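Since we're reminiscing, the whole trick fits in a minimal Python sketch. Everything here is made up for illustration: a pretend orders table where every month gets its own file, our stand-in for a dedicated DASD volume. Route each row to its partition on the way in, and a query pinned to one month touches exactly one file while the rest sit undisturbed. That is all a PARTITION BY RANGE clause is doing for you, just with better paperwork:

```python
import os
from datetime import date

# One file per month: our stand-in for a dedicated DASD volume.
PARTITIONS_DIR = "orders_parts"
os.makedirs(PARTITIONS_DIR, exist_ok=True)

def partition_for(d: date) -> str:
    """Range partitioning by month, in the same spirit as PARTITION BY RANGE."""
    return os.path.join(PARTITIONS_DIR, f"orders_{d:%Y_%m}.csv")

def insert(d: date, amount: int) -> None:
    # The router: each row lands in its own partition and nowhere else.
    with open(partition_for(d), "a") as part:
        part.write(f"{d.isoformat()},{amount}\n")

def total_for_month(year: int, month: int) -> int:
    # Partition pruning: read one file, leave the other heads in peace.
    path = partition_for(date(year, month, 1))
    if not os.path.exists(path):
        return 0
    with open(path) as part:
        return sum(int(line.rsplit(",", 1)[1]) for line in part)

insert(date(2024, 1, 15), 100)
insert(date(2024, 2, 3), 250)
print(total_for_month(2024, 1))  # reads only orders_2024_01.csv -> prints 100
```

Twenty-odd lines, no JCL required, and the head-contention story is exactly the same as it was in 1985.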
This whole exercise has been a trip down memory lane. All these charts with squiggly lines going up and down, based on a benchmark where you've casually crippled commit durability, run on a glorified laptop. It reminds me of the optimism we had before we'd spent a full weekend hand-keying data from printouts after a head crash. You've got all the enthusiasm of a junior programmer who's just discovered the GOTO statement.
So, thank you for this. You've managed to show that one toy database is sometimes faster than another toy database, as long as you promise not to actually save anything.
Now if you'll excuse me, I've got a COBOL copybook that has more data integrity than this entire benchmark.