Where database blog posts get flame-broiled to perfection
Alright, let's take a look at this... he says, putting on a pair of blue-light filtering glasses that are clearly not prescription. Oh, a "scaleup" benchmark for MariaDB. How delightful. The tl;dr says "scaleup is better for range queries than for point queries." Fantastic. So you've performance-tuned your database for bulk data exfiltration. I'm sure the attackers who lift your entire user table will send a thank-you note for making their job so efficient.
Let's dig into the "methodology," and I'm using that term very loosely.
You've got an AMD EPYC server, which is fine, but you've built it on... SW RAID 0? Are you kidding me? RAID 0? You've intentionally engineered a system with zero fault tolerance. One NVMe drive gets a bad block and your entire database vaporizes into digital confetti. This isn't a high-performance configuration; it's a data-loss speedrun. You're benchmarking how fast you can destroy evidence after a breach.
And you "compiled MariaDB from source." Oh, that fills me with confidence. I'm sure you personally vetted the entire toolchain, every dependency, and ran a full static analysis to ensure there were no trojans in your make process, right? Of course you didn't. You ran curl | sudo bash on some obscure PPA to get your dependencies and now half your CPU cores are probably mining Monero for a teenager in Minsk. Hope that custom build was worth the backdoor.
But my favorite part? You just posted a link to your my.cnf file. Publicly. On the internet. You've just handed every attacker on the planet a detailed schematic of your database's configuration. Every buffer size, every timeout, every setting. They don't need to probe your system for weaknesses; you've published the goddamn blueprint. Why not just post the root password while you're at it? It would "save time," which seems to be the main engineering principle here, considering you skipped 10 of the 42 microbenchmarks. What was in those 10 tests you conveniently omitted? The ones that test privilege escalation? The ones that stress the authentication plugins? The ones that would have triggered the buffer overflows? This isn't a benchmark; it's a curated highlight reel.
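And just so we're clear on what you've saved them: here's a sketch of the reconnaissance an attacker would otherwise have to do from a compromised session, one SHOW VARIABLES at a time (assuming a stock MariaDB build with the usual InnoDB variable names):

```sql
-- Reconnaissance an attacker no longer needs, because the my.cnf is public.
-- Sketch only; these are standard InnoDB/server variables, nothing exotic.
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool%';
SHOW GLOBAL VARIABLES LIKE 'innodb_flush%';
SHOW GLOBAL VARIABLES LIKE 'max_connections';
```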
Now for the "results," where every chart is a roadmap to a new CVE. Your big takeaway is that performance suffers from mutex contention. You say "mutex contention" like it's a quirky performance bottleneck. I say "uncontrolled resource consumption leading to a catastrophic denial-of-service vector." You see a high context switch rate; I see a beautiful timing side-channel attack waiting to happen. An attacker doesn't need to crash your server; they just need to craft a few dozen queries that target these "hot points" you've so helpfully identified, and they can grind your entire 48-core beast to a halt. Your fancy EPYC processor will be so busy fighting itself for locks that it won't have time to, you know, reject a fraudulent transaction.
The problem appears to be mutex contention.
It appears to be? You're not even sure? You've just published a post advertising a critical flaw in your stack, and your root cause analysis is a shrug emoji. This is not going to fly on the SOC 2 audit. "Our system crashes under load." "Why?" "¯\_(ツ)_/¯ Mutexes, probably."
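If you wanted receipts instead of vibes, the server will happily name the hot locks for you. A sketch, assuming performance_schema is enabled (it's off by default in MariaDB, so that's an assumption on my part) and the synch wait instruments are being collected:

```sql
-- Top synchronization waits (mutexes, rw-locks) since startup.
-- Assumes performance_schema = ON and wait/synch/% instruments enabled.
-- SUM_TIMER_WAIT is reported in picoseconds, hence the divide.
SELECT EVENT_NAME,
       COUNT_STAR            AS waits,
       SUM_TIMER_WAIT / 1e12 AS total_wait_seconds
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'wait/synch/%'
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;
```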
Let's talk about random-points_range=1000. You found that a SELECT with a large IN-list scales terribly. Shocking. You've discovered that throwing a massive, complex operation at the database makes it... slow. This isn't a discovery; it's a well-known vector for resource exhaustion attacks. Any half-decent WAF would block a query with an IN-list that long, because it's either an amateur developer or someone trying to break things. You're not testing scaleup; you're writing a "how-to" guide for crippling InnoDB with a single line of SQL.
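For the folks following along at home, the pattern in question looks roughly like this; a sketch using the usual sysbench-style table name sbtest1 and made-up values, not the author's actual workload:

```sql
-- Hypothetical sketch of a 1000-point IN-list lookup against a secondary
-- index. Table sbtest1 and column k follow sysbench convention; the values
-- and the table itself are made up for illustration.
SELECT id, k, c, pad
FROM sbtest1
WHERE k IN (1017, 4242, 9031, 15008, 22345 /* ...and ~995 more... */);
```

One statement, a thousand index probes; now imagine it arriving on every connection at once. That's your resource-exhaustion vector in a single line.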
And the write performance... oh, the humanity. The only test that scales reasonably is a mix of reads and writes. Everything else involving DELETE, INSERT, or UPDATE falls apart after a handful of clients. So, your database is great as long as nobody... you know... changes anything. The moment you have actual users creating and modifying data, the whole thing devolves into a lock-and-contention nightmare.
The worst result is from update-one which suffers from data contention as all updates are to the same row. A poor result is expected here.
You expected a poor result on a hot-row update? Then what was the point? To prove that lock contention... is lock contention? That single hot row could be a global configuration flag, a session counter, or an inventory count for your last "revolutionary" new product. You've just confirmed that your architecture is fundamentally incapable of handling high-frequency updates to critical data without collapsing.
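And the "workload" that broke it is about as complicated as this; a sketch with a hypothetical sbtest1 table and row id 1 standing in for whatever single hot row the test hammers:

```sql
-- Hypothetical sketch of the hot-row pattern: every client updates the
-- same row, so every statement serializes on the same row lock.
UPDATE sbtest1
SET k = k + 1
WHERE id = 1;
```

Every client, the same WHERE clause, forever. There is no amount of EPYC that makes that parallel.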
So let me summarize your findings for you: You've built a fragile, insecure, single-point-of-failure system with a publicly documented configuration. Its performance bottlenecks are textbook DoS vectors, its write-path is a house of cards, and you've optimized it for the one thing you should be preventing: mass data reads.
This isn't a benchmark. This is a pre-mortem for the data breach you're going to have next quarter. Good luck explaining "relative QPS" to the regulators.