🔥 The DB Grill 🔥

Where database blog posts get flame-broiled to perfection

Percona Operator for PostgreSQL 2025 Wrap Up and What We Are Focusing on Next
Originally from percona.com/blog/feed/
January 15, 2026 • Roasted by Rick "The Relic" Thompson

Alright, settle down, let me put on my reading glasses. What fresh-faced marketing nonsense have the kids cooked up this time? Percona Operator for PostgreSQL... put most of its energy into things that matter. Oh, bless their hearts. They finally decided to focus on things that matter. Took 'em long enough. We were focused on things that mattered back in '83 when our "cluster" was a single IBM 3081 that took up a whole room and sounded like a 747 on takeoff.

So, in the far-flung future of 2025, they've cracked the code on "predictable upgrades." That's adorable. Let me tell you about predictable upgrades. It was a three-day weekend, a stack of binders thick enough to stop a bullet, and a case of Jolt Cola. You'd spend 48 hours running test JCL against a sandboxed LPAR, and if a single job in the nightly COBOL batch run failed, you spent the next 24 hours manually rolling back from a cart of 9-track tapes. You didn't trust some script called an "Operator" to do it for you. An "Operator" was a guy named Stan who fell asleep at the console and drooled on the emergency-stop button. This "Kubernetes" thing you're all so proud of? It's just a glorified, over-caffeinated version of the MVS Job Scheduler, except instead of a handful of punch cards, you've got 10,000 lines of YAML that look like someone's cat walked across the keyboard. And you think a script is going to magically upgrade that Rube Goldberg machine without a hitch? Good luck with that.

And what's this? "Safer backup and restore." You kids and your ephemeral bits and bobs floating around in some "cloud." Safer. You want to see a safe backup? A safe backup was a box of reel-to-reel tapes, labeled in triplicate with a grease pencil, driven by an armed guard to a climate-controlled salt mine in Pennsylvania. You could drop an A-bomb on the primary data center, and we'd be back online by Monday, assuming we could find enough extension cords. You're telling me your backup is "resilient" because you copied a file from us-east-1 to us-west-2? That's not a backup; that's a long-distance echo. I've had more data integrity in a shoebox full of floppy disks.

Oh, and my personal favorite: "clearer observability." You mean you finally figured out how to read your own log files? We had "observability" in 1985. It was a 2000-page core dump printed on green-bar paper that you'd spread out across the entire data center floor. You'd get down on your hands and knees with a highlighter and a bottle of aspirin, and by God, you'd find that errant pointer in the data block. You didn't need some fancy-pants "Grafana dashboard" with blinking lights to tell you the system was slow. I could tell you the I/O latency by the specific pitch of the screeching from the DASD array. These kids today, they can't fix a problem unless a cartoon thermometer on a screen turns red.

And they're proud of dealing with... let me see here... fewer surprises from image and HA version drift.

fewer surprises

You created a system with a thousand moving parts that are all changing versions constantly without your knowledge, and now you're patting yourselves on the back for being slightly less surprised when it all blows up? That's not an achievement. That's like bragging you only set the kitchen on fire three times this week instead of five. We had one version. It was DB2 v1.2. It ran. Period. If it didn't, you called a guy in Poughkeepsie with a pocket protector who wrote the damn thing, and he'd tell you which bit to flip.

This whole thing... this "Operator"... it's just a bunch of REXX scripts somebody wrote in the 80s to automate IMS database recovery, rewritten in a language with more emojis and given a cool name. They're solving problems we solved thirty years ago, acting like they've just discovered fire.

Mark my words. This whole "cloud native" house of cards is going to come crashing down. One day, a single expired TLS certificate is going to cascade through this YAML-and-Go-spaghetti monstrosity, and your precious "Operator" is going to "predictably upgrade" your entire production database into a black hole. And who are you going to call? Not some dashboard. You'll be looking for someone who still remembers how to read a hex dump. And I'll be retired, fishing on a lake where the only "cloud" is the one in the sky. Good luck, kids. You're gonna need it.