đŸ”„ The DB Grill đŸ”„

Where database blog posts get flame-broiled to perfection

$50 PlanetScale Metal is GA for Postgres
Originally from planetscale.com/blog/feed.atom
December 15, 2025 ‱ Roasted by Marcus "Zero Trust" Williams

Ah, a truly magnificent piece of marketing literature. I must commend you on this bold vision for the future of infrastructure. It’s always a pleasure to see such optimism, such unburdened confidence, in a product announcement.

It’s just wonderful that you’ve lowered the entry price to a mere $50 a month. You’re democratizing access to what I’m sure is a fortress of security. This ensures that even the most budget-conscious, fly-by-night operations can now store their sensitive, unvalidated user input on your “blazingly fast” hardware. I can’t imagine a more robust vetting process. This move practically guarantees a diverse ecosystem of tenants, all behaving responsibly and never, ever attempting to probe the network for their neighbors. The blast radius for a compromise on one of these low-cost instances is surely negligible.

And the decoupling of CPU, RAM, and storage! Genius. Truly. You’ve introduced a wonderfully intricate layer of orchestration to manage all these moving parts. More complexity is always the friend of security, after all. What a fantastic opportunity to introduce novel race conditions and misconfigurations in the control plane. I’m positively giddy thinking about the potential for a cleverly crafted API call to the resizing endpoint to, say, accidentally map a block of one customer’s storage to another customer’s instance during a moment of high I/O. But I’m sure you’ve thought of that. You claim “the fewest possible failure modes,” which is my favorite kind of unprovable, aspirational statement. It will look fantastic on the cover of the inevitable data breach report.
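Since I'm daydreaming anyway, here's a toy Python sketch of the check-then-act race I'm fantasizing about. Everything in it is invented for illustration: there is no real resize endpoint or block allocator here, just two "tenants" hitting an unsynchronized pool at the same moment.

```python
import threading

free_blocks = ["blk-7"]         # one free storage block left in the pool
assignments = {}                # tenant -> physical block
barrier = threading.Barrier(2)  # forces both threads past the check together

def resize(tenant: str) -> None:
    # Classic check-then-act with no lock: both tenants observe the
    # block as free before either one claims it.
    if free_blocks:
        blk = free_blocks[0]
        barrier.wait()            # both threads have passed the check
        assignments[tenant] = blk # both map the SAME physical block
        if blk in free_blocks:
            free_blocks.remove(blk)

t1 = threading.Thread(target=resize, args=("tenant-a",))
t2 = threading.Thread(target=resize, args=("tenant-b",))
t1.start(); t2.start(); t1.join(); t2.join()

print(assignments)  # both tenants now "own" blk-7
```

The fix is boring (hold a lock, or make the claim atomic), which is exactly why a shiny new orchestration layer is where this bug likes to hide.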

I’m especially fond of the reliance on “locally attached NVMe drives.” So fast! So direct! It brings a tear to my eye. I’m sure your de-provisioning process is a sight to behold. When a customer spins down their $50 database full of PII, the process for wiping that drive before reallocating it is no doubt a rigorous, multi-pass, cryptographically secure erasure that meets NIST standards. It’s definitely not just a quick rm -rf in a bash script run by an intern, right? The thought of data remanence and recovery by the next tenant is purely a fantasy of a paranoid mind like mine.
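For contrast, here's roughly what a good-faith filesystem-level wipe looks like, as a hypothetical sketch rather than anyone's actual de-provisioning code. Note the punchline in the docstring: on NVMe, even overwrite-then-unlink is not enough, because wear leveling can leave stale copies of blocks behind, which is why NIST SP 800-88 points you at device-level sanitize commands instead.

```python
import os
import secrets

def naive_wipe(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before unlinking it.

    On spinning disks this is far better than a bare unlink (which only
    drops the directory entry). On NVMe SSDs it is still NOT sufficient:
    wear leveling means the drive may keep old copies of blocks around,
    so real sanitization needs device-level commands (e.g. NVMe Format
    with Secure Erase), per NIST SP 800-88.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)
```

If your wipe routine is shorter than this docstring, I have questions.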

Let's talk about this impressive density:

get as much as 300GB of storage per GiB of RAM

Oh, fantastic. You’re actively encouraging users to create massively I/O-bound time bombs. What happens when someone tries to run a complex query that needs more than 1 GiB of RAM to sort 300GB of data? I imagine it fails gracefully, with thorough logging, and certainly doesn’t create a resource-exhaustion vulnerability that could impact other tenants on the same physical host. This architecture is a beautiful breeding ground for what I like to call performance-based denial of service. A feature, not a bug!
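What actually happens in any sane database is a spill to disk: the sort becomes an external merge sort, and your "blazingly fast" NVMe spends its life shuffling temporary runs back and forth. Here's a minimal Python sketch of the idea; the memory_budget knob is my invented stand-in for a work_mem-style limit, not anything from the announcement.

```python
import heapq
import tempfile

def external_sort(values, memory_budget=4):
    """Sort more items than fit in 'RAM' by spilling sorted runs to
    disk, then k-way merging them -- the same spill-to-disk dance a
    database does when a sort exceeds its working memory."""
    runs = []   # one temp file per sorted run
    chunk = []  # the in-"memory" buffer

    def spill():
        # Sort what fits in memory and write it out as one run.
        f = tempfile.TemporaryFile(mode="w+")
        for v in sorted(chunk):
            f.write(f"{v}\n")
        f.seek(0)
        runs.append(f)
        chunk.clear()

    for v in values:
        chunk.append(v)
        if len(chunk) >= memory_budget:
            spill()
    if chunk:
        spill()

    # Merge all runs while holding only one line per run in memory.
    iters = ((int(line) for line in f) for f in runs)
    return list(heapq.merge(*iters))

print(external_sort([5, 3, 8, 1, 9, 2, 7, 4, 6]))
# -> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Now scale that mental picture to 300GB of runs behind 1 GiB of buffer, multiply by every tenant on the box, and tell me again about the fewest possible failure modes.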

Honestly, the whole thing is a work of art. You’ve taken every security best practice—simplicity, isolation, predictable performance—and decided they were merely suggestions. The SOC 2 auditors are going to have an absolute field day with this. I can already see the list of findings.

It’s been an absolute treat to read this. I feel so much more secure. Thank you for sharing your innovative approach to infrastructure management.

Now if you’ll excuse me, I’ll be over here, advising my clients to add your IP ranges to their firewall blocklists. I look forward to never reading your blog again.