Where database blog posts get flame-broiled to perfection
Alright, let's pour another cup of stale coffee and talk about this. I've seen this movie before, and I know how it ends: with me, a blinking cursor, and the sinking feeling that "compatible" is the most dangerous word in tech. This whole "emulate MongoDB on a relational database" trend gives me flashbacks to that time we tried to run a key-value store on top of SharePoint. Spoiler alert: it didn't go well.
So, let's break down this masterpiece of misplaced optimism, shall we?
First, we have the glorious promise of the "Seamless Migration" via a compatible API. This is the siren song that lures engineering managers to their doom. The demo looks great, the simple queries run, and everyone gets a promotion. Then you hit production traffic. This article's "simple" query, finding 5 records in a range, forced the "compatible" DocumentDB to scan nearly 60,000 index keys, fetch them all, and then sort them in memory just to throw 59,930 of them away. Native Mongo scanned five. Five! That's not a performance gap; that's a performance chasm. It's the technical equivalent of boiling the ocean to make a cup of tea.
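For context, and because someone will ask, the shape of query we're talking about is nothing exotic. Here's a rough sketch in Python, with collection and field names invented by me rather than taken from the article; the point is that the results look identical everywhere and only the plan tells you you've been had:

```python
# A sketch of the query class in question: a range filter, a sort, and a tiny limit.
# Collection and field names ("events", "ts") are placeholders, not the article's.
from datetime import datetime, timedelta, timezone
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")   # point this at Mongo or the emulation
events = client["demo"]["events"]
events.create_index([("ts", ASCENDING)])

since = datetime.now(timezone.utc) - timedelta(days=30)
query = events.find({"ts": {"$gte": since}}).sort("ts", ASCENDING).limit(5)

# The damage only shows up in the plan. On native MongoDB the index satisfies the
# filter, the order, and the limit with a handful of keys examined; the article's
# emulation reported tens of thousands for the same query shape.
print(query.explain())
print(list(events.find({"ts": {"$gte": since}}).sort("ts", ASCENDING).limit(5)))
```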
Then there's the Doubly-Damned Debugging™. My favorite part of any new abstraction layer is figuring out which layer is lying to me at 3 AM. The beauty of this setup is that you don't just get one execution plan; you get two! You get the friendly, happy MongoDB-esque plan that vaguely hints at disaster, and then you get to docker exec into a container and tail PostgreSQL logs to find the real monstrosity of an execution plan underneath. The Oracle version is even better, presenting a query plan that looks like a lost chapter from the Necronomicon. So now, to fix a slow query, I need to be an expert in Mongo query syntax, the emulation's translation layer, and the deep internals of a relational database it's bolted onto. Fantastic. My on-call anxiety just developed a new subtype.
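If you want a preview of that 3 AM ritual, it goes roughly like this: make the underlying PostgreSQL log every statement, fish the translated SQL out of the log, and EXPLAIN it yourself. This is a sketch under the assumption that you can reach the Postgres instance hiding under the emulation with superuser rights; the connection details are placeholders:

```python
# Sketch: surfacing the *real* execution plan from the PostgreSQL instance underneath
# the MongoDB-compatible layer. Assumes superuser access; connection details are fake.
import psycopg2

conn = psycopg2.connect("host=localhost port=5432 dbname=postgres user=postgres")
conn.autocommit = True        # ALTER SYSTEM refuses to run inside a transaction block
cur = conn.cursor()

# 1. Make the translation layer show its work: log every statement it sends down.
cur.execute("ALTER SYSTEM SET log_min_duration_statement = 0;")
cur.execute("SELECT pg_reload_conf();")

# 2. Run the slow Mongo-side query through the emulation, then pull the translated SQL
#    out of the PostgreSQL log (the article's docker exec + tail step).

# 3. Feed that captured SQL back in to see the plan you're actually paying for.
captured_sql = "SELECT 1"     # replace with the monstrosity you fished out of the log
cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + captured_sql)
for (line,) in cur.fetchall():
    print(line)
```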
Let's talk about the comically catastrophic corner cases. The author casually mentions that a core performance optimization (pushing the ORDER BY down to the index scan for efficient pagination) is a "TODO" in the DocumentDB RUM index access method. A TODO. In the critical path of a database that's supposed to be production-ready. I can already hear the conversation: "Why does page 200 of our user list take 30 seconds to load?" Because the database is secretly reading every single user from A to Z, sorting them by hand, and then picking out the five you asked for. This isn't a database; it's a very expensive Array.prototype.sort().
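And if "sorting them by hand" sounds like an exaggeration, here is roughly what the missing pushdown amounts to, spelled out in plain Python with made-up data so you can feel the waste for yourself:

```python
# What "no ORDER BY pushdown" means in practice: fetch the whole range, sort it all,
# keep one page. The 60,000 figure just mirrors the article's example; data is made up.
import random

rows = [{"id": i, "ts": random.random()} for i in range(60_000)]   # everything in range

def paginate_the_hard_way(rows, page, page_size=5):
    ordered = sorted(rows, key=lambda r: r["ts"])    # sort all 60,000 rows in memory...
    start = page * page_size
    return ordered[start : start + page_size]        # ...to hand back five of them

print(paginate_the_hard_way(rows, page=200))
# An index scan that already returns rows in sort order would read on the order of
# page_size keys for the first page, not the entire range on every request.
```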
And the pièce de résistance: the illusion of simplicity. The sales pitch is "keep your relational database that your team knows and trusts!" But this article proves that to make it work, you have to install a constellation of extensions (rum, documentdb_core, pg_cron...), become a Docker and psql wizard just to get a query plan, and then learn about proprietary index types like documentdb_rum that behave differently from everything else. You haven't simplified your stack; you've created a fragile, custom-built contraption. It's like avoiding learning to drive a new car by welding your old car's chassis onto a tractor engine. Sure, you still have your fuzzy dice, but good luck when it breaks down in the middle of the highway.
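In case "constellation of extensions" sounds like hyperbole, the setup ritual looks roughly like this. A sketch only: the extension names are lifted straight from the article, while prerequisites (pg_cron wants shared_preload_libraries, for a start), ordering, and whether your build even ships them are left as an exercise for the unlucky:

```python
# Sketch of the extension "constellation" described in the article. Names come from
# the article; versions, ordering, and preload configuration are your problem.
import psycopg2

conn = psycopg2.connect("host=localhost port=5432 dbname=postgres user=postgres")
conn.autocommit = True
cur = conn.cursor()

for ext in ("pg_cron", "rum", "documentdb_core"):
    cur.execute(f"CREATE EXTENSION IF NOT EXISTS {ext} CASCADE;")

cur.execute("SELECT extname, extversion FROM pg_extension ORDER BY extname;")
for name, version in cur.fetchall():
    print(name, version)
```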
In the end, these emulations are just another beautiful, brilliant way to create new and exciting failure modes. We're not solving problems; we're just shifting the complexity around until it lands on the person who gets paged when it all falls over.
...sigh. I need more coffee.
Ah, yes, another dispatch from the wilds of industry, where the fundamental, mathematically proven principles of computer science are treated as mere suggestions. I must confess, reading the headline "Can databases fully replace them?" caused me to spill my Earl Grey. The sheer, unadulterated naivete is almost charming, in the way a toddler attempting calculus might be. Let us, for the sake of what little academic rigor remains in this world, dissect this... notion.
To ask if a database can replace a cache is to fundamentally misunderstand the memory hierarchy, a concept we typically cover in the first semester. It's like asking if a sprawling, meticulously cataloged national archive can replace the sticky note on your monitor reminding you to buy milk. One is designed for durable, consistent, complex queries over a massive corpus; the other is for breathtakingly fast access to a tiny, volatile subset of data. They are not competitors; they are different tools for different, and frankly, obvious, purposes.
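For the practitioners in the back row, the division of labour fits in a dozen lines of Python. A classroom toy, naturally: the dictionary plays the sticky note, and a deliberately slow function stands in for the archive:

```python
# Cache-aside, reduced to a classroom toy: the dict is the sticky note, the slow
# function is the meticulously catalogued archive (i.e., your actual database).
import time

archive = {"milk": "2 litres, whole"}        # stand-in for the durable system of record

def read_from_archive(key):
    time.sleep(0.05)                         # thorough, consistent, and not remotely fast
    return archive[key]

sticky_note = {}                             # tiny, volatile, cheerfully disposable

def get(key):
    if key in sticky_note:                   # fast path: possibly stale, by design
        return sticky_note[key]
    value = read_from_archive(key)           # slow path: the authoritative answer
    sticky_note[key] = value                 # jot it down for next time
    return value

print(get("milk"))    # first read pays the archive's price
print(get("milk"))    # second read is the sticky note doing its one job
```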
Apparently, the practitioners of this new "Cache-is-Dead" religion have also managed to solve the CAP Theorem, a feat that has eluded theoreticians for decades. How, you ask? By simply ignoring it! A cache, by its very nature, willingly sacrifices strong Consistency for the sake of Availability and low latency. A proper database, one that respects the sanctity of its data, prioritizes Consistency. To conflate the two is to believe you can have your transactional cake and eat it with sub-millisecond latency, a fantasy worthy of a marketing department, not a serious engineer.
They speak of "eventual consistency" as if it were a revolutionary feature, not a euphemism for "your data will be correct at some unspecified point in the future, we promise. Maybe."
What of our cherished ACID properties? They've been... reimagined. Atomicity, Consistency, Isolation, Durability: these are not buzzwords; they are the pillars of transactional sanity. Yet, in this brave new world, they are treated as optional extras, like heated seats in a car.
The breathless excitement over using a database for caching is particularly galling when one realizes they've simply reinvented the in-memory database, albeit poorly. Clearly they've never read Stonebraker's seminal work on the matter from, oh, the 1980s. They slap a key-value API on it, call it "blazingly fast," and collect their venture capital, blissfully unaware that they are standing on the shoulders of giants only to scribble graffiti on their ankles.
Ultimately, this entire line of thinking is an assault on the elegant mathematical foundation provided by Edgar F. Codd. He gave us the relational model, a beautiful, logical framework for ensuring data integrity and independence. These... artisans... would rather trade that symphony of relational algebra for a glorified, distributed hash map that occasionally loses your keys. It is the intellectual equivalent of burning down a library because you find a search engine more convenient.
But I digress. One cannot expect literacy from those who believe the primary purpose of a data model is to be easily represented in JSON.
Oh, wonderful. Another dispatch from the land of broken promises and venture-funded amnesia. I see the bright young things at "Tetragon" have discovered a new silver bullet. One shudders to think what fundamental principle of computer science they've chosen to violate this time in their relentless pursuit of... well, whatever it is they're pursuing. Let us dissect this masterpiece of modern engineering, shall we?
First, the foundational heresy: using a search index as a primary database. They celebrate this as a triumph of performance, but it is a flagrant dismissal of nearly fifty years of database theory. Codd must be spinning in his grave. They've traded the mathematical purity of the relational model for what is, in essence, a glorified text indexer with a JSON fetish. I'm certain their system now adheres to a new set of principles: Ambiguity, Confusion, Inconsistency, and Duplication. What a novel concept. They speak of flexibility, but what they mean is they've abandoned all pretense of data integrity.
Then we have the siren song of "Serverless." A delightful bit of marketing fluff that allows engineers to remain blissfully ignorant of the physical realities of their own systems. "We don't have to manage servers!" they cry with glee. Indeed. You've simply outsourced the management to a black box whose failure modes and performance characteristics are a complete abstraction. How does one reason about partition tolerance when you've willfully blinded yourself to the partitions? It's an abstraction so profound, one no longer needs to trouble oneself with trifles like... physics.
This invariably leads to the casual disregard for consistency. Brewer's CAP theorem is not, I must remind the toddlers in the room, the CAP Suggestion. By choosing a system optimized for availability and partition tolerance, they have made a binding pact to sacrifice consistency. But they will surely dress it up in lovely euphemisms.
"Our data enjoys eventual consistency." This is a phrase that means "our data will be correct, but we refuse to commit to a time, a date, or even the correct century." The 'C' and 'I' in ACID are treated as quaint, archaic suggestions, not the bedrock of transactional sanity.
And the justification for all this? "Enhanced performance." At what cost? Clearly they've never read Stonebraker's seminal work on the fallacy of "one size fits all." They've traded the predictable, analyzable performance of a structured system for the chaotic, difficult-to-tune behavior of a distributed document store. They've merely shifted the bottleneck from one place to another, likely creating a dozen new, more insidious ones in the process. It is the architectural equivalent of curing a headache with a guillotine.
But this is the world we live in now. A world where marketing blogs have replaced peer-reviewed papers and nobody has the attention span for a formal proof. They've built a house of cards on a foundation of sand, and they're celebrating the lovely view just before the tsunami hits.
Do carry on, Tetragon. Your eventual, system-wide cascade of data corruption will make for a marvelous post-mortem paper. I shall look forward to peer-reviewing it.
Alright, settle down and grab a cup of coffee that's been on the burner since dawn. I just stumbled across this... masterpiece of modern engineering, and it's got my mustache twitching. Let ol' Rick tell you a thing or two about how you kids are re-inventing the flat tire and calling it a breakthrough in transportation.
So, they're talking about deploying "Elastic Agents" in "air-gapped environments." My sides. You know what we called an air-gapped environment back in my day? A computer. It wasn't connected to ARPANET, it wasn't "phoning home," it was sitting in a refrigerated room, connected to nothing but power and a line printer that sounded like a machine gun. The fact that you have to write a novel-length instruction manual on how to run your software without the internet is not a feature; it's a confession that you designed it wrong in the first place.
But let's break this down, shall we?
You're telling me the solution involves setting up a "Fleet Server" with internet access, downloading a "Package Registry," then carrying it over to the secure zone on a thumb drive like it's some kind of state secret? Congratulations, you've just invented the sneakernet. We were doing that in 1983, but we were carrying 9-track tapes that weighed more than your intern, and we didn't write a self-congratulatory blog post about it. We just called it "Monday." The sheer complexity: download the agent, get the policy, enroll the thing, package the artifacts. It's a Rube Goldberg machine of YAML files and CLI commands to do what a single JCL job used to handle before breakfast.
This whole song and dance about a "self-managed package registry" is just hilarious. It's a local repository. We had this. It was called a filing cabinet full of labeled floppy disks. You wanted the new version of the payroll reconciliation module? You walked to the cabinet, you found the disk, and you loaded it. You didn't need a Docker container running a mock-internet just so your precious little "agent" wouldn't have a panic attack because it couldn't ping its mothership.
And the terminology! "Fleet." "Agents." "Elastic." You sound like you're running a spy agency, not a logging utility. Back in the day, we had programs. They were written in COBOL. They ran, they processed data from a VSAM file, and they stopped. They didn't need to be "enrolled" or "managed by a fleet." They were managed by a 300-page printout and a stern-looking operator named Gladys who could kill a job with a single keystroke. This wasn't "observability," it was just... knowing what your system was doing.
The fundamental flaw here is building a distributed, cloud-native system that is so brittle it requires a special life-support system to function offline.
"The Elastic Agent downloads all required content from the Elastic Package Registry... This presents a problem for hosts that are in air-gapped environments." You don't say? It's like inventing a fish that needs a special backpack to breathe underwater. The solution isn't a better backpack; it's remembering that fish are supposed to have gills. We built systems on DB2 on the mainframe that were born in an air-gap. They never knew anything else. They were stable, secure, and didn't need a "registry" to remember what to do.
Frankly, this whole process is just a digital pantomime of what we used to do with punch cards. You create your "package" on one machine (the keypunch), you transfer it physically (carry the card deck), and you load it into the disconnected machine (the card reader). The only difference is that if you dropped our punch card deck, your entire production run was ruined. If your YAML file has an extra space, your entire "fleet" refuses to boot. See? Progress.
Honestly, the more things change, the more they stay the same, just with more steps and fancier names dreamed up by some slick-haired marketing VP. Now if you'll excuse me, I've got a CICS transaction to go debug on my 3270 emulator. At least there, the only "cloud" I have to worry about is the one coming from the overheated power supply. Sigh.
Alright team, gather 'round. Marketing just forwarded me the latest "thought leadership" piece from one of our... potential database partners. They've spent over a thousand words celebrating a "feature" that amounts to rewarding bad programming. Let's dissect this masterpiece of corporate fan-fiction before they try to send us an invoice for the privilege of reading it.
First, they've managed to brand "not doing work when nothing changes" as a revolutionary optimization. The central premise here is that our applications are so inefficient (mindlessly updating fields with the exact same data) that we need a database smart enough to clean up the mess. This isn't a feature; it's an expensive crutch for sloppy code. They're selling us a helmet by arguing we should be running into walls more often. Instead of fixing the leaky faucet in the application layer, they want to sell us a billion-dollar, diamond-encrusted bucket to put underneath it.
Second, let's talk Total Cost of Ownership. The author needed a Docker container, a log parser, and a deep understanding of write component verbosity just to prove this "benefit." What does that tell me? It tells me that when this system inevitably breaks, we're not calling our in-house team. We're calling a consultant who bills at $400/hour to decipher JSON logs. Let's do some quick math: One senior engineer's salary to build around these "quirks" ($180k) + one specialized consultant on retainer for when it goes sideways ($100k) + "enterprise-grade" licensing that charges per read, even the useless ones ($250k). Suddenly, this "free optimization" is costing us half a million dollars a year just to avoid writing a proper if statement in the application code.
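And since apparently it needs spelling out, here is that half-million-dollar "optimization" as the application-side check our own engineers could ship this sprint. A sketch only; the in-memory "database" and the field names are mine, not the vendor's:

```python
# The "proper if statement": skip the write when nothing actually changed.
# A sketch only; the in-memory "database" and the field names are invented.

database = {42: {"name": "Ada", "tier": "gold"}}

def update_if_changed(record_id, new_fields):
    current = database[record_id]
    changed = {k: v for k, v in new_fields.items() if current.get(k) != v}
    if not changed:
        return False                     # no-op update: no write, no invoice
    current.update(changed)              # only now do we pay for a real write
    return True

print(update_if_changed(42, {"tier": "gold"}))       # False: nothing to do
print(update_if_changed(42, {"tier": "platinum"}))   # True: an actual change
```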
Third, the comparison to PostgreSQL is a masterclass in spin. They present SQL's behavior (acquiring locks, firing triggers, and creating an audit trail) as a flaw.
"In PostgreSQL, an UPDATE statement indicates an intention to perform an operation, and the database executes it even if the stored value remains unchanged." Yes, exactly! That's called a transaction log. That's called compliance. That's called knowing what the hell happened. They're framing predictable, auditable behavior as a burdensome "intention" while positioning their black box as a more enlightened "state." Oh, I see. It's not a bug, it's a philosophical divergence on the nature of persistence. Tell that to the auditors when we can't prove a user attempted to change a record.
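For the auditors' benefit, a sketch of what that supposedly burdensome "intention" buys you on stock PostgreSQL: even an update that changes nothing still fires the triggers and lands in the audit trail. Table, trigger, and connection details below are invented for illustration:

```python
# Sketch: on stock PostgreSQL, an UPDATE that changes nothing still fires triggers
# and lands in the audit trail. Table, trigger, and connection details are invented.
# (EXECUTE FUNCTION needs PostgreSQL 11+.)
import psycopg2

conn = psycopg2.connect("host=localhost port=5432 dbname=postgres user=postgres")
conn.autocommit = True
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS accounts (id int PRIMARY KEY, balance numeric);
    CREATE TABLE IF NOT EXISTS audit_log (
        id bigserial PRIMARY KEY, account_id int,
        old_balance numeric, new_balance numeric, at timestamptz DEFAULT now());
    CREATE OR REPLACE FUNCTION log_account_update() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        INSERT INTO audit_log (account_id, old_balance, new_balance)
        VALUES (OLD.id, OLD.balance, NEW.balance);
        RETURN NEW;
    END;
    $$;
    DROP TRIGGER IF EXISTS accounts_audit ON accounts;
    CREATE TRIGGER accounts_audit AFTER UPDATE ON accounts
        FOR EACH ROW EXECUTE FUNCTION log_account_update();
    INSERT INTO accounts VALUES (1, 100) ON CONFLICT (id) DO NOTHING;
""")

cur.execute("UPDATE accounts SET balance = balance WHERE id = 1;")   # changes nothing
cur.execute("SELECT count(*) FROM audit_log;")
print(cur.fetchone()[0])   # the "pointless" update is on the record; auditors love that
```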
Finally, this entire article is the vendor lock-in two-step. They highlight a niche, esoteric behavior that differs from the industry standard. Then, they encourage you to build your entire application architecture around it, praising "idempotent, retry-friendly patterns" that rely on this specific implementation. A few years down the line, when their pricing model "evolves" to charge us based on CPU cycles spent comparing documents to see if they're identical, we're trapped. Migrating off would require a complete logic rewrite. They sell you a unique key, then change the lock every year.
Honestly, sometimes I feel like we're not buying databases anymore; we're funding PhD theses on problems no one actually has. It's a solution in search of a six-figure support contract. Now, if you'll excuse me, I need to go approve a PO for a new coffee machine. At least I know what that does.
Oh, what a fantastic read. I just love the boundless optimism. It's so refreshing to see someone ask, "Why change something that just works?" with the unstated, yet screamingly obvious answer: for the thrill of a 72-hour production outage!
Truly, it's inspiring. The argument that Redis's greatest strength (that it just works) is also its "potential challenge" is the kind of galaxy-brain take I've come to expect from thought leaders who haven't had to restore a corrupted key space from a six-hour-old backup at 3:00 AM on a Sunday. My eye is twitching just thinking about it.
I'm especially excited about the prospect of another "simple" migration. My therapist and I have been making real progress working through the memories of the last few.
It's always the same beautiful story. It starts with a whitepaper full of promises, moves to a Slack channel full of excitement, and ends in a war room full of cold pizza and broken dreams. I cherish the moment in every migration when a project manager confidently states:
"The migration script is 98% done, it just needs some light testing."
That phrase is my Vietnam. It's the sound of my weekend evaporating. It's the harbinger of cryptic error messages that don't exist on Stack Overflow.
So yes, let's absolutely replace the one component in our stack that doesn't regularly wake me up with a heart attack. Let's introduce a new, exciting system with its own special, innovative failure modes. I'm tired of the same old Redis outages. I want new ones. I want to debug distributed consensus issues, not simple connection pool exhaustion. I want my problems to be as next-gen as our tech stack.
So thank you for this article. You've given me so much to look forward to. I'm already mentally preparing the post-mortem document and drafting the apology email to our customers.
Anyway, my PagerDuty app is freshly updated. Can't wait for the "go-live." It's going to be transformative.
Alright, let's pull on the latex gloves and perform a public autopsy on this... aspirational document. "Building the foundation of trust in government digital strategies," you say? That sounds less like a strategy and more like the first line of a data breach notification. You've built a foundation, alright: a foundation of attack vectors on the bedrock of misplaced optimism.
Let's break down this architectural marvel of naivete, shall we?
Your so-called "foundation of trust" is what I call a "foundational flaw." In a Zero Trust world, "trust" is a four-letter word you scream after you've been breached. You're not building a foundation; you're digging a single point of failure. The moment one of your "trusted" microservices gets popped (and it will), your entire glorious house of cards comes tumbling down. This isn't a foundation; it's a welcome mat for lateral movement.
I see you boasting about "seamless citizen services." What I hear is seamlessly siphoning sensitive data. Every API endpoint you expose to "simplify" a process is another gaping maw for unsanitized inputs. I can already picture the SQL injection queries. "Seamless integration" is just marketing-speak for "we chained a bunch of containers together with API keys we hardcoded on a public GitHub repo."
It's so user-friendly, the script kiddies won't even need to read the documentation to exfiltrate your entire user database.
You're proud of your "agile and adaptive" framework. A security auditor hears "undocumented, un-audited, and pushed to production on a Friday." Your "adaptability" is a feature for attackers, not for you. Every time your devs pivot without a full security review, they're creating a new, delightfully undiscovered vulnerability. This isn't agile development; it's a perpetual motion machine for generating CVEs.
And the compliance angle... oh, the glorious compliance dumpster fire. You think this will pass a SOC 2 audit? Bless your heart. Your auditors will take one look at your logging (assuming you have any) and start laughing. The lack of immutable audit trails, the cavalier way you're handling PII, the "trust-based" architecture... you're not just going to fail your audit; you're going to become a cautionary case study in security textbooks.
Look, it's a cute little PowerPoint slide of an idea. Really. Keep at it. Now, go back to the drawing board and come back when you understand that the only thing you should trust is that every single line of your code will be used against you in a court of law.
Alright team, huddle up. Another vendor success story just hit the wire. This one's about how a bank "transformed" itself with Elastic. Let's pour one out for the ops team over there, because I've read this story a hundred times before, just with a different logo on the cover. I can already tell you how this really went down.
First, we have the claim of a "seamless migration" to this new, unified platform. Seamless. I love that word. It usually means they ran the new system in parallel with the old one for six months, manually cross-referencing everything in a panic because neither system showed the same results. The real "transformation" happens when the old monitoring system is finally shut down, and everyone realizes the new one was never configured to watch the legacy batch job that processes all end-of-day transactions. I can't wait for the frantic call during the next market close, wondering why nothing is moving.
Then there's the gospel of "a single pane of glass," the holy grail of observability. It's a beautiful idea, like a unicorn that also files your expense reports. In reality, that "single pane" is a 27-tab Chrome window open on a 4K monitor, and the one dashboard you desperately need is the one that's been throwing 503 errors since the last "minor" point-release upgrade. You'll have perfect visibility into the login service while the core banking ledger is silently corrupting itself in the background.
My personal favorite is the understated complexity. The blog post makes it sound like you just point Elastic at your infrastructure and it magically starts finding threats and performance bottlenecks. They conveniently forget to mention that your "observability stack" now has more moving parts than the application it's supposed to be monitoring. It's become a mission-critical service that requires its own on-call rotation. I give it three months before they have an outage of the monitoring system, and the post-mortem reads, "We were blind because the thing that lets us see was broken."
Let's talk about those "proactive security insights." This translates to the security team buying a new toy and aiming it squarely at my team's production environment. For the first two weeks, my inbox will be flooded with thousands of P1 alerts because a cron job that's been running every hour for five years is now considered a "potential lateral movement attack vector." We'll spend more time tuning the false positives out of the security tool than we do deploying actual code.
So here's my prediction: at 2:47 AM on the first day of a three-day holiday weekend, the entire Elastic cluster will go into a rolling restart loop. The cause will be something beautifully mundane, like an expired internal TLS certificate nobody knew about. The on-call engineer will find that all the runbooks are out of date, and the "unified" logs detailing the problem are, of course, trapped inside the dead cluster itself. The vendor's support line will blame it on a "misconfigured network ACL."
I'll save a spot on my laptop for the Elastic sticker. It'll look great right next to my ones from CoreOS, RethinkDB, and all the other silver bullets that were supposed to make my pager stop going off.
Anyway, I have to go provision a bigger disk for the log shippers. Turns out "observability" generates a lot of data. Who knew?
Well, isn't this just a delightfully detailed dissertation on how to turn a perfectly functional database into a high-maintenance, money-devouring monster. I must applaud the author's commitment to exploring solutions that are, and I quote, "not feasible in a managed service environment." That's exactly the kind of outside-the-box thinking that keeps CFOs like me awake at night, clutching their balance sheets.
It's truly inspiring to see someone so casually suggest we should just "recompile PostgreSQL." You say it with the same breezy confidence as someone suggesting we change the office coffee filter. It's so simple! Just a quick docker build and a few flags. I'm sure our DevOps team, which is already stretched thinner than a budget proposal in Q4, would be thrilled to take on the care and feeding of a custom-built, artisanal database. This "lab setting" you speak of sounds suspiciously like what I call an "un-budgeted and unsupported liability."
Let's do some quick, back-of-the-napkin math on the "true" cost of this brilliant little maneuver. You know, for fun.
So, this "free" open-source tweak to save a few buffer hits will only cost us around $116,000 up front. A negligible investment, Iâm sure. And the beautiful part is the vendor lock-in! Weâre not locked into a vendor; weâre locked into the two people in the company who know how this cursed thing works. Brilliant!
And for what? What's the ROI on this six-figure science project?
Buffers: shared hit=4
"...unlike the six buffer hits required in the database with an 8 KB block size."
My goodness, we saved two whole buffer hits! The performance gains must be staggering. We've shaved a whole 0.1 milliseconds off a query. At this rate, we'll make back our initial $116,000 investment in, let me see... about 4,000 years. This is a fantastically fanciful fiscal framework.
But the masterstroke is the conclusion. After walking us through a perilous and pricey path of self-managed madness, the article pivots to reveal that another database, MongoDB, just does this out of the box. It's a classic bait-and-switch dressed up in technical jargon. You've painstakingly detailed how to build a car engine out of spare parts, only to end with, "Or, you could just buy a Ferrari."
Thank you for this profoundly particular post. It's been an illuminating look into the world of solutions that generate more problems, costs that hide in plain sight, and performance gains that are statistically indistinguishable from a rounding error.
Iâll be sure to file this under "Things That Sound Free But Arenât." Rest assured, I won't be reading this blog again, but I wish you the best of luck with your next spectacularly expensive suggestion.
Cheerio
Ah, yes, another missive from the front lines of industry. "JVM essentials for Elasticsearch." How utterly... practical. It's a title that conjures images of earnest young men in hoodies frantically tweaking heap sizes, a task they seem to regard with the same gravity with which we once approached the P vs. NP problem. One must admire their focus on treating the symptoms while remaining blissfully, almost willfully, ignorant of the underlying disease.
They speak of "memory pressure" and "garbage collection pauses" as if these are unavoidable laws of nature, like thermodynamics or student apathy during an 8 AM lecture on B-trees. My dear boy, a properly designed database system manages its own memory. It doesn't outsource this most critical of tasks to a non-deterministic, general-purpose janitor that periodically freezes the entire world to tidy up. The fact that your primary concern is placating the Javanese deity of Garbage Collection before it smites your precious "cluster" with a ten-second pause is not a sign of operational rigor; it's a foundational architectural flaw. It is an admission of defeat before the first query is even executed.
But of course, one cannot expect adherence to first principles from a system that treats the relational model as a quaint historical artifact. They've replaced the elegant, mathematically-sound world of normalized forms and relational algebra with a glorified key-value store where you just... dump your JSON and pray. One imagines Edgar Codd weeping into his relational calculus. They've abandoned the guaranteed integrity of a well-defined schema for the fleeting convenience of "schema-on-read," which is a delightful euphemism for "we have no idea what's in here, but we'll figure it out later, maybe." It's a flagrant violation of Codd's Information Rule, but I suppose rules are dreadfully inconvenient when you're trying to move fast and break things. Mostly, it seems, you're breaking the data's integrity.
And the way they discuss their distributed architecture! They speak of shards and replicas as if they've discovered some new cosmological principle. In reality, they're just describing a distributed system that plays fast and loose with the 'C' and the 'I' in ACID. They seem to have stumbled upon the CAP theorem, not by reading Brewer's work, but by accidentally building a system that kept losing data during network hiccups and then retroactively labeling its "eventual consistency" a feature.
"Monitor your cluster health..."
Of course you must! When you've forsaken transactional integrity, you are no longer managing a database; you are the frantic zookeeper of a thousand feral data-hamsters, each scurrying in a slightly different direction. You have to "monitor" it constantly because you have no mathematical guarantees about its state. You're replacing proofs with dashboards. Clearly they've never read Stonebraker's seminal work on the "one size fits all" fallacy. They've built a system that's a mediocre search index and a truly abysmal database, excelling at neither, and they've surrounded it with an entire cottage industry of "monitoring solutions" to watch it fail in real-time.
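And if one must replace proofs with dashboards, the dashboard itself reduces to a few lines. A sketch using the official Python client; the endpoint and the heap threshold are arbitrary, because in the absence of guarantees everything is:

```python
# "Monitor your cluster health": the zookeeper's rounds, reduced to a sketch.
# Assumes the official elasticsearch Python client and a cluster at localhost:9200;
# the 85% threshold is arbitrary.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

health = es.cluster.health()
print("cluster status:", health["status"])        # green / yellow / red: the new QED

stats = es.nodes.stats(metric="jvm")
for node_id, node in stats["nodes"].items():
    heap = node["jvm"]["mem"]["heap_used_percent"]
    print(f"{node.get('name', node_id)}: heap {heap}% used")
    if heap > 85:
        print("  -> appease the garbage collector before it pauses the world")
```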
It's all so painfully clear. They don't read the papers. They read blog posts written by other people who also don't read the papers. They are trapped in a recursive loop of shared ignorance, celebrating their workarounds for self-inflicted problems. They're not building on the shoulders of giants; they're dancing on their graves.
This isn't computer science. This is digital plumbing. And forgive me, but I have a lecture to prepare on third normal form, a concept that will still be relevant long after the last Elasticsearch cluster has been garbage-collected into oblivion.