🔥 The DB Grill 🔥

Where database blog posts get flame-broiled to perfection

Converged Datastore for Agentic AI
Originally from mongodb.com
August 21, 2025 • Roasted by Rick "The Relic" Thompson

Alright, settle down, kids, let ol' Rick pour himself a cup of lukewarm coffee from the pot that's been stewing since dawn and have a look at this... this manifesto. I have to hand it to you, the sheer enthusiasm is something to behold. It almost reminds me of the wide-eyed optimism we had back in '88 when we thought X.25 packet switching was going to solve world hunger.

I must say, this idea of a "converged datastore" is truly a monumental achievement. A real breakthrough. You've managed to unify structured and unstructured data into one cohesive... thing. It's breathtaking. Back in my day, we had a similar, albeit less glamorous, technology for this. We called it a "flat file." Sometimes, if we were feeling fancy, we'd stuff everything into a DB2 table with a few structured columns and one massive BLOB field. We were just decades ahead of our time, I suppose. We didn't call it a "cognitive memory architecture," though. We called it "making it work before the batch window closed."
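For the youngsters who can't picture a system without a connection string, here's a minimal sketch of that 1988-vintage "convergence," with Python and sqlite3 standing in for DB2 because I assume nobody reading this has a TSO session handy. Every table and column name here is mine, not anybody's production schema.

```python
# Rick's "converged datastore, 1988 edition": a few structured columns
# for the things you actually query, and one giant BLOB for everything
# else. sqlite3 stands in for DB2; all names are invented for the sketch.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE customer_master (
        customer_id     INTEGER PRIMARY KEY,
        last_name       TEXT NOT NULL,
        region_code     TEXT NOT NULL,
        everything_else BLOB   -- the "unstructured" half of the convergence
    )
    """
)

# The structured bits go in columns; the rest gets stuffed into the BLOB,
# much the way a "converged" document stuffs it into nested JSON.
blob = json.dumps({
    "preferences": {"contact": "mail", "language": "en"},
    "notes": "Called 1989-03-04 about a billing dispute.",
}).encode("utf-8")

conn.execute(
    "INSERT INTO customer_master VALUES (?, ?, ?, ?)",
    (1001, "Thompson", "NE", blob),
)

# Querying the structured columns is easy; anything inside the BLOB
# means parsing the whole thing. Same as it ever was.
row = conn.execute(
    "SELECT last_name, everything_else FROM customer_master WHERE customer_id = 1001"
).fetchone()
print(row[0], json.loads(row[1]))
```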

And the central premise here, that AI agents don't just query data but inhabit it... that's poetry, pure and simple. It paints a beautiful picture. It's the same beautiful picture my manager painted when he said our new COBOL program would "live and breathe the business logic." In reality, it just meant it had access to a VSAM file and would occasionally dump a core file so dense it would dim the lights on the whole floor. This idea of an agent having "persistent state" is just adorable. You mean... you're storing session data? In a table? Welcome to 1995, we're glad to have you.
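I don't have their code in front of me, but I'd bet my pension that "persistent agent state" boils down to something like the sketch below, with a collection name and fields I've invented for the occasion. We called this a session table.

```python
# A minimal sketch of what I suspect "persistent agent state" amounts to,
# assuming a MongoDB collection I'm calling agent_sessions and field names
# of my own invention. It is, in the end, session data keyed by an ID.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.agentic_demo  # database name is a placeholder

def save_agent_state(session_id: str, state: dict) -> None:
    # Upsert the agent's "memory" keyed by session id.
    db.agent_sessions.update_one(
        {"_id": session_id},
        {"$set": {"state": state}},
        upsert=True,
    )

def load_agent_state(session_id: str) -> dict:
    # Reload the "memory" on the next turn -- i.e., look up the session row.
    doc = db.agent_sessions.find_one({"_id": session_id})
    return doc["state"] if doc else {}

save_agent_state("sess-42", {"last_intent": "file_claim", "turn": 7})
print(load_agent_state("sess-42"))
```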

I'm especially impressed by the "five core principles." Let's see here...

And this architectural diagram... a masterpiece of marketing. So many boxes, so many arrows. It's a beautiful sight. It's got the same aspirational quality as the flowcharts we used to draw on whiteboards for systems that would never, ever get funded. You've got your "Data Integration Layer," your "Agentic AI Layer," your "Business Systems Layer"... It's just incredible. We had three layers: the user's green screen, the CICS transaction server, and the mainframe humming away in a refrigerated room the size of a gymnasium. Seemed to work just fine.

"The fundamental shift from relational to document-based data architecture represents more than a technical upgrade—it's an architectural revolution..."

A revolution! My goodness. Codd is spinning in his grave so fast you could hook him up to a generator and power a small city. You took a data structure designed to prevent redundancy and ensure integrity, and you replaced it with a text file that looks like it was assembled by a committee. I'm looking at this Figure 4 example, and it's a thing of beauty. A single, monolithic document holding everything. It's magnificent. What happens when you need to add one tiny field to the customerPreferences? Do you have to read and rewrite the entire 50KB object? Brilliant. That'll scale wonderfully. It reminds me of the time we had to update a field on a magnetic tape record. You'd read a record, update it in memory, write it to a new tape, and then copy the rest of the millions of records over. You've just reinvented the tape-to-tape update for the cloud generation. Bravo.
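For the record, here's roughly what that "one tiny field" update looks like from the client side, assuming the customerPreferences name from their Figure 4 and a document ID and sub-field of my own invention. The tidy one-liner hides the part where, as I understand it, the storage engine still rewrites the whole document underneath you.

```python
# A sketch of updating one nested field on one of those mega-documents,
# assuming the customerPreferences field from Figure 4; the document ID
# and sub-field name are mine.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.agentic_demo  # database name is a placeholder

# The dotted-path $set is one tidy line over the wire...
db.customers.update_one(
    {"_id": "cust-1001"},
    {"$set": {"customerPreferences.contactChannel": "email"}},
)

# ...but the server still has to locate, deserialize, and (as I understand
# it) rewrite that whole 50KB document in storage. Tape-to-tape, with
# better marketing.
```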

Your claim of "sub-second response times for vector searches across billions of embeddings" is also quite a thing. I remember when getting a response from a cross-continental query in under 30 seconds was cause for a champagne celebration. Of course, that was over a 9600 baud modem, but the principle is the same. The amount of hardware you must be throwing at this "problem" must be staggering.
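And just so you know what kind of incantation produces that sub-second miracle, here's a sketch of a vector search from the client side, assuming MongoDB Atlas Vector Search with an index name, collection, and fields I've made up. The hardware bill is not shown.

```python
# A sketch of a client-side vector search, assuming MongoDB Atlas Vector
# Search. The index name, field names, and collection are my guesses.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://cluster.example.mongodb.net")  # placeholder URI
db = client.agentic_demo

query_vector = [0.01] * 1536  # stand-in for a real embedding

pipeline = [
    {
        "$vectorSearch": {
            "index": "embedding_index",   # assumed index name
            "path": "embedding",          # assumed field holding the vectors
            "queryVector": query_vector,
            "numCandidates": 200,         # fan-out for the approximate search
            "limit": 10,
        }
    },
    {"$project": {"policyNumber": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in db.policies.aggregate(pipeline):
    print(doc)
```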

So let me just say, I'm truly, genuinely impressed. You've taken the concepts of flat files, triggers, denormalization, and session state, slapped a coat of "AI-powered cognitive agentic" paint on them, and sold it as the future. It's the kind of bold-faced confidence I haven't seen since the NoSQL evangelists promised me I'd never have to write a JOIN again, right before they invented their own, less-efficient JOIN.

I predict this will all go swimmingly. Right up until the first time one of these "cohesive" mega-documents gets corrupted and you lose the customer, their policy, all their claims, and the AI's entire "memory" in one fell swoop. The ensuing forensic analysis of that unfathomable blob of text will be a project for the ages. They'll probably have to call one of us old relics out of retirement to figure out how to parse it.

Now if you'll excuse me, I think I have a box of punch cards in the attic that's more logically consistent than that JSON example. I'm going to go lie down.