🔥 The DB Grill 🔥

Where database blog posts get flame-broiled to perfection

BSON vs OSON: Different design goals
Originally from dev.to/feed/franckpachot
January 13, 2026 • Roasted by Jamie "Vendetta" Mitchell

Ah, another "fair and balanced" technical comparison from the mothership. It warms my cold, cynical heart. Reading this feels like being back in a Q3 "synergy" meeting, where the PowerPoint slides are bright, the buzzwords are flying, and reality has been gently escorted from the building. To be clear, OSON and BSON aren't directly comparable because one was designed to solve a problem and the other was designed to solve a marketing deck.

They say OSON is "specifically engineered for database operations." I remember that engineering meeting. That's the corporate-approved euphemism for “we realized we were a decade late to the NoSQL party and had to bolt a JSON-ish thing onto our 40-year-old relational engine.” It was less "engineering" and more "frantic reverse-engineering" of a competitor's feature set, but with enough proprietary complexity to ensure job security.

Let's talk about these "different trade-offs."

First, the claim of "greater compactness through local dictionary compression." I had to read that twice to make sure it wasn't a typo. Let's look at your own numbers, my friend.

SizeRatio: 1.01, 1.01, 1.01, 1.00, 1.00, 1.00

In every. single. test. OSON is either the same size or larger. That's not a trade-off. That's a rounding error in the wrong direction. That "local dictionary" must be where they store the excuses for the roadmap slips. We spent six months in "architecture review" for a feature that adds zero value and a few extra bytes. Brilliant.
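For the skeptics playing along at home: the BSON half of that table is trivially reproducible with pymongo's bson module. The OSON half is not, since there's no standalone OSON encoder to call from here, so the best you can do is apply the article's own ratio. A minimal sketch, assuming pymongo is installed:

```python
# Reproduce the BSON size measurement; estimate OSON from the article's
# reported SizeRatio, since OSON can't be encoded client-side here.
import bson  # ships with pymongo

doc = {"user": "franck", "tags": ["json", "oson"], "count": 42}

bson_size = len(bson.encode(doc))
size_ratio = 1.01  # taken from the article's benchmark table
print(f"BSON size:           {bson_size} bytes")
print(f"Estimated OSON size: {round(bson_size * size_ratio)} bytes")
```

Run that against any payload you like; per the article's own numbers, the "local dictionary compression" never gets the ratio below 1.00.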

And the metadata! The "comprehensive metadata structures—such as a tree segment and jumpable offsets." We used to call that "Project Over-Engineering." It’s the architectural equivalent of building a multi-story car park for a single unicycle. The idea was to enable these magical in-place updates and partial reads, which sounds great until you see the cost.
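To be fair to the unicycle, here's the general idea the car park is chasing. This is a toy sketch, emphatically not Oracle's actual OSON layout: a directory of field offsets up front lets a reader jump straight to one value, where a BSON reader has to walk the document element by element.

```python
# Toy offset-directory format (illustrative only, not OSON):
# [count][name_len, name, offset, length]...[body bytes]
import struct

def encode_with_offsets(fields: dict) -> bytes:
    directory, body = [], b""
    for name, value in fields.items():
        directory.append((name.encode(), len(body), len(value)))
        body += value
    out = struct.pack(">I", len(directory))
    for name, off, length in directory:
        out += struct.pack(">B", len(name)) + name + struct.pack(">II", off, length)
    return out + body

def read_one_field(blob: bytes, wanted: str) -> bytes:
    # Walk only the (small) directory, then seek directly into the body.
    count = struct.unpack_from(">I", blob, 0)[0]
    pos, entries = 4, []
    for _ in range(count):
        name_len = blob[pos]
        name = blob[pos + 1 : pos + 1 + name_len].decode()
        off, length = struct.unpack_from(">II", blob, pos + 1 + name_len)
        entries.append((name, off, length))
        pos += 1 + name_len + 8
    for name, off, length in entries:  # pos now marks the start of the body
        if name == wanted:
            return blob[pos + off : pos + off + length]
    raise KeyError(wanted)

blob = encode_with_offsets({"a": b"hello", "b": b"world"})
print(read_one_field(blob, "b"))  # b'world', without touching "a"'s value
```

Building that directory is exactly the work you pay for at encode time, which sounds great until you see the bill.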

Which brings me to the performance. My god, the performance.

The author gracefully notes that encoding OSON is a bit slower, "by design," of course. "Slower by design" is the most incredible marketing spin I have ever heard. It’s like saying a car is "aerodynamically challenged by design" to "enhance its connection with the pavement."

Let’s look at the largest test case: OSON is 53.23 times slower to encode.

Not 53 percent. Fifty. Three. Times.
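Don't take my word for it, or theirs. The BSON side of the harness is a few lines; the OSON side has to round-trip through the Oracle driver, which is precisely where reproduction gets murky. A sketch, assuming pymongo:

```python
# Time BSON encoding locally; the OSON number would have to come from an
# Oracle-driver round-trip, so only the BSON half is shown here.
import timeit
import bson  # ships with pymongo

doc = {f"k{i:03d}": {"n": i, "s": "x" * 32} for i in range(1000)}

bson_secs = timeit.timeit(lambda: bson.encode(doc), number=100) / 100
print(f"BSON encode: {bson_secs * 1e3:.2f} ms per document")
# ratio = oson_secs / bson_secs  -> 53.23 on the article's largest payload
```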

The explanation? It's busy "computing navigation metadata for faster reads." This is the best part. This is the absolute chef's kiss of corporate misdirection. You're building all this supposedly life-changing metadata for fast partial reads, but then you casually mention:

...because the Oracle Database Java driver isn’t open source, I tried Python instead, where the Oracle driver is open source. However, it doesn’t provide an equivalent to OracleJsonObject.get()...

Let me translate that for everyone in the back. "The one feature that justifies our abysmal write performance? Yeah, you can't actually see it or test it. It's in the special, secret, closed-source driver. The one we gave you for this test doesn't do it. Just trust us, it's totally revolutionary. It's on the roadmap."

This is a classic. It's the technical equivalent of being sold a flying car, but the flight module is a proprietary, non-viewable black box that, for the purpose of the test drive, has been replaced with a regular engine. But the specs look great!
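Here's the gap in concrete terms. The first path below is what the benchmark could actually measure; the second is the partial read all that metadata supposedly pays for. The oson_get() call is hypothetical, a stand-in for the OracleJsonObject.get()-style access the open-source Python driver doesn't expose:

```python
import bson  # ships with pymongo

blob = bson.encode({"a": 1, "b": {"deep": {"field": 42}}})

# Measurable today: decode the entire document, then navigate in memory.
value = bson.decode(blob)["b"]["deep"]["field"]
print(value)  # 42

# Not measurable in this setup: a metadata-guided partial read.
# Hypothetical shape only -- oson_get() is not a real API:
# value = oson_get(oson_blob, "b.deep.field")
```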

So to recap: you've benchmarked a system where you took a roughly 5,200% performance hit on writes (that's what 53.23 times works out to) to generate "extensive metadata" that your own benchmark couldn't use, all to achieve a file size that is, at best, exactly as good as the competition's. And you're calling this a reasonable trade-off.

BSON’s design goal was to be a fast, simple, and efficient serialization format. OSON’s design goal was clearly to meet a line item on a feature-parity checklist, no matter the cost to performance, sanity, or the poor souls who had to implement it. I know where the bodies are buried on this project, Franck. And they're not stored efficiently.