Where database blog posts get flame-broiled to perfection
Alright, hold my lukewarm coffee. I just read this little gem. "Build full-stack applications faster with the Kiro IDE using deep knowledge of your Supabase project."
Faster. That's always the word, isn't it? We're not building it more reliably, or more maintainably, or with observability that doesn't look like a child's finger painting. No, we're building it faster. Because the thing I love most is getting a frantic call about a poorly understood abstraction that's on fire, all so a developer could save ten minutes typing a SQL command.
And this "deep knowledge" claim is just... chef's kiss. Oh, your IDE has deep knowledge of my Supabase project? Wonderful. Does its deep knowledge include the emergency hotfix I pushed at 2 AM last Tuesday that bypassed the ORM entirely because it was generating insane queries? Does it know about the one weird, under-documented view that the analytics team depends on for their quarterly reports, and if it changes even slightly, the C-suite's dashboards all turn into N/A? No? Didn't think so. "Deep knowledge" is corporate jargon for "we parsed your schema.sql file and made some assumptions." And we all know what happens when you assume. You make an ass out of u and me and the entire production user database.
But let's get to my favorite part. The real reason my pager battery life is measured in hours, not days.
best practices for database migrations
I have to laugh. I really do. I've seen the demos. The "best practice" migration is always adding a last_name column to a users table with 12 rows in it. Wow, revolutionary. It took 14 milliseconds. Give the man a Nobel Prize.
Here's what that "best practice" looks like in the real world. It's 3:15 AM on the Saturday of a long holiday weekend. Your "intelligent" IDE has decided the best way to add a non-nullable field with a default value to our 800-gigabyte user_events table is with a single ALTER TABLE command. It confidently tells you, "Don't worry, I'll handle the locks." The migration starts. The lock is acquired. And then it just… sits there. For ten minutes. Then twenty. The application grinds to a halt because every single INSERT is now queued up behind this genius, "best practice" migration. The monitoring alerts start firing, but what monitoring? We spent the budget on the magic IDE that was supposed to prevent this! The one dashboard we have is probably just a link to the vendor's "System Status: All Green!" page.
My phone starts vibrating off the nightstand. It's the junior dev, bless his heart. "Alex, the migration script Kiro generated is... uh... it's been running for 45 minutes. The site is down. The progress bar is gone. What do I do?"
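For the record, here is roughly what I assume the magic box generated, next to what a sufficiently burned human writes instead. A sketch only, with invented table and column names, assuming Postgres since that's what Supabase sits on; on anything older than Postgres 11, or with a non-constant default, that first statement rewrites the whole table while holding an ACCESS EXCLUSIVE lock.

```sql
-- The "best practice" one-liner: one statement, one ACCESS EXCLUSIVE lock,
-- and (on old enough Postgres, or with a volatile default) a full table rewrite
-- while every INSERT on user_events queues up behind it.
ALTER TABLE user_events
    ADD COLUMN tenant_id bigint NOT NULL DEFAULT 0;

-- The 3 AM-survivor version: fail fast instead of queueing the whole app,
-- add the column without the constraint, backfill in small batches, then tighten.
SET lock_timeout = '2s';
ALTER TABLE user_events ADD COLUMN tenant_id bigint;   -- catalog-only change

-- Run this repeatedly from a script until it updates zero rows:
UPDATE user_events
SET    tenant_id = 0
WHERE  id IN (SELECT id FROM user_events WHERE tenant_id IS NULL LIMIT 10000);

-- Only once the backfill is done:
ALTER TABLE user_events ALTER COLUMN tenant_id SET DEFAULT 0;
ALTER TABLE user_events ALTER COLUMN tenant_id SET NOT NULL;  -- still scans the table; pick your moment
```

None of which the junior dev should have to learn for the first time at 3:15 AM, which is rather my point.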
And I'll be sitting here, staring at my laptop lid, which is covered in the ghosts of solutions past. I've got my Parse sticker right next to my RethinkDB sticker, which is peeling a bit but still holding on, just like their lingering technical debt in a few of our legacy services. I'm already making room for a shiny new Kiro sticker.
So please, tell me more about your edge functions and your security policies. I'm sure an IDE's "deep knowledge" is perfectly capable of writing flawless, context-aware RLS policies that don't accidentally expose every user's PII via a poorly-configured view. That has never happened before.
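And just so we're clear about the kind of thing I mean, here's a sketch with invented table names, assuming plain Postgres row-level security of the flavor Supabase leans on (auth.uid() is their helper for the caller's identity). The policy is the easy part. The view somebody adds next to it is what ends up in the incident report, because by default a view runs with its owner's privileges, so the RLS on the table underneath is never consulted for whoever is querying it.

```sql
-- The "flawless, context-aware" policy itself (table and columns invented):
ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;

CREATE POLICY profiles_owner_only ON profiles
    FOR SELECT
    USING (user_id = auth.uid());

-- The poorly-configured view that strolls right past it: it executes with the
-- owner's privileges, so every row in profiles is visible to anyone who can
-- SELECT from the view.
CREATE VIEW all_profiles AS
    SELECT user_id, email, phone FROM profiles;

-- The version that doesn't page me (Postgres 15+): make the view run as the caller.
CREATE VIEW safe_profiles WITH (security_invoker = true) AS
    SELECT user_id, email, phone FROM profiles;
```

That has never happened before, either.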
Go on, build it faster. I'll be here, brewing the coffee, updating my rollback scripts, and waiting for the inevitable. The call will come. It always does. And your "best practice" will be my "incident report."
Ah, another masterpiece of architectural ambition from our friends at Elastic. I’ve just finished reading this, and I have to say, my heart is all aflutter. Truly.
It’s just so inspiring to see someone tackle the problem of "isolated, unstructured data sources." You know, the ones that are isolated for very good reasons, like they’re radioactive, on fire, or still running on a server that has a Turbo button. And now, we get to bring them all together with a "secure data mesh."
I just love that term. It has the same reassuring ring as "artisanal, gluten-free bridge construction." It sounds sophisticated, distributed, and wonderfully resilient. It’s like instead of having one big, easy-to-blame monolith for our logging pipeline, we can now have a hundred tiny, interconnected services that can all fail in novel and exciting ways. It's not a single point of failure; it's failure-as-a-service, democratized across the entire organization. The architectural diagrams are going to be a thing of beauty, a true Jackson Pollock of YAML files and network ACLs.
And the promise to "speed investigations through data and AI" is the chef's kiss. I am genuinely thrilled at the prospect of replacing my late-night, caffeine-fueled intuition with a confident AI. I can already picture it:
It's 3:15 AM on the Sunday of a long weekend. The primary database has evaporated. Every service is screaming. My pager is playing a rhythm that sounds suspiciously like a death metal drum solo. And our brand-new, AI-powered observability platform sends me a single, high-priority alert: "Anomaly Detected: Unusual spike in log messages containing the word 'error'."
Thank you, digital oracle. Your wisdom is boundless. I never would have cracked this case without you.
Then we have the "Elastic Agent Builder." Oh, this is my favorite part. A builder! It sounds so constructive and positive. I love tools that make it easy for anyone to deploy a monitoring agent. It’s a fantastic way to ensure that, when things do go sideways, the monitoring agent itself will be consuming 80% of the host's CPU, helpfully obscuring the actual problem. I can't wait to see the custom-built agent a junior developer "just wanted to test" in production, which accidentally starts shipping terabytes of debug logs and brings our entire ingest cluster to its knees. It’s the gift that keeps on giving.
You know, I have this collection of vendor stickers on my old server rack in the basement. There’s Graphulon, Streami.ly, LogTrove… all these companies that promised a single pane of glass and delivered a beautiful mosaic of shattered dashboards. I’ve already cleared a spot for a new one. It has a certain… elasticity to it.
So, yes, I am all in. Let’s weave this beautiful, intricate data mesh. Let’s connect every forgotten cron job and every shadow IT project's log file. Let’s empower the AI to watch over it all. I predict a future of unparalleled operational tranquility, right up until the moment the AI decides the most "unstructured data source" of all is our production certificate authority and "helpfully" quarantines it for analysis.
I'll have my go-bag ready. It’s going to be a glorious, career-defining outage. Bravo.
Ah, yes. A truly breathtaking piece of prose. I haven't seen a corporate philosophy so perfectly, if unintentionally, captured since my last exit interview. It's a positively poetic paean to panicked progress. Reading this feels like finding the missing Rosetta Stone that explains every frantic, feature-first decision I was ever forced to witness.
The metaphor of the stalled truck? Perfection. I vividly remember the "small periodic forces" being applied. We called them "daily stand-ups," where everyone had to invent some microscopic forward movement, even if it was just refactoring a variable name, to keep the project manager from turning puce. The whole department, rocking a ten-ton monolith back and forth, not to get the engine started, but just to make it look like it was moving for the VPs watching from the air-conditioned C-suite. The engine never caught; we just got really good at rocking. Context stays loaded, you say? The only thing that stayed loaded was the bug tracker.
And the gospel of the ‘messy page’! Oh, chef’s kiss. I haven’t seen such a beautiful rationalization for shipping undocumented, untested spaghetti code since the last all-hands where the CTO told us ‘move fast and break things’ was still a virtue. That Macintosh folklore story is choice. We had our own version:
"Remember that time we deliberately shipped a feature with a known memory leak just to hit the Q2 roadmap deadline? We called it ‘creating an opportunity for a future performance-enhancement sprint.’ Fire and maneuver, indeed."
The "practical tips" section is where this transcends mere advice and becomes a work of satirical genius. Using an LLM to "do one of the easiest tasks" because its "mediocrity will annoy you just enough to fix it" is the most honest description of our entire development process I’ve ever seen. It’s a bold strategy: generating garbage to inspire yourself to create something that is merely substandard. We were doing that with junior engineers years before AI made it cool.
And the advice to "work on the part of the project that feels most attractive at the moment"? Beautiful. It explains so very much.
Progress is progress, they say. It’s a comforting thought when you’re paving a desire path directly into a swamp of technical debt.
But that closing metaphor... the "powerful and unstoppable" flywheel. It’s true. I’ve seen that flywheel. It’s the one powering the support ticket queue, spinning faster and faster with every ‘small meaningful piece’ of half-baked code we added to the project. They admire the end state, but they don’t see the messy daily pushes that built the momentum. No, they don’t. They just get the PagerDuty alert when that unstoppable force finally meets the immovable object of reality.
That’s not a flywheel, my friend. It’s a countdown clock.
Alright, hold my lukewarm coffee. I just read this masterpiece on "Freedom" and I think I pulled a muscle from rolling my eyes so hard.
"Percona is built on the belief that Freedom matters..."
Oh, absolutely. The freedom to be woken up at 3 AM on a holiday weekend by a PagerDuty alert screaming about cascading replication failure. The freedom to explain to my VP why our "zero-downtime migration" has, in fact, resulted in a very noticeable six hours of total, unadulterated downtime. That’s the kind of liberating experience I live for.
You see, I have a special collection on my laptop. It's a graveyard of vendor stickers. RethinkDB, FoundationDB before Apple bought it... all these bright-eyed companies that promised a revolution. They all talked about control, transparency, and choice. Let's break down what that really means for the guy who actually has to keep the lights on.
Control? You mean the "control" to tune 742 different configuration variables, where the documentation for the most critical one just says "use with caution" and links to a forum thread from 2014 that ends with the original poster saying, "Never mind, we switched to Postgres." That’s not control; that’s a minefield disguised as a YAML file.
Transparency? My favorite. This is the promise that your "revolutionary" monitoring dashboard will give me deep insights. In reality, it’s always an afterthought. It's a pretty Grafana dashboard that turns bright red after the entire cluster has already fallen over. It's transparent in the way a brick wall is transparent. It tells me, with 100% clarity, that I am completely screwed. It never tells me that the write-ahead log is about to fill the disk because of a rogue analytics query, but it’s fantastic at showing me a flat line for "Transactions Per Second" after the fact. Thanks for the transparency. The post-mortem will be beautiful.
And the best for last: Choice. Ah, yes, the freedom of choice. The choice between three different high-availability models, each with its own unique and exciting failure domain.
I can see it now. It’ll be Labor Day weekend. A junior engineer will be on call. Management will have just read your blog post and pushed for us to "evolve our database infrastructure" by enabling that cool new feature you just announced. The runbook will have a single line: "run the migration script."
At 2:47 AM, that script will hit lock contention that nobody could have predicted. The "zero-downtime" schema change will pause, holding a lock on the users table. Every login, every API call, every part of the application will grind to a halt. The "transparent" monitoring dashboard will show everything as green, because the hosts are still up. It's just that, you know, they're not doing anything. The pager will finally go off 20 minutes later when the load balancers give up and start reporting 503s.
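And for the record, the one thing that will actually tell that junior engineer what's going on while the dashboard beams green is not a vendor console, it's the database's own bookkeeping. A sketch, assuming the Postgres side of the house:

```sql
-- Who is stuck, and who is holding them up. pg_blocking_pids() lists the sessions
-- blocking a given backend, which is how you spot the "zero-downtime" ALTER sitting
-- at the front of the lock queue with the entire application stacked up behind it.
SELECT pid,
       state,
       wait_event_type,
       now() - query_start      AS stuck_for,
       pg_blocking_pids(pid)    AS blocked_by,
       left(query, 80)          AS query
FROM   pg_stat_activity
WHERE  cardinality(pg_blocking_pids(pid)) > 0
ORDER  BY stuck_for DESC;
```

That query lives in my runbook. It does not, I notice, live in the blog post.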
And I’ll log on, bleary-eyed, to find a mess for which your support team's first-line response will be, "Have you tried turning it off and on again?"
So please, keep talking about Freedom. It’s a cute mission statement. It really is. It looks great on a slide deck. But for those of us in the trenches, all we hear is the promise of more complex, more spectacular, and more "transparent" ways for things to break.
I’ve already cleared a spot for your sticker on my laptop. Right next to the others.
Oh, this is just... fantastic. Truly. I was just thinking my life was becoming a little too stable, my sleep schedule a little too regular. And then, like a shining beacon of hope and future on-call alerts, this article appears.
Analytics Buckets! What a wonderfully soothing name. It sounds so harmless, doesn't it? Like a cute little container for your numbers, not a catastrophic single point of failure waiting to happen. It's certainly a friendlier name than the last system we adopted, which I affectionately nicknamed "The Great Devourer of Weekends."
And oh, my heart absolutely sings at the mention of Apache Iceberg and columnar Parquet format. It's so refreshing to see a solution that involves a whole new set of tools, libraries, and failure modes I get to learn about intimately at 3 AM. I was getting so bored with the old cryptic error messages from PostgreSQL. Now I can look forward to a whole new flavor of pain! Will it be a corrupted metadata pointer in Iceberg? A version mismatch in the Parquet library? A silent data-type coercion that only shows up in the quarterly reports? The possibilities for thrilling, career-defining incidents are endless!
Honestly, the promise of a system "optimized for analytical workloads" is my favorite part. It's just so thoughtful. Because we all know that once a new, shiny data store exists, it will only ever be used for its intended purpose.
optimized for analytical workloads
"It's just for analytics," they'll say. "No one will ever try to build a real-time feature on top of it," they'll promise. I remember hearing similar sweet nothings during the pitch for our last "simple" migration. That one still gives me flashbacks.
I'm sure this time will be different. The scripts will run perfectly the first time. The backfill process won't uncover three generations of data-entry errors we've been blissfully ignoring. The performance characteristics under real-world, panicky, pre-board-meeting query load will be exactly as the documentation promises. It's so bold, so optimistic. It's almost... cute.
So go on, embrace the future. Dive into your Analytics Buckets. Store those "huge datasets." I'll be over here, preemptively brewing a pot of coffee that's strong enough to dissolve steel and updating my emergency contact information. You're gonna do great, champ. I'll see you in the post-mortem.
Ah, yes. I’ve just had the… pleasure of reviewing this brief dispatch from the front lines of industry. One must, of course, applaud the enthusiasm. It’s truly heartwarming to see young people discovering the challenges of data management for the very first time.
What they’ve described here is a "change-data-capture pipeline." It’s a remarkably industrious solution. The sheer mechanical grit involved in sniffing a transaction log, parsing it, and then shuttling the contents across the network is something to behold. It is a monument to the principle that if one lacks a foundational understanding of distributed querying, one can always compensate with a sufficiently complex chain of scripts. A truly valiant effort.
I am particularly taken with their goal: to replicate Postgres tables to "analytical destinations." This is a masterstroke of pragmatism. Why bother with the tiresome constraints of a single source of truth, as prescribed by Codd’s normalization rules, when you can simply have two sources of truth? Or three! Or four! It’s an architectural decision that boldly asks, “What if our data were not only correct over here, but also… slightly different, and perhaps a bit stale, over there?” The possibilities for novel and exciting accounting errors are simply dizzying.
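For the curious, the whole apparatus, stripped of its press release, reduces to something like the following. A sketch, you understand, assuming Postgres logical decoding with the stock test_decoding plugin and a slot name invented for the occasion; the genuine article presumably involves rather more YAML.

```sql
-- Step one: ask the write-ahead log to moonlight as an export format.
-- Requires wal_level = logical; note that the slot will dutifully retain WAL
-- forever if the consumer wanders off, which is its own delightful failure mode.
SELECT pg_create_logical_replication_slot('analytics_bucket_feed', 'test_decoding');

-- Step two: "near real time," i.e., whenever the pipeline remembers to run this,
-- read whatever changes have accumulated and shuttle them across the network.
SELECT lsn, xid, data
FROM   pg_logical_slot_get_changes('analytics_bucket_feed', NULL, NULL);
```

Data does move, I will grant them that. Whether the same row means the same thing on both ends ten minutes later is left, as they say, as an exercise for the reader.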
And the destination! An "Analytics Bucket." A bucket. One imagines they simply tip the server over and let the data just… spill in. It’s a beautiful rejection of what they must see as the oppressive yoke of schemas and integrity constraints. Clearly, they've never read Stonebraker's seminal work on the trade-offs of relational versus post-relational systems; they've simply invented a third way: the informational landfill.
But the true pièce de résistance, the detail that reveals the artist’s soul, is this magnificent temporal guarantee:
“near real time.”
Chef’s kiss. What a splendidly non-committal phrase! It’s a wonderful way of admitting one has made a choice in the CAP theorem—without, one assumes, having the faintest idea what the CAP theorem is. They have gleefully sacrificed Consistency for the sake of Availability and Partition Tolerance, but they’ve done it with the wide-eyed innocence of a child who has just discovered that you can plug two extension cords into each other for infinite electricity.
They’ve managed to build a system that bravely subverts the very idea of atomicity and isolation. An ACID transaction, in their world, is not an indivisible unit of work. Oh, no. It’s more of a suggestion, an opening bid in a long and fascinating negotiation with the eventual state of the system. One can only admire the audacity. Their list of achievements is, in its own way, quite impressive.
It’s all rather brilliant, in the way a Rube Goldberg machine is brilliant. An astonishingly complex, fragile, and failure-prone device to achieve something trivial. They’ve looked upon decades of established database theory, upon the foundational papers that define correctness and consistency, and have evidently concluded, "No, thank you. I'd rather build it myself with duct tape and hope."
Congratulations. You have successfully engineered a system with all the latency of a distributed query and none of its transactional guarantees.
Ah, another dispatch from the trenches of "industry practice." One reads this sort of thing not with anger, but with a deep, weary sigh, the kind reserved for a promising student who has decided the study of formal grammars is best advanced by composing limericks. They are so very proud of their benchmarks, so meticulous in their compiler flag comparisons. It’s almost... cute.
But let us, for the sake of pedagogy, examine this artifact. It is a perfect specimen of the modern affliction: the relentless pursuit of "more," with nary a thought for "correct."
One notes with a certain weary amusement the myopic obsession with Queries Per Second. It's as if they've built a phenomenally fast automobile that, by design, occasionally forgets the destination or substitutes the driver's family with a bag of wet leaves. 'But look how fast it gets there!' they cry, celebrating a 10% gain from a compiler flag. The 'C' and 'I' in ACID, one must assume, now stand for 'Compiler' and 'Inconsequential'. The purpose of a database, my dear boy, is not to be a firehose of questionable bits.
Then there is the choice of subject: RocksDB. An LSM-tree. Charming. It seems we've abandoned the mathematical elegance of the relational model for what amounts to a cleverly sorted log file. They have gleefully traded Codd's twelve rules for the singular, frantic goal of writing things to disk slightly faster. One imagines Edgar Codd weeping into his seminal 1970 paper, "A Relational Model of Data for Large Shared Data Banks." Clearly, it is no longer on the syllabus, if it ever was.
My favorite part is the hand-wringing over 'variance.' They list its sources—compilers, "intermittent" overhead, "noisy neighbors"—as if they were unavoidable forces of nature, like continental drift or the tides.
The overhead from compaction is intermittent and the LSM tree layout can help or hurt CPU overhead during reads...
A system whose performance is a roll of the dice is not a system; it is a casino. They speak of the CAP theorem as if it were a license to build unpredictable contraptions, rather than a formal trade-off to be navigated with intellectual rigor. Clearly they've never read Stonebraker's seminal work on the matter; they're too busy blaming the cloud's "clever" frequency management.
And the methodology! An exhaustive treatise on compiler flags. clang+LTO versus gcc. It is a masterclass in rearranging deck chairs on a ship that has gleefully jettisoned its navigational charts in favor of a faster engine. They have produced pages of data to prove that one compiler can paint a flaking wall slightly faster than another, all while ignoring the fact that the building's foundation is made of sand and good intentions. 'But the paint job is 15% more efficient!' Yes, splendid.
All told, it is a valiant effort in the field of... empirical tinkering. Truly. One must commend the diligence required to produce so many charts about so little of consequence. Keep up the good work, children. Perhaps one day, when the thrill of measuring raw throughput wanes, you might stumble upon a library. There are some wonderful papers in there you might enjoy.
Oh, this is just wonderful. Truly. I appreciate the masterclass in minimalism. Why waste words on pesky details like “this will detonate your Tuesday night” when you can just get straight to the point?
“We recommend you upgrade.”
It’s just so… reassuring. It has the same gentle, calming energy as a Slack message from my manager at 2 AM that just says "you up?" You just know something beautiful and life-affirming is about to happen. This recommendation is a gift, really. A lovely, ticking gift, lovingly placed on my team's roadmap, which was already a beautifully rendered dumpster fire.
Reading this gives me the same nostalgic thrill as my last "simple" point-release upgrade. My therapist and I are still working through that one. It was a glorious evening, let me tell you.
And the solution is always so clear, right there in the docs!
...please refer to the release notes.
Ah, yes, the release notes. That famously brief, beach-read of a document. I’ll just curl up with that 300-page PDF of breaking changes, bug fixes for bugs I didn't know I had, and performance improvements that only apply if your entire infrastructure is running on a single Raspberry Pi. It’s my favorite kind of scavenger hunt, where the prize is discovering exactly which esoteric feature we unknowingly relied on has now been "thoughtfully deprecated."
I can’t wait to see what fresh hells 9.2.2 will unleash. My money is on a new, “more resilient” cluster coordination logic that decides our primary node is acting "a little stressed" and promotes a read-replica in another continent just to be safe. Or maybe the memory footprint is now so hyper-optimized that it frees memory it hasn't even used yet, creating a temporal paradox that can only be solved by sacrificing a junior engineer's weekend.
Honestly, thank you. This blog post is a perfect, crystalline reminder of the beautiful, predictable cycle of hope, despair, and late-night pizza. It’s a real gift.
And now, as a gift to myself, I will be setting up a firewall rule to block this domain. Cheers.
Well, look what the automated feed dragged into my inbox. Another version number ticked over. 9.1.8. The sheer, earth-shattering leap from 9.1.7 must have the whole world holding its breath. I've seen punch cards with more significant updates. Before you all rush to type yum update like it's the dawn of a new age, let's pour some cold, stale coffee on this "release."
First off, this relentless version churn is what you kids call "agile," right? Back in my day, we called it “not getting it right the first time.” We’d ship a version of DB2 and it would run, untouched, for years. It was so stable you could etch the version number into the side of the mainframe. You’re on version 9.1.8 and I guarantee you're already planning 9.1.9 to fix whatever "enterprise-ready" feature you broke in this one. We wrote our change logs in COBOL, and if you needed more than one page, you were sent back to design.
I'll bet the "full list of changes" is a marvel of modern marketing. You’re probably bragging about some new "observability" or "resiliency" feature. Let me tell you about resiliency. It’s 3 a.m. in a server room colder than my ex-wife's heart, swapping out 9-track tape reels for the nightly backup. It's spending twelve hours restoring a corrupted VSAM file from tape and getting it right, because the alternative was updating your resume. Your "resiliency" is a checkbox in a YAML file that probably just reroutes traffic to a data center that’s only slightly on fire.
You talk about fixing issues. The very fact that you have to announce this implies the previous version was a house of cards in a hurricane. You know what we had in 1985? Hierarchical databases. You put data in, it stayed there. You wanted it back? You got the same data. It wasn't magic, it was just competent engineering. You kids and your "eventual consistency" is just a fancy way of saying “we'll find your data eventually, maybe. No promises.” You’re probably fixing a bug where your “schemaless” design decided a customer's zip code was a floating-point number.
We recommend you upgrade to this latest version.
Of course you do. This whole ecosystem feels like it's held together with spit and hope. You have to keep moving so the whole thing doesn't collapse. We built systems that outlasted the companies that bought them. You build systems that need patching before the press release is even cached on the CDN. This whole "Elastic Stack" sounds like something you buy to keep your pants up, and it seems about as reliable.
And I'm supposed to read the "release notes" to understand the changes? Son, I used to get my documentation in three-ring binders so heavy you could use them as a boat anchor. Those manuals were declarative. They were final. Your release notes are an ephemeral webpage that will 404 in six months, just like the startup that built the flashy feature you're so proud of. This is the digital equivalent of writing your business plan on a cocktail napkin.
Congratulations on the new number. Let me know when you get to version 10. By then, maybe you'll have reinvented the B-tree index and called it something revolutionary like "synergistic data-kinetic mapping."
I won't be holding my breath. Or reading this blog again.
Alright, settle down, kids. Let me put down my coffee—the real kind, brewed in a pot that's been stained brown since the Clinton administration, not some single-use pod nonsense—and read this... this cri de cœur.
A nightmare. Your nightmare is your manager calling you "Mr. Clever" in a dream.
Oh, you sweet summer child. Let me tell you about a real nightmare. A nightmare is the air conditioning failing in the data center in July. It’s the lead operator tripping over the power cord to the DASD array during the nightly batch run. A nightmare is realizing the tape backup from last night—the one you physically carried to the off-site vault in a rainstorm—is corrupted, and the one from the night before was overwritten by an intern running the wrong JCL deck. Your little psychodrama is what we used to call a "Tuesday."
But okay, let's play along. I'm sure this riveting tale of dream-based performance reviews pivots to some groundbreaking new technology that solves all your problems. Let me guess. It's a fully-managed, multi-cloud, serverless, document-oriented database with AI-powered observability. Am I close?
I can just picture the bullet points in the rest of this post you mercifully spared me from:
"Effortless Scaling!" You mean you pay a cloud provider a king's ransom to throw more hardware at a problem you were too lazy to index properly. Back in my day, we did capacity planning. We had to justify every single CPU cycle and every kilobyte of memory to a guy named Frank who smoked three packs a day and treated the mainframe's resources like it was his firstborn child. You kids just put your credit card on file and call it "elasticity."
"Flexible Schema!" Oh, my favorite. You call it a "flexible schema," I call it a cry for help. It's anarchy. It's giving up. We had this thing called a COBOL copybook. It was a contract. You knew, down to the byte, what CUST-LAST-NAME PIC X(20) meant. It was rigid because the business was rigid. Your "flexibility" is just kicking the data integrity can down the road until some poor soul in analytics has to parse a million variations of a field called customer_name, custName, and c_nm.
"Revolutionary JSON Support!"
You're telling me you can store nested key-value pairs? Wow. Stop the presses. We had hierarchical databases in the seventies. It was called IMS. It was a pain in the ass, sure, but don't you dare stand there and tell me that storing a document is some kind of paradigm shift you invented last Tuesday. We were doing this stuff on green screens when your CEO was still learning how to use a fork. This isn't innovation; it's just cyclical amnesia with a better marketing department.
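And since I promised to show you, here is what that "flexibility" costs the poor soul downstream. A sketch, assuming Postgres JSONB; the table and column are invented, but the three spellings are the very ones I just complained about.

```sql
-- Three teams, three spellings, one "flexible schema." The analytics query now
-- has to guess which variant each document used before it can count customers.
SELECT coalesce(doc ->> 'customer_name',
                doc ->> 'custName',
                doc ->> 'c_nm')    AS customer_name,
       count(*)                    AS orders
FROM   customer_orders             -- hypothetical table of JSON documents
GROUP  BY 1;
```

A COBOL copybook would have rejected two of those three spellings before they ever touched the disk. Just saying.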
I can just see the success metrics. "We reduced query latency by 90% and improved developer velocity by 300%!" Compared to what? A system written by chimps on typewriters? A SELECT statement running against a flat file on a floppy disk? We had DB2 in 1985, son. It could join tables. It enforced constraints. It had a query optimizer that was smarter than half the new hires you've got. You think your newfangled database is hot stuff because it can handle a few thousand concurrent users? We were processing the entire payroll for a Fortune 100 company in a four-hour batch window on a machine with less processing power than your doorbell.
So please, spare me the nightmares about your manager. The real nightmare is watching an entire generation of engineers reinventing the wheel, slapping a new coat of paint and a dozen acronyms on it, and calling it a spaceship. You haven't solved the hard problems; you've just outsourced them and wrapped them in a REST API.
Now if you'll excuse me, I have to go check on a real database. One that doesn't have a "feelings" API. The coffee's getting cold.