Hacker News | marceloaltmann's comments

Readyset is an Incremental View Maintenance cache powered by a dataflow graph, which keeps cached result sets up to date as the underlying data changes in the database (MySQL/PostgreSQL). RocksDB is used only as the persistent storage layer; the optimization here is all about dataflow-graph execution, not the persistent storage itself.
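The core idea of incremental view maintenance can be shown in a few lines. This is a minimal sketch, not Readyset's actual dataflow engine: the cached result of a hypothetical `SELECT category, SUM(amount) FROM orders GROUP BY category` is kept current by applying row-level deltas instead of re-running the query.

```python
# Minimal IVM sketch (illustrative names, not Readyset internals):
# keep a grouped SUM up to date by applying per-row deltas.

class SumView:
    def __init__(self):
        self.result = {}  # category -> running SUM(amount)

    def on_insert(self, category, amount):
        # An inserted row only adjusts its own group.
        self.result[category] = self.result.get(category, 0) + amount

    def on_delete(self, category, amount):
        # A deleted row is subtracted back out; empty groups disappear.
        self.result[category] -= amount
        if self.result[category] == 0:
            del self.result[category]

view = SumView()
view.on_insert("books", 10)
view.on_insert("books", 5)
view.on_insert("games", 7)
view.on_delete("books", 10)
print(view.result)  # {'books': 5, 'games': 7}
```

Each write touches only the affected group, which is why the cached result set can stay warm without ever recomputing the full query.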


Straddled joins were still a bottleneck in Readyset even after switching to hash joins. By integrating Index Condition Pushdown into the execution path, we eliminated the inefficiency and achieved up to 450× speedups.


Why downvote?


Reads like an ad written by an LLM, is my guess.

It could just be that they translated from their original language to English and got that as a byproduct. Many such cases.


It also does not add anything interesting to the discussion. Like, why add a bland summary of the article?


So you don't have to read the article to figure out if you want to read the article?

I for one appreciate such comments, given the guidelines to avoid submission summaries.


it's literally the author of the article.


It is completely disingenuous and unfair to claim that something, especially a small blurb, is written by an LLM. And so what if it actually was written by an LLM. If you want to criticize something, do so on the merits or demerits of the points in it. You don't get a free pass by claiming it's LLM output, irrespective of whether it is or not.


I'm puzzled by this reply. It's perfectly fine for me to hypothesize on the reason for downvotes in response to someone else asking why it has been downvoted.

You're free to opine on the reason for downvotes too. This metacomment, however, is more noise than signal.


What you claimed is not even a potential reason in the universe of reasons. It is a demonstration of bias, an excuse to refrain from reason.

One line summaries of comprehensible articles can get downvoted because they don't add value beyond what's already very clear from the article.


it is objectively a potential reason in the universe of reasons, but you're 100% free to believe whatever you want, even if it's wrong

and the fact that multiple people upvoted my comment at a minimum suggests others also believe it to be a possible explanation

i have no idea why you've chosen this particular hill to die on, when neither of us stands to profit from this protracted exchange


What happens is that some people routinely use your purported reason "it's LLM generated" as an excuse to try to discredit anything at all, and it's not right, irrespective of whether the material is LLM generated or not. Any material should be critiqued on the basis of its own merits and demerits, irrespective of who or what authored it. We need to shed the pro-human bias.


Hard disagree. In fact, I'm very much pro-human and anti-unqualified "we need to..." statements.

Either way, I didn't even downvote the OP so you're beeping at the wrong human


I am pro-truth. Being pro-truth is more pro-human in the long term via indirect effect, than is being pro-human directly. Focusing on being pro-human can reward bad behavior among masses of humans, leading to their ultimate downfall. I will leave it at that.


Replication has been one of MySQL’s most powerful and relied-upon features since the early days — long before it had things like foreign keys or even subqueries. It’s one of the foundational pillars that made MySQL suitable for large-scale, production use. This blog post walks through how replication evolved over time, and why it remains one of the strongest features in the MySQL ecosystem.

The Beginning (MySQL 3.23 — early 2000s)

MySQL introduced statement-based replication (SBR) in version 3.23.15 in May 2000. This was a major milestone: it allowed users to replicate changes from one server (the source, previously called master) to others (replicas, previously slaves) by logging SQL statements executed on the source and replaying them on replicas.
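The "log statements, replay statements" mechanic can be sketched in a toy form. This uses sqlite3 purely as a stand-in for MySQL, and the `binlog` list as a stand-in for the binary log; none of this is MySQL internals.

```python
# Toy illustration of statement-based replication: the source appends
# every write statement to a binlog-like list, and the replica replays
# those statements verbatim, in order.
import sqlite3

binlog = []

def execute_on_source(conn, stmt):
    conn.execute(stmt)
    binlog.append(stmt)  # log the SQL statement itself, not row images

source = sqlite3.connect(":memory:")
replica = sqlite3.connect(":memory:")
for conn in (source, replica):
    conn.execute("CREATE TABLE t (id INTEGER, v TEXT)")

execute_on_source(source, "INSERT INTO t VALUES (1, 'a')")
execute_on_source(source, "UPDATE t SET v = 'b' WHERE id = 1")

# The replica applies the logged statements in source order.
for stmt in binlog:
    replica.execute(stmt)

print(replica.execute("SELECT v FROM t WHERE id = 1").fetchone())  # ('b',)
```

The well-known weakness of this scheme, and the reason row-based replication was added later, is that a statement with non-deterministic results (e.g. one using `NOW()` or `RAND()`) can produce different rows when replayed.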


That is a good point about the application changes. What is appealing about Readyset is that it does not require you to change your application code. You can just change your database connection string to point at it, and it will start to proxy your queries to your database. From there you can choose what you want to cache, and everything else (writes, unsupported queries, non-cached read queries) will be proxied to your database automatically.


On top of that, quoting @martypitt's reply:

> Most commonly the restrictions prevent you from launching a competing offering. In their case, you can't offer database-as-a-service using their code.

Meaning the self-hosted version is free to use on any number of servers, keeping in mind the competing-offering restriction.


Many people's biggest issue with BSL is that there are dozens of versions of the "additional use grant" which each have bespoke language with very critical clauses, none of which have any case law behind them (correct me if I'm wrong).

Even though software may be licensed under "BSL", it isn't really a standard, even though proponents tend to use the term "BSL" as if it were comparable to "GPL" or "BSD". The MariaDB BSL used here is quite different from the HashiCorp BSL, for instance.


In this case the additional use grant is:

  Additional Use Grant: You may make use of the Licensed Work, provided that
  you may not use the Licensed Work for a Database Service. A ‘Database
  Service’ is a commercial offering that allows third parties (other than
  your employees and contractors) to access the functionality of the Licensed Work.
IANAL but to me that sounds like it could be interpreted as almost any commercial work based on it since some part will "access the functionality", indirectly or directly. I know that is not what they intended, but the language is loose enough to allow almost any interpretation.


You can deploy on your own via their .deb packages - https://readyset.io/download

The advantage is that reading from a cache will be faster than reading from a read replica. The benefits increase even further if you have to perform computations on the fetched data.


I found one case study on their blog - https://blog.readyset.io/medical-joyworks-improves-page-load...

> It will be interesting to see if any of these introduce some form of write support over time

Writes performed by your application are automatically proxied (redirected) through Readyset to your database.


You add Readyset between your backend and your database in order to cache the data you fetch from the DB.


I'm answering the comment above mine that is asking for a use case. I'm saying that my use case is an example where JSON/HTML caching is not sufficient. 100% agree with you.


Caches are never invalidated. Readyset uses CDC to receive updates from PostgreSQL/MySQL and update the cache entries, so no invalidation is required. The price you pay is eventually consistent data, which is already the case if you use any async replication, as Readyset does.
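The update-instead-of-invalidate pattern looks roughly like this. It's a sketch assuming a CDC-style change stream (the kind Readyset gets from database replication); the event shape and table/key names are invented for illustration.

```python
# Sketch: each CDC event patches the cached rows in place, so entries
# are updated rather than evicted. Readers between the source write and
# the event's arrival see slightly stale data (eventual consistency).

cache = {("users", 1): {"id": 1, "name": "Ana"}}

def apply_change(event):
    key = (event["table"], event["pk"])
    if event["op"] == "insert":
        cache[key] = event["row"]
    elif event["op"] == "update":
        cache[key].update(event["row"])  # patch only the changed columns
    elif event["op"] == "delete":
        cache.pop(key, None)

# A change arrives on the replication stream; the cache is updated,
# never invalidated.
apply_change({"table": "users", "pk": 1, "op": "update",
              "row": {"name": "Bo"}})
print(cache[("users", 1)])  # {'id': 1, 'name': 'Bo'}
```

Because the cache is patched rather than dropped, the next read is still a hit, which is the main win over classic invalidation.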


That's how Readyset works; I'm asking the grandparent (rhetorically) how their own method of caching works for them.


GDB is the go-to tool for debugging and troubleshooting low-level applications, such as those written in C++.

Sometimes all you need is a simple breakpoint at some specific point, and to print a variable to inspect its value. Other times you need to go even further and loop through a memory structure such as a linked list.
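Both cases look something like the GDB session below; the location, variable, and struct field names (`my_app.cc:42`, `counter`, `head`, `next`, `value`) are hypothetical.

```gdb
# Break at a specific point and inspect a variable.
break my_app.cc:42
run
print counter

# Walk a hypothetical singly linked list node by node, using a GDB
# convenience variable ($node) and the `while` command.
set $node = head
while $node != 0
  print $node->value
  set $node = $node->next
end
```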

