When I was there, DigitalOcean was writing a complete replacement for the Ceph S3 gateway because its performance under high concurrency was awful.
They swapped the whole service out of the stack and wrote a replacement in Go because its concurrency management was so much better, and Ceph's team and C++ codebase were too resistant to change.
Unrelated, but one of the more annoying aspects of whatever software they use now is lack of IPv6 for the CDN layer of DigitalOcean Spaces. It means I need to proxy requests myself. :(
I can confirm this. I used SeaweedFS to serve 1M daily users with 56 million images (~100TB) on just 2 servers with HDDs only, which Minio can't do. SeaweedFS's performance is much better than Minio's.
The only problem is that SeaweedFS documentation is hard to understand.
They seem to have gone all-in on AI, for commits and ticket management. Not interested in interacting with that.
Otherwise, the built-in admin in the single executable was nice, as was the support for tiered storage, but single-node parallel write performance was pretty unimpressive and it started throwing strange errors (investigating those is what led to the AI ticket discovery).
I'm not sure if it even has any sort of cluster consensus algorithm? I can't imagine it not eating committed writes in a multi-node deployment.
Garage and Ceph (well, radosgw) are the only open source S3-compatible object stores that have undergone serious durability/correctness testing. Anything else will most likely eat your data.
Hi there, RustFS team member here! Thanks for taking a look.
To clarify our architecture: RustFS is purpose-built for high-performance object storage. We intentionally avoid relying on general-purpose consensus algorithms like Raft in the data path, as they introduce unnecessary latency for large blobs.
Instead, we rely on Erasure Coding for durability and quorum-based strict consistency for correctness. A write is acknowledged only after the data has been safely persisted to a majority of drives. This means the concern about "eating committed writes" is addressed through strict read-after-write guarantees rather than a background consensus log.
While we avoid heavy consensus for data transfer, we utilize dsync—a custom, lightweight distributed locking mechanism—for coordination. This specific architectural strategy has been proven reliable in production environments at the EiB scale.
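For anyone who wants the general shape of that: below is a minimal Go sketch of quorum-acknowledged writes (my own illustration of the pattern described above, not RustFS's actual code; the drive interface and names are made up). The write fans out to all drives concurrently and is acknowledged only once a majority report durable persistence.

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    // writeQuorum fans a shard out to every drive concurrently and
    // succeeds only once a majority report durable persistence.
    func writeQuorum(drives []func([]byte) error, shard []byte) error {
        quorum := len(drives)/2 + 1
        results := make(chan error, len(drives))
        var wg sync.WaitGroup
        for _, persist := range drives {
            wg.Add(1)
            go func(persist func([]byte) error) {
                defer wg.Done()
                results <- persist(shard)
            }(persist)
        }
        wg.Wait()
        close(results)
        ok := 0
        for err := range results {
            if err == nil {
                ok++
            }
        }
        if ok >= quorum {
            return nil // safe to ack: read-after-write now holds
        }
        return errors.New("quorum not reached; reject the write")
    }

    func main() {
        good := func([]byte) error { return nil }
        bad := func([]byte) error { return errors.New("io error") }
        // 2 of 3 stand-in "drives" succeed, so the write is acknowledged.
        fmt.Println(writeQuorum([]func([]byte) error{good, good, bad}, []byte("blob")))
    }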
I’m Elvin from the RustFS team in the U.S. Thanks for sharing the benchmark; it’s helpful to see how RustFS performs in real-world setups.
We know trust matters, especially for a newer project, and we try to earn it through transparency and external validation. We were excited to see RustFS recently added as an optional service in Laravel Sail’s official Docker environment (PR #822). Having our implementation reviewed and accepted by a major ecosystem like Laravel was an encouraging milestone for us.
If the “non-technical reasons” you mentioned are around licensing or governance, I’m happy to discuss our long-term Apache 2.0 commitment and path to a stable GA.
Wow, the "hardened image" market is getting saturated. I saw at least 3 companies offering this at KubeCon.
Chainguard came to this first (arguably by accident, since they had several other offerings before they realized that people would pay (?!!) for an image that reported zero CVEs).
In a previous role, I found that the value of this for startups is immense. Large enterprise deals can quickly be killed by a security team that replies with "scanner says no". Chainguard offered images that report 0 CVEs, which would basically remove this barrier.
For example, a common CVE that I encountered was a glibc High CVE. We could pretty convincingly show that our app did not use this library in a way that made it vulnerable, but it didn't matter. A high CVE is a full stop for most security teams. We migrated to a Wolfi image and the scanner reported 0. Cool.
But with other orgs like Minimus (from the founders of Twistlock) coming into this, it looks like it's about to get crowded.
There is even a government project called Iron Bank that offers something like this to the DoD.
Net positive for the ecosystem but I don't know if there is enough meat on the bone to support this many vendors.
Most likely yes. There are a lot of enterprises out there that only trust paid subscriptions.
Paying for something “secure” comes with the benefit of risk mitigation: we paid X to give us a secure version of Y, hence it's not our fault when “bad thing” happened.
Counterpoint: most likely no. It really is about all the downstream impacts of critical and high findings in scanners: the risk of failing a SOC 2 audit, for example. Once that risk is removed, the value prop is removed too.
F500s trust the paid subscriptions because it means you can escalate the issue -- you're now a paying client so you get support if/when things explode -- and that also gives you a lever to shift blame or ensure compliance.
I recall being an infra lead at a Big Company that you've heard of and having to spend a month working with procurement to get like 6 Mirantis/Docker licenses to do a CCPA compliance project.
I don't think this is the case here. The reason you want to lower your CVEs is to say "we're compliant" or "it's not our fault a bad thing happened, we use hardened images". Paying doesn't really change that - your SOC2 doesn't ask how much you spent, it asks what your patching policy is. This makes that checkbox free.
Yep, differentiation is tricky here. Chainguard is expanding out to VM images and programming-language repos, but for the core offering of hardened container images there are a lot of options.
The question I'd be interested in is: outside of markets with heavy compliance requirements, how much demand is there for this as a paid service?
People like lower-CVE images, but are they willing to pay for them? I guess that's an advantage of Docker's offering: if it's free, there's less friction to trying it out than with a commercial offering.
If you distribute images to your customers it is a huge benefit to not have them come back with CVEs that really don't matter but are still going to make them freak out.
Even if you do SaaS, some customers will ask you about known vulnerabilities in your images, and making it easy to show a quick remediation schedule can make deals easier to close.
Depends what type of shop. If you're in a big dinosaur org and you 'roll your own' that ends up having a vulnerability, you get fired. If you pay someone else and it ends up having a vulnerability you get to blame it on the vendor.
Perhaps in theory, but I’d be willing to wager that most dinosaur orgs have so many unpatched vulns, they would need to fire everyone in their IT org to cover just the criticals
> There is even a govt project called Ironbank to offer something like this to the DoD.
Note that you don't have to be DoD to use Iron Bank images. They are available to other organizations too, though you do have to sign up for an account.
Many Iron Bank images have CVEs because many are based on ubi8/9, and while some have ubi8/9-micro bases, there are still CVEs. Iron Bank will disposition the criticals and highs. You can access their Vulnerability Tracking Tool and get a free report.
Some images like Vault are pretty bare (eg no shell).
Iron Bank was actually doing this before Chainguard existed, and as another commenter mentioned, it's not restricted to the DoD and is also free for anyone to use, though you do need an account.
My company makes its own competing product that is basically the same thing, and we (and I specifically) were pretty heavily involved in early Platform One. We sell it, but it's basically just a free add-on to existing software subscriptions, an additional inducement to make a purchase; it costs nothing extra on its own.
In any case, I applaud Docker. This can be a surprisingly frustrating thing to do, because you can't always just rebase onto your pre-hardened base image and have everything still work without taking some care to understand the application you're delivering, which is not your application. That was always my biggest complaint with Iron Bank and why I would not recommend anyone actually use it.

They break containers constantly, because hardening to them just means copying binaries out of the upstream image into a UBI container they patch daily to ensure it never has any CVEs. Sometimes this works, but sometimes it doesn't, and it's fairly predictable: every time Fedora takes a new glibc version that RHEL doesn't have yet, everything that links against it starts segfaulting when you try to copy from one to the other. I've told them this many times, but they still don't seem to get it and keep doing it. Plus, they break tags with the daily patching of the same application version, and you can't pin to a sha because Harbor only holds onto three orphaned shas that are no longer associated with a tag.
So, the short and long of it: I don't know about meat on the bone, but there is real demand and it's growing, at least in any kind of government or otherwise regulated business, because the government itself is mandating better supply-chain provenance. I don't think it entirely makes sense, frankly. The end customers don't seem to understand that, sure, we're signing the container image because we "built" it in the sense that we put together the series of tarballs described by a json file, but we're also delivering an application we didn't develop, on a base image full of upstream GNU/Linux packages we also didn't develop, and though we can assure you all of our employees are US citizens living in CONUS, we're delivering open source software. It's been contributed to by thousands of people from every continent on the planet, stretching decades into the past.
Unfortunately, a lot of customers and sales people alike don't really understand how the open source ecosystem works and expect and promise things that are fundamentally impossible. Nonetheless, we can at least deliver the value inherent in patching the non-application components of an image more frequently than whoever creates the application and puts the original image into a public repo. I don't think that's a ton of value, personally, but it's value, and I've seen it done very wrong with Ironbank, so there's value in doing it right.
I suspect it probably has to be a free add-on to some other kind of subscription in most cases, though. It's hard for me to believe it can really be a viable business on its own. I guess Chainguard is getting by somehow, but it also kind of feels like they're an investor darling getting by on their founders' reputations from past work more than on the current product. It's the container-ecosystem equivalent of selling an enterprise Linux distro, and I guess at least Red Hat, SUSE, and Canonical have all managed to do that, but not by just selling the Linux distro. They need other products plus support and professional services.
I think it's a no-brainer for anyone already selling a Linux distro to do this on top of it, though. You've already got the build infrastructure and organizational processes and systems in place.
I've been in contact with some of the security folks at Iron Bank. The last time we dug into Iron Bank images, they were simply worse than what most vendors offered. They just check the STIG box.
I'm not sure if Chainguard was first, but they did come early. The original pain point we looked into when building our company was pricing, but we've since pivoted because there are significant gaps in the market that remain unaddressed.
The AI water usage aspect is pretty clearly a lie, or at best a gross misunderstanding.
https://open.substack.com/pub/andymasley/p/the-ai-water-issu...
There are dozens of other things we use every day that have a larger impact.
I think there are real concerns here, but the water usage argument is a poor one.
This video might help explain 3D Gaussian splatting.
https://www.youtube.com/watch?v=wKgMxrWcW1s
Essentially, an entirely new graphics pipeline with different fundamental techniques which allow for high performance and fidelity compared to... what we did before(?)
Cool.
Not quite, it’s just a way to assign a color value to a point in space (think point clouds) based on photogrammetry. It’s voxels on steroids, but it's still drawn using the same techniques. It’s the magic of creating the splats that’s interesting.
A color value for each point is a good starting place to gain an intuition. Some readers might be interested to know that the color is not constant for each point, but instead dependent on viewing angle. That is part of what allows splats to look realistic. Real objects have some degree of specularity which makes them take on slightly different shades as you move your head.
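To make the view-dependence concrete: 3D Gaussian splatting stores per-splat spherical-harmonics coefficients, and shading evaluates them in the view direction. Here is a minimal degree-1 sketch in Go (real implementations typically go up to degree 3, and the coefficients below are made-up illustrative values):

    package main

    import "fmt"

    // Degree-0/1 spherical-harmonics basis constants.
    const (
        sh0 = 0.28209479177387814 // Y_0^0
        sh1 = 0.4886025119029199  // |Y_1^m|
    )

    // evalSH returns one color channel for a unit view direction (x, y, z)
    // from 4 per-splat coefficients: 1 for degree 0 plus 3 for degree 1.
    func evalSH(c [4]float64, x, y, z float64) float64 {
        return sh0*c[0] - sh1*y*c[1] + sh1*z*c[2] - sh1*x*c[3]
    }

    func main() {
        coeffs := [4]float64{1.0, 0.2, -0.1, 0.3} // hypothetical splat
        // The same splat shades differently from two viewing directions:
        fmt.Println(evalSH(coeffs, 0, 0, 1)) // viewed head-on
        fmt.Println(evalSH(coeffs, 1, 0, 0)) // viewed from the side
    }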
And since we normally see with binocular vision, a stereoscopic view adds another layer of realism you wouldn't otherwise perceive. Each eye sees the subsurface scattering differently, and your brain integrates the difference.
Sorry, but this is a horrible video. The guy just spews superlatives in an annoying voice until 4:30 (of a 6-minute video, mind you), when he finally gives a 10-second "explanation" of Gaussian splatting, which doesn't really explain anything, then jumps to a sponsored ad.
yeah... their older videos are a bit more useful from what I remember (more time spent on the research paper content, etc), but they've become so content-free that I just block the channel outright nowadays. it's the "this changes everything (every time, every day)" hype-channel for graphics.
I used HN Algolia to search for "internet cruise" and the top result was promising. Interestingly, the HN post no longer linked to an article! I used Wayback Machine on the HN post and found the link to the original article, which in turn returns a 404! Though, you can find that too in Wayback Machine.
Most style guides would call that an error, em dash should be used without surrounding spaces (while an en dash requires them). The only publication I know that has (recently?) eschewed that advice is WaPo. If the idea was to make it more visible, I believe the correct solution would have been for WaPo to use an en dash but render it longer in their typeface.
yes, i agree with you and this is how i used to use em dashes. chatgpt also agrees with you, which is why spaces are a pretty good indicator that it's not an LLM
This reads like someone who is quite out of touch with the trades. A large number of states have right-to-work laws that discourage union membership. I could go out and get a framing job today (and with the current immigration crackdown, probably have one by the end of the day). Having worked as a framer before college, I can say it will be an incredibly long time before these jobs have any level of automation (a thought which gives me comfort when I consider my own job prospects).
However, I'm thankful every day that I get to sit in an A/C office and type on a computer. Framing is hard, ruthless work. Most people won't last a day doing it because of how challenging it is.
Who is going to buy the houses? Who is going to own the land? Not many people need a plumber. Look back some decades. We can't all just work in the trades. It doesn't make sense from a supply and demand stance.
A decrease in quality of life is an acceptable cost to stay alive. In a very different economy, people will just fix their own toilet with scavenged or bartered parts.
My point was that humans can do most residential plumbing tasks easily, and the effort and cost involved in learning and acquiring tools might outweigh the desire to pay for a service in a future economy with scarce labor opportunities.
Also, in such a bleak future, there might not be running water where you currently own property.
But really you're answering your own question. The economy is not a zero-sum game; it adapts. Why do our current jobs exist? Because somebody is paying for what we produce. Then we take our pay and buy what other people produce. There could be an equilibrium today (or 20 years ago) where nobody has any jobs, but there are generally feedback loops that help get to a functioning economy.
It's not impossible that unemployment will go up, but it's not as simple as "LLMs will take our jobs." There are always more jobs to do, and there are always other equilibrium points. And it's not even clear LLMs are taking our jobs; one might argue that they'll end up creating more jobs.
>Why do our current jobs exist? Because somebody is paying for what we produce. Then we take our pay and buy what other people produce.
Because ample property and resources exist that require your (human) labor to turn into products.
If, for example, pumping water to AI data centers is more profitable than using it on crops and drinking water, "the economy" would gladly watch you dehydrate and die. Economic short circuits such as war, or governments, have to step in and ensure basic human needs are met, or a collapse of society can occur; such things have occurred in the past, so this isn't some kind of hypothetical.
Just remember there is no need for you in the post labor economy. If rich robot owners get the labor they need from other sources they'll gladly exterminate you and live in a much less populated planet.
> This reads like someone who is quite out of touch
No, it’s Indiana. They practice self-sabotage across many industries in the belief that the big-city folks just across the border will take all the jobs. Chicago, Cincinnati, Louisville. All of them just across the state line. Of course, very little thought about why large metros are just across the state line…
https://www.repoflow.io/blog/benchmarking-self-hosted-s3-com... was useful.
RustFS also looks interesting but for entirely non-technical reasons we had to exclude it.
Anyone have any advice for swapping this in for Minio?