
Now factor in redundancy and server costs, and the difference is not so huge. You're still paying a multiple of the raw disk cost, but unless you're storing at exabyte scale, I don't think it really matters in the grand scheme of things.

The cost of a single engineer to manage a MinIO cluster probably already outweighs the premium you're paying at any reasonable scale (i.e. most companies). And if you're a big player, the published prices are not what you're paying.
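
For a rough sense of where that break-even sits, here is a back-of-the-envelope sketch; the salary, S3-class price, and self-hosted cost per TB are all illustrative assumptions, not quoted figures.

    # Back-of-the-envelope break-even; every figure here is an assumption.
    engineer_per_month = 200_000 / 12      # assumed fully loaded salary, USD/year
    s3_per_tb_month = 23.0                 # assumed S3-class price, USD per TB-month
    selfhost_per_tb_month = 5.0            # assumed all-in self-hosted cost, USD per TB-month

    premium_per_tb = s3_per_tb_month - selfhost_per_tb_month
    breakeven_tb = engineer_per_month / premium_per_tb
    print(f"break-even at ~{breakeven_tb:,.0f} TB")   # ~926 TB, i.e. roughly a petabyte

Under those assumptions you need about a petabyte under management before one engineer's salary stops dominating the storage premium.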



They use wide erasure-coding stripes, so the redundancy overhead is ~1x within a data center. Let's assume 3 data centers.

It's well known how to build a storage node whose cost is mostly disks; let's say 50% of the hardware cost is not disks, i.e. a 2x factor.

So redundancy and server costs explain up to a 6x markup (~1x stripe overhead × 3 data centers × 2x hardware).

Power and networking really shouldn't account for the other 54x.
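
Spelling that estimate out, with the stripe geometry as a hypothetical stand-in for "wide":

    # Rough markup estimate from the comment above; the stripe geometry is assumed.
    data_shards, parity_shards = 17, 3        # hypothetical wide erasure-code stripe
    ec_overhead = (data_shards + parity_shards) / data_shards  # ~1.18x within one DC
    datacenters = 3                            # independent copy per data center
    non_disk_share = 0.5                       # 50% of node cost is not disks
    hardware_factor = 1 / (1 - non_disk_share) # -> 2x
    markup = ec_overhead * datacenters * hardware_factor
    print(f"~{markup:.1f}x")                   # ~7.1x; rounding the stripe overhead
                                               # down to ~1x gives the 6x above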


You also need to house these servers (including backup power etc.), manage them, maintain them, and develop and deploy the software. Also, you should really check the power usage of enterprise disks and servers: the electricity may be cheap, but the draw is nowhere near that of your average desktop. Then add that they need reserve capacity as well; you can go and store 100TB on S3 right now and AWS will be fine with it - but they need to have those disks up and running already.
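
For scale, a quick sketch of per-terabyte disk power cost; the wattage, capacity, electricity price, and PUE are all illustrative assumptions.

    # Rough power cost per stored TB; every figure here is an assumption.
    watts_per_disk = 10.0        # assumed enterprise HDD draw
    tb_per_disk = 20.0           # assumed disk capacity
    pue = 1.5                    # assumed data-center power overhead
    usd_per_kwh = 0.10           # assumed electricity price
    hours_per_month = 730
    kwh = watts_per_disk / 1000 * hours_per_month * pue
    usd_per_tb_month = kwh * usd_per_kwh / tb_per_disk
    print(f"~${usd_per_tb_month:.3f}/TB-month")   # ~$0.055 per TB-month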

Don't get me wrong, S3 is expensive, but replicating the availability, feature set and scalability is going to be very expensive, too. You can cheap out if you don't need these features, of course.


I agree with you, but once the 6x markup is explained, you only have a 10x factor left to account for out of the 60x.


It depends on whether you're comparing to raw disk or to the replicated storage, but yeah, spending 9x more on network and power than on the machines is still pretty extreme.
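
The two subthreads are framing the same gap differently; a tiny calculation shows both, taking the thread's 60x overall markup as the assumed figure.

    # Same gap in two framings; the 60x total is the thread's assumed figure.
    total_markup = 60.0      # assumed S3 price over raw disk cost
    explained = 6.0          # redundancy + server hardware, from above
    print(total_markup - explained)   # 54.0 -> "the other 54x" (additive)
    print(total_markup / explained)   # 10.0 -> "10x to explain" (multiplicative)

The multiplicative reading is the right one for stacked cost factors, which is why the reply lands on 10x rather than 54x.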



