Ask HN: How would we build a cost-effective, fully redundant website?
2 points by euph0ria on April 2, 2013 | 10 comments
Our website has been down for five hours now. CloudSigma is our hosting partner; they are having serious issues with their firewalls and API, which renders our servers unusable.

This is during business hours in my country and, needless to say, many of our customers are upset, as they depend on our application.

However, we are a small startup and our finances are not as strong as we'd like, so the question I am posing is: how would we build a fully redundant website across multiple data centers in a cost-efficient way?

Is there anyone who has done this already and would like to share some insights?



If you want to be 100% reliable, that implies you need servers at more than one ISP/location. That way, if a single one dies, you're still up and running on the others.

Unfortunately that gets expensive because instead of having a single machine you suddenly need two/three/four/many.

Then you need to factor in the overhead of setting things up to work in a distributed fashion - database replication, load-balancing to route traffic, etc.

If you want to do things cheaply your best bet is to rent hosts at two locations, and have DNS handled at a third. Configure one host to be live, and leave the other receiving constant dumps of HTML/DB content. In the event of the main ISP dying you switch DNS to the second - that gives you a migration time of ~5 minutes.

i.e. If budget is a concern have a hot-spare and use DNS records with very short TTL settings, so you can switch promptly.
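
For illustration, here's a minimal sketch of that switch as a script you'd run from cron on a third box, assuming the zone lives with a provider that has an API (e.g. Route 53, driven from Python via boto3). The zone ID, record name, health-check URL, and IPs below are all hypothetical placeholders:

    # Minimal failover sketch. Assumptions: Route 53 hosts the zone, boto3
    # has credentials configured, and the primary exposes a /health endpoint.
    # ZONE_ID, RECORD, and the IPs are hypothetical placeholders.
    import boto3
    import requests

    ZONE_ID = "Z3EXAMPLE"          # hypothetical hosted-zone ID
    RECORD = "www.example.com."
    STANDBY_IP = "203.0.113.20"    # the hot spare

    def primary_healthy():
        try:
            r = requests.get("http://198.51.100.10/health", timeout=5)
            return r.status_code == 200
        except requests.RequestException:
            return False

    def point_record_at(ip):
        boto3.client("route53").change_resource_record_sets(
            HostedZoneId=ZONE_ID,
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD,
                    "Type": "A",
                    "TTL": 60,  # short TTL so the switch takes effect quickly
                    "ResourceRecords": [{"Value": ip}],
                },
            }]},
        )

    if not primary_healthy():
        point_record_at(STANDBY_IP)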


Thanks. I thought about this, but DNS poses one problem: in Europe many ISP resolvers override the TTL and cache the response for up to 24 hours. This means that even if I change the DNS record, those ISPs will keep serving stale responses to my customers and the problem will still be there. Therefore I can't rely on DNS alone, unfortunately.


Round-robin DNS: queries return the IPs of several servers, which spreads load across all of them and lets clients fall back to another address if one goes down. Amazon Route 53 works well. http://en.wikipedia.org/wiki/Round-robin_DNS
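
As a sketch of what that looks like in practice: a single A record carrying several IPs gives you basic round-robin. With Route 53 via Python's boto3 (the zone ID and IPs are made up):

    # Round-robin sketch: one A record with two values, so resolvers rotate
    # between them. Zone ID and IPs are hypothetical placeholders.
    import boto3

    boto3.client("route53").change_resource_record_sets(
        HostedZoneId="Z3EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "198.51.100.10"},  # host at provider A
                    {"Value": "203.0.113.20"},   # host at provider B
                ],
            },
        }]},
    )

One caveat: plain round-robin doesn't remove a dead IP from rotation, so some clients will still hit a failed host until the record is updated or you layer health checks on top.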


What's your stack? Azure offers 10 free sites, I believe, for the first year. That could be a good first port of call.

Other than that - what are you prepared to spend?


We basically use the LAMP stack.

The problem here is that CloudSigma is down completely, so we need to span multiple providers and data centers. That makes everything more complicated: DNS, load balancing, databases, etc.

We'd like to spend as little as possible, running on about $100 / month at the moment.


Then I'd take a look at Azure.

http://www.windowsazure.com/en-us/pricing/free-trial/?WT.mc_...

If you want more info drop me an email.


Thanks, but I don't see how Azure solves the problem. What if Azure goes down just like CloudSigma? How would that keep our site alive?


If the host running the VM fails (or is failing), Azure will move the VM to another host. You can also create a copy of the VM and store it in another Azure data centre region, spinning it up only if the primary goes down. You could even script that, as there is a REST API.
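
To make that concrete, here's a rough sketch of the "spin up the standby" step, using Azure's Python management SDK (azure-mgmt-compute) rather than raw REST calls. It assumes the standby VM already exists, deallocated, in a second region; the subscription ID, resource group, VM name, and health URL are all placeholders:

    # Sketch only. Assumes a standby VM already exists (deallocated) in a
    # second region, and azure-identity / azure-mgmt-compute are installed.
    # All names below are hypothetical.
    import requests
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

    def primary_down():
        try:
            r = requests.get("http://primary.example.com/health", timeout=5)
            return r.status_code != 200
        except requests.RequestException:
            return True

    if primary_down():
        compute = ComputeManagementClient(DefaultAzureCredential(),
                                          SUBSCRIPTION_ID)
        # Start the standby VM; wait() blocks until the operation completes.
        compute.virtual_machines.begin_start("my-rg", "standby-vm").wait()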


Right, but most cloud providers do these kinds of things. I know that Azure had some downtime recently as well due to an expired certificate. I'd like to know how to build a 100% reliable application.


100% Reliable?

According to Netcraft, DataPipe has had 100% uptime for the last 7 years; maybe speak to them about what they could do for you.




