I like the idea of supporting multi-cloud and avoiding vendor lock-in, but I doubt a tool alone can solve this.

We have been building a DBaaS (database as a service) for many years. So far we only support AWS and GCP, not Azure. It is really hard to make your product run stably across multiple clouds.

For example, the cloud vendors don't ship the same infrastructure at the same time. We wanted our database to run on K8s, but several years ago only GCP offered a managed K8s service (GKE), so we had to launch our DBaaS on GCP first. We also wanted customers to access our service via PrivateLink, but GCP added an equivalent much later than AWS, so we had to offer a VPC-based mechanism instead for a long time.

Even when all the cloud vendors provide the same service, there are still differences in the details. E.g., one customer complained to us about a backup problem on Google Cloud Storage: the backup speed they saw didn't match what our product spec stated. It turned out we had only fully tested this feature on AWS S3.
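
The lesson we took from that is to run the same backup smoke test against every object store we claim to support, not just the first one we built on. A minimal sketch of the kind of parity test I mean (bucket names, chunk size, and the throughput floor are made up for illustration, not our real numbers):

    # Hypothetical parity test: upload one backup chunk to each store and time it.
    # Bucket name, chunk size, and MIN_MBPS are illustrative placeholders.
    import io
    import os
    import time

    import boto3
    import pytest
    from google.cloud import storage as gcs

    CHUNK = os.urandom(8 * 1024 * 1024)  # 8 MiB of random data as a fake backup chunk
    MIN_MBPS = 5  # illustrative floor; set this from the product spec


    def put_s3(data: bytes) -> float:
        s3 = boto3.client("s3")
        start = time.monotonic()
        s3.upload_fileobj(io.BytesIO(data), "my-backup-bucket", "parity-test/chunk")
        return time.monotonic() - start


    def put_gcs(data: bytes) -> float:
        blob = gcs.Client().bucket("my-backup-bucket").blob("parity-test/chunk")
        start = time.monotonic()
        blob.upload_from_file(io.BytesIO(data))
        return time.monotonic() - start


    @pytest.mark.parametrize("put", [put_s3, put_gcs], ids=["s3", "gcs"])
    def test_backup_throughput(put):
        elapsed = put(CHUNK)
        mbps = len(CHUNK) / (1024 * 1024) / elapsed
        assert mbps >= MIN_MBPS, f"backup throughput {mbps:.1f} MB/s below spec"

Running something like this in CI against every vendor we claim to support would have caught the GCS discrepancy before a customer did.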

So even though our DBaaS now runs well on both AWS and GCP, we still don't have the confidence to deploy it on Azure. We plan to devote a quarter at the end of this year to testing DBaaS on Azure.

IMO, I still recommend interfacing with each cloud directly, so that we get to know the clouds better. Of course there are some awesome tools that help with functional encapsulation and hide the underlying implementation, but we must make sure those tools stay easy enough to maintain.
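
To be concrete about what "easy enough to maintain" means to me: the kind of encapsulation I'd trust is a deliberately thin interface with one small class per vendor, each written directly against that vendor's SDK. A rough sketch (class and bucket names are invented for illustration, not our actual code):

    # A deliberately thin object-store interface: one small class per vendor,
    # nothing hidden behind a generic multi-cloud SDK. Names are illustrative only.
    from abc import ABC, abstractmethod

    import boto3
    from google.cloud import storage as gcs


    class BackupStore(ABC):
        @abstractmethod
        def upload(self, local_path: str, key: str) -> None: ...

        @abstractmethod
        def download(self, key: str, local_path: str) -> None: ...


    class S3BackupStore(BackupStore):
        def __init__(self, bucket: str):
            self._s3 = boto3.client("s3")
            self._bucket = bucket

        def upload(self, local_path: str, key: str) -> None:
            self._s3.upload_file(local_path, self._bucket, key)

        def download(self, key: str, local_path: str) -> None:
            self._s3.download_file(self._bucket, key, local_path)


    class GCSBackupStore(BackupStore):
        def __init__(self, bucket: str):
            self._bucket = gcs.Client().bucket(bucket)

        def upload(self, local_path: str, key: str) -> None:
            self._bucket.blob(key).upload_from_filename(local_path)

        def download(self, key: str, local_path: str) -> None:
            self._bucket.blob(key).download_to_filename(local_path)

Keeping each implementation to a page of code means the team still sees every vendor-specific call and quirk, which is where the "know the clouds better" part comes from.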
