TonyCoffman's comments

The Route53 control plane is in us-east-1, with an optional temporary auto-failover to us-west-2 during outages. The data plane for public zones is globally distributed and highly resilient, with a 100% SLA. It continues to serve DNS records during typical control-plane outages in us-east-1, but the ability to make changes is lost while the control plane is down.

The CloudFront CDN has a similar setup. The SSL certificate and key have to be hosted in us-east-1 for control plane operations, but once deployed, the public data plane is globally or regionally dispersed. There is no auto-failover for the certificate dependency yet. The SLA is only three nines. It also depends on Route53.
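
To make the certificate dependency concrete, here is a minimal Terraform sketch, assuming the AWS provider and placeholder names (cdn.example.com, origin.example.com): the ACM certificate has to be created through a us-east-1 provider alias, even if the rest of the stack is managed from another region.

  # Sketch only: names and regions are placeholders. DNS validation records
  # for the ACM certificate are omitted; CloudFront will not accept the cert
  # until it has been validated.
  provider "aws" {
    region = "eu-west-1"
  }

  provider "aws" {
    alias  = "use1"
    region = "us-east-1" # CloudFront only accepts ACM certs from us-east-1
  }

  resource "aws_acm_certificate" "cdn" {
    provider          = aws.use1
    domain_name       = "cdn.example.com"
    validation_method = "DNS"
  }

  resource "aws_cloudfront_distribution" "cdn" {
    enabled = true
    aliases = ["cdn.example.com"]

    origin {
      domain_name = "origin.example.com"
      origin_id   = "primary"

      custom_origin_config {
        http_port              = 80
        https_port             = 443
        origin_protocol_policy = "https-only"
        origin_ssl_protocols   = ["TLSv1.2"]
      }
    }

    default_cache_behavior {
      target_origin_id       = "primary"
      viewer_protocol_policy = "redirect-to-https"
      allowed_methods        = ["GET", "HEAD"]
      cached_methods         = ["GET", "HEAD"]

      forwarded_values {
        query_string = false
        cookies {
          forward = "none"
        }
      }
    }

    restrictions {
      geo_restriction {
        restriction_type = "none"
      }
    }

    viewer_certificate {
      acm_certificate_arn = aws_acm_certificate.cdn.arn # cross-region reference
      ssl_support_method  = "sni-only"
    }
  }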

The elephant in the room for hyperscalers is the potential for rogue employees or a cyber attack on a control plane. Considering the high stakes and economic criticality of these platforms, both are inevitable and both have likely already happened.


You don't need any of that. Define a data source to query the instances and then a DNS resource with for_each over the data source results.
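
A minimal sketch of what that looks like, assuming the AWS provider, an aws_instances data source filtered on a hypothetical Role=web tag, and a placeholder hosted zone ID:

  # Sketch only: the tag filter, zone ID, and names are placeholders.
  data "aws_instances" "web" {
    instance_tags = {
      Role = "web"
    }

    instance_state_names = ["running"]
  }

  resource "aws_route53_record" "web" {
    # Key each record by instance ID so adds/removes map cleanly in the plan.
    for_each = {
      for idx, id in data.aws_instances.web.ids :
      id => data.aws_instances.web.public_ips[idx]
    }

    zone_id = "Z0123456789ABCDEFGHIJ" # placeholder hosted zone ID
    name    = "web-${each.key}.example.com"
    type    = "A"
    ttl     = 60
    records = [each.value]
  }

The data source is only read when a plan or apply runs, so the record set reflects whatever instances exist at that moment.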


If your instances are created by Terraform itself, sure, you can use for_each with a data source to define DNS records dynamically. But if the instances are created outside of Terraform, such as by an auto-scaling group, then Terraform's static plan model becomes a problem.

Terraform data sources can read existing infrastructure, but they don't automatically trigger new resource creation based on real-time changes. That means your DNS records won't update unless you manually run terraform apply again, and they won't be part of a single apply cycle. In contrast, a real programming language could handle this as a continuous process, responding to infrastructure changes in real time.

So yes, you can query instances with a data source and use for_each—but unless you’re running Terraform repeatedly to catch changes, your DNS records won’t reflect real-time scaling events. That’s the exact limitation I’m talking about: Terraform isn’t imperative, it’s declarative, and it doesn’t react dynamically at runtime without external orchestration.


This is an artifact of how the Japanese system works. In a nutshell, they track households (families) with individuals as sub records of the family record.

Everybody on the family register record shares the same surname. Non-citizens are listed as distinct references (for foreign spouses and the like), and they may have a different surname from the household record.

There is more to it than this but that is the key thing to know about the system.


December 22, 2021 was the last partial impact we had in us-east-1 for EC2 instances. They had power issues in USE1-AZ4 that took a while to sort out.


From the roadmap, it looks like they are reimplementing the same/similar feature set.


Eucalyptus built a clone. It wasn’t one click but it also wasn’t hard to deploy. It really never made much of a dent.

I’m pretty sure it ended up where all software goes to die: HP.


The website is still up and points to a fork on GitHub:

- https://docs.eucalyptus.cloud/eucalyptus/5/install_guide/int...

- https://github.com/corymbia/eucalyptus/

There are quite a few moving parts. I think I got stuck just trying to comprehend the networking bits.


The killer app is fiber's reliability compared to copper and coax, where line errors are the norm, particularly every time it rains and water gets into a compromised circuit.

The speed is just a bonus.


The speed isn't just a bonus; it means you can do things that weren't possible before, or were painfully slow.

Just as dial-up held us back and the benefits of broadband weren't realized until we finally perfected things like streaming audio and video, the impact of gigabit-connected networks will take time and may come in an unexpected form.

The more immediate effects will be that distributed computing becomes no big deal, and accessing your files from the cloud or from a home device while remote becomes frictionless. Instead of necessarily lugging around a laptop you might travel lighter, confident that you can get access to what you need anyway.

It also makes apps like Dropcam possible where you stream endless hours of video to a remote server on the off chance you might need it. The cost of maintaining multiple streams becomes so low you don't even worry about it. No longer do you need to fret over the equivalent of popping a breaker when trying to watch Netflix as well.

The funny thing about these applications is they don't seem like a big deal when you have them, but when you suddenly lose them it's a huge problem. This is much the same way we take electricity and running water for granted, never thinking much of it, but when it cuts out we're in trouble.


Chattanooga's fiber network also has low ping times. That's something that matters to someone like me who uses SSH.


Like many companies, they are likely self-insured. If that is true, then they pay the actual costs of the claims right out of their own pockets rather than participating in a larger risk pool.

I agree: it's completely disgusting to bring these costs up in that context.


Have a look at Wickard v. Filburn

