Ass-covering-wise, you are probably better off going down with everyone else on us-east-1. The not-so-fun alternative: being targeted during an RCA explaining why you chose some random zone no one ever heard of.
Places nobody's ever heard of like "Ohio" or "Oregon"?
Yeah, I'm not worried about being targeted in an RCA and pointedly asked why I chose a region with way better uptime than `us-tirefire-1`.
What _is_ worth considering is whether your more carefully chosen region will actually perform better during an outage where some critical AWS resource goes down in Virginia and takes your region with it anyway.
IIRC, some AWS services are solely deployed on and/or entirely dependent on us-east-1. I don't recall which ones, but I very distinctly remember this coming up once.
The Route53 control plane is in us-east-1, with an optional temporary auto-failover to us-west-2 during outages. The data plane for public zones is globally distributed and highly resilient, with a 100% SLA: it continues to serve DNS records during a us-east-1 control plane outage, but the ability to make changes is lost for the duration.
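A minimal boto3 sketch of that split, assuming credentials are configured (the hosted zone ID, record name, and IP below are placeholders):

```python
import boto3
import socket

# Control plane: record changes go through the Route 53 API, which is
# backed by us-east-1. This call fails during a us-east-1 control plane
# outage. (Zone ID, record name, and IP are made up for illustration.)
route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)

# Data plane: resolution hits Route 53's globally distributed
# authoritative servers and keeps working even when the API above is down.
print(socket.gethostbyname("app.example.com"))
```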
CloudFront CDN has a similar setup. The SSL certificate and key have to be hosted in us-east-1 for control plane operations, but once deployed, the public data plane is globally or regionally dispersed. There's no auto-failover for the cert dependency yet, the SLA is only three 9s, and it also depends on Route53.
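A sketch of that cert dependency, again assuming boto3 and a placeholder domain: the ACM client has to be pinned to us-east-1 no matter which region the rest of your stack uses.

```python
import boto3

# CloudFront only accepts ACM certificates issued in us-east-1,
# so the region must be set explicitly here even if everything
# else runs elsewhere. (Domain is a placeholder.)
acm_use1 = boto3.client("acm", region_name="us-east-1")
response = acm_use1.request_certificate(
    DomainName="cdn.example.com",
    ValidationMethod="DNS",
)
# Reference this ARN in the CloudFront distribution's ViewerCertificate.
print(response["CertificateArn"])
```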
The elephant in the room for hyperscalers is the potential for rogue employees or a cyber attack on a control plane. Considering the high stakes and economic criticality of these platforms, both are inevitable and both have likely already happened.
I find it funny that we see complaints about software quality having gotten worse alongside people advocating for objectively riskier AWS regions for career-risk and blame-minimisation reasons.
Both happen for the same reason. How do customers react to either? If us-east-1 fails, nobody complains. If Microsoft uses a browser to render Windows components and it eats all of your RAM, nobody complains.
Oh, people complain. The companies responsible have just gotten to the point where they are so entrenched that they don't need to care at all about customer complaints.
The value now is not really money from customers, but a company's share price or valuation. That, together with the hard push for subscriptions from every single app and service, has devalued customer experience and feedback, because few people will go through the hell of the unsubscription process even after an outage or something as serious as stolen private data.
There's just not much motivation left to build better systems.
ISTR major resource unavailability in us-east-2 during one of the big us-east-1 outages because people were trying to fail over. Then a week later there was a us-east-2 outage that didn't make the news.
So if you tried to be "smart" and set up in Ohio, you got crushed by the thundering herd coming out of Virginia, then bitten again because AWS barely cares about your region and neither does anyone else.
The truth is Amazon doesn't have any real backup for Virginia. They don't have the capacity anywhere else and the whole geographic distribution scheme is a chimera.
This is an interesting point. As recently as mid-2023, us-east-2 was 3 campuses with a 5-building design capacity at each. I know they've expanded by multiples since, but us-east-1 would still dwarf them.
Makes one wonder, does us-west-2 have the capacity to take on this surge?
> being targeted during an RCA explaining why you chose some random zone no one ever heard of.
“Duh, because there’s an AZ in us-east-1 where you can’t configure EBS volumes for attachment to fargate launch type ECS tasks, of course. Everybody knows that…”
How about following the Well-Architected Framework and building something with a suitable number of 9s, so you can justify your decisions during a blameless postmortem? (Please stamp your buzzword bingo card for a prize.)
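As a back-of-envelope illustration of what stacking regions buys you on paper (the numbers are invented, and the independence assumption is exactly what a shared us-east-1 control plane dependency quietly breaks):

```python
# Composite availability of two independently failing regions.
single_region = 0.999                     # three 9s per region (illustrative)
both_down = (1 - single_region) ** 2      # both down at once, IF independent
active_active = 1 - both_down
print(f"two independent regions: {active_active:.6%}")  # ~99.9999%, six 9s on paper
```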
I look forward to the eventual launch of a new and improved version of your app using electron.
What’s the point in having 64 GB of DDR5 and 16 cores @ 4.2 GHz if not to be able to have a couple of Electron apps sitting at idle yet somehow still using the equivalent computational resources of the most powerful supercomputer on Earth in the mid-1990s?
We also plan to incorporate a full local LLM, to ensure we fill the memory up. It will be used to direct people to our online knowledge base, which will always be empty.
Make sure another LLM summarizes pages upon loading, but doesn’t load any content before that completes. Each page should have a few megs of JS tracking scripts siphoning the user’s CPU to create massive logs on AWS that nobody will ever use to improve anything.
Oh, and put everything behind the strictest Cloudflare settings you can, so that even a whiff of anything that’s not a Windows 11 laptop or an iPhone on a major U.S. residential or mobile network IP gets non-stop bot checks!
This to me was the real lesson of the outage. A us-east-1 outage is treated like bad weather. A regional outage can be blamed on the dev. us-east-1 is too big to get blamed, which is why it should be the region of choice for an employee.
us-east-2 is objectively a better region to pick if you want US east, yet you feel safer picking use1 because “I’m safer making a worse decision that everyone understands is worse, as long as everyone else does it as well.”
It's about risk profile. The question isn't "which region goes down the least" but "how often will I be blamed for an outage."
If you never get blamed for a us-east-1 outage, that's better for you than us-east-2, where you could get blamed, say, the 0.5% of the time it goes down while us-east-1 isn't.
But use1 is down 4x more often than use2 (AWS closely guards the numbers and won’t release them, but that is what I’ve seen from third-party analysis). Don’t you want your customers to say, “wow, half the internet was down today but XYZ service was up with no issues! I love them.”
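To spell out how the career incentive and the customer outcome diverge, a toy calculation (the 4x ratio is the third-party claim above; the absolute hours and blame probabilities are invented):

```python
# Invented numbers to make the tradeoff explicit.
use1_outage_hours = 8    # hypothetical annual downtime for us-east-1
use2_outage_hours = 2    # 4x less, per the claimed ratio
blame_prob_use1 = 0.0    # "too big to get blamed"
blame_prob_use2 = 0.5    # you picked the "weird" region, so you own the outage

print("expected blamed hours, use1:", use1_outage_hours * blame_prob_use1)  # 0.0
print("expected blamed hours, use2:", use2_outage_hours * blame_prob_use2)  # 1.0
# The career-optimal choice (use1) gives customers 4x the downtime.
print("customer downtime, use1 vs use2:", use1_outage_hours, "vs", use2_outage_hours)
```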
I can’t tell if it’s you thinking this way, or if your company is set up to incentivize this. But either way, I think it’s suboptimal.
That’s not about “risk profile” of the business or making the right decision for the customer, that’s about risk profile of saving your own tail in the organizational gamesmanship sense. Which is a shame, tbh. For both the customer and for people making tech decisions.
I fully appreciate that some companies may encourage this behavior, and we all need a job so we have to work somewhere, but this type of thinking objectively leads to worse technology decisions, and I hope I never have to work for a company that rewards it.
Edit: addressing blame when things go wrong… don’t you think it would be a better story to tell your boss that you did the right thing for the customer, rather than “I did this because everyone else does it, even though most of us agree it’s worse for the customer”? I’d assume I’d get more blame for the second decision than the first.
If my cloud provider goes down and my site is offline, my customers and my boss will be upset with me and demand I fix it as fast as possible. They will not care what caused it.
If my cloud provider goes down and also takes down Spotify, Snapchat, Venmo, Reddit, and a ton of other major services that my customers and my boss use daily, they will be much more understanding that there is a third party issue that we can more or less wait out.
Every provider has outages. US-east-2 will sometimes go down. If I'm not going to make a system that can fail over from one provider to another (which is a lot of work and can be expensive, and really won't be actively used often), it might be better to just use the popular one and go with the group.
us-east-2 goes down far, far less frequently than us-east-1. AWS doesn’t publicly release the outage numbers (they hold them very close to the chest) but some people have compiled the stats on their own if you poke around.
The regions provide the same functionality, so I see genuinely no downside or additional work in picking us-east-2 over us-east-1.
It seems like one of those no brainer decisions to me. I take pride in being up when everyone else is down. 5 9s or bust, baby!
I’ve seen people go with IBM Cloud because their salespeople were willing to discount more heavily than AWS/GCP/Azure were. Tier 2 players can be hungrier for your business than tier 1 players are. And here I’m talking about completely mainstream workloads (Linux, K8s, etc.)
Separately from that, if you are trying to move certain types of non-mainstream IBM workloads to the cloud (AIX, IBM i, z/OS), then IBM is tier 1 in that case.