Sure, that timeline looks bad when you leave out the 14 updates between 12:11am PDT and 8:04am PDT.
The initial cause appears to be a bad DNS entry that they rolled back at 2:22am PDT. Services started to recover after that, but as reports of EC2 failures kept rolling in, they found at 8:43am that a network issue with a load balancer was causing the problem.
I didn't say they fixed everything within those 14 updates. I'm pointing out that it's disingenuous to say they didn't start working on the issue until the start of business when there are 14 updates describing what they had found and done during that time.