2gremlin181's comments

Search results are by far the biggest one. Now, after 3-4 relevant results, it just shows Shorts and other recommended content.

What’s even the point of this? Is showing random videos really driving more engagement than surfacing the video I want to watch? This was shocking to me when I first came across it.

Yes. They know that most humans have poor impulse control, are easily pulled off task, and will fall into an addictive and lucrative loop. Makes perfect sense to show random unrelated shit.

It's been like this for a long time so it must be working. My guess is that fewer choices = easier to choose a video.

The article is glossing over a key point: Valve often sold the model at a 20% discount, for $320. Clearly that's because the amount a Steam Deck owner spends on Steam makes up the loss on the BoM. If they believed they could continue to do so, they would.


Genuinely curious, what are some use cases where you require live Twitter data in your LLM?


The topic of this HN thread: security, which is ever-evolving.


IMO this is a smart move. A lot of these next-gen dev tools are genuinely great, but the ecosystem is fragmented and the subscriptions add up quickly. If Cursor acquires a few more, like Warp or Linear, it could become a very compelling all-in-one dev platform.


The next step: Point Downdetector at Downdetector's Downdetector's Downdetector and create a cyclic dependency.


Copying my response over from another comment:

I totally get that, but how hard would it be to actually make calls to your own API from the status page? If it fails, display a vague message saying there might be issues and that you are looking into it. Clearly these metrics and alerts exist internally too. I'm not asking for an instant RCA or confirmation of the scope of the outage. Just stop gaslighting me.
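Roughly what I have in mind, as a minimal sketch (the health endpoint URL, timeout, and wording are placeholders I made up, not any vendor's actual setup):

    # Minimal sketch: a status-page backend probes its own public API and
    # falls back to a vague "investigating" banner when the probe fails.
    import requests

    API_HEALTH_URL = "https://api.example.com/health"  # hypothetical endpoint
    PROBE_TIMEOUT_S = 5

    def current_status() -> str:
        """Return the banner text the status page should render."""
        try:
            resp = requests.get(API_HEALTH_URL, timeout=PROBE_TIMEOUT_S)
            if resp.ok:
                return "All systems operational"
        except requests.RequestException:
            pass
        # Deliberately vague: no RCA, no scope claims, just an honest signal.
        return "We may be experiencing issues and are looking into it"

    if __name__ == "__main__":
        print(current_status())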


There are more and more status pages that automatically update based on uptime data (I built a service that does this - OnlineOrNot).

Early-stage startups typically have engineering own the status page, but as they grow, ownership usually transfers to customer support. These teams optimize for controlling the message rather than for technical detail, which explains the shift toward vaguer, slower incident descriptions.


Because you'd have a ton of downtime and they'd rather hide it if they could. :)

I used to work at a very big cloud service provider, and as the initial comment mentioned, we'd get a ton of escalations/alerts in a day, but the majority didn't necessarily warrant a status page update (only affecting X% of users, or not 'major' enough, or not having any visible public impact).

I don't really agree with that, but that was how it was. A manager would decide whether or not to update the status page, the wording was reviewed before being posted, etc. All of that takes a lot of time.


Not hard at all (our internal dashboards did just that). But to have that data posted publicly was not in the best interests of the business.

And honestly, having been on a few customer escalations where they threatened legal action over outages, one kind of starts to see things the business way...


> Just stop gaslighting me.

I heard this years ago from someone, but there's material impact to a company's bottom line if those pages get updated, which is why someone fairly senior usually has to "approve" it. Obviously it's technically trivial, but if they acknowledge downtime (as in the AWS case, for example), investors will have questions, it might make quarterly reports, and it might impact the stock price.

So it's not just a "status page," it's an indicator that could affect market sentiment, so there's a lot of pressure to leave everything "green" until there's no way to avoid it.


I feel like there should at least be some sort of disclaimer that tells me the status page can take up to xx minutes to show an outage, rather than making it seem as if it's updated instantaneously. That way I could wait those xx minutes before filing a ticket with support, instead of opening a case thinking it's an isolated problem on my end rather than a major outage.


IMO if you have an endpoint or service on your status page, you most definitely have an on-call rotation for it. Regarding the second point, your service might be down due to an AWS outage. It's an upstream issue and I fully understand that, but I shouldn't have to track things upstream by guessing which cloud provider you use. And where do we draw the line? What if it's not AWS but Hetzner or some other boutique provider?


Well, usually you have no way to even validate the issue if it's due to a bad route, and giving out an inaccurate status report reflects poorly on a pristine™ status page. Also, status updates send out (in some cases) millions of notifications, so (global) notifications are reserved only for P0-type issues.


I totally get that, but how hard would it be to actually make calls to your own API from the status page? If it fails, display a vague message saying there might be issues and that you are looking into it. Clearly these metrics and alerts exist internally too.

I'm not asking for an instant RCA or confirmation of the scope of the outage. Just stop gaslighting me.


> I totally get that, but how hard would it be to actually make calls to your own API from the status page?

Ah, so you're saying the status page should be hooked up to internal monitoring probers?

So how sure are you that it's the service that's broken, and not the probers? How sure are you that the granularity of the probers reflects the actual scope of the outage?

Also, this opens up the question of "well, why don't you have probing on the EXACT workflow that happened to break this time?!" And honestly, that's not helpful.

Say you have a complete end-to-end workflow prober for your web store. Should you publish "100% outage, the webstore is down!!" on your status page, automatically, because that very diligent prober failed to get into the shoe section of your store? That's probably not helpful to anybody.
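To make the granularity problem concrete, here's a rough sketch (the probe names and the 50% threshold are invented for illustration): auto-publish only when a sizeable fraction of independent probes fail, and treat anything partial as a human judgement call.

    # Sketch: aggregate independent workflow probes instead of letting one
    # failed prober flip the whole page to "down". Thresholds are arbitrary.
    from typing import Callable, Dict

    Probe = Callable[[], bool]  # returns True when the probed workflow succeeds

    def aggregate_status(probes: Dict[str, Probe], outage_fraction: float = 0.5) -> str:
        results = {name: probe() for name, probe in probes.items()}
        failed = [name for name, ok in results.items() if not ok]
        if not failed:
            return "operational"
        if len(failed) / len(results) >= outage_fraction:
            return "major outage"  # broad failure: worth auto-publishing
        return "degraded (" + ", ".join(failed) + ")"  # partial: needs a human call

    # Stubbed example: one niche workflow failing out of three
    print(aggregate_status({
        "checkout": lambda: True,
        "search": lambda: True,
        "shoe_section": lambda: False,
    }))  # -> "degraded (shoe_section)"

Even then, deciding whether "degraded (shoe_section)" deserves a public post is exactly the kind of call that keeps a human in the loop.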

> Clearly these metrics and alerts exist internally too.

Well, no. Probers can never cover every dimension across which a service can have an outage. You may think that the service is simple and has an obvious status, but you're using like 0.1% of the user surface, and have never even heard of the weird things that 99% of actual traffic does.

How do you even model your minority use case? Is it an outage? Or is your workflow maybe a tiny weird one, even though you think it's the straightforward one?

Especially since the nature of outages in complex systems tends to be complex to describe accurately, and a status page needs to boil it down to something simple.

In many cases even engineers inspecting the system can't always be certain whether real users are experiencing an outage, or whether they're chasing an internal user, or whether nothing is user-visible because internal retries are taking care of everything, or what.

Complex systems are often complex because the world is complex. And if the problem were simple and unchanging, there would be no reason to have outages in the first place.

And often the engineers helping phrase an outage statement need to trade verbosity for clarity.

Another thing: what do you do if you start serving 500s to 90% of traffic? An outage, right? Surely auto-publish to the status page? Oh, but it turns out it was a DoS attack, and no non-DoS traffic was affected. Can your monitoring tell the difference? Unlikely.


Automation is always significantly easier said than done. Furthermore, as has been emphasized elsewhere in this thread, there is a critical need for a human in the loop.


There are many providers who sell seedboxes, which is exactly what you're looking for. They generally include support for Jellyfin as well as other *arr apps. I personally use ultra.cc and have been mostly satisfied with the service.


Ye olde Bias-Variance tradeoff

