In this case yes, but on the other hand Red Hat isn't obligated to publish the RHEL code unless you have the binaries. The GPLv2 license requires you to provide source code only to those you've given the compiled binaries. In theory, Meta could apply its own proprietary patches to Linux and not publish the source, as long as it runs that patched kernel only on its own servers.
RHEL source code is easily available to the public - via CentOS Stream.
For any individual RHEL package, you can find the source code with barely any effort. And if you had a list of the exact versions of every package used in RHEL, you could reassemble it by finding those versions in Stream. It's just not served up on a silver platter unless you're a paying customer: you have M package versions for N packages, all open source, and you have to figure out the correct combination yourself.
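To make the "M versions for N packages" point concrete, here's a toy sketch of that reconstruction step. The package names, versions, and the `reconstruct` helper are all made up for illustration; this is just the matching logic, not anything Red Hat or CentOS actually ships.

```python
# Toy illustration: given every package version ever published in a pool
# (roughly, what CentOS Stream has shipped over time) and a manifest of
# the exact versions one RHEL release used, pick out the matching ones.
# All names and versions below are hypothetical.

pool = {
    "bash":    ["5.1.8-4", "5.1.8-6", "5.1.8-9"],
    "glibc":   ["2.34-28", "2.34-40", "2.34-60"],
    "openssl": ["3.0.1-41", "3.0.7-6"],
}

manifest = {"bash": "5.1.8-6", "glibc": "2.34-40", "openssl": "3.0.7-6"}

def reconstruct(pool, manifest):
    """Return the subset of the pool matching the manifest exactly."""
    selected = {}
    for name, version in manifest.items():
        if version not in pool.get(name, []):
            raise LookupError(f"{name}-{version} not found in pool")
        selected[name] = version
    return selected

print(reconstruct(pool, manifest))
```

The matching itself is trivial once you have the manifest; the actual work (and the thing Red Hat sells) is knowing which M-of-N combination constitutes a given RHEL release.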
Can't anyone get a RHEL instance on their favorite cloud, dnf install whatever packages they want the sources of, email Red Hat to demand the sources, and then shut down the instance?
That would violate the GPL, which entitles recipients to the source in "the preferred form of the work for making modifications" -- which a web view is not.
But that would be silly, because all of the code and binaries are already available via CentOS Stream. There's nothing in RHEL that isn't already public at some point via CentOS Stream.
There's nothing special or proprietary about the RHEL code. Access to the code isn't the issue; the issue is reconstructing an exact replica of RHEL out of all the different package versions available to you, which form a huge temporal superset of what's specifically in RHEL.
Latency-aware scheduling is important in a lot of domains. Getting video frames or controller input delivered on a deadline is a similar problem to getting voice or video packets delivered on a deadline. Meanwhile housecleaning processes like log rotation can sort of happen whenever.
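As a toy illustration of the deadline idea (not Linux's actual EEVDF or SCHED_DEADLINE code), an earliest-deadline-first pick can be sketched with a min-heap; the tasks and deadline values here are invented for the example:

```python
import heapq

# Toy earliest-deadline-first (EDF) queue: each task carries a deadline
# ("this frame must be presented by t=16") and the scheduler always runs
# the task whose deadline expires soonest. Housecleaning work like log
# rotation gets a far-off deadline, so it runs "whenever".
tasks = [
    (16, "present video frame"),
    (5,  "deliver voice packet"),
    (10_000, "rotate logs"),
    (8,  "read controller input"),
]

heap = list(tasks)
heapq.heapify(heap)  # min-heap ordered by deadline

order = []
while heap:
    deadline, name = heapq.heappop(heap)
    order.append(name)

print(order)
# → ['deliver voice packet', 'read controller input',
#    'present video frame', 'rotate logs']
```

The interesting part in a real scheduler is everything this sketch omits: admission control, preemption, and what to do when deadlines can't all be met.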
Part of that is the assumption that Amazon/Meta/Google all have dedicated engineers who should be doing nothing but tuning performance for 0.0001% efficiency gains. At the scale of millions of servers, those tweaks add up to real dollar savings, and I suspect little of how they run is stock.
This is really just an example of survivorship bias and the power of Valve's brand. Big tech does in fact employ plenty of people working on the kernel to make 0.1% efficiency gains (for the reason you state); it's just not posted on HN. Someone would have found this eventually even if Valve hadn't.
And the people at FB who worked to integrate Valve's work into the backend and test it and measure the gains are the same people who go looking for these kernel perf improvements all day.
I vaguely remember reading about this when it happened. It was very recent, no? Within the last few years for sure.
> The Linux kernel began transitioning to EEVDF in version 6.6, moving away from the earlier Completely Fair Scheduler (CFS) in favor of a version of EEVDF proposed by Peter Zijlstra in 2023.
Ultimately, CPU schedulers are about choosing which attributes to weigh more heavily. See this[0] diagram from GitHub. EEVDF isn't a straight upgrade over CFS, nor is LAVD over either.
It's just that, traditionally, Linux schedulers have been rather esoteric to tune, and by default they're optimized for throughput and fairness over everything else. Good for workstations and servers, bad for everyone else.
I mean.. many SteamOS flavors (and Linux distros in general) have switched to Meta's Kyber I/O scheduler to fix microstutter issues.. the knife cuts both ways :)
The comment was perfectly valid, topical, and applicable. It doesn't matter what kind of improvement Meta supplied that everyone else took up; it could have been better cache invalidation or better USB mouse support.