
Over time they're going to touch things that people have been waiting for Microsoft to do for years. I don't have an example in mind at the moment, but it's a lot better to make the changes yourself than wait for the OS or console manufacturer to take action.




I was at Microsoft during the Windows 8 cycle. I remember hearing about a kernel feature I found interesting. Then I found out Linux had already had it for a few years at that point.

I think the reality is that Linux is ahead on a lot of kernel stuff. More experimentation is happening.


Linux is behind Windows wrt the (hybrid) microkernel vs. monolith design, which helps with having drivers and subsystems in user mode and supporting multiple personalities (the Win32, POSIX, OS/2 and WSL subsystems). Linux can hot-patch the kernel, but replacing core components is risky, and drivers and filesystems cannot be restarted independently.

I was surprised to hear that Windows just added native NVMe which Linux has had for many years. I wonder if Azure has been paying the SCSI emulation tax this whole time.

Probably; most of the stuff you see in Windows Server these days is backported from Azure improvements.

It was always wild to me that their installer was just not able to detect an NVMe drive out of the box in certain situations. I saw it a few times with customers when I was doing support for a Linux company.

Afaik Azure is mostly Linux

The user VMs are mostly Linux but Azure itself runs on a stripped down version of Windows Server and all the VMs are hosted inside Hyper-V. See https://techcommunity.microsoft.com/blog/windowsosplatform/a...

When the hood is open for anyone to tinker, lots of little weirdos get to indulge their ideas. Sometimes those ideas are even good!

Never underestimate the efficiency and amazing results of autistic focus.

"Now that's curious..."


Passion over paycheques

And behind on a lot of stuff. Microsoft's ACLs are nothing short of one of the best-designed permission systems there are.

On the surface, they can be as simple as Linux's UGO/rwx stuff if you want them to be, but you can really, REALLY dive into the technology and apply super specific permissions.


And they work on everything. You can have a mutex, a window handle or a process protected by ACL.
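
A minimal Win32 sketch of that point, assuming the usual advapi32 APIs (the SDDL string and mutex name here are just illustrative): a named mutex gets an explicit DACL attached through a security descriptor, the same way a file would.

    /* Sketch: attach a DACL to a named mutex via an SDDL string, showing that
       kernel objects (not just files) carry ACLs. Link against advapi32. */
    #include <windows.h>
    #include <sddl.h>
    #include <stdio.h>

    int main(void)
    {
        SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, FALSE };

        /* D:(A;;GA;;;BA) = DACL with one ACE: Allow Generic All to Built-in Administrators */
        if (!ConvertStringSecurityDescriptorToSecurityDescriptorA(
                "D:(A;;GA;;;BA)", SDDL_REVISION_1,
                &sa.lpSecurityDescriptor, NULL)) {
            fprintf(stderr, "SDDL conversion failed: %lu\n", GetLastError());
            return 1;
        }

        HANDLE mtx = CreateMutexA(&sa, FALSE, "Global\\ExampleAclMutex");
        if (!mtx) {
            fprintf(stderr, "CreateMutexA failed: %lu\n", GetLastError());
        } else {
            printf("mutex created with a custom DACL\n");
            CloseHandle(mtx);
        }

        LocalFree(sa.lpSecurityDescriptor);
        return 0;
    }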

The file permission system on Windows allows for super granular permissions, yes; administrating those permissions was a massive pain, especially on Windows file servers.

> Microsoft's ACLs are nothing short of one of the best-designed permission systems there are.

You have a hardened Windows 11 system. A critical application was brought forward from a Windows 10 box but it failed, probably a permissions issue somewhere. Debug it and get it working. You cannot pass this off to the vendor; it is on you to fix it. Go.


Is this a trick question? Because you run it as administrator in a sandboxed account.

Procmon.exe. Give me 2 minutes. You make it sound like it's such a difficult thing to do. It literally will not take me more than 2 minutes to tell you exactly where the permission issue is and how to fix it.

Procmon won't show you every type of resource access. Even when it does, it won't tell you which entity in the resource chain caused the issue.

And then you get security products that have the fun idea of removing privileges when a program creates a handle (I'm not joking, that's a thing some products do). So when you open a file with write access, and then try to write to the file, you end up with permission errors during the write (and not the open) and end up debugging for hours on end only to discover that some shitty security product is doing stupid stuff...

Granted, that's not related to ACLs. But for every OK idea Microsoft had, they have a dozen terrible ideas that make the whole system horrible.


Especially when the permission issue is up the chain from the application. Sure it is allowed to access that subkey, but not the great great grandparent key.

At this point you're just arguing for the sake of bashing on Microsoft. You said it yourself, that's not related to ACLs, so what are you doing, mate? This is not a healthy foundation for a constructive discussion.

and why is it not on the vendor of the critical application?

Because they aren't allowed on the system where it is installed, and also they don't deal with hardened systems.

Do you have any favorite docs or blogs on these? Reading about one of the best designed permissions systems sounds like a fun way to spend an afternoon ;)

You have ACLs on Linux too

ACLs in Linux were tacked on later; not everything supports them properly. They were built into Windows NT from the start and are used consistently across kernel and userspace, making them far more useful in practice.

Also, as far as I know Linux doesn't support DENY ACLs, which Windows does.


Yes it does.

since when?

Since some of us could be bothered reading docs. Give it a try and see how it works out for you.

Some of us can! I certainly enjoy doing it, and according to "man 5 acl" what you assert is completely false. Unless you have a particular commit or document from kernel.org you had in mind?

> Each of these characters is replaced by the - character to denote that a permission is absent in the ACL entry.

Wouldn't the o::--- default ACL, like mode o-rwx, deny others access in the way you're describing?
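
For anyone who wants to poke at this programmatically rather than via getfacl(1), here's a minimal sketch using libacl (link with -lacl; the path argument is just an example). An entry like "other::---" shows up in the text form and is the deny-by-omission being discussed:

    /* Sketch: print a file's POSIX ACL in long text form, where an entry
       such as "other::---" is the closest analogue to denying access. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/acl.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 1;
        }

        acl_t acl = acl_get_file(argv[1], ACL_TYPE_ACCESS);
        if (!acl) {
            perror("acl_get_file");
            return 1;
        }

        char *text = acl_to_text(acl, NULL);  /* one entry per line, e.g. "user::rw-" */
        if (text) {
            printf("%s", text);
            acl_free(text);
        }
        acl_free(acl);
        return 0;
    }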


See 6.2.1 of RFC8881, where NFSv4 ACLs are described. They are quite similar to Windows ACLs.

Here is a kernel dev saying they are against adding an NFSv4 ACL implementation. The relevant RichACLs patch never got merged: https://lkml.org/lkml/2016/3/15/52


Haha, sure. Sorry, it's not you, it's the ACLs (and my nerves). Have you tried configuring NFSv4 ACLs on Linux? Because kernel devs are against supporting them, you either use some other OS or have all sorts of "fun". Also, not to be confused with all sorts of LSM-based ACLs... Linux has ACLs in the most ridiculous way imaginable...

Not by default. Not as extensive as in Windows. What's your point?

Oh yeah for sure. Linux is amazing in a computer science sense, but it still can't beat Windows' vertically integrated registry/GPO based permissions system. Group/Local Policy especially, since it's effectively a zero coding required system.

Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.


Debian (and thus Ubuntu) has had full support for automated installs since the '90s. It's been built into `dpkg` since forever. That includes saving or generating answers to install-time questions, PXE deployment, ghosting, CloudInit and everything. Then stuff like Ansible/Puppet has been automating deployment for a long time too. They might have added yet another way of doing it, but full-stack deployment automation has been there for as long as Ubuntu has existed.

> Ubuntu just recently got a way to automate its installer (recently being during covid).

Preseed is not new at all:

https://wiki.debian.org/DebianInstaller/Preseed

RH has also had kickstart since basically forever now.

I've been using both preseeds and kickstart professionally for over a decade. Maybe you're thinking of the graphical installer?


> Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.

What?! I was doing kickstart on Red Hat (wasn't called Enterprise Linux back then) at my job 25 years ago; I believe we were using floppies for that.


Yeah, I have been working on the RHEL and Fedora installer since 2013 and already back then it had a long history almost lost to time - the git history goes all the way back to 1999 (the history was imported from CVS, as it predates Git) and that actually only covers the first graphical interface - it had automated installation support via kickstart and a text interface long before that, but that commit history has apparently been lost. And there seems to have been an even earlier, distinct installer before Anaconda, which likely also supported some sort of automated install.

BTW, we managed to get the earliest history of the project written down here by one of the earliest contributors, for anyone who might be interested:

https://anaconda-installer.readthedocs.io/en/latest/intro.ht...

As for how the automated installation on RHEL, Fedora and related distros works - it is indeed via kickstart:

https://pykickstart.readthedocs.io/en/latest/

Note how some commands were introduced way back in the single digit Fedora/Fedora Core age - that was from about 2003 to 2008. Latest Fedora is Fedora 43. :)


Still the king but developing/testing/debugging group policy issues is a miserable experience.

I disagree. Group policies are extremely straightforward to administer in my experience.

I always found it straightforward. Never had an issue, and I've implemented my fair share on thousands of devices and servers.

Not an implementer of group policy, more of a consumer. There are 2 things that I find extremely problematic about them in practice.

- There does not seem to be a way to determine which machines in the fleet have successfully applied a policy. If you need a policy to be active before deploying something (via a different method), or things break, what do you do?

- I’ve had far too many major incidents that were the result of unexpected interactions between group policy and production deployments.


That's not a problem with group policy. You're just complaining that GPO is not omnipotent. That's out of scope for group policies mate. You win, yeah yeah.... Bye

> Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.

1. cloud-init support was in RHEL 7.2, which was released on November 19, 2015. A decade ago.

2. Checking on Ubuntu, it looks like it was supported in Ubuntu 18.04 LTS in April 2018.

3. For admining tens of thousands of servers, if you're in the RHEL ecosystem you use Satellite and its Ansible integration. That's also been going on for... about a decade. You don't need much integration though, other than a host list of names and IPs.

There are a lot of people on this list handling tens of thousands or hundreds of thousands of linux servers a day (probably a few in the millions).


I'm surprised no one has said NixOS yet.

yeah, but you have IO Completion Ports…

IO_Uring is still a pale imitation :(


io_uring does more than IOCP. It's more like an asynchronous syscall interface that avoids the overhead of directly trapping into the kernel. This avoids some overheads IOCP cannot. I'm rusty on the details but the NT kernel has since introduced an imitation: https://learn.microsoft.com/en-us/windows/win32/api/ioringap...

IOCP is great and was ahead of Linux for decades, but io_uring is also great. It's a different model, not a poor copy.

I think they are a bit different - in the Windows kernel, all IO is asynchronous on the driver level, on Linux, it's not.

io_uring didn't change that, it only got rid of the syscall overhead (which is still present on Windows), so in actuality they are two different technical solutions that affect different levels of the stack.

In practice, Linux I/O is much faster, owing in part to the fact that Windows file I/O requires locking the file, while Linux does not.


io_uring makes synchronous syscalls async simply by offloading them to a pool of kernel threads, just like people have done for decades in userspace.

It's not the async part, it's the not invoking the function part - io_uring replaces syscalls with producer consumer ring buffers.

If that were true then presumably Microsoft wouldn't have ported it to Windows:

https://learn.microsoft.com/en-us/windows/win32/api/ioringap...

Although Windows registered network I/O (RIO) came before io_uring and for all I know might have been an inspiration:

https://learn.microsoft.com/en-us/previous-versions/windows/...


That argument holds no water. IOUring is essential for the performance of some modern POSIX programs.

You can see shims for fork() to stop it from tanking performance so hard, too. IOUring doesn't map at all onto IOCP; at least the Windows substitute for fork has "ZwCreateProcess" to work from. IOUring had nothing.

IOCP is much nicer from a dev point of view because your program can be signalled when a buffer has data on it but also with the information of how much data, everything else seems to fail at doing this properly.
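
A minimal sketch of that IOCP property, assuming the standard Win32 completion-port APIs (the posted completion here is simulated rather than coming from real overlapped I/O): GetQueuedCompletionStatus hands back the byte count alongside the completion itself.

    /* Sketch: reap one completion from an I/O completion port; the byte count
       arrives together with the completion key and OVERLAPPED pointer. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Standalone port; real code would associate file/socket handles with
           CreateIoCompletionPort(handle, port, key, 0) and issue overlapped I/O. */
        HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
        if (!port) return 1;

        /* Simulate a completion so the wait below has something to return. */
        PostQueuedCompletionStatus(port, 123 /* bytes */, (ULONG_PTR)1, NULL);

        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED *ov = NULL;
        if (GetQueuedCompletionStatus(port, &bytes, &key, &ov, INFINITE))
            printf("completion on key %llu: %lu bytes\n",
                   (unsigned long long)key, (unsigned long)bytes);

        CloseHandle(port);
        return 0;
    }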


The CQE for e.g. a successful read(2) operation will have the number of bytes read in the `res` field.
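
For comparison, a minimal liburing sketch (assuming liburing is installed and linked with -luring; the file path is arbitrary): an SQE goes into the submission ring, and the completion comes back with the byte count in cqe->res, as noted above.

    /* Sketch: submit one read through the submission ring, reap its completion,
       and read the byte count from cqe->res. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <liburing.h>

    int main(void)
    {
        struct io_uring ring;
        if (io_uring_queue_init(8, &ring, 0) < 0) return 1;

        int fd = open("/etc/hostname", O_RDONLY);   /* arbitrary readable file */
        if (fd < 0) return 1;
        char buf[256];

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
        io_uring_submit(&ring);                      /* producer side: publish the SQE */

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);              /* consumer side: wait for the CQE */
        if (cqe->res >= 0)
            printf("read %d bytes\n", cqe->res);     /* res carries the byte count, like read(2) */
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return 0;
    }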

Yeah, and Linux is waaay behind in other areas. Windows has had a secure attention sequence (ctrl-alt-del to log in) for several decades now. Linux still doesn't.

Linux (well, more accurately, X11), has had a SAK for ages now, in the form of the CTRL+ALT+BACKSPACE that immediately kills X11, booting you back to the login screen.

I personally doubt SAK/SAS is a good security measure anyways. If you've got untrusted programs running on your machine, you're probably already pwn'd.


That's not a SAK; you can disable it with setxkbmap. A SAK is by design impossible to disable, and one exists on Linux: Alt+SysRq+K.

Unfortunately it doesn't take any display server into consideration, both X11 and Wayland will just get killed.


There are many ways to disable CTRL+ALT+DEL on Windows too, from registry tricks to group policy options. Overall, SAK seems to be a relic of the past that should be kept far away from any security consideration.

There shouldn't be any non-privileged ways to disable ctrl-alt-del.

The "threat model" (if anyone even called it that) of applications back then was bugs resulting in unintended spin-locks, and the user not realizing they're critically short on RAM or disk space.

This setup came from the era of Windows running basically everything as administrator or something close to it.

The whole windows ecosystem had us trained to right click on any Windows 9X/XP program that wasn’t working right and “run as administrator” to get it to work in Vista/7.


Please check the related Wikipedia article, updated to reflect the recent secure attention key work in the Linux world: https://en.wikipedia.org/wiki/Secure_attention_key


That's not the same thing at all.

No, it's not. It has various functionality, as shown by the built-in help:

> Example output of the SysRq+h command:

> sysrq: HELP : loglevel(0-9) reboot(b) crash(c) terminate-all-tasks(e) memory-full-oom-kill(f) kill-all-tasks(i) thaw-filesystems(j) sak(k) show-backtrace-all-active-cpus(l) show-memory-usage(m) nice-all-RT-tasks(n) poweroff(o) show-registers(p) show-all-timers(q) unraw(r) sync(s) show-task-states(t) unmount(u) force-fb(v) show-blocked-tasks(w) dump-ftrace-buffer(z) dump-sched-ext(D) replay-kernel-logs(R) reset-sched-ext(S)

But note "sak (k)".


That kills X! Hardly useful.

How's it go again, 'raising all elephants is utterly boring'?

Like the GP says in sibling, Alt+SysRq+K is SAK on Linux. But it doesn't work with graphical environments.

Is that something Linux needs? I don’t really understand the benefit of it.

The more powerful form is the UAC full privilege escalation dance that Win 7+(?) does, which is a surprisingly elegant UX solution.

   1. Snapshot the desktop
   2. Switch to a separate secure UI session
   3. Display the snapshot in the background, greyed out, with the UAC prompt running in the current session and topmost
It avoids any chance of a user-space program faking or interacting with a UAC window.

Clever way of dealing with the train wreck of legacy Windows user/program permissioning.


My only experience with non-UAC endpoint privilege management was BeyondTrust and it seemed to try to do what UAC did but with a worse user experience. It looks like the Intune EPM offering also doesn't present as clear a delineation as UAC, which seems like a missed opportunity.

One of the things Windows did right, IMO. I hate that elevation prompts on macOS and most linux desktops are indistinguishable from any other window.

It's not just visual either. The secure desktop is in protected memory, and no other process can access it. Only NT AUTHORITY\SYSTEM can initiate showing it and interact with it in any way; no other process can.

You can also configure it to require you to press CTRL+ALT+DEL on the UAC prompt to be able to interact with it and enter credentials as another safeguard against spoofing.

I'm not even sure if Wayland supports doing something like that.


>Display the snapshot in the background, greyed out,

Is there an offset? I could have sworn things always seemed offset to the side a little.


It made a lot more sense in the bygone years of users casually downloading and running exe's to get more AIM "smilies", or putting in a floppy disk or CD and having the system autoexec whatever malware the last user of that disk had. It was the expected norm for everybody's computer to be an absolute mess.

These days, things have gotten far more reasonable, and I think we can generally expect a linux desktop user to only run software from trusted sources. In this context, such a feature makes much less sense.


It's useful for shared spaces like schools, universities and internet cafes. The point is that without it you can display a fake login screen and gather people's passwords.

I actually wrote a fake version of RMNet login when I was in school (before Windows added ctrl-alt-del to login).

https://www.rmusergroup.net/rm-networks/

I got the teacher's password and then got scared and deleted all trace of it.


Tbh I'm starting to think that I do not see Microsoft being able to keep its position in the OS market; with Steam doing all the hard work and having a great market to play with, the vast number of distributions to choose from, and most importantly how easy it has become to create an operating system from scratch, they have not only lost all possible appeal, they seem stuck on a really weird fetishism with their taskbar and just haven't given me any kind of reason to be excited about Windows.

Their research department rocks, however, so it's not a full bash on Microsoft at all - I just feel like they are focusing on other, way more interesting stuff.


Kernel improvements are interesting to geeks and data centers, but open source is fundamentally incompatible with great user experience.

Great UX requires a lot of work that is hard but not algorithmically challenging. It requires consistency and getting many stakeholders to buy in. It requires spending lots of time on things that will never be used by more than 10-20% of people.

Windows got a proper graphics compositor (DWM) in 2006 and made it mandatory in 2012. macOS had one even earlier. Linux fought against Compiz, and while Wayland feels inevitable, vocal forces still complain about and argue against it. Linux has a dozen incompatible UI toolkits.

Screen readers on Linux are a mess. High contrast is a mess. Setting font size in a way that most programs respect is a mess. Consistent keyboard shortcuts are a mess.

I could go on, but these are problems that open source is not set up to solve. These are problems that are hard, annoying, not particularly fun. People generally only solve them when they are paid to, and often only when governments or large customers pass laws requiring the work to be done and threaten to not buy your product if you don't do it. But they are crucially important things to building a great, widely adopted experience.


Your comment gives the impression that you think open source software is only developed by unpaid hobbyists. This is not true; it's quite an outdated view. Many things are worked on by developers paid full time. It also implies that people are mostly interested in algorithmically challenging stuff, which I don't think is the case.

Accessibility does need improvement. It seems severely lacking. Although your link makes it look like it's not that bad actually, I would have expected worse.


…and you are implying that Microsoft Windows 11 is a better example of ”great user experience”?

If you have anything less than perfect vision and need any accessibility features, yes. If you have a High DPI screen, yes. In many important areas (window management, keyboard shortcuts, etc.), yes.

Here's one top search result that goes into far more detail: https://www.reddit.com/r/linux/comments/1ed0j10/the_state_of...


For the general user, yes absolutely.

Linux DEs still can't match the accessibility features alone.

yeah, there's layers and layers of progressively older UIs layered around the OS, but most of it makes sense, is laid out sanely, and is relatively consistent with other dialogs.

macOS beats it, but it's still better in a lot of ways than the big Linux DEs.


A Start menu in the middle of the screen that takes a couple of seconds to even load (because it is implemented in React badly enough to be this slow), only to show ads next to everything, is perfect user experience.

Every other button triggering Copilot assures even better UX goodness.


You can move the menu to the left and disable the animations so it opens instantly.

I prefer it. Linux desktop feels a lot more laggy to me on the same hardware.

Of course that is minus all the recent AI/ad stuff on Windows…


> Tbh i'm starting to think that I do not see microsoft being able to keep it's position in the OS market

It's a big space. Traditionally, Microsoft has held the multimedia, gaming and many professional segments, but with Valve doing a large push into the first two and Microsoft not even giving it a half-hearted try, it might just be that corporate computers continue using Microsoft, people's home media equipment is all Valve, and hipsters (and others...) keep on using Apple.


I think that's the most likely way it'll go.

Windows will remain as the default "enterprise desktop." It'll effectively become just another piece of business software, like an ERP.

Gamers, devs, enthusiasts will end up on Linux and/or SteamOS via Valve hardware, creatives and personal users that still use a computer instead of their phone or tablet will land in Apple land.


With the massive adoption of web apps in the enterprise that I have seen, I would expect Windows to become irrelevant or even a liability in business use as well.

Still, some sort of OS is required to run that browser that renders the websites, and some team needs to manage a fleet of those computers running that OS. And that's where Microsoft will sit, since they're unable to build good consumer products, they'll eventually start focusing exclusively on businesses and enterprises.

If you just need something that runs a browser, can't you do that with something like Chrome OS/MacOS/RHEL Workstation/whatever SUSE has for workstation users ? :)

Add to that all the bullshit they have been pushing on their customers lately:

* OS-level ads

* invasive AI integration

* dropping support for 40% of their installed base (Windows 10)

* forcing useless DRM/trusted-computing hardware - TPM - as a requirement to install the new and objectively worse Windows version, with even more spying and worse performance (Windows 11)

With that I think their prospects are bleak & I have no idea who would install anything other than SteamOS or Bazzite in the future with this kind of behavior from Microsoft.


"It just works" sleep and hibernate.

"Slide left or right" CPU and GPU underclocking.


“it just works” sleep was working, at least on basically every laptop I had the last 10 years…

until the new s2idle stuff that Microsoft and Intel have foisted on the world (to update your laptop while sleeping… I guess?)


From what I read, it was a lot of the prosumer/gamer brands (MSI, Gigabyte, ASUS) implementing their part of sleep/hibernate badly on their motherboards. Which honestly lines up with my experience with them and other chips they use (in my case, USB controllers). Lots of RGB and maybe overclocking tech, but the cheapest power management and connectivity chips they can get (arguably what usually gets used the most by people).

Sleep brokenness is ecosystem-wide. My Thinkpad crashes/freezes during sleep 3 times a week. Lenovo serviced/replaced it 3 times to no avail.

I have never had any sleep issues with my Macs.

Power management is a really hard problem. It's the stickiest of programming problems, a multi-threaded sequence where timing matters across threads (sometimes down to the ns). I'm convinced only devices that have hardware and software made by the same company (Apple, Android phones, Steam Deck, maybe Surface laptops) have a shot in hell at getting it perfect. The long-tail/corner cases and testing are a nightmare.

As an example, if you have a mac, run "ioreg -w0 -p IOPower" and see all the drivers that have to interact with each other to do power management.


It never really worked in games even with S3 sleep. The new connected standby stuff created new issues but sleeping a laptop while gaming was a roulette wheel. SteamOS and the like actually work, like maybe 1/100 times I've run into an issue. Windows was 50/50.

Sleep and hibernate don't just work on Windows unless Microsoft works with laptop and board manufacturers to make Windows play nice with all those drivers. It's inevitable that it's hit and miss on any other OS that manufacturers don't care much about. Apple does nearly everything inside their walls; that's why it just works.

“It just works” sadly isn’t true across the Apple Ecosystem anymore.

Liquid Glass ruined multitasking UX on my iPad. :(

Also my MacBook (M4 Pro) has random freezes where Finder becomes entirely unresponsive. Not sure yet why this happens but thankfully it's pretty rare.


Regardless of how it must be implemented, if this is a desirable feature then this explanation isn’t an absolution of Linux but rather an indictment: its development model cannot consistently provide this product feature.

(And same for Windows to the degree it is more inconsistent on Windows than Mac)


> its development model cannot consistently provide this product feature.

The real problem is that the hardware vendors aren't using its development model. To make this work you either need a) the hardware vendor to write good drivers/firmware, or b) the hardware vendor to publish the source code or sufficient documentation so that someone else can reasonably fix their bugs.

The Linux model is the second one. Which isn't what's happening when a hardware vendor doesn't do either of them. But some of them are better than others, and it's the sort of thing you can look up before you buy something, so this is a situation where you can vote with your wallet.

A lot of this is also the direct fault of Microsoft for pressuring hardware vendors to support "Modern Standby" instead of, rather than in addition to, S3 suspend, presumably because they're organizationally incapable of making Windows Update work efficiently, so they need Modern Standby to paper over it by having it run when the laptop is "asleep", and then they can't have people noticing that S3 is more efficient. But Microsoft's current mission to get everyone to switch to Linux appears to be in full swing now, so we'll see if their efforts on that front manage to improve the situation over time.


I should have said ‘product development’ model versus just ‘development’ to be more clear. To state another way: Linux has no way, no function, no pathway to providing this. This is not really surprising, because it isn’t the work software developers find fun and self-rewarding, but rather more the relatively mundane business-as-usual scope of product managers and business development folks.

… And that’s all fine, because this is a super niche need: effectively nobody needs Linux laptops and even fewer depend on sleep to work. If ‘Linux’ convinced itself it really really needed to solve this problem for whatever reason, it would do something that doesn’t look like its current development model, something outside that.

Regardless, the net result in the world today is that Linux sleep doesn’t work in general.


It's not the development model at fault here. It's the simple fact that Windows makes up nearly the entire user base for PCs. Companies make sure their hardware works with Windows, but many don't bother with Linux because it's such a tiny percentage of their sales.

Except when it doesn't. I can't upgrade my Intel graphics drivers to any newer version than what came with the laptop or else my laptop will silently die while asleep. Internet is full of similar reports from other laptop and graphics manufacturers and none have any solutions that work. The only thing that reliably worked is to restore the original driver version. Doesn't matter if I use the WHQL version(s) or something else.

The feature itself works. There are just hardware that is buggy and don't support it properly.

That's a vastly different statement.


> Regardless of how it must be implemented, if this is a desirable feature then this explanation isn’t an absolution of Linux but rather an indictment: its development model cannot consistently provide this product feature.

The problem is: the specifications of ACPI are complex, Windows' behavior tends to be pretty much trash and most hardware tends to be trash too (AMD GPUs for example were infamous for not being resettable for years [1]), which means that BIOSes have to work around quirks on both the hardware and software. Usually, as soon as it is reasonably working with Windows (for a varying definition of "reasonably", that is), the ACPI code is shipped and that's it.

Unfortunately, Linux follows standards (or at least, it tries to) and cannot fully emulate the numerous Windows quirks... and on top of that, GPUs tend to be hot piles of dung requiring proprietary blobs that make life even worse.

[1] https://www.nicksherlock.com/2020/11/working-around-the-amd-...


Sleep has always worked on my desktop with a random Asus board from the early 2020s with no issues aside from one Nvidia driver bug earlier this year (which was their fault not MS's). Am I just really lucky?

On my Framework 13 AMD: sleep just works on Fedora. Sleep is unreliable on Windows; if my fans are all running at full speed while running a game and I close the lid to begin sleeping, it will start sleeping and eventually wake up with all fans blaring.

I don't understand this comment in this context. Both of these features work on my Steam Deck. Neither of them have worked on any Windows laptop my employers have foisted upon me.

That requires driver support. What you're seeing is Microsoft's hardware certification forcing device vendors to care about their products. You're right that this is lacking on Linux, but it's not a slight on the kernel itself.

Both of these have worked fine for the last 15 years or so on all my laptops.

Kernel level anti-cheat with trusted execution / signed kernels is probably a reasonable new frontier for online games, but it requires a certain level of adoption from game makers.

This is a part of Secure Boot, which Linux people have raged against for a long time. Mostly because the main key signing authority was Microsoft.

But here's my rub: no one else bothered to step up to be a key signer. Everyone has instead whined for 15 years and told people to disable Secure Boot and the loads of trusted compute tech that depends on it, instead of actually building and running the necessary infra for everyone to have a Secure Boot authority outside of big tech. Not even Red Hat/IBM even though they have the infra to do it.

Secure Boot and signed kernels are proven tech. But the Linux world absolutely needs to pull their heads out of their butts on this.


The goals of the people mandating Secure Boot are completely opposed to the goals of people who want to decide what software they run on the computer they own. Literally the entire point of remote attestation is to take that choice away from you (e.g. because they don't want you to choose to run cheating software). It's not a matter of "no one stepped up"; it's that Epic Games isn't going to trust my secure boot key for my kernel I built.

The only thing Secure Boot provides is the ability for someone else to measure what I'm running and therefore the ability to tell me what I can run on the device I own (most likely leading to them demanding I run malware like the adware/spyware bundled into Windows). I don't have a maid to protect against; such attacks are a completely non-serious argument for most people.


And all this came from big game makers turning their games into casinos. The reason they want everything locked down is money is on the line.

anti-cheat far precedes the casinoification of modern games.

nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.

anti-cheat is essentially existential for studios/publishers that rely on multiplayer gaming.

So yes, the second half of your statement is true. The first half--not so much.


> anti-cheat far precedes the casinoification of modern games.

> nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.

You are correct, but I think I did a bad job of communicating what I meant. It's true that anti-cheat has been around since forever. However, what's changed relatively recently is anti-cheat integrated into the kernel alongside requirements for signed kernels and secure boot. This dates back to 2012, right as games like Battlefield started introducing gambling mechanics into their games.

There were certainly other games that had some gambly aspects to them, but the 2010s are pretty close to when esports, along with in-game gambling, were starting to bud.


There are plenty of locked down computers in my life already. I don't need or want another system that only runs crap signed by someone, and it doesn't really matter whether that someone is Microsoft or Redhat. A computer is truly "general purpose" only if it will run exactly the executable code I choose to place there, and Secure Boot is designed to prevent that.

I'm pro Secure Boot fwiw and have had it working on my Linux systems for a while.

I don't know overall in the ecosystem but Fedora has been working for me with secureboot enabled for a long time.

Having the option to disable Secure Boot was probably due to backlash at the time and antitrust concerns.

Aside from providing protection against "evil maid" attacks (right?), Secure Boot is in the interest of software companies. Just like platform "integrity" checks.


I'm not giving a game ownership of my kernel, that's fucking insane. That will lead to nothing but other companies using the same tech to enforce other things, like the software you can run on your own stuff.

No thanks.


Valve... please do Github Actions next

I wonder what Valve uses for source control (no pun intended) internally.


I've heard from several people who game on Windows that the Gamescope side panel, with OS-wide tweakables for overlays, performance, power, frame limiters and scaling, is something that they miss after playing on the Steam Deck. There are separate utilities for each, but nothing as simple and accessible as in Gamescope.

A good one is shader pre-caching with Fossilize; Microsoft is only now getting around to it and it still pales in comparison to Valve's solution for Linux.

Surely a gaming handheld counts

Imagine if windows moved to the linux kernel and then used wine/proton to serve their own userspace.

It kinda looked like this was the future around the time they introduced WSL, released .NET for Linux and started contributing to the Linux kernel - all the while making bank with Azure, mostly thanks to running Linux workloads.

But then they decided it was better to show ads at the OS level, rewrite the OS UI as a web app, force hardware DRM for their new OS version (the TPM requirement), and automatically capture the contents of your screen and feed them to AI.


The Linux kernel and Windows userspace are not very well matched on a fundamental level. I’m not sure we should be looking forward to that, other than for running games and other insular apps.

Ah, I was being facetious, I think it would be pretty funny if it happened though.

Sounds like the sort of oddball corporate experiment that Action Retro or Michael MJD would be examining in fifteen years.

> I don't have an example in mind at the moment

I do: MIDI 2.0. It's not that they're not doing it, just that they're doing it at a glacial pace compared to everyone else. They have reasons for this (a complete rewrite of the Windows media services APIs and internals), but it's taken years and delays to do something that shipped on Linux over two years ago and on Apple more like 5 (although there were some protocol changes over that time).




