Xorg does indeed carry a lot of painful complexity. That said, the software is not Linux-specific, and on modern Linux distributions it is more and more a legacy technology.
I think "hacktivist" here means hacking into the politician's inboxes and leaking the contents, like "politicians want to do this to you; let's see how they like it when it's done to them" sort of thing.
Btrfs is NOT constantly eating people's data. You have nothing to back this statement.
It's widely used and the default filesystem of several distributions.
Most of the problems are the same as for any other filesystem: caused by the hardware.
I've been using it for more than 10 years without any problem and enjoy the experience. And as with any filesystem, I back up my data frequently (with btrbk, thanks for asking).
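For the curious, a minimal btrbk.conf along the lines of what I run (paths and retention values made up for illustration):

    # /etc/btrbk/btrbk.conf
    snapshot_preserve_min   2d
    snapshot_preserve       14d
    target_preserve         6m

    volume /mnt/btr_pool
      snapshot_dir _btrbk_snap
      subvolume home
        target /mnt/backup/btrbk

A `btrbk run` from a timer then takes the snapshots and sends them to the backup target.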
Tell that to my data then. I was 100% invested in Btrfs before 2017, the year I lost a whole filesystem to some random metadata corruption. I then started to move all of my storage to ZFS, which has never lost me a single byte of data, despite being out of tree and all. My last Btrfs filesystem died randomly a few days ago (it was a disk in cold storage; once again random metadata corruption, and the disk is 100% healthy). I do not trust Btrfs in any shape or form nowadays. I also vastly prefer the ZFS tooling, but that's irrelevant to the argument here. The point is that I've had nothing but pain from Btrfs in more than a decade.
> Btrfs is NOT constantly eating people data. You have nothing to back this statement.
Constantly may be a strong word, but there is a long line of people sharing tales of woe. It's good that it works for you, but that's not a universal experience.
> It's widely used and the default filesystem of several distributions.
As a former user, that's horrifying.
> Most of the problems are like for the other filesystem: caused by the hardware.
The whole point of btrfs over (say) ext4 is that it's supposed to hold up when the hardware misbehaves.
I think any discussion of btrfs needs to acknowledge that raid5/6 support was promised in the early years, shipped in the kernel in 2013 and, until 2021's btrfs-progs 5.11 release, did not warn users that they risked data loss when creating volumes.
For nearly a decade btrfs raid5/6 was "unsafe at any speed" and many people lost data to it, including myself.
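For reference, creating such an array was (and is) a one-liner, and only since btrfs-progs 5.11 does it print a warning that raid5/6 is unstable:

    # data striped with parity across three devices, metadata mirrored
    mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd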
Btrfs has eaten my data, which was probably my bad for trying out a newly-stable filesystem around 15 years ago. There are plenty of bug reports of btrfs eating other people's data in the years since.
It's probably mostly stable now, but it's silly to act like it's a paragon of stability in the kernel.
> but it's silly to act like it's a paragon of stability in the kernel.
And it's dishonest to act like bugs from 15 years ago justify present-tense claims that it is constantly eating people's data and is a bad joke. Nobody's arguing that btrfs doesn't have a past history of data loss, more than a decade ago; that's not what's being questioned here.
There's no need to call someone pointing out instability of a filesystem dishonest. That's bad faith.
I don't get why folks feel the need to come out and cheer for a tool like this. Do you have skin in the game on whether or not btrfs is considered stable? Are you a contributor?
I don't get it.
But since you asked - let me find some recent bugs.
ext4 has "recent" correctness and corruption bugfixes. Just search through the 6.x and 5.x changelogs for "ext4:" to find them. It turns out that nontrivial filesystems are complex things that are hard to get right, even after decades of development by some of the most safety-and-correctness-obsessed people.
I've been using btrfs as the primary filesystem on my daily-driver PCs since 2009, 2010 or so. The only time I've had trouble with it was in the first couple of years I started using it. I've also used it as the primary FS on production systems at $DAYJOB. It works fine.
I run Fedora, and for legal reasons they ship a version that has this problem. Have you tried Mozilla's Flatpak build? I use it instead and it resolves all my problems.
But it's not intended for, or good at (without forcing a square peg into a round hole), the sort of thing LFS and promisors are for: a public project with binary assets.
git-annex is really for (and shines at) a private backup setup where you'd like to have copies of some data spread across various storage devices, track the history of each copy, ensure that you always have at least N copies, etc.
Each repository gets a UUID, and each tracked file has a SHA-256 hash. There's a branch which holds a timestamp-and-repo-UUID to SHA-256 mapping; if you have 10 repos, that file will have (at least) 10 entries.
You can "trust" different repositories to different degrees, e.g. if you're storing a file on both some RAID'd storage server, or an old portable HD you're keeping in a desk drawer.
This really doesn't scale for a public project. E.g. I have a repository that I back up my photos and videos in, that repository has ~700 commits, and ~6000 commits to the metadata "git-annex" branch, pretty close to a 1:10 ratio.
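Each of those metadata commits updates per-file location logs, which look roughly like this (format from memory; UUIDs and key made up):

    # aaa/bbb/SHA256E-s1048576--abcd....mp4.log
    1317929100.012345s 1 e605dca6-446a-11e0-8b2a-002170d25c55
    1317929189.157237s 0 26339d22-446b-11e0-9101-002170d25c55

i.e. a timestamp, a present/absent flag, and the repository UUID, one line per repository that has (or has dropped) the content.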
There's an exhaustive history of every file movement that's ever occurred on the 10 storage devices I've ever used for that repository. Now imagine doing all that on a project used by more than one person.
All other solutions to tracking large files along with a git repository forgo all this complexity in favor of basically saying "just get the rest where you cloned me from, they'll have it!".
> Any ideas why it isn’t more popular and more well known?
While git-annex works very well on Unix-style systems with Unix-style filesystems, it heavily depends on symbolic links, which do not exist on filesystems like exFAT, and are problematic on Windows (AFAIK, you have to be an administrator, or enable an obscure group policy). It has a degraded mode for these filesystems, but uses twice the disk space in that mode, and AFAIK loses some features.
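Concretely, on a Unix filesystem an annexed file is just a symlink into the object store, which is exactly what exFAT can't represent (illustrative output, hash shortened):

    $ ls -l movie.mp4
    lrwxrwxrwx 1 me me 190 ... movie.mp4 -> .git/annex/objects/Wx/2G/SHA256E-s1048576--abcd....mp4/SHA256E-s1048576--abcd....mp4

In the degraded mode the symlink is replaced by a full copy of the content alongside the one in .git/annex/objects, hence the doubled disk usage.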
I used to be a Kagi customer, but the fact that they waste their energy with all these distractions is depressing. They should instead build a real search engine and stop reselling Bing.
(f) A person who possesses six or more obscene devices or identical or similar obscene articles is presumed to possess them with intent to promote the same.
The Russian foreign agent law is used to attack public personalities and NGOs, and has nothing in common with the Romanian electoral laws. Georgians are absolutely right to be scared.
Having read the article, I could not find any example of what is impossible in AppArmor, just a statement repeated in various ways that SELinux makes it easier to provide a secure-by-default environment, with the closest thing to justification being that SELinux models things with types whereas AppArmor deals with restrictions on specific applications. I'm sure this all makes sense to someone already well-versed in the space, but I'm left with the same question as OP.
> I could not find any example of what is impossible in AppArmor,
AppArmor is simply less granular. For example, it doesn't provide true RBAC or MLS security models. It also uses paths instead of inodes, so a hard link can be used to bypass some policies.
So it just depends on what the exploit or attack is trying to do. If an attacker gets root and is trying to overwrite a file, they may be able to. Maybe they can't, but they could probably still execute any code they can write and compile themselves. And perhaps they can write to other files and do damage.
SELinux and similar systems allow a lot more granularity. Programs and users can only talk to what they're explicitly allowed to talk to, and maybe you want to limit access to, say, append instead of full write access.
It just allows a lot more granularity and restriction, that's the difference.
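A rough sketch of what SELinux type-enforcement rules look like (type names made up for illustration):

    # the app's domain may append to its log type, but not write/truncate/unlink it
    allow myapp_t myapp_log_t:file { append getattr };
    # and may bind only to TCP ports labeled with its own port type
    allow myapp_t myapp_port_t:tcp_socket name_bind;

Anything not explicitly allowed is denied, regardless of which Unix user or file path is involved.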
> The link rules can get pretty granular and seem explicitly designed to prevent that scenario.
It's still an inherent weakness. No getting around that really.
> Assuming the AppArmor profile allows writing to and executing the same files. Which isn't particularly common.
I don't really want to try and come up with examples just so you can show there might be some hacky way of accomplishing something similar to what SELinux can offer - it would be missing my point.
The point is there's a lot more an attacker can get away with under AppArmor than under SELinux. AppArmor isn't as granular and you can't lock down a system to the same extent, period. Is it good enough? Sure. Is it better than nothing? Absolutely. Is it comparable to an optimized SELinux config? Not remotely.
Hacky way to accomplish something? Literally every example you gave of AA not being "granular" enough was flat misinformation. There are dedicated rules to prevent writing and executing the same file, prevent using hardlinks to gain privileges, and prevent overwriting a file that should be append only. No hacks here. Just facts.
> Literally every example you gave of AA not being "granular" enough was flat misinformation.
No, there was no misinformation, and this stance you're committed to defending is one of the most bizarre stances I've ever come across.
There can be no more question that SELinux is significantly more granular than AppArmor than there is that the earth is round. Looking at the introductory documentation for both systems should be more than enough to make that abundantly clear to anyone.
> There are dedicated rules to prevent writing and executing the same file, prevent using hardlinks to gain privileges, and prevent overwriting a file that should be append only. No hacks here. Just facts.
So just before I put more effort into replying to you, I want to be 100% clear on your stance. If I am paraphrasing or misconstruing, please correct.
It seems like you are claiming that AppArmor's handling of hard links is not any sort of vulnerability or weakness, cannot be, and has never been bypassed? Is this a fair reading of your position?
My position is that you haven't demonstrated a practical example of SELinux being able to constrain a workload that AppArmor doesn't have parity with, i.e. you haven't responded to my initial question:
Can you offer some examples of things you can restrict with SELinux that you wouldn't be able to with AppArmor?
The only valid answer in the thread has been port bindings: AppArmor's network rules don't allow restricting the port number, but SELinux can do that.
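For reference, that looks something like this on the SELinux side:

    # label a nonstandard port so only domains allowed to name_bind
    # to http_port_t can bind it
    semanage port -a -t http_port_t -p tcp 8080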
You tried to claim that SELinux could prevent processes from overwriting files instead of just appending to them while AppArmor could not do the same, but that statement of yours was easily disprovable -- the man page of apparmor.d shows that append-only rules are supported. If you don't want me to call your statement misinformation, then maybe invent another word because that is the only word I have to describe what you said.
> My position is that you haven't demonstrated a practical example of SELinux being able to constrain a workload that AppArmor doesn't have parity with, i.e. you haven't responded to my initial question:
I listed some of the ways AppArmor falls short, which you dismissed.
If I know an object is 3x3x3 feet, and we have a box that is 5x5x5, and another object that is 7x7x7, I don't need to thoroughly test every aspect of these items, or even see them, to know one of them won't fit in the box.
> Only valid answer in the thread has been port bindings
Not true. AppArmor lacks several of the models SELinux has, and thus, as has been said, is less granular, and thus, as has also been said, it covers less area than SELinux does. AppArmor doesn't even consider user accounts in policy decisions, and you can't bind policies to user objects. You realize already what a limitation that is, right?
It's sufficient to look at the designs of both systems to see this, see where one falls short, and not need practical examples to understand.
If you want practical, real-world examples of SELinux blocking something AppArmor couldn't, then, as I said to someone else, a comparison of Debian and Red Hat security advisories should show this: I'd expect Debian to be able to say "this issue isn't a threat if AppArmor is enabled" significantly less often than Red Hat can say the same for SELinux.
But, you want a setup. OK. Does AppArmor allow you to basically take root out of the equation entirely, by assigning only the capabilities a user needs to run specific programs (e.g. binding a port under 1024) to a non-root account? Does it then allow severely limiting the root account so it can't really do anything, making 'getting root' pointless because you've eliminated the entire concept of an all-powerful account? No, it doesn't, and there is plenty more it doesn't allow, because it's a simpler and more limited system by design.
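A sketch of what that looks like on the SELinux side (username made up; exact behavior depends on the loaded policy):

    # map a Linux login to a restricted SELinux user, so processes it starts
    # stay confined by the policy even if they reach uid 0
    semanage login -a -s user_u alice
    # list the SELinux users defined by the loaded policy (setools)
    seinfo -u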
> You tried to claim that SELinux could prevent processes from overwriting files instead of just appending to them while AppArmor could not do the same, but that statement of yours was easily disprovable
You're right, this was my mistake. AppArmor either didn't have that functionality the last time I really played with it, or I forgot it had it. That's a bad example, sure, but the overall point is still perfectly valid.
> If you don't want me to call your statement misinformation, then maybe invent another word because that is the only word I have to describe what you said.
As far as AppArmor being able to enforce append only functionality, sure. As far as anything else, not so.
I was surprised by his praise of MCS. We ran into it a couple of years ago when reusing the same podman volume across containers. It was not really explained in the documentation, only in a blog post by a RH employee. One weird thing is that the labels are random, but the range of possible values is rather small, so a determined attacker could brute-force them. Also, we always had a mix of files with and without MCS labels on the volume. IIRC moving or copying files led to different results; it's not clear to me why a copy should be protected differently than a moved file, as they seem of similar sensitivity to me.
It's been a while and we hacked around it somehow; I don't remember the details, except that it was not the #1 suggested solution, which was to disable SELinux altogether.
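For anyone hitting the same thing, the knobs we were fighting with were the volume label options (names illustrative):

    # shared label: every container mounting the volume gets the same MCS categories
    podman run -v mydata:/data:z myimage
    # private label: relabeled for this container only; a second container gets EACCES
    podman run -v mydata:/data:Z myimage
    # inspect the categories; the range they're drawn from is fairly small,
    # hence the brute-force worry
    ls -Z /var/lib/containers/storage/volumes/mydata/_data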