> Windows and macOS, for all their faults, are unquestionably ready to use in 2026.
LOL
I installed a new Windows 11 yesterday on a fairly powerful machine, and everything lags so much on a brand-new install it's unreal. Explorer takes ~2-3 seconds to become usable. Any app that opens in the blink of an eye under Linux on the same machine takes seconds to start. The Start menu lags. It's just surreal. People who say these things work fine have simply never used something that is actually fast.
I am not sure how people get all these issues. I installed a fresh Windows recently, and I don't see any noticeable slowdowns.
Linux is faster in some places, maybe. But it still has plenty of issues, like some applications not being drawn properly, or some applications simply not being available (e.g. a nice GUI for monitor control over DDC).
Conversely, I much prefer the lowest latency at the cost of tearing; when I'm forced to use Windows I generally disable the compositor there too whenever I can (I certainly don't use one under Linux, and that's one of my reasons for being there). I find macOS unusable: even on brand-new top-end Mac Studios, the input lag and the slow reaction of the OS to... any user input is frightening when you're used to your computer reacting instantly.
> Every project must colonize a valley of the language, declare a dialect, and bit-fiddle their own thing.
This is really not true in my experience. I don't remember the last time I worked on a project which outright banned specific C++ features or had a "dialect".
For making music, as much as I love the free audio ecosystem, there are some unique audio plugins with specific sounds that will never be ported. Thankfully, bridging with Wine works fairly well nowadays.
Linux didn't aim to be an OS in the consumer sense (it is entirely an OS in an academic sense: in the scientific literature, OS == kernel, nothing else). The "consumer" OS is GNU/Linux or Android/Linux.
> it is entirely an OS in an academic sense - in scientific literature OS == kernel, nothing else
No, the academic literature does distinguish between the kernel and the OS as a whole. The OS is meant to provide hardware abstractions to both developers and the user. The Linux world shrugged and said 'okay, this is just the kernel for us, everyone else be damned'. In this view Linux is the complete outlier, because every other commercial OS comes with a full suite of user-mode libraries and applications.
> The problem is that when a final binary is linked everything goes into it
I don't think that's the case on Linux: when using -gsplit-dwarf, the debug info is put into separate .dwo files at the object-file level, so it is never linked into the binaries.
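A minimal sketch of that workflow, assuming GCC or Clang on Linux (file names like big.cpp and the optional dwp step are just illustrative):

```cpp
// Split-DWARF sketch: the bulk of the debug info goes into .dwo side files
// at compile time and never reaches the linked binary.
//
//   g++ -g -gsplit-dwarf -c big.cpp -o big.o   # emits big.o plus big.dwo
//   g++ big.o -o app                           # app keeps only a small skeleton of debug info
//   dwp -e app                                 # optional: pack the .dwo files into app.dwp
//
// gdb/lldb then read big.dwo (or app.dwp) lazily when symbols are needed.
int main() { return 0; }
```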
Google contributed the code, and the entire concept, of DWARF fission to both GCC and LLVM. This suggests that rather than overlooking something obvious that they'll be embarrassed to learn on HN, they were aware of the issues and were using the solutions before you'd even heard of them.
There's no contradiction, no missing link in the facts of the story. They have a huge program, it is 2GiB minus epsilon of .text, and a much larger amount of DWARF stuff. The article is about how to use different code models to potentially go beyond 2GiB of text, and the size of the DWARF sections is irrelevant trivia.
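(Rough sketch of what "different code models" means on x86-64; the flags are GCC's, and this is my summary rather than the article's:)

```cpp
// The default "small" code model assumes all code and statically-known data
// fit within 2GiB, so the compiler can use 32-bit RIP-relative offsets.
// Once .text itself threatens to exceed 2GiB, you need something like:
//
//   g++ -mcmodel=large big.cpp -o app   # 64-bit absolute addressing, some overhead
//
// (-mcmodel=medium only moves *large data* out of the 2GiB window; the code
// must still fit, so it doesn't help a >2GiB .text by itself.)
int main() { return 0; }
```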
> They have a huge program, it is 2GiB minus epsilon of .text,
but the article says 25+GiB including debug symbols, in a single binary?
Also, I appreciate your enthusiasm in assuming that because some people do something in an organization, it is applied consistently everywhere. Hell, if it were Microsoft, other departments would try to shoot down the "debug tooling optimization" dept.
Yes, and that's what I'm saying: I find it crazy not to split the debug info out. At least on my machine, it makes a noticeable difference in load time whether I load a binary that is ~2GB with the debug info in, or the same binary at ~100MB with the debug info out.
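A sketch of the kind of post-link split I mean, with standard GNU binutils ("app" is a placeholder name):

```cpp
// Split the debug info into a side file after linking, then strip the binary:
//
//   objcopy --only-keep-debug app app.debug     # copy just the debug sections
//   strip --strip-debug app                     # drop them from the binary
//   objcopy --add-gnu-debuglink=app.debug app   # let gdb find app.debug again
//
// The binary you actually run stays at the stripped size; the DWARF lives in
// app.debug and is only read when a debugger asks for it.
int main() { return 0; }
```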
It doesn't make any difference in practice. The debug info is never mapped into memory by the loader (the .debug_* sections aren't allocatable, so they're not part of any PT_LOAD segment). This only matters if you want to store the two separately, i.e. lazy-load the debug symbols when needed.
That's just not true. I just tried with one of my binaries, which is 3.2GB unstripped and roughly 150MB stripped. Unstripped takes 23 seconds until the window shows up; stripped takes about a second.
There is something wacky going on with your system, or the program is written in a way that makes it traverse the debug info if it is present. What program is it?
For example I can imagine desktop operating system antivirus/integrity checks having this effect.
ELF is just a container format and you can put literally anything into one of its sections. Whether the DWARF sections are in "the binary" or in another named file is really quite beside the point.
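For instance, a hedged sketch with GCC/Clang (the section name .my_blob is made up):

```cpp
// ELF really is just a container: you can direct arbitrary data into a
// custom section straight from C++.
#include <cstdio>

__attribute__((section(".my_blob"), used))
static const unsigned char blob[] = {0xde, 0xad, 0xbe, 0xef};

int main() {
    // readelf -S on the resulting binary shows .my_blob sitting alongside
    // .text, .data, and (when built with -g) the .debug_* sections.
    std::printf("%zu bytes in .my_blob\n", sizeof(blob));
    return 0;
}
```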
That seems to be a third-party library that claims to add Qt Widgets support to QML, but it's hard to see from those examples how exactly it works. It still seems to be using a QML-style layout framework. What I'm saying is I would want to have an app that is entirely Qt Widgets, with the same behavior as if I had written it in code, but using a declarative language to specify as much of that information as possible. (Also I'd want to be able to use it from Python with PyQt/PySide, not just C++. :-)
What are you targeting? For instance, all ESP32s now support GCC 15, which has support for #embed. AVR has had GCC 15 toolchains for months as well, as has ARM, which also lets you target STM32 and Nordic nRF parts.
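A minimal sketch of what #embed buys you, assuming a GCC 15-class toolchain in C23/C++26 mode ("asset.bin" is a placeholder file):

```cpp
// #embed expands to the bytes of the named file at preprocessing time,
// replacing the usual external bin2c / objcopy asset-conversion step.
#include <cstdio>

static const unsigned char kAsset[] = {
    // "asset.bin" is a placeholder path, resolved like a quoted #include.
#embed "asset.bin"
};

int main() {
    std::printf("embedded %zu bytes\n", sizeof(kAsset));
    return 0;
}
```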