"rendering 2D UIs is just slow in general"

I think mostly it's just that many existing tools use CPU rendering. Without "modern" pixel shaders, crisp anti-aliased text rendering was pretty much impossible on GPU. At 800x600, rendering the UI on the CPU used to be no problem. But with retina and high-DPI it became 4x slower and thus unusable.
And yes, that high CPU usage is exactly the issue with Skia. Ideally, it should require a few percent on load and then afterwards everything should be running inside the GPU at almost 0% CPU cost.
I have to disagree that the high CPU usage I mentioned is an issue with Skia in the case of Avalonia.
Other frameworks like Flutter which use Skia don't have such a high CPU load on my computer so I'm pretty sure the high CPU usage is because of Avalonia specifically. (Although Skia might very well contribute to it.)
I just opened Chrome and Firefox (both of which use Skia): both used about 1% CPU while idle, whereas Avalonia sits at up to 70% CPU usage while idle.
My guess would be that Avalonia is doing something stupid like re-drawing the entire GUI on every mouse cursor movement (in case any hover effect changed) and then Skia is transforming that into high CPU usage. Games using proper GPU acceleration can get away with drawing highly complex GUIs at 144 FPS with negligible CPU usage [1], so while it is wasteful to constantly re-draw an application GUI, that alone will not lead to excessive CPU usage just yet.
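I haven't looked at Avalonia's source, so this is purely a hypothetical sketch with made-up names, but the wasteful pattern I have in mind looks roughly like this, compared to only invalidating the widgets whose hover state actually changed:

```typescript
// Purely hypothetical sketch of the pattern I mean (not Avalonia's actual code).
// hitTest/repaintEverything/onPointerMove are assumed helpers, declared as stubs.
interface Widget { invalidate(): void; }
declare function hitTest(x: number, y: number): Widget | null;
declare function repaintEverything(): void;
declare function onPointerMove(handler: (x: number, y: number) => void): void;

// Wasteful: a full redraw per mouse event keeps the CPU busy even while "idle".
onPointerMove(() => repaintEverything());

// Cheaper: repaint only when the hover target actually changes.
let hovered: Widget | null = null;
onPointerMove((x, y) => {
  const hit = hitTest(x, y);
  if (hit !== hovered) {
    hovered?.invalidate(); // old target loses its hover highlight
    hit?.invalidate();     // new target gains it
    hovered = hit;
  }
});
```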
That sounds reasonable to me. I remember game loops being split into one draw() and one update() function and would expect UI frameworks to work similarly.
I think in games it's usual to traverse the object graph twice on each loop iteration: once for drawing and once for updating/handling interactions.
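Something like this, roughly (names are made up, not any real engine's API):

```typescript
// Minimal sketch of the classic split: walk the scene graph once for update()
// and once for draw() on every frame. Names are illustrative only.
interface GameObject {
  update(dtSeconds: number): void; // input, physics, animation state
  draw(): void;                    // issue draw calls for the current state
}

function runFrame(objects: GameObject[], dtSeconds: number): void {
  for (const obj of objects) obj.update(dtSeconds); // pass 1: advance state
  for (const obj of objects) obj.draw();            // pass 2: render everything, unconditionally
}
```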
Am I right in guessing that UI frameworks do something similar? One update function traversing each widget to check for interactions, and then two drawing-related functions: one checking for each widget whether it changed and needs to be redrawn, and a second drawing just those widgets that changed?

I'm trying to think in terms of Big O, and it doesn't sound unreasonable to drop the change-calculation function and redraw from scratch, because that way you would have two O(n) functions instead of three. Maybe the constant factor of drawing pixels to the screen is high enough to outweigh that additional change-detection loop.
"Am I right guessing that UI frameworks do something similar?"
Most of them try to avoid a per-frame update() loop, because UI elements usually only change when the application state changes (then you call paint() or invalidate()), when the window moves or gets dis-occluded (then the OS calls invalidate()), or when the mouse cursor moves.
So typically, the UI framework will have a "dirty" rectangle flag which represents the pixel area that needs to be redrawn.
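Very roughly, and with hypothetical names rather than any specific framework's API, the idea looks like this:

```typescript
// Sketch of a dirty-rectangle scheme; all names here are hypothetical,
// not any specific framework's API.
interface Rect { x: number; y: number; w: number; h: number; }

const union = (a: Rect, b: Rect): Rect => {
  const x = Math.min(a.x, b.x), y = Math.min(a.y, b.y);
  return {
    x, y,
    w: Math.max(a.x + a.w, b.x + b.w) - x,
    h: Math.max(a.y + a.h, b.y + b.h) - y,
  };
};

const intersects = (a: Rect, b: Rect): boolean =>
  a.x < b.x + b.w && b.x < a.x + a.w && a.y < b.y + b.h && b.y < a.y + a.h;

class Surface {
  private dirty: Rect | null = null;

  // Called when app state changes, the window is dis-occluded, etc.
  invalidate(r: Rect): void {
    this.dirty = this.dirty ? union(this.dirty, r) : { ...r };
  }

  // Called once per frame: repaint only widgets touching the dirty area.
  paintIfNeeded(widgets: { bounds: Rect; paint(clip: Rect): void }[]): void {
    if (!this.dirty) return; // nothing changed: zero drawing work while idle
    const region = this.dirty;
    this.dirty = null;
    for (const w of widgets) {
      if (intersects(w.bounds, region)) w.paint(region);
    }
  }
}
```

As long as nothing gets invalidated, paintIfNeeded() returns immediately, which is why a well-behaved UI should sit near 0% CPU while idle.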
Slight sidetrack, but Noesis is free for < $100k revenue, and the licenses for use above that are very affordable and royalty free.
The only problem I had is that IDE support is limited. At least at the time I was using it, Microsoft Blend felt very much like a half-functioning, abandoned project that was being dragged along just for the sake of maintaining something at least reminiscent of an IDE.
https://www.noesisengine.com/xamltoy/61c071a0b3a34ff82dfb0e2...
"rendering 2D UIs is just slow in general"
I think mostly it's just that many existing tools use CPU rendering. Without "modern" pixel shaders, crisp anti-aliased text rendering was pretty much impossible on GPU. At 800x600, rendering the UI on CPU used to be no problem. But with retina and high-DPI it became 4x slower and, thereby, unusable.
And yes, that high CPU usage is exactly the issue with Skia. Ideally, it should require a few percent on load and then afterwards everything should be running inside the GPU at almost 0% CPU cost.