it probably shouldn’t be a “release” thing. actually, certainly not. i do wonder how many bugs would never have seen the light of day if someone’s “set” had actually turned out to be a sequence (i.e. allowed duplicate values), resulting in a debug build raising an assert.
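a minimal sketch of the kind of check i mean, with a std::vector standing in for the “set” (insert_unique is a hypothetical helper; the assert compiles away when NDEBUG is defined, i.e. in a typical release build):

    #include <algorithm>
    #include <cassert>
    #include <vector>

    // debug builds trip the assert on a duplicate; release builds
    // (built with NDEBUG) skip the check entirely.
    template <typename T>
    void insert_unique(std::vector<T>& v, const T& value) {
        assert(std::find(v.begin(), v.end(), value) == v.end()
               && "duplicate in a container assumed to be a set");
        v.push_back(value);
    }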
Debug builds are worthless for catching issues. How many people actually run them? Perhaps developers run debug builds of individual binaries they're working on when they're trying to repro a bug, but my experience at every company of every size and position in the stack (including the Windows team) is that no one does their general-purpose work on a debug build.
huh. very cute. in the past, i had an idea for terser lambda syntax, similar to C#'s expression-bodied functions - which i did end up implementing in clang:
auto sum = [](auto a, auto b): a+b;
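for reference, it’s just shorthand for the standard form:

    auto sum = [](auto a, auto b) { return a + b; };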
but this is something else. i didn't think i'd like it at first, but actually i think i might be coming around to it. the.. dollar syntax is regrettable, although it's not a show-stopper.
yeah the "distance between frames" latency is just one overhead, everything adds up until you get real latency. 10ms for your wireless mouse then 3ms for your I/O hardware then 5ms for the game engine to process your input then 20ms for the graphics pipeline and so on and on.
30 FPS is 33.333 ms
60 FPS is 16.667 ms
90 FPS is 11.111 ms
120 FPS is 8.333 ms
140 FPS is 7.143 ms
144 FPS is 6.944 ms
180 FPS is 5.556 ms
240 FPS is 4.167 ms
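The arithmetic behind the list is just frame time (ms) = 1000 / FPS; a trivial sketch:

    #include <cstdio>

    // per-frame budget in milliseconds for each refresh rate
    int main() {
        const int rates[] = {30, 60, 90, 120, 140, 144, 180, 240};
        for (int fps : rates)
            std::printf("%3d FPS = %7.3f ms\n", fps, 1000.0 / fps);
    }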
Going from 30fps to 120fps saves 25ms of frame time, which is totally 100% noticeable even for a layperson (I actually tested this with my girlfriend; she could tell 60fps from 120fps as well), but the generated frames from DLSS don't help with this latency _at all_.
Although the Nvidia Reflex technology can help with this kind of latency in some situations, in ways that are hard to quantify.
i’d consider myself a day-to-day c++ engineer. well, because i am. i like lots of things from rust; there are a few things i don’t. c++ has a lot to learn from rust, if it is to continue to exist.
but really.. isn’t this the point of the language? you need to understand the borrow checker because.. that’s why it’s here?
friends. you understand that you can just.. take it off, right?
fully unscrew the cap, then either continue twisting it over the edge - honestly effortless - or just.. pull it off? the cap still functions as a cap afterward.
apologies, but i don’t understand the furore over this change.
While I am all for immortality, an oft-given answer I've seen is: when I can no longer physically support myself. That means old and frail, unable to walk or move.
Immortality, though, could also come with anti-aging, so I'm not sure how strong that answer really is.
Life is not life without death. Personally, I don’t necessarily want to die; I love to live. But also, death is the only thing that makes life precious.
I don't agree at all. That's like saying the world is only beautiful because it will one day be consumed by the sun. I love my life, and it isn't because I'm going to die.
> death is the only thing that makes life precious.
No. Living is what makes life precious. Good memories, good food, good friends, good lovers, good music… if it were possible to continue living forever, until you decided to take your chances on the existence of an afterlife, life would be no less precious than it is now, when nature just takes it from you against your will.
> An application with 10 frames of latency will be faster on a 1 kHz display than a perfectly coded application on a 60 Hz display.
that’s actually not true. you seem to be implying that the best a 60hz display can manage is 16.6ms of latency. indeed, that is the worst-case value, but you should consider that early graphics techniques involved changing display modes mid-scan.
it’s actually not ridiculous to suggest that old platforms had sub-millisecond latency; they did. if the scanline was on, or just before, the line where you would interact (i.e., the prompt line), the text you entered would appear immediately.
of course, “vsync”, tear-free rendering, and suchlike approaches “fixed” this - necessarily adding at least a frame’s worth of latency, and with it perceptible latency.
it’s an oft-overlooked aspect of refresh rates. a 60hz CRT, without vsync, still has a lower latency floor than a 120hz display - perhaps even a 240hz one.
i’ve used two 240hz displays for years now. i’ll never go slower than that.
> you seem to be implying that the best a 60hz display can manage is 16.6ms of latency
Yes, if you control the whole software stack it is possible to do beam racing to get lower than one frame of latency (assuming low-latency input hardware and display panel scanout). But I'm talking about desktop/mobile applications. In general, operating systems do not do this, and many actually make it impossible. Only very recently has it become possible to do beam racing in a windowed application (not using fullscreen exclusive mode) on Windows, on recent graphics hardware with multiplane overlay, and very, very few people have attempted it. I believe it is strictly impossible to do beam racing for windowed applications on macOS and Linux/Wayland. Not sure about iOS and Android.
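For the curious, the rough shape of beam racing as a hypothetical sketch: current_scanline() and render_strip_to_front_buffer() are stand-ins (on Windows the scanline query could be D3DKMTGetScanLine), and all margin/timing handling is omitted:

    // Draw the frame in horizontal strips, writing each strip into the
    // front buffer just before scanout reads it; input can be sampled
    // per strip, giving well under a frame of latency.
    constexpr int kHeight = 1080;
    constexpr int kStrips = 8;
    constexpr int kStripH = kHeight / kStrips;

    int current_scanline();                    // stand-in platform query
    void render_strip_to_front_buffer(int n);  // stand-in renderer

    void present_frame() {                     // call once per refresh
        for (int n = 0; n < kStrips; ++n) {
            // spin until the beam has entered the strip above this one,
            // so strip n is finished just ahead of scanout
            while (current_scanline() < (n - 1) * kStripH) { /* spin */ }
            render_strip_to_front_buffer(n);
        }
    }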
you don't need to "beam race" to achieve sub-frame latency - you don't need to be accurate. switching off vsync should, in principle, be enough.
otherwise, yes, modern APIs go out of their way to prevent this (the dreaded "tearing" artifacts you see when the frame buffer changes during transmission of the video signal to the monitor). i don't believe older techniques like the ones you've mentioned are at all possible today; they only really made sense when analogue displays were the norm.
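e.g., a minimal sketch with SDL2, which simply asks the driver for immediate (unsynchronised) swaps - whether you actually get them, and the tearing that comes with them, is up to the platform:

    #include <SDL.h>

    int main() {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window* win = SDL_CreateWindow("latency", SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED, 640, 480,
                                           SDL_WINDOW_OPENGL);
        SDL_GLContext ctx = SDL_GL_CreateContext(win);
        // 0 = immediate swaps (vsync off); returns non-zero if unsupported
        if (SDL_GL_SetSwapInterval(0) != 0)
            SDL_Log("immediate swap not supported: %s", SDL_GetError());
        // ... render loop: SDL_GL_SwapWindow(win) no longer waits for
        // vblank, so new frames can land mid-scanout ...
        SDL_GL_DeleteContext(ctx);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }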
apologies, i wasn’t being specific; none of what i said necessitates a CRT display - i used it only as an example of how an older technology had less latency.
if a modern 60Hz LCD/OLED display couldn’t get beneath 16.6ms of latency, then what exactly is tearing?
correct me if i’m wrong, but i believe the point being made is:
a system user/admin has an intuition about files. commands like ‘journalctl -f -u’ (fu, indeed :) are inherently undiscoverable, and form a basically orthogonal mechanism for what should be a simple task, i.e., viewing some logs. it’s far easier to compose and extend from files (what if i only care about the mtime of the log, for instance) than from this.
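a trivial sketch of the kind of composition files give you for free (the log path here is just an example):

    #include <filesystem>
    #include <iostream>

    // with plain-file logs the mtime question is one stat away;
    // with the journal you’d be round-tripping through journalctl instead
    int main() {
        auto t = std::filesystem::last_write_time("/var/log/nginx/error.log");
        std::cout << t.time_since_epoch().count() << '\n';
    }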
look, i think systemd isn’t.. terrible. i also think it’s suffered a bit of complexity fetishisation, and it seems as though this resulting complexity has become invisible to you.
run0 doesn’t seem like a bad idea. but i am wincing a bit at the thought of unrestricted javascript determining access control.
I think you have me confused with someone who cares about the difference between binary and text logs. I have no pony in this race; my comment was just made to help.