The transition enabled faster and more frequent service, which is something you probably do care about if you need to get into the office and are deciding how to get there.
/ has to be writeable (or have separate writeable mounts under it); /usr doesn't. The reasons for unifying under /usr are clearly documented and make sense, and it's incredibly tedious seeing people complain about it without putting any effort into understanding it.
> Improved compatibility [...] That means scripts/programs written for other Unixes or other Linuxes and ported to your distribution will no longer need fixing for the file system paths of the binaries called, which is otherwise a major source of frustration. [..]
Script authors should use the bare binary name without a path and let the user's $PATH decide which binary to use and from where.
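A minimal sketch of that $PATH-based lookup (Python; "tar" is just an illustrative binary name, not something from the comments above):

    import shutil
    import subprocess

    # Resolve the command through the user's $PATH instead of hardcoding
    # /bin/tar or /usr/bin/tar.
    tar_path = shutil.which("tar")
    if tar_path is None:
        raise SystemExit("tar not found on $PATH")

    # subprocess.run() also consults $PATH when given a bare command name,
    # so the explicit which() is only needed if you want the resolved path.
    subprocess.run(["tar", "--version"], check=True)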
This union denies me the choice of using the statically linked busybox in /bin as a fallback if the "full" binaries in /usr are corrupted or segfault after some library update.
> Improved compatibility with other Unixes (in particular Solaris) in appearance [...]
I don't care about appearances and I care even less about what Solaris looks like.
Did they take a survey of what Linux users care about, or just impose their view on all of us because they simply know better? Or were they paid to "know better" - I never exclude corruption.
> Improved compatibility with GNU build systems. The biggest part of Linux software is built with GNU autoconf/automake (i.e. GNU autotools), which are unaware of the Linux-specific /usr split.
Yeah, right. Please explain to me how GNU, the userspace of 99% of all Linux distributions, isn't aware of the Linux-specific /usr split.
And how is this any different from #1?
> Improved compatibility with current upstream development
AKA the devs decided and users' opinions are irrelevant. This explains why GNU isn't aware of the Linux /usr split - they simply don't want to be aware.
A meaningful gamble IBM made at the time was whether the BIOS was copyrightable - Williams v. Artic wasn't a thing until 1982, and it was really Apple v. Franklin in 1983 that left the industry concluding they couldn't just copy IBM's ROMs.
The limiting factor is the horizontal refresh frequency. TVs and older monitors were around 15.75kHz, so the maximum number of horizontal lines you could draw per second is around 15750. Divide that by 60 and you get 262.5, which is therefore the maximum vertical resolution (real world is lower for various reasons). CGA ran at 200 lines, so was safely possible with a 60Hz refresh rate.
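As a quick sanity check on that arithmetic (a sketch only, using the figures quoted above):

    # Rough scanline budget for an NTSC-era display (numbers from the comment above).
    H_FREQ_HZ = 15_750          # horizontal (line) frequency: lines drawn per second
    V_REFRESH_HZ = 60           # vertical refresh rate

    total_lines = H_FREQ_HZ / V_REFRESH_HZ   # lines available per refresh
    cga_visible_lines = 200                  # CGA's addressable lines

    print(total_lines)                       # 262.5
    print(total_lines - cga_visible_lines)   # ~62 lines left over for border and blanking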
If you wanted more vertical resolution then you needed either a monitor with a higher horizontal refresh rate, or you needed to reduce the effective vertical refresh rate. The former involved more expensive monitors; the latter was typically implemented by still having the CRT refresh at 60Hz but drawing alternate lines on each refresh. This meant that the effective refresh rate was 30Hz, which is what you're alluding to.
But the reason you're being downvoted is that at no point was the CRT running with a low refresh rate, and best practice was to use a mode that your monitor could display without interlace anyway. Even in the 80s, using interlace was rare.
Interlace was common on platforms like the Amiga, whose video hardware was tied very closely to television refresh frequencies for a variety of technical reasons which also made the Amiga unbeatable as a video production platform. An Amiga could do 400 lines interlaced NTSC, slightly more for PAL Amigas—but any more vertical resolution and you needed later AmigaOS versions and retargetable graphics (RTG) with custom video hardware expansions that could output to higher-freq CRTs like the SVGA monitors that were becoming commonplace...
CGA ran pretty near 262 or 263 lines, as did many 8-bit computers. 200 addressable lines, yes, but the background color accounted for about another 40 or so lines, and blanking took up the rest.
The irony is that most of those who downvote didn't spend hours in front of those screens as I did. And I do remember those things were tiring, particularly in the dark. And the worst of all were computer CRT screens that weren't interlaced (in the mid 90s, before higher refresh frequencies started showing up).
I spent literally thousands of hours staring at those screens. You have it backwards. Interlacing was worse in terms of refresh, not better.
Interlacing is a trick that lets you sacrifice refresh rates to gain greater vertical resolution. The electron beam scans across the screen the same number of times per second either way. With interlacing, it alternates between even and odd rows.
With NTSC, the beam scans across the screen 60 times per second. With NTSC non-interlaced, every pixel will be refreshed 60 times per second. With NTSC interlaced, every pixel will be refreshed 30 times per second since it only gets hit every other time.
And of course the phosphors on the screen glow for a while after the electron beam hits them. It's the same phosphor, so in interlaced mode, because it's getting hit half as often, it will have more time to fade before it's hit again.
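Putting numbers on that (a sketch, using the NTSC field rate from the comment above):

    FIELD_RATE_HZ = 60                    # the beam sweeps the screen 60 times per second

    progressive_hz = FIELD_RATE_HZ        # every line redrawn on every sweep -> 60 Hz
    interlaced_hz = FIELD_RATE_HZ / 2     # each line only hit on every other sweep -> 30 Hz

    # The gap between refreshes of any given phosphor dot therefore doubles:
    print(1000 / progressive_hz)          # ~16.7 ms between refreshes
    print(1000 / interlaced_hz)           # ~33.3 ms, i.e. more time to fade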
Have you ever seen high speed footage of a CRT in operation? The phosphors on most late-80s/90s TVs and color graphic computer displays decayed instantaneously. A pixel illuminated at the beginning of a scanline would be gone well before the beam reached the end of the scanline. You see a rectangular image, rather than a scanning dot, entirely due to persistence of vision.
Slow-decay phosphors were much more common on old "green/amber screen" terminals and monochrome computer displays like those built into the Commodore PET and certain makes of TRS-80. In fact there's a demo/cyberpunk short story that uses the decay of the PET display's phosphor to display images with shading the PET was nominally not capable of (due to being 1-bit monochrome character-cell pseudographics): https://m.youtube.com/watch?v=n87d7j0hfOE
Interesting. It's basically a compromise between flicker and motion blur, so I assumed they'd pick the phosphor decay time based on the refresh rate to get the best balance. So for example, if your display is 60 Hz, you'd want phosphors to glow for about 16 ms.
But looking at a table of phosphors ( https://en.wikipedia.org/wiki/Phosphor ), it looks like decay time and color are properties of individual phosphorescent materials, so if you want to build an RGB color CRT screen, that limits your choices a lot.
Also, TIL that one of the barriers to creating color TV was finding a red phosphor.
There are no pixels in a CRT. The guns go left to right, \r\n, left to right - while True: for line in range(line_number).
The RGB stripes or dots are just stripes or dots; they're not tied to pixels. There are three RGB guns, physically offset from each other, coupled with a strategically designed mesh plate, arranged so that the electrons from each gun only land on the right stripes or dots. Apparently fractions of an inch of offset were all it took.
The three guns, really more like fast-acting lightbulbs, received brightness signals for their respective RGB channels. Incidentally, that means they could go between zero brightness and max at a rate of a couple times 60 [Hz] * 640 [px] * 480 [px] per second or so.
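Worked out, that modulation rate looks roughly like this (a sketch; it ignores horizontal and vertical blanking, which push the real dot clock higher):

    REFRESH_HZ = 60
    H_PIXELS = 640
    V_PIXELS = 480

    pixel_rate = REFRESH_HZ * H_PIXELS * V_PIXELS   # visible picture elements drawn per second
    print(pixel_rate)          # 18432000, i.e. roughly an 18 MHz modulation rate
    print(1e9 / pixel_rate)    # ~54 ns for a gun to settle on each new brightness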
Interlacing means the guns draw every other line, but not necessarily every other pixel, because, if nothing else, CRTs have finite beam spot sizes.
No, you don't sacrifice refresh rate! The refresh rate is the same. 50 Hz interlaced and 50 Hz non-interlaced are both ~50 Hz, approx 270 visible scanlines, and the display is refreshed at ~50 Hz in both cases. The difference is that in the 50 Hz interlaced case, alternate frames are offset by 0.5 scanlines; the producing device arranges the timing to make this work, on the basis that it's producing even rows on one frame and odd rows on the other. And the offset means the odd rows are displayed slightly lower than the even ones.
This is a valid assumption for 25 Hz double-height TV or film content. It's generally noisy and grainy, typically with no features that occupy less than 1/~270 of the picture vertically for long enough to be noticeable. Combined with persistence of vision, the whole thing just about hangs together.
This sucks for 50 Hz computer output. (For example, Acorn Electron or BBC Micro.) It's perfect every time, and largely the same every time, and so the interlace just introduces a repeated 25 Hz 0.5 scanline jitter. Best turned off, if the hardware can do that. (Even if it didn't annoy you, you'll not be more annoyed if it's eliminated.)
This also sucks for 25 Hz double-height computer output. (For example, Amiga 640x512 row mode.) It's perfect every time, and largely the same every time, and so if there are any features that occupy less than 1/~270 of the picture vertically, those fucking things will stick around repeatedly, and produce an annoying 25 Hz flicker, and it'll be extra annoying because the computer output is perfect and sharp. (And if there are no such features - then this is the 50 Hz case, and you're better off without the interlace.)
I decided to stick to the 50 Hz case, as I know the scanline counts - but my recollection is that going past 50 Hz still sucks. I had a PC years ago that would do 85 Hz interlaced. Still terrible.
I think you are right; I had the LC III and Performa 630 specifically in mind. For some reason I remember them being 30Hz, but everything I find googling it suggests they were 66Hz (both video card and screen refresh).
That being said, they were horrible on the eyes, and I think I only got comfortable when 100Hz+ CRT screens started being common. It's just that the threshold for comfort is higher than I remembered, which explains why I didn't feel any better in front of a CRT TV.
Could it be that you were on 60Hz AC at the time? That is near enough to produce something called a "Schwebung" (a beat between the two frequencies) when artificial lighting is used, especially with fluorescent lamps like the ones that were common in offices. They need to be "phasenkompensiert" (phase-compensated/balanced), meaning they have to be on a different phase of the mains electricity than the computer screens are on. Otherwise even not-so-sensitive people notice it as interference, a sort of flickering. It happens less when you are on 50Hz AC and the screens run at 60Hz, but with fluorescents on the same phase it can still be noticeable.
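A back-of-the-envelope model of that beat (a sketch only; it assumes the lamp flickers at twice the mains frequency and compares that against low harmonics of the monitor refresh - the 66Hz figure is the Mac refresh mentioned upthread):

    def beat_hz(mains_hz, refresh_hz, max_harmonic=3):
        """Smallest beat between the lamp flicker (2 x mains) and low
        harmonics of the monitor refresh. A rough model, nothing more."""
        lamp_flicker = 2 * mains_hz   # fluorescent tubes flicker at twice the mains frequency
        return min(abs(lamp_flicker - n * refresh_hz) for n in range(1, max_harmonic + 1))

    print(beat_hz(60, 66))   # 12 -> a visible, slow-ish shimmer against a 66Hz display
    print(beat_hz(60, 60))   # 0  -> locked, but any drift shows up as slow beating
    print(beat_hz(50, 60))   # 20 -> faster beat, harder to notice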
To be pedantic: it depends on the last activity of the client, not the user. Anything the client sends counts, even if it's not the result of a user action. This makes it incredibly hard to figure out what could reset that timer - you'd need to know the user's client, its configuration, its plugins, and so on.
I'm curious what you think the correct response to defamation is? At multiple opportunities (including the morning of the trial) Roy and Rianne were given the option of just removing the defamatory material and apologising and having the case dropped without having to pay anything. This is in no way my preferred outcome.
I'd have been entirely happy with that outcome, and I sent Roy and Rianne emails asking for that before getting lawyers involved. Even then, the initial request was just for correction - we offered to settle several times after the case started, and Roy documented his refusal in https://techrights.org/n/2025/11/04/We_Turned_Down_Every_Set... . As I said, these efforts continued until the morning of the trial, when I explicitly told my lawyers to make an offer that would involve Roy and Rianne paying nothing.
The way English court costs work is that if someone offers a settlement that would be more favourable than the court eventually orders (ie, the defendant could have settled for less than the damages the court orders, or the claimant could have settled for more than the damages the court orders) and that settlement is refused, then additional damages and costs are due as a consequence of refusing the early settlement offer and costing everyone more money. But for this to work, the court cannot be told about the settlement offer until afterwards - otherwise the judge could be influenced. As a result, there won't be any discussion of settlement offers in the judgement.
(This does have an unfortunate consequence - a defendant who wants to keep a case out of court can make a settlement offer that's higher than the court is likely to award, and if the claimant refuses then the entire exercise ends up being much more expensive)
My fees have come to about 260K GBP so far - while it's likely I'll be awarded some percentage of that, that doesn't mean I'll actually see any of it. As you say, it's not the sort of thing that anyone actually comes out of happy.