You can say fuck on TV, it just increases the age rating. Same with showing nipples. Freedom of speech isn’t freedom from _any_ consequences of speech…
They weren't that high frequency. I could hear computer monitors into my twenties at least. I'd guess somewhere around 20 - 22 kHz. CRTs were largely replaced by LCDs by my late 20s/early 30s, so I don't have a good sense of when I stopped being able to hear frequencies that high.
Have you tried BFI (black frame insertion)? Many people swear by it because it improves the "motion clarity", but it has the side effect of significantly increasing flicker.
Most of these CRT shaders seem to emulate the lowest possible quality CRTs you could find back in the day. I have a nice Trinitron monitor on my desk and it looks nothing like these shaders.
The only pleasant shader I have found is the one included in DOSBox Staging (https://www.dosbox-staging.org/); that one actually looks quite similar to my monitor!
I don't think the culture is the same due to cabinets having network capabilities now, but I do think it's possible.
At the Taito Station in Akihabara, I've met tourists a few times when I was in town for a large tournament (EVO Japan) and made friends that way. I've also had people watch me play, but unfortunately I don't speak Japanese.
I know there are a few arcades that still have some Street Fighter III: Third Strike cabinets with regulars. I can't speak for other games, but at least for Street Fighter, people are almost always open and friendly.
I was there 2 years ago and went inside one of the multi-storey gaming places in Akihabara. The old-school ('90s and older) games are a small section on one floor out of six storeys of gaming.
That sounds like the Taito Station on the right side of the street. On the other side there is a GiGO with a whole floor of retro games, and Hey!, which focuses almost entirely on retro games.
No, the reason 13.5 MHz was chosen is that it was desirable to have the same sampling rate for both PAL and NTSC, and 13.5 MHz happens to be an integer multiple of both line frequencies. You can read the full history in this article:
That is only one of the conditions the sampling rate had to satisfy, and there are infinitely many common multiples of the line frequencies, so that condition alone is insufficient to determine the choice of sampling frequency.
Another condition was that the sampling frequency be high enough relative to the maximum bandwidth of the video signal, but not much higher than necessary.
Among the common multiples of the line frequencies, 13.5 MHz was chosen because it also satisfied that second condition, which is the one I have been discussing: choosing 13.5 MHz was only possible because the analog video bandwidth had been standardized to values smaller than needed for square pixels. Otherwise a larger common multiple of the line frequencies, 20.25 MHz, would have been required as the sampling frequency.
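To make the arithmetic concrete, here is a rough sketch assuming the standard line rates (15625 Hz for PAL, 4.5 MHz / 286 ≈ 15734.27 Hz for NTSC; these values are not stated above):

```python
# Quick check of the common-multiple arithmetic.
pal_line = 15625.0
ntsc_line = 4_500_000.0 / 286.0

base = 2_250_000.0  # 2.25 MHz, the lowest common multiple of the two rates
print(base / pal_line)   # -> 144.0
print(base / ntsc_line)  # -> 143.0 (up to float rounding)

# Candidate sampling rates are therefore multiples of 2.25 MHz:
for k in range(5, 10):
    print(k * 2.25, "MHz")  # 11.25, 13.5, 15.75, 18.0, 20.25
# 13.5 MHz (6 x 2.25) was enough for the standardized analog bandwidth;
# square pixels would have pushed the choice up to 20.25 MHz (9 x 2.25).
```

Under those assumptions, every multiple of 2.25 MHz lines up with both line structures, and 13.5 MHz is simply the smallest such multiple that also cleared the bandwidth requirement.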
The repo being a single commit doesn't mean it's AI. It is quite common to first develop on a private repo and then clean up the commit history for the first public release.
The problem is that a lot of content today is mixed so that effects like explosions and gunshots are LOUD, whispers are quiet, and dialog is normal.
It only works if you're watching in a room that's acoustically quiet, like a professional recording studio. Once your heater / air conditioner or other appliance turns on, it drowns out everything but the loudest parts of the mix.
Otherwise, the problem is that you probably don't want ear-splitting gunshots and explosions, so you turn the volume down to a normal level, only to make the dialog and whispers unintelligible. I hit this problem a lot watching TV after the kids go to bed.
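For what it's worth, the usual fix is some form of downward compression ("night mode"). Here's a minimal sketch of the idea on raw samples; the threshold and ratio values are just illustrative, not taken from any particular TV or mixing standard:

```python
import numpy as np

# Minimal "night mode" sketch: pull anything above a threshold back toward
# dialog level. Real implementations work per-band with attack/release
# smoothing; this only shows the core idea.
def compress(samples: np.ndarray, threshold: float = 0.25, ratio: float = 4.0) -> np.ndarray:
    out = samples.astype(float).copy()
    loud = np.abs(out) > threshold          # which samples exceed the threshold
    excess = np.abs(out[loud]) - threshold  # how far above it they are
    out[loud] = np.sign(out[loud]) * (threshold + excess / ratio)
    return out
```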
Yes, seems like both audio and video are following a High Dynamic Range trend.
As much as I enjoy deafeningly bright explosions in the movie theater, it's almost never appropriate in the casual living room.
I recently bought a new TV, a Bravia 8ii, which was supposedly not bright enough according to reviewers. In its professional setting it's way too bright at night, and being an OLED watching HDR content, the difference between the brightest and darkest parts is simply too much, and there seems to be no way to turn it down without compromising the whole brightness curve.
The sound mixing does seem to have gotten much worse over time.
But also, people in old movies often enunciated very clearly as a stylistic choice. The Transatlantic accent sounds a bit unnatural, but you can follow the plot.
Lots of the early actors were highly experienced at live stage acting (without microphones) and radio (with only a microphone) before they got into film.
Yes, I forgot to mention that by "old movies" I mean things like Back to the Future. After a lifetime of watching it dubbed, I watched it with the original audio around a year ago, and I was surprised how clear the dialogue is compared to modern movies.
To be fair, the diction in modern movies is different from the diction in all the other examples you mentioned. YouTube and live TV are very articulate, and old movies are theater-like in style.
That's interesting. I have heard many people complaining about the sound mix in modern Spanish productions, but I never have problems understanding them. Shows from LATAM are another matter, though; some accents are really difficult for us.
I "upgraded" from a 10 year old 1080p Vizio to a 4K LG and the sound is the worst part of the experience. It was very basic and consistent with our old TV but now it's all over the place. It's now a mangled mess of audio that's hard to understand.
I had the same issue; turn on the enhanced dialogue option. This keeps the EQ from muffling the voices and makes them almost intelligible. I say almost because modern mixing assumes a center channel for voices that no TV has.