
By that logic, everything is a cultural difference...

You can say fuck on TV, it just increases the age rating. Same with showing nipples. Freedom of speech isn’t freedom from _any_ consequences of speech…

There are indeed a lot of cultural differences between the United States and Europe.

That only applies to TV sets; computer monitors operated at much higher frequencies, outside the human hearing range.

And arcade monitors do, or at least the ones I've been around. I can hear an arcade machine in a different room.

Most arcade monitors operate at the usual 15 kHz, although some later games operated at 24 kHz (medium resolution) and 31 kHz (high resolution).

They weren't that high frequency. I could hear computer monitors into my twenties at least. I'd guess somewhere around 20 - 22 kHz. CRTs were largely replaced by LCDs by my late 20s/early 30s, so I don't have a good sense of when I stopped being able to hear frequencies that high.

VGA monitors had a minimum horizontal frequency of 31 kHz (480p at 60Hz), way outside the human hearing range.
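A rough back-of-the-envelope check (nominal line counts and refresh rates, not exact standard figures): the horizontal scan rate is roughly the total line count times the frame rate, which is why standard-definition sets whine audibly and VGA doesn't.

    # Horizontal scan rate ~ total lines per frame x frame rate (approximate).
    sd_tv = 525 * 30_000 / 1001   # NTSC: ~15.7 kHz; PAL is 625 x 25 = 15.625 kHz
    vga_480p = 525 * 60           # 640x480: ~31.5 kHz, far above human hearing
    print(round(sd_tv), round(vga_480p))  # -> 15734 31500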

Have you tried BFI (black frame insertion)? Many people swear by it because it improves the "motion clarity", but it has the side effect of significantly increasing flicker.

Most of these CRT shaders seem to emulate the lowest possible quality CRTs you could find back in the day. I have a nice Trinitron monitor on my desk and it looks nothing like these shaders.

The only pleasant shader I have found is the one included in Dosbox Staging (https://www.dosbox-staging.org/), that one actually looks quite similar to my monitor!


Based on the repo, Dosbox Staging seems to be mostly using crt-hyllian as its shader: https://github.com/dosbox-staging/dosbox-staging/tree/main/r...

That same shader is also available for RetroArch


A Trinitron shader would be two very thin horizontal lines trisecting the screen.

In any modern OS with CoW forking/paging, multiple worker processes of the same app will share code segments by default.

COW on fork has been a given for decades.

You can't COW two different libraries, even if the libraries in question share the source code text.
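For illustration, a minimal sketch of that fork-time sharing (Python on Unix only; the 64 MB buffer and worker count are arbitrary, and CPython's refcount updates will dirty a few pages, but the bulk of the buffer stays shared copy-on-write):

    import os

    # Touch a large read-only buffer in the parent; after fork() its pages
    # are shared with every worker and only copied if somebody writes to them.
    shared = bytes(64 * 1024 * 1024)

    children = []
    for _ in range(4):
        pid = os.fork()
        if pid == 0:
            # Child: reading does not trigger copies, writing would.
            checksum = sum(shared[::1024 * 1024])
            os._exit(0)
        children.append(pid)

    for pid in children:
        os.waitpid(pid, 0)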


Japanese arcades nowadays are not like the ones GP is describing; most players don't interact with each other directly anymore.

Notable exceptions are places like Mikado centers that organize tournaments and keep the old flame alive.


I don't think the culture is the same now that cabinets have network capabilities, but I do think it's possible.

At the Taito Station in Akihabara, I've met tourists a few times when I was in town for a large tournament (EVO Japan) and made friends from it. I've also had people watching me play, but unfortunately I don't speak Japanese.

I know there are a few arcades that still have some Street Fighter III: Third Strike cabinets with regulars. I can't speak for other games, but at least for Street Fighter, people are almost always open and friendly.


I was there 2 years ago and went inside one of the multi-storey gaming places in Akihabara. The old-school (90s and older) games were a small section on one floor out of 6 storeys of gaming.

That sounds like the Taito Station on the right side of the street. On the other side there is a Gigo with a whole floor for retro games, and Hey!, which is focused almost entirely on retro games.

No, the reason 13.5 MHz was chosen is that it was desirable to have the same sampling rate for both PAL and NTSC, and 13.5 MHz happens to be an integer multiple of both line frequencies. You can read the full history in this article:

https://tech.ebu.ch/docs/techreview/trev_304-rec601_wood.pdf
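A quick check of the integer-multiple claim, using the standard nominal line rates and exact fraction arithmetic:

    from fractions import Fraction

    fs = 13_500_000                       # Rec. 601 luma sampling rate, Hz
    pal_line = Fraction(15625)            # 625 lines x 25 Hz
    ntsc_line = Fraction(4_500_000, 286)  # ~15734.27 Hz (525 lines x 30/1.001 Hz)

    print(Fraction(fs) / pal_line)   # 864 -> whole samples per PAL line
    print(Fraction(fs) / ntsc_line)  # 858 -> whole samples per NTSC line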


That is only one of the conditions that had to be satisfied by the sampling rate, and there are infinitely many multiples which satisfy it, so this condition alone is insufficient to determine the choice of sampling frequency.

Another condition that had to be satisfied by the sampling frequency was to be high enough in comparison with the maximum bandwidth of the video signal, but not much higher than necessary.

Among the common multiples of the line frequencies, 13.5 MHz was chosen because it also satisfied the second condition, which is the one I have discussed: it was possible to choose 13.5 MHz only because the analog video bandwidth had been standardized to values smaller than needed for square pixels; otherwise a common multiple of the line frequencies greater than 15 MHz (namely 20.25 MHz) would have been required.
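To put some rough numbers on that (nominal textbook values; the full set of constraints that narrowed the choice is in the article linked above): every candidate rate had to be a multiple of 2.25 MHz, the smallest frequency that is an integer multiple of both line rates, and a square-pixel 4:3 PAL raster would already need roughly 14.8 MHz.

    from fractions import Fraction

    pal_line = Fraction(15625)            # Hz
    ntsc_line = Fraction(4_500_000, 286)  # Hz

    # Smallest frequency that is an integer multiple of both line rates:
    base = Fraction(2_250_000)
    print(base / pal_line, base / ntsc_line)   # 144 143 -> both whole numbers

    # So candidate sampling rates step in 2.25 MHz increments:
    candidates = [float(k * base) / 1e6 for k in range(5, 10)]
    print(candidates)                          # 11.25, 13.5, 15.75, 18.0, 20.25 MHz

    # Square pixels on a 4:3 raster with 576 active lines and a ~52 us
    # active line would need about 768 samples in 52 us:
    print(768 / 52e-6 / 1e6)                   # ~14.8 MHz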


The repo being a single commit doesn't mean it's AI. It is quite common to first develop on a private repo and then clean up the commit history for the first public release.

English is my second language and I always thought my lack of understanding was a skill issue.

Then I noticed that native speakers also complain.

Then I started to watch YouTube channels, live TV and old movies, and I found out I could understand almost everything! (depending on the dialect)

When even native speakers can't properly enjoy modern movies and TV shows, you know that something is very wrong...


The problem is that a lot of content today is mixed so that effects like explosions and gunshots are LOUD, whispers are quiet, and dialog is normal.

It only works if you're watching in a room that's acoustically quiet, like a professional recording studio. Once your heater / air conditioner or other appliance turns on, it drowns out everything but the loudest parts of the mix.

Otherwise, the problem is that you probably don't want to listen to ear-splitting gunshots and explosions, so you turn the volume down to a normal level, only to make the dialog and whispers unintelligible. I hit this problem a lot watching TV after the kids go to bed.
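That dynamic-range squashing is basically what a TV's "night mode" does: downward compression above a threshold. A crude sketch of the idea (made-up threshold/ratio values, not any real product's algorithm):

    import numpy as np

    def night_mode(x, threshold_db=-20.0, ratio=4.0):
        # Samples above the threshold are pulled toward it, so explosions
        # end up much closer to dialog level.
        t = 10 ** (threshold_db / 20)   # threshold as linear amplitude
        y = x.copy()
        loud = np.abs(y) > t
        y[loud] = np.sign(y[loud]) * t * (np.abs(y[loud]) / t) ** (1.0 / ratio)
        return y

    rng = np.random.default_rng(0)
    dialog = 0.05 * rng.standard_normal(48_000)    # quiet speech stand-in
    gunshot = 0.90 * rng.standard_normal(48_000)   # loud effect stand-in

    print(np.abs(gunshot).max(), np.abs(night_mode(gunshot)).max())
    print(np.abs(dialog).max(), np.abs(night_mode(dialog)).max())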


Yes, seems like both audio and video are following a High Dynamic Range trend.

As much as I enjoy deafeningly bright explosions in the movie theater, it's almost never appropriate in the casual living room.

I recently bought a new TV, a Bravia 8 II, which was supposedly not bright enough according to reviewers. In its professional setting it's way too bright at night, and being an OLED watching HDR content, the difference between the brightest and darkest parts is simply too much; there seems to be no way to turn it down without compromising the whole brightness curve.


I watch my Bravia in the dark. Then again, mine is 5 years old, so maybe there's some differences.


The sound mixing does seem to have gotten much worse over time.

But also, people in old movies often enunciated very clearly as a stylistic choice. The Transatlantic accent sounds a bit unnatural, but you can follow the plot.


In older movies and TV shows the actors would also speak loudly. There’s a lot of mumbling and whispering in shows today.


Lots of the early actors were highly experienced at live stage acting (without microphones) and radio (with only a microphone) before they got into video.


Not just old movies. Anything until mid-2000s or 2010s.


Yes, I forgot to mention that by "old movies" I mean things like Back to the Future. After a lifetime of watching it dubbed, I watched it with the original audio around a year ago, and I was surprised how clear the dialogues are compared to modern movies.


To be fair, the diction in modern movies is different from the diction in all the other examples you mentioned. YouTube and live TV are very articulate, and old movies are theater-like in style.


Can we go back to articulate movies and shows? And to crappier microphones where actors had to speak rather than whisper? Thanks.


That is exactly my point, the diction in modern movies sucks.


I have it the other way around ;)

In Poland, our original productions have such badly mixed sound that there is almost no series in my native language I can understand without captions.

But the upside is that - with English being my second language - I understand most of the movies/series I watch.


That's interesting. I have heard many people complaining about the sound mix in modern Spanish productions, but I never have problems understanding them. Shows from LATAM are another topic though; some accents are really difficult for us.

I "upgraded" from a 10 year old 1080p Vizio to a 4K LG and the sound is the worst part of the experience. It was very basic and consistent with our old TV but now it's all over the place. It's now a mangled mess of audio that's hard to understand.


I had the same issue; turn on the enhanced dialogue option. This makes the EQ not muffle the voices and keeps them almost intelligible. I say almost because modern mixing assumes a center channel for voices that no TV has.
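For context on that last point: a 5.1 mix is usually folded down to stereo with the center (dialog) channel at about -3 dB, and a dialogue-enhancement mode effectively boosts that center contribution before the fold-down. A rough sketch with textbook Lo/Ro coefficients (not any particular TV's implementation):

    import numpy as np

    def downmix_stereo(fl, fr, c, sl, sr, center_gain=1.0):
        # Fold a 5.1 bed (LFE ignored) down to stereo; center_gain > 1 is a
        # crude stand-in for a "dialogue enhancement" boost.
        a = 10 ** (-3 / 20)  # ~0.707, the common -3 dB fold-down coefficient
        left = fl + a * center_gain * c + a * sl
        right = fr + a * center_gain * c + a * sr
        return left, right

    n = 48_000
    silence = np.zeros(n)
    dialog = 0.2 * np.sin(np.linspace(0, 2 * np.pi * 200, n))  # dialog sits in the center
    left, right = downmix_stereo(silence, silence, dialog, silence, silence, center_gain=1.5)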


The TV makers all want to sell you an overpriced soundbar too.


English is my native language and I always watch with captions on. It is ridiculous :)


I don't know about other DEs, but at least with Plasma there is an "overscan" option to compensate for hidden borders.


Thanks for that.

Overscan is not supported in wlroots yet. It seems the issue is that handling overscan is display-driver specific.

But, now I know the keyword to look for.

