The xiphmont link is pretty good. Reminded me of the nearly-useless (and growing more so every day) fact that incandescent bulbs not only make some noise, but the noise increases when the bulb is near end of life. I know this from working in an anechoic chamber lit by bare bulbs hanging by cords in the chamber. We would do calibration checks at the start of the day, and sometimes a recording of a silent chamber would be louder than normal and then we'd go in and shut the door and try to figure out which bulb was the loud one.
There’s one thing that bothers me about this. Sure, PCM sampling is a lossless representation of the low frequency portions of a continuous signal. But it is not a latency-free representation. To recover a continuous signal covering the low frequencies (up to 20kHz) from PCM pulses at a sampling frequency f_s (f_s >= 40kHz), you turn each pulse into the appropriate kernel (sinc works and is ideal in a sense, but you probably want to low-pass filter the result as well), and that gives you the decoded signal. But it’s not causal! To recover the signal at time t, you need some pulses from times beyond t. If you’re using the sinc kernel, you need quite a lot of lookahead, because sinc decays very slowly and you don’t want to cut it off until it’s decayed enough.
So if you want to take a continuous (analog) signal, digitize it, then convert back to analog, you are fundamentally adding latency. And if you want to do DSP operations on a digital signal, you also generally add some latency. And the higher the sampling rate, the lower the latency you can achieve, because you can use more compact approximations of sinc that are still good enough below 20kHz.
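To make the lookahead point concrete, here's a minimal Python sketch of windowed-sinc reconstruction; the 48 kHz rate and the 32-sample kernel half-width are arbitrary illustrative choices, not anything from this thread:

```python
import numpy as np

fs = 48_000          # sample rate (illustrative)
half_width = 32      # kernel half-width in samples -> 32/fs ≈ 0.67 ms of lookahead

def reconstruct(samples, t):
    """Evaluate the band-limited signal at continuous time t (seconds) from PCM
    samples using a Hann-windowed sinc kernel. Note that it needs `half_width`
    samples *after* t, i.e. it is not causal without that much lookahead."""
    n_center = int(round(t * fs))
    n = np.arange(n_center - half_width, n_center + half_width + 1)
    n = n[(n >= 0) & (n < len(samples))]
    x = t * fs - n                                          # offsets in samples
    window = 0.5 * (1.0 + np.cos(np.pi * x / half_width))   # Hann taper on the sinc
    return float(np.sum(samples[n] * np.sinc(x) * window))

# A 1 kHz tone sampled at 48 kHz, evaluated in between two sample instants:
ts = np.arange(4800) / fs
samples = np.sin(2 * np.pi * 1000 * ts)
t = 0.05 + 0.3 / fs
print(reconstruct(samples, t), np.sin(2 * np.pi * 1000 * t))  # nearly identical
print(half_width / fs * 1e3, "ms of lookahead")  # same taps at a lower fs = more ms
```

Shorter kernels mean less lookahead but a sloppier cutoff between 20 kHz and f_s/2, which is why a higher sampling rate buys lower latency for the same in-band quality.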
None of this matters, at least in principle, for audio streaming over the Internet or for a stored library — there is a ton of latency already, and up to a few ms extra is irrelevant as long as it's managed correctly when synchronizing different devices. But for live sound, or for a potentially long chain of DSP effects, I can easily imagine this making a difference, especially at 44.1 ksps.
I don’t work in audio or DSP, I haven’t experimented extensively, and I haven’t run the numbers. But I suspect that a couple of passes of DSP effects or digitization at 44.1 ksps could add enough latency to be noticeable to ordinary listeners if there are multiple speakers with different effects, or if A/V sync is handled carelessly.
Wouldn't each sample be just an amplitude (say, 16-bit), not a sinc function? You can't recover frequency data without a significant number of pulses, but that's what the low-pass filter is for. Digital audio is cool, but PCM is just a collection of analog samples. There's no reason why it couldn't be an energy signal.
This is the sampling theorem. You start with a continuous band-limited signal (e.g. sound pressure [0], low-pass filtered such that there is essentially no content above 20kHz [1]). You then sample it by measuring and recording the pressure, f_s times per second (e.g. 48 kHz). The result is called PCM (Pulse Code Modulation).
Now you could play it back wrong by emitting a sharp pulse f_s times per second with the indicated level. This will have a lot of frequency content above 20kHz and, in fact, above f_s/2. It will sound all kinds of nasty. In fact, it’s what you get by multiplying the time-domain signal by a pulse train, which is equivalent to convolving the frequency-domain signal with some sort of comb, and the result is not pretty.
Or you do what the sampling theorem says and emit a sinc-shaped pulse for each sample, and you get exactly the original signal. Except that sinc pulses are infinitely long in both directions.
[0] Energy is proportional to pressure squared. You’re sampling pressure, not energy.
[1] This is necessary to prevent aliasing. If you feed this algorithm a signal at f_s/2 + 5kHz, it would come back out at f_s/2 - 5kHz, which may be audible.
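A quick numerical check of footnote [1], with f_s = 48 kHz picked purely for illustration:

```python
import numpy as np

# With f_s = 48 kHz, a tone at f_s/2 + 5 kHz = 29 kHz produces exactly the same
# samples as a 19 kHz tone (f_s/2 - 5 kHz) with inverted phase, i.e. it aliases
# into the audible band.
fs = 48_000
n = np.arange(64)
high = np.sin(2 * np.pi * 29_000 * n / fs)
low = np.sin(2 * np.pi * 19_000 * n / fs)
print(np.max(np.abs(high + low)))  # ~1e-13: the two sample sets are indistinguishable
```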
This is all true, but it is also true for most _other_ filters and effects too; you always get some added delay. You generally don't have a lot of conversions in your chain, and they're more on the order of 16 samples or so, so the extra delay from chunking/buffering (you never really process sample-by-sample from the sound card; the overhead would be immense) tends to be more significant.
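Rough numbers, assuming 48 kHz and a made-up but typical 256-sample processing buffer:

```python
fs = 48_000            # assumed sample rate
conversion_taps = 16   # per-conversion delay on the order mentioned above
buffer_size = 256      # assumed audio callback / chunk size

print(conversion_taps / fs * 1e3)  # ≈ 0.33 ms per conversion
print(buffer_size / fs * 1e3)      # ≈ 5.33 ms per buffer of chunked processing
```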
Sampling does not lose information below the Nyquist limit, but quantization does introduce errors that can't be fixed. And resampling at a different rate might introduce extra errors, like when you recompress a JPEG.
I see I lose data in the [18kHz..) range, but then again, as a male I'm not supposed to hear that past my early 30s anyway; sprinkle some concerts on top and it's more like 16kHz :/
At least I don't have tinnitus.
Here's my test:
```fish
set -l sample ~/Music/your_sample_song.flac # NOTE: Maybe clip a 30s sample beforehand
set -l borked /tmp/borked.flac # WARN: Will get overwritten (but more likely won't exist yet)
cp -f $sample $borked
for i in (seq 10)
    echo "$i: Resampling to 44.1kHz..."
    ffmpeg -i $borked -ar 44100 -y $borked.tmp.flac 2>/dev/null
    mv $borked.tmp.flac $borked
    echo "$i: Resampling to 48kHz..."
    ffmpeg -i $borked -ar 48000 -y $borked.tmp.flac 2>/dev/null
    mv $borked.tmp.flac $borked
end
echo "Playing original $sample"
ffplay -nodisp -autoexit $sample 2>/dev/null
echo "Playing borked file $borked"
ffplay -nodisp -autoexit $borked 2>/dev/null
echo "Diffing..."
set -l spec_config 's=2048x1024:start=0:stop=22000:scale=log:legend=1'
ffmpeg -y -i $sample -lavfi showspectrumpic=$spec_config /tmp/sample.png 2>/dev/null
ffmpeg -y -i $borked -lavfi showspectrumpic=$spec_config /tmp/borked.png 2>/dev/null
echo "Spectrograms,"
ls -l /tmp/sample.png /tmp/borked.png
```
Yeah, they know, and their comment reflects that knowledge. They're saying that if we had infinite bit depth, we could arbitrarily resample anything to anything as long as the sample rate stays above the Nyquist rate; however, we don't have infinite bit depth, we have a finite bit depth (i.e. the samples are quantized), which limits the dynamic range (i.e. introduces noise). This noise can compound when resampling.
The key point is that even with finite bit depth (as long as you dither properly), the effect of finite bit depth is easily controlled noise with a spectrum the program chooses. I.e., as long as your sampling isn't doing anything really dumb, the noise introduced by quantization is well below the noise floor.
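A minimal sketch of what properly dithered quantization looks like; the 16-bit depth, the -6 dBFS 1 kHz tone, and TPDF dither are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, bits = 48_000, 16
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)            # a -6 dBFS 1 kHz tone

q = 2.0 ** -(bits - 1)                            # quantization step (1 LSB)
dither = (rng.random(fs) - rng.random(fs)) * q    # TPDF dither, +/- 1 LSB
plain = np.round(x / q) * q                       # undithered quantization
dithered = np.round((x + dither) / q) * q         # dithered quantization

# RMS error: ≈ -101 dBFS undithered vs ≈ -96 dBFS dithered, but the dithered
# error is uncorrelated noise, while the undithered error is correlated with
# the signal (an FFT of the two error signals shows harmonics vs. a flat floor).
for err in (plain - x, dithered - x):
    print(20 * np.log10(np.sqrt(np.mean(err ** 2))))
```

The dithered error is a few dB higher in level, but it behaves as benign noise, and noise shaping can push most of it above the audible band.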
This is a nice video. But I’m wondering: do we even need to get back the original signal from the samples? The zero-order-hold output actually contains the same audible frequencies, doesn’t it? If we only want to listen to it, then the stepped wave would be enough.
If a company chooses Cloudflare, they would have great service everywhere except Italy. If they choose a service with lower quality / reach, they will suffer degraded service across the board. If they try to use more than one CDN, that’s a lot of hassle.
It’s not clear which way the decisions will go in reality. Past experience suggests that tech companies eventually accommodate local laws, trading the complexity of explaining this to customers for the complexity of implementing targeted blocking tools.
A lot depends on the next couple of months and the US's continued belligerence against (former?) allies. If that isn't toned down, and drastically so, then I expect there to be many more consequences than just for CDN providers.
They're at 82% or so of all websites that use CDNs; other providers are extremely small in comparison. CF got this large because of a feedback loop around being able to deal with large denial-of-service attacks. They are, for most serious players, the only game in town now.
The panel is backed by a law; respect the law. Italy has a judicial system, and in cases like this some EU court could probably also be called on. US politicians can reach out to EU/Italian politicians to harmonize trade... but wait, don't we kill trade deals? They are so unfair (a.k.a. compromises).
w2c2 gets only 2 mentions. wasm2c is not a clear winner; it specifically loses several of their benchmarks.
In general, using a preexisting compiler as a JIT backend is an old hack, there's nothing new there. It's just another JIT/AoT backend. For example, databases have done query compilation for probably decades by now.
I signed up thinking Claude Code was an IDE and was really disappointed with it. Their plugin for VS Code is complete trash. Way overhyped. Their models are good, but I can get those through other means.
GitHub doesn't offer any unlimited style AI model plans so I don't think they'll care. Their pricing is fairly aligned with their costs.
This only affects Claude, as they try to market their plan as unlimited (with various usage rate limits), but it's clearly costing them a lot more than what they sell it for.
Copilot plan limits are, however, "per prompt", and prompts that ask the agent to do a lot of stuff with a large context are obviously going to be more expensive to run than prompts that don't.
>Pretty much all electricity markets worldwide set the unit price based on the cost of the "marginal" (most expensive) generator running during each time period
Indeed. This is an inherent failing of using auctions to set the price. While using auctions is a laudable goal, in reality it is not very efficient and is easily gamed. A central purchaser model is not ideal from an ideological standpoint, but it is clearly more efficient, allowing correct control of more variables than can (crudely) be transmitted through a 30-minute auction period.
The only reason the wind farms got built was that they were guaranteed a high price for electricity, which took the risk out of it. This changed more recently, which is why building stopped.
Another factor is that in the UK everything below the average tide line is owned by the Crown (as in the King, not the government), which was very happy to collect lease income. The Government was also happy, because this way it didn't look like the people were funding the King (which they are).
Also, the public is very much against wind turbines on land, which is reasonable in England, where there isn't much isolated land to put them on.
At least they got built, which is more than can be said for the nuclear plants.
I am ashamed to admit this took me a long time to properly understand. For further reading I'd recommend:
https://people.xiph.org/~xiphmont/demo/neil-young.html
https://www.youtube.com/watch?v=cIQ9IXSUzuM