Germany stopped importing Russian gas after the start of the Ukraine war. One also has to understand that only a small fraction of gas use in Germany goes to electricity generation; a much bigger part goes to heating. Finally, the amount of electricity generated from gas is around 80 TWh and did not increase after the nuclear shutdown.
Counter-anecdata: I have 2 Dell U2720Q (Ultrasharp 27") bought in 2021 and they've been great.
That said, I've always stuck with Dell's upper-range Ultrasharp monitors (the U prefix in the model number), being slightly wary of their cheaper series, which the S in your S3221QS implies.
I'm using 2x Dell U3011s, one I purchased around 2013 and the other I got used recently for $100. My only issue with them is PWM coil whine that only goes away if I crank the brightness to ~90%, which produces an immense amount of heat and presumably extra power consumption. I'd love to find a viable alternative solution, because these are my favorite monitors for now.
The model appears to have been released 16 years ago.
I haven't yet found a monitor that makes sense to replace them with either.
I think there is a slightly newer version of these, but I have the same setup.
I haven’t been able to find anything with the vertical space that these monitors have. Even ultrawide monitors just aren’t tall enough. Getting this 52-inch behemoth would help, but I would actually lose horizontal space.
I'm not on Italy's side but I can't say I respect @eastdakota's rhetoric...
> "The crazy stat is that Europe makes more from fining US tech companies than they do from taxing their own technology companies."
That's one way of saying it. Another way is that US companies are so extravagantly huge and violate EU laws so much that the fines are correspondingly huge.
> That's one way of saying it. Another way is that US companies are so extravagantly huge and violate EU laws so much that the fines are correspondingly huge.
Another way of saying it is that because of transfer pricing, basically nobody knows what money was made where (and the notion of profits per country in a world of multinationals with no capital controls is meaningless).
Veering offtopic a bit... Google lost its (search) way years ago. See "The Man Who Killed Google Search" [1], and note the room Google left for alternatives like DuckDuckGo.
At work, we have full access to Claude, and I find that I now use that instead of doing a search. Sure it's not 100% reliable, but neither is search anyhow, and at least I save time from sifting through a dozen crappy content farms.
The same, I suppose, as using Wikipedia to get an overview of a topic, a surface understanding, before following the citations to dig deeper and fully validate the summary.
it's extra funny to me because the Raspberry Pi SoC is basically a little CPU riding on a big GPU (well, the earlier ones were. Maybe the latest ones shift the balance of power a bit). In fact, to this day the GPU is still the one driving the boot sequence.
So plugging a RasPi into a 5090 is "just" swapping the horse for one 10,000x bigger (someone correct my ratio of the RasPi5 GPU to the RTX5090)
I'm not very familiar with this layer of things; what does it mean for a GPU to drive a boot sequence? Is there something massively parallel that is well suited for the GPU?
The Raspberry Pi contains a Videocore processor (I wrote the original instruction set coding and assembler and simulator for this processor).
This is a general purpose processor which includes 16 way SIMD instructions that can access data in a 64 by 64 byte register file as either rows or columns (and as either 8 or 16 or 32 bit data).
It also has superscalar instructions which access a separate set of 32-bit registers, but is tightly integrated with the SIMD instructions (like in ARM Neon cores or x86 AVX instructions).
This is what boots up originally.
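To make the rows-or-columns idea concrete, here is a toy sketch in plain scalar C (not actual Videocore assembly) of the two ways a 16-wide operation could pull data out of that 64 by 64 register file:

    #include <stdint.h>

    /* Stand-in for the 64x64-byte register file. */
    static uint8_t rf[64][64];

    /* Row access: lane i of the 16-wide op sees rf[row][start + i]. */
    static void read_row16(uint8_t out[16], int row, int start) {
        for (int i = 0; i < 16; i++)
            out[i] = rf[row][start + i];
    }

    /* Column access: lane i sees rf[start + i][col] instead. */
    static void read_col16(uint8_t out[16], int col, int start) {
        for (int i = 0; i < 16; i++)
            out[i] = rf[start + i][col];
    }

Having both patterns in hardware means you can, for example, run a DCT over the rows and then over the columns of a block without shuffling the data around in between.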
Videocore was designed to be good at the actions needed for video codecs (e.g. motion estimation and DCTs).
I did write a 3d library that could render textured triangles using the SIMD instructions on this processor. This was enough to render simple graphics and I wrote a demo that rendered Tomb Raider levels, but only for a small frame resolution.
The main application was video codecs, so for the original Apple Video iPod I wrote the MPEG4 and h264 decoding software using the Videocore processor, which could run at around QVGA resolution.
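For a feel of the workload, the inner loop of motion estimation is a sum of absolute differences between a block of the current frame and a candidate position in the reference frame. Here is the plain scalar C reference version (a sketch, not the actual codec code); each pass of the inner loop maps naturally onto one 16-wide SIMD instruction:

    #include <stdint.h>
    #include <stdlib.h>

    /* SAD between a 16x16 block of the current frame and a candidate
     * block in the reference frame; stride is the frame width in bytes. */
    static unsigned sad16x16(const uint8_t *cur, const uint8_t *ref, int stride) {
        unsigned sad = 0;
        for (int y = 0; y < 16; y++) {
            for (int x = 0; x < 16; x++)
                sad += abs(cur[x] - ref[x]);
            cur += stride;
            ref += stride;
        }
        return sad;
    }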
However, in later versions of the chip we wanted more video and graphics performance. I designed the hardware to accelerate video, while another team (including Eben) wrote the hardware to accelerate 3d graphics.
So in Raspberry Pis, there is both a Videocore processor (which boots up and handles some tasks), and a separate GPU (which handles 3d graphics, but not booting up).
It is possible to write code that runs on the Videocore processor - on older Pis I accelerated some video decode software codecs by using both the GPU and the Videocore to offload bits of the transform, deblocking, and motion compensation, but on later Pis there is dedicated video decode hardware to do this instead.
Note that the ARMs on the later Pis are much faster and more capable than before, while the Videocore processor has not been developed further, so there is not really much use for it anymore. However, the separate GPU has been developed more and is quite capable.
> what does it mean for a GPU to drive a boot sequence
It's a quirk of the broadcom chips that the rpi family uses; the GPU is the first bit of silicon to power up and do things. The GPU specifically is a bit unusual, but the general idea of "smaller thing does initial bring up, then powers up $main_cpu" is not unusual once $main_cpu is ~ powerful enough to run linux.
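Roughly, the publicly documented chain on the older Pis looks like this (details shift between models; the Pi 4/5 boot from an SPI EEPROM instead of bootcode.bin):

    power on
      -> boot ROM           (on-chip, runs on the VideoCore)
      -> bootcode.bin       (second-stage loader, read from the SD card)
      -> start.elf          (VideoCore firmware; parses config.txt)
      -> kernel image       (loaded into RAM; the ARM cores are then
                             released from reset and Linux takes over)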
That’s interesting, particularly since, as far as I can tell, nothing in userland really bothers to make use of its GPU. I would really like to understand why, since I have a whole bunch of Pis and it seems like their GPUs can’t be used for much of anything (not really much for transcoding, nor for AI).
> their GPUs can’t be used for much of anything (not really much for transcoding nor for AI)
It's both funny and sad to me that we're at the point where someone would (perhaps even reasonably) describe using the GPU only for the "G" in its name as not "much of anything".
The Raspberry Pi GPU has one of the better open source GPU drivers as far as SBCs go. It's limited in performance, but it's definitely being used for rendering.
There is a Vulkan API, and they can run some compute. At least the 4 and 5 can: https://github.com/jdonald/vulkan-compute-rpi . No idea if it's worth the bus latency though; I'd love to know the answer to that.
I'd also love to see the same done on the Zero 2, where the CPU is far less beefy and the trade-off might go a different way. It's an older generation of GPU though so the same code won't work.
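As a sanity check before worrying about the trade-off, something like this plain-C device enumeration (linking with -lvulkan; assumes Mesa's V3DV driver is installed) should show whether the GPU is visible to Vulkan at all:

    #include <stdio.h>
    #include <vulkan/vulkan.h>

    int main(void) {
        /* Minimal instance: no layers, no extensions. */
        VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
        VkInstance inst;
        if (vkCreateInstance(&ci, NULL, &inst) != VK_SUCCESS) return 1;

        uint32_t n = 0;
        vkEnumeratePhysicalDevices(inst, &n, NULL);
        VkPhysicalDevice devs[8];
        if (n > 8) n = 8;
        vkEnumeratePhysicalDevices(inst, &n, devs);

        for (uint32_t i = 0; i < n; i++) {
            VkPhysicalDeviceProperties p;
            vkGetPhysicalDeviceProperties(devs[i], &p);
            printf("%s\n", p.deviceName);  /* e.g. the V3D device on a Pi */
        }
        vkDestroyInstance(inst, NULL);
        return 0;
    }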
One (obscure) example I know of is the RTLSDR-Airband[1] project uses the GPU to do FFT computation on older, less powerful Pis, through the GPU_FFT library[2].
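For reference, the GPU_FFT API is tiny. A minimal sketch based on the hello_fft example in the firmware tree (VideoCore IV Pis only; compiled against gpu_fft.c and mailbox.c from raspberrypi/firmware) looks like:

    #include "mailbox.h"
    #include "gpu_fft.h"

    int main(void) {
        int mb = mbox_open();  /* handle to the VideoCore mailbox */
        struct GPU_FFT *fft;

        /* One forward FFT of 2^12 = 4096 points, a single job per batch. */
        if (gpu_fft_prepare(mb, 12, GPU_FFT_FWD, 1, &fft) < 0) return 1;

        for (int i = 0; i < 4096; i++) {       /* fill the input buffer */
            fft->in[i].re = (float)(i & 1);
            fft->in[i].im = 0.0f;
        }

        gpu_fft_execute(fft);  /* runs on the VideoCore QPUs */
        /* Results are now in fft->out[0..4095].re / .im. */

        gpu_fft_release(fft);
        mbox_close(mb);
        return 0;
    }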
> There's also tailscaled-on-macOS, but it won't have a TPM or Keychain bindings anyway.
Do you mean that on macOS, tailscaled does not and has never leveraged equivalent hardware-attestation functionality from the SEP? (Assuming such functionality is available)
The third one is just the open-source tailscaled binary that you have to compile yourself, and it doesn't talk to the Keychain. It stores a plaintext file on disk like the Linux variant without state encryption. Unlike the GUI variants, this one is not a Swift program that can easily talk to the Keychain API.
In fact, SecurityFramework doesn’t have a real Swift/Obj-C API. The relevant functions are all direct bindings to C ABIs (just with wrappers around the CoreFoundation types).
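For illustration, storing a secret through that C ABI looks roughly like the following (the service and account names here are made up):

    #include <CoreFoundation/CoreFoundation.h>
    #include <Security/Security.h>

    static OSStatus store_secret(const void *secret, size_t len) {
        CFDataRef data = CFDataCreate(NULL, (const UInt8 *)secret, (CFIndex)len);

        /* Everything is CFTypeRefs stuffed into a dictionary and handed
         * to a plain C function; there is no object-style API underneath. */
        const void *keys[] = { kSecClass, kSecAttrService,
                               kSecAttrAccount, kSecValueData };
        const void *vals[] = { kSecClassGenericPassword,
                               CFSTR("example-daemon"),  /* hypothetical */
                               CFSTR("state-key"),       /* hypothetical */
                               data };
        CFDictionaryRef query = CFDictionaryCreate(NULL, keys, vals, 4,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

        OSStatus status = SecItemAdd(query, NULL);
        CFRelease(query);
        CFRelease(data);
        return status;
    }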
> The third one is just the open-source tailscaled binary that you have to compile yourself, and it doesn't talk to the Keychain.
I use this one (via nix-darwin) because it has the nice property of starting as a systemwide daemon outside of any user context, which in turn means it has no (user) keychain to access (there are some conundrums between accessing such keychains and needing a "GUI", i.e. user login, irrespective of C vs Swift or whatever).
Maybe it _could_ store things in the system keychain? But I'm not entirely sure what the gain would be when the intent is to have tailscale access through fully unattended reboots.
Good to know, my understanding of the macOS system APIs is fairly limited.
I'm sure it's doable, with some elbow grease and CGO. We just haven't prioritized that variant of the client due to relatively low usage.