This is a good, succinct unpacking of the metaphysical stakes. Nonetheless, I am curious about the world where striking a match results in a shower of seagulls.
The article talks about how they are controlling for variation across time, and they’re reporting a new signal. So even if everyone was working Saturdays before, everyone is working Saturdays even more now. (Edited typos.)
I suspect that the biggest limitation of printing vs. emissive displays is the simple fact that contrast ratio and color reproduction are severely limited in printing, because the dye can only modulate ambient illumination.
This affects brightness and contrast: For emissive displays, you can have emissive values that are several to many orders of magnitude brighter than the 'black point', and more importantly, the primaries are defined by the display, not by ambient illumination.
Part of the magic of HDR displays is manipulating local masking (a human perceptual quirk) to drive bright regions on a display much brighter than the darker regions, so you can achieve even higher contrast ratios than the base technology (LED-backlit LCD panels, for many consumer TVs) could achieve on its own. Basically, a bright pixel will cause other nearby pixels to be brighter, because you can't see the dark details near a bright region anyway; other regions, where you can perceive more detail in the blacks, can be darker. This is achieved by illuminating sections of the display at significantly higher or lower levels, based on what your eyes/brain can actually perceive. That leads to significantly higher contrast ratios.
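To make the local-dimming idea concrete, here's a minimal sketch in Python. The zone size, nit values, and the "drive each zone to its brightest pixel" heuristic are illustrative assumptions, not any vendor's actual controller (real ones also smooth between zones to hide halo artifacts):

```python
import numpy as np

def local_dimming(frame, zone=32, bl_max=1000.0, lcd_contrast=1000.0):
    """Toy zone-based local dimming.

    frame:        2D array of target luminances, in nits
    zone:         side length of a backlight zone, in pixels
    bl_max:       peak backlight luminance, nits
    lcd_contrast: native panel contrast (open vs. closed cell)
    """
    out = np.zeros_like(frame)
    h, w = frame.shape
    for y in range(0, h, zone):
        for x in range(0, w, zone):
            tile = frame[y:y+zone, x:x+zone]
            # Drive this zone's backlight just bright enough for its
            # brightest pixel.
            bl = min(bl_max, float(tile.max()))
            # Each LCD cell attenuates the zone's backlight; a "closed"
            # cell still leaks bl / lcd_contrast.
            t = np.clip(tile / max(bl, 1e-9), 1.0 / lcd_contrast, 1.0)
            out[y:y+zone, x:x+zone] = bl * t
    return out

# Bright patch on a dark background:
frame = np.full((128, 128), 0.05)   # 0.05-nit shadows
frame[:32, :32] = 1000.0            # 1000-nit highlight
shown = local_dimming(frame)
# A single global 1000-nit backlight would leak 1 nit everywhere
# (1000 nits / 1000:1 native contrast), capping contrast at 1000:1.
print(f"effective contrast: {shown.max() / shown.min():,.0f}:1")  # 20,000:1
```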
(As a heuristic: photographers generally say you can only get ~5 stops of contrast out of a print. (That is, bright areas are 2^5 times brighter than the darkest regions.) Modern HDR displays can do 2^10 or better. YMMV.)
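Back-of-the-envelope, with illustrative (not measured) numbers:

```python
from math import log2

# Print: the ink can only attenuate whatever light falls on the page,
# so the ratio is fixed no matter how bright the room is.
paper_white = 0.90      # assumed reflectance of paper white
ink_black   = 0.03      # assumed reflectance of the densest ink
print(f"print:   {paper_white / ink_black:.0f}:1 "
      f"(~{log2(paper_white / ink_black):.1f} stops)")   # ~30:1, ~4.9 stops

# Emissive HDR display, assumed 1000-nit peak over a 0.5-nit black:
print(f"display: {1000 / 0.5:.0f}:1 "
      f"(~{log2(1000 / 0.5):.1f} stops)")                # 2000:1, ~11 stops
```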
But this also affects color... much of the complexity in getting printers to match a display derives from the interaction between the mismatched gamuts caused by differing primaries, as filtered through human perception (and/or perceptual models). But you can't control the ambient illumination, so you're at the mercy of whatever the spectrum of your illumination is, plus whatever adaptation state the viewer is in. This feels fundamentally impossible to do "correctly" under all circumstances.
Which is to say, the original sin of color theory is the dimensional collapse from a continuous spectrum to a 3-dimensional, discretized representation. It's a miracle we can see color at all...!
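That collapse is easy to write down: each cone response is an inner product of the incoming spectrum with a sensitivity curve, so infinitely many physically different spectra (metamers) map to the same three numbers. A sketch with crude Gaussian stand-ins for the real cone fundamentals (for a print, the incoming spectrum would be reflectance times illuminant, which is why the ambient spectrum matters so much):

```python
import numpy as np

wl = np.linspace(400, 700, 301)                   # wavelength grid, nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Crude stand-ins for the L/M/S cone fundamentals (real curves differ):
S = np.stack([gauss(565, 50), gauss(540, 45), gauss(445, 30)])   # 3 x N

spectrum = gauss(600, 40)        # some incoming light
lms = S @ spectrum               # three numbers: everything we "see"

# Any spectral perturbation in the null space of S is invisible.
# Project a random wiggle onto that null space:
rng = np.random.default_rng(0)
p = rng.normal(size=wl.size)
p_null = p - S.T @ np.linalg.solve(S @ S.T, S @ p)

metamer = spectrum + 0.2 * p_null
print(np.allclose(S @ metamer, lms))   # True: different light, same color
# (A physically realizable metamer also needs a nonnegative spectrum;
# this is just the linear algebra of the collapse to 3 dimensions.)
```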
> the primaries are defined by the display, not by ambient illumination
In itself that is correct, but as you've noted, our own visual system doesn't operate like that. The same display brightness and colors will be perceived very differently depending on the ambient light's brightness and color, and bright ambient light can also severely reduce the dynamic range that a display can actually make visible.
And this ambient light also clearly impacts how prints are seen.
This is actually even more puzzling than it seems, because the green part of the spectrum is where sunlight's power per wavelength peaks.
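That peak can be put on a number line with Wien's displacement law, using the Sun's effective temperature (with the caveat that the per-frequency peak lands in the near infrared, so "peak" depends on parameterization):

```python
b = 2.898e-3     # Wien's displacement constant, m·K
T_sun = 5778     # Sun's effective temperature, K
print(f"{b / T_sun * 1e9:.0f} nm")   # ~502 nm: blue-green
```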
I've read elsewhere that photosynthesis is partially limited by dealing with free radicals: at peak light flux, many plants would be damaged by the reactive oxygen species that light-capturing complexes would otherwise create. Hence pigments that reflect some green light.
Genuine question for AI engineers or self-driving-car people: is the Tesla approach of using only cameras inherently flawed? I've read that the AI is hooked up directly to the cameras, with no explicit intermediate 3D representation... everything is done in latent space. If true, this seems inherently hard to improve: throw more data at it, sure, but you can't necessarily understand how and why it fails when it does. That seems... non-optimal for safety-critical systems like self-driving cars.
There are plenty of visualizations of their intermediate voxel representation. It's hardly worse than LIDAR for the task, but it comes without all of LIDAR's downsides.
Do you know that an actual voxel representation exists before the self-driving AI? I was under the impression (admittedly secondhand, from conversations with engineers who might know) that there was no explicit 3D representation going into the self-driving module, and that the AI operates directly on pixels. I would be relieved to hear that there's an explicit 3D solve before that step... if that's accurate. Obviously there are 3D views on the dash, but my understanding is that those are not an input to the full self-driving solve. But again, it's hearsay (hence my question).
Purely intuitively, it seems like there should be a connection between the two (computation and intelligence). But I have not formally studied any relevant fields, so I would be interested to hear thoughts from those who have.
The known laws of physics are computable, so if you believe that human intelligence is purely physical (and the known laws of physics are sufficient to explain it) then that means that human intelligence is in principle no more powerful than a Turing machine, since the Turing machine can emulate human intelligence by simulating physics.
If there's a nonphysical aspect to human intelligence and it's not computable, then that means computers can never match human intelligence even in theory.
> Wait... not having spelling errors is now a mark of AI?
When you output long blog articles more than daily, it is. Proofreading takes time, and someone who cares enough to proofread will probably care enough to put in more time on other things that an LLM wouldn't care about (like information density, as noted in another comment; or editing after the fact to improve the overall structure; or injecting idiosyncratic wit into headings and subheadings).
Please take no offense—I genuinely want to understand. I agree that my blog needs work, especially with less fluff and more value—I’m working on that.
I guess where I’m coming from is this: why is it assumed that using tools like AI or Grammarly takes away from the creative process? For me, they speed up the mechanical side of things—grammar, flow, even structure—so I can spend more time on ideas, storytelling, informing, or just getting unblocked.
I do get frustrated when ChatGPT changes my wording or shifts the meaning of what I’m trying to say. It can definitely throw a wrench into the overall story. But in those cases, I rephrase my prompt, asking it not to touch the narrative or my word choices, just to act like a word processor on steroids or an expert editor.
I’m not saying these tools replace a good human editor—far from it. If I ever get to the point where I can work with a real editor or proofreader, I’d choose the human every time. But until then, these tools help me keep the momentum going—and I don’t see that as a lack of care.
On the contrary, it often takes me more time to get the output right—because I’m trying to make sure it still reflects exactly what I want to say.
Even if it’s you pulling the strings, it feels like what it is: a robot talking. It feels fake. Because it is. You’re not unique, so you’ll never stand out either. Just learn grammar on your own, and you’ll retain/add character to the text.
Now you’re just prompting. Just post the prompt, that’d be way more fun to read.