Even after working with colorspaces for decades in Photoshop and various game dev tools, I find color conversion mystifying. I've studied all of the equations and given it my best effort, but would not bet real money that the colors I'm displaying are close to real life. It's like the game of telephone: I just can't trust so many steps.
So for this article, I don't see mathematical proof that the negatives have been inverted accurately, regardless of method, even though I'm sure the results are great. I suspect it comes down to subjective impression.
Here's a video I found discussing monitor calibration:
https://www.youtube.com/watch?v=Qxt2HUz3Sv4
If I could fix everything, I'd make all image processing something like 64-bit linear RGB and keep the colorspace internal to the storage format and display, like a black box that's not relevant to the user. So, for example, no more HDR, and we'd always work with linear RGB in iOS instead of sRGB.
Loosely, that would look like this: each step of image processing would know the colorspace, so it would alert you if you applied the sRGB transform twice, taking the onus off the user and making it impossible to mess up. This would be like including the character encoding with each string. This sanity check should be included in video card drivers and game dev libraries.
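Something like this, as a minimal sketch (the `TaggedImage` type is hypothetical, not any real library):

```python
import numpy as np

class TaggedImage:
    """Hypothetical: pixel data that carries its colorspace with it,
    the way a string should carry its character encoding."""
    def __init__(self, pixels: np.ndarray, colorspace: str):
        self.pixels = pixels          # float array, shape (H, W, 3)
        self.colorspace = colorspace  # e.g. "linear" or "sRGB"

def encode_srgb(img: TaggedImage) -> TaggedImage:
    """Refuses to gamma-encode data that is already sRGB."""
    if img.colorspace != "linear":
        raise ValueError(f"won't encode {img.colorspace} data to sRGB twice")
    x = np.clip(img.pixels, 0.0, 1.0)
    encoded = np.where(x <= 0.0031308,
                       12.92 * x,
                       1.055 * x ** (1 / 2.4) - 0.055)
    return TaggedImage(encoded, "sRGB")
```

Calling `encode_srgb` twice raises instead of silently double-applying the transfer function, which is exactly the class of bug the tagging prevents.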
If linear processing isn't accurate enough for this because our eyes are logarithmic, then something has gone terribly wrong. Perhaps 16-bit floating-point 3-channel RGB should be standard. I suspect that objections to linearity get into audiophile territory, so they aren't objective.
For scanning color negatives, the brand of film would be mapped to a colorspace, the light source would have its own colorspace, the two would get multiplied together somehow, and the result would be stored in linear RGB. Inversion would be linear. Then the output linear RGB would get mapped to the display's sRGB or whatever.
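A literal reading of that pipeline, as a hedged sketch (the base-division and `1 - x` details are my own assumptions, not an established method, and a reply below argues the inversion step shouldn't actually be linear):

```python
import numpy as np

def scan_to_positive(scan_linear: np.ndarray, base_linear: np.ndarray) -> np.ndarray:
    """scan_linear: linear-RGB scan of the backlit negative.
    base_linear: linear RGB of an unexposed strip of the same film stock,
    which folds the film brand and the light source into one factor
    (the 'multiplied together somehow' step)."""
    balanced = scan_linear / base_linear    # divide out base tint + light color
    balanced /= balanced.max()              # normalize into [0, 1]
    positive = 1.0 - balanced               # the naive linear inversion
    return positive                         # still linear RGB; encode to the
                                            # display's sRGB only at the very end
```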
My confusion is probably user error on my part, so if someone has a link for best practices around this stuff, I'd love to see it.
As you suggest: storage in linear 16-bit float is standard, the procedure for calibrating cameras to produce the SMPTE-specified colourspace is standard, the output transforms for various display types are standards, files carry metadata to avoid double-transforming, etc. It is complex, but it gives you a lot more confidence than idly wondering how the RGB triplets in a given JPG relate to the light that actually entered the camera in the first place...
They also have lens sets with the same external form factor regardless of focal length (which makes it easy to swap lenses, use the same filters, etc.), and the lenses in a set are made so that the color reproduction of each one matches the others. Going further "to the source", it also plays into the (artificial) lighting used, and so on. Which is why all that stuff is so expensive to begin with.
> but would not bet real money that the colors I'm displaying are close to real life
Don’t overthink it. Light knows only of wavelengths; our brain is where colors exist. Everything here is subjective: we’re trying to approach what human eyes would perceive from the original subject, or not. Photography is an art, and only sometimes is the goal to accurately represent what’s in front of the camera; very often it’s the opposite.
When scanning originals, recording them in the most accurate way possible is desirable, and for that I’d suggest using multiple narrow-bandwidth emitters (as many as needed to capture the response curves of the pigments) and sensors tuned to those wavelengths. From there you should be able to reconstruct what a human eye would have seen through the lenses. But, again, what we see is nothing but what our brains make out of the light hitting our retinas; there will never be something that’s perfectly accurate.
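For the reconstruction step, the standard colorimetric calculation is to integrate the measured spectrum against the CIE 1931 observer functions. A sketch, assuming one sample per narrowband emitter:

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)  # nm; assumes one sample per emitter

def spectrum_to_xyz(transmittance, illuminant, cmf_x, cmf_y, cmf_z):
    """All arrays sampled at `wavelengths`. cmf_* are the CIE 1931
    color matching functions (tabulated values published by the CIE)."""
    stimulus = transmittance * illuminant    # light that would reach the eye
    k = 100.0 / np.sum(illuminant * cmf_y)   # scale so a perfect white gives Y = 100
    X = k * np.sum(stimulus * cmf_x)
    Y = k * np.sum(stimulus * cmf_y)
    Z = k * np.sum(stimulus * cmf_z)
    return X, Y, Z
```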
BUT... here's the rub: if your film is old, it has probably faded. So whatever you scan is going to be "wrong" compared to what it looked like the day it was taken. The only easy fix is to find the white point and black point in the scan and recalibrate all your channels against them. Then you're really just down to eyeballing it, IMO.
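A rough sketch of that recalibration, using percentiles rather than the absolute min/max so dust specks and blown highlights don't set the points (the cutoffs are arbitrary):

```python
import numpy as np

def auto_levels(scan: np.ndarray, lo_pct: float = 0.5, hi_pct: float = 99.5) -> np.ndarray:
    """Stretch each channel of a faded scan so its black point maps to 0
    and its white point maps to 1."""
    out = np.empty_like(scan, dtype=np.float64)
    for c in range(3):
        black = np.percentile(scan[..., c], lo_pct)
        white = np.percentile(scan[..., c], hi_pct)
        scale = max(white - black, 1e-6)  # guard against a flat channel
        out[..., c] = np.clip((scan[..., c] - black) / scale, 0.0, 1.0)
    return out
```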
> … but would not bet real money that the colors I'm displaying are close to real life…
You can get there if you have an accurate color profile for your camera and an accurate color profile for your monitor.
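For the mechanics, Pillow's `ImageCms` module can chain the two profiles (the filenames here are placeholders; the .icc files themselves come from shooting a color target for the camera and running a colorimeter on the monitor):

```python
from PIL import Image, ImageCms

img = Image.open("scan.tif")                      # placeholder filename
camera = ImageCms.getOpenProfile("camera.icc")    # camera characterization
display = ImageCms.getOpenProfile("monitor.icc")  # monitor calibration
converted = ImageCms.profileToProfile(img, camera, display)
converted.save("display_ready.tif")
```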
> So for this article, I don't see mathematical proof that the negatives have been inverted accurately, regardless of method, even though I'm sure the results are great. I suspect it comes down to subjective impression.
People who work with negatives generally just don’t give a shit about “accurate”. If you care about accurate colors, then maybe you would be shooting color positive film instead, or digital. It is generally accepted that a part of the process of shooting negatives is to make subjective decisions about color, after you develop the film.
That’s not to say that you can’t get accurate colors using negatives. It’s a physical process that records color, and you can make color profiles for negatives.
> For scanning color negatives, the brand of film would be mapped to a colorspace, the light source would have its own colorspace, the two would get multiplied together somehow, and the result would be stored in linear RGB. Inversion would be linear. Then the output linear RGB would get mapped to the display's sRGB or whatever.
What you would do is store a color profile in the image.
You can use linear RGB for storing images, but it’s wasteful. Linear RGB makes poor use of the encoding range.
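A quick way to see the waste: count how many 8-bit code values land below middle gray (about 18% linear reflectance) under each encoding.

```python
import numpy as np

codes = np.arange(256) / 255.0

def srgb_to_linear(v):
    """sRGB EOTF: the linear value each sRGB code represents."""
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

print(np.sum(codes < 0.18))                  # linear encoding: 46 of 256 codes below mid-gray
print(np.sum(srgb_to_linear(codes) < 0.18))  # sRGB encoding: 118 of 256 codes below mid-gray
```

With 16-bit float the pressure mostly disappears, which is why linear half-float is fine as a working format even though it's a poor fit for 8-bit delivery.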
If you care about correct colors, you can just embed a color profile in the image. It’s easy, and it’s supported by image editors. You just have to go through the tedious process of creating a color profile in the first place, which normally requires colorimetry equipment.
There’s no reason inversion must be linear. The response curve of negative film is, well, a curve. It is not a line. When you shoot negative film and print to paper, the paper has a response curve, too.
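For contrast with the linear sketch earlier in the thread, here's what a curve-based inversion looks like when you work in density (all numbers illustrative, not taken from any real film's datasheet):

```python
import numpy as np

def invert_via_density(transmittance: np.ndarray, neg_gamma: float = 0.6) -> np.ndarray:
    """Negative film is developed to a gamma well below 1 (around 0.6);
    printing onto higher-contrast paper is what restores overall contrast.
    Dividing by the gamma stands in for the paper's curve here."""
    density = -np.log10(np.clip(transmittance, 1e-4, 1.0))
    positive_density = (density.max() - density) / neg_gamma
    return 10.0 ** (-positive_density)  # back to transmittance
```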
The light source does not have a color space. It is just a single color, and that's not really a "space" of colors; what it has is a spectrum. The spectrum of light from the light source, combined with the spectral response curves of the dyes in the film layers and the spectral response curve of your sensor, produces a result you can capture in a single color profile for the entire process: shoot a bunch of test targets under controlled lighting conditions, develop, scan, and then measure the RGB values you get for those test targets. You use test targets with known colors that you buy from the store.
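The fitting step at the end can be as simple as a least-squares matrix, as a first approximation (real profiles typically add curves and LUTs on top; the CSV filenames are placeholders for, say, a 24-patch target's measured and published values):

```python
import numpy as np

measured_rgb = np.loadtxt("patches_rgb.csv", delimiter=",")   # (24, 3) placeholder: scanned patch values
reference_xyz = np.loadtxt("patches_xyz.csv", delimiter=",")  # (24, 3) placeholder: known target values

# Least-squares 3x3 matrix M such that measured_rgb @ M ~= reference_xyz
M, residuals, rank, sv = np.linalg.lstsq(measured_rgb, reference_xyz, rcond=None)

def scan_to_xyz(rgb: np.ndarray) -> np.ndarray:
    """Apply the fitted characterization to any pixel from the same pipeline."""
    return rgb @ M
```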