Medium Format Talk Forum
All this talk of per-pixel calibration in HNCS and how accurate its colors are got me wondering about one thing (noob alert): how does the camera sensor know the color at a given location when it's looking through a lens that doesn't have a flat transmission curve across the visible spectrum?

I sort of understand the sensor calibration and how it affects the raw file, but the moment there is a lens in the optical path with a transmission curve that isn't flat (and it never is), how does the sensor compensate for that curve? Every lens attenuates certain parts of the spectrum, so how does the camera correct for that? Is the lens's spectral curve known to the camera? If so, (dumb) adapted lenses would always get the color wrong.

Also, every lens has different dispersion characteristics across its elements, and even the best-controlled coatings have some variability, which means every lens differs in its spectral transmission curve. How does the camera account for this variation and still get accurate colors?
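For what it's worth, here's the mental model behind my question as a toy calculation: each raw channel value is roughly the integral over wavelength of (scene spectrum × lens transmission × channel sensitivity), so a smooth tilt in the lens curve acts much like a global color cast, which ordinary white balance would mostly remove. All the curves below are made-up assumptions for illustration, not real sensor or lens data:

```python
import numpy as np

# Wavelength grid across the visible spectrum (nm)
wl = np.linspace(400.0, 700.0, 301)
dwl = wl[1] - wl[0]

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical camera channel sensitivities (invented, not real sensor data)
sens = {"R": gaussian(600, 40), "G": gaussian(540, 40), "B": gaussian(460, 40)}

# Hypothetical lens transmission: slightly tilted, passing less blue than red
lens = 0.90 + 0.0005 * (wl - 550)  # ~0.83 at 400 nm, ~0.98 at 700 nm

def channel_response(reflectance, transmission):
    # Raw value per channel: sum of reflectance * lens * sensitivity over wavelength
    return {c: np.sum(reflectance * transmission * s) * dwl
            for c, s in sens.items()}

neutral = np.ones_like(wl)             # spectrally flat gray patch
raw = channel_response(neutral, lens)  # raw values carry the lens tint (R > B here)

# White balance: scale each channel so the neutral patch comes out equal
wb = {c: raw["G"] / v for c, v in raw.items()}
balanced = {c: raw[c] * wb[c] for c in raw}
print(raw)       # unequal channels: the lens tilt shows up as a color cast
print(balanced)  # equal channels: the cast is gone after white balance
```

In this toy model a smooth per-lens tilt is indistinguishable from a tint in the light source, so white balance absorbs it; what it can't capture is a lens curve whose shape varies *within* a channel's passband, which I assume is where per-lens profiling would have to come in.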