Late last night, I completed the math and code to correct my high-precision colorimeter display measurements using spectral readings taken with a spectrophotometer.
During #MWC15, I went from booth to booth with both an X-Rite i1 Display Pro colorimeter, which is particularly quick, and an EFI ES-1000 spectrophotometer (the same device as an X-Rite i1 Pro), connected alternately to the tablet running my software.
The spectrophotometer is about 3 times slower, but it has the merit of seeing the intensity of each light wavelength, while the colorimeter uses a kind of RGB sensor that can only report true colors when their spectral characteristics match what it was optimized for.
The correction logic was not implemented until now, so here are graphs from a +HTC One M9 unit, before and after correction.
Not only does the spectrophotometer not see the same thing, but these measurements confirm why you've heard reviewers mention a "green tint" on the M9. It's indeed there, and our eyes are very sensitive to an excess of green; we tend to be less bothered by wrong amounts of red and blue, however. The M9 display also has too much blue and not enough red.
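For the curious, the general shape of such a correction can be sketched as follows. This is not my actual implementation, and all the XYZ numbers below are invented for illustration: measure a few patches (say R, G, B, W) with both instruments, then solve for a 3×3 matrix that maps the colorimeter's XYZ readings onto the spectrophotometer's.

```python
import numpy as np

# Hypothetical XYZ readings of R, G, B, W patches (numbers invented).
colorimeter_xyz = np.array([
    [41.2,  21.3,   1.9],
    [35.8,  71.5,  11.3],
    [18.0,   7.2,  95.0],
    [95.0, 100.0, 108.9],
])
spectro_xyz = np.array([
    [43.0,  22.0,   2.0],
    [33.5,  69.0,  10.5],
    [19.1,   7.5,  97.2],
    [95.6, 100.0, 109.7],
])

# Least-squares fit of a 3x3 matrix so that colorimeter readings,
# once multiplied by M, land close to the spectrophotometer readings.
sol, *_ = np.linalg.lstsq(colorimeter_xyz, spectro_xyz, rcond=None)
M = sol.T

def correct(xyz):
    """Map a raw colorimeter XYZ reading into the spectrophotometer's frame."""
    return M @ np.asarray(xyz)
```

Once fitted against the reference instrument for a given panel type, the fast colorimeter can be used alone, with `correct()` applied to each reading.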
One thing often overlooked is that the CIE 1931 2° color matching functions are not ideal for matching monitors in the first place, for several reasons:
– When you look at a monitor, the field of view covers a lot more than 2°
– Research since 1931 has characterized the average eye's spectral response much better
Wide-gamut displays such as OLEDs, LCDs equipped with quantum dot films, or even LCDs with laser backlights can produce color primaries with very narrow spectra.
When the wavelengths composing these primaries fall where the CIE 1931 XYZ functions are less accurate, color matching between displays of different technologies simply doesn't work anymore. Like way off.
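A toy demonstration of why narrow primaries are so touchy. The Gaussian curves below are crude stand-ins for the real x̄, ȳ, z̄ observer functions (every number here is invented, not CIE data), but they show the effect: the same 10 nm shift of a primary's peak moves the computed chromaticity much further when the primary's spectrum is narrow.

```python
import numpy as np

wl = np.arange(380.0, 781.0)  # wavelengths in nm

def gauss(mu, sigma):
    """Gaussian bump over the wavelength axis."""
    return np.exp(-((wl - mu) ** 2) / (2 * sigma ** 2))

# Crude stand-ins for the CIE observer curves -- NOT real CMF data.
xbar = gauss(600, 40) + 0.35 * gauss(445, 20)
ybar = gauss(555, 45)
zbar = 1.8 * gauss(450, 25)

def chromaticity(spectrum):
    """Integrate a spectrum against the toy observer to get (x, y)."""
    X = np.sum(spectrum * xbar)
    Y = np.sum(spectrum * ybar)
    Z = np.sum(spectrum * zbar)
    s = X + Y + Z
    return np.array([X / s, Y / s])

# Same 10 nm peak shift, narrow emitter vs. broad emitter.
narrow_shift = np.linalg.norm(chromaticity(gauss(530, 5)) - chromaticity(gauss(540, 5)))
broad_shift = np.linalg.norm(chromaticity(gauss(530, 50)) - chromaticity(gauss(540, 50)))
# The narrow primary's chromaticity moves much more for the same shift,
# so any local inaccuracy in the observer curves hurts it much more.
```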
As an illustration, you have probably seen an AMOLED-equipped smartphone that looked too green despite the manufacturer's effort to calibrate it to a D65 white point, with measurements from regular tools and software confirming the calibration; your eyes, however, disagreed.
This +Sony whitepaper explains that they hit the same issue when calibrating their reference OLED displays, and describes the solution they adopted.
That's another reason why I develop my own display analysis software suite. The mobile industry is extremely fast at adopting the latest display technologies, and it takes state-of-the-art research to keep up 🙂
I've been fascinated by this explanation of how calibration was done for film distribution, with inevitable deviations due to the analog nature of the process.
In this video, +CineTechGeek shows that it essentially consists of calibrating the primaries' coordinates. I wonder what the response curve is, though: essentially linear with a rolloff in the highlights, I suppose? Cool stuff, I'll watch more of those videos to continue learning about it 😊
As you can see from the simulated measurements shown here, both the Luminance and Gamma graphs are crapped up in the current HCFR 3.1.6, compared to the old 126.96.36.199 version (from April 2012).
On the CIE 1931 gamut and saturation graphs, the saturation targets are actually nice in the new version, but the errors when visualizing the curves are a deal breaker. A little bit of time lost, it seems, but I'm still targeting a public release soon.
I noticed no errors or misconceptions; that is rare enough to deserve congratulations 🙂 He makes a series of good points on introduction-level questions and explains them in ways that are easy to understand.
Excellent job +Curtis Judd! All my encouragement and support if you'd like to talk more about calibration in future videos.
In my last post, I talked about flat-field correction, a technique that fixes lens vignetting and the associated lens+sensor color cast.
Here's a first result 🙂
This is the same picture of a uniform white reference, with contrast +100 in Lightroom to emphasize the difference:
– Without DNG correction
– Without DNG correction, but with Lightroom's built-in Lens Vignetting tool adjusted to the best possible settings
– With the DNG flat-field correction defined in the DNG file itself: to the user, it's as if the vignetting and color cast were never there!
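The core idea behind flat-field correction is simple enough to sketch in a few lines (this is an illustration of the principle, not the DNG opcode format itself): shoot a uniform white target through the lens, then use that frame to compute a per-pixel, per-channel gain that flattens it.

```python
import numpy as np

def flat_field_correct(image, white_ref):
    """Divide out the shading recorded in a shot of a uniform white target.

    Both arrays are linear (H, W, 3) floats; white_ref was shot through
    the same lens at the same aperture as image.
    """
    # Per-pixel, per-channel gain that makes the white reference uniform.
    gain = white_ref.max(axis=(0, 1)) / white_ref
    return image * gain

# Synthetic demo: a radial falloff standing in for vignetting.
yy, xx = np.mgrid[0:64, 0:64]
falloff = 1.0 - 0.4 * ((xx - 31.5) ** 2 + (yy - 31.5) ** 2) / (2 * 31.5 ** 2)
white_shot = np.stack([falloff] * 3, axis=-1) * 180.0  # vignetted white frame
flat = flat_field_correct(white_shot, white_shot)
# `flat` is now perfectly uniform: the vignetting is gone.
```

Because the gain map is per channel, the same division also removes the lens+sensor color cast, not just the luminance falloff.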
Wow, my calibration algorithm works automatically and with 256 points instead. That means correcting every single level in the grayscale instead of the usual broad interpolation approximation demonstrated here. I also notice that their automatic calibration starts at $299.
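To show what per-level correction means in practice, here is a minimal sketch (my own toy example, with an invented panel response, not my actual algorithm): build a full 256-entry lookup table where each input level is mapped to the drive level whose measured output best matches the target curve, instead of fitting just a handful of points and interpolating between them.

```python
import numpy as np

levels = np.arange(256) / 255.0

# Hypothetical measured panel response: gamma ~2.4 instead of the 2.2 target.
measured = levels ** 2.4
target = levels ** 2.2

# For each of the 256 input levels, pick the drive level whose measured
# output is closest to the target output -- one correction per level.
lut = np.array([np.argmin(np.abs(measured - t)) for t in target], dtype=np.uint8)
```

A real implementation would measure `measured` patch by patch with the instrument, but the lookup structure is the same: 256 independent corrections rather than a coarse interpolated curve.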
I got the black point calculations right this time! This one wasn't so easy to write and required some silence to boot 🙂
Now it works just as expected and allows generating an XYZ black point target:
– from an XYZ input (measured) black point
– for a defined RGB colorspace
– for a target white point
– with two parameters: one specifying how much the black point color should stick to the white point color versus the input black point XYZ color, and another setting the balance between channel clipping and clipping protection.
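The first of those two parameters can be sketched like this (a simplified illustration of the blending idea only — the RGB colorspace handling and the clipping-protection balance are omitted, and the function name and numbers are mine): blend the chromaticity of the target between the measured black and the target white point, while keeping the measured black's luminance.

```python
import numpy as np

def black_point_target(black_xyz, white_xyz, stick_to_white=0.5):
    """Blend the black point chromaticity toward the white point.

    stick_to_white = 0 keeps the measured black's color,
    stick_to_white = 1 gives the black the white point's color.
    The measured black luminance (Y) is preserved either way.
    """
    def xy(xyz):
        X, Y, Z = xyz
        s = X + Y + Z
        return np.array([X / s, Y / s])

    x, y = (1 - stick_to_white) * xy(black_xyz) + stick_to_white * xy(white_xyz)
    Y = black_xyz[1]                 # keep the measured black luminance
    X = x * Y / y                    # rebuild XYZ from (x, y, Y)
    Z = (1 - x - y) * Y / y
    return np.array([X, Y, Z])
```

With `stick_to_white=1.0` the deepest shadows take on exactly the white point's tint, which tends to look neutral to the eye even when the panel's native black is strongly colored.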
Coding for myself still feels different (relaxing) compared to coding for work. Phew! Alright, I wasn't sure it would happen.