This is still a work in progress and unfinished: I know not everything is implemented yet in terms of color spaces and alternative color matching functions.
But already, this sample, generated from spectrophotometer readings using the CIE 1931 2° color matching function as modified by Judd and Vos, looks really close to the real thing. This is when comparing, in person, the Nexus 10 against the simulated image displayed on a calibrated display.
Late last night, I completed the math and code to adjust my display's high-precision colorimeter measurements with spectral readings taken with a spectrophotometer.
During #MWC15, I went booth to booth with both an X-Rite i1 Display Pro colorimeter, which is particularly quick, and an EFI ES-1000 spectrophotometer (the same hardware as an X-Rite i1 Pro), connected alternately to the tablet running my software.
The spectrophotometer is about 3 times slower, but has the merit of measuring the intensity of each light wavelength, while the colorimeter uses a kind of RGB sensor that can report the real colors only when the display's spectral characteristics match what the sensor was optimized for.
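The kind of correction involved can be sketched in a few lines of Python: measure a handful of patches with both instruments, then fit a 3×3 matrix that maps the colorimeter's XYZ readings onto the spectrophotometer's. This is a generic least-squares sketch with made-up numbers, not my actual implementation:

```python
import numpy as np

# Paired XYZ readings of the same patches (red, green, blue, white),
# one set from the colorimeter, one from the spectrophotometer.
# All numbers below are made up for illustration.
colorimeter = np.array([
    [41.2,  21.3,   1.9],   # red patch
    [35.8,  71.5,  11.3],   # green patch
    [18.0,   7.2,  95.0],   # blue patch
    [95.0, 100.0, 108.9],   # white patch
])
spectro = np.array([
    [43.0,  22.0,   2.0],
    [33.5,  69.8,  10.9],
    [18.4,   7.5,  97.2],
    [94.9,  99.3, 110.1],
])

# Least-squares fit of X such that colorimeter @ X ~= spectro;
# the correction matrix is then M = X.T (applied to column vectors).
X, *_ = np.linalg.lstsq(colorimeter, spectro, rcond=None)
M = X.T

def correct(xyz):
    """Correct a raw colorimeter XYZ reading for this display."""
    return M @ np.asarray(xyz)
```

Once a matrix like M is computed for a given display, the fast colorimeter can be used alone, with its readings corrected on the fly.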
The correction logic was not implemented until now, so here are graphs from an +HTC One M9 unit, before and after correction.
Not only does the spectrophotometer see something different, but these measurements confirm why you've heard reviewers mention a "green tint" on the M9. It's indeed there, and our eyes are very sensitive to an excess of green; we tend to be less bothered by wrong amounts of red and blue, however. The M9 display also shows too much blue and not enough red.
One thing often overlooked is that the CIE 1931 2° color matching function is not ideal for matching monitors in the first place, for several reasons:
– When you look at a monitor, the field of view involved represents a lot more than 2°
– Research since 1931 has characterized the average eye's spectral response a lot better
Wide gamut displays like OLEDs, LCDs equipped with quantum dot film, or even LCDs with laser backlights can produce color primaries with very narrow spectra.
When the wavelengths composing these primaries fall where the CIE 1931 XYZ functions are less precise, color matching between displays of different technologies simply stops working. Like, way off.
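To see why, recall how tristimulus values are computed: XYZ is just the spectral power distribution (SPD) integrated against the three color matching functions. Below is a toy sketch, with Gaussian stand-ins rather than real CMF tables, showing that a narrow-band primary shifts a lot when the observer functions move slightly, while a broadband spectrum barely moves:

```python
import numpy as np

def spd_to_xyz(wavelengths, spd, cmf):
    """Integrate an SPD against a set of color matching functions.
    cmf is an (N, 3) array of x, y, z samples at the same wavelengths."""
    step = wavelengths[1] - wavelengths[0]
    return step * spd @ cmf  # -> (X, Y, Z)

# Toy demonstration: a narrow-band green primary (like an OLED or
# quantum-dot display) integrated against two slightly shifted observer
# stand-ins. The Gaussians below are NOT real CMF tables, just
# placeholders to show how sensitive the result is.
wl = np.arange(380.0, 781.0, 1.0)
narrow_green = np.exp(-0.5 * ((wl - 530.0) / 10.0) ** 2)

def fake_cmf(shift):
    x = np.exp(-0.5 * ((wl - (599.0 + shift)) / 37.0) ** 2)
    y = np.exp(-0.5 * ((wl - (555.0 + shift)) / 42.0) ** 2)
    z = np.exp(-0.5 * ((wl - (446.0 + shift)) / 20.0) ** 2)
    return np.stack([x, y, z], axis=1)

xyz_a = spd_to_xyz(wl, narrow_green, fake_cmf(0.0))
xyz_b = spd_to_xyz(wl, narrow_green, fake_cmf(5.0))  # observer shifted 5 nm
# The same narrow spectrum yields noticeably different XYZ under the two
# observers; with a flat, broadband spectrum the difference nearly vanishes.
```

Two displays calibrated to the same XYZ under one observer function can therefore look different to a real viewer whose response the function models poorly, and narrow primaries amplify the mismatch.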
As an illustration, you have probably seen an AMOLED-equipped smartphone that looked too green despite the manufacturer's effort to calibrate it to a D65 white point: measurements with regular tools and software confirmed the calibration, but your eyes disagreed.
This +Sony whitepaper explains that they ran into the same issue when calibrating their reference OLED displays, and describes the solution they adopted.
That's another reason why I'm developing my own display analysis software suite. The mobile industry is extremely fast at adopting the latest display technologies, and keeping up requires state-of-the-art research 🙂
It's a complete revolution in what Android camera apps can do, bringing terrific new processing capabilities (using various forms of hardware acceleration).
Most of what only vendors' camera apps could do, using proprietary APIs and sometimes ISP (Image Signal Processor) specific features, will now be possible for third-party apps. Think:
– burst, and any application built on high-speed burst shots such as HDR or super-resolution
– RAW saved as DNG
– uncompressed de-bayered images to process with the CPU, or with the GPU using GL ES shaders
– color space conversion from native sensor RGB, and custom contrast curves or tone mapping
And a lot more: with L, Android enters totally new territory for its camera.
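That last list item, color space conversion plus tone mapping, is the kind of processing that becomes straightforward once an app has linear de-bayered data. A minimal numpy sketch, with a made-up sensor matrix and a plain gamma curve standing in for a real tone curve (a real app would build the matrix from the sensor calibration metadata delivered with each capture):

```python
import numpy as np

# Convert a linear, de-bayered frame from the sensor's native RGB to a
# display color space with a 3x3 matrix, then apply a custom tone curve.
# SENSOR_TO_SRGB is a made-up placeholder; its rows sum to 1 so that
# sensor white maps to display white.
SENSOR_TO_SRGB = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.1, -0.5,  1.4],
])

def process(frame_linear):
    """frame_linear: (H, W, 3) float array of linear sensor RGB in [0, 1]."""
    # Color space conversion: one matrix multiply per pixel.
    rgb = frame_linear @ SENSOR_TO_SRGB.T
    rgb = np.clip(rgb, 0.0, 1.0)
    # Custom tone curve: a simple 1/2.2 gamma stand-in for whatever
    # contrast curve or tone mapping the app wants to apply.
    return rgb ** (1.0 / 2.2)
```

On-device, the same two steps would typically run as a GL ES shader rather than on the CPU, but the math is identical.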
Results are good 🙂 Also, the approach is completely different from everything I've seen so far.
Attached: D65 calibration on Nexus 7 (2013): very first results
I take all the measurements I need first, and then everything can be done through calculation. Typical auto-calibration algorithms measure various kinds of color patches I can't justify, then try to improve their vastly interpolated early results through several optimization passes. I'm not sure why they do that, as it seems inefficient. Maybe those algorithms were designed with different goals than mine: as if you couldn't be sure what the hardware would do with your profile, so you load it, measure again, and retry, again and again. But that means you weren't measuring correctly to begin with, or you're working with inconsistent, unpredictable hardware.
A huge benefit of my approach is accuracy first; another is that you can tune the algorithm's parameters all you want without taking any new measurements (which take a vast amount of time).
Today I'm adding black point compensation strategies to it, in order to provide a smooth gradation near black instead of clipping at RGB (10, 10, 15) on the Nexus 7 (2013) when targeting a gamma 2.2 response curve with pure black output.
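For context, one common compensation strategy (sketched below with made-up luminance numbers, and not necessarily the exact strategy I'm implementing) is to scale and offset the target curve so it starts at the panel's measured black level instead of at zero:

```python
import numpy as np

# A pure gamma 2.2 target, Y = Ywhite * v**2.2, asks the panel for
# luminances darker than its measured black level, so every level whose
# target falls below that black level clips to the same darkest output.
# Scaling and offsetting the curve to start at the measured black gives
# a smooth, monotonic ramp near black instead.
def target_curve(levels=256, y_white=350.0, y_black=0.35, gamma=2.2):
    """Per-channel target luminance (cd/m2) for input levels 0..levels-1.
    y_white and y_black are the measured white and black luminances;
    the values above are illustrative, not measurements of any device."""
    v = np.linspace(0.0, 1.0, levels)
    naive = y_white * v ** gamma  # unreachable below y_black: clips
    compensated = y_black + (y_white - y_black) * v ** gamma
    return naive, compensated
```

With the naive curve, a whole run of near-black input levels maps to the same darkest output the panel can produce, which is exactly the clipping described above; the compensated curve starts at the black level and stays strictly monotonic.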
And I'm having a lot of fun doing this!
I really wasn't sure I would be able to pull off this auto-calibration thing, thinking my math skills were too limited. But it turned out fine, even if it took quite some time to turn the theoretical concept I had in mind into code.