The cost of HDR+ computational photography

+Sam Pullen demonstrates in this video the main limitation of Google's current computational photography approach.

Most reviewers report this as "lag" or "bugs" in the #Nexus 5X and 6P camera app, since HDR+ is the mode activated by default, but it has a simple explanation:
The time needed to process the multiple exposures that make up one HDR+ picture, combined with the limited amount of RAM allowed for buffering, makes the mode unsuitable for shooting consecutive pictures.

Here's the process as it happens on any Android smartphone from the past few years (a minimal Camera2 sketch follows the list):

– you launch the camera app
– the viewfinder preview starts: the ISP captures full-resolution readouts from the sensor at 30 FPS, renders them at near display resolution, and adjusts automatic exposure and white balance in real time
– you press the shutter button
– the camera ISP hardware takes one of the full-resolution sensor readouts, renders it at full resolution, compresses the output automatically using the hardware JPEG encoder, and offers the compressed result as a buffer to the camera app, which saves it as a file on disk.
– this sequence of operations uses almost no CPU at all and can be repeated 4 times per second or more.
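To make the standard path concrete, here is a minimal sketch of a single still capture using Android's Camera2 API. It assumes a `CameraDevice`, a configured `CameraCaptureSession` and a JPEG-format `ImageReader` were created when the app launched; the class, method names and output path are illustrative, not the actual Google Camera code.

```java
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.media.Image;
import android.media.ImageReader;

import java.io.FileOutputStream;
import java.nio.ByteBuffer;

class StandardCapture {
    // One request: the ISP renders a single full-resolution readout and the
    // hardware JPEG encoder compresses it; the CPU only copies bytes to disk.
    static void takeStandardPicture(CameraDevice cameraDevice,
                                    CameraCaptureSession captureSession,
                                    ImageReader jpegReader) throws Exception {
        CaptureRequest.Builder builder =
                cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.addTarget(jpegReader.getSurface());

        jpegReader.setOnImageAvailableListener(reader -> {
            try (Image image = reader.acquireNextImage();
                 FileOutputStream out = new FileOutputStream("/sdcard/DCIM/IMG.jpg")) {
                // The buffer already contains a finished JPEG produced in hardware.
                ByteBuffer jpeg = image.getPlanes()[0].getBuffer();
                byte[] bytes = new byte[jpeg.remaining()];
                jpeg.get(bytes);
                out.write(bytes);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, null);

        captureSession.capture(builder.build(), null, null);
    }
}
```

Because the heavy lifting happens in the ISP and the JPEG encoder, the application's only job is to copy an already compressed buffer to storage, which is why this path can sustain several captures per second.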

Instead, here's the process with HDR+:

– until you press the shutter button: identical.
– the camera ISP hardware takes multiple consecutive readouts (up to 9*) from the sensor at 30 FPS, some with positive or negative exposure compensation, renders them, and offers them to the camera application as uncompressed buffers in RAM.
– the camera application takes all these images as input and feeds them to a super-resolution algorithm, which also tunes the local contrast and color balance, compressing or extending the dynamic range locally depending on the analyzed image content.
– the HDR+ algorithm takes a few hundred milliseconds to several seconds to render a processed image
– once the HDR+ algorithm is finished, it offers the result as a buffer to the hardware JPEG encoder, which returns a compressed buffer to the camera app, which then saves it as a file.
– while HDR+ processing runs in the background, you can press the shutter button again to trigger another capture, but only as long as there is enough free memory to store those multiple exposures as uncompressed image buffers in RAM (a bracketed-burst sketch follows this list).
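For contrast, here is how a camera app can request such a bracketed burst with the Camera2 API. This is only an illustration of the capture side, not Google's proprietary HDR+ pipeline; the 9-frame pattern and the exposure-compensation steps are assumptions, and the merge and tone-mapping work still has to happen on the CPU or GPU afterwards.

```java
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.media.ImageReader;

import java.util.ArrayList;
import java.util.List;

class HdrBurstCapture {
    // Requests an exposure-bracketed burst of uncompressed YUV frames.
    // Every frame lands in RAM through yuvReader; nothing in this path is
    // hardware JPEG encoded until the merge step has produced a result.
    static void captureBracketedBurst(CameraDevice cameraDevice,
                                      CameraCaptureSession session,
                                      ImageReader yuvReader) throws Exception {
        // Hypothetical bracket: three frames each at 0, -2 and +2 AE steps.
        int[] evSteps = {0, 0, 0, -2, -2, -2, 2, 2, 2};

        List<CaptureRequest> burst = new ArrayList<>();
        for (int ev : evSteps) {
            CaptureRequest.Builder builder =
                    cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
            builder.addTarget(yuvReader.getSurface());
            builder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, ev);
            burst.add(builder.build());
        }

        // All frames are queued at once; their uncompressed buffers stay in
        // RAM until the merge and tone-mapping step has consumed them.
        session.captureBurst(burst, null, null);
    }
}
```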

As you can see from this list of operations, the two modes function rather differently.

– Standard mode doesn't rely on the CPU to do much besides synchronizing the preview between the camera and the display hardware, then saving the final result as a file on disk.

– HDR+ relies extensively on the RAM and CPU to build (hopefully) better images from many captures.

As a result, both the RAM and CPU are bottlenecks limiting the consecutive shooting capability.
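A quick back-of-the-envelope estimate shows why RAM fills up so fast. Assuming 12 Mpixel frames stored as 8-bit YUV420 (1.5 bytes per pixel) and up to 9 frames per HDR+ burst:

```java
class BurstMemoryEstimate {
    public static void main(String[] args) {
        // Assumptions: 12 Mpixel sensor, 8-bit YUV420 (1.5 bytes per pixel),
        // up to 9 uncompressed frames held in RAM per HDR+ capture.
        long pixelsPerFrame = 12_000_000L;
        double bytesPerPixel = 1.5;          // 1 byte luma + 0.5 byte chroma
        int framesPerBurst = 9;

        double frameMB = pixelsPerFrame * bytesPerPixel / (1024 * 1024);
        double burstMB = frameMB * framesPerBurst;

        System.out.printf("One frame: %.0f MB, one burst: %.0f MB%n", frameMB, burstMB);
        // Prints roughly: One frame: 17 MB, one burst: 154 MB.
        // Two bursts pending in the background already tie up ~300 MB of
        // buffers, a large share of what Android grants a single app.
    }
}
```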

Now you may ask: why isn't HDR a problem on other phones?

There are several explanations:

– Google chose a target that's unsuitable for consecutive image shooting on the Nexus 5X: too many images in the buffer given the amount of RAM and computational capability available, and processing that is too complex.

– Google's HDR+ implementation is not optimized enough.

– Samsung flagships since the Galaxy S5 render their HDR with hardware acceleration instead of relying on the CPU. Their implementation is efficient enough to process the preview in HDR at 30 FPS, it compresses the dynamic range more and better than Google's HDR+, and it doesn't slow down shooting either. Samsung's HDR rendering is even available to third-party applications through their Camera SDK, while Google's HDR+ is entirely proprietary.

How can Google improve the situation?

– Reducing the number of captures directed to HDR+ dynamically, depending on the load, to avoid stalling and making the photographer miss shots

– Reverting to 100% hardware-accelerated standard shooting when at least two HDR+ images are processing in the background, instead of preventing the user from shooting (see the sketch after this list). A standard image is better than no image at all. As demonstrated by +Sam Pullen, the current situation generates user frustration.

– Using more hardware acceleration (OpenGL ES shaders, RenderScript) and less CPU to speed up the HDR+ algorithm and catch up with the competition, improving power efficiency and avoiding further slowdowns during consecutive shooting caused by CPU thermal throttling.
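Here is a minimal sketch of what the second suggestion could look like inside a camera app. The two-job threshold, the counter and the method names are hypothetical, and the `StandardCapture` / `HdrBurstCapture` helpers are the ones sketched earlier; the point is only to show degrading gracefully to a hardware-encoded capture instead of blocking the shutter.

```java
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.media.ImageReader;

import java.util.concurrent.atomic.AtomicInteger;

class ShutterPolicy {
    // Hypothetical count of HDR+ bursts still being merged in the background.
    private final AtomicInteger pendingHdrJobs = new AtomicInteger(0);
    private static final int MAX_PENDING_HDR_JOBS = 2;   // assumed threshold

    void onShutterPressed(CameraDevice device,
                          CameraCaptureSession session,
                          ImageReader yuvReader,
                          ImageReader jpegReader) throws Exception {
        if (pendingHdrJobs.get() < MAX_PENDING_HDR_JOBS) {
            // Normal path: queue another HDR+ burst (see the earlier sketch).
            pendingHdrJobs.incrementAndGet();
            HdrBurstCapture.captureBracketedBurst(device, session, yuvReader);
        } else {
            // Fallback path: the pipeline is saturated, so take a plain
            // hardware-encoded JPEG rather than refusing to shoot at all.
            StandardCapture.takeStandardPicture(device, session, jpegReader);
        }
    }

    // Called by the HDR+ merge code once a queued burst has been processed.
    void onHdrJobFinished() {
        pendingHdrJobs.decrementAndGet();
    }
}
```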

* The 9-frame figure for HDR+ was mentioned in a Google blog post last year; it could have changed in the latest camera app.

#supercurioBlog #camera #video

Source post on Google+

Published by

François Simond

Mobile engineer & analyst specialized in display, camera color calibration, audio tuning

14 thoughts on “The cost of HDR+ computational photography”

  1. I'm pretty sure Google is already very low level in terms of processing the HDR. The Google Camera app already has a lot of C libs, including RenderScript.

    Also about the RAM point. You could first write the raw bytes to disk and queue the processing up. Then you won't get into RAM problems. It's just a question of how much data and how fast the IO is.

    But one point is just evident. There is just not enough processing power to handle all this simultaneously and Google for sure has done optimization given how much emphasis they put on the camera.

  2. +Michael Panzer yes, monitoring RenderScript usage is not as easy as observing the CPU load and frequencies.
    However, if RenderScript kernels end up running on the GPU, it should be possible to observe that using +Christian Göllner's System Monitor app.
    Writing out 12 Mpixel 8-bit YUV420 buffers, like 9 of them for a single image, with full-disk encryption enabled might not speed up the process much, unfortunately.

  3. The one thing I'd point out is that Google's HDR+ also yields by far the most striking HDR results I have seen on any smartphone. While it may be computationally expensive, the results are difficult to argue with.

    I've noted a similar level of very long post-processing time on the Blackberry Priv when it shoots HDR, as well. And that camera also generates some pretty striking HDR images.

  4. Google doesn't seem to want to go hardware-specific in their code, maybe because they have Android AOSP in mind when programming, I don't know.
    That's the thing: OEMs like Samsung, instead of relying on AOSP, do their own code implementation in a way that is hardware-specific and therefore more effective.

    Anyway, then there's the fact that the Nexus 5X should have 3 GB and the 6P should have 4 GB. The only reason I see for not having that is maybe Google doesn't want developers to waste memory in their code and is forcing them to code more efficiently. After all, the Nexus line is still kind of a developers' line.
    On the other hand, the Nexus should set an example of the future's trend, and 3 and 4 GB are becoming the trend for medium/high-end smartphones.

    I know I want 4 GB in my next phone; if the 6P had 4 GB I would buy it on the first day. Without it I think I'll pass, my OnePlus One is still up for the job anyway.

  5. +David Ruddock Samsung and other OEMs are also computationally expensive. The difference is that OEMs like Samsung use the GPU and Google uses the CPU. It's not more expensive as in code, but it's more expensive as in using a less efficient tool for the job.
