Mac Format

Smart HDR photography

The latest iPhones put artificial intelligence into every photo

- Adam Banks

All cameras have a lens to focus light, a shutter to control how much light gets in, and something – film or a sensor chip – to record that light.

But there’s one more crucial element of every modern camera: the software. ‘Computational photography’ relies on a combination of hardware and software processes before, during and after you ‘take’ the photo.

It’s particularly important in phone cameras, because the laws of physics aren’t keen on miniaturising optics. It’s impossible, for example, to shoot with shallow depth of field using a lens and sensor that will fit into a smartphone. That’s why we have Portrait mode, which simulates it with digital blur.

HDR (high dynamic range) is a slightly different kettle of fish. In one sense, it simulates what a bigger camera could do. But it also goes beyond that, using techniques that originated in the darkroom.

Dynamic range

Dynamic range (DR) is a measure of the ability to register detail in the lightest and darkest parts of a scene simultaneously. If you ever shoot in manual mode with a digital camera that has live preview, you’ll see the limits of DR in action: as you try to set the exposure, you have to choose between ‘blowing out’ the brightest areas to white or losing detail in the shadows.

Film gives you two cracks at exposure: once when you shoot, then again when you develop. Because film has relatively high DR – about 14-16 stops, where a stop means halving the amount of light – you can shoot to capture detail in shadows, then develop to rescue detail in highlights. (Phone camera sensors have more like 9-10 stops of DR.)
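To put those stop counts in perspective, here’s a quick Python sketch. Since each stop doubles the light, dynamic range in stops converts to a lightest-to-darkest contrast ratio of 2 to the power of the stop count (the figures used are the article’s rough estimates, not measured values):

```python
# A stop means doubling or halving the light, so a dynamic range of
# n stops corresponds to a contrast ratio of 2 ** n between the
# brightest and darkest recordable tones.
def contrast_ratio(stops: float) -> float:
    """Convert dynamic range in stops to a lightest:darkest ratio."""
    return 2.0 ** stops

print(f"Film (~14 stops):  {contrast_ratio(14):,.0f}:1")   # 16,384:1
print(f"Phone (~10 stops): {contrast_ratio(10):,.0f}:1")   # 1,024:1
```

Those four extra stops mean film can span a scene with roughly sixteen times the contrast a phone sensor can capture in a single exposure.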

By messing with chemistry, you can even develop the highlights and shadows at different speeds, compressing tonal range without sacrificing either one of them, though midtones may start to look artificial.

‘Dodging’ or ‘burning’ can selectively increase or decrease exposure by masking areas at different stages of developing. Ansel Adams, famous for his black-and-white landscapes, was a master of these techniques. Finally, the ultimate trick is to take light and dark areas from frames shot with different exposures, a practice dating back to the early days of photography in the 19th century.

HDR in software combines all of these methods, and now Apple’s Smart HDR goes several steps further. It achieves much greater dynamic range than an iPhone’s tiny sensor ever could, but can differ noticeably from what you’d see with your own eyes.
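The core of that multi-exposure trick can be sketched in a few lines of Python. This is a toy illustration of exposure fusion – not Apple’s actual Smart HDR pipeline – assuming the frames are already aligned, the same size, and hold pixel values from 0 to 1; each pixel is weighted by how close it sits to mid-grey:

```python
import numpy as np

def fuse_exposures(frames):
    """Blend bracketed exposures, favouring well-exposed pixels.

    A Gaussian weight peaks at mid-grey (0.5), so near-black or
    blown-out pixels contribute little to the final image.
    """
    frames = np.stack(frames)                              # (n, h, w)
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)          # normalise per pixel
    return (weights * frames).sum(axis=0)

# Two tiny example 'frames': the dark one preserves highlights,
# the light one preserves shadows.
dark  = np.array([[0.05, 0.40]])
light = np.array([[0.45, 0.98]])
print(fuse_exposures([dark, light]))
```

In the fused result, each pixel leans towards whichever frame exposed it best – the essence of combining light and dark areas from different exposures.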

Beauty filter

In 2010, iOS 4.1 on the iPhone 4 introduced the ability to create an HDR photo from a triple exposure. With the iPhone XS, XS Max and XR, this becomes a full-time automatic process. Whenever the Camera app is open, your iPhone constantly stores bursts of multiple exposures. When you press the shutter button, the latest set is almost instantly combined using the A12 chip’s image signal processor (ISP) and Neural Engine artificial intelligence.

Previously, HDR used preset maths to prioritise whichever exposure contained the most detail. Smart HDR now considers the scene to figure out how things ‘should’ look, especially faces. Some people have complained it goes too far, looking like a smoothing ‘beauty’ effect. But any such impression is probably an unintended result of technical limitations. (You should find its intensity is at least reduced as of iOS 12.1.)

Constantly buffering multiple exposures means each exposure has to be very short. Think of sensor subpixels as buckets that catch photons falling on them. If you can wait and see whether 100 million photons hit a pixel or 101 million, you can accurately record its relative brightness. If you have to hurry up and take a reading based on just a few thousand, you’ll get a bigger margin of error, which manifests as digital noise.
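The photon-bucket analogy has simple maths behind it: photon arrival follows Poisson statistics, so counting N photons carries an uncertainty of about the square root of N, and the relative error shrinks as 1/√N. A short Python sketch (using the article’s example photon counts):

```python
import math

def relative_noise(photons: int) -> float:
    """Relative shot-noise error for a given photon count.

    Poisson statistics: standard deviation is sqrt(N), so the
    error relative to the signal is 1 / sqrt(N).
    """
    return 1.0 / math.sqrt(photons)

print(f"100 million photons: {relative_noise(100_000_000):.4%} error")
print(f"A few thousand photons: {relative_noise(5_000):.2%} error")
```

A hundred-million-photon reading is accurate to a hundredth of a percent; a few-thousand-photon reading is over a hundred times noisier, which is exactly the grain you see in short, dim exposures.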

Some degree of noise reduction is always used to smooth out these errors. The iPhone XS’s sensor has bigger pixels, reducing error, but only so much, especially if you don’t live in sunny Cupertino. Short exposures in low light will show some noise-reduction smoothing.

Avoiding light bounce

Another factor in the look of Smart HDR portraits is shine, or the lack of it. Sit for a studio portrait or TV shoot, and assistants insist on powdering your forehead, nose and chin. This is to avoid light bouncing off, creating specular highlights, which are seen as unflattering and draw attention away from your eyes. By locally exposing for detail, HDR reduces specular highlights, and this can give a similar impression to a subject being more heavily made-up or a picture being retouched. Similarly, glare and lens flare may be reduced.

Does that make pictures better or worse? There’s no right answer. Computational photography, like other uses of algorithms and AI, embodies the subjective preferences of its programmers.

[Image] A side effect of Smart HDR is to eliminate shine on faces, which can look slightly artificial.
