On his blog, Charles Apple discusses the use of high dynamic range (HDR) images in newspapers. The discussion raises questions about what constitutes excessive manipulation for photojournalism, and in particular what the impact of next-generation photo processing software will be on these definitions. I don’t know, in detail, the prevailing ethical standards of photojournalism, so in part I’m looking for the thoughts of experts on the topic.
Let’s begin by discussing HDR photography. The sensor in a digital camera is only capable of recording a certain range of scene brightness, from dark to light. A part of the scene that falls outside this range will be “clipped” to pure black or white, without detail. To get around this technical limitation, it’s possible to take multiple exposures of the scene, with different exposure settings (e.g. shutter speed). This will allow some of the exposures to record the bright parts of the scene, at the expense of the dark parts, and other exposures to do the opposite. These exposures may then be combined in software, to produce a “high dynamic range” image, whose range of scene intensities surpasses what the sensor can record directly.
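The merging step can be sketched in a few lines. This is a toy illustration, not any real HDR software: each unclipped sample estimates relative scene radiance as pixel value divided by exposure time, and the estimates are averaged. The pixel values and shutter speeds are made up for the example.

```python
# Sketch: recovering relative scene radiance from bracketed exposures.
# Assumes a linear sensor response, which real raw data approximates.

def merge_exposures(samples, clip_lo=5, clip_hi=250):
    """Estimate relative radiance from (pixel_value, exposure_time) pairs.

    Clipped samples carry no detail, so they are excluded.
    """
    usable = [(v, t) for v, t in samples if clip_lo < v < clip_hi]
    if not usable:
        # Every sample clipped: fall back to the first exposure's guess.
        v, t = samples[0]
        return v / t
    # Each unclipped sample estimates radiance as value / exposure time;
    # average the independent estimates.
    return sum(v / t for v, t in usable) / len(usable)

# A bright pixel: blown out at 1/30 s, recoverable from the shorter exposures.
bright_pixel = [(255, 1/30), (200, 1/125), (48, 1/500)]
print(merge_exposures(bright_pixel))  # a radiance far above what 0-255 can hold
```

The recovered radiance for this pixel lands around 24,500 in arbitrary units, well beyond the 0–255 range a single exposure can represent, which is the whole point of the merge.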
Apple quotes Matt Erickson on an HDR image:
Because of the technique itself – three exposures merged into one image – it obviously isn’t “photojournalism.”
So that gives us a starting point for standards. Because multiple images were used to create the final image, it is considered a photo illustration, not photojournalism. Fair enough.
That leads us to the next topic, one so closely associated with HDR that the two are often conflated: tone mapping. Just as the camera sensor has a limited dynamic range, our output media—monitors and prints—have a limited dynamic range too. Your printed newspaper cannot display a black as deep as Basement Cat’s soul, nor a white as bright as a thousand suns. It merely ranges from “pretty dark” to “dingy white.”
The HDR image that we produced from multiple exposures therefore contains a larger range of brightnesses than our output medium can display. So we would like a way to compress that range of brightness into something we can display or print. The easiest way would be to say “the brightest part of the scene is paper-white, the darkest is ink-black, and everything else is in between, proportional to its brightness.” Unfortunately, that would result in a very “flat,” low-contrast image.
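The failure of that naive proportional approach is easy to demonstrate. The toy sketch below (with made-up radiance values) linearly maps a scene containing one very bright highlight onto a 0–255 output range:

```python
# Sketch: naive linear tone mapping. The brightest radiance becomes
# paper-white (255), the darkest becomes ink-black (0), and everything
# else is scaled proportionally.

def linear_tonemap(radiances, out_max=255):
    lo, hi = min(radiances), max(radiances)
    return [round((r - lo) / (hi - lo) * out_max) for r in radiances]

# Four distinct shadow tones plus a single bright highlight.
scene = [0.02, 0.05, 0.08, 0.1, 5000.0]
print(linear_tonemap(scene))  # → [0, 0, 0, 0, 255]
```

One highlight consumes the entire output range and every distinct shadow tone collapses to the same black: all the detail the HDR merge worked to capture is thrown away again.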
Instead, the HDR is usually “tone mapped” with specialized software that uses algorithms based on human visual perception. The algorithms know how the eye responds to edges between objects, large uniform regions like the sky, and so forth. Through these sophisticated algorithms, the brightness of a pixel in the image is mapped to an output tone based not only on its original brightness, but its context in the larger photograph.
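The key property—that a pixel’s output tone depends on its surroundings, not just its own value—can be shown with a toy local operator. This is my own simplified sketch, not the algorithm any real product uses: each pixel is compressed toward the log-average of its 1-D neighborhood, so identical input radiances map to different outputs in different contexts.

```python
import math

def local_tonemap(row, radius=2, strength=0.7):
    """Toy 1-D local tone mapping: compress each pixel toward the
    log-average radiance of its neighborhood."""
    out = []
    for i, v in enumerate(row):
        nbhd = row[max(0, i - radius): i + radius + 1]
        # Log-average brightness of the surrounding region.
        local_avg = math.exp(sum(math.log(x + 1e-6) for x in nbhd) / len(nbhd))
        # Compress the pixel's ratio to its local average; strength < 1
        # flattens local contrast, mimicking perceptual adaptation.
        out.append(local_avg * (v / local_avg) ** strength)
    return out

# The same input radiance (0.5) in a dark neighborhood vs. a bright one:
in_shadow = local_tonemap([0.1, 0.1, 0.5, 0.1, 0.1])[2]
in_highlight = local_tonemap([100.0, 100.0, 0.5, 100.0, 100.0])[2]
print(in_shadow, in_highlight)  # two different outputs for the same input
```

The identical input value comes out differently in the two rows, which is exactly what “the brightness of a pixel is mapped based on its context” means in practice.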
This tone mapping can be subtle, producing a realistic-looking output image, or it can be exaggerated, which is what people often think of as the “HDR effect.” Erickson’s football scene falls into this exaggerated category.
Now, modern digital camera sensors have enough dynamic range that even a single exposure can exceed the dynamic range of our output media. So it’s possible to skip the step of combining multiple exposures into an HDR image and go straight to tone mapping. This obviates the specific concern of Erickson’s that I quoted above—the combination of multiple exposures. In some sense, the resulting image is more true to reality: the detail in the shadows and highlights is shown to the viewer instead of being clipped to featureless black or white. But I suspect that single-image tone mapping would still be considered excessive manipulation by photojournalistic standards. It’s non-traditional, and it certainly allows the possibility of very unnatural-looking results, along the lines of the football scene above.
This leads to my present concern. Adobe has released the first public beta of Lightroom 4. Lightroom is a general-purpose photo cataloging and editing tool, widely used by all kinds of photographers, from documentary to fine-art. I’m sure that at least some photojournalists use Lightroom.
Lightroom 4 has a new development process, PV2012, that revamps the “basic” image controls. These controls affect things like the brightness and contrast of an image. I would expect that photojournalistic ethics permit modest adjustment of brightness and contrast. My concern in the context of this discussion is the following statement on the new basic controls:
Yes, all six Basic controls (Exposure thru Blacks) are content aware. Image-adaptive, in other words.
What that means is that even the most basic controls (“Exposure,” the main brightness control in Lightroom 4, and “Contrast”) use algorithms very similar to the single-image tone mapping described above. The adjustment to a pixel’s brightness depends on what’s around it in the scene. If you rule out single-image tone mapping as a photojournalistic practice, then to be logically consistent you probably have to rule out any tonal adjustment in Lightroom 4, even if the intent is benign, the adjustment is mild, and the result is nearly identical to historically accepted adjustments.
I don’t actually hold the view that those adjustments are unethical, but I think it’s going to be a tough line to draw when the same slider can take an image from “slightly brighter” to “aggressively tone-mapped” depending on how far you slide it.