FEATURE: DEFINING IMAGE QUALITY

By Debra Kaufman

January 28, 2020

Reading Time: 4 Minutes

Determining image quality has become more complicated as technology has created more choices in resolution, dynamic range, color gamut and frame rate, along with an explosion of new digital cameras and new types of digital processing tools. Compression artifacts, grain, noise, down- and up-resing and other factors also figure into the equation. We also view images on many more screens, from IMAX to smartphones, and archive them for posterity.

Steve Yedlin


Insight Media president Chris Chinnock, visual effects pioneer/RFX president Ray Feeney, and cinematographer Steve Yedlin, ASC, are experts in imagery, and we asked them to explain the basics and the nuances. Feeney and Yedlin noted that moviemaking, versus reproducing reality, calls for different image characteristics. “Cinematographers point to projected film, depth of field, differentiation of focus and other storytelling techniques,” says Feeney. “Those who want to create an image that is like looking out the window need a different set of parameters.” He adds that photoreal imagery “makes sense with a lot of things that TV is used for — making you feel like you’re in the front row at a basketball game or viewing a debate in person. Making it photoreal is usually not in the service of storytelling but in service of taking you along.”

As a cinematographer, Yedlin reports his own surprise at finding that a popular 6K camera actually had “no more effective resolving powers” than another well-known 3K camera. “Other attributes matter besides resolution, such as noise,” he says. In his own research (which led to a presentation on the subject), he discovered that “once the threshold for effectively invisible pixels has been surpassed, increasing pixel count doesn’t have the perceptual importance we think it does.”
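
Where that threshold falls depends on screen size and viewing distance. As a rough illustration (not Yedlin's own method), the Python sketch below estimates angular pixel density for a hypothetical 65-inch TV; the commonly cited approximation that pixels blur together above roughly 60 pixels per degree stands in for the "effectively invisible" threshold.

```python
import math

def pixels_per_degree(h_pixels, screen_width_in, distance_in):
    """Angular pixel density for a viewer centered on the screen."""
    # Horizontal field of view subtended by the screen, in degrees.
    fov_deg = 2 * math.degrees(math.atan(screen_width_in / (2 * distance_in)))
    return h_pixels / fov_deg

# Commonly cited approximation of the 20/20 acuity limit: above roughly
# 60 pixels per degree, individual pixels become effectively invisible.
ACUITY_PPD = 60

# Hypothetical setup: a 65" 16:9 TV is about 56.7" wide.
for distance_ft in (10, 5):
    for name, h_pixels in (("4K", 3840), ("8K", 7680)):
        ppd = pixels_per_degree(h_pixels, 56.7, distance_ft * 12)
        status = "above" if ppd >= ACUITY_PPD else "below"
        print(f"{name} at {distance_ft} ft: {ppd:.0f} px/deg ({status} ~60 px/deg)")
```

At these living-room distances even 4K clears the threshold, which is the sense in which additional pixel count stops buying perceptual sharpness.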

Yedlin explains that scaling algorithms, sharpening, and contrast help determine how clarity and sharpness are perceived, pointing out that the smaller photosites of some high-resolution sensors can actually degrade the image by increasing noise. He adds that the highest pixel-count cameras currently on the market use image compression, unlike some models that, despite a lower K count, deliver more actual resolving power (or less, but still above the threshold). “Compression throws away information, which means that comparing photosite counts on cameras with different compression ratios and/or different compression algorithms is not meaningful. … Compression artifacts can be bigger than pixels. If it’s 5:1 compression, whatever the spec is, there’s only one-fifth of the information announced by that spec, no matter how clever the algorithm is.”
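
Yedlin's one-fifth arithmetic is easy to make concrete. The sketch below compares hypothetical cameras by dividing pixel count by compression ratio; the models, resolutions, and ratios are invented for illustration, and real codecs discard the least visible information first, so this is a back-of-the-envelope comparison, not a quality metric.

```python
def effective_megapixels(h, v, compression_ratio):
    """Pixel count divided by compression ratio: per Yedlin's point, a
    5:1 codec keeps roughly one-fifth of what the K-count implies."""
    return (h * v) / compression_ratio / 1e6

# Hypothetical cameras: (horizontal px, vertical px, compression ratio).
cameras = {
    "6K, 5:1 compressed":           (6144, 3240, 5.0),
    "4K, lightly compressed (2:1)": (4096, 2160, 2.0),
    "3K, uncompressed":             (3072, 1620, 1.0),
}

for name, (h, v, ratio) in cameras.items():
    print(f"{name}: ~{effective_megapixels(h, v, ratio):.1f} MP of effective information")
```

On these invented numbers, the uncompressed 3K camera carries more surviving information per frame than the heavily compressed 6K one, echoing Yedlin's surprise above.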

Ray Feeney


“For narrative storytelling, resolution is a red herring,” agrees Feeney, who notes that the choice of codec and compression scheme, as well as the distribution method, must also be taken into consideration. “There’s a certain-sized bucket, and people are trying to figure out how best to use it,” says Feeney. “If you double the resolution in both directions, a factor of four, you use more bandwidth, even with really good compression. If you go to HDR, then you’re using your additional bits to cover a larger range of image content. These are design trade-offs that people are working on right now.” High frame rate and variable frame rate, as well as HDR, he believes, “are more important than the impact of resolution, but displays will continue to evolve to higher resolutions for other reasons.”
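
Feeney's bucket arithmetic can be sketched directly. The following Python estimate of raw, pre-compression bandwidth uses illustrative frame sizes, bit depths, and frame rates to show how quickly resolution, HDR, and high frame rate multiply together.

```python
def raw_gbps(h, v, bits_per_sample, fps, samples_per_pixel=3):
    """Uncompressed video bandwidth in gigabits per second."""
    return h * v * samples_per_pixel * bits_per_sample * fps / 1e9

base   = raw_gbps(1920, 1080, 8, 24)    # HD, 8-bit, 24 fps
double = raw_gbps(3840, 2160, 8, 24)    # double both directions -> 4x pixels
hdr    = raw_gbps(3840, 2160, 12, 24)   # spend extra bits on dynamic range
hfr    = raw_gbps(3840, 2160, 12, 120)  # add high frame rate on top

print(f"HD 8-bit 24fps:   {base:.2f} Gb/s")
print(f"4K 8-bit 24fps:   {double:.2f} Gb/s (4x the pixels, 4x the bandwidth)")
print(f"4K 12-bit 24fps:  {hdr:.2f} Gb/s (HDR bit depth adds another 1.5x)")
print(f"4K 12-bit 120fps: {hfr:.2f} Gb/s (HFR multiplies it again)")
```

Each feature multiplies the others, which is why the design question is how to spend a fixed bucket rather than how to grow every axis at once.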

Displays are the bailiwick of Chinnock, whose company provides information and services for the cinema, broadcast, ProAV, consumer electronics and display industries. He looks at resolution from the point of view of visual acuity versus hyperacuity. Acuity is defined by the finest details—in the case of visual acuity, the smallest letters on a Snellen eye chart—that we can distinguish. But hyperacuity goes beyond that, with our ability to detect fine misalignments in edges and lines. Whereas acuity depends on the sensory elements of the retina, hyperacuity depends on information processing in the brain. “Even if two parallel black lines are off by a pixel or two, you can see that from 10 feet out,” says Chinnock. “On a 4K digital display, you see twice as much stair-stepping as on an 8K display, because you have twice the pixel density on the 8K display, which makes those steps finer and harder to see.”

Chris Chinnock


He refers to research done by Japanese broadcaster NHK that showed the impact of cycles per degree, a measure of how much detail reaches the eye per degree of vision. With 20/20 vision attainable at 60 cycles per degree, NHK had to get to 120 to 150 cycles per degree for the image to look indistinguishable from reality; an 8- to 10-foot viewing distance would require a 16K or 32K display to achieve this. “The key takeaway is that there is a lot more going on than simple acuity,” he says. The question of how many K is enough, he adds, is moot. “Display makers, compression, chipset, storage and interface speed companies have all been on a constant improvement path for 50 years,” he says. “It’s a whole ecosystem, now on a seven-year time frame. We may repeat it for 16K and it’s possible we’ll go to 32K.”
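
The geometry behind those numbers is straightforward: a display can reproduce at most one cycle of detail per two pixels, so cycles per degree follow from pixel count, screen width, and viewing distance. The sketch below runs that arithmetic for a hypothetical wall-sized screen; the dimensions are invented only to show why an 8- to 10-foot viewing distance on a large display pushes toward 16K and 32K.

```python
import math

def cycles_per_degree(h_pixels, screen_width_in, distance_in):
    """Peak spatial frequency a display can deliver: half its pixels per
    degree, since one cycle needs one light pixel plus one dark pixel."""
    fov_deg = 2 * math.degrees(math.atan(screen_width_in / (2 * distance_in)))
    return (h_pixels / fov_deg) / 2

# Hypothetical wall-sized screen, 120" wide, viewed from 9 ft.
WIDTH_IN, DISTANCE_IN = 120.0, 9 * 12

for name, h_pixels in (("4K", 3840), ("8K", 7680), ("16K", 15360), ("32K", 30720)):
    cpd = cycles_per_degree(h_pixels, WIDTH_IN, DISTANCE_IN)
    print(f"{name}: {cpd:.0f} cycles/degree")

# Only 16K and above reach NHK's 120-150 cycles-per-degree range here;
# 4K and 8K fall well short on a screen this large at this distance.
```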

Image quality will also be shaped by the evolution of display technology toward MicroLED. “That’s exactly what Light Field Lab is doing—a panelization approach that’s already started,” Chinnock says. “It’s expensive now, but all the trends are moving towards cost reduction. It won’t be long before we have 8K, HDR, wide color gamut and MicroLED systems.”

Yedlin points out that the widespread belief that a camera capture should be higher resolution than the final display medium is a myth. “Actually, the opposite is true,” he says. “Projectors and TVs show images that are photographed as well as generated [from computer imagery] and only the latter can have areas that are ultra-high frequency and ultra-high contrast at the same time. So it makes more sense for display devices to have higher pixel counts so they can faithfully resolve either type of imagery, while it makes more sense for camera sensors to balance K-count with noise for the highest quality image data.”
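
A toy way to see the asymmetry Yedlin describes: computer-generated imagery can place full-contrast detail at the display's Nyquist limit (alternating single pixels), while photographed imagery is band-limited by the lens and optical low-pass filter before it is ever sampled. In the sketch below, a Gaussian blur stands in very loosely for that optical path; it is an illustration, not a camera model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# CG can author a single-pixel checkerboard: maximum spatial frequency
# at maximum contrast, something a lens never delivers to a sensor.
cg = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)

# A photographed scene is band-limited before sampling (lens plus OLPF),
# modeled here very loosely as a Gaussian blur of the same pattern.
photo = gaussian_filter(cg, sigma=1.0, mode="wrap")

print(f"CG contrast at Nyquist:    {cg.max() - cg.min():.2f}")  # full swing
print(f"Photo contrast at Nyquist: {photo.max() - photo.min():.2f}")  # near zero
```

The generated pattern keeps full contrast at the finest detail a display can show, while the optically blurred version retains almost none, which is why displays benefit from out-resolving cameras rather than the reverse.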

With regard to future-proofing, he notes that our retinas will still resolve imagery the same way. “Unless we’re future-proofing against bionic eyes, if you can subdivide the world into fine-enough increments that it looks smooth, that’s what you’re going for,” he says. “You don’t say Charlie Chaplin didn’t future-proof his movies because he didn’t shoot in color. Current devices, though they can display color, are also capable of showing his movies just as he authored them.”