Posts Tagged ‘resolution’

Dithered Colour Correction for 4k and Beyond…

This is how digital colour correction works:

If you change the brightness of any part of the image, every selected pixel is affected in exactly the same way.
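As a minimal sketch of that uniform behaviour (the function name and the flat gain are my own illustration, not any grading system's API): every selected pixel gets the identical operation applied to it.

```python
# A uniform "brightness" operation: every selected pixel is scaled by the
# same gain, then clamped to an 8-bit range. No pixel is treated differently
# from its neighbours.
def uniform_brighten(pixels, gain):
    return [min(255, round(p * gain)) for p in pixels]

row = [10, 50, 100, 200]
print(uniform_brighten(row, 1.5))  # every pixel scaled identically
```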

In one of my previous rants, I proposed that resolution was more important than bit-depth… Since then, I’ve been wondering why, in practice, this never seems to hold. After some thought, I realised that the way digital colour correction works is fundamentally different from changing the exposure of film.

Changing the exposure of film, at a microscopic level, affects the image like this:

The individual grains are not affected uniformly: a percentage of the grains become exposed (or not), rather than every grain becoming a little more exposed. From a distance, the viewer sees the affected region as brighter. This is also why grain structures are visible.
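A toy model of that behaviour (my assumption for illustration, not a photochemical simulation): treat each grain as binary, and let "exposure" set the probability that a grain develops. Raising the exposure flips a larger fraction of grains; no individual grain gets "brighter".

```python
import random

# Each grain is binary: developed (1) or not (0). Exposure is the
# probability that any given grain develops.
def expose(n_grains, probability, rng):
    return [1 if rng.random() < probability else 0 for _ in range(n_grains)]

rng = random.Random(42)
patch = expose(100_000, 0.30, rng)     # a patch at 30% exposure
brighter = expose(100_000, 0.45, rng)  # raising exposure flips more grains

# Viewed from a distance, the mean density reads as brightness.
print(sum(patch) / len(patch), sum(brighter) / len(brighter))
```

The per-grain randomness is exactly what produces visible grain structure: two patches at the same exposure have the same average density but different individual grains.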

Until now, it didn’t make sense for colour-correction software to work this way: image resolutions were simply too low. At 4k and beyond, however, non-uniform (or dithered) colour correction may very well yield superior control over the colour-correction process.
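Here is a sketch of what a dithered correction might look like (hypothetical — this is not any grading system's actual algorithm): instead of adding the same offset to every pixel, each pixel is bumped with a probability proportional to the requested change, so the average brightness rises the way film exposure does — a fraction of sites change, not every site equally.

```python
import random

# Dithered brightening: each pixel takes a coarse step up with probability
# `lift`, or is left untouched. The *mean* rises smoothly even though no
# pixel receives a fractional adjustment.
def dithered_brighten(pixels, lift, rng):
    out = []
    for p in pixels:
        if rng.random() < lift:
            out.append(min(255, p + 64))  # this pixel takes the full step
        else:
            out.append(p)                 # this pixel is unchanged
    return out

rng = random.Random(7)
flat = [100] * 100_000
lifted = dithered_brighten(flat, 0.25, rng)

# Mean rises by roughly 0.25 * 64 = 16; individual pixels differ, which at
# 4k+ resolutions averages out visually, like grain.
print(sum(lifted) / len(lifted))
```

At low resolutions the per-pixel randomness would read as noise; at high resolutions it averages out below the threshold of perception, which is the whole argument for waiting until 4k.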

So the question I find myself asking is, why don’t any high-end colour grading systems offer this kind of functionality?

Posted: April 19th, 2008
Categories: Articles
Comments: No comments

Colour vs Resolution…

I’ve been reading a lot of material lately (most likely due to my current involvement with Red data) comparing different formats and devices to each other. The one claim that keeps cropping up is that resolution and colour sensitivity are completely independent, and have no bearing on each other.

This is, of course, bullshit.

To understand why, you need only look at the great staple of motion picture quality: 35mm film. It has an effective resolution of between 4k and 8k (depending on who you ask), with a total bit-depth between 24 and 48.

But this, of course, is also bullshit.

Film is not a digital medium. These measurements of its digital equivalence are merely a convenient representation. In other words, if you go with the idea that film has a resolution of 4k and a 16-bit-per-channel colour range, you probably won’t lose any quality. It’s basic Nyquist theory put to good use.

But physically (film being a physical medium, after all) it’s only 3 bits of colour: red, green and blue (or some combination of those). That’s it: no shades of red, nor yellow, nor burnt ochre.

But wait a minute, what about the glorious range of Technicolour I get to experience at the cinema? Well, that’s because film may only have 3 bits of colour, but it also has a ridiculously high resolution. But hang on a minute, I can measure the resolution of film with a simple chart, that can’t be right! No, because what you’re measuring there is the effective resolution of film, and that is determined by the grain structure. Even the notion of exposure is just a simplification of what’s really going on: the probability of a range of points on the film switching from 0 to 1.
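The resolution-for-bit-depth trade can be made concrete with a small sketch (my own illustration, under the toy assumption of binary grains): averaging a block of 256 one-bit grains down to a single value yields up to 257 distinct grey levels — roughly 8 bits of tone from a medium that only stores 0 or 1 per grain.

```python
import random

# Average many binary grains into one output value: spatial resolution is
# traded for tonal depth.
def downsample_block(grains):
    return sum(grains) / len(grains)

rng = random.Random(1)
# A 1-bit "filmed" block of 256 grains at roughly 50% exposure.
block = [1 if rng.random() < 0.5 else 0 for _ in range(256)]
level = downsample_block(block)
print(level)  # a fractional grey value, though every grain is 0 or 1
```

With 256 grains per block the possible averages are 0/256 through 256/256 — the spatial density of the grain is what carries the tonal information.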

So let’s get back to digital formats. With no grains to get in the way, just nasty rectangular pixels, we actually have a purer medium. So when people say that a recorded 4k image is as sharp as 35mm film but lacking in colour range, they’re missing the point: what this actually means is that although the bit-depth is several orders of magnitude higher than film’s, the resolution is nowhere near high enough.

35mm film makes for a great benchmark. Absolutely nothing beats it (except larger pieces of film). The 4k/48-bit model makes sense when trying to preserve its integrity in the digital realm, particularly as we’re then limited by what the digital display devices can output. But if we want to make comparisons that actually make sense in that digital realm, let’s do it properly.

Posted: March 9th, 2008
Categories: Opinion
Comments: No comments