News for March 2008

Photoshop’s Weakness…

I’m a big fan of Photoshop. Well, I was a fan of LivePicture first, before that went bust, and when Photoshop first became popular I much preferred Corel PhotoPaint, but whenever it was that they added multiple levels of undo (well, some roundabout implementation of undo, at least) I made the switch, and never looked back. I loved it when they added blending modes (something I think all grading systems could use), and then again with the introduction of adjustment layers. Finally it was turning into a tool that could work in a non-destructive manner.

Apparently there’s a beta version of Photoshop CS4 that’s been leaked… recently, but I’ve not read anything that suggests what’s going to be in the next version. Given that the last few releases have mainly focused on workflow improvements, it almost feels like there’s nothing more to be done with it.

Today I was doing some pre-viz with a colorist, who was explaining to me that he found it very frustrating to try and correct images with Photoshop’s toolset, and that it was much more intuitive to use Baselight even just for working with stills. Surely that can’t be right, I thought. Photoshop has been used, probably by millions of people, to accomplish all sorts of wonderful colour changes.

So I went off to try and replicate a Baselight-esque workflow in Photoshop. After a few hours, I realised that it can’t be done. Now, that’s not to say that you can do stuff in Baselight that you can’t do in Photoshop, but it absolutely means you can’t work the same way. For overall changes, there’s no difference between the two: you can use adjustment layers stacked on top of each other in Photoshop to allow non-destructive colour-correction. But secondaries are a different story. It boils down to this: Photoshop’s implementation of vectors is horrible.

Really, really horrible. In Baselight (as well as many other grading systems), you’d do something like this: create a shape/mask, add some softness, adjust the colour within the shape, then tweak, tweak, tweak. The closest analogue in Photoshop that I’ve found is to create an adjustment layer, then use the pen tool or one of the shape tools to create the shape(s). You can then go ahead and modify the adjustment layer to your liking.

[Image: Photoshop vector mask on an adjustment layer]

Because of the way Photoshop is designed, you can tweak this in a lot of ways that you probably couldn’t in a grading system. For instance, you can change the blending mode and opacity of the adjustment layer, effectively giving you control of the overall effect. You can paint on the adjustment layer too, easily combining vectors and pixel-based masks in a single correction.

But none of that changes the fact that Photoshop’s vector tools are really bad. I don’t just mean the way the user interface works (although even that is really awful, given the vast number of applications out there which make bezier curve editing effortless), but that they seem like an afterthought. For instance, there is apparently no way to alter the softness of the edge of a shape without rasterizing it first, at which point you can blur it or whatever to get the desired effect. However, there are good reasons that grading systems don’t work this way (aside from the fact we’re working with moving pictures, of course): it’s a destructive process, and it’s much slower (for the user).
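To make the distinction concrete, here’s a rough Python/numpy sketch of how a grading-style secondary keeps its softness as just another parameter, so the result is always recomputed from the original frame and nothing gets baked in. This is purely an illustration of the idea, not how Baselight or Photoshop are actually implemented, and the image, shape and values are all made up:

    # A non-destructive "secondary": the shape and its softness are parameters,
    # and the graded result is recomputed from the original every time, so
    # changing the softness later is free rather than a destructive blur.
    import numpy as np

    def soft_ellipse_mask(height, width, centre, radii, softness):
        # Return a 0..1 mask: 1 inside the ellipse, falling off over 'softness'.
        ys, xs = np.mgrid[0:height, 0:width]
        dist = np.sqrt(((xs - centre[0]) / radii[0]) ** 2 +
                       ((ys - centre[1]) / radii[1]) ** 2)
        return np.clip((1.0 + softness - dist) / max(softness, 1e-6), 0.0, 1.0)

    def apply_secondary(frame, mask, gain=1.0, offset=0.0):
        # Blend a simple gain/offset correction into the frame through the mask.
        corrected = np.clip(frame * gain + offset, 0.0, 1.0)
        m = mask[..., np.newaxis]          # broadcast over the colour channels
        return frame * (1.0 - m) + corrected * m

    # 'frame' stands in for a float RGB image in the 0..1 range.
    frame = np.random.rand(438, 700, 3)
    mask = soft_ellipse_mask(438, 700, centre=(350, 220), radii=(120, 90), softness=0.4)
    graded = apply_secondary(frame, mask, gain=1.2, offset=-0.05)
    # Tweaking the softness is just a re-render of the same correction:
    graded = apply_secondary(frame, soft_ellipse_mask(438, 700, (350, 220), (120, 90), 0.8),
                             gain=1.2, offset=-0.05)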

[Image: Lustre selective correction (lustre-25-Selective_large.jpg)]

It’s so clear to me now– Photoshop wants you to work with pixels, not shapes. I can understand that, after all, where you would draw a shape around a balloon in Baselight, in the ‘Shop you could just isolate the balloon onto its own layer, and then tweak that layer to your heart’s content, which theoretically will allow you to create much more accurate masks. You can save every mask you create as a pixel-based alpha channel, so you still have the ability to isolate it again if you want. But there’s the pitfall– if you do something to break the balloon layer, you have to go back to square one in a lot of cases. If you blur (or feather) a selection just a little too much, and don’t realise until it’s too late, then you have to start over (unless you had the foresight to save a copy of the layer just before you feathered it).

The vector tools are clearly there for other reasons, and are not meant for doing the sort of things they’re used for in grading systems, and that seems like a real shame, because in combination with the rest of the toolset, they could be very powerful. But then Photoshop’s not supposed to be a vector-based application. It’s not built to replicate the workflow of a Baselight system, and maybe it doesn’t need to be. But more than that, it seems that it’s just not built for speed.

Posted: March 26th, 2008
Categories: Tools
Tags:
Comments: No comments

Digital Downloads of Movies "Years Away"…

TV Predictions is reporting… a consensus amongst Hollywood studios that digital downloads of movies are not likely to replace Blu-rays and DVDs for some time. They cite the (un)usability of existing services as the main reason why. I wholeheartedly agree: the infrastructure is designed, almost intentionally, to be as confusing as possible, and the problem of DRM… only makes matters worse.

However, usability is something that can always be addressed. I was watching the movie The Boondock Saints recently, and though the movie is only 9 years old, I felt strangely amused to watch Willem Dafoe listening to a portable CD player. I was about to joke about the inferior technology of a bygone era, when I realised something: CDs are superior to MP3s in terms of audio quality. Both are digital formats, but the technical quality of a CD is much higher than that of a typical MP3. In fact, you’d have to go with something like FLAC compression to get CD-quality audio. MP3 just doesn’t cut it. When DVD-Audio was first proposed, lots of people thought it was going to revolutionise the music industry. It’s been maybe ten years since then, and I’ve still never met someone who bought a DVD-Audio disc.
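The gap isn’t subtle, either; a quick back-of-the-envelope calculation, using the standard CD format of 44.1kHz, 16-bit stereo:

    # Raw CD audio versus a typical MP3, in kilobits per second.
    cd_bitrate = 44100 * 16 * 2          # sample rate * bit depth * channels
    print(cd_bitrate / 1000.0)           # ~1411 kbit/s of uncompressed audio
    print(cd_bitrate / 1000.0 / 320)     # ~4.4x the data of even a 320 kbit/s MP3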

So what’s going on here? The average consumer just doesn’t care about quality… The audio quality of MP3s is good enough, and it’s the other advantages that win out. I switched to an all-digital movie system at my home some time ago, and I’ve never looked back. I convert all my DVDs to MPEG-4 (around 1GB per file) and stick them on a server. There’s a loss in quality, to be sure, but that’s nothing compared to having them all at the push of a button, complete with metadata scraped from each movie’s IMDB entry. I can cross-reference the years, actors, directors and so on for each film, rate them (or go with the IMDB rating), in exactly the same way as I can with my digital music collection. And it’s great.

Granted, I know what I’m doing in terms of getting this working. It’s complicated to set up, maybe even impossible for the average person. But I’m certain that it won’t always be this way. It used to be that disk space was the limiting factor, but you can get a 1TB disk drive for a couple of hundred dollars, and that’s enough to store around 200 DVDs without recompressing them. One of the complaints the Hollywood execs made was that the files took too long to download, which is true– you can’t stream a movie. However, that doesn’t stop people ordering or renting DVDs from Amazon, even though they are transmitted through the ultra-slow medium of the postal service.
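For what it’s worth, that 200-DVD figure is just the rough arithmetic for single-layer discs; dual-layer titles would bring it down a bit:

    # Rough storage arithmetic behind the "around 200 DVDs" figure.
    terabyte = 1000 ** 4      # a 1TB drive, in decimal bytes
    single_layer = 4.7e9      # single-layer DVD capacity in bytes
    dual_layer = 8.5e9        # dual-layer DVD capacity in bytes
    print(terabyte / single_layer)   # ~212 discs
    print(terabyte / dual_layer)     # ~117 discs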

I see Blu-ray as a stop-gap, nothing more. The studio execs need to step up and take the reins of digital downloads rather than cowering under their desks and hoping that it doesn’t happen until they can collect their pensions. Otherwise we’ll see another media industry get swallowed up by a computer company. How many music execs do you suppose are kicking themselves right now over iTunes?

Posted: March 15th, 2008
Categories: News
Tags:
Comments: No comments

Down-rez methods for 4k Red…

I haven’t seen much in the way of comparisons of the resizing filters available for Red footage, so yesterday I ran some quick tests. The “Redline” processor provides a choice of seven different filters for resizing clips: Bell, Mitchell, Lanczos3, Quadratic, Cubic-bspline, CatmulRom (sic), and Gauss. I took something fairly neutral we’d shot at 4k (4096×2048 to be exact) and down-rezzed it to something more feasible (2048×1024), to see how much sharpness was retained. I then decided that an exact 75% reduction in area is a bit too computer-friendly, so I also did a set down-rezzed to HD (letterboxed to 1920×1080).

There are a lot of available options for processing the clips; for the purposes of this test, I turned off sharpening and noise-reduction, set the detail to high, the ISO to 320, and applied a rec.709 gamma curve.
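For anyone who wants to run a similar comparison without Redline, the general shape of the test is easy to reproduce in Python with PIL. Its resamplers don’t map one-to-one onto Redline’s Bell/Mitchell/Lanczos3/etc. options, and the file names here are made up, but the workflow is the same:

    # Down-rez one frame with several different resampling filters for comparison.
    from PIL import Image

    src = Image.open("frame_4k.tif")             # hypothetical 4096x2048 export
    filters = {
        "nearest":  Image.NEAREST,
        "bilinear": Image.BILINEAR,
        "bicubic":  Image.BICUBIC,
        "lanczos":  Image.LANCZOS,
    }
    for name, resample in filters.items():
        half = src.resize((2048, 1024), resample=resample)
        half.save("half_res_%s.tif" % name)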

It’s possibly a little difficult to tell from this image, but the Lanczos and Catmull-Rom filters produced the sharpest results compared to the others. Although a difference matte between those two methods suggested there was a difference, I just couldn’t see it. I suspect that the difference in processing time between the two would also be negligible, even over a lot of frames, so I’ll probably just go with Lanczos when the time comes to actually prep the images for the online. What’s interesting is that the default filter used by the Red software is Mitchell, which is probably the most middle-of-the-road in terms of sharpness.
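If you want numbers rather than eyeballs, a difference matte and a crude sharpness measure take only a few lines. Again a sketch: the file names are hypothetical, and the variance of a Laplacian is just one simple proxy for sharpness:

    # Compare two down-rezzed candidates: difference matte plus a simple
    # sharpness proxy (variance of a 4-neighbour Laplacian).
    import numpy as np
    from PIL import Image

    def load(path):
        return np.asarray(Image.open(path).convert("L"), dtype=np.float64)

    def sharpness(img):
        lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
               + img[1:-1, :-2] + img[1:-1, 2:])
        return lap.var()

    a = load("half_res_lanczos.tif")
    b = load("half_res_catmullrom.tif")   # hypothetical Catmull-Rom render
    diff = np.abs(a - b)
    print("difference matte: max", diff.max(), "mean", diff.mean())
    print("sharpness:", sharpness(a), "vs", sharpness(b))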

As tests go, this one’s not bullet-proof. For example, I didn’t test to see how the gamma curve affects the process, so I have no idea whether using a different gamma curve affects the interpolation (this would probably depend upon the order in which the Red software performs its processing: if it interpolates first and then adjusts the gamma, then there would be no difference between curves). The other problem is that this really only provides half the story– until we get this into a grading suite and see how it looks at speed, it will be difficult to say for sure that one filter is perceptually better than another. I also can’t compare it to something similar shot in-camera at 2k, because we didn’t shoot anything at 2k (although the general buzz around the Red forums is that shooting 2k produces far inferior results, at least with the current firmware).
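The order genuinely could matter: any resize filter is ultimately a weighted average of pixels, and averaging gamma-encoded values is not the same as encoding an average. A two-pixel example shows the gap:

    # Averaging (what any resize filter does) before or after gamma encoding
    # gives different answers, so the processing order isn't a non-issue.
    import numpy as np

    linear = np.array([0.1, 0.9])      # two neighbouring linear pixel values
    gamma = 1.0 / 2.2

    print((linear ** gamma).mean())    # encode then average: ~0.65
    print(linear.mean() ** gamma)      # average then encode: ~0.73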

The original 4k frame, along with the converted DPX versions, as well as the layered Photoshop versions, can be downloaded from here… if anyone wants a closer look (please note all images are Copyright 2008 Entitled Productions).

Posted: March 14th, 2008
Categories: Articles
Tags:
Comments: No comments

Colour vs Resolution…

I’ve been reading a lot of stuff lately (most likely due to my current involvement with Red data) comparing different formats and devices to each other. The one claim that keeps cropping up is that resolution and colour sensitivity are completely independent, and have no bearing on each other.

This is of course, bullshit.

To understand why, you need only look at the great staple of motion picture quality: 35mm film. An effective resolution of between 4k and 8k (depending on who you ask) with a bit-depth between 24 and 48.

But this of course, is also bullshit.

Film is not a digital medium. These measurements of its digital equivalence are merely a convenient representation. Or in other words, if you go with the idea that film has a resolution of 4k and a 16-bit per channel colour range, you probably won’t lose any quality. It’s basic Nyquist theory put to good use.

But physically (film being a physical medium, after all) it’s only 3 bits of colour: red, green and blue (or some combination of those). That’s it: there are no shades of red, nor yellow, nor burnt ochre. But wait a minute, what about the glorious range of Technicolour I get to experience at the cinema? Well, that’s because film may only have 3 bits of colour, but it also has a ridiculously high resolution. But hang on a minute: I can measure the resolution of film with a simple chart, so that can’t be right! No, because what you’re measuring there is the effective resolution of film, and that is determined by the grain structure. Even the notion of exposure is just a simplification of what’s really going on: the probability of a range of points on the film switching from 0 to 1.
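You can see the same trade-off in a few lines of Python: start with a purely binary field of “grains”, average it down, and a smooth range of shades appears out of nowhere. A toy model, obviously, not a simulation of real film stock:

    # Trading spatial resolution for tonal range, in the spirit of film grain:
    # each "grain" is either exposed or not, yet a dense field of them averages
    # out to a continuous grey level.
    import numpy as np

    target = 0.35                                    # the grey level we want
    grains = np.random.rand(2048, 2048) < target     # 1-bit grains: on or off
    # Down-res by averaging 16x16 blocks of grains into single pixels.
    blocks = grains.reshape(128, 16, 128, 16).mean(axis=(1, 3))
    print(blocks.mean())             # ~0.35: the grey level comes straight back
    print(len(np.unique(blocks)))    # many distinct levels from a 1-bit source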

So let’s get back to digital formats. With no grains to get in the way, just nasty rectangular pixels, we actually have a purer medium. So when people say that a recorded 4k image is as sharp as 35mm film but lacking in colour range, they’re missing the point: in actual fact, although the bit-depth is several orders of magnitude higher than film’s, the resolution is nowhere near high enough.

35mm film makes for a great benchmark. Absolutely nothing beats it (except larger pieces of film). The 4k/48-bit model makes sense when trying to preserve its integrity in the digital realm, particularly as we’re then limited by what the digital display devices can output. But if we want to make comparisons that actually make sense in that digital realm, let’s do it properly.

Posted: March 9th, 2008
Categories: Opinion
Tags:
Comments: No comments

The Rise of Toxik…

The answer to my Combustion 2008 woes… may come from the release of Toxik 2008 (which shipped in October 2007). Few people have any real experience of the Toxik product line, and with good reason: it has always been pushed as a collaborative visual effects tool rather than a standalone one, and requires its own database server to be installed. As a result, adopting it means a conscious decision by the facility to invest in the infrastructure: great for start-ups, but not really that useful to anyone else.

Until now. The great news for 2008 is that Toxik can now be run as a standalone application. It runs on both Windows (32-bit with 64-bit support expected soon) and Linux (32-bit and 64-bit), although currently only NVIDIA Quadro graphics cards are supported (ATI support is in the works). The former Oracle database has been replaced by an XML file, so you lose all of the collaboration options, but keep everything else, like the gorgeous “Touch” user interface.

[Image: Toxik 2008 user interface]

The result of this is that as well as the node-based schematic view synonymous with applications such as Shake, you get a lot of workflow and production management tools that you’d only expect to see in editing systems, such as customisable “pick lists”, render and archive scripts, user-generated metadata (which can actually be used to generate slates), and the often overlooked lynchpin of visual effects creation: version control. There’s even a desktop mode, mimicking Flame’s approach to clip management. Of course these are things that many Shake users would scoff at, mainly because it’s someone else’s job to worry about mundane things like that, but then these are also the same Shake artists that run into problems when they have to work on someone else’s shot.

It uses a whole slew of tricks to optimise performance: large images are tiled to fit the current display, and RAM and disk caching are combined to get decent playback. Everything is open: the interface can be scripted, plug-ins can be created (there is support for openFX), and shots can be processed from a command-line. As you’d expect, it ships with lots and lots of nodes to play with (I’m not going to do a blow-by-blow account of those here though). In short, it seems that it will at least do everything Shake will do.

It also does something that nothing else does: integrates completely with Maya. In addition to loading the standard layered Maya renders, it can also read the Maya scene file, and automatically hook to things like the cameras, lights, groups and locators, using them in the Toxik composition. It will also create the initial composite of the layers based upon this information, which is just the way it should be (although not everyone is in the privileged position of designing the structure of the Maya files in addition to creating the compositing software to deal with it).

There are a couple of areas that still need work: there is no vector-based paint tool in the current version, just a raster-based one (they apparently decided to err on the side of speed rather than flexibility). The tracker is nothing to write home about, particularly after having borne witness to the toolset of the latest version of Lustre, but it’s certainly no worse than most of the other trackers out there. There’s no timeline editor, but the provided track view is probably more than adequate for most purposes. It’s also a little lacking in tools specific to video-based material, which will hopefully be addressed in a future release. Similarly, let’s hope we get to see a release for Mac users.

There’s also something of a learning curve to Toxik. While the Touch UI provides a very fast way to work once you’re used to it, it is different from conventional systems and doesn’t seem as intuitive to new users to begin with. Even trying to load footage into a composition can require a lot of leafing through the help system or user guide before you have the “Ah-ha” moment. I can’t put my finger on why exactly, but the UI just doesn’t feel as “solid” as Combustion’s. It reminds me a lot of when I first started using Maya after years of using 3D Studio, when I was overwhelmed by the sheer wealth of differences in the interface, from the hotbox, to the tool shelves, and the overall scriptability of it all.

[Image: Toxik 2008 screenshot]

Toxik 2008 is 2,500GBP per license and all render nodes are free. A subscription to receive updates and new nodes is an extra 470GBP per year.

It seems that Combustion is having a bit of an identity crisis at the moment. And given that Toxik is approaching its price-point whilst boasting superior capabilities, it’s not really that surprising. What I foresee happening is that either there will never be another Combustion release, or that it will undergo a bit of a facelift to target it at 3ds Max Design users, while Toxik meanwhile starts to get the recognition it deserves.

Posted: March 6th, 2008
Categories: Tools
Tags:
Comments: 1 comment