Digital Production, Part 5 – Pre-post

With the shoot complete and the bulk of the work done, it became a case of simply handing over all the rushes and notes to our editor and seeing what we’d got. Or so we thought. In actual fact, our fears about working with HDV were realised at this point, and a lot of technical wrangling (and no small amount of hair-pulling and frustration) ensued before we even got to start post-production. Read on to learn more about the phase we’re calling “pre-post”.

First off, let’s take a look at what we were trying to accomplish in post-production. On the wall of my office right now is a large poster on which I scribbled a theoretical workflow diagram. There’s no way I could reproduce it here (in fact, I have trouble deciphering parts of it now, but it all made sense at the time). However, I can try to summarise the bullet points:

  • Capture the source as a sequence of still images, while keeping the timecode
  • Generate slate frames based on information in our database
  • De-interlace all footage
  • Convert the 30fps NTSC footage to 25fps
  • Create proxies with burnt-in timecode for viewing and offline editing

Well, that seems simple enough. In reality, we accomplished all of the above, but it took 4 weeks. Given that that’s 4 weeks for around 8 hours of footage, that’s a very long time indeed. I should stress that most of this time was spent trying different ways to tackle specific problems, but even so, by the end the processing was taking 25 hours per hour of footage, which works out to around 200 hours of rendering for our 8 hours of rushes (admittedly we were using only off-the-shelf hardware, but still…).

Capture
The first obstacle to overcome was capturing the source footage. You’ll remember from the previous part that we had shot on two different formats: 1080i50 HDV and NTSC DVCAM. The DVCAM capture was a no-brainer, as pretty much any editing software can be used to capture DV footage. We opted for Sony Vegas 6, but switched off scene detection so we’d have a single file per reel. The HDV capture was much more problematic. There was no decent generic method for capturing HDV, so once more we used Sony Vegas, only to realise that for some reason the timecode was not captured along with the footage (we later found out that this was a limitation of the current version of the software). This of course gave us the hideously useless .m2t (MPEG-2 transport stream) files.

Going back to the DVCAM, we realised that the DVCAM footage lost sync with the timecode we had logged on-set at the start of each shot. I’m still unable to find a satisfactory cause for this: all the tapes we used in the camera were brand-new and were not rewound or played between shots. Furthermore, it’s difficult to prove that it’s a problem at all, as most people do not record timecode on set like we were doing. Strangely, batch-capturing specific timecode ranges corresponding to the logged shots worked without any problem. The only explanation we could come up with was that there was a very slight timecode break at the start of each take, and that, taken together, these served to push the entire reel out of sync.
So, we returned to Sony Vegas. What we had to do was recapture all the DVCAM material, this time using scene detection, to guarantee it was resetting the captured timecode correctly for each take. This approach worked, and upon inspection of the captured files we discovered a difference of between 0 and 7 frames between what we had logged and the recorded timecode. To remedy the problem, we built a timeline, positioning every shot where it should be according to our log, and exported it as a DV file. This gave us our synced, single file for each reel (with absolutely no loss of quality at this stage).
Now there was the problem of converting it all to stills. And there was also the issue of disk space: we’d allocated 0.5 TB of space to the project, and it looked like we were now going to need maybe 8 times that amount to store the images as uncompressed DPX files. Fortunately, at this stage there was no real reason to use DPX, so we opted for JPEGs instead, from which we could easily make the offline/viewing references. To accomplish this, we used Autodesk Combustion, which was mercifully able to read the m2t format (though we had to rename the files to .mpg for this to happen), and output the required stills. On the plus side, we could also use Combustion to burn in the timecodes at this stage.
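For the curious, here’s the back-of-the-envelope sum behind that “8 times” figure. The numbers are assumptions (1440×1080 HDV frames packed as 10-bit DPX, about 4 bytes per pixel), not measurements from our actual files:

    # Rough storage estimate for holding ~8 hours of rushes as uncompressed DPX.
    frame_bytes = 1440 * 1080 * 4    # 10-bit packed DPX: ~4 bytes/pixel, ~6.2 MB/frame
    frames = 8 * 3600 * 25           # 8 hours at 25fps = 720,000 frames
    total_tb = frame_bytes * frames / 1e12
    print(round(total_tb, 1))        # ~4.5 TB, roughly 8-9x our 0.5 TB allocation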

De-interlaced frame with BITC

Generate slate frames
This, in retrospect, was the easiest part of the whole process. From our database, we generated scripts that would use GraphicsMagick to apply specific text to a template frame, as sketched below. We generated a 5-frame slate at the start of each shot and a 250-frame slate at the start of each reel. This was easy; however, merging the generated frames with the output frames was tricky, as we couldn’t find a way to automatically back up the frames we were replacing in the event a slate was mistakenly generated in the middle of a shot. We had to just overwrite the frames and hope for the best. Luckily, there were no problems.
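The scripts themselves were nothing exotic. A minimal sketch of the idea in Python (the file names, font, text layout and slate wording here are illustrative, not our actual production values):

    import shutil
    import subprocess

    def make_slate(template, text, out_pattern, n_frames):
        """Draw shot information onto a template frame with GraphicsMagick,
        then duplicate the result for the length of the slate."""
        first = out_pattern % 0
        subprocess.run([
            "gm", "convert", template,
            "-font", "Helvetica", "-pointsize", "48", "-fill", "white",
            "-draw", "text 100,200 '%s'" % text,
            first,
        ], check=True)
        # Copy the rendered slate for the remaining frames.
        for i in range(1, n_frames):
            shutil.copy(first, out_pattern % i)

    # 5-frame shot slate; reel slates used the same routine with n_frames=250.
    # (Assumes the output directory already exists.)
    make_slate("slate_template.jpg", "Reel 03 / Shot 42 / Take 2",
               "reel03/frame.%07d.jpg", 5)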

De-interlacing
We actually did the de-interlacing at the same time as rendering the source frames, using Combustion’s built-in de-interlace operator. In retrospect this may have been unnecessary, as we’d have to de-interlace again from the source later on to pull frames for the online.

Frame-rate conversion
This was done in Combustion, prior to adding the slates. There was nothing particularly tough about this, only that we had to update the timecodes logged in the database to have a base of 25fps. This was fairly simple: as the footage was non-drop-frame, the hour:minute:second components of the timecode were unchanged, which meant the new frame number component could be found using:

new_frame_number = (original_frame_number / original_frame_rate) × new_frame_rate

(The result would have to be rounded, of course.)
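In script form, the update looks something like this (a sketch; the function name and timecode handling are mine, not lifted from our actual database scripts):

    def rebase_timecode(tc, old_rate=30, new_rate=25):
        """Rebase a non-drop-frame HH:MM:SS:FF timecode to a new frame rate.
        The hour:minute:second components are unchanged; only the frame
        counter is rescaled and rounded."""
        h, m, s, f = (int(x) for x in tc.split(":"))
        new_f = round(f / old_rate * new_rate)
        return "%02d:%02d:%02d:%02d" % (h, m, s, new_f)

    print(rebase_timecode("01:23:45:29"))  # -> 01:23:45:24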

Create proxies
Unbelievably, this proved to be one of the toughest stages, for the following reason: there is almost no way to convert a JPEG sequence to a movie format such as QuickTime.

That’s a little unfair, actually. Avid systems will do it, but only if the sequence is less than 8,000 frames (we learnt that the hard way). Cleaner XL 1.5 is supposed to be able to do this, but we kept getting errors. Popwire on the Mac doesn’t seem to be able to load in any kind of image sequence. The answer was to load everything into Sony Vegas (apparently the only system we had available that could read in a large number of JPEGs), combine it with the audio stream, and then render out an MJPEG file. We could then bring that file into either Popwire or Cleaner XL to get the output formats for viewing and offline editing. For the first time, we were able to watch the rushes with timecode and audio, and it was at this stage we noticed that there was some dropout. Because of the long-GOP MPEG-2 compression of the m2t format, this dropout effectively wrecked about 1 minute’s worth of footage in total. In our case it didn’t really occur over anything crucial, but it could have been devastating. Definitely makes a case for dual-recording onto two tapes at a time.

Online trial run
With this done, we took the opportunity to try out the online process. Online was to be done using Assimilate’s Scratch, and we cut together a trailer to see how it would all work. Our first issue was the file format to use. Scratch would accept both DPX and JPEG sequences, and using JPEGs would save a lot of disk space. The question I needed to answer was whether there would be a significant difference in quality between the two formats, bearing in mind the source was compressed using MPEG-2. Would the additional JPEG layer really make a difference to the quality? The way to find out was to output both file formats, and then do a difference matte between the two.

The original JPEG frame from MPEG-2 source

Difference matte between JPEG and DPX versions
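A difference matte like the one above can be produced with GraphicsMagick, which we already had on hand for the slates. A minimal sketch (illustrative filenames, and not necessarily the exact commands we ran):

    import subprocess

    # Composite the JPEG and DPX renders of the same frame with the
    # "difference" operator; anything non-black is compression error.
    subprocess.run([
        "gm", "composite", "-compose", "difference",
        "frame.0001.jpg", "frame.0001.dpx", "diff.0001.tif",
    ], check=True)

    # gm compare gives a numeric error reading without eyeballing a matte.
    subprocess.run([
        "gm", "compare", "-metric", "RMSE",
        "frame.0001.jpg", "frame.0001.dpx",
    ], check=True)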

And lo and behold, there was a significant difference, which meant that we’d have to render out DPX files, which in turn meant we’d have to render out only the files we needed. In the case of the trailer, this meant going through the EDL line by line and rendering out each shot (with handles in the case of shots with motion effects or opticals), as sketched below. Loading the footage and conforming in Scratch worked perfectly, and we were able to complete the online of the trailer in about 2 hours. There’ll be more on Scratch in the next part of this series.
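Working out the frame range to render for each EDL event is simple arithmetic. A sketch of what that looks like (assuming CMX-style exclusive out points and a fixed handle length; the function names are mine):

    def tc_to_frames(tc, fps=25):
        """Convert an HH:MM:SS:FF timecode to an absolute frame count."""
        h, m, s, f = (int(x) for x in tc.split(":"))
        return ((h * 60 + m) * 60 + s) * fps + f

    def render_range(src_in, src_out, handles=0, fps=25):
        """Inclusive frame range to render for one EDL event, padded
        with handles on each side for motion effects or opticals."""
        return (tc_to_frames(src_in, fps) - handles,
                tc_to_frames(src_out, fps) - 1 + handles)

    # A shot with a motion effect, padded by 12-frame handles:
    print(render_range("01:02:10:05", "01:02:14:20", handles=12))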

Watch the finished trailer online…

The next part will focus on post-production.

Posted: June 12th, 2006
Categories: Articles
Comment from JustinT - 7/5/2006 at 1:59 am

I know it sounds obvious, but did you try QT Pro for converting a jpg sequence? I’m sure QuickTime Pro allows you to load a jpg image sequence into it. Did you try it and find it couldn’t handle it?

Comment from Jack - 7/10/2006 at 5:04 pm

Actually no, I did not. I didn’t even consider it because I got burned trying something similar using QT pro once before. I’ll take another look at it and get back to you.

Comment from Jack - 7/13/2006 at 9:38 am

Well, it turns out that, as I suspected, QuickTime Pro would crash at some point every time I tried loading the sequence (at around 20,000 frames or so).
