Posts Tagged ‘digital cinema’

Introducing Synaesthesia…

Synaesthesia is the new software product by Surreal Road. It’s been in development for around four years now, and is almost at a point where it is production-ready.

But what is it?

Having worked on all sorts of film and TV productions in different capacities (and of greatly varying budgets), it has often amazed me how “disconnected” every role seems. This is especially true in areas like post-production, where people employed to enhance or otherwise change particular shots do so without any knowledge of the history of those shots. It might be possible to find out the camera and lighting setup used for a particular shot in some cases, but what about the intent behind that setup? What was the cinematographer aiming for, and how can I better enhance that? The more usual practice is (at best) to attempt to reverse-engineer a shot in order to understand it, or (at worst) to change things haphazardly until something looks good.

This is a problem I’ve encountered on almost every production, and in part it’s unavoidable. The reality is that just as the writers are often left outside the gates of production, so too are the production crew long gone by the time post begins. This also becomes a practical and logistical problem. Where is a particular reel of film? What was the time and date of a particular shot? On a very organised production, the editor would likely be armed with most of this information, but in all other cases, there is simply no one around to ask.

I look at software created for the visual effects industry, and it is staggering: VFX tools are advanced to the point where it’s possible to quickly create shots that are indistinguishable from reality. But when it comes to the actual production process, we’re in a technological drought. Even popular writing software such as Final Draft, with years of industry experience and development behind it, is only slightly more usable than TextEdit. What was I supposed to use in my capacity as data manager on various productions to stay on top of everything? Excel?

The solution, of course, is that people (those computer-savvy enough, at least) tend to cobble together some sort of database (usually in the ubiquitous FileMaker Pro) which serves the immediate needs of the production. Much of the time this works out rather well: the production ends up with a bespoke system that covers most of the bases, something “good enough”. But what about those people who haven’t the time or the resources to create something from scratch, or those who just want to hit the ground running? Well, you are who Synaesthesia was designed for.

At its core, Synaesthesia is about keeping track of things about a production, from start to finish. Here’s a typical scenario:

  1. You have a production. You add notes, storyboards, descriptions of characters, of sets, all to get a sense of what it’s about.
  2. At some point you have a screenplay. You import that and it links all the scenes with sets and characters you’ve previously created, and adds anything that’s missing.
  3. You refine the script, importing new versions as you go along, further fleshing out what you want to shoot and so on.
  4. You create a database of the people and equipment you’re going to need, and assign them to different parts of the production.
  5. You start shooting. You log each take as it happens, along with notes: whether the take was good, what was recorded, any last-minute script updates.
  6. You import data directly from digital footage (such as RED camera footage) in order to accurately log timecodes and shooting parameters.
  7. You start editing, having access to all your previous notes for each clip of footage that was shot. You can import sequences from an editing system and have Synaesthesia tell you which shot is used where. You can make changes to the edit from within Synaesthesia, and save those back to your editing system.
  8. You can designate certain shots as needing effects work, and update those shots as new effects versions are completed.
  9. Finally you can archive all the reels of footage, noting their locations, in case they’re ever needed again.
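
To give a flavour of step 2, standard screenplay scene headings already carry the set names that need linking up. Here’s a minimal sketch of that kind of matching — my own illustration, not Synaesthesia’s actual import logic; the function name and the heading pattern it accepts are assumptions:

```python
import re

def scene_sets(screenplay_text):
    """Pull set names from scene headings like 'INT. JUNGLE CLEARING - DAY',
    so they can be matched against sets already in the database.
    (Only the common INT./EXT. ... - DAY/NIGHT form is handled here.)"""
    sets = set()
    for line in screenplay_text.splitlines():
        m = re.match(r"^(?:INT|EXT)[./]\s*(.+?)\s*-\s*(?:DAY|NIGHT)",
                     line.strip())
        if m:
            sets.add(m.group(1))
    return sets
```

Anything the parser finds that isn’t already in the database would then be added, which is essentially what the import step above describes.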

That’s quite a broad overview, and it assumes you’re going to use Synaesthesia from start to finish. But perhaps the best part of it is that you don’t have to. Maybe you’re only concerned with pre-production, and just want a place to keep storyboards, concept art, and screenplay versions organised? Maybe you just want to log continuity during a shoot? Or maybe you just want to tweak a couple of edits? Well then, Synaesthesia can help you.

It’s probably also helpful to mention what Synaesthesia (at least, in its current form) isn’t for:

  • It’s not for budgeting or scheduling
  • It’s not a replacement for software such as Final Draft
  • It’s not a replacement for systems such as Final Cut Studio
  • It’s not a server-based system (it’s not possible for multiple people to make changes to the data at the same time)

A more detailed list of features is available here. As I’ve said, Synaesthesia isn’t quite finished yet; its capabilities are still being worked out. But there are several key principles that we’ll always try to adhere to:

  • It will be simple to use
  • It will integrate with software you already use
  • It will give you the information you need

But more than anything else, I want it to be for whatever you (the user) need. With that in mind, we will be inviting people to try out pre-release versions in order to tell us what you like, what you don’t, and what’s missing. You can sign up for an invitation here.

Fix It In Post available for pre-order…

My latest book, “Fix It In Post”, is available for pre-order now on Amazon.

Thanks to everyone who let me pick their brains over the course of the last few months.

The blurb:

“Finally!  A well-written software agnostic guide to fixing common problems in post ranging from shaky camera to film look!”

—Jerry Hofmann, Apple Certified Trainer; FCP Forum Leader, Creative Cow; Owner, JLH Productions

Fix It In Post provides an array of concise solutions to the wide variety of problems encountered in the post process. With an application-agnostic approach, it gives proven, step-by-step methods to solving the most frequent postproduction problems. Also included is access to a free companion website, featuring application-specific resolutions to the problems presented, with fixes for working in Apple’s Final Cut Studio suite, Avid’s Media Composer, Adobe Premiere Pro, as well as other applications.

Solutions are provided for common audio, video, digital, editorial, color, timing and compositing problems, such as, but not limited to:
* automated dialogue recording, adjusting sync, and creating surround sound
* turning SD into HD (and vice-versa), and restoring damaged film and video
* removing duplicate frames, reducing noise, and anti-aliasing
* maintaining continuity, creating customized transitions, and troubleshooting timecodes
* removing vignettes, color casts, and lens flare
* speeding shots up, slowing shots down, and getting great-looking timelapse shots
* turning day into night, replacing skies and logos, and changing camera motion

Fix It in Post: Solutions for Postproduction Problems

Is holographic storage the way forward?…

Last week I got into a discussion with someone at NBC Universal about archiving. “We reckon the solution is holographic storage,” they said. They went on to say that such systems have been in development by companies such as InPhase Technologies… for around 7 years now, and that the PhDs who came up with the idea reckon it’s good for around 50 years.

Well, I’ve heard holographic storage mentioned a few times, but I remain skeptical that this is the right way to go for now. The obvious problem is that it’s unproven. I take issue with the prediction that it’s good for 50 years when it’s only been in development for 7. I have had DTF2 tapes develop faults within a 6-month period, and countless disk drives die in even less time. It’s for the same reason that we don’t use LTO4 technology here at Surreal Road yet: it looks good on paper, but it’s not yet as proven as LTO3.

Even so, let’s assume it lives up to the hype. What you essentially have is an investment in a particular product. If the company that manufactures the readers/writers or the company that manufactures the media (or both) goes bust, you’re left with something useless. And as far as I’m aware, holographic storage is not a particularly lucrative business right now, which adds a huge risk to the investment. On top of this, the technology isn’t exactly widespread. You couldn’t, for example, archive to a bunch of holographic disks and then send them off to someone else to restore at a later date.

Aside from all this, there is a larger issue lurking under the surface: no-one is particularly sure what data to archive anyway (in the film/video world at least). Right now, it seems that the digital cinema master is the best bet, as it is the format least likely to change. But what of non-D-Cinema productions? For instance, if your final output is DVCAM, should you archive the DVCAM AVI or QuickTime files?

Personally, I always convert everything to still-image sequences and save the audio off separately. This minimizes the impact of any data corruption, and allows quick access to specific portions of the production (if you only need to restore a specific shot later on, for instance). It goes without saying that I also aim to create two copies of everything and keep one off-site if possible. Using an image format such as DPX means that it should be readable by at least some software in 10 years’ time. I have also anticipated the need to do spot-checks on the data integrity every year, and am ready to transcode everything completely or copy it to new media at some unspecified point in the future. Back in the ’90s I was archiving to CD-R (and slightly later to DVD-R), until it got to a stage (around 4-5 years later) when the discs were starting to become unreadable (despite being kept in ideal conditions). At that point I transferred everything to a new format (at the time I was actually using a nifty little system to back up raw data to DV tapes via FireWire), and have repeated this a couple of times since. Needless to say, I still have data hanging around that is 15 years old. I now rely on Internet-based storage almost exclusively for everything except large files (but that’s a discussion for another article).
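
The yearly spot-checks can be largely automated with checksums. Here’s a minimal sketch — the manifest layout and function names are my own, not part of any standard archiving workflow:

```python
import hashlib
from pathlib import Path

def build_manifest(archive_dir, manifest="checksums.txt"):
    """Record a SHA-256 checksum for every DPX frame in the archive."""
    with open(Path(archive_dir) / manifest, "w") as out:
        for f in sorted(Path(archive_dir).rglob("*.dpx")):
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            out.write(f"{digest}  {f.relative_to(archive_dir)}\n")

def spot_check(archive_dir, manifest="checksums.txt"):
    """Re-hash each file and return the paths that are missing
    or no longer match their recorded checksum."""
    bad = []
    for line in (Path(archive_dir) / manifest).read_text().splitlines():
        digest, rel = line.split("  ", 1)
        f = Path(archive_dir) / rel
        if not f.exists() or hashlib.sha256(f.read_bytes()).hexdigest() != digest:
            bad.append(rel)
    return bad
```

Run `build_manifest` when the archive is written, and `spot_check` on each yearly pass; any path it returns is a candidate for restoring from the second copy.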

Another problem is metadata. There’s no agreed specification for many types of metadata (at least, not yet): by this I mean things like the title of a project, the respective rights to the images, and so on. This isn’t a huge problem; you can pretty much get away with saving any relevant details in a text file or Excel spreadsheet (although notice how frequently Microsoft changes the Excel and Word document formats; will they still be readable 10 years from now?), but it is something that should be standardised. There is also the issue of other metadata, such as project files and software settings. Final Cut Pro XML is absolutely the right way to go in this regard, provided that you are using FCP of course. And even then, the project data is only really useful if you back up all the source data along with it, and let’s face it, that can often seem like a waste of time.
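
In the absence of an agreed specification, the plain-text approach might look like this — a hypothetical sketch, with arbitrary key names; the point is simply that one readable line per field will outlive any proprietary document format:

```python
def write_sidecar(path, metadata):
    """Write project metadata as plain UTF-8 text, one 'key: value'
    per line, so it stays human-readable regardless of how
    application file formats change over the years."""
    with open(path, "w", encoding="utf-8") as f:
        for key, value in metadata.items():
            f.write(f"{key}: {value}\n")
```

A sidecar like this can sit alongside each archived reel, and needs nothing more than a text editor to read back a decade later.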

Ultimately, holographic storage may provide a decent long-term archive medium. But without a robust, long-term data strategy to support it, what is the real benefit?

Building Earth, One Frame at a Time…

At first glance, it doesn’t seem particularly ambitious: take some wildlife documentary footage and cut it together into a film. That was the idea of “Earth…“, the BBC’s film version of their highly acclaimed “Planet Earth” TV series. But look at the details and things become much more complex. The series had 4,000 days’ worth of footage to draw upon, across every format in use today, from DVCAM through to Super 35mm film. Much of it was processed under less-than-ideal conditions, and as such was missing vital metadata such as timecode, sync points and so on. Pretty much all of the audio had to be created from scratch.

earth © BBC Worldwide Ltd 2007

My involvement was limited to the picture side of things, so I’ll be focussing on that for this article. More than anything else, what was unique about this project was the production’s unwavering commitment to quality. By the time we were done, every pixel of every second of the film had been scrutinized. It’s probably the first thing I’ve worked on since Band of Brothers that completely exploited the possibilities of the digital intermediate process. It mixed different kinds of media, and each shot had multiple versions right up until the end. But I’m getting ahead of myself slightly. In the beginning there was just a QuickTime offline reference, some EDLs, a stack of HDCAM tapes and around 4TB of storage (we had a lot of technical problems with our conforming/grading system, so I’ll spare the parent company their blushes and not reveal which system we used). The first thing to be done was to capture what we could from the tapes, ensuring that all the video headroom was captured as well. That was a fairly painless process, as the same tapes had been used to build the offline edit. There were a couple of sticking points, oddities such as duplicate tapes with different timecodes, but nothing particularly out of the ordinary. We also had to massage the data a little at this stage to get it to adhere to a project/source/reel/frame.dpx structure (and in retrospect, we were very glad that we did).


Then things started to get a little tougher. Much of the film material had no correlation to its offline equivalent, and so the scanning submissions were put together by eyeballing shots on the rushes tapes. Then, when the scanned data was supplied, it had to be eye-matched into the timeline against the offline. There were also things such as DVCPro varispeed footage, which had to undergo an elaborate process devised by HD Consultant Jonathan Smiles to preserve the integrity of each frame and get it to look right, ultimately resulting in a set of frames that also had to be matched by eye. Everything was ingested in its native form and then processed in its uncompressed digital form. And, as is the norm in the film world, shots arrived on a very irregular basis. As many of these shots did not have specific reel numbers, I assigned them unique ones; this would also ensure that the arbitrary timecodes they now had became less significant. The best way to do that was to use the event numbers from the EDL. In actual fact, at this stage we had a sort of master spreadsheet for the production (this was just prior to the advent of things such as Google Docs & Spreadsheets, unfortunately, so it had to be shared by memory stick rather than synchronised online), so this method made it easy to cross-reference the shots with all of their metadata.
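
Pulling event numbers out of a CMX3600-style EDL is simple enough to script. A minimal sketch — my own illustration rather than what we actually used on the production; the function name is an assumption:

```python
import re

def edl_event_reels(edl_text):
    """Map each event number in a CMX3600-style EDL to its source reel,
    so the event numbers can double as unique reel identifiers."""
    events = {}
    for line in edl_text.splitlines():
        # Event lines begin with a three-digit event number, e.g.
        # "001  TAPE01  V  C  01:00:00:00 01:00:05:00 ..."
        m = re.match(r"^(\d{3})\s+(\S+)\s+", line)
        if m:
            events[m.group(1)] = m.group(2)
    return events
```

Because event numbers are unique within an EDL, they make stable identifiers for shots whose timecodes are otherwise arbitrary.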


At this point the conformed timeline was almost complete (around two weeks into the digital intermediate process); then everything stepped up a gear, and we left the typical DI workflow behind. For example, our film output resolution was fixed at 2048×1242, but we now had everything from PAL SD resolution through to 4K film scans. Normally the grading system handles all of this for you, but as the production team was well aware, different scaling algorithms affect different shots differently. So a decision was made to scale everything prior to grading it. We used a variety of scaling processes depending on the content (some of the techniques we used will be covered in future articles), generating a new set of data each time. We needed this process to be non-destructive; that is, we had to be sure we could go back to the original shot if we discovered any artefacts later on. So I made codes for each of the resize methods, and appended these to the original reel numbers to generate the new reel names. So S248 would be reel (shot) 248 scaled using a Shake method. Each new reel was then loaded into the conform timeline on a new track above the original. This naming convention proved to be very robust, even right at the end when everything was very complicated, and even at that stage there were some shots with two or three different versions kept in the conform for comparison purposes. Colourist Luke Rainey was able to switch between the different versions of a shot very quickly and approve (or not) the scaling.
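
The convention itself fits in a few lines of code. A sketch — only the Shake code “S” comes from the article; the other method codes are invented for illustration:

```python
# Hypothetical table of one-letter resize-method codes. "S" (Shake)
# is from the article; the rest are invented for illustration.
RESIZE_CODES = {
    "shake": "S",
    "photoshop": "P",
    "hardware": "H",
}

def versioned_reel(reel, method):
    """Derive a new reel name from the original reel number and the
    resize method, e.g. reel 248 scaled in Shake becomes 'S248'."""
    return f"{RESIZE_CODES[method]}{reel}"
```

Keeping the original reel number visible inside the derived name is what made it easy to trace any scaled version back to its source.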


The same tack was taken with things like de-interlacing (we used the Nattress plug-ins in conjunction with Final Cut for the bulk of these) as well as things like time-lapse sequences (many of which were actually done by director Mark Linfield on a Mac in the back room). Soon we had an average of two or three versions of each shot in the timeline, and I was starting to realise we’d seriously underestimated the amount of disk space we needed for the project. Without any sort of SAN storage, what we’d done was set up two independent systems, both running the grading software, and both with identical copies of the source data. We’d been synchronising the timelines between the two using EDLs, but at this stage that became far too complicated. We now had five separate projects (one for each of the output reels; this was done for performance reasons), each with 4-8 timeline tracks. Since Luke would only be working on one reel at a time, we decided a far better option would be to move the actual project files back and forth instead, and actually this process worked out rather well.


The grading process was done using a Barco projector. What was interesting was that the primary grade was actually done in 709 (HD) colour space, rather than P3 (film), even though the primary output would be film. The grade would be done as if it were an HD project, and then the result would go through a 709-to-P3 transform and be tweaked using a LUT. This decision was made by the production after weeks of testing, and produced very accurate results. There was some sacrifice of dynamic range, but on the other hand, the film version truly looks identical to the HD version (which itself looks stunning). It also meant that the conversion to digital cinema (one of our primary output formats) would be very accurate. The entire colour calibration process was overseen by Post-Production Producer Jon Thompson.


One of the unusual things about this film was that it had no visual effects, and yet there were several effects companies involved, with which we were bouncing shots backwards and forwards. Most of what they were doing was things like reducing noise or stabilizing shots, but even here we found it necessary to hold on to many of the versions. So by the time we were nearing the end of the grade (and had added a lot more disk space), we had an average of eight versions of every shot: that’s 8 frames of source material for every frame of output (and that’s not including handle frames either). There were now more source reels than there were cuts in the film, so it was just as well that the naming conventions we’d established early on were still holding up. With the grade pretty much in the bag, Luke turned his attention to adding grain to the video material, to improve the continuity between video- and film-sourced shots. Much of this was now tracked in Surreal Road’s proprietary database (more on that soon, I promise), as the master spreadsheet had reached mind-boggling complexity by this point. This also allowed us to track the whereabouts of physical assets, which was useful, because missing shots and last-minute recuts meant we had a big stack of the BBC’s tapes to look after.


Output was fairly straightforward; we had some bizarre render errors and caching issues that I won’t bore you with here, but nothing really severe. We output to one 2TB removable drive for each of the film, HD and digital cinema versions, and then, to be really safe, copied all of it back onto the grading system to QC it before giving the OK for it to be printed to film. And, for a nice change, we sent everything to be filmed out in one go, rather than drip-feeding it in reels. This was May 2007.


Since then, there have been more changes and recuts. At the time of writing, there are no fewer than three distinct cuts of the film (not including regional differences), each of which exists in three formats (digital cinema, film, and HD video). In fact, the version that hits UK cinemas today is one I’ve not actually seen. The recuts mostly involve rearrangement of the existing material, and so were reconformed directly from the output data, rather than from the source data, which has made the process significantly less complicated.

UPDATE: Jon Thompson provides more information about the digital cinema mastering process:

“The D-cinema version was made by Martin Greenwood who wrote a whole new set of algorithms (which now form part of the Yo-Yo system from Pandora). The D-cinema version is 1998 by 1080 pixels, which gives an exact 1.85 ratio.”

And more on the film recording process:

“Our output to film used Cinesite’s “Super-2K” method, so everything was done at 2048 by 1242 pixels using a 1.66 ratio, giving us some safety room for 1.85 projection. The reason for this is that every theatre seems to have an aperture plate that says 1.85, but never seems to match a 1.85 test chart.”

And on the color space used:

“We graded in P3 colour space as this was the route I was used to working with. Jim Whittlesea and Howard Lukk in the U.S. had defined and proved it worked, when working on the Stem tests for DCI (in 2004). P3 was a fairly close match to the colour space of film and meant that we also had a DCI P3 version for the DCDM without needing to re-grade. The route we eventually took was to grade in 709 [providing the HD and DC masters] and then do a pass in P3 space at gamma 2.6, then finally convert into log space and tweak to make the film output version.”