Friday, June 10, 2016

The Odyssey of trying to upscale interlaced SD to HD without losing quality: Part 2

You might want to read the first post in this series, because I'm not going to explain interlacing or what the various programs do again.

Since Part 1, I've discovered a few things:

VapourSynth currently requires the same plugins as AVISynth in order to run QTGMC, with the same stability issues and limitations. I'm actually trying to learn Python at the moment anyway, though, so I might revisit it in the future and see if someone has ported the process natively in a way that supports multithreading without hacks.

Setting the topmost SetMTMode in my AVISynth script to (5, 10) versus (5, 12) appears to have solved my remaining stability issues. I'll be posting my full script down below.

I discovered AvsPmod, a program that loads AVISynth scripts with syntax highlighting and video preview. It makes adjusting scripts a heck of a lot easier. Because of AvsPmod, I don't need to use VirtualDub to check my work anymore.

I also discovered that I don't need VirtualDub to render the AVISynth scripts, because FFMPEG can load them directly. That means I can convert .avs scripts directly to any format that FFMPEG supports, including ProRes. This is awesome, because I can write .bat files that include all the settings I want for a particular codec/container/etc. All I have to do is change the input and output file names and double-click the .bat file. Also, while ProRes is awesome, I don't actually need to use an intermediate codec - I can render straight to a YouTube-friendly H.264 .mp4 file if I want without the (admittedly minor) quality loss of the extra step.

According to a few TV editors I asked, FFMPEG should probably not be used to render a ProRes deliverable for broadcast TV. Apparently, FFMPEG's ProRes implementation is not recognized by Apple, and might be rejected by QC because of differences in embedded metadata. Not a problem for my current work, but if I did need to generate a file for broadcast, I would probably want to get a cheap Mac Mini or a month-long subscription to Scratch to make "official" ProRes deliverables. Incidentally, if I could afford a permanent license for Scratch, I would get one. Even with its oddball interface, it's still far and away the most responsive grading/compositing program I've ever used.

BIG DISCLAIMER: This process may not work, may crash, or do other things to your system. 

You have been warned. 

If you're on a deadline (and using Premiere Pro, After Effects, or Final Cut Pro), your best bet is probably to use a paid plugin like FieldsKit.

Here's my .avs script settings for QTGMC deinterlacing:

SetMTMode (5, 10)
QTInput("", audio=1)
SetMTMode(2)
QTGMC(preset="Slower", SourceMatch=3, Lossless=2, EdiThreads=1)

I've also found an awesome conservative sharpening filter, LimitedSharpen (see the links below), that can be added at the end of the script for a little extra punch.
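As a sketch (assuming the filter's default settings, which I haven't tuned here), the call just gets appended to the end of the .avs script:

```avisynth
# Appended after the QTGMC() call. With no arguments, LimitedSharpen uses
# its default (conservative) settings; tweak them only if the result looks soft.
LimitedSharpen()
```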


So, where do you go to get all this goodness? Here are some web links:

AVISynth (You will need both the 32-bit Official Build and the 32-bit Unofficial Build)

AVISynth source filters (Get the source filter for the format/codec you want to load. In the above script, I use QTInput.)

QTGMC (Get the "Plugin Package for multithreading".)

LimitedSharpen (Optional; note that it requires RGTools to run. Get the x86 version of the latter in order for it to work within a script using QTGMC.)

FFMPEG Windows Binaries (Get the 32-bit static version)


Install AVISynth. Put the AVISynth filters in the AVISynth plugins directory, then copy the multithreaded build of avisynth.dll to your system folder, replacing the existing file. Copy the couple of system .dlls from the QTGMC package to your system folder as well. Extract FFMPEG and AvsPmod to their own folders. Finally, set up FFMPEG to run from any directory on your PC by adding its folder to your PATH environment variable.

In the same directory as the video file you want to process, make an .avs script with the settings I listed above, changing the filename, source filter, and crop settings as necessary. Loading this script in AvsPmod will let you preview your results and adjust the settings to your preference. When you're done, don't forget to save your work.
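As a purely hypothetical example (the crop and resize values below are invented; match them to your own footage), the extra lines added after the deinterlace might look like:

```avisynth
# Hypothetical additions after the QTGMC() call: crop away edge junk and
# letterbox bars first, then upscale the remaining picture to an HD frame size.
Crop(8, 60, -8, -60)        # example values only; AVISynth's core Crop filter
Spline36Resize(1440, 1080)  # one of AVISynth's built-in high-quality resizers
```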

Make a .bat file with the FFMPEG commands of your choice. For example, here's a command to encode an .avs script to a ProRes 422HQ Quicktime file:

ffmpeg -i "videofile.avs" -c:v prores_ks -profile:v 3 -qscale:v 9 -pix_fmt yuv422p10le ""

Change where necessary. The quotes around the filenames allow you to enter filenames with spaces in them.
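The whole batch-file idea can be sketched as follows (shown as POSIX shell for readability; the actual workflow uses a Windows .bat file, and "videofile" is just a placeholder name). The command is echoed rather than executed so you can sanity-check it before running:

```shell
#!/bin/sh
# Derive the output name from the input name so only one line
# changes per clip, then print the full ffmpeg command.
in="videofile.avs"            # placeholder input script
out="${in%.avs}.mov"          # videofile.avs -> videofile.mov
echo ffmpeg -i "$in" -c:v prores_ks -profile:v 3 -qscale:v 9 -pix_fmt yuv422p10le "$out"
```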

I'll post a video soon with a full tutorial and more examples.

Monday, March 21, 2016

The Odyssey of trying to upscale interlaced SD to HD without losing quality: Part 1

In the past, I did some work involving upscaling a letterboxed standard definition (SD) show to HD for online streaming video. As it turned out, the built-in After Effects and Premiere Pro resizing tools are not the best for this sort of scenario.

The reason why the otherwise fabulous tools fall short is how they deal with interlacing.

Interlacing is a technique started in the early days of television that allows for the appearance of 60fps motion with the bandwidth of a 30fps signal. This is achieved by using "fields", which are only half the resolution of a full progressive frame. Each field is drawn on every other horizontal line of the screen; the next field then fills in the lines that the previous field left blank. Because the two fields are interlaced together, the eye and brain of the viewer combine them, giving you roughly the appearance of 60 frames per second (fps) video. Because of the reduction in signal bandwidth, interlaced video has been used by TV broadcasts ever since (yup, even in HD). For a slightly clearer example, here's a section of two fields combined together with a fast-moving object as the subject (in motion, these combing artifacts are usually not as noticeable):

Once we moved into the age of digital video editing, interlacing made everything more complicated. Since digital video devices didn't want to show individual jagged-looking fields, they combined (or deinterlaced) them into discrete frames, then based timecode standards around 30 frames per second (technically 29.97 fps for color NTSC video) rather than 60 fields per second (technically 59.94 fps). In order to work with broadcast video devices and timecode, computer editing/compositing/etc. devices and programs had to follow the same standards, but still be able to output either progressive or interlaced video at the end.

The bottom line is this: in order to upscale (most) SD video, it first needs to be deinterlaced.

I'm greatly oversimplifying, but deinterlacing is commonly done in one of three ways:

1. Double the lines in each field to fill in the gaps, then treat each of the line-doubled fields as individual frames. With this method, you end up with less overall resolution, but it's quick and easy. In the old days, they used to make "line doubler" devices that would do this sort of thing for high-end TVs and such. Depending on the algorithm, the process might also "decimate" the framerate to 30fps in order to avoid twitchy artifacts from constantly switching between "upper" and "lower" fields.

2. Combine every 2 fields together into a single frame via an image processing algorithm like Yadif. This gives you better frame resolution, but still decimates the framerate to 30fps and can look like a Photoshop "artistic" filter. This might be a good thing; it gives a slightly more "filmic" look and saves on video filesize. After Effects uses a somewhat similar process if you check the "Preserve Edges" checkbox in the Interpret Footage right-click menu of a clip. A better way (in my opinion) involves using VirtualDub to perform the deinterlacing and upscaling. This is the workflow I've used in the past.

3. Use complex algorithms to look at detail from several fields at a time to create interpolated frames at 60fps. This retains both the full detail and the full motion of the original video, but... while consumer TVs do an okay job with this, in the pro editing world it's either been done by very expensive dedicated hardware devices (like the Teranex) or moderately-to-somewhat-pricey software plugins (like FieldsKit or Tachyon) that still require a fair amount of fiddling to get working properly. It can also result in minor-to-moderate "ghosting" artifacts. To be fair, proper frame interpolation is not a trivial process, and the above solutions do a great job.

I assumed option 3 was basically out of my reach - the best "double framerate" deinterlace option in VirtualDub can use Yadif, but has the issues of a 60fps line doubler conversion, and my budget hasn't allowed me to purchase any of the commercial solutions.

So, I gave up for a while. Then, a new project came along from the same client for upscaling some more SD footage. Since I already use AVISynth scripts to load Quicktime files into VirtualDub, and I've seen some great inverse telecine (aka IVTC, a process that removes the redundant frames from 24p film that has been transferred to 60i video) plugins, I decided to check out deinterlace filters for AVISynth again. That's when I found out about QTGMC, an AVISynth plugin that does pretty much everything I want it to, and it's free.

Unfortunately, it has some drawbacks.

First, let me tell you about AVISynth and VirtualDub. Both of these programs were developed as open-source video processing tools in the early 2000's.

VirtualDub is kind of like a video swiss army knife - it uses built-in filters to do everything from resizing a video to replacing audio, sharpening, and even some visual effects. The downside with VirtualDub is that by default it only loads and saves files in an .AVI container, which significantly limits the number of codecs that it supports. There have been plugins developed that allow it to load a number of other container formats, but the plugins don't always work properly or continue to be supported as new codecs are released. However, if you combine VirtualDub with AVISynth, you can read almost any video codec that's ever been released.

AVISynth is probably the oddest video tool I've ever used. It's not a standalone program per se, and it has no interface. It's a scripting language that gives instructions on how to process video using a frameserver. This means you have to write a text file with a list of instructions, then load that text file into a separate program that can communicate with AVISynth's frameserver. VirtualDub is one such program. AVISynth's syntax can be confusing, arcane, and not terribly user-friendly. However, it can do all the video processing tasks of VirtualDub and more, and do so before the video is displayed in VirtualDub. It also has a truly staggering number of plugins developed for it, and some of them rival commercial programs in their functionality.

Now, back to QTGMC. QTGMC is a plugin that uses other AVISynth plugins to perform frame interpolation and deinterlacing. I won't attempt to explain the details; suffice it to say it has a huge number of variables and settings... but it works. It really, really works. You can use the combination of AVISynth with QTGMC plus VirtualDub to turn SD interlaced footage into 60fps HD footage.

Unfortunately, there are a few problems. Remember when I mentioned how long ago these tools were originally developed? By default, neither VirtualDub nor AVISynth is multithreaded, which means they don't take advantage of modern multi-core processors. They're also 32-bit apps. There are technically 64-bit versions of VirtualDub and AVISynth, but they lack the plugin support of the 32-bit versions. The attempt to "fork" AVISynth for proper 64-bit support (known as AVISynth+) doesn't appear to support multithreading natively, either.

Now, there is a replacement library for AVISynth that enables multithreading support; in fact, QTGMC basically requires it to perform properly. It's not, however, what you would call stable. AVISynth becomes prone to crashing or simply stopping mid-render if you've set something it doesn't like, and those settings may be different depending on your hardware, the versions of the plugins you're using, etc. etc. etc.

Currently, I'm having trouble getting QTGMC to render beyond about 15mins of footage. I got to that point by gradually lowering the number after "EdiThreads" from 6 to 1. I'll keep playing with settings to see if there's a magic combination that will work.

I should mention that there is one other option: a ground-up rewrite of AVISynth called VapourSynth. It's 64-bit native and supports multithreading "out of the box". It also uses an entirely different scripting syntax, because it's both written in and scripted with Python. It can now load AVISynth plugins, but you still have to learn a whole new scripting language to use them.

Stay tuned for part 2, where I reveal the results of my AVISynth experimentation and see if I'm going to be willing to try VapourSynth or not.

Tuesday, July 30, 2013

Grading Workflow update 7-30-2013

Since my last entry, I've upgraded to the "CC" editions of all my Adobe apps... except for Encore CS6. CC has no new version of Encore, nor is it installed by default. Thankfully, I caught this issue before uninstalling Encore (I've found out subsequently that you can install it again, but you have to re-enable the CS6 version of Premiere Pro in Adobe CC).

Unfortunately, one additional wrinkle of the new CC Premiere Pro is that my old copy of Cineform NEO no longer works with it. This is not a total loss, however, because after trying my previous color grading workflow and finding that it simply took too much time to render the individual clips for a long-form project, I've decided to take an entirely new approach.

The new version of Premiere Pro has SpeedGrade's Lumetri Deep Color engine built in, and as a result, you can create "looks" in Speedgrade that can be imported into Premiere Pro and used as filters (Sort of like Magic Bullet Quick Looks). It would be awesome if you could actually adjust these looks in Premiere Pro, but I'll take what I can get.

So, my current workflow is:

  1. Do a "Send to Adobe Speedgrade" of each of the sequences in my project
  2. Grade those sequences in Speedgrade.
  3. Save the grades as individual "looks".
  4. Transfer the look files to a looks sub-folder in my project's main footage folder. If you're on Windows, Speedgrade's custom look files are stored in: C:\Users\Owner\AppData\Roaming\Adobe\SpeedGrade\7.0\settings\looks  (I highly recommend creating a desktop shortcut to the folder so you can get back to it easily).
  5. Apply the looks individually to the respective clips.

If you don't have a bunch of hard drive space to work with, you can just do the "Send to Adobe Speedgrade" for one sequence at a time, but it's handy to have the Speedgrade sequences available if you need to adjust one or more of the looks.

The only issue I've run into so far is on the project's sizzler reel, where rendering to DVD occasionally will produce a twitchy white bar on the right side of some of the clips with the "looks" applied to them. I'm still trying to track down the issue, but thankfully, there's an easy solution - render out to a full-res format (I use Uncompressed 10-bit YUV Quicktime) first, and then use that .mov to render/encode the DVD files in Adobe Media Encoder.

Thursday, April 11, 2013

CS6 and DaVinci Resolve workflow update

Here's what I've come up with as a workflow to edit and finish in Premiere Pro, but color grade in Resolve Lite:
  1. Edit project in Premiere Pro.
  2. When finished, consolidate project to new folder and/or drive.
  3. Import all used footage in new folder into After Effects as separate clips. If you can trim the clips with handles on the sides, even better.
  4. De-noise/sharpen clips with Neat Video. Render all clips out (separately) to new "Ungraded" folder. This takes approximately 6x real time on my system.
  5. Import clips from "ungraded" folder into new Resolve project.
  6. Grade clips in Resolve Lite.
  7. Export clips (again, as separate clips) to new "Graded" folder. Make sure settings and naming match those of "Ungraded" clips. If the "Ungraded" clips have audio, remember to render out "Graded" clips with audio.
  8. Make new Premiere Pro project, import old project into it. Save project and Close Premiere Pro before next step.
  9. Move "Ungraded" folder to different directory.
  10. Open copied Premiere Pro project.
  11. Link files to "Graded" clips.
  12. Save project, render to appropriate format(s).
I alleviate some of the storage concerns by rendering out to Cineform Film Scan 1 444 instead of uncompressed video, but the render time for denoising all that footage is absolutely nuts. Even after consolidating my project in Premiere Pro, it would end up taking me over a week of rendering (12 hour days) to get all the footage prepared for my latest project. This is mainly because the project manager trim footage option doesn't appear to work for DSLR footage. Oh well.

Even with this workflow, Resolve still has some issues. It can be very finicky when it comes to what footage it will actually import:
  • I had to re-render some Cineform transcodes twice to get Resolve to see them. No idea why.
  • Resolve does not import most flavors of .AVI files, so I had to re-wrap my Cineform .AVIs to Quicktime files (Can be done in Cineform's own HDLink program, but only for Cineform files). 
  • Rendering the original footage to uncompressed Quicktimes files appears to alleviate some of the import issues, but comes with a huge filesize increase.
  • When rendering the final graded clips out of Resolve, make sure to go to the timeline in the "Deliver" panel and right-click above the clips so you can "Select All". Otherwise, you might end up rendering one clip and banging your head on your desk in frustration.
  • Make sure you render to the same bit-depth that you work in, or your luminance values will be screwed up.
So what can I do if I can't use the above workflow? My current solution is to clean up my sequences in Premiere Pro so they're all on one track (one video track and one audio track) with only cuts, speed changes and cross dissolves. Then, I  import the project into After Effects to denoise/do a basic grade. I think in the future, I'll see if I can use just Premiere Pro plugins to do all this stuff instead of having to roundtrip or finish in another program. Or I could just shoot footage with a better camera/codec so that I don't have to go through the denoising step.


Just for the heck of it, I also tried messing with the footage using the ACES colorspace. 

ACES is basically a super wide gamut color space that is designed to encompass all other color spaces. It works by selecting a premade LUT for the input source (camera model, film, etc.), a working LUT (don't ask), and a display/output LUT to make sure the device/format you're outputting to displays the footage properly.

That's the theory. The reality is that my low-gamut DSLR footage ended up looking like crud when imported, and I couldn't figure out how to adjust the grade to fix it. I will investigate more later, possibly with footage from a better camera.

Thursday, November 22, 2012

Adobe CS6 and DaVinci Resolve - Update #3

Okay, so since my last post, I've found out a few more things:

  • A guy named John Schultz has created an RGB Curves preset for Premiere Pro that acts as a LUT for the Technicolor Cinestyle picture profile. It looks great, doesn't hog resources, and is GPU accelerated for realtime playback and fast rendering.
  • When I tried importing a timeline from Premiere Pro into Davinci Resolve Lite (as an exported .XML) and doing a rough color grade, Resolve added a few random black frames to the footage. I didn't notice them until I rendered out the graded footage. I went back in to Resolve and confirmed the problem appears on the timeline... but not consistently. It could be a framerate mismatch at some point in the importing process, but I haven't been able to figure it out yet.
  • Resolve Lite doesn't like anything other than cuts and dissolves in an XML import of an edit. In my limited testing, any error or missing clip will cause the offending clip to be replaced by another clip - usually the same clip for all errors.
  • The whole color grading process has turned out to be a lot more work than I expected. The grading itself is fun, but the process of getting a project into either SpeedGrade or Resolve is a counter-intuitive pain in the butt. I can see why Adobe encourages you to render out a project to .DPX before importing it into SpeedGrade.
  • Resolve Lite has some quirks, like needing to load config presets twice to get the settings to load properly.
  • My massive Premiere Pro project file for the reality show pilot I'm working on freezes After Effects if I try to import it into the latter. I'll probably need to use a trimmed-down version for the final conform.
  • If you need a file that will play back on a lower-end PC, a DV-Widescreen .WMV file appears to work quite well. It's especially good for dailies. If you know a good Mac equivalent, feel free to leave a comment. 
  • Adobe Media Encoder is friggin' awesome. You can queue up multiple jobs, different settings, and don't need to leave Premiere Pro open once a project is queued up to render.
  • Adobe Prelude is pretty good for ingesting DSLR footage, although there are a lot of features I wish it had, such as:
    • Renaming files before/as they're being ingested with custom auto-increment options (e.g.: "" could be automatically changed to something like "SGP Test Shoot - 10-23-2012 - Camera A - Shot".)
    • Being able to ingest audio files without kicking up an error message.
    • Automatic syncing of dual-system/multi-cam footage (maybe via a Pluraleyes plugin, just like Premiere Pro?)

Monday, October 1, 2012

Adobe CS6: Update #2

Well, it's now been a few weeks since getting my new CS6 system, and I'm still working a few issues out. Regardless, here's an update:

- My "Render" RAID 0 array (for exporting DVD and Blu-Ray files, as well as gigantic uncompressed files) has crapped out two more times, but each time I've been able to rebuild it and verify that no hard disk errors are (apparently) present. Since my other array has been just fine, I'm not quite sure what the issue is. If it fails another time, I'll call the ADK guys and see what we can figure out.

- I've started using Adobe Prelude for footage ingest, and it works great. If I was doing another "Data Wrangler" job (capture and backup, but no on-set image processing), I would be using it. However, bit-for-bit verification takes significantly longer than a straight copy, so it may not be appropriate for every shoot. Since I ingest for the current project at my home machine with no time pressure, I enable bit-for-bit verification and then do other things until the copying is done.

- I've gone back and forth on what I should do about Cineform. I finally decided to transfer my Cineform NEO license from my old machine to the new one, which has had the unexpected benefit of installing project presets for Premiere Pro. I also don't see a "gamma shift and pause after starting playback" issue like on my old system. The earlier issues with Cineform clips rendering out as random noise is now gone as well.

- As a side note, I've started an After Effects project where I render out a clip in different codecs/quality levels, then import them back into the project so I can A/B them for differences. So far, I can see that Blackmagic Uncompressed Quicktime files have a color shift from my original file, and that the size difference between Cineform High and Cineform Low is not nearly large enough to justify using Cineform Low. Cineform Film Scan 1 is my preferred format at the moment, although DNxHD 220 10-bit is pretty good, too. Oddly enough, the filesize difference between 422 and 444 colorspace appears to be nonexistent on Film Scan 1 and High HD Optimized Cineform files. Since I'm using de-noised DSLR footage for this test, I won't draw any conclusions until I re-test with more detailed footage.

- I installed DaVinci Resolve Lite, and while it has its own quirks, I definitely prefer it to SpeedGrade. Resolve has a much better interface, a ton of options, and a node-based workflow that's easier to figure out. Unlike SpeedGrade, LUTs can also be applied on a per-node basis, so I can easily A/B the change and remove quickly if it looks wrong.

- The current version of Resolve was designed to work well with Cinema DNG files, and after playing around with a test clip, I'm now sold on DNG as a format for uncompressed recording (although the hard drive space required by it is a real issue). At the moment, no Arri or Red camera uses Cinema DNG, but the Blackmagic Cinema Camera, the Kineraw cameras, and Aaton's Penelope Delta all use it. Two of those are cameras that indie filmmakers could potentially afford...

- Another side note: I just noticed that the Kineraw can record to Cineform RAW. Nifty, although the test clips from the Kineraw haven't really impressed me so far. Cineform RAW, however, looks like some hot stuff (compressed RAW sensor data, significantly lighter processor and storage requirements than Redcode). I wonder if I could compress Cinema DNG files into Cineform RAW. If they offered it as an in-camera option for the Blackmagic Cinema Camera, I think even more people would buy said camera.

- When it comes to footage management, I tend to import all the footage for a project into one Premiere Pro project file. This has the disadvantage of making projects take a long time to load, but once they're loaded, I have a huge amount of options. To be fair, 16GB of memory and a RAID 0 array are significant contributing factors as well. I've decided that I'll wait on a 16GB memory upgrade until Premiere Pro can address more than 12GB of RAM.

- My only complaint with Premiere Pro so far is that sped-up footage tends to choke playback for both the sped-up footage and the following footage, unless you stop playback after the sped-up footage and then resume playing. This could just be an issue because I'm editing in a native DSLR timeline, but I would still like to have the option to pre-render the footage so I could have consistently smooth playback without having to render out the whole sequence.

Friday, August 31, 2012

Adobe CS6: The awesome and the annoying - part 1

So, I've been working on the next Snow Goose Productions project; and this time it's our own reality show.

For those of you rolling your eyes right now, I'll just say that it's an interesting idea for a reality show that involves helping people with problems in an unconventional way; and without the ridiculous exploitation that has come to define the reality show space. That's what the goal is, at least.

But enough about that. This is a post about workflow and editing systems.

For at least the pilot of this show, I'll be shooting on two Canon T3i cameras. In order to deal with the 12-minute-per-clip recording time, I'll be staggering/overlapping the recording starts of each camera so at least one is recording at all times, and syncing the footage of both cameras to audio from a Zoom H4n audio recorder (which will be running in 4-channel mode - two external mics plugged in as well as the internal mics).

Despite finding a decent workflow for editing H.264 footage, my aging PC was far too slow to work on a project of the scope of this show - especially when I start throwing effects on. So, after much discussion with the other members of Snow Goose Productions, we decided to get a new editing system.

I thought about three possible choices:

  1. Build it myself. 
  2. Get it custom made. 
  3. Get a Mac.

After dealing with the quirks of a "build it yourself" system in the past, I decided not to do that again; even though it would be significantly cheaper. That leaves two choices: Mac or pre-built PC.

I've been wanting to get a Mac for years, but the higher cost of entry has always held me back. Now, the performance deficit is the issue. I can't afford to get even the baseline Mac Pro; and the remaining systems may be relatively stable, but don't give a lot of bang for the buck. Still, the advantage here is that I wouldn't have to deal with all the Windows configuration BS.

I've been a Premiere Pro (actually the whole Production Premium suite) user since 1.0, and while I've never been completely happy with it, it's (ultimately) gotten the job done. I didn't try to upgrade past CS3, since I haven't been working with processor/disk intensive codecs. Once I started working with a DSLR, that's all changed. Also, ever since CS5 was released, Adobe appears to have really gotten their stuff together and made Premiere Pro into a true contender to compete with FCP and Avid.

And one other thing: Thanks to Adobe Creative Cloud, you can now rent (almost) the entire Master Collection of Adobe programs for $350 this year - if you have CS3 or newer. So, CS6 seems like the logical choice.

CS6 is available for Mac. The Mac has ProRes. Pretty much every independent filmmaker I know uses a Mac. Macs tend to be more stable than Windows PCs. All of these are great reasons for getting a Mac.

There's one other issue, though: Ever since CS5, Premiere Pro and After Effects use GPU acceleration to greatly speed up rendering and allow you to work with more layers of video/effects in real time. In theory, Premiere Pro/After Effects/Media Encoder CS6 works with the GPUs in last year's MacBook Pros. From reading the Adobe forums, however, it appears that that support is sketchy at best. The new Retina display MacBook Pros? You have to "hack" them (add the name of their GPU to a text file), and there's no performance benefit to using them over software-only mode. The iMac GPUs? Not supported, even with the "hack".

Of course, I could always switch to Final Cut Pro X, but I would be re-learning a whole new editing paradigm right in the middle of shooting a major project - not a great idea.

I could run Premiere Pro in software-only mode, but would have vastly increased rendering times.

Using Final Cut Pro 7 would mean that I would have to convert all my footage to an intermediate format - just like I do now.

To set up either a PC or Mac with Avid would blow more than half my entire budget.

So basically I've talked myself out of a Mac, Final Cut, and Avid. Which leaves a pre-built PC running Creative Cloud.

There are a number of video-oriented PC building stores out there.

I went with one.

I got a PC with:

  • An Intel i7 3930K processor. 
  • 16GB of RAM manufactured specifically for the system builder. 
  • A 256GB SSD drive. 
  • 8TB of internal RAID storage (Two 2x2TB RAID 0 arrays - one for editing, one for rendering). 
  • An Nvidia GeForce GTX670 2GB video card - which is faster than a Quadro 4000 for less than half the price. 
  • Two year parts and labor + 1 year express pick up warranty.

I ended up going with ADK, since they were priced right, had a great warranty, and allowed me to send in my Blackmagic capture card to fit into the new system. Oh, and they test the system thoroughly before sending it out. Unfortunately, my capture card was DOA (and I didn't have enough money in the budget to replace it), so I'm currently doing without it.

I've now had the system for 3 days, during which I've poked and prodded at the video production programs (Premiere, After Effects, Encore and SpeedGrade). I've barely scratched the surface of the other programs in the suite.

So, without further ado, on to the pluses and minuses.

Given the right hardware, Premiere Pro CS6 is now a pretty awesome workhorse. Instead of doing the proxy file dance, I can now edit footage from my T3i without transcoding. Many common effects play back smoothly without rendering.

In particular, Warp Stabilizer is amazing; not so much because of how well it stabilizes footage (other editing programs have the same or similar capabilities), but because it analyzes in the background while you edit, and the clip it's applied to doesn't need rendering, even when you change the basic settings. The one exception: if you stay zoomed out and want it to fill in the black borders created by moving the image around, you have to render. Otherwise, you can change the basic parameters to your heart's content, and the footage still plays back buttery smooth unrendered.

What used to be a whole process involving rendering out uncompressed to After Effects, setting tracking points and rendering back out to an uncompressed file is basically reduced to dragging and dropping an effect on a clip. Awesome.

One particular gripe I had for a long time was how crappy the process of making a DVD from Premiere Pro was. Instead of rendering the HD timeline and then downsampling, Premiere Pro's Media Encoder would render everything - including titles and effects - directly at DVD resolution. To add insult to injury, Premiere Pro had a 2-use-only Dolby Digital encoder, so you were forced to render out the audio in PCM, import the video and audio files into Encore (which had an unlimited-use Dolby Digital encoder), and hope that you had calculated the video bitrate low enough that it didn't go over DVD capacity.
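That bitrate guesswork is really just arithmetic on the disc's capacity. Here's a back-of-the-envelope sketch of the calculation I mean; the capacity figure is standard DVD-5, but the 5% margin for muxing overhead and menus is my own guess, not anything official:

```python
# Back-of-the-envelope DVD bitrate check: given a runtime and an audio
# bitrate, how high can the average video bitrate go before the program
# overflows a single-layer DVD-5? The 5% margin for muxing overhead and
# menus is an assumption, not a spec value.

DVD5_BYTES = 4_700_000_000  # single-layer DVD-5 capacity (decimal GB)

def max_video_kbps(runtime_minutes, audio_kbps=192, margin=0.05):
    """Return the highest average video bitrate (kbit/s) that fits."""
    seconds = runtime_minutes * 60
    total_kbps = DVD5_BYTES * 8 / 1000 / seconds  # whole-disc bit budget
    return total_kbps * (1 - margin) - audio_kbps

# A 90-minute program with 192 kbit/s Dolby Digital audio:
print(round(max_video_kbps(90)))  # about 6423 kbit/s average
```

Anything under the DVD spec's hard ceiling (roughly 9.8 Mbit/s for video) is then just a quality-versus-runtime tradeoff.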

In Premiere Pro CS6, there's finally a proper unlimited-use Dolby Digital encoder, with really clear labels for the encoding presets. Awesome. I still had to select the Dolby Digital encoder in the settings (so I saved it as a preset), but it works consistently.

This version of Premiere Pro (like the previous two versions) uses a maximum of 12GB of RAM, so 16GB or greater is ideal, especially if you run memory-intensive 3rd-party plugins. The plus side? I can load up a whole timeline of clips, and it hasn't crashed yet. I could get used to this.

Media Encoder is now a truly separate application, so you can use it to quickly convert a standalone clip to a YouTube version (or whatever you'd like) without creating a new Premiere Pro project or fiddling around with After Effects.

After Effects doesn't feel that different to me, but I have yet to try the Mocha tracker built in to it. If you do a lot of After Effects work, get as much RAM (and as many processor cores) as you can afford - After Effects will use it all if you let it. Based on what I've seen so far, I would recommend 32GB of RAM or more.

I might be having some issues with the results of the Neat Video plugin not showing up properly in the monitor window, but otherwise, After Effects is nice and stable.

SpeedGrade is a bit of an odd duck. It's a really powerful grading program, but it's designed primarily for an uncompressed video workflow (the "send to SpeedGrade" option in Premiere Pro renders out your timeline to .DPX frames). I haven't had much luck getting it to import .EDL sequences from Premiere that point to the original media.

The interface is pretty arcane, too; I think there's a theory that if a program was designed to cost thousands of dollars, it should have an interface designed to fit with a very particular workflow and setup, rather than having silly things like menus and a help file. The timeline is confusing, and trying to do simple things like set up a new project (rather than just deleting the timeline of the old one and adding new clips) still eludes me.

On the other hand, the actual quality of the grade you can get beats the ever-loving #$%@ out of the built-in color correction effects in Premiere Pro and After Effects. I'm going to have to basically take a course to learn how to use them properly, but the secondary correction passes alone are just incredible. Now, to learn how to properly set the colorspace to match 8-bit output files...

I haven't messed around with the menu creation abilities, but Encore appears basically unchanged. I need to get some Blu-ray discs to play around with that a bit - maybe a BD-RE disc for test burns?

Anyways, I'll have a lot more time to play with these programs (and more) in the next few days, so I might report more in a week or so.

Update 1 (9/7/2012):

I had one disk of my "render" RAID 0 array drop out with an error. I rebuilt the RAID and checked it out thoroughly, and it seems to be working fine, but I will definitely be backing things up more frequently. Also, the hard disks make a loud "click" from time to time, although I've read that this is normal for that particular model of drive.
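The extra backups aren't paranoia: RAID 0 stripes data across drives with no redundancy, so the whole array is lost if any one member drive fails. A quick sketch of that math (the 3% annual failure rate below is a made-up illustrative number, not a stat for my drives):

```python
# RAID 0 has no redundancy: the array survives only if *every* member
# drive survives. With an assumed independent per-drive annual failure
# probability p, an n-drive stripe fails with probability 1 - (1 - p)^n.

def raid0_failure_prob(per_drive_p, drives=2):
    """Annual failure probability of a RAID 0 array of `drives` disks."""
    return 1 - (1 - per_drive_p) ** drives

# With a hypothetical 3% annual failure rate per drive:
print(f"{raid0_failure_prob(0.03):.4f}")  # prints 0.0591
```

In other words, a two-drive stripe roughly doubles the single-drive risk, which is why the "render" array only ever holds data I can regenerate.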

I've experienced occasional program crashes in After Effects and SpeedGrade. SpeedGrade has an (automatic, background) QuickTime importer that crashes frequently, although the main program keeps running. Clearly, SpeedGrade works best with .DPX files - although my "render" RAID dropped out in the middle of working with .DPX files. If it does so again, I'll note it here.

I might seriously consider upgrading to 32GB of RAM - After Effects sure can use it, and I'm sure other programs would love it, too.

Cineform Neo 5 footage imports fine, but renders out as colored noise in After Effects and Premiere Pro. Thankfully, VirtualDub can access the footage (using the QuickTime plugin, I believe, which you need to download separately), so I could transcode it to another format. There appears to be no upgrade pricing from Neo 5 to Cineform Studio Pro, which is unfortunate considering how soon after I bought Neo the latter came out. So, I think I won't be using Cineform for intermediate files.

I'm going to have to really study the SpeedGrade manual to figure out how to make sure I'm working on a new project. As it stands, I guess it's basically set up to work on one project, then clear out all the files associated with that project before going on to the next one.