Friday, February 2, 2018

Tutorial #3 - Using ShotCut as a ProRes transcoder

After the complexity of my previous tutorials, I thought I'd do something simpler this time, just in case some of you aren't fans of the complex AVISynth-FFMPEG workflow. This one uses just one program and a minimum of steps.

The program in question is ShotCut, a free, open-source editing program for Windows, MacOS and Linux. Like a lot of other open-source video editing apps, it uses the MLT Engine for timeline playback/editing, and FFMPEG for export. Unlike most of those apps, the Windows port is pretty stable, even with heavily compressed h.264 footage.

ShotCut is a fairly simple program, but it does have enough features that it can be used as a decent replacement for Windows Movie Maker or any of the slew of $15-$50 editing programs that flood the internet. It'll do multi-track editing, text, some basic color correction, and even supports some video output cards/devices like the Blackmagic Intensity Pro. If you're interested in learning how to do that, the ShotCut site has a tutorials page that's fairly comprehensive. For the purposes of this tutorial, I'm just treating ShotCut as a transcoding program.


This process may not work, may crash, or do other things to your system. I check for viruses before using any software, but malicious hackers have been known to break into developer accounts and insert code into previously benign programs.

If you are working on a professional production, the results may or may not be acceptable, especially if you're trying to send out a ProRes master to a TV station. Their QC department may reject ProRes files that haven't come from an official, licensed-by-Apple encoding app.

You have been warned. 

First, download ShotCut. I personally think the installer is the way to go, but if you want more control over where it installs everything, then you can grab the "portable" version.

Next, virus scan the file using VirusTotal or a major anti-virus/malware scanning program.

Then, install ShotCut. This is pretty easy, basically just click "next" a few times, then "finish". If you went with the portable version, extract the files into your location of choice.

Once ShotCut is installed, run it. You'll see a fairly simple interface with a few buttons at the top. At the moment, it's just showing the Source panel, and that's all we'll need to get started. Drag your video file of choice onto the blank area, and it will load and start playback. Pause it with the spacebar.

Click on the Export button, which will bring up the Export panel. Make sure that the resolution and framerate settings match your video file, then scroll the list of codecs on the left to get to "Intermediate-ProRes" and select it. If you want to change the subtype of ProRes, click on the Other panel and change the "vprofile=" number to your preference. If you need a refresher from the FFMPEG tutorial:

0 = ProRes Proxy
1 = ProRes LT
2 = ProRes 422
3 = ProRes 422HQ

If you change the codec to "Intermediate-ProRes-Kostya", then you can also do:

4 = ProRes 444
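
For example, if you want ProRes 422HQ, the relevant line in the Other panel should simply read:

vprofile=3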

Once you're happy with the settings, hit the Export button, name your file and save. Rendering will start, and once it's done, you should be good to go.

Best of all, because you didn't add anything to a timeline, you won't get a prompt to save your work when closing ShotCut.

The first caveat to this process is that the encoder is slow - I only get about the equivalent of one core's performance, which is far slower than command-line FFMPEG. I'm not sure why that is, although I suspect it may have something to do with a bottleneck between the MLT engine and FFMPEG.

Also, deinterlacing is performed using YADIF, so don't expect the kind of high-res interpolation that AVISynth+QTGMC can provide, and don't expect it to frame-double. It's fine for quick and dirty conversions, but for pro work, you might want to consider either the AVISynth+QTGMC method, or Premiere Pro/Final Cut Pro X with the FieldsKit plugin installed.

Oh, and don't try to deinterlace and upscale in the same export. In my experience, it causes a chroma/colorspace shift. Upscaling progressive video should be fine, but check before using the exported video just in case.

You can actually encode a number of different video files in one go, but I don't recommend it. To do so, click on the Playlist button, which gives you a project bin-like place to drop multiple files. Drag your files over, then switch to the Export panel. In the "From" drop-down, select "Each Playlist Item", then proceed as if you're working with a single file.

There are some significant downsides to encoding multiple files at once. You need to have the exact same resolution and framerate settings for all the files in the playlist, or the files that don't match the preset will be converted to match the export settings. It also doesn't automatically carry over the filenames from the original files; you can only enter a single filename for the exported videos, with "-1", "-2", etc. appended to the end.

Thursday, June 15, 2017

Tutorial #2 - Converting DVDs to editable video files - Option 1

After my last tutorial, I decided to focus on improving my antiquated DVD-ProRes workflow.

Thankfully, newer software can make this process a lot simpler.

A program called MakeMKV works as a sort of all-in-one DVD/Blu-Ray extracting program, and unlike most similar programs, the result is not only a single file, but one that hasn't been recompressed. You get an .mkv wrapper around the video and audio that you choose to extract, and that's about it.

From there, you can use the FFmpegSource filter in AVISynth to load the file, and then apply any additional processing you need. If MakeMKV detects your footage as being 24p, then it will include that metadata in its .mkv, so you shouldn't have to go through the IVTC process. If it detects it as 60i, then you can use QTGMC and it should work... however, the preview window in AvsPmod can be a little glitchy when using FFmpegSource, and scrubbing back and forth may display hitches not present in the rendered end result.

One additional note: FFmpegSource needs to index the file before loading it, so don't worry if it takes a long time to load initially. This index is dumped into an .ffindex file in the same folder as your video file, which will load automatically in the future.

I should also mention that this process doesn't work on all DVDs. Some will have audio sync issues, and need an alternate (and more complicated) workflow with its own benefits and tradeoffs that I'll outline in a future post.


This process may not work, may crash, or do other things to your system. I check for viruses before using any software, but malicious hackers have been known to break into developer accounts and insert code into benign programs.

You should always obtain the written permission of the copyright owner of the content before decrypting their DVD. Failure to do so may result in fines, lawsuits and/or jail time depending on your country's laws.

You have been warned. 

Also, this tutorial is for Windows 10. Most of the steps work for previous versions of Windows, but may require slight modifications. Users of MacOS and other OSes should look elsewhere.

Here's the video version of this tutorial:

And here's where you go to get the software:


AVISynth (You will need both the 32-bit "Official" build and the 32-bit "Unofficial" multi-threaded build)


If you're going to be dealing with 60i footage, then you'll probably want to deinterlace with QTGMC. If you need to know how to do this, check out my previous blog post.

QTGMC (Get the QTGMC download, then get both the "required" and "optional" filters as listed on the page. If there's a choice, get the 32-bit [x86] version of filters.)

LimitedSharpen (Technically LimitedSharpenFaster. Optional. Get the x86 version in order for it to work within a script using QTGMC.)

FFMPEG Windows Binaries (Get the current 32-bit static version)



  1. Install MakeMKV.
  2. Run the main AVISynth installer. 
  3. Put the AVISynth filters you downloaded (.dlls and .avsi files) in the AVISynth Plugins directory. Make sure to use the 32-bit (x86) versions of any filters if there is a choice. Make sure you're using version 2.2.7 (or later) of MaskTools2.
  4. Copy the multithreaded build of avisynth.dll to your system folder (usually C:\Windows\SysWOW64\) and replace the existing file.
  5. Copy the libfftw3f-3.dll file to your system folder.
  6. Extract FFMPEG and AvsPmod to their own folders. Keep the folder with the ffmpeg.exe binary open.
  7. Set up FFMPEG to run from any directory on your PC by adding it to your PATH variable:
    1. Press the Windows and R keys.
    2. Type "control sysdm.cpl,,3". Click "Run".
    3. Click on "Environment Variables".
    4. Select "Path" under "System variables" and click "Edit".
    5. Go back to your open folder where ffmpeg.exe is located. Select and copy the folder's path address from the address bar towards the top of the window.
    6. Back in the "Edit environment variable" window, click "New" and paste in the folder path. Click OK on all the windows you opened to get here.
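
To confirm the PATH change took, open a new Command Prompt window and run:

ffmpeg -version

If FFMPEG prints its version and build info instead of a "not recognized" error, you're good to go.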


Open MakeMKV.

Put a DVD in your optical drive. Open the DVD in MakeMKV.

Note: I've run into a DVD sent by a client that had some odd authoring, which prevented me from opening it successfully in MakeMKV. I ended up using my older method to process it, but the last two betas of MakeMKV now have an option that gets around the problem. Before clicking on the giant drive icon, check the "Open DVD Manually" box; when you get a list of the disc contents, type each of the titles you want to load into the text box, separated by spaces. Hit OK and those titles should open up properly. Also, you might need to decrease the minimum length of a detected title in MakeMKV's preferences.

Select the Titles and Audio tracks that you want to copy over. Select the output directory. Click on the Save to MKV button.

When done, close MakeMKV and go to the directory you just saved the .mkv file to.

In the same directory as your .mkv file, make an .avs script with the settings listed below. Change the filename, source filter, and crop settings as necessary.

Here's my boilerplate .avs script settings:

SetMTMode (5, 10)
FFmpegSource2("videofile.mkv", atrack=1)

If you're using 4:3 video, then a pixel aspect ratio conversion is necessary (unless you'll be outputting back to DVD, in which case, ugh):
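
(A sketch for typical NTSC 720x480 footage - adjust the crop and resizer to suit your source.)

Crop(8, 0, -8, 0)         # trim to the 704x480 active picture
Spline36Resize(640, 480)  # square-pixel 4:3 frame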

And here's what to add if you want to upscale to 720p HD:
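
(Again a sketch, assuming the 4:3 frame from above gets pillarboxed into 1280x720.)

Spline36Resize(960, 720)   # 4:3 image at 720p height
AddBorders(160, 0, 160, 0) # pillarbox out to 1280x720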

You can also use Lanczos4Resize or Spline64Resize on the last resize step if you need some overall sharpening. I'll also sometimes use LimitedSharpenFaster at the end for a little extra punch.

Please keep in mind that DVD video tends to have lots of compression artifacts, and sharpening can accentuate them.

Working on this script in AvsPmod will let you preview your results, and change them to your preference. When you're done, don't forget to save your work.

Create a new text file in the same directory and change its extension to .bat. Add the FFMPEG commands of your choice. For example, here's a command to encode an .avs script to a ProRes 422HQ Quicktime file:

ffmpeg -i "videofile.avs" -c:v prores -profile:v 3 -pix_fmt yuv422p10le -c:a pcm_s16le ""

Change where necessary. The quotes around the filenames in the FFMPEG command allow you to enter path addresses and filenames with spaces in them.

You might notice I'm using "prores" instead of "prores_ks" for the video codec setting. Since DVDs are already very heavily compressed, the difference in visual quality between the two encoders isn't enough to justify the massive difference in speed. If you're really keen on getting every last little detail, then "-c:v prores_ks -profile:v 3 -qscale:v 5" will provide it at the cost of a much larger filesize and a 2-5x longer rendering time. Also, the resulting file is nonstandard, so (I have been told) it might be bounced by the QC department of some TV stations.
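
Put together, the prores_ks version of the command above would look something like this (the output name here is just a placeholder):

ffmpeg -i "videofile.avs" -c:v prores_ks -profile:v 3 -qscale:v 5 -pix_fmt yuv422p10le -c:a pcm_s16le "videofile-prores.mov"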

Saturday, April 15, 2017

An SD interlaced to HD progressive Conversion Tutorial, or A Long-delayed Video

After all the time (see here and here) I've put into getting the AVISynth+QTGMC+FFMPEG deinterlacing workflow working properly on my system, I figured it was about time for a video tutorial. In my spare time, I've been trying to record one.

It's taken a lot longer than I thought.

Partly, this was due to finding new tricks/plugins that improved the process. Partly, it was due to finding unexpected bugs.

However, the main delay came from not wanting to do just a "do this, then this" tutorial where you come away with no context whatsoever about what's actually happening. I want people who watch the video to get a good idea of what AVISynth is, and what FFMPEG brings to the process.

So, I'd start recording a tutorial, then get caught up in a tangent. An hour and a half later, I still hadn't finished the video.

Plus, I make mistakes. There's a lot of material to cover, and I found myself leaving out crucial points (or ones that clear up confusing areas). I could have tried writing out a script, or just patching over my mistakes. In the end, though, what got me to the right place was simply repeating the actions so many times that I could practically do them in my sleep. I still had some audio issues to address in post, but my overall presentation quality is much improved from earlier attempts.

I could try adding more production values than just a screencast. But at a certain point, you just have to release a project. So, here without further ado is my first full-length tutorial. Because the total video is around 45 minutes, I will probably add chapter links on the video at a later point. For now, though, here is my work. Constructive criticism is welcome.

UPDATE: The AVISynth wiki has been updated with new download links for many of the filters, and the plugin pack is no longer in a place I can easily link to. I've updated the instructions below to reflect this, and made an update video to the tutorial (linked below), but the instructions for setting up QTGMC in the above video are now outdated. I will try to keep this page updated if other significant changes occur in the future. Anyways...

Here's a written description of the process, including links:

Friday, April 14, 2017

Deinterlacing HD footage without losing significant quality - Part 1

After my last post, I played around with a bunch of different settings, and discovered that plain old FFMPEG can do decent deinterlacing. One of the reasons I looked into this is that QTGMC (my AVISynth deinterlacing plugin of choice) is ridiculously slow and prone to crashing when processing HD footage.

The problem with FFMPEG is that its documentation is enormous, and while it sometimes includes clear examples to show you how to use certain features, other times it's simply assumed that you're technically savvy enough to know what they're talking about. For example, trying to figure out the best method for deinterlacing involved me doing Google searches for each of the various methods. Normally, something like that would give results pretty quickly, but most of the substantive discussions were from at least five to nine (!) years ago or related to programs that use FFMPEG as a back-end engine, rather than as a standalone app. Even after all that, it wasn't immediately obvious which method would be superior, so I decided to run a few tests.

Method 1: kerndeint - a relatively simple deinterlacing method that automatically halves the framerate and is roughly on par with the old After Effects / Premiere Pro deinterlacer.

Method 2: w3fdif - Developed by the BBC, this is much closer to what I was looking for. Unfortunately it was originally designed for interlaced standard definition PAL content, and as such somehow does not have a mechanism for selecting Top Field First field order. It did successfully deinterlace the footage, but ended up looking herky-jerky as a result of the field order mismatch.
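
One workaround that should take care of the field order (I haven't tested it extensively) is forcing the field-order metadata with FFMPEG's setfield filter before w3fdif sees the frames, something like:

ffmpeg -i "input_video.avi" -vf "setfield=tff,w3fdif" -c:v libx264 -preset slow -crf 18 "output_video.mp4"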

Method 3: nnedi - The method that QTGMC was designed to incorporate/replace. In theory, it does many of the same things, but to say that trying to figure out the options was confusing is an understatement.

Method 4: bwdif - To quote the FFMPEG documentation: “Motion adaptive deinterlacing based on yadif with the use of w3fdif and cubic interpolation algorithms”. This turned out to be the perfect solution. The default settings automatically detect field order (if it's set in the file's metadata) and do not introduce weird image artifacts, but still give decent deinterlacing. Oh, and it runs significantly faster than AVISynth+QTGMC. Like, 6-8 times as fast.

Now, there are some downsides to bwdif. QTGMC can introduce some ghosting artifacts (including misaligned chroma artifacts), but it also pulls more detail out of the original image, which makes bwdif a sub-par solution for SD interlaced to HD progressive upconversion. FFMPEG also doesn't have the (relatively) easy-to-read syntax of an AVISynth script, and it definitely lacks the versatility of the insane range of plugins available for the latter.

As always, here’s an example of the command-line options so you can run this process on your system. Note that you have to use -vf before naming the filter in quotes:

ffmpeg -i "input_video.avi" -vf "bwdif" -c:v libx264 -preset slow -crf 18 -pix_fmt yuv420p -c:a aac -b:a 320k "output_video.mp4" 

This is what I used to deinterlace a 60i HD file and re-encode it for YouTube. For ProRes encoding, see my previous post.
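
If you'd rather end up with an edit-friendly file instead of an H.264 upload, the same filter should drop straight into a ProRes command, something along these lines (output name is just a placeholder):

ffmpeg -i "input_video.avi" -vf "bwdif" -c:v prores -profile:v 3 -pix_fmt yuv422p10le -c:a pcm_s16le "output_video.mov"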

Friday, June 10, 2016

The Odyssey of trying to upscale interlaced SD to HD without losing quality: Part 2

You might want to read the first post in this series, because I'm not going to explain interlacing or what the various programs do again.

Since Part 1, I've discovered a few things:

VapourSynth currently requires the same plugins as AVISynth in order to run QTGMC, with the same stability issues and limitations. I'm actually trying to learn Python at the moment anyway, so I might revisit it in the future and see if someone has ported the process natively in a way that supports multithreading without hacks.

Setting the topmost SetMTMode in my AVISynth script to (5, 10) versus (5, 12) appears to have solved my remaining stability issues. I'll be posting my full script down below.

I discovered AvsPmod, a program that loads AVISynth scripts with syntax highlighting and video preview. It makes adjusting scripts a heck of a lot easier. Because of AvsPmod, I don't need to use VirtualDub to check my work anymore.

I also discovered that I don't need VirtualDub to render the AVISynth scripts, because FFMPEG can load them directly. That means I can convert .avs scripts directly to any format that FFMPEG supports, including ProRes. This is awesome, because I can write .bat files that include all the settings I want for a particular codec/container/etc. All I have to do is change the input and output file names and double-click the .bat file. Also, while ProRes is awesome, I don't actually need to use an intermediate codec - I can render straight to a YouTube-friendly H.264 .mp4 file if I want without the (admittedly minor) quality loss of the extra step.

According to a few TV editors I asked, FFMPEG should probably not be used to render a ProRes deliverable for broadcast TV. Apparently, the implementation of ProRes is not recognized by Apple, and might be rejected by QC because of differences in embedded metadata. Not a problem for my current work, but if I did need to generate a file for broadcast, I would probably want to get a cheap Mac Mini or a month-long subscription to Scratch to make "official" ProRes deliverables. Incidentally, if I could afford to get a permanent license for Scratch, I would do it. Even with its oddball interface, it's still far and away the most responsive grading/compositing program I've ever used.

BIG DISCLAIMER: This process may not work, may crash, or do other things to your system. 

You have been warned. 

If you're on a deadline (and using Premiere Pro, After Effects, or Final Cut Pro), probably your best bet is to use a paid plugin like FieldsKit.

Here's my .avs script settings for QTGMC deinterlacing:

SetMTMode (5, 10)
QTSource ("", audio= 1)
QTGMC(preset="Slower, SourceMatch=3, Lossless=2, EdiThreads=1)

I've also found an awesome conservative sharpening filter that can be added at the end for a little extra punch:
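
In its simplest form, that's just a default LimitedSharpenFaster() call tacked onto the end of the script (tweak its strength parameters to taste):

LimitedSharpenFaster()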


So, where do you go to get all this goodness? Here's some web links:

AVISynth (You will need both the 32-bit Official Build and the 32-bit Unofficial Build)

AVISynth source filters (Get the source filter for the format/codec you want to load. In the above script, I use QTInput.)

QTGMC (Get the "Plugin Package for multithreading".)

LimitedSharpen (Optional; note that it requires RGTools to run. Get the x86 version of the latter in order for it to work within a script using QTGMC.)

FFMPEG Windows Binaries (Get the 32-bit static version)


Install AVISynth. Put the AVISynth filters in the AVISynth Plugins directory, copy the multithreaded build of avisynth.dll to your system folder and replace the existing file. Copy a couple of system .dlls from the QTGMC package to your system folder. Extract FFMPEG and AvsPmod to their own folders. Set up FFMPEG to run from any directory on your PC by adding it to your PATH variable (skip to where it says "Windows Vista and Windows 7 users:").

In the same directory as the video file you want to process, make an .avs script with the settings I listed above, changing the filename, source filter, and crop settings as necessary. Loading this script in AvsPmod will let you preview your results and change them to your preference. When you're done, don't forget to save your work.

Make a .bat file with the FFMPEG commands of your choice. For example, here's a command to encode an .avs script to a ProRes 422HQ Quicktime file:

ffmpeg -i "videofile.avs" -c:v prores_ks -profile:v 3 -qscale:v 9 -pix_fmt yuv422p10le ""

Change where necessary. The quotes around the filenames allow you to enter filenames with spaces in them.

I'll post a video soon with a full tutorial and more examples.

Monday, March 21, 2016

The Odyssey of trying to upscale interlaced SD to HD without losing quality: Part 1

In the past, I did some work involving upscaling a letterboxed standard definition (SD) show to HD for online streaming video. As it turned out, the built-in After Effects and Premiere Pro resizing tools are not the best for this sort of scenario.

The reason why the otherwise fabulous tools fall short is how they deal with interlacing.

Interlacing is a technique dating back to the early days of television that allows for the appearance of 60fps motion within the bandwidth of a 30fps signal. This is achieved by using "fields", each of which holds only half the resolution of a full progressive frame: one field carries every other horizontal line of the image, and the next field fills in the lines the first one left blank. Because the two fields are interlaced together, the viewer's eye and brain combine them, giving you roughly the appearance of 60 frames per second (fps) video. Because of the reduction in signal bandwidth, interlaced video has been used by TV broadcasts ever since (yup, even in HD). For a slightly clearer example, here's a section of two fields combined together with a fast-moving object as the subject (in motion, these combing artifacts are usually not as noticeable):

Once we moved into the age of digital video editing, interlacing made everything more complicated. Since digital video devices didn't want to show individual jagged-looking fields, they combined (or deinterlaced) them into discrete frames, then based timecode standards around 30 frames per second (technically 29.97 fps for color NTSC video) rather than 60 fields per second (technically 59.94 fps). In order to work with broadcast video devices and timecode, computer editing/compositing/etc. devices and programs had to follow the same standards, but still be able to output either progressive or interlaced video at the end.

The bottom line is this: in order to upscale (most) SD video, it first needs to be deinterlaced.

I'm greatly oversimplifying, but deinterlacing is commonly done in one of three ways:

1. Double the lines in each field to fill in the gaps, then treat each of the line-doubled fields as individual frames. With this method, you end up with less overall resolution, but it's quick and easy. In the old days, they used to make "line doubler" devices that would do this sort of thing for high-end TVs and such. Depending on the algorithm, the process might also "decimate" the framerate to 30fps in order to avoid twitchy artifacts from constantly switching between "upper" and "lower" fields.

2. Combine every 2 fields together into a single frame via an image processing algorithm like Yadif. This gives you better frame resolution, but still decimates the framerate to 30fps and can look like a Photoshop "artistic" filter. This might be a good thing; it gives a slightly more "filmic" look and saves on video filesize. After Effects uses a somewhat similar process if you check the "Preserve Edges" checkbox in the Interpret Footage right-click menu of a clip. A better way (in my opinion) involves using VirtualDub to perform the deinterlacing and upscaling. This is the workflow I've used in the past.

3. Use complex algorithms to look at detail from several fields at a time to create interpolated frames at 60fps. This retains both the full detail and the full motion of the original video, but... while consumer TVs do an okay job with this, in the pro editing world it's either been done by very expensive dedicated hardware devices (like the Teranex) or moderately-to-somewhat-pricey software plugins (like FieldsKit or Tachyon) that still require a fair amount of fiddling to get working properly. It can also result in minor-to-moderate "ghosting" artifacts. To be fair, proper frame interpolation is not a trivial process, and the above solutions do a great job.

I assumed option 3 was basically out of my reach - the best "double framerate" deinterlace option in VirtualDub can use Yadif, but has the issues of a 60fps line doubler conversion, and my budget hasn't allowed me to purchase any of the commercial solutions.

So, I gave up for a while. Then, a new project came along from the same client for upscaling some more SD footage. Since I already use AVISynth scripts to load Quicktime files into VirtualDub, and I've seen some great inverse telecine (aka IVTC, the process of removing the redundant frames from 24p video that has been transferred to 60i video) plugins, I decided to check out deinterlace filters for AVISynth again. That's when I found out about QTGMC, an AVISynth plugin that does pretty much everything I want it to, and it's free.

Unfortunately, it has some drawbacks.

First, let me tell you about AVISynth and VirtualDub. Both of these programs were developed as open-source video processing tools in the early 2000's.

VirtualDub is kind of like a video swiss army knife - it uses built-in filters to do everything from resizing a video to replacing audio, sharpening, and even some visual effects. The downside with VirtualDub is that by default it only loads and saves files in an .AVI container, which significantly limits the number of codecs that it supports. There have been plugins developed that allow it to load a number of other container formats, but the plugins don't always work properly or continue to be supported as new codecs are released. However, if you combine VirtualDub with AVISynth, you can read almost any video codec that's ever been released.

AVISynth is probably the oddest video tool I've ever used. It's not a standalone program per se, and it has no interface. It's a scripting language that gives instructions on how to process video using a frameserver. This means you have to write a text file with a list of instructions, then load that text file into a separate program that can communicate with AVISynth's frameserver. VirtualDub is one such program. AVISynth's syntax can be confusing, arcane, and not terribly user-friendly. However, it can do all the video processing tasks of VirtualDub and more, and do so before the video is displayed in VirtualDub. It also has a truly staggering number of plugins developed for it, and some of them rival commercial programs in their functionality.

Now, back to QTGMC. QTGMC is a plugin that uses other AVISynth plugins to perform frame interpolation and deinterlacing. I won't attempt to explain the details; suffice it to say it has a huge number of variables and settings... but it works. It really, really works. You can use the combination of AVISynth with QTGMC plus VirtualDub to turn SD interlaced footage into 60fps HD footage.

Unfortunately, there are a few problems. Remember when I mentioned when these tools were originally developed? By default, neither VirtualDub nor AVISynth is multithreaded, which means they don't take advantage of modern multi-core processors. They're also 32-bit apps. There are technically 64-bit versions of VirtualDub and AVISynth, but they lack the plugin support of the 32-bit versions. The attempt to "fork" AVISynth for proper 64-bit support (known as AVISynth+) doesn't appear to support multithreading natively, either.

Now, there is a replacement library for AVISynth that enables multithreading support; in fact, QTGMC basically requires it to perform properly. It's not, however, what you would call stable. AVISynth is prone to crashing or simply stopping mid-render if you've set something it doesn't like, and those settings may be different depending on your hardware, the versions of the plugins you're using, etc. etc. etc.

Currently, I'm having trouble getting QTGMC to render beyond about 15 minutes of footage. I got to that point by gradually lowering the number after "EdiThreads" from 6 to 1. I'll keep playing with settings to see if there's a magic combination that will work.
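
For reference, that setting is just a parameter on the QTGMC call itself, something like:

QTGMC(preset="Slower", EdiThreads=1)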

I should mention that there is one other option: a new ground-up rewrite of AVISynth called VapourSynth. It's 64-bit native and supports multithreading "out of the box". It also uses an entirely different scripting syntax because it's both written in and uses Python. It can now load AVISynth plugins, but you still have to learn a whole new scripting language to use them.

Stay tuned for part 2, where I reveal the results of my AVISynth experimentation and see if I'm going to be willing to try VapourSynth or not.

Tuesday, July 30, 2013

Grading Workflow update 7-30-2013

Since my last entry, I've upgraded to the "CC" editions of all my Adobe apps... except for Encore CS6. CC has no new version of Encore, nor is it installed by default. Thankfully, I caught this issue before uninstalling Encore (I've found out subsequently that you can install it again, but you have to re-enable the CS6 version of Premiere Pro in Adobe CC).

Unfortunately, one additional wrinkle of the new CC Premiere Pro is that my old copy of Cineform NEO no longer works with it. This is not a total loss, however, because after trying my previous color grading workflow and finding that it simply took too much time to render the individual clips for a long-form project, I've decided to take an entirely new approach.

The new version of Premiere Pro has SpeedGrade's Lumetri Deep Color engine built in, and as a result, you can create "looks" in SpeedGrade that can be imported into Premiere Pro and used as filters (sort of like Magic Bullet Quick Looks). It would be awesome if you could actually adjust these looks in Premiere Pro, but I'll take what I can get.

So, my current workflow is:

  1. Do a "Send to Adobe Speedgrade" of each of the sequences in my project
  2. Grade those sequences in Speedgrade.
  3. Save the grades as individual "looks".
  4. Transfer the look files to a looks sub-folder in my project's main footage folder. If you're on Windows, Speedgrade's custom look files are stored in: C:\Users\Owner\AppData\Roaming\Adobe\SpeedGrade\7.0\settings\looks  (I highly recommend creating a desktop shortcut to the folder so you can get back to it easily).
  5. Apply the looks individually to the respective clips.

If you don't have a bunch of hard drive space to work with, you can just do the "Send to Adobe Speedgrade" for one sequence at a time, but it's handy to have the Speedgrade sequences available if you need to adjust one or more of the looks.

The only issue I've run into so far is on the project's sizzler reel, where rendering to DVD occasionally will produce a twitchy white bar on the right side of some of the clips with the "looks" applied to them. I'm still trying to track down the issue, but thankfully, there's an easy solution - render out to a full-res format (I use Uncompressed 10-bit YUV Quicktime) first, and then use that .mov to render/encode the DVD files in Adobe Media Encoder.