Saturday, April 15, 2017

An SD interlaced to HD progressive Conversion Tutorial, or A Long-delayed Video

After all the time (see here and here) I've put into getting the AVISynth+QTGMC+FFMPEG deinterlacing workflow working properly on my system, I figured it was about time for a video tutorial. In my spare time, I've been trying to record one.

It's taken a lot longer than I thought.

Partly, this was due to finding new tricks/plugins that improved the process. Partly, it was due to finding unexpected bugs.

However, the main delay came from me not wanting to do just a "do this, then this" tutorial where you come away with no context whatsoever about what's actually happening. I want people who watch the video to get a good idea of what AVISynth is, and what FFMPEG brings to the process.

So, I'd start recording a tutorial, then get caught up in a tangent. An hour and a half later, I still hadn't finished the video.

Plus, I make mistakes. There's a lot of material to cover, and I found myself leaving out crucial points (or ones that clear up confusing areas). I could have tried writing out a script, or just patching over my mistakes. However, it turned out that just repeating the actions so many times that you can practically do them in your sleep eventually got me to the right place. I still had some audio issues to address in post, but my overall presentation quality is much improved from earlier attempts.

I could try adding more production values than just a screencast. But at a certain point, you just have to release a project. So, here without further ado is my first full-length tutorial. Because the total video is around 45 minutes, I will probably add chapter links on the video at a later point. For now, though, here is my work. Constructive criticism is welcome.

Here's a written description of the process, including links:

Friday, April 14, 2017

Deinterlacing HD footage without losing significant quality - Part 1

After my last post, I played around with a bunch of different settings, and discovered that plain old FFMPEG can do decent deinterlacing. One of the reasons I looked into this is that QTGMC (my AVISynth deinterlacing plugin of choice) is ridiculously slow and prone to crashing when processing HD footage.

The problem with FFMPEG is that its documentation is enormous, and while it sometimes includes clear examples to show you how to use certain features, other times it's simply assumed that you're technically savvy enough to know what they're talking about. For example, trying to figure out the best method for deinterlacing involved me doing Google searches for each of the various methods. Normally, something like that would give results pretty quickly, but most of the substantive discussions were from at least five to nine (!) years ago or related to programs that use FFMPEG as a back-end engine, rather than as a standalone app. Even after all that, it wasn't immediately obvious which method would be superior, so I decided to run a few tests.

Method 1: kerndeint - a relatively simple deinterlacing method that automatically halves the framerate and is roughly on par with the default After Effects / Premiere Pro deinterlacer.

Method 2: w3fdif - Developed by the BBC, this is much closer to what I was looking for. Unfortunately it was originally designed for interlaced standard definition PAL content, and as such somehow does not have a mechanism for selecting Top Field First field order. It did successfully deinterlace the footage, but ended up looking herky-jerky as a result of the field order mismatch.
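If you do want to experiment with w3fdif on Top Field First material, one possible workaround (which I haven't tested thoroughly) is to stamp the field order onto the frames first with FFMPEG's setfield filter, since w3fdif goes by each frame's field-order flag:

```
ffmpeg -i "input_video.avi" -vf "setfield=tff,w3fdif" "output_video.mp4"
```

The filenames here are just placeholders; add your usual encoding options as needed.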

Method 3: nnedi - The method that QTGMC was designed to incorporate/replace. In theory, it does many of the same things, but to say that trying to figure out the options was confusing is an understatement.

Method 4: bwdif - To quote the FFMPEG documentation: “Motion adaptive deinterlacing based on yadif with the use of w3fdif and cubic interpolation algorithms”. This turned out to be the perfect solution. The default settings automatically detect field order (if it's set in the file's metadata) and do not introduce weird image artifacts, but still give decent deinterlacing. Oh, and it runs significantly faster than AVISynth+QTGMC. Like, 6-8 times as fast.

Now, there are some downsides to bwdif. QTGMC can introduce some ghosting artifacts (including misaligned chroma artifacts), but it can also pull more detail out of the original image. This makes bwdif a sub-par solution for SD interlaced to HD progressive upconversion. FFMPEG also doesn't have the (relatively) easy-to-read syntax of an AVISynth script, and definitely lacks the versatility of the insane range of plugins available for the latter.

As always, here’s an example of the command-line options so you can run this process on your system. Note that you have to use -vf before naming the filter in quotes:

ffmpeg -i "input_video.avi" -vf "bwdif" -c:v libx264 -preset slow -crf 18 -pix_fmt yuv420p -c:a aac -b:a 320k "output_video.mp4" 

This is what I used to deinterlace a 60i HD file and re-encode it for YouTube. For ProRes encoding, see my previous post.
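One caveat: bwdif's auto-detection only works if the field order is actually flagged in the file. For footage with missing or wrong metadata, you can set the parity explicitly (tff below is just an example; match it to your actual footage):

```
ffmpeg -i "input_video.avi" -vf "bwdif=mode=send_field:parity=tff" -c:v libx264 -preset slow -crf 18 -pix_fmt yuv420p -c:a aac -b:a 320k "output_video.mp4"
```

mode=send_field outputs one frame per field (keeping the double framerate); mode=send_frame would halve it instead.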

Friday, June 10, 2016

The Odyssey of trying to upscale interlaced SD to HD without losing quality: Part 2

You might want to read the first post in this series, because I'm not going to explain interlacing or what the various programs do again.

Since Part 1, I've discovered a few things:

VapourSynth currently requires the same plugins as AVISynth in order to run QTGMC, with the same stability issues and limitations. I'm trying to learn Python at the moment anyway, though, so I might revisit it in the future and see if someone has ported the process natively in a way that supports multithreading without hacks.
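For the curious, the VapourSynth port of QTGMC lives in the havsfunc script collection. A minimal script looks something like this - note that the source plugin, filename, and TFF setting are placeholders, and I haven't vetted this beyond a quick look:

```
import vapoursynth as vs
import havsfunc as haf  # the QTGMC port lives in here

core = vs.get_core()
clip = core.ffms2.Source("input_video.avi")  # assumes the ffms2 source plugin is installed
clip = haf.QTGMC(clip, Preset='Slower', TFF=True)  # TFF=True for Top Field First footage
clip.set_output()
```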

Setting the topmost SetMTMode in my AVISynth script to (5, 10) versus (5, 12) appears to have solved my remaining stability issues. I'll be posting my full script down below.

I discovered AvsPmod, a program that loads AVISynth scripts with syntax highlighting and video preview. It makes adjusting scripts a heck of a lot easier. Because of AvsPmod, I don't need to use VirtualDub to check my work anymore.

I also discovered that I don't need VirtualDub to render the AVISynth scripts, because FFMPEG can load them directly. That means I can convert .avs scripts directly to any format that FFMPEG supports, including ProRes. This is awesome, because I can write .bat files that include all the settings I want for a particular codec/container/etc. All I have to do is change the input and output file names and double-click the .bat file. Also, while ProRes is awesome, I don't actually need to use an intermediate codec - I can render straight to a YouTube-friendly H.264 .mp4 file if I want without the (admittedly minor) quality loss of the extra step.
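For example, a minimal .bat for a ProRes render might look like this (the filenames are placeholders - edit them for each job):

```
@echo off
rem sketch of a ProRes 422HQ render - swap in your own filenames
ffmpeg -i "input_video.avs" -c:v prores_ks -profile:v 3 -qscale:v 9 -pix_fmt yuv422p10le "output_video.mov"
pause
```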

According to a few TV editors I asked, FFMPEG should probably not be used to render a ProRes deliverable for broadcast TV. Apparently, the implementation of ProRes is not recognized by Apple, and might be rejected by QC because of differences in embedded metadata. Not a problem for my current work, but if I did need to generate a file for broadcast, I would probably want to get a cheap Mac Mini or a month-long subscription to Scratch to make "official" ProRes deliverables. Incidentally, if I could afford to get a permanent license for Scratch, I would do it. Even with its oddball interface, it's still far and away the most responsive grading/compositing program I've ever used.

BIG DISCLAIMER: This process may not work, may crash, or do other things to your system. 

You have been warned. 

If you're on a deadline (and using Premiere Pro, After Effects, or Final Cut Pro), your best bet is probably a paid plugin like FieldsKit.

Here's my .avs script settings for QTGMC deinterlacing:

SetMTMode (5, 10)
QTInput ("", audio=1)
SetMTMode (2)  # switch to mode 2 before the filtering stage
QTGMC(preset="Slower", SourceMatch=3, Lossless=2, EdiThreads=1)
Distributor()  # required at the end of a multithreaded script

I've also found an awesome conservative sharpening filter that can be added at the end for a little extra punch:
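That filter is LimitedSharpen (linked below). I'm not going to swear by any particular settings, but the default call is just one more line at the end of the script:

```
LimitedSharpen()  # defaults are conservative; it has plenty of tweakable parameters
```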


So, where do you go to get all this goodness? Here's some web links:

AVISynth (You will need both the 32-bit Official Build and the 32-bit Unofficial Build)

AVISynth source filters (Get the source filter for the format/codec you want to load. In the above script, I use QTInput.)

QTGMC (Get the "Plugin Package for multithreading".)

LimitedSharpen (Optional, note that it requires RGTools to run. Get the x86 version of the latter in order for it to work within a script using QTGMC. )

FFMPEG Windows Binaries (Get the 32-bit static version)


  1. Install AVISynth.
  2. Put the AVISynth filters in the AVISynth Plugins directory.
  3. Copy the multithreaded build of avisynth.dll to your system folder, replacing the existing file.
  4. Copy the system .dlls from the QTGMC package to your system folder.
  5. Extract FFMPEG and AvsPmod to their own folders.
  6. Set up FFMPEG to run from any directory on your PC by adding it to your PATH variable: (skip to where it says "Windows Vista and Windows 7 users:")
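If you'd rather set the PATH from a command prompt instead of clicking through the System Properties dialogs, something like this should work - it assumes you extracted FFMPEG to C:\ffmpeg, so adjust the path to match your setup:

```
rem appends the FFMPEG folder to your user PATH (takes effect in newly-opened command prompts)
setx PATH "%PATH%;C:\ffmpeg\bin"
```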

In the same directory as the video file you want to process, make an .avs script with the settings I listed above, changing the filename, source filter, and crop settings as necessary. Loading this script in AvsPmod will let you preview your results and adjust them to your preference. When you're done, don't forget to save your work.

Make a .bat file with the FFMPEG commands of your choice. For example, here's a command to encode an .avs script to a ProRes 422HQ Quicktime file:

ffmpeg -i "videofile.avs" -c:v prores_ks -profile:v 3 -qscale:v 9 -pix_fmt yuv422p10le ""

Change where necessary. The quotes around the filenames allow you to enter filenames with spaces in them.

I'll post a video soon with a full tutorial and more examples.

Monday, March 21, 2016

The Odyssey of trying to upscale interlaced SD to HD without losing quality: Part 1

In the past, I did some work involving upscaling a letterboxed standard definition (SD) show to HD for online streaming video. As it turned out, the built-in After Effects and Premiere Pro resizing tools are not the best for this sort of scenario.

The reason why the otherwise fabulous tools fall short is how they deal with interlacing.

Interlacing is a technique dating back to the early days of television that allows for the appearance of 60fps motion within the bandwidth of a 30fps signal. It works by using "fields", each of which contains only half the lines of a full progressive frame: one field fills every other horizontal line, then the next field fills the lines the first one left blank. Because the two fields are interlaced together, the viewer's eye and brain combine them, giving roughly the appearance of 60 frames per second (fps) video. Thanks to that reduction in signal bandwidth, interlaced video has been used by TV broadcasts ever since (yup, even in HD). For a slightly clearer example, here's a section of two fields combined together with a fast-moving object as the subject (in motion, these combing artifacts are usually not as noticeable):

Once we moved into the age of digital video editing, interlacing made everything more complicated. Since digital video devices didn't want to show individual jagged-looking fields, they combined (or deinterlaced) them into discrete frames, then based timecode standards around 30 frames per second (technically 29.97 fps for color NTSC video) rather than 60 fields per second (technically 59.94 fps). In order to work with broadcast video devices and timecode, computer editing/compositing/etc. devices and programs had to follow the same standards, but still be able to output either progressive or interlaced video at the end.

The bottom line is this: in order to upscale (most) SD video, it first needs to be deinterlaced.

I'm greatly oversimplifying, but deinterlacing is commonly done in one of three ways:

1. Double the lines in each field to fill in the gaps, then treat each of the line-doubled fields as individual frames. With this method, you end up with less overall resolution, but it's quick and easy. In the old days, they used to make "line doubler" devices that would do this sort of thing for high-end TVs and such. Depending on the algorithm, the process might also "decimate" the framerate to 30fps in order to avoid twitchy artifacts from constantly switching between "upper" and "lower" fields.

2. Combine every 2 fields together into a single frame via an image processing algorithm like Yadif. This gives you better frame resolution, but still decimates the framerate to 30fps and can look like a Photoshop "artistic" filter. This might be a good thing; it gives a slightly more "filmic" look and saves on video filesize. After Effects uses a somewhat similar process if you check the "Preserve Edges" checkbox in the Interpret Footage right-click menu of a clip. A better way (in my opinion) involves using VirtualDub to perform the deinterlacing and upscaling. This is the workflow I've used in the past.

3. Use complex algorithms to look at detail from several fields at a time to create interpolated frames at 60fps. This retains both the full detail and the full motion of the original video, but... while consumer TVs do an okay job with this, in the pro editing world it's either been done by very expensive dedicated hardware devices (like the Teranex) or moderately-to-somewhat-pricey software plugins (like FieldsKit or Tachyon) that still require a fair amount of fiddling to get working properly. It can also result in minor-to-moderate "ghosting" artifacts. To be fair, proper frame interpolation is not a trivial process, and the above solutions do a great job.
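Incidentally, FFMPEG's yadif filter illustrates the halved-versus-doubled framerate distinction nicely - the same algorithm, two output modes (filenames here are placeholders):

```
ffmpeg -i "input_video.avi" -vf "yadif=mode=send_frame" "out_30p.mp4"
ffmpeg -i "input_video.avi" -vf "yadif=mode=send_field" "out_60p.mp4"
```

send_frame merges each field pair into one frame (30fps out), while send_field outputs one interpolated frame per field (60fps out), though without the motion analysis of the heavyweight method 3 tools.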

I assumed option 3 was basically out of my reach - the best "double framerate" deinterlace option in VirtualDub can use Yadif, but has the issues of a 60fps line doubler conversion, and my budget hasn't allowed me to purchase any of the commercial solutions.

So, I gave up for a while. Then, a new project came along from the same client for upscaling some more SD footage. Since I already use AVISynth scripts to load Quicktime files into VirtualDub, and I've seen some great inverse telecine (aka IVTC, the process of removing the redundant fields added when 24p video is transferred to 60i) plugins, I decided to check out deinterlace filters for AVISynth again. That's when I found out about QTGMC, an AVISynth plugin that does pretty much everything I want it to, and it's free.
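As an aside, the go-to IVTC plugin in AVISynth land is TIVTC, which handles that job with a two-line idiom (the defaults shown here are just a starting point, and this is separate from the QTGMC deinterlacing workflow):

```
TFM()        # match the fields back into complete film frames
TDecimate()  # drop the duplicate frames, taking 29.97fps back down to 23.976fps
```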

Unfortunately, it has some drawbacks.

First, let me tell you about AVISynth and VirtualDub. Both of these programs were developed as open-source video processing tools in the early 2000's.

VirtualDub is kind of like a video swiss army knife - it uses built-in filters to do everything from resizing a video to replacing audio, sharpening, and even some visual effects. The downside with VirtualDub is that by default it only loads and saves files in an .AVI container, which significantly limits the number of codecs that it supports. There have been plugins developed that allow it to load a number of other container formats, but the plugins don't always work properly or continue to be supported as new codecs are released. However, if you combine VirtualDub with AVISynth, you can read almost any video codec that's ever been released.

AVISynth is probably the oddest video tool I've ever used. It's not a standalone program per se, and it has no interface. It's a scripting language that gives instructions on how to process video using a frameserver. This means you have to write a text file with a list of instructions, then load that text file into a separate program that can communicate with AVISynth's frameserver. VirtualDub is one such program. AVISynth's syntax can be confusing, arcane, and not terribly user-friendly. However, it can do all the video processing tasks of VirtualDub and more, and do so before the video is displayed in VirtualDub. It also has a truly staggering number of plugins developed for it, and some of them rival commercial programs in their functionality.

Now, back to QTGMC. QTGMC is a plugin that uses other AVISynth plugins to perform frame interpolation and deinterlacing. I won't attempt to explain the details; suffice it to say it has a huge number of variables and settings... but it works. It really, really works. You can use the combination of AVISynth with QTGMC plus VirtualDub to turn SD interlaced footage into 60fps HD footage.

Unfortunately, there are a few problems. Remember when I mentioned that these tools were originally developed in the early 2000's? By default, neither VirtualDub nor AVISynth is multithreaded, which means they don't take advantage of modern multi-core processors. They're also 32-bit apps. There are technically 64-bit versions of VirtualDub and AVISynth, but they lack the plugin support of the 32-bit versions. The attempt to "fork" AVISynth for proper 64-bit support (known as AVISynth+) doesn't appear to support multithreading natively, either.

Now, there is a replacement library for AVISynth that enables multithreading support; in fact, QTGMC basically requires it to perform properly. It's not, however, what you would call stable. AVISynth is prone to crashing or simply stopping mid-render if you've set something it doesn't like, and those settings may be different depending on your hardware, the versions of the plugins you're using, etc. etc. etc.

Currently, I'm having trouble getting QTGMC to render beyond about 15 minutes of footage. I got to that point by gradually lowering the number after "EdiThreads" from 6 to 1. I will keep playing with settings to see if there's a magic combination that will work.

I should mention that there is one other option: a new ground-up rewrite of AVISynth called VapourSynth. It's 64-bit native and supports multithreading "out of the box". It also uses an entirely different scripting syntax because it's both written in and uses Python. It can now load AVISynth plugins, but you still have to learn a whole new scripting language to use them.

Stay tuned for part 2, where I reveal the results of my AVISynth experimentation and see if I'm going to be willing to try VapourSynth or not.

Tuesday, July 30, 2013

Grading Workflow update 7-30-2013

Since my last entry, I've upgraded to the "CC" editions of all my Adobe apps... except for Encore CS6. CC has no new version of Encore, nor is it installed by default. Thankfully, I caught this issue before uninstalling Encore (I've found out subsequently that you can install it again, but you have to re-enable the CS6 version of Premiere Pro in Adobe CC).

Unfortunately, one additional wrinkle of the new CC Premiere Pro is that my old copy of Cineform NEO no longer works with it. This is not a total loss, however, because after trying my previous color grading workflow and finding that it simply took too much time to render the individual clips for a long-form project, I've decided to take an entirely new approach.

The new version of Premiere Pro has SpeedGrade's Lumetri Deep Color engine built in, and as a result, you can create "looks" in Speedgrade that can be imported into Premiere Pro and used as filters (Sort of like Magic Bullet Quick Looks). It would be awesome if you could actually adjust these looks in Premiere Pro, but I'll take what I can get.

So, my current workflow is:

  1. Do a "Send to Adobe Speedgrade" of each of the sequences in my project
  2. Grade those sequences in Speedgrade.
  3. Save the grades as individual "looks".
  4. Transfer the look files to a looks sub-folder in my project's main footage folder. If you're on Windows, Speedgrade's custom look files are stored in: C:\Users\Owner\AppData\Roaming\Adobe\SpeedGrade\7.0\settings\looks  (I highly recommend creating a desktop shortcut to the folder so you can get back to it easily).
  5. Apply the looks individually to the respective clips.

If you don't have a bunch of hard drive space to work with, you can just do the "Send to Adobe Speedgrade" for one sequence at a time, but it's handy to have the Speedgrade sequences available if you need to adjust one or more of the looks.

The only issue I've run into so far is on the project's sizzler reel, where rendering to DVD occasionally will produce a twitchy white bar on the right side of some of the clips with the "looks" applied to them. I'm still trying to track down the issue, but thankfully, there's an easy solution - render out to a full-res format (I use Uncompressed 10-bit YUV Quicktime) first, and then use that .mov to render/encode the DVD files in Adobe Media Encoder.

Thursday, April 11, 2013

CS6 and DaVinci Resolve workflow update

Here's what I've come up with as a workflow to edit and finish in Premiere Pro, but color grade in Resolve Lite:
  1. Edit project in Premiere Pro.
  2. When finished, consolidate project to new folder and/or drive.
  3. Import all used footage in new folder into After Effects as separate clips. If you can trim the clips with handles on the sides, even better.
  4. De-noise/sharpen clips with Neat Video. Render all clips out (separately) to new "Ungraded" folder. This takes approximately 6x real time on my system.
  5. Import clips from "ungraded" folder into new Resolve project.
  6. Grade clips in Resolve Lite.
  7. Export clips (again, as separate clips) to new "Graded" folder. Make sure settings and naming match those of "Ungraded" clips. If the "Ungraded" clips have audio, remember to render out "Graded" clips with audio.
  8. Make new Premiere Pro project, import old project into it. Save project and Close Premiere Pro before next step.
  9. Move "Ungraded" folder to different directory.
  10. Open copied Premiere Pro project.
  11. Link files to "Graded" clips.
  12. Save project, render to appropriate format(s).
I alleviate some of the storage concerns by rendering out to Cineform Film Scan 1 444 instead of uncompressed video, but the render time for denoising all that footage is absolutely nuts. Even after consolidating my project in Premiere Pro, it would end up taking me over a week of rendering (12 hour days) to get all the footage prepared for my latest project. This is mainly because the project manager trim footage option doesn't appear to work for DSLR footage. Oh well.

Even with this workflow, Resolve still has some issues. It can be very finicky when it comes to what footage it will actually import:
  • I had to re-render some Cineform transcodes twice to get Resolve to see them. No idea why.
  • Resolve does not import most flavors of .AVI files, so I had to re-wrap my Cineform .AVIs to Quicktime files (Can be done in Cineform's own HDLink program, but only for Cineform files). 
  • Rendering the original footage to uncompressed Quicktimes files appears to alleviate some of the import issues, but comes with a huge filesize increase.
  • When rendering the final graded clips out of Resolve, make sure to go to the timeline in the "Deliver" panel and right-click above the clips so you can "Select All". Otherwise, you might end up rendering one clip and banging your head on your desk in frustration.
  • Make sure you render to the same bit-depth that you work in, or your luminance values will be screwed up.
So what can I do if I can't use the above workflow? My current solution is to clean up my sequences in Premiere Pro so they're all on one track (one video track and one audio track) with only cuts, speed changes and cross dissolves. Then, I  import the project into After Effects to denoise/do a basic grade. I think in the future, I'll see if I can use just Premiere Pro plugins to do all this stuff instead of having to roundtrip or finish in another program. Or I could just shoot footage with a better camera/codec so that I don't have to go through the denoising step.


Just for the heck of it, I also tried messing with the footage using the ACES colorspace. 

ACES is basically a super wide gamut color space that is designed to encompass all other color spaces. It works by selecting a premade LUT for the input source (camera model, film, etc.), a working LUT (don't ask), and a display/output LUT to make sure the device/format you're outputting to displays the footage properly.

That's the theory. The reality is that my low-gamut DSLR footage ended up looking like crud when imported, and I couldn't figure out how to adjust the grade to fix it. I will investigate more later, possibly with footage from a better camera.

Thursday, November 22, 2012

Adobe CS6 and DaVinci Resolve - Update #3

Okay, so since my last post, I've found out a few more things:

  • A guy named John Schultz has created an RGB Curves preset for Premiere Pro that acts as a LUT for the Technicolor Cinestyle picture profile. It looks great, doesn't hog resources, and is GPU accelerated for realtime playback and fast rendering.
  • When I tried importing a timeline from Premiere Pro into DaVinci Resolve Lite (as an exported .XML) and doing a rough color grade, Resolve added a few random black frames to the footage. I didn't notice them until I rendered out the graded footage. I went back in to Resolve and confirmed the problem appears on the timeline... but not consistently. It could be a framerate mismatch at some point in the importing process, but I haven't been able to figure it out yet.
  • Resolve Lite doesn't like anything other than cuts and dissolves in an XML import of an edit. In my limited testing, any error or missing clip will cause the offending clip to be replaced by another clip - usually the same clip for all errors.
  • The whole color grading process has turned out to be a lot more work than I expected. The grading itself is fun, but the process of getting a project into either SpeedGrade or Resolve is a counter-intuitive pain in the butt. I can see why Adobe encourages you to render out a project to .DPX before importing it into SpeedGrade.
  • Resolve Lite has some quirks, like needing to load config presets twice to get the settings to load properly.
  • My massive Premiere Pro project file for the reality show pilot I'm working on freezes After Effects if I try to import it into the latter. I'll probably need to use a trimmed-down version for the final conform.
  • If you need a file that will play back on a lower-end PC, a DV-Widescreen .WMV file appears to work quite well. It's especially good for dailies. If you know a good Mac equivalent, feel free to leave a comment. 
  • Adobe Media Encoder is friggin' awesome. You can queue up multiple jobs, different settings, and don't need to leave Premiere Pro open once a project is queued up to render.
  • Adobe Prelude is pretty good for ingesting DSLR footage, although there are a lot of features I wish it had, such as:
    • Renaming files before/as they're being ingested with custom auto-increment options (e.g.: "" could be automatically changed to something like "SGP Test Shoot - 10-23-2012 - Camera A - Shot".)
    • Being able to ingest audio files without kicking up an error message.
    • Automatic syncing of dual-system/multi-cam footage (maybe via a Pluraleyes plugin, just like Premiere Pro?)