Sunday, April 26, 2026

DVD conversion workflow update

Hey folks, it's been a while. 

Since the last time I posted, I started a job teaching film and video classes at a pretty awesome local college. Also, nerve issues with my hands have prevented me from doing a lot of extracurricular computer activity (I'm currently using speech to text to write most of this). 

As part of my job, I sometimes need to get clips from various DVDs for educational reference. Since I've done this a number of times now, I figured it might be worth an update. I do have a previous post covering this process, but it's now almost 8 years old and a lot has changed, so I decided to rework it based on everything I've learned since then. If you notice that I've gotten anything wrong, please let me know in the comments and I'll update the post (and probably give you credit as well).

BIG DISCLAIMER: This process may not work, may crash, or may do other things to your system. Virus scan everything you download. It's not a 100% guarantee that you'll avoid getting a malware infection, but it's a lot better than not checking at all.

In this case, please also follow the copyright laws of your country, and be aware of any anti-circumvention (DRM) laws.

Follow this tutorial at your own risk. 

Also, this tutorial is for Windows 11. It might work for previous versions of Windows, but may require slight modifications. Users of MacOS and other OSes should look elsewhere.

First up, I'm going to assume that you've already set up AVISynth+ and VirtualDub2. If not, check out my previous post.

Briefly, the steps for this workflow are as follows:

  • Copy over the DVD's files (if DVD is unencrypted) or use a decryption program and point the output to a directory on a local drive.
  • Determine what title you want to convert.
  • Open up the appropriate .vob files in DGIndex. Save project and demux audio.
  • Convert audio to .wav using FFMPEG, Shutter Encoder or Audacity with FFMPEG add-on.
  • Create AVISynth script
    • Load DGIndex project file
    • Load audio
    • Mix audio and video
    • Perform any additional video processing
  • Render out avs script to desired format using your rendering program of choice.

Grab DGMPGDec:

http://rationalqm.us/dgmpgdec/dgmpgdec.html

Grab TIVTC for telecined (film) content:


And of course you can use QTGMC for any actually interlaced content.

Virus scan all downloaded files. 

Extract the DGMPGDec archive into a directory on your system, maybe named "DGMPGDec". Copy the DGDecode.dll file to your AVISynth+ plugins64+ directory (as of version 3.x, DGMPGDec no longer supports 32-bit, so if you need that for some reason, use an earlier release and put the DLL in the plugins+ folder).

Create a folder somewhere on one of your drives for the DVD files, named whatever you want. Put a DVD in your drive.

If the DVD isn't encrypted, just copy over the VIDEO_TS folder or the .VOB files contained within to your new folder.

If it is encrypted, you'll need to use a decryption program of some sort. Due to the various legal issues involved with DVD ripping programs, I'm a little reluctant to link to any specific ones, and I would caution you to be careful: in addition to my warnings above, semi-to-fully-illegal software is a common target for malware authors, who try to inject code into the programs or compromise their websites.

Regardless, once you have your unencrypted .VOB files copied over, you'll need to find the title you want to transcode.

Often, this is made up of the largest similarly-named files on the disc, but if you don't know what title contains your desired content, use VLC to play back the .VOB files until you find the correct one.

When you know what title you want, open DGIndex from your DGMPGDec folder. Go to the Audio menu, select Output Method and then Demux All Tracks. Go to the File menu, then select Open. Browse to the folder with your DVD files and select all the .VOB files in a particular title. They will usually be named VTS_(title number)_(part number).VOB. For example:


Notice that I don't select the "_0.VOB" files - they're not needed in this case. Click Open, and then you'll get a "File List" window. Confirm that you have all the files you need, or add any you missed. When finished, click OK.

Drag around the playback bar to make sure the entire title is present. If you'd like to get a sense of what DGIndex thinks your video's framerate/aspect ratio is, hit the F5 key and preview it for a minute or so (you can stop playback with the ESC key). When satisfied, go to the File menu and select Save Project. I generally just save it in the same directory as the DVD files, and will proceed as if you have done so.

One thing to look out for: if you get a box like this:


Generally, I will click "No". However, if the resulting project file gives you an "access violation" error during preview or encoding later in the process, then you might need to go back, re-save the project file, and click "Yes" this time. (Thanks to C. Sandor for reminding me about this.)

Anyways, Save Project will create a .d2v file and at least one audio track in the same directory.

If you'd like, you can convert the audio file to a .wav file ahead of time using Shutter Encoder, or with a simple FFMPEG command like this:

ffmpeg -i "audio.ac3" -c:a pcm_s16le "audio.wav" 

The resulting file should work in editing programs either on its own or muxed with the video file we'll be creating in a moment. If the original file has more than 2 channels, they may not be in the correct order if you just use the command above. If you need to mix down 5.1 surround audio to stereo, a common suggestion that I've used is the -ac 2 option, like so:

ffmpeg -i "audio.ac3" -c:a pcm_s16le -ac 2 "audio.wav"
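If the automatic downmix doesn't sound right (dialogue too quiet relative to the music, for example), you can spell out the mix yourself with FFMPEG's pan filter. The gain values below are just a hypothetical starting point, not a standard - adjust to taste:

ffmpeg -i "audio.ac3" -af "pan=stereo|FL=FC+0.60*FL+0.60*BL|FR=FC+0.60*FR+0.60*BR" -c:a pcm_s16le "audio.wav"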

Alternatively, you can use the NicAudio AVISynth filter to import the AC3 (or DTS) file directly, although I've sometimes run into sync issues when using it.

Create a new text document, then change the extension to .avs. Double-click to open it in Notepad or AvsPmod.

Add the following to the newly-created avs file:

video = MPEG2Source("yourproject.d2v")
audio = WavSource("audio.wav")
AudioDub(video, audio)


If you want to upscale to 720p from an anamorphic widescreen DVD, add a command like this:

Spline64Resize(1280, 720)

If your video is 4:3 aspect ratio, then you can use

Spline64Resize(720, 540)

or

BilinearResize(640, 480)

instead to correct for the pixel aspect ratio difference between NTSC video and modern displays. Spline64 produces a sharper result; Bilinear does not sharpen.
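If your source has black blanking on the edges (common on DVDs), you can crop it off before resizing. The pixel values here are just an example - measure your own source, and keep in mind that cropping changes the effective aspect ratio slightly:

video = MPEG2Source("yourproject.d2v")
audio = WavSource("audio.wav")
AudioDub(video, audio)
Crop(8, 0, -8, 0)
Spline64Resize(720, 540)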


For film-sourced NTSC DVDs (most movies - 23.976 fps content stored as telecined 29.97i), you'll need to perform an "inverse telecine" to extract the progressive frames from the interlaced video. In my experience, it's best here to use the .d2v file to inform TFM what field order to use:

video = MPEG2Source("yourproject.d2v")
audio = WavSource("audio.wav")
AudioDub(video, audio)
TFM(order=-1, d2v="yourproject.d2v")
TDecimate(cycle=1)
Spline64Resize(1280, 720)

For originally interlaced content, you'll need to use QTGMC:
video = MPEG2Source("yourproject.d2v")
audio = WavSource("audio.wav")
AudioDub(video, audio)
AssumeBFF()
QTGMC(Preset="Slower", FPSDivisor=1)
Spline64Resize(720, 540)

FPSDivisor=1, as included in the above command, is the default and does nothing, letting the frame rate be doubled to 59.94 fps. If you change the 1 to 2, however, QTGMC will halve that, resulting in a cleanly derived 29.97 fps. Match the frame rate to the project you're importing the video into, or to the deliverables spec for your ultimate destination.
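For example, if the project you're cutting in is 29.97 fps, the QTGMC line from the script above becomes:

QTGMC(Preset="Slower", FPSDivisor=2)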

If you need to load subtitles, you can do so using the VobSub filter.

        VobSub("movie_track3_[eng].sub")
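Putting it all together, a full script for a film-sourced NTSC title with subtitles might look something like this (file names are placeholders, and this is just one reasonable ordering - here the subtitles are rendered after the inverse telecine and before the upscale):

video = MPEG2Source("yourproject.d2v")
audio = WavSource("audio.wav")
AudioDub(video, audio)
TFM(order=-1, d2v="yourproject.d2v")
TDecimate(cycle=1)
VobSub("movie_track3_[eng].sub")
Spline64Resize(1280, 720)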

When you're ready, you can load your .avs script into VirtualDub2 and test it out. If you don't get any errors, you can try rendering.

If you'd like to use command-line FFMPEG instead, create a new text file in the same directory, then change the extension to .bat. Right-click -> Edit In Notepad to open it, then type something like the following:

ffmpeg -i "your.avs" -c:v prores -profile:v 3 -c:a pcm_s16le "output.mov"

Save the file, then close Notepad.

If you're using a DTS file for audio, then -c:a should be pcm_s24le. If you're using DNxHR/HD for your output codec instead of ProRes, you may need to add a colormatrix conversion to avoid a color shift bug on SD-HD upscales:

ffmpeg -i "your.avs" -vf "colormatrix=bt601:bt709" -c:v dnxhd -profile:v dnxhr_hq -pix_fmt yuv422p -c:a pcm_s16le "output.mov"

If you need more info on colorspace conversions using pix_fmt, check out the ProRes section at the following link:

https://trac.ffmpeg.org/wiki/Encode/VFX

And that's it. Simple, right? ;)

Tuesday, May 4, 2021

Which deinterlacing algorithm is the best? Part 1 - HD interlaced footage

Before I begin, I'd like to give a special thanks to Aleksander Kozak for his help in testing and providing the HD footage used in this post.

Also, this is part 1 of a series of posts. When I've finished part 2, I'll link to it here.


Background

One of the things about working as an editor is that the technology is constantly changing.  There are new pieces of software, plugins, and even hardware that are released on a regular basis, and keeping up with all of it can be challenging.

Roughly 10 years ago, deinterlacing had four general types of solutions: 

- Hardware-based video conversion boxes like the Teranex that are designed to perform resolution and frame rate conversions in real time.  Simple, but not always the best quality, and the hardware is pretty expensive (especially when you consider that you also need both a separate dedicated playback device and a recording device in addition to the conversion box).

- Services like Cinnafilm's Tachyon that do the same sort of job using very complex and proprietary software and hardware on a rental (or on-premises, for a higher fee) basis.  Great results, but very expensive.

- Built-in deinterlacing within video editing programs.  I haven't used either Media Composer's or Final Cut Pro X's implementation, but with everything else I've tried, this is garbage.  Unless your entire project from beginning to end uses interlaced settings, editing programs tend to toss out every other field and line-double the remaining field, which cuts both motion detail and frame detail essentially in half. The only way I've seen to get around this inside an editing program is to use the FieldsKit plugin, which works a bit better than real-time conversion boxes, but not as well as SaaS options like Tachyon.

- Open source video filters.  For a while, your only options here were frame blending and "bob" deinterlacing, neither of which were particularly great.  Most early tools for dealing with interlacing in this space focused more on combining similar fields together - mainly as a result of them being geared towards processing TV captures of anime.  Slightly later methods like Yadif improved on deinterlacing quality, but were still only working on individual fields, using still image interpolation to essentially round off the edges of aliasing after line doubling.

That all changed when QTGMC came along. Like its less-optimized forerunner TempGaussMC (which I've never used), QTGMC is an AVISynth deinterlacing script that uses a combination of several different plugins. It separates out individual fields and tries to use data from surrounding fields to generate full resolution progressive frames from them.  This is very similar to what "motion flow" and other motion interpolation algorithms do on modern TV sets, but focuses on accuracy rather than wow factor.  While it's not trivial to set up, the end results are excellent, combining greater detail with highly configurable noise reduction and other options. In some ways, the results can actually be better than many of the above solutions.

But it's always good to do a reality check periodically - which of these methods works best? I don't have access to a Teranex or the cash to use Tachyon, so I'll be comparing mostly free options and/or ones embedded in sub-$300 programs.


TL:DR

If you don't want to read through the entire post, here's my opinion about each of the options that I currently have access to:

QTGMC: Best retention of original image detail and handling of edge case artifacts, but you have to watch for colorspace issues when working with Adobe programs if you render out using FFMPEG. Also, different options can change smoothing and noise reduction quite dramatically.

Topaz AI: Excellent results that can be occasionally thrown off by oversaturated colors and constantly fluctuating semi-transparent detail (like heat haze).  Best used for upconverting content to higher resolutions.  Also uses FFMPEG to render out results, so some codecs can have the above-mentioned colorspace issues.

DaVinci Resolve 17 Studio: Solid deinterlacing, but can also be thrown off by oversaturated colors.  Not as much smoothed detail as the other options, but in motion most viewers probably wouldn't notice.

Bwdif (FFMPEG): Good at reconstructing smooth lines and retaining overall image detail with SD content, faster than QTGMC but not as good.

Estdif (FFMPEG): Good at reconstructing smooth lines and retaining overall image detail with HD content, faster than QTGMC but not as good.

Yadif (FFMPEG): Just does interpolated line double of each single field rather than taking surrounding fields into account.  Diagonal high contrast lines have weird "rounded" jaggies that some (like me) might find displeasing. Very fast.

Bob, Weave, etc - These are all variations on very basic line-doubled deinterlacing. Bob would be my preference out of these, but in general I would only use them if you absolutely have to.

Premiere, Resolve (non-Studio), editing programs in general - Garbage. Cuts both resolution and framerate in half. Note that I'm only talking about the standard deinterlacing inside of these programs - you can get/install plugins that can do a much better job.


Let's check out some examples:


First clip - 25i HD material

This was a tough clip for the commercial options to deal with due to some issues with the very oversaturated reds, but I feel like it shows off the differences between deinterlacing methods pretty well. Also, the differences between these methods are much less noticeable in motion.

(Please click pictures to enlarge)


Original



This is what an interlaced frame looks like - two fields woven together, with obvious combing artifacts. If you look closely, you can see the issue with the edges of the oversaturated reds - some interlacing artifacts don't line up with the interlacing lines. I've never actually seen this before, but it turns out to be an interesting indicator of what the various methods actually do.


Bob (After Effects, Premiere Pro, many other editing programs)


This is just the original image run through After Effects. By default, AE uses a terrible method to display interlaced footage that drops every other field and does a simple line double for the remainder. However, it does get rid of the interlacing artifacts in the oversaturated red.


DaVinci Resolve 17 Studio


This is a recent addition to the paid ($300) version of Resolve that supposedly uses some "Neural Engine" processing for frame interpolation. Decent handling of edges, but tripped up by the oversaturated red. I would consider this a good solution for most work.


Topaz AI






Really good recovery of detail throughout the image, as well as adding some extra detail (that's kind of its thing).  However, it's thrown off a bit by the haze around the railing, and once again doesn't deal well with the interlacing artifacts in the oversaturated red.


Now, let's check out the FFMPEG (and other programs that use FFMPEG) options:


Yadif





Works, but look at those railings. Remember, Yadif is just taking the individual fields and using smoothing algorithms to resize them. I personally don't like the results, but it's better than nothing.


Estdif





Much better, although some details like the lower contrast parts of the railing aren't as defined as I'd like. Unlike Yadif, it's at least attempting to do interpolation using surrounding frames, and as a result takes a speed hit. My recommendation for FFMPEG-only deinterlacing of HD content.


Bwdif





My favorite FFMPEG-only method for deinterlacing SD content does a bit worse here - the edges look a bit rougher and less defined. Still better than Yadif, though.



Now for QTGMC. To simplify the crazy range of options, I'm limiting myself to two presets - Faster for mostly no noise reduction and a focus on speedy processing, then Slower for best "all options" processing without getting into ridiculous processing times.


QTGMC - "Faster" Preset




Cleaner, more defined edges with more perceived detail.


QTGMC - "Slower" Preset






The key word here is "smooth". The extra processing makes the image look more defined overall IMO. My personal favorite, although Topaz AI definitely is another option if you're okay with its artificially added detail.



Addendum - Colorspace Issues


You might remember that I mentioned something earlier about colorspace issues?  The screen grabs I've shown above were all captured from DaVinci Resolve.  If I use After Effects to view the clips instead, anything rendered out by FFMPEG looks like this:



Don't see it?  Enlarge the image, then use the left and right arrow keys to flip between the other screen grabs.  You'll notice that the colors in the overall image are shifted, especially the reds.

So what's going on here?  Well, I'm still researching this at the moment, but as far as I can tell FFMPEG does some sort of screwy colorspace conversion on some footage. If you don't correct for this, then After Effects and Premiere Pro display the colors in the wrong way. 

What's confusing though, is that this doesn't happen on every file you export out of FFMPEG.  Only certain codecs have this behavior, and one of them is ProRes. Now, this isn't a new issue - I've run into a variation of this problem years ago when I was upscaling standard definition video to high definition resolutions using AVISynth and FFMPEG.  However, one of my original fixes - to use the other ProRes encoder - doesn't do anything this time.  Thankfully, the other fix - rendering out to DNxHD or DNxHR - does solve the problem.  So, if you want to be sure you don't run into any problems, use DNxHR. If you have to use both FFMPEG's ProRes output and Adobe programs, then make sure you use AVISynth to at least load the file first, and then you can add a conversion step in your .avs script:

ColorMatrix(mode="rec.709->rec.601")

Yes, you read that right: converting from rec.709 (the colorspace for HD video) to rec.601 (the colorspace for SD video) fixes the problem.
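In practice, that means loading the FFMPEG-rendered file in a short wrapper script, adding the conversion, and re-rendering before bringing the result into an Adobe program. The file name here is just an example:

FFMPEGSource2("output.mov")
ColorMatrix(mode="rec.709->rec.601")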

But be warned - this actually changes the way that the video looks for other programs. If you bring the video into Resolve, it looks like this:



And suddenly, we've gone orange.

Anyways, I'll keep looking into this and see if there's an easier fix. Currently, I've tried the fixes documented here:


and here:


with no success.


I'll end this post here, but look to the next post for a comparison between deinterlacing methods in standard definition.

Wednesday, March 17, 2021

Let's Test QTGMC settings - Part 1

A question that I've thought about from time to time is this: what difference exactly is there between QTGMC's various settings? There's a metric ton of them, and while I've tended to just follow the recommendations in the documentation for what to use, I'm not always happy with the results I get.

So let's not be casual. I'm pretty busy these days, but as I have time, I'm going to go through and look at what each of the major settings does to a range of sample clips, and hopefully provide a better reference than the scattershot documentation that's out there currently.

Now, obviously I don't have time to test everything, and I'm not a plugin author or somebody who is particularly skilled at programming. This is all to say: don't just take my word for it. Test for yourself, and see what works for you. If you find something different, or an edge case that I haven't covered, great. Leave me a comment below, and I'll see if I can replicate your results. If so, I will update this blog post.

Some things to get out of the way up front:

  • I'm currently using AVISynth+ 3.7 64-bit with version 3.376 of QTGMC. If you're using an older (or newer) version of either of those, you may have different results. I will update the version numbers on results below if they differ.
  • I won't compare this to VapourSynth's havsfunc implementation - at least not right away. In theory, you should have the same results, but I have noticed chroma shifting differences in the past. I'm not currently using VapourSynth, however, and am not nearly as familiar with it.
  • My source footage is mostly DV, which has a very nonstandard chroma subsampling method (4:1:1). As such, the color information in your video may behave differently than shown here. The black and white (aka luma) portion of the image should be similar, although depending on the source noise in your original footage, things might be different there as well.
  • The first clip I'm going to use to test everything is from a 15+ year old video about building straw bale houses. It was shot on a consumer DV camera (not by me), and IMO has a good combination of in-camera sharpening and a range of both noise and detail. I'll try and find a less pre-sharpened image for comparison at some point.

Please click on the pictures to enlarge. Even better, this will bring up the "lightbox" so you can flip between pics with the left and right arrow keys on your keyboard. Sorry about the lack of labels in the lightbox - I'm looking into that.



Test 1 - Overall Presets


There are a total of 11 main preset options for QTGMC: Draft, Ultra Fast, Super Fast, Very Fast, Faster, Fast, Medium, Slow, Slower, Very Slow, Placebo.



Draft





This preset is pretty much only useful as a way to check and see if your field order is set correctly. It uses a simple "bob" deinterlacing method that makes essentially no real effort to create full resolution progressive frames.

Ultra Fast





This preset uses Yadif as a deinterlacing method, and looks similar to what you would get out of VirtualDub (which uses Yadif by default for deinterlacing) or some real-time commercial video processing devices. It does at least give a doubled frame rate, but I would only use it for comparison with other methods.


Super Fast





Here's the first preset that takes advantage of QTGMC's features. You'll notice immediately that the blocky effects of the first two presets are gone. Personally, I think this looks ever so slightly softer than most of the other presets - possibly because it uses TDeint rather than NNEDI3 for deinterlacing?



Very Fast






You'll notice that fine detail is a bit clearer, and some overall sharpening appears to be applied. Noise reduction is kept to a minimum, however. There's also a sort of haloing artifact around edges of fine detail.



Faster






Not a huge difference in sharpness here, but if you look closely at the tassels, you'll see that they look a bit clearer. Haloing is still present, and for some reason, there's an added line artifact on the right side that isn't present in any of the other presets. Not sure why.



Fast






This is the first preset that I would actually consider using. The haloing artifacts are gone, a little extra sharpening appears to have been added, but the image doesn't look terribly "smoothed".



Medium






Similar to Fast, but appears to perform more noise reduction.



Slow





By flipping back and forth between medium and slow, I can see that the interpolation has resulted in very slightly different images, but I don't really notice a big distinction in quality between them.



Slower





This is QTGMC's default preset, and I can definitely see how some people don't like it. While it does remove more of the artifacts from the video, it definitely has more of a "smoothed" look to it, with a slight smudging of details. Again, you would have to look at these back to back to really see the differences in my opinion, but this is starting to look a little over processed. On the other hand, if you're processing video for online streaming services, you might prefer this to keep recompression artifacts to a minimum.
 

Very Slow





Like Slower, but more so. More image detail is smoothed away, but more artifacts are removed as well.



Placebo





Aptly named, although I can see very slightly more smoothing applied to the image. I personally would never use this because of the ridiculously long processing times, but if you ever wondered what it does, here you go.




Test 2 - Sourcematch/Lossless


Because I'm not specifying anything else in the command, you can assume that the rest of the settings are at default (essentially preset=Slower). The main difference I can see is that Lossless on its own has some combing artifacts, and SourceMatch=1 is a bit softer than the rest.

SourceMatch=1



SourceMatch=2




SourceMatch=3



Lossless=1




Lossless=2



SourceMatch=3, Lossless=1



SourceMatch=3, Lossless=2





Test 3 - Noise Reduction settings


These are pulled from the QTGMC examples section.


Full denoise


NoiseProcess=1, NoiseRestore=0.0, Sigma=4.0



Does what it says. Beware of the "waxy" look with this one.

Restore all noise


NoiseProcess=1, NoiseRestore=1.0, Sigma=2.0 



Yup, this works as described as well.

Restore all "stable" grain


NoiseProcess=1, GrainRestore=1.0, Sigma=2.0 




This setting is a bit more interesting. It attempts to find the more "stable" elements of the noise and restore them to the image. This reduces the flat/waxy appearance of a full denoise while still removing a lot of noise. Not sure if I'd use it all the time, but I could see some benefit.

Suggested settings for "Strong Detail/Grain"


NoiseProcess=2, GrainRestore=0.4, NoiseRestore=0.2, Sigma=1.8, NoiseDeint="Generate", StabilizeNoise=true


Seems reasonable, but maybe not for every source. I'll try to dig into this in more detail, but for now I'll refer you to the documentation for more of what the settings mean:




Reference


Just for reference, here's the original interlaced frame:




And here's one of the individual fields:


And here's the same frame processed with Yadif in VirtualDub2:





Friday, January 1, 2021

Deinterlacing with AVISynth and QTGMC - Updated occasionally

Welcome to the new home of my QTGMC deinterlacing tutorial. This post will be updated periodically as new changes happen and new info comes out, or until a better option comes along.

Changes since last post/video:

  • I've switched to an entirely 64-bit workflow
  • QTGMC has some new requirements
  • I've dropped FFMPEG as the recommended rendering program in favor of VirtualDub2
  • AVISynth Info Tool is now the recommended way to check and make sure that you've installed AVISynth correctly.


BIG DISCLAIMER: This process may not work, may crash, or may do other things to your system. Virus scan everything you download. It's not a 100% guarantee that you'll avoid getting a malware infection, but it's a lot better than not checking at all.

If you're doing professional work, always watch any files you create using this process before submitting/uploading them to check for audio and/or video glitches.

Follow this tutorial at your own risk. 

Also, this tutorial is for Windows 10. Most of the steps work for previous versions of Windows, but may require slight modifications. Users of MacOS and other OSes should look elsewhere.


Here's the video version of the setup and basic QTGMC deinterlacing workflow:




Setup


7-zip (optional)

If you don't already have 7-zip installed, grab it now - you'll need it to open many of the downloaded archives. The stable 64-bit version should be fine.

Next, we're going to need to get AVISynth+. You can grab it from here:


If you don't already have the MSVC 2019 Redistributable installed, grab the version ending with "vcredist".

Then, we'll need to get all the filters (plugins) needed:

FFMPEGSource

Note: Version 2.40 has been shown to corrupt video frames. Please either use a different version, or use LSMASHSource as linked below.

This is what will allow AVISynth+ to load our video files. Works with almost any container/codec combination, but read the "Known Issues" section for things to look out for (hint: don't use interlaced h.264 video with it).

Note: If you do need to work with interlaced h.264 video, try LSMASHSource instead. It requires two commands rather than one to work (technically three if you count AudioDub(video, audio) to combine the two together), but can support some non-standard formats. Thanks to neonfrog for the heads up.
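For reference, a minimal LSMASHSource load (using its LWLibav functions) looks something like this:

video = LWLibavVideoSource("videofile.mp4")
audio = LWLibavAudioSource("videofile.mp4")
AudioDub(video, audio)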


QTGMC

The deinterlacing plugin of choice. Requires a whole host of additional plugins, of which you will be using the 64-bit versions (if an option is given). For most uses, only the "Core Plugins and scripts" are necessary.

For the "Zs_RF_Shared.avsi" link, right-click on the link and do "Save Link As", or whatever the equivalent is in your browser of choice.

DO NOT forget to download the 32-bit FFTW3 library as well. Without it, QTGMC will not run.


With all the AVISynth filters and scripts grabbed, it's time to get the supporting software:

AvsPMod

This is like an IDE for AVISynth scripts, and is pretty much essential IMO. Grab the 64-bit version for this tutorial.


AVISynth Info Tool

This allows us to detect processor features and check to see if AVISynth is properly installed.

Finally we need to grab a rendering program.  For ease of use, I now recommend VirtualDub2 - a fork of the original program used to render AVISynth scripts. 

VirtualDub2


When finished downloading, virus scan all the things.


Installing

AVISynth+ has a simple installer. I recommend installing into whatever your user folder is (for example C:\Users\Me\) rather than the default of Program Files(x86) so you don't have to deal with authentication requests every time you add a plugin. Also, I highly recommend checking the "Add AVISynth Script to New items menu" option. Otherwise, you can stick with the defaults.

Then, go to the plugins64+ directory for AVISynth+. For example, the default install creates a plugin folder at:

C:\Program Files (x86)\AVISynth+\plugins64+\

Extract all "x64" versions of the .dll and .avsi files to the plugins directory (if given a choice) from all plugins EXCEPT the fftw-3*.dll archive. If there's a choice between AVX, AVX2 and a version without AVX, you'll need to know what instruction sets your processor supports. CPU-Z  or AVISynth Info Tool can tell you this if you're not sure.

Now, open the fftw-3*.dll archive, then (as noted on the QTGMC page) extract the libfftw3f-3.dll file. Make a copy of it and rename the copy "FFTW3.dll". Place both "libfftw3f-3.dll" and "FFTW3.dll" in the SysWOW64 folder. Don't ask me why you have to do this - I agree that it seems pointlessly tedious.

Please note: if you're only using AVISynth+ (not older versions of AVISynth) and only need QTGMC, you can skip the above step and just download the 64-bit version of libfftw3f-3.dll and put it in the plugins64+ directory. Despite the error message in the 32-bit AVSInfoTool, QTGMC will now work in both modes. (If you want to clear that error anyway, drop the 32-bit version of libfftw3f-3.dll in the plugins+ directory; to clear the Zs_RF_Shared error, you need to do the regular install method above.)

Also, if you want to be able to run QTGMC's "EZDenoise" & "Sharpness" functions, put the 64-bit version of "fft3dfilter.dll" into the plugins64+ directory.

Thanks to Spammy Family for pointing this out.

Speaking of tedious, if you want to use the "Very Slow" or "Placebo" preset for QTGMC, it looks like you need to install the 64-bit version of FFTW in your System32 directory using the same method mentioned above.

Extract the AvsPMod archive to wherever you want to run it from - again, I recommend your user folder.

Extract AVISynth Info Tool in the same way. If you like, you can install it more officially by clicking on the AVSInfoToolInstaller.exe file, but I generally just run it directly from the "AVSInfoTool_FilesOnly" folder. 

Go ahead and run it now, making sure to select the "AVISynth 64-bit" option once loaded. If you get errors other than ones about the FFTW3 library or Zs_RF_Shared.avsi, double-check that you followed all the previous steps correctly. (To fix those two errors, install the 64-bit version of FFTW in your System32 directory using the same renaming/copying process listed above.) You might also want to take note of how many cores and logical processors you have, along with which instruction sets your CPU supports.

Finally, create a folder for VirtualDub2 and extract its archive there.


Making your .avs script


Now that everything's ready, let's go to the directory with your video files and make an .avs script. Right-click anywhere in the directory, select New, then AVISynth Script and rename it however you want. If that option doesn't show up, you can just create a new text file and change the .txt extension to .avs.

Open AvsPMod, then go to the Options menu and click "Associate .avs files with AvsPMod". You should now be able to double-click .avs scripts and have them open directly in AvsPMod. Do so with the script you just created.

Here's my boilerplate .avs script settings for deinterlacing:

    SetFilterMTMode ("QTGMC", 2)
    FFMPEGSource2("videofile.avi", atrack=1)
    AssumeBFF()
    QTGMC(preset="Slower")
    BilinearResize(720,540)
    Prefetch(10)


The "atrack=1" option for FFMPEGSource selects the track of audio that is passed through during processing. If the default option doesn't give you audio in the results, try "atrack=-1" instead.

Please note that by default, both FFMPEGSource and (certain modes of) LSMASHSource index your video file the first time you load the .avs script into AvsPMod or your rendering program of choice. This may take a while and make the program appear frozen. When it's finished, you'll see an .ffindex or .lwi file appear in the same directory as your video, with the same name as your video.
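If you'd rather kick off that indexing step explicitly (instead of waiting for the first load to do it), FFMS2 also provides an FFIndex() function you can call at the top of a script. Here's a minimal sketch, using the same placeholder filename as my boilerplate above:

```
# Build (or reuse) the .ffindex file up front, then load the video
FFIndex("videofile.avi")
FFMPEGSource2("videofile.avi", atrack=1)
```

Either way, the index only has to be built once; later loads reuse the file on disk.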

Monday, September 21, 2020

Zeranoe is down

An era has ended. As of the 18th of this month, Zeranoe, the best source for Windows binaries of FFMPEG, has shut down.

Two other sites have stepped up to become the official FFMPEG Windows binary maintainers, but neither of them builds 32-bit versions.

This leaves those of us who rely on FFMPEG for AVISynth processing with a choice: trust less popular (and probably less vetted) sources for new 32-bit versions of FFMPEG, or build the source code ourselves. As this thread on Doom9 suggests, the latter seems to be the more popular choice:

http://forum.doom9.org/showthread.php?p=1923902#post1923902

In particular, commenters there suggest using a set of scripts called media-autobuild_suite.

https://github.com/m-ab-s/media-autobuild_suite

Since everything involved with MABS is open source and well tested, I'm going to use this solution for 32-bit builds from now on unless it absolutely breaks down on me. It means a little extra complexity, but it also lets you control exactly what gets compiled into FFMPEG.

It also means that I'm probably going to have to go back and update several of my tutorials, which I don't have a lot of time to do these days.

Going forward, I'm going to recommend binaries from the official FFMPEG Windows binary maintainers for 64-bit, and building from source for 32-bit. 

Personally, I wish I could go 100% 64-bit at this point, but there are still a few important 32-bit AVISynth plugins that I need to use, so until those either get updated or become obsolete, that's going to be the situation.

Wednesday, January 1, 2020

Two different solutions for Denoising video with AVISynth

Last year, I worked on a couple of projects that involved not just deinterlacing, but denoising. Some scenes had a fine, natural noise that I preferred to keep intact. Other scenes had so much noise it was difficult to make out the faces of the actors. Figuring out how to minimize the noise without sacrificing too much detail or looking jarringly different from the surrounding shots turned out to be a real challenge.

Here are some notes from my experimenting.

Commercial tools


There are some commercial tools for denoising, each with its own strengths and weaknesses. Here are three of the most common:

I've used Neat Video in the past, and it has some real advantages: strong denoising, the ability to manually select what to consider noise (if there's a large enough area of pure noise in frame), and plenty of parameters to tweak. In particular, it's great at dealing with blotchy chroma (color) noise. It's easy to overdo, however, and sometimes it takes a lot of tweaking to remove only the noise you want.

There's also Red Giant's Denoiser III, which has a much simpler interface, and is better for even noise reduction, although not as great for dealing with truly terrible noise.

Then there's the Studio version of DaVinci Resolve. I don't own this and can't get a trial version, so I don't know how well it would perform.

If you're looking to denoise HD or higher-resolution footage, I'd recommend one of the above. They're all fully GPU accelerated, focus on modern camera sensor noise, and are unlikely to introduce gamma or color shifts.

However, for SD interlaced video, I think AVISynth has some better options.

TemporalDegrain2


Let's start with TemporalDegrain2. With low to medium grain/noise, the default settings are usually fine. It uses many of the same supporting filters as QTGMC, so if you've already got QTGMC set up, you should be able to run TemporalDegrain2 at its defaults. The one setting to pay attention to is grainLevel=True. Setting this to False may give better results on some footage, so try it both ways to check.
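To make that concrete, here's a minimal sketch of how I'd drop TemporalDegrain2 into my usual deinterlacing boilerplate (the filename is a placeholder, and deinterlacing first is my own choice, not a requirement of the filter):

```
SetFilterMTMode("QTGMC", 2)
FFMPEGSource2("videofile.avi", atrack=1)
AssumeBFF()
QTGMC(preset="Slower")
TemporalDegrain2()    # defaults; try grainLevel=False as well and compare
Prefetch(10)
```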

Here's a brightened capture of some noisy SD footage (Note that I haven't done a pixel aspect ratio correction, so the image is slightly squashed):


And here's the same footage passed through TemporalDegrain2 at grainLevel=False:


And grainLevel=True:



The difference is subtle in this case, but definitely there.

More detailed instructions are included in the .avsi script if you want to play around with things, but in my experience, the defaults do the best job at denoising without undesirable artifacts.

Oh, and one more important thing: TemporalDegrain2 has a 64-bit version for AVISynth.

SMDegrain


Next, let's look at SMDegrain. If you thought QTGMC had a lot of options...

Using the recommendations for a starting point from the documentation:

SMDegrain(tr=2,thSAD=250,contrasharp=true,refinemotion=true,lsb=true)

Gives this result:


Not very impressive in this case. Let's try the recommendation for "dark scenes", and trigger the interlaced switch so the denoising can be done prior to calling QTGMC:

SMDegrain(tr=3,thSAD=300,contrasharp=true,str=2.0,refinemotion=true,lsb=true,interlaced=true)

Which gives us this:


Better, but still not great. Again, this is a fairly noisy clip. Depending on the type/amount of noise, the defaults might be much more effective.

Now, let's look at the option for "grainy sources". I like to call this the "kitchen sink" option:

pre=fluxsmootht(3).removegrain(11)
SMDegrain(tr=6,thSAD=500,contrasharp=30,prefilter=pre,str=1.2,refinemotion=true,lsb=true)

What's happening here is that in addition to increasing the strength of the denoising, we've added a prefilter. This blurs the image first when calculating what detail to preserve, so the denoise can be that much more aggressive. If you just use a number here, SMDegrain will do some variation of a simple blur for its initial calculations. In this case, we're using more intensive filters. Here's the end result:


Now we're talking. However, you should know some things about the Kitchen Sink method:
  • If you use the interlaced option it'll distort, so don't do that. You could use SeparateFields() first and AssumeFieldBased() plus Weave() after to try to preserve the interlacing while reducing the amount of time it takes to denoise, but in my experience, that makes the processing look crappier/lower resolution. Either use it after QTGMC, or use it before and accept that the motion may not always look quite right.
  • You can do multiple passes, but since SMDegrain is 32-bit only, you're realistically limited to 2 passes without render freezes/crashes due to memory issues.
  • Yes, SMDegrain is available in 64-bit for VapourSynth (via havsfunc, which also contains a port of QTGMC). There, you're more likely to be able to squeeze in 3 passes, but it'll be excruciatingly slow if you do, and it can end up overprocessing the image. Also, you'll need to pay attention to the documentation for the various options, as they sometimes use different capitalization than in AVISynth. Why? VapourSynth scripts are Python, and Python is case-sensitive. You'll also have to rewrite the fluxsmootht and removegrain entries in a different way, and remove the lsb=true option (because VapourSynth doesn't need it).
  • In footage with significant motion, you may notice more of a smearing effect, similar to the old-school electronic denoising on the LaserDisc versions of the original Star Wars trilogy.
  • If you use multiple passes, you can end up with a dithering effect that's very noticeable on dark footage.
Basically, there are some drawbacks. The good news is that you can get very close to the same results with a line like this:

SMDegrain(tr=6,thSAD=500,contrasharp=0,str=1.2,refinemotion=true,lsb=true,interlaced=true)

Which gives this:


This is easier to run, has less of a smearing effect, works better with interlaced footage, and I've removed the built-in sharpening (Contrasharp) so I can use LimitedSharpenFaster later instead.

For those interested, here's a condensed (AVISynth+) version of my .avs script using the above options:

SetFilterMTMode("QTGMC", 2)
SetFilterMTMode("f3kdb", 2)
AVISource("DV Noise Test intro.avi", audio=true)
ConvertToYV12()

SMDegrain(tr=6,thSAD=500,contrasharp=0,str=1.2,refinemotion=true,lsb=true,interlaced=True,Globals=2) 
QTGMC( Preset="Slower", EdiThreads=3 ) 
#--------------(Optional) Reduce chroma noise----------------------
#f3kdb(grainY=0, grainC=0, sample_mode=1)  
Prefetch(10)

Since the original clip is DV video, I've done a colorspace conversion to YV12. If you're using a Digibeta file or ProRes capture, you might be able to get away without it, or you could convert to YUY2 instead.


Some other info that might be worth knowing:

SMDegrain has a variable called "globals" that changes how motion vectors are handled. The globals are the motion vector calculations, which can be generated fresh each time SMDegrain is called (the default), output for later passes, or read in from a previous pass. In other words, if you want the same patterns of noise to be denoised in subsequent passes, you can use:

SMDegrain(tr=6,thSAD=500,contrasharp=0,str=1.2,refinemotion=true,lsb=true,interlaced=true,Globals=2)

for the first pass and:

SMDegrain(tr=6,thSAD=500,contrasharp=0,str=1.2,refinemotion=true,lsb=true,interlaced=true,Globals=1)

for the second. While doing so can reduce processor load a bit, it's still generally not a good idea to do three or more passes, even with globals=1.

From the documentation, here's how each number option works:
  • 0: Ignore globals, just process
  • 1: Read globals and process
  • 2: Process and output globals
  • 3: Output globals only, don't process 
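Putting the two lines together, a condensed sketch of the two-pass version of my script (same source clip as above) might look like this:

```
AVISource("DV Noise Test intro.avi", audio=true)
ConvertToYV12()
# First pass: process and output the motion vector globals
SMDegrain(tr=6,thSAD=500,contrasharp=0,str=1.2,refinemotion=true,lsb=true,interlaced=true,Globals=2)
# Second pass: read those globals so the same noise patterns are targeted
SMDegrain(tr=6,thSAD=500,contrasharp=0,str=1.2,refinemotion=true,lsb=true,interlaced=true,Globals=1)
```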

Doing a globals=2 pass and then a globals=1 pass results in the following:



Which is pretty darn clean. You may still notice some artifacts on fast-moving objects:



but that's true of any SMDegrain setting that I've tried. TemporalDegrain2 does better with these artifacts, but not so well with overall noise in noisy footage, at least at defaults:



Oh, and if you still have chroma noise after this, you might want to try messing with something like f3kdb.

Other notes


For comparison's sake, here are the best results I could get using Magic Bullet Denoiser III and Neat Video 5, applied to a version of the video already deinterlaced with QTGMC. Yes, both have ugly watermarks because they're trial versions.



I could probably get Neat Video looking a bit better with some more tweaking, but this is what I would consider an acceptable denoise from it. If you look closely, you'll notice that Denoiser III retains some noise in this case, while Neat Video removes more but has a slightly plastic look. This is the tradeoff in denoising: removing *all* grain often removes surface details you might want to keep. For an extreme example of this, check out the Predator Ultimate Edition Blu-ray.



I may update this post later, but I think that's it for now. Post a comment if you have a question or correction.
