Tuesday, May 4, 2021

Which deinterlacing algorithm is the best? Part 1 - HD interlaced footage

Before I begin, I'd like to give a special thanks to Aleksander Kozak for his help in testing and for providing the HD footage used in this post.

Also, this is part 1 of a series of posts. When I've finished part 2, I'll link to it here.


Background

One of the things about working as an editor is that the technology is constantly changing.  There are new pieces of software, plugins, and even hardware that are released on a regular basis, and keeping up with all of it can be challenging.

Roughly 10 years ago, deinterlacing had four general types of solutions:

- Hardware-based video conversion boxes like the Teranex that are designed to perform resolution and frame rate conversions in real time.  Simple, but not always the best quality, and the hardware is pretty expensive (especially when you consider that you also need both a separate dedicated playback device and a recording device in addition to the conversion box).

- Services like Cinnafilm's Tachyon that do the same sort of job using very complex and proprietary software and hardware on a rental (or on-premises, for a higher fee) basis.  Great results, but very expensive.

- Built-in deinterlacing within video editing programs.  I haven't used either Media Composer or Final Cut Pro X's implementation, but with everything else I've tried, this is garbage.  Unless your entire project from beginning to end uses interlaced settings, editing programs tend to toss out every other field and line-double the remaining field, which essentially cuts both motion detail and frame detail in half. The only way I've seen to get around this inside an editing program is the FieldsKit plugin, which works a bit better than real-time conversion boxes, but not as well as SaaS options like Tachyon.

- Open source video filters.  For a while, your only options here were frame blending and "bob" deinterlacing, neither of which were particularly great.  Most early tools for dealing with interlacing in this space focused more on combining similar fields together, mainly because they were geared towards processing TV captures of anime.  Slightly later methods like Yadif improved deinterlacing quality, but still only worked on individual fields, using still-image interpolation to essentially round off the edges of aliasing after line doubling.

That all changed when QTGMC came along. Like its less-optimized forerunner TempGaussMC (which I've never used), QTGMC is an AVISynth deinterlacing script that uses a combination of several different plugins. It separates out individual fields and tries to use data from surrounding fields to generate full resolution progressive frames from them.  This is very similar to what "motion flow" and other motion interpolation algorithms do on modern TV sets, but focuses on accuracy rather than wow factor.  While it's not trivial to set up, the end results are excellent, combining greater detail with highly configurable noise reduction and other options. In some ways, the results can actually be better than many of the above solutions.
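For reference, a minimal QTGMC script is only a few lines long. The following is a hedged sketch, not a definitive setup: it assumes QTGMC and its dependencies are installed, that FFMS2 is available for loading, and "source.avi" is just a placeholder name.

```avisynth
# Minimal QTGMC sketch - "source.avi" is a placeholder, and this assumes
# QTGMC, its dependencies, and the FFMS2 source plugin are all installed
FFmpegSource2("source.avi")
AssumeTFF()              # declare the field order of your footage (or AssumeBFF)
QTGMC(Preset="Slower")   # presets range from "Draft" up to "Placebo"
# QTGMC outputs double-rate progressive video (one frame per field);
# add SelectEven() here if you'd rather keep the original frame rate
```

The preset name is the main knob - it trades processing time for quality, and the rest of QTGMC's many options override individual pieces of whichever preset you pick.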

But it's always good to do a periodic reality check - which of these methods works best? I don't have access to a Teranex or the cash to use Tachyon, so I'll mostly be comparing free options and/or ones embedded in programs costing $300 or less.


TL;DR

If you don't want to read through the entire post, here's my opinion about each of the options that I currently have access to:

QTGMC: Best retention of original image detail and handling of edge case artifacts, but you have to watch for colorspace issues when working with Adobe programs if you render out using FFMPEG. Also, different options can change smoothing and noise reduction quite dramatically.

Topaz AI: Excellent results that can be occasionally thrown off by oversaturated colors and constantly fluctuating semi-transparent detail (like heat haze).  Best used for upconverting content to higher resolutions.  Also uses FFMPEG to render out results, so some codecs can have the above-mentioned colorspace issues.

DaVinci Resolve 17 Studio: Solid deinterlacing, but can also be thrown off by oversaturated colors.  Not as much smoothed detail as the other options, but in motion most viewers probably wouldn't notice.

Bwdif (FFMPEG): Good at reconstructing smooth lines and retaining overall image detail with SD content, faster than QTGMC but not as good.

Estdif (FFMPEG): Good at reconstructing smooth lines and retaining overall image detail with HD content, faster than QTGMC but not as good.

Yadif (FFMPEG): Just does an interpolated line-double of each single field rather than taking surrounding fields into account.  Diagonal high-contrast lines get weird "rounded" jaggies that some (like me) might find displeasing. Very fast.

Bob, Weave, etc. - These are all variations on very basic line-doubled deinterlacing. Bob would be my preference out of these, but in general, use them only if you absolutely have to.

Premiere, Resolve (non-Studio), editing programs in general - Garbage. Cuts both resolution and framerate in half. Note that I'm only talking about the standard deinterlacing inside of these programs - you can get/install plugins that can do a much better job.


Let's check out some examples:


First clip - 25i HD material

This was a tough clip for the commercial options to deal with due to some issues with the very oversaturated reds, but I feel like it shows off the differences between deinterlacing methods pretty well. Also, the differences between these methods are much less noticeable in motion.

(Please click pictures to enlarge)


Original



This is what an interlaced frame looks like - two fields woven together, with obvious combing artifacts. If you look closely, you can see the issue with the edges of the oversaturated reds - some interlacing artifacts don't line up with the interlacing lines. I've never actually seen this before, but it turns out to be an interesting indicator of what the various methods actually do.


Bob (After Effects, Premiere Pro, many other editing programs)


This is just the original image run through After Effects. By default, AE uses a terrible method to display interlaced footage that drops every other field and does a simple line double for the remainder. However, it does get rid of the interlacing artifacts in the oversaturated red.


DaVinci Resolve 17 Studio


This is a recent addition to the paid ($300) version of Resolve that supposedly uses some "Neural Engine" processing for frame interpolation. Decent handling of edges, but tripped up by the oversaturated red. I would consider this a good solution for most work.


Topaz AI






Really good recovery of detail throughout the image, as well as some extra detail added in (that's kind of its thing).  However, it's thrown off a bit by the haze around the railing, and once again doesn't deal well with the interlacing artifacts in the oversaturated red.


Now, let's check out the FFMPEG (and other programs that use FFMPEG) options:


Yadif





Works, but look at those railings. Remember, Yadif is just taking the individual fields and using smoothing algorithms to resize them. I personally don't like the results, but it's better than nothing.


Estdif





Much better, although some details like the lower contrast parts of the railing aren't as defined as I'd like. Unlike Yadif, it's at least attempting to do interpolation using surrounding frames, and as a result takes a speed hit. My recommendation for FFMPEG-only deinterlacing of HD content.
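As a sketch of what that looks like on the command line (file names here are placeholders, and I'm encoding to DNxHR to sidestep the ProRes colorspace issue covered later in this post):

```shell
# Placeholder file names; swap "estdif" for "bwdif" or "yadif" to compare.
# DNxHR output avoids the ProRes colorspace weirdness with Adobe programs.
ffmpeg -i interlaced_source.mov -vf estdif \
    -c:v dnxhd -profile:v dnxhr_hq -c:a copy deinterlaced.mov
```

The filters take further options (field order, single vs. double frame rate, and so on), but the defaults are a reasonable starting point for a comparison like this one.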


Bwdif





My favorite FFMPEG-only method for deinterlacing SD content does a bit worse here - the edges look rougher and less defined. Still better than Yadif, though.



Now for QTGMC. To simplify its crazy range of options, I'm limiting myself to two presets - "Faster" for mostly no noise reduction and a focus on speedy processing, and "Slower" for the best "all options" processing without getting into ridiculous processing times.


QTGMC - "Faster" Preset




Cleaner, more defined edges with more perceived detail.


QTGMC - "Slower" Preset






The key word here is "smooth". The extra processing makes the image look more defined overall IMO. My personal favorite, although Topaz AI definitely is another option if you're okay with its artificially added detail.



Addendum - Colorspace Issues


You might remember that I mentioned colorspace issues earlier.  The screen grabs shown above were all captured from DaVinci Resolve.  If I use After Effects to view the clips instead, anything rendered out by FFMPEG looks like this:



Don't see it?  Enlarge the image, then use the left and right arrow keys to flip between the other screen grabs.  You'll notice that the colors in the overall image are shifted, especially the reds.

So what's going on here?  I'm still researching this, but as far as I can tell FFMPEG does some sort of screwy colorspace conversion on certain footage. If you don't correct for it, After Effects and Premiere Pro display the colors incorrectly.

What's confusing, though, is that this doesn't happen with every file you export out of FFMPEG.  Only certain codecs have this behavior, and one of them is ProRes. This isn't a new issue - I ran into a variation of this problem years ago when I was upscaling standard definition video to high definition using AVISynth and FFMPEG.  However, one of my original fixes - using the other ProRes encoder - doesn't do anything this time.  Thankfully, the other fix - rendering out to DNxHD or DNxHR - does solve the problem.  So, if you want to be sure you don't run into any problems, use DNxHR. If you have to use both FFMPEG's ProRes output and Adobe programs, make sure you at least load the file through AVISynth first, and then add a conversion step in your .avs script:

ColorMatrix(mode="rec.709->rec.601")

Yes, you read that right: converting from rec.709 (the colorspace for HD video) to rec.601 (the colorspace for SD video) fixes the problem.
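Put together, a minimal .avs wrapper for this workaround might look like the following sketch (the file name is a placeholder, and I'm assuming the FFMS2 source plugin for loading):

```avisynth
# "prores_render.mov" is a placeholder for your FFMPEG-rendered ProRes file
FFmpegSource2("prores_render.mov")
ColorMatrix(mode="rec.709->rec.601")
```

From there, render the script out of your AVISynth-aware tool of choice and the colors should read correctly in After Effects and Premiere Pro.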

But be warned - this actually changes the way that the video looks for other programs. If you bring the video into Resolve, it looks like this:



And suddenly, we've gone orange.

Anyways, I'll keep looking into this and see if there's an easier fix. Currently, I've tried the fixes documented here:


and here:


With no success.


I'll end this post here, but look to the next post for a comparison between deinterlacing methods in standard definition.

10 comments:

i4004 said...

could you provide these clips for dload, i would like to check how my tv does it, given that i already established it had much better deinterlacing than yadif.

i once thought about deint. everything laced i have(ie a lot) but ultimately decided against it on hopes display tech. would solve it to a satisfying degree.

offcourse it doesn't apply to all scenarios (for example mixing laced and progr. content of different resolutions during editing etc.) but overall once i see rather sharp graphics(like subtitles etc.) i know deinterlacer is doing its job properly.

one more thing to consider: good deinterlacer followed by bad scaler still gives poor results.

Andrew Swan said...

@i4004:

I'll have to ask for permission to share the original clips, but I can say in the meantime that all the options I listed *and* the original clip all look pretty equally good when played back on my cheap 4K TV. The main focus of this post is about comparing options for editors who want to use interlaced content in a progressive project, though, so I didn't really test exhaustively.

I would agree that poor scaling can look bad even with good progressive SD media. I'll try covering that in a future post.

i4004 said...

what ffmpeg build did you use?
i think bwdif is recent addition.

i dunno does the background move in your clip or just the boat, if everything moves then it would be hard to see much difference anyway.
like i said, sharp graphics are a good test, esp. if something is moving beneath them...channel "dog" for example.
https://en.wikipedia.org/wiki/Digital_on-screen_graphic
in a same way graphics suffers, everything else does, but if you don't have some pointer you're not able to see how much.

yadif, for example, is rather bad deinterlacer. hd probably helps a bit, but for sd one of the worst.

4k tv adds another layer of scaling, and scaling is never perfect, sd is unbareable on 4k tvs (as it's usually larger screens on top), hd usually ok, but worse than on hd as nothing beats 1:1 pixel mapping.

Andrew Swan said...

@i4004

I believe I'm currently using 2021-04-20-git-718e03e5f2-essentials_build. bwdif has been out for a while on FFMPEG, but estdif is very new - I'd never heard of it until I saw it listed in the changelog.

The background doesn't move in the clip, but the boat does.

I'll be doing more of a sharp graphics test when I get around to doing a SD test, as I think that'll show the biggest difference.

BattierJam said...

Thank you!

AustinC said...

Been reading thru your articles about using QTGMC. I have sort of a newb question. How can you verify that indeed your captured avi is interlaced? I suspect mine is due to the artifacts. I'm capturing miniDV source Canon ZR80 --> output DV ---> Canon GR-DVL915 (for TBC) ---> output svideo ---> Panasonic DMR-ES15 (more noise reduction) ---> Elgato video capture in format NTSC_M UYVY 720 x 480. Which looking up seems to be...

UYVY 0x59565955 16 YUV 4:2:2 (Y sample at every pixel, U and V sampled at every second pixel horizontally on each line). A macropixel contains 2 pixels in 1 u_int32.

Any help would be appreciated!

Andrew Swan said...

@AustinC

Generally speaking, all standard definition video is interlaced. That being said, you can always check using MediaInfo - which should tell you both whether something is interlaced or not, and a good guess as to the field order.
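If you'd rather check from the command line, MediaInfo's CLI can report the same fields (the file name below is a placeholder):

```shell
# Prints the scan type (Interlaced/Progressive) and, if present, the field order
mediainfo --Inform="Video;%ScanType% %ScanOrder%" capture.avi
```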

i4004 said...

only for mpeg2 files, but even then it can be misleading, depending on how the file was made (it's rather trivial to change field order just by rewriting header).
otoh, .avi doesn't store info on interlacing at all, and video decoders can't guess it...

but yeah, dv generally is interlaced and bottom field first.
pal dv also had 420 color subsampling that produced crappy reds.

in the above mentioned chain, i dunno how will elgato produce BFF video, but you can check if video made top field first (with media info..if it's mpeg), and then you'll know why video will be jerky (if that's you'r issue), as you would be playing back BFF video with TFF order...and that's fairly obviously wrong.

the point of dv as digital format would be to have pc program that just transfers camera contents to dv.avi file on pc.

i would prefer less hardware processing in video chain, and more avisynth denoising etc.

Plubbingworth said...

For what it's worth, I use the Avsresize plugin for Avisynth whenever I'm resizing anything. I've been getting away with this using these lines:

z_spline64resize(1440,1080) #for 4:3 stuff, regular spline64resize can probably be used
z_ConvertFormat(colorspace_op="601:auto:auto:auto=>709:same:same:l")

And then when I render with VirtualDub2 I set the decode format to 709 and it interprets the colors correctly.

Andrew Swan said...

@Plubbingworth,

Thanks for the heads up. I might try using that next time I do a conversion.
