In the past, I did some work involving upscaling a letterboxed standard definition (SD) show to HD for online streaming video. As it turned out, the built-in After Effects and Premiere Pro resizing tools are not the best for this sort of scenario.
The reason why the otherwise fabulous tools fall short is how they deal with interlacing.
Interlacing is a technique dating from the early days of television that allows for the appearance of 60fps motion within the bandwidth of a 30fps signal. It works by using "fields", each of which contains only half the vertical resolution of a full progressive frame: one field fills every other horizontal line of the image, and the next field fills in the lines the first one left blank. Because the two fields are interlaced together, the viewer's eye and brain merge them, giving you roughly the appearance of 60 frames per second (fps) video. Thanks to that reduction in signal bandwidth, interlaced video has been used by TV broadcasts ever since (yup, even in HD). For a slightly clearer example, here's a section of two fields combined together with a fast-moving object as the subject (in motion, these combing artifacts are usually not as noticeable):
Once we moved into the age of digital video editing, interlacing made everything more complicated. Since digital video devices didn't want to show individual jagged-looking fields, they combined (or deinterlaced) them into discrete frames, then based timecode standards around 30 frames per second (technically 29.97 fps for color NTSC video) rather than 60 fields per second (technically 59.94 fps). In order to work with broadcast video devices and timecode, computer editing/compositing/etc. devices and programs had to follow the same standards, but still be able to output either progressive or interlaced video at the end.
The bottom line is this: in order to upscale (most) SD video, it first needs to be deinterlaced.
I'm greatly oversimplifying, but deinterlacing is commonly done in one of three ways:
1. Double the lines in each field to fill in the gaps, then treat each of the line-doubled fields as individual frames. With this method, you end up with less overall resolution, but it's quick and easy. In the old days, they used to make "line doubler" devices that would do this sort of thing for high-end TVs and such. Depending on the algorithm, the process might also "decimate" the framerate to 30fps in order to avoid twitchy artifacts from constantly switching between "upper" and "lower" fields.
2. Combine every two fields together into a single frame via an image processing algorithm like Yadif. This gives you better frame resolution, but still decimates the framerate to 30fps and can look like a Photoshop "artistic" filter. That might actually be a good thing; it gives a slightly more "filmic" look and saves on video filesize. After Effects uses a somewhat similar process if you check the "Preserve Edges" checkbox in a clip's Interpret Footage right-click menu. A better way (in my opinion) involves using VirtualDub to perform the deinterlacing and upscaling, and that's the workflow I've used in the past.
3. Use complex algorithms to look at detail from several fields at a time to create interpolated frames at 60fps. This retains both the full detail and the full motion of the original video, but... while consumer TVs do an okay job with this, in the pro editing world it's either been done by very expensive dedicated hardware devices (like the Teranex) or moderately-to-somewhat-pricey software plugins (like FieldsKit or Tachyon) that still require a fair amount of fiddling to get working properly. It can also result in minor-to-moderate "ghosting" artifacts. To be fair, proper frame interpolation is not a trivial process, and the above solutions do a great job.
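To make the first two options concrete, here's a minimal AVISynth sketch of a Yadif-based deinterlace. The filename and field order are placeholders for your own footage, and Yadif is a third-party AVISynth plugin that has to be installed separately:

```
# Load the source clip ("interlaced.avi" is a placeholder path)
AVISource("interlaced.avi")
AssumeBFF()       # declare the field order (DV-sourced NTSC SD is usually bottom-field-first)
Yadif(mode=0)     # mode=0: one output frame per frame (~29.97fps, like option 2)
# Yadif(mode=1)   # mode=1: "bob" to one output frame per field (~59.94fps)
```

Swapping `mode=0` for `mode=1` is the difference between decimating to 30fps and keeping the full field-rate motion.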
I assumed option 3 was basically out of my reach: the closest thing available in VirtualDub, a "double framerate" Yadif deinterlace, still has the issues of a 60fps line-doubled conversion, and my budget hasn't allowed me to purchase any of the commercial solutions.
So, I gave up for a while. Then a new project came along from the same client, upscaling some more SD footage. Since I already use AVISynth scripts to load QuickTime files into VirtualDub, and I've seen some great inverse telecine (aka IVTC, the process of removing redundant frames from 24p video that has been transferred to 60i video) plugins, I decided to check out deinterlace filters for AVISynth again. That's when I found out about QTGMC, an AVISynth plugin that does pretty much everything I want it to, and it's free.
Unfortunately, it has some drawbacks.
First, let me tell you about AVISynth and VirtualDub. Both of these programs were developed as open-source video processing tools in the early 2000s.
VirtualDub is kind of like a video Swiss Army knife: it uses built-in filters to do everything from resizing a video to replacing audio, sharpening, and even some visual effects. The downside is that by default it only loads and saves files in an .AVI container, which significantly limits the number of codecs it supports. Plugins have been developed that allow it to load a number of other container formats, but they don't always work properly or continue to be supported as new codecs are released. However, if you combine VirtualDub with AVISynth, you can read almost any video codec that's ever been released.
AVISynth is probably the oddest video tool I've ever used. It's not a standalone program per se, and it has no interface. It's a scripting language that gives instructions on how to process video using a frameserver. This means you have to write a text file with a list of instructions, then load that text file into a separate program that can communicate with AVISynth's frameserver. VirtualDub is one such program. AVISynth's syntax can be confusing, arcane, and not terribly user-friendly. However, it can do all the video processing tasks of VirtualDub and more, and do so before the video is displayed in VirtualDub. It also has a truly staggering number of plugins developed for it, and some of them rival commercial programs in their functionality.
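For the uninitiated: an AVISynth "program" is just a plain text file saved with an .avs extension. A minimal script (the filename and target size here are placeholders) might look like this:

```
# Load an AVI file as the clip to process
AVISource("input.avi")
# Upscale with a Lanczos resizer and brighten slightly
Lanczos4Resize(1440, 1080)
Tweak(bright=5)
```

Save that as something.avs, open the .avs file in VirtualDub, and VirtualDub receives the already-processed frames as if they were a regular video file.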
Now, back to QTGMC. QTGMC is a plugin that uses other AVISynth plugins to perform frame interpolation and deinterlacing. I will not attempt to explain the details, suffice to say it has a huge amount of variables and settings... but it works. It really, really works. You can use the combination of AVISynth with QTGMC plus VirtualDub to turn SD interlaced footage into 60fps HD footage.
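As a rough sketch (the path, field order, and target resolution are my placeholders, and QTGMC's many dependency plugins have to be installed separately), the whole SD-to-60fps-HD chain fits in a few lines:

```
AVISource("interlaced_sd.avi")
AssumeBFF()                  # field order of the source (check your footage)
QTGMC(Preset="Slower")       # deinterlace + interpolate to ~59.94fps
Spline36Resize(1440, 1080)   # upscale 4:3 SD to HD frame height
```

The Preset parameter trades speed for quality; QTGMC exposes dozens of finer-grained settings beyond it.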
Unfortunately, there are a few problems. Remember when I mentioned when the tools were originally developed? By default, neither VirtualDub nor AVISynth is multithreaded, which means they don't take advantage of modern multi-core processors. They're also 32-bit apps. There are technically 64-bit versions of VirtualDub and AVISynth, but they lack the plugin support of the 32-bit versions. The attempt to "fork" AVISynth for proper 64-bit support (known as AVISynth+) doesn't appear to support multithreading natively, either.
Now, there is a replacement library for AVISynth that enables multithreading support; in fact, QTGMC basically requires it to perform properly. It's not, however, what you would call stable. AVISynth is prone to crashing or simply stopping mid-render if you've set something it doesn't like, and the settings that work may differ depending on your hardware, the versions of the plugins you're using, etc., etc.
Currently, I'm having trouble getting QTGMC to render beyond about 15 minutes of footage. I got that far by gradually lowering the number after "EdiThreads" from 6 to 1. I'll keep playing with settings to see if there's a magic combination that works.
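For reference, the multithreaded setup I've been experimenting with looks roughly like this. The thread counts are just the values I happen to be testing on my machine, not recommendations, and this uses the MT replacement library's SetMTMode calls:

```
SetMTMode(5, 4)       # enable MT before the source filter; 4 worker threads
AVISource("interlaced_sd.avi")
SetMTMode(2)          # switch to the mode used by the filters that follow
AssumeBFF()
QTGMC(Preset="Medium", EdiThreads=1)  # EdiThreads=1 has been the most stable so far
```

Which mode numbers and thread counts survive a full render seems to vary wildly between machines, which is exactly the stability problem described above.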
I should mention that there is one other option: a ground-up rewrite of AVISynth called VapourSynth. It's 64-bit native and supports multithreading out of the box. It also uses an entirely different scripting syntax, because it uses Python as its scripting language. It can now load AVISynth plugins, but you still have to learn a whole new scripting language to use them.
Stay tuned for part 2, where I reveal the results of my AVISynth experimentation and see if I'm going to be willing to try VapourSynth or not.