Monday, October 1, 2012

Adobe CS6: Update #2

Well, it's now been a few weeks since getting my new CS6 system, and I'm still working a few issues out. Regardless, here's an update:

- My "Render" RAID 0 array (for exporting DVD and Blu-Ray files, as well as gigantic uncompressed files) has crapped out two more times, but each time I've been able to rebuild it and verify that no hard disk errors are (apparently) present. Since my other array has been just fine, I'm not quite sure what the issue is. If it fails another time, I'll call the ADK guys and see what we can figure out.

- I've started using Adobe Prelude for footage ingest, and it works great. If I were doing another "Data Wrangler" job (capture and backup, but no on-set image processing), I would be using it. However, bit-for-bit verification takes significantly longer than a straight copy, so it may not be appropriate for every shoot. Since I ingest for the current project at my home machine with no time pressure, I enable bit-for-bit verification and then do other things until the copying is done.
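For the curious, "bit-for-bit verification" is essentially a checksum-verified copy. Here's a rough sketch of the idea in Python - not what Prelude actually runs under the hood, obviously, and the file names are placeholders:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so multi-gigabyte clips don't fill RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_copy(src: Path, dst: Path) -> bool:
    """Copy src to dst, then re-read BOTH files and compare hashes."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    return sha256_of(src) == sha256_of(dst)
```

Reading every file twice more after the copy is exactly why verification takes so much longer than a straight copy.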

- I've gone back and forth on what I should do about Cineform. I finally decided to transfer my Cineform NEO license from my old machine to the new one, which has had the unexpected benefit of installing project presets for Premiere Pro. I also don't see a "gamma shift and pause after starting playback" issue like on my old system. The earlier issue with Cineform clips rendering out as random noise is gone as well.

- As a side note, I've started an After Effects project where I render out a clip in different codecs/quality levels, then import them back into the project so I can A/B them for differences. So far, I can see that Blackmagic Uncompressed Quicktime files have a color shift from my original file, and that the size difference between Cineform High and Cineform Low is not nearly large enough to justify using Cineform Low. Cineform Film Scan 1 is my preferred format at the moment, although DNxHD 220 10-bit is pretty good, too. Oddly enough, the filesize difference between 422 and 444 colorspace appears to be nonexistent on Film Scan 1 and High HD Optimized Cineform files. Since I'm using de-noised DSLR footage for this test, I won't draw any conclusions until I re-test with more detailed footage.

- I installed DaVinci Resolve Lite, and while it has its own quirks, I definitely prefer it to SpeedGrade. Resolve has a much better interface, a ton of options, and a node-based workflow that's easier to figure out. Unlike SpeedGrade, LUTs can also be applied on a per-node basis, so I can easily A/B the change and quickly remove it if it looks wrong.

- The current version of Resolve was designed to work well with Cinema DNG files, and after playing around with a test clip, I'm now sold on DNG as a format for uncompressed recording (although the hard drive space required by it is a real issue). At the moment, no Arri or Red camera uses Cinema DNG, but the Blackmagic Cinema Camera, the Kineraw cameras, and Aaton's Penelope Delta all use it. Two of those are cameras that indie filmmakers could potentially afford...

- Another side note: I just noticed that the Kineraw can record to Cineform RAW. Nifty, although the test clips from the Kineraw haven't really impressed me so far. Cineform RAW, however, looks like some hot stuff (compressed RAW sensor data, significantly lighter processor and storage requirements than Redcode). I wonder if I could compress Cinema DNG files into Cineform RAW. If they offered it as an in-camera option for the Blackmagic Cinema Camera, I think even more people would buy said camera.

- When it comes to footage management, I tend to import all the footage for a project into one Premiere Pro project file. This has the disadvantage of making projects take a long time to load, but once they're loaded, I have a huge amount of options. To be fair, 16GB of memory and a RAID 0 array are significant contributing factors as well. I've decided to hold off on adding another 16GB of memory until Premiere Pro can address more than 12GB of RAM.

- My only complaint with Premiere Pro so far is that sped-up footage tends to choke playback for both the sped-up footage and the following footage, unless you stop playback after the sped-up footage and then resume playing. This could just be an issue because I'm editing in a native DSLR timeline, but I would still like to have the option to pre-render the footage so I could have consistently smooth playback without having to render out the whole sequence.

Friday, August 31, 2012

Adobe CS6: The awesome and the annoying - part 1


So, I've been working on the next Snow Goose Productions project, and this time it's our own reality show.

For those of you rolling your eyes right now, I'll just say that it's an interesting idea for a reality show that involves helping people with problems in an unconventional way, without the ridiculous exploitation that has come to define the reality show space. That's the goal, at least.

But enough about that. This is a post about workflow and editing systems.

For at least the pilot of this show, I'll be shooting on two Canon T3i cameras. In order to deal with the 12-minute-per-clip recording limit, I'll be staggering/overlapping the recording starts of each camera so at least one is recording at all times, and syncing the footage of both cameras to audio from a Zoom H4n audio recorder (which will be running in 4-channel mode - two external mics plugged in as well as the internal mics).
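A quick aside on the stagger math: with 12-minute clips, offsetting the second camera by half a clip means every restart gap on one camera is covered by the other. A little sketch (the 6-minute offset and 60-minute shoot length are just illustrative numbers):

```python
# Each camera rolls in back-to-back 12-minute clips; camera B starts
# half a clip later, so whenever one camera is restarting, the other
# is mid-clip. (6-minute offset and 60-minute shoot are illustrative.)
CLIP_MIN = 12

def start_times(offset, shoot_minutes):
    """Clip start times (in minutes) for one camera."""
    times = []
    t = offset
    while t < shoot_minutes:
        times.append(t)
        t += CLIP_MIN
    return times

cam_a = start_times(0, 60)   # restarts at 0, 12, 24, 36, 48
cam_b = start_times(6, 60)   # restarts at 6, 18, 30, 42, 54
```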

Despite finding a decent workflow for editing H.264 footage, my aging PC was far too slow to work on a project of the scope of this show - especially when I start throwing effects on. So, after much discussion with the other members of Snow Goose Productions, we decided to get a new editing system.

I thought about three possible choices:

  1. Build it myself. 
  2. Get it custom made. 
  3. Get a Mac.

After dealing with the quirks of a "build it yourself" system in the past, I decided not to do that again, even though it would be significantly cheaper. That leaves two choices: Mac or pre-built PC.

I've been wanting to get a Mac for years, but the higher cost of entry has always held me back. Now, the performance deficit is the issue. I can't afford even the baseline Mac Pro, and the remaining systems may be relatively stable, but don't give a lot of bang for the buck. Still, the advantage here is that I wouldn't have to deal with all the Windows configuration BS.

I've been a Premiere Pro (actually the whole Production Premium suite) user since 1.0, and while I've never been completely happy with it, it's (ultimately) gotten the job done. I didn't try to upgrade past CS3, since I haven't been working with processor/disk intensive codecs. Once I started working with a DSLR, that all changed. Also, ever since CS5 was released, Adobe appears to have really gotten their stuff together and made Premiere Pro into a true competitor to FCP and Avid.

And one other thing: Thanks to Adobe Creative Cloud, you can now rent (almost) the entire Master Collection of Adobe programs for $350 this year - if you have CS3 or newer. So, CS6 seems like the logical choice.

CS6 is available for Mac. The Mac has ProRes. Pretty much every independent filmmaker I know uses a Mac. Macs tend to be more stable than Windows PCs. All of these are great reasons for getting a Mac.

There's one other issue, though: Ever since CS5, Premiere Pro and After Effects use GPU acceleration to greatly speed up rendering and allow you to work with more layers of video/effects in real time. In theory, Premiere Pro/After Effects/Media Encoder CS6 works with the GPUs in last year's MacBook Pros. From reading the Adobe forums, however, it appears that that support is sketchy at best. The new Retina display MacBook Pros? You have to "hack" them (add the name of their GPU to a text file), and there's no performance benefit to using them over software-only mode. The iMac GPUs? Not supported, even with the "hack".
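For reference, the "hack" amounts to appending your GPU's name to a whitelist text file in the Premiere Pro install folder (cuda_supported_cards.txt, if memory serves - treat the path and card string below as assumptions, not gospel):

```python
from pathlib import Path

def whitelist_gpu(whitelist: Path, gpu_name: str) -> None:
    """Append gpu_name to the CUDA whitelist file if it isn't listed."""
    lines = whitelist.read_text().splitlines() if whitelist.exists() else []
    if gpu_name not in lines:
        lines.append(gpu_name)
        whitelist.write_text("\n".join(lines) + "\n")

# On a real CS6 install this would be something like (path and card
# string from memory - verify against your own system):
# whitelist_gpu(Path(r"C:\Program Files\Adobe\Adobe Premiere Pro CS6"
#                    r"\cuda_supported_cards.txt"), "GeForce GTX 670")
```

The catch, per the forum reports, is that getting the card past the whitelist doesn't guarantee an actual performance benefit.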

Of course, I could always switch to Final Cut Pro X, but I would be re-learning a whole new editing paradigm right in the middle of shooting a major project - not a great idea.

I could run Premiere Pro in software-only mode, but that would mean vastly increased rendering times.

Using Final Cut Pro 7 would mean that I would have to convert all my footage to an intermediate format - just like I do now.

To set up either a PC or Mac with Avid would blow more than half my entire budget.

So basically I've talked myself out of a Mac, Final Cut, and Avid. Which leaves a pre-built PC running Creative Cloud.

There are a number of video-oriented PC building stores out there.

I went with one.

I got a PC with:

  • An Intel i7 3930K processor. 
  • 16GB of RAM manufactured specifically for the system builder. 
  • A 256GB SSD drive. 
  • 8TB of internal RAID storage (Two 2x2TB RAID 0 arrays - one for editing, one for rendering). 
  • An Nvidia GeForce GTX670 2GB video card - which is faster than a Quadro 4000 for less than half the price. 
  • Two year parts and labor + 1 year express pick up warranty.

I ended up going with ADK, since they were priced right, had a great warranty, and allowed me to send in my Blackmagic capture card to fit into the new system. Oh, and they test the system thoroughly before sending it out. Unfortunately, my capture card was DOA (and I didn't have enough money in the budget to replace it), so I'm currently doing without it.

I've now had the system for 3 days, during which I've poked and prodded at the video production programs (Premiere, After Effects, Encore and SpeedGrade). I've barely scratched the surface of the other programs in the suite.


So, without further ado, on to the pluses and minuses.

Given the right hardware, Premiere Pro CS6 is now a pretty awesome workhorse. Instead of doing the proxy file dance, I can now edit footage from my T3i without transcoding. Many common effects play back smoothly without rendering.

In particular, Warp Stabilizer is amazing; not so much because of how well it stabilizes footage (other editing programs have the same or similar capabilities), but because it can do so in the background while you edit, and it doesn't require rendering the clip it's used on, even when you change the basic settings... Unless you stay zoomed out and want it to try to fill in the black borders generated by the filter moving the image around; then you have to render. Otherwise, you can change the basic parameters to your heart's content, and it still plays back the footage buttery smooth unrendered.

What used to be a whole process involving rendering out uncompressed to After Effects, setting tracking points and rendering back out to an uncompressed file is basically reduced to dragging and dropping an effect on a clip. Awesome.

One particular gripe I had for a long time was how crappy the process of making a DVD from Premiere Pro was. Instead of rendering at full resolution and downsampling, Premiere Pro's Media Encoder would render out everything - including titles and effects - directly at DVD resolution, even if the timeline was HD. To add insult to injury, Premiere Pro had a 2-use-only Dolby Digital encoder; so you were forced to render out the audio in PCM, import the video and audio files into Encore (which had an unlimited-usage Dolby Digital encoder), and hope that you calculated the video bitrate low enough so it didn't go over DVD capacity.
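That bitrate guessing game, at least, is just arithmetic. Here's the back-of-the-envelope version (the numbers are approximations - real muxing overhead varies, hence the safety factor):

```python
def max_video_kbps(runtime_min, audio_kbps=1536.0,
                   capacity_gb=4.7, safety=0.95):
    """Highest average video bitrate (kbps) that fits on a DVD-5.

    capacity_gb is decimal gigabytes (as discs are marketed); safety
    leaves headroom for muxing overhead, menus, etc."""
    capacity_kbits = capacity_gb * 1e9 * 8 / 1000 * safety
    per_second = capacity_kbits / (runtime_min * 60)
    return per_second - audio_kbps

# A 90-minute timeline with uncompressed PCM stereo audio (1536 kbps)
# leaves roughly a 5,079 kbps average video budget - and you can see
# why compressing the audio to Dolby Digital frees up so much video
# bitrate.
```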

In Premiere Pro CS6, there's finally a proper unlimited-use Dolby Digital encoder, and really clear labels for encoding presets. Awesome. I still had to select the Dolby Digital encoder in the settings (and save a preset while I was at it), but it works consistently.

This version of Premiere Pro (like the previous two versions) uses a maximum of 12GB of RAM, so 16GB or greater is ideal, especially if you run memory-intensive 3rd-party plugins. The plus side? I can load up a whole timeline of clips, and it hasn't crashed yet. I could get used to this.

Media Encoder is now a truly separate application, so you can use it to quickly convert a standalone clip to a Youtube version or whatever you'd like without creating a new Premiere Pro project, or fiddling around with After Effects.

After Effects doesn't feel that different to me, but I have yet to try the Mocha tracker built in to it. If you do a lot of After Effects work, get as much RAM (and as many processor cores) as you can afford - After Effects will use it all if you let it. Based on what I've seen so far, I would recommend 32GB of RAM or more.

I might be having some issues with the results of the Neat Video plugin not showing up properly in the monitor window, but otherwise, After Effects is nice and stable.

SpeedGrade is a bit of an odd duck. It's a really powerful grading program, but it's designed primarily for an uncompressed video workflow (the "send to SpeedGrade" option in Premiere Pro renders out your timeline to .DPX frames). I haven't had much luck getting it to import .EDL sequences from Premiere that point to the original media.

The interface is pretty arcane, too; I think there's a theory that if a program was designed to cost thousands of dollars, it should have an interface designed to fit with a very particular workflow and setup, rather than having silly things like menus and a help file. The timeline is confusing, and trying to do simple things like set up a new project (rather than just deleting the timeline of the old one and adding new clips) still eludes me.

On the other hand, the actual quality of the grade you can get beats the ever-loving #$%@ out of the built-in color correction effects in Premiere Pro and After Effects. I'm going to have to basically take a course to learn how to use them properly, but the secondary correction passes alone are just incredible. Now, to learn how to properly set the colorspace to match 8-bit output files...

I haven't messed around with the menu creation abilities, but Encore appears to be basically unchanged. I need to get some Blu-Ray discs to play around with that a bit - maybe a BD-RE disc for test burns?

Anyways, I'll have a lot more time to play with these programs (and more) in the next few days, so I might report more in a week or so.

Update 1 (9/7/2012):

I had one disk of my "render" RAID 0 array drop out with an error. I re-built the RAID, checked it out thoroughly and it seems to be working fine, but I will definitely be backing stuff up more frequently. Also, the hard disks make a loud "click" from time to time - although I've read that this is normal for that particular model of drive.

I've experienced some occasional program crashes in After Effects and SpeedGrade. SpeedGrade has an (automatic, background) Quicktime importer that crashes frequently, although the main program remains running. Clearly, SpeedGrade works best with .DPX files - although my "render" RAID dropped out in the middle of working with .DPX files. If it does so again, I'll note it here.

I might seriously consider upgrading to 32GB of RAM - After Effects sure can use it, and I'm sure other programs would love it, too.

Cineform Neo 5 footage imports fine, but renders out as colored noise in After Effects and Premiere Pro. Thankfully, Virtualdub can access the footage (using the Quicktime plugin, I believe, which you need to download separately), so I could transcode it to another format. There appears to be no upgrade pricing from Neo 5 to Cineform Studio Pro... Which is unfortunate considering how soon after I bought Neo the latter came out. So, I think I won't be using Cineform for intermediate files.

I'm going to have to really study the SpeedGrade manual to figure out how to make sure I'm working on a new project. As it stands, I guess it's basically set up to work on one project, then clear out all the files associated with that project before going on to the next one.


Thursday, May 10, 2012

The Old Man and the Taily-po - Production Notes

So, around the end of March, I finished my first real short film since film school. It's called "The Old Man and the Taily-po", and it's based on a folk legend with a ton of variations around the deep south, Ozarks, etc. that's usually told as just "Tailypo". I adapted my version to the area of New Mexico that I live in. Here's the short itself:



If you'd like to read more about the legend, here's the Wikipedia entry: http://en.wikipedia.org/wiki/Tailypo

The short was primarily meant to be an entry into YouTube's "Your Film Festival", but it was also an exercise for me in how much of this story could be done with (basically) no budget.

My production equipment was:

  • Canon T3i/600D DSLR camera
  • Canon EF-S 18-55mm f/3.5-5.6 kit lens
  • Canon EF 50mm f/1.4 prime lens
  • Zoom H4n digital audio recorder
I considered renting a better lens, but two considerations changed my mind:

  1. This project was going straight to YouTube, where overall composition and lighting would make more of an impact than raw sharpness.
  2. I was seriously considering converting the whole project to black and white, to give it a "Twilight Zone" look.
It turned out that I would have been hampered by using a bunch of other lenses anyways, because I shot fast, and even changing between the two lenses I had was a luxury.

The actor, dog, locations, and music were all obtained for free, in trade, or very reasonably. This has been a great upside of the area I live in at the moment: There are a lot of people around here with passion about art, music and movies who love to just go out and do the things they love, and are also experienced enough to set proper boundaries about what they will and won't do.

I shot over the course of four days: two at outdoor locations and two at a cabin built as a movie set. I was wildly ambitious about how much I could get done at the cabin set, but it was also a great experience in shooting incredibly quickly.

How quickly? Try 30-40 setups per day at the cabin. The outdoor shooting was much less ambitious, but I also wasn't under the same time pressure there.

This speed was only possible because of the help of several other people, a surprisingly cooperative (untrained) dog, and the versatility of the T3i: I didn't have a ton of equipment jamming up the set, and I could very quickly re-position the camera to get new and interesting angles.

I shot about 95% of the footage using the Similaar "Flaat_2" picture profile. Because I was going to export to Youtube, the very slight in-camera sharpening would be acceptable, and the shadows wouldn't have to be wrestled back into place like they would with Technicolor's Cinestyle profile. I could have also used the Marvels Cine V3.4 profile, but (I reasoned at the time) in the event that I decided to go with color footage, Flaat_2 has better default skintones (a somewhat reddish tint, same as the "Portrait" picture profile on which Flaat_2 is based).

All three profiles mentioned above are designed to create a more "flat" image: they decrease contrast so the camera can record more detail in the shadows and highlights, without a "baked-in" look that can't be adjusted significantly in post. It's kind of a kludgey way to imitate the "raw" mode of higher-end digital cinema cameras, but I've come to appreciate using it since I don't carry around a portable video monitor with false color/vector scopes/etc.

Anyways, the point of choosing a flat picture profile is that it gave me enough image information to simply shoot by exposing for what I wanted to show in scenes lit by practical sources. This didn't mean I ignored lighting, just that I didn't have to add a significant amount of light to most scenes. It meant that I could use a 75-watt incandescent bulb to light much of the cabin interior shots at night, and that was enough light for my particular aesthetic - which was to only use motivated light sources. For some shots, I only used a Coleman lantern. For this story, I wish I could have used it more.

There were some downsides to this approach. For many nighttime cabin shots, I had to shoot at around f/1.4 to keep my ISO down. Shallow depth-of-field junkies love this, but if you're trying to keep an actor's face in focus, it's a pain in the ass. Since I didn't have a follow focus rig, I decided to just let the shots go out of focus when my actor moved a fraction of an inch. Also, it limited me to the 50mm lens - great for close-ups, not so much for wide shots. For the most part, I don't think it turned out too badly, but in the future, I would love to use a Canon 5DmkIII and just increase the ISO.

Speaking of high ISOs, one of the expenses on this short was buying a copy of the Neat Video plugin for After Effects. It's not a click-and-forget sort of plugin - you do have to adjust and fine-tune every shot you use it on - but it allowed me to take 3200 ISO shots and make them look pretty darn clean. I even used some 6400 ISO shots that were lit only with two tactical flashlights; I wouldn't recommend doing this due to the awful amount of noise in the image, but as long as you keep the shots short, Neat Video can make a nasty-looking image look significantly more acceptable.

I'm currently still using the Adobe Production Premium CS3 Suite, which is getting very long in the tooth. At first, I wasn't even sure I could do a real workflow for H.264 material without transcoding everything to an intermediate codec. Initially, that intermediate codec was going to be Cineform, but my current hardware/software combination made that unrealistic, so I figured out an offline/online edit workflow.

My post workflow was:

  • I transcoded 23.976fps 16x9 anamorphic DV25 (aka "MiniDV") Quicktime proxies from the original camera footage with the exact same file names (obviously rendering them out to a different directory than the originals). I used MPEG Streamclip for the transcoding, and I highly recommend it for batch-encoding to Quicktime formats, especially since it will preserve the original framerate settings by default.
  • I edited the short in Premiere Pro CS3 using the DV25 proxy files.
  • When done, I exported a mixdown of the audio tracks to a single stereo .WAV file, then imported it into the project, put it into the timeline, and muted every other audio track.
  • I opened up a new uncompressed 1080p/23.976fps Premiere Pro project and imported the DV project file. I made all the video clips offline, then re-linked the clips to the original footage. Without opening any of the sequences, I saved the new project and quit. If I tried to open/scrub through any of the sequences, Premiere Pro would run out of memory and crash.
  • Opening up After Effects, I imported the uncompressed Premiere Pro project file. I went through the main sequence and checked to make sure that all the clips were properly linked to the original footage, at the right size, and I removed all the audio tracks aside from the mixdown track. 
  • I then went through and did all my color grading clip by clip using Magic Bullet Looks, and noise reduction using Neat Video.
  • Once everything looked good, I rendered out a Blu-Ray, DVD and Cineform Quicktime copy. The Cineform version was used to encode to an internet-friendly H.264 version for Youtube using MPEG Streamclip.
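The proxy transcode step lends itself to scripting. I used MPEG Streamclip, but the mirror-the-filenames idea looks like this with ffmpeg (hypothetical - the DV25 flags are a sketch, and ffmpeg's DV encoder is pickier about legal frame rates than QuickTime's, so holding 23.976fps exactly may need a different proxy codec):

```python
from pathlib import Path

def build_proxy_commands(src_dir: Path, proxy_dir: Path,
                         codec_args: list[str]) -> list[list[str]]:
    """Build one ffmpeg command per camera file, writing each proxy
    with the SAME file name into proxy_dir - the whole offline/online
    trick rests on the names matching, so the NLE can later relink
    the edit back to the originals."""
    proxy_dir.mkdir(parents=True, exist_ok=True)
    return [
        ["ffmpeg", "-i", str(f), *codec_args, str(proxy_dir / f.name)]
        for f in sorted(src_dir.glob("*.MOV"))
    ]

# Hypothetical DV25-ish settings; ffmpeg's DV encoder only accepts
# legal DV frame sizes/rates, so adjust (or swap codecs) as needed.
dv25 = ["-c:v", "dvvideo", "-s", "720x480", "-aspect", "16:9",
        "-c:a", "pcm_s16le"]
```

From there it's one `subprocess.run()` per command, or paste them into a batch file and walk away.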
And some things I learned:

  • I enjoyed directing and shooting the short, and I will likely direct, shoot, etc, again.
  • The media file swapping "trick" only works if you keep the framerate the same between the originals and the proxies, so I don't believe 720p/60 footage could be edited accurately using DV25 footage (it only does up to 29.97fps). 
  • However, I did end up using 30p and 60p footage that was played back at 23.976fps for some slow motion shots. You can set this via the "interpret footage" menu option in Premiere Pro or After Effects. I believe I had to set it in After Effects, so it's not ideal for getting the timing precise during the DV25 proxy editing for the above-mentioned reasons. I adjusted the cut in After Effects but kept it the same length so audio sync would be maintained. Yeah, I know, kludgey; but it worked.
  • For greater precision with editing slo-motion clips, you could import the higher-framerate clip into After Effects on its own, set the playback speed to your project framerate, and render out to an intermediate format (like Cineform), which could then be used to generate a proxy. This way, you wouldn't have to adjust anything later.
  • You need to set your memory limits properly in AE, or AE will crash constantly. I used the /3GB memory switch to allow Windows XP to give AE more memory, which helps if you have more than 2GB of RAM installed.
  • Only basic dissolves, image size/motion and framerate changes will survive the After Effects import process.
  • You need to apply Neat Video before Magic Bullet Looks (or any other effect) in order for the former to work.
  • I ended up not having enough hard drive space to transcode all my footage to Cineform, but I did end up using Cineform as an intermediate codec for a couple of HDV clips I processed. As a high-quality lossy intermediate codec, it's pretty darn good (and, as a 10-bit format, it's great for upsampling your 8-bit footage for color correction). It's just not a great editing codec; at least with my current software/system.
  • I need more memory.
  • I need more/bigger hard drives. There is no such thing as "too much free hard drive space", but if I'm ever going to work on a documentary again, I will need a lot more hard drive space.
  • You really can edit and finish an H.264 project in Premiere Pro CS3 with the help of After Effects.
  • I really need a faster processor - render times for the After Effects conform were around 4 hours for 12 minutes of footage. Granted, this is with Magic Bullet Looks applied to every clip and over 60% of the short going through Neat Video processing, but still...
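For anyone fuzzy on the slow-motion bookkeeping above, "interpret footage" is just ratio arithmetic:

```python
def conformed(shot_fps, project_fps, src_seconds):
    """Playing shot_fps footage at project_fps: how slow it plays,
    and how long the conformed clip runs (same frames, spread out)."""
    slowdown = shot_fps / project_fps
    new_seconds = src_seconds * slowdown
    return slowdown, new_seconds

# 4 seconds shot at 59.94fps, interpreted as 23.976fps:
speed, dur = conformed(59.94, 23.976, 4.0)   # ~2.5x slower, ~10 seconds
```

Which is also why the DV25 proxies can't represent this precisely: the proxy and the original have to agree on framerate for the relink trick to stay frame-accurate.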
So in all, making this short was a great experience. For my next project, I hope to get a system upgrade so I can run CS6, and be able to edit H.264 footage natively (along with Red, Alexa, F3, Canon's higher-end codec, etc).

Thursday, February 16, 2012

T3i / Cineform Preliminary Notes


Quick Update:

I will have a much longer post in the future about my experiences actually shooting and editing a project with the T3i and Cineform, but here are some of my initial thoughts:

  • Cineform has an odd issue where it changes its gamma setting during playback in Adobe Premiere Pro. Supposedly, this is an issue with the video overlay function of NVIDIA graphics cards, but that seems a little strange to me, since no other codec that I've used has this issue. You can mitigate this effect a bit by playing with the image controls in the video overlay section of NVIDIA's control panel settings, but it doesn't really go away. A more annoying quirk is that there is a slight pause between hitting the playback button on the timeline and having the footage start playing. This drove me a bit bonkers after a while.
  • My computer is not fast enough to edit DSLR footage natively (even if Premiere Pro CS3 could handle it), so my current workflow is a little odd, but seems to work:
    • Transfer Quicktime H.264 .MOVs from SD card.
    • Convert .MOVs to DV25 ("MiniDV") .AVIs
    • Edit project using DV25 .AVI files (cuts and basic transitions only for picture).
    • When project is finished, clean up project so only footage in use is in project file.
    • Make a note of footage files used, then convert .MOV originals of footage to Cineform .AVIs - into subfolder of project folder (I call it "Cineform Transcodes").
    • Move DV25 .AVIs into subfolder of project folder (I call it "DV Proxies").
    • Move Cineform transcodes into main project folder.
    • Create new Cineform project file in Premiere Pro.
    • Import DV25 project file into Cineform project.
    • Add all needed transitions, effects work and titles.
    • Render out project to Cineform master, Blu-Ray and DVD.
So yeah, I'm essentially doing an old-fashioned offline/online edit. It's not great if you desperately need to save storage space, but the DV25 .AVI files will play back buttery smooth on any computer from the last 7 years or so. In the future, I'm looking to upgrade to a current computer and version of Premiere Pro so that I can do all my editing natively and then export to Cineform for color correction and such. The monthly leasing option for CS6 looks like the way to go for me, but we'll see.
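The file-shuffling steps in the middle are mechanical enough to script. A sketch with Python's pathlib (the "DV Proxies" and "Cineform Transcodes" folder names are just my own conventions from above):

```python
from pathlib import Path

def swap_in_transcodes(project: Path) -> list[str]:
    """Park the DV25 proxies in a subfolder and move the Cineform
    transcodes into the project root, keeping file names identical so
    Premiere relinks cleanly. Returns the swapped names; any proxy
    without a matching transcode stays put (that clip wasn't used)."""
    proxies = project / "DV Proxies"
    transcodes = project / "Cineform Transcodes"
    proxies.mkdir(exist_ok=True)
    swapped = []
    for tc in sorted(transcodes.glob("*.avi")):
        proxy = project / tc.name
        if proxy.exists():
            proxy.rename(proxies / tc.name)   # park the DV25 proxy
        tc.rename(project / tc.name)          # transcode takes its place
        swapped.append(tc.name)
    return swapped
```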


As to the T3i, the 3x "digital zoom" ends up being a lifesaver when you quickly need to switch over to a longer lens; or in my case when I need the image stabilization of the 18-55mm kit lens on a subject that needs more zoom. Do not zoom in any farther than 3x, though, or you will start to see the image artifacts of an actual digital zoom that will be almost as bad as zooming the image in post.

I will also look into renting some better quality (and image stabilized) lenses for my next production, so stay tuned for updates.

Thursday, December 1, 2011

T3i Picture Profile Camera Tests

After a bit of trial and error, here are some picture profile tests I shot using the T3i.

I'm having to re-orient my video brain a bit, but aside from the moire/color fringing artifacts, this camera beats my old Sony HDV camera handily (especially as a stills camera :) ).

My next round of tests will focus on adjusting the in-camera sharpening and color within these looks, trying to see if I can "bake-in" the look more so that I don't have the same dire need to color correct in post.




Tuesday, November 22, 2011

A Long-Expected Update - DSLRs and Cineform

So, I finally got a DSLR - A Canon T3i, to be exact. There are a few reasons for this:

  1. It's pretty cheap. For around $1500, you can get a camera with a kit zoom lens, a decent prime lens (I chose the Canon 50mm f/1.4 USM, which works out to about an 85mm equivalent on the T3i due to its sensor size), a bunch of decent SD memory cards, a bunch of batteries, and a basic filter kit.
  2. The T3i has a sort of sensor crop mode that's called "digital zoom", even though it only becomes a true digital zoom (as far as I know) once you push past the basic 3x level. This is nice for those of us who don't necessarily want to lug around a big zoom lens all the time, but it's really important if you need to shoot a subject that would normally cause moire and aliasing issues - since the sensor is cropped to 1920x1080 resolution instead of line-skipping from a higher resolution, the aforementioned artifacts are significantly reduced - at the cost of some image sharpness. Also, it's a 3x crop factor, so to keep the same framing and depth of field you might have to change to a wider lens/focal length, or back up and sacrifice some of the image characteristics of having the camera closer to your subject.
  3. It has an articulating rear LCD display. This means you can see what you're shooting without having to always be right behind the camera. You can even flip the display over for checking framing while shooting yourself (which you might literally consider once you see how goofy you look on camera).
  4. It provides an upgrade path to other Canon DSLR and cinema cameras. By getting prime lenses that work on full-frame cameras, you're essentially future-proofing your investment in good glass that you can use on either better DSLRs or cinema cameras with a Super35-sized sensor and a Canon mount/adapter. The C300 seems like a decent upgrade goal, but you can also use Canon (or Nikon) glass on the newer Red cameras with an adapter.
The only problem with getting the T3i is that I have a fairly old computer (built late 2008) that coughs and wheezes when trying to play back the H.264 video footage that this camera produces. It's also a PC, so no ProRes love.

The good news? I got Cineform Neo. Here's why that's awesome:


  1. It lets me transcode the 8-bit, 4:2:0 footage to 10-bit, 4:2:2 footage, at a fraction of the disk space of uncompressed footage - just like ProRes. Why is this important when it's not actually increasing the quality of the shot? Two words: Color correction. In general, any post processing that you do on footage benefits from a higher color bit-depth.
  2. Speaking of color correction, Cineform Neo has FirstLight, which is kind of a poor man's RedCineX. Like RedCineX, FirstLight lets you tweak color and contrast via metadata. Translation: you can apply basic looks and color correction without rendering out a new file, change the settings at any time, and have those changes show up in any program that can play the file.
  3. Cineform is a wavelet-based codec, which means that it kind of smooths the image a little, and compression artifacts look more like smears than blocks. This is an aesthetic preference, and it won't solve glaring compression artifacts from the original footage (I'm considering getting the Neat Video plugin for After Effects to help with that a bit), but it's still pretty cool.
  4. Like ProRes, Cineform is made to hold quality through multiple recompressions. I have yet to test this, but Cineform says anything at "High" quality or better can do this.
  5. It has a decent built-in capture program, so there's no need to launch your massive video editing program just to capture footage.
  6. It has built-in presets for Premiere Pro, which happens to be my video editor of choice/necessity.
  7. It has no real image size constraints, so it can work with footage of any size, even 6K Red Epic footage (if you're insane).
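To see why that extra bit depth matters for grading, here's a rough shell sketch (my own illustration, not Cineform's actual math): push footage one stop brighter and count how many distinct 8-bit output levels survive, starting from an 8-bit source versus a 10-bit intermediate.

```shell
# Count distinct 8-bit output levels after a 1-stop (2x) brightness push.
# An 8-bit source leaves gaps (banding); a 10-bit intermediate fills them in.
levels=$(awk 'BEGIN {
  for (v = 0; v <= 127; v++) a[v * 2] = 1            # 8-bit source, doubled
  for (v = 0; v <= 511; v++) b[int(v * 2 / 4)] = 1   # 10-bit source, doubled, then shown at 8-bit
  ca = 0; for (k in a) ca++
  cb = 0; for (k in b) cb++
  print ca, cb
}')
echo "$levels"   # distinct levels: 8-bit source vs 10-bit source
```

The 8-bit source ends up using only half of the available output levels (every other value is a gap), while the 10-bit source still covers all of them. That's the banding difference you see after heavy correction.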
There are only a couple of issues with Cineform:

  1. To get all the nifty features I've outlined above, you have to pay $300. I think they're worth it, but I could definitely see folks getting turned off by the price. The good news? Cineform NeoScene gives you just the conversion ability for $129.
  2. Cineform files are about 2-4x larger than the H.264 originals, depending on source image complexity and Cineform's quality setting, so you'll need plenty of storage space to convert your footage. Personally, I'm going to try a film-style workflow and only convert clips as I intend to use them.
  3. Larger files mean higher datarates, and if you're rocking single un-RAIDed hard drives like I am, your realtime capture will be limited to standard definition (unless you like dropped frames).
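Some back-of-the-envelope math shows the datarate problem (the `rate` helper is hypothetical, and 2 bytes/pixel assumes 8-bit 4:2:2 uncompressed video):

```shell
# Uncompressed datarate in MB/s: width x height x 2 bytes/pixel x frames/sec
rate() { awk -v w="$1" -v h="$2" -v fps="$3" 'BEGIN { printf "%.0f\n", w * h * 2 * fps / 1048576 }'; }

rate 720 480 30    # NTSC SD  -> ~20 MB/s, about what a lone drive can sustain
rate 1920 1080 30  # 1080p    -> ~119 MB/s, well past a single hard drive
```

Cineform compresses well below those uncompressed numbers, but the same ratio holds: HD capture wants a RAID, SD doesn't.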
I've been playing around with picture profiles for the T3i, and have decided that the low-contrast profiles are best for cinema-style shooting rather than general use. For wildlife videography, I think a little sharpness and baked-in color is actually a good thing, although that's definitely a personal preference rather than a professional opinion.

Next up? Trying out the proxy generation feature of Cineform, which retains the metadata of the parent file even if you manipulate it after making the proxy. Translation: any color corrections made in FirstLight are automatically applied to both the proxy and the original Cineform file. I'm also trying to figure out how to expose outdoor scenery properly; I think a color chart might be in order. Oh, and maybe posting some of my test videos like I promised people ages ago.

Friday, April 15, 2011

Stabilizing footage on the cheap with VirtualDub and DeShaker

So, I recently shot some helicopter footage... handheld. It has a raw feel that I like, but I wanted it smoother so I could speed it up without it looking like a Keystone Kops action scene. I've tried using After Effects (CS3) to stabilize footage, but it takes forever and produces ugly moving black borders.

Enter VirtualDub and the DeShaker plugin. Here are my tests:

First time:

http://vimeo.com/19858460

Second time:

http://vimeo.com/20394119

Pretty cool, yeah? So, a fellow Vimeo member sent me a message asking for my settings. Since he was using an AVCHD camera, I directed him here first:

http://vimeo.com/groups/avchdlite/forumthread:8538

And then the rest of the reply:

Stop when you get to the section called "Deshaking:". If your video is now loaded (successfully) in VirtualDub, continue. If not, you should probably export your footage to an .AVI file using your usual editing program (Vegas, Premiere Pro, etc.) and load that into VirtualDub.

1) Go to the Video menu and select "Filters". Click "Add", select "DeShaker", and click "OK". This will get you to the DeShaker controls.

2) Make sure you have "Pass 1" selected. If your footage is interlaced (I think yours is), make sure to select that in the "Video Type" menu.

3) Set "Scale" to "Full (most precise)". Set "Use pixels" to "All (most robust)". Uncheck "Detect zoom". Click "OK", then "OK" again to close the Filters window.

4) Make sure your footage is at the beginning, then hit the F5 key, and watch as the calculation pass runs. :) It may take a while...

5) When it's done, open the Filters window again, select the DeShaker plugin and click "Configure". Select "Pass 2". Set the "Edge compensation" menu to "Fixed zoom (no borders)". Click "OK", then "OK" again.

6) If you have enough hard drive space for uncompressed video (around 400 GB per hour), go to the File menu and click "Save as AVI". If you don't, go to the "Video" menu and click "Compression" to set a different video compression format (one your video editing program can use).

Monday, September 27, 2010

On making an interlaced DVD from interlaced HD, and why it's such a pain in the @#$@#

I hate interlacing. Really, I do. It gives the illusion of greater motion detail than is there - but in the digital video editing realm, it adds all sorts of problems. This is especially true of interlaced high definition video, and doubly true when you want to turn HD interlaced video into standard definition (SD) interlaced video. Just about every method of making a DVD from an HD source assumes that you want to deinterlace the footage at some point along the way, which removes the benefits of shooting interlaced in the first place (and can end up looking very choppy if you aren't careful with your field order settings. Yes, there's actually a field order in interlaced video. Don't get me started...)

I edit with Premiere Pro CS3, which is a very versatile program, but making a DVD directly out of an HD project in Premiere Pro results in crap quality, since Premiere renders all the elements (titles, effects, SD footage, etc.) at SD resolution instead of rendering at HD first and then downconverting to SD. In the past, the only solution has been to export an uncompressed (or lightly compressed) HD movie file, then downconvert that somehow. Needless to say, if you're on a low hard drive budget and working on a huge project, this is a major pain in the ass.

One way to handle this is to take your rendered-out HD video and import it directly into Encore, but that can also create odd-looking results. For best quality, you generally have to downconvert the HD video first by making an uncompressed intermediate (16x9 anamorphic) SD file from the HD file, and import that into Encore. You can also just drop the HD file into After Effects in a new composition, change the composition to the DV widescreen preset, conform the image to fit, and render out to MPEG-2 from there. The problem with both processes is that they deinterlace the video automatically at some point along the way. I've gotten used to just doing this, and it creates a sort of pseudo-film look, but again, it removes the point of shooting interlaced video in the first place.

The HD2SD filter for AVISynth, combined with the Debugmode Frameserver plugin for PPro CS3, allows a crazy, hacktastic workaround that serves out rendered-at-HD-then-downconverted-to-SD goodness one frame at a time to another program (in this case VirtualDub), so all you need to do is render out an SD intermediate file (here, using the lossless Lagarith codec). You can then import that intermediate into Encore and render out a properly interlaced DVD that preserves all the lovely interlaced motion... in theory. In actuality, Encore seems to have trouble properly detecting the interlacing, and so once again deinterlaces the footage.
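For the curious, the AVISynth side of that chain is tiny - something like the sketch below. The filename is hypothetical, and I'm leaving HD2SD at its defaults here; check the HD2SD documentation for the interlacing and aspect-ratio options before trusting the output.

```avisynth
# Load the signpost AVI served up by Debugmode Frameserver,
# then hand the HD frames to HD2SD for the downconversion.
AviSource("premiere.avi")
HD2SD()
```

Open that .avs file in VirtualDub and it behaves like a regular video, with each frame pulled from Premiere Pro on demand.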

Also, by default, nothing in the AVISynth process chain is multithreaded, so if you have a multicore processor (or multiple processors, if you're a lucky dog) you will either have to suffer through single-core performance, or configure everything in your render chain to be multicore, which is complicated and not guaranteed to boost performance much (remember, any holdup in Premiere Pro's rendering will negate the advantages of multicore rendering further down the chain). The upshot? The process is very, very slow. On my Core 2 Quad 2.4GHz machine, it takes around 12 minutes to render each minute of video... and that's with little to no effects.

So what's the solution? Well, there are basically two:
  1. Play back the uncompressed HD footage through some sort of hardware downconverter into another device set up to record uncompressed video. Since I don't have another computer around with a second Blackmagic Intensity card, that's not happening here. This is by far the easiest option, though, and happens in real time. Of course, there's still no guarantee Encore will recognize the SD footage as interlaced, and if you experience a playback skip during the recording, you either end up with two files to stitch together or have to start the recording over. So you should probably have the HD video on a RAID (0, 5, or 6) array to do this properly.
  2. Pass the SD downconvert through a third-party MPEG-2 encoder that will detect the interlacing (or allow you to set it properly). So far, I've only been able to do this with the freeware HCEncoder, and the quality doesn't seem to be all that great - but it does produce properly interlaced video as long as you set the field order properly (usually Bottom Field First).
Update: I downloaded and installed TMPGEnc Free, and that will also work with the SD downconvert - as long as you make sure the interlaced and bottom-field-first parameters are either detected automatically or set by hand. The video quality is more consistent than HCEncoder's for me, but that could also be because I'm not quite sure how to properly configure the latter. Note that the MPEG-2 encoding in TMPGEnc Free expires after 30 days, so use it wisely, and then consider buying the Plus version.

My guess is that most standalone encoding programs would produce similar results. Anyone have any issues with this sort of thing with DVD Studio Pro?

Saturday, September 11, 2010

Mounting CDs/DVDs in Dosbox on Ubuntu 10.04


Since Ubuntu 10.04 uses the CD/DVD volume name for the subdirectory it mounts the CD/DVD to, you can't use
/media/cdrom0
as a standard CD/DVD mount directory in Dosbox like you could in the past. This means you either have to change your /etc/fstab file (as detailed here - use at your own risk! I have not tried this.), or you have to make separate entries in your dosbox.conf file for each CD you want to use. For example, if you wanted to mount the Full Throttle CD, you'd put this in the [autoexec] section of your dosbox.conf:
mount d /media/FT1_00 -t cdrom
You could then generate a list of CDs like so:

mount d /media/FT1_00 -t cdrom
mount d /media/DN_3D -t cdrom
mount d /media/SQVI -t cdrom

And comment out everything except the CD you're trying to mount:

mount d /media/FT1_00 -t cdrom
#mount d /media/DN_3D -t cdrom
#mount d /media/SQVI -t cdrom

A front-end could potentially make this problem a non-issue, but I dislike using them, so there yah go. ;)
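If you collect a lot of discs, a tiny shell helper (hypothetical, just my own convenience function) can spit out that commented-out list from the volume names, ready to paste into dosbox.conf:

```shell
# Emit one commented-out DOSBox mount line per volume name; uncomment
# whichever disc you want before launching DOSBox.
gen_mounts() {
  for vol in "$@"; do
    printf '#mount d /media/%s -t cdrom\n' "$vol"
  done
}

gen_mounts FT1_00 DN_3D SQVI
```

Run `ls /media` with the disc inserted to see what volume name Ubuntu actually used.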

Oh, and remember that you can still use .ISO or .BIN/.CUE disk images as well (in fact, they're the *only* way to properly use multisession CDs in DOSBox at the moment.)

Saturday, August 7, 2010

CD Ripping under Ubuntu 10.04

Need to rip a CD? Got Ubuntu? Try Rubyripper. Inspired by Exact Audio Copy (which will no longer run for me under Ubuntu 10.04 using WINE), Rubyripper shares EAC's priority of accuracy over everything else, and after testing it on my quite-scratched Metallica CD, I can confirm it does at least as good a job as EAC, although the interface could certainly be prettier (again, just like EAC).

For an easy install, try grabbing the "GetDeb" .deb file appropriate for your version of Ubuntu here: http://linuxappfinder.com/package/rubyripper. Also make sure you have the audio codecs you want to transcode to installed (you can install LAME for MP3 encoding through the Ubuntu Software Center).

Sunday, May 16, 2010

Timidity and DOSBox in Ubuntu 10.04

For those of you who care about this sort of thing, here's how I finally got decent General MIDI playback for DOSBox in Ubuntu 10.04:

(These instructions are somewhat adapted from a post by Malor on the official Ubuntu forums)


1. Go into Synaptic Package Manager and install these packages:
  • dosbox
  • timidity
  • fluid-soundfont-gm
  • fluid-soundfont-gs
2. Open a command prompt, and type:
sudo gedit /etc/timidity/timidity.cfg
  • The last line in the file says:
    source /etc/timidity/freepats.cfg
    Put a # mark at the beginning of that line to comment it out. We're going to use the soundfonts we installed from the previous step.

  • On the next line, type:
    soundfont /usr/share/sounds/sf2/FluidR3_GM.sf2
  • Save and exit
3. If you don't already have a dosbox.conf you want to use, type:
dosbox
and hit enter. Dosbox will pop up. At its command prompt, type:
config -writeconf dosbox.conf
This will generate one for you. Then type:
exit
to quit dosbox.

4. Type
gedit dosbox.conf
  • Click the Find button, type:
    mpu401=
    and click Find. You should see a line highlighted.
  • Close the search popup.
  • The three lines starting with mpu401= should look like this:
    mpu401=intelligent
    device=alsa
    config=128:0
  • Save and exit
5. Restart Ubuntu
6. Open a command prompt, type:
timidity -iA -B2,8 -Os -EFreverb=0 2>&1 &
7. Start dosbox
8. Enjoy General MIDI. :)
9. Close terminal window when finished.

Note: You may have to occasionally change
config=128:0
To
config=129:0
in your dosbox.conf. Watch the output when you start timidity (or list the ALSA sequencer output ports with aconnect -o) and it'll show which client number to use.

Also, if you get permission errors while installing or running timidity, you may have to add "timidity" to the "audio", "pulse", and "pulse-access" groups in Users and Groups (In the System menu).
