There have been lots of comparisons between Final Cut Pro X and Premiere Pro CS6, with most focusing on features and workflows. This article discusses a series of multiple-format benchmark tests that analyzed comparative performance between the two programs.
Source and Export Formats
There are myriad formats to test and an unlimited combination of effects to apply. I tried to keep my approach simple. I tested with common formats that both Premiere Pro and FCP X could handle natively, like AVCHD, XDCAM EX, and DSLR footage.
I used one basic output preset with each program for all tests, encoding to 720p output using the H.264 codec, Main Profile at a video data rate of 10Mbps and audio at 320Kbps stereo. The files I created could be used for uploading to a user-generated content site like YouTube or Vimeo, or an online video platform like Brightcove, Sorenson 360, or Kaltura. I encoded at 29.97fps for 29.97 and 60p source footage, and at 23.976 for 23.976 source footage.
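As a ballpark check on that preset, the data rates translate directly into predictable file sizes. The sketch below is illustrative arithmetic only (function and variable names are my own); it ignores container overhead and VBR variation:

```python
# Rough file-size estimate for the export preset used in these tests:
# H.264 video at 10Mbps plus stereo audio at 320Kbps.

def file_size_mb(duration_sec, video_kbps=10_000, audio_kbps=320):
    """Approximate output file size in megabytes for a given duration."""
    total_bits = (video_kbps + audio_kbps) * 1000 * duration_sec
    return total_bits / 8 / 1_000_000  # bits -> bytes -> megabytes

print(round(file_size_mb(60)))    # one of the 1-minute test clips: ~77 MB
print(round(file_size_mb(3600)))  # an hour-long project: ~4644 MB
```

At roughly 77MB per minute, these files sit comfortably within the upload limits of the user-generated content sites mentioned above.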
I performed all tests on a 2 x 2.93 GHz Quad-Core Mac Pro from early 2009 running Mac OS X version 10.7.4 with 12 GB of RAM and an NVIDIA Quadro FX 4800 graphics card with 1.5 GB of onboard RAM.
I describe each test project in detail below. With one or two exceptions, for each format I started with a simple test: a single 1-minute color/brightness-adjusted video output to the H.264 target. Then I added a range of common effects, including titles, blurs, sharpening, layering via opacity adjustments, picture-in-picture, and creating a video wall, to see how these effects impacted performance.
All the effects that I applied were GPU-accelerated via Adobe’s Mercury Playback Engine and the NVIDIA graphics card, which I suspect is how most Adobe producers in a hurry would also work. Fortunately, in CS6, Adobe greatly expanded the list of accelerated effects, so this wasn’t limiting in any way.
Workflow and Encoding Options
Both Premiere Pro CS6 and Final Cut Pro X enable multiple encoding workflows, so let me define which I used up front. When rendering from Premiere Pro, I exported directly from Premiere Pro, not by queuing and outputting via Adobe Media Encoder. This locked up Premiere Pro for the duration, but in some instances, decreased rendering time by as much as 50%. My thought was, if you were in that much of a hurry to get the file rendered, you would use this technique and find something else to do during the encode.
FCP X offers a wide range of rendering options, as shown in Figure 1 (below). I started testing using the Send to Compressor option, which saved the time and disk space associated with creating an intermediate file. Then I tried Export Using Compressor Setting, which proved significantly faster in many cases, so I ran all tests using this option.
Figure 1. Choosing the most efficient output
When encoding in the Adobe Media Encoder, I exported with Use Maximum Render Quality enabled. While this can extend encoding time significantly, it can also increase quality. Since this is the setting I recommend that producers use, it seemed fair to use it in my tests. I also used 2-pass VBR for encoding, because it’s the setting I use in my own practice. In my Compressor presets, I used multiple-pass encoding, again because this is what I recommend, and because it’s also the default setting in all of the Compressor presets that I checked.
In addition, I disabled background rendering for all of my FCP X tests for a number of reasons, the most important of which was the ability to produce reproducible results. With background rendering enabled, I would get one result if I worked straight through a project, and another if I took a break for lunch in the middle.
I created sequence settings in Premiere Pro and projects in FCP X by dragging a video clip into the timeline, which prompted both programs to conform their settings to the configuration of the video. I ran each test with only that editor running and rebooted each time I changed editors. Other than frame grabs and other administrative writing-related activities, the machine was totally dedicated to rendering during all tests.
With most formats, I ingested in Final Cut Pro X, and simply used the footage that FCP X ingested in the Premiere Pro projects. That led to some interesting developments, as FCP X appears to change the file name in some instances, which, of course, broke the link in Premiere Pro. I can’t see Apple engineers losing sleep over this dynamic, but I have to say, changing the filename is probably something they should avoid doing if at all possible.
Just for the record, I ran several tests with Final Cut Pro X after converting the original source video to ProRes. I found very little difference in performance, and in most cases, working with ProRes was slightly slower. While I was surprised, I wasn’t shocked. It seems that the rendering bottleneck isn’t the conversion from H.264 into FCP X’s internal format.
Note that your mileage on this score will likely vary by computing horsepower. On a dual-core notebook, converting to ProRes might be required for usable performance. Let’s take a look at my tests.
Canon T3i DSLR
The footage used in these tests came from a newly acquired Canon T3i, shot in 720p60 mode. The shots themselves were boring: just me standing around, working on these tests.
In the first test, I rendered a single file after applying brightness and sharpness adjustments. Then I overlaid another clip over the first and toggled opacity from 0 to 100 over the duration of the clip. The next test involved a single picture-in-picture at 30% of original size while the final test was a five-clip video wall (shown below in Figure 2), with all videos at 30% of original size so there was no overlap, and no shadows or borders applied.
Figure 2. Great DSLR, dull video.
Table 1 (below) shows the results. The first two columns are rendering times in minutes:seconds while the third column shows the number of minutes saved for an hour-long project. As you can see, even with a Plain Jane single-layer project, CS6 saves over 2 hours of rendering time, which only increases with project complexity. If you’re producing an hour-long, six-stream video wall with DSLR footage (hopefully with more interesting content), CS6 would save close to 11 hours of rendering time.
Table 1. Performance comparisons for the DSLR format test.
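The hour-long column in these tables is straight-line extrapolation from the test clips. A sketch of the arithmetic, with hypothetical numbers and function names of my own (not values from the tables):

```python
def minutes_saved_per_hour(fcp_render_sec, premiere_render_sec, clip_minutes=1.0):
    """Extrapolate per-clip rendering-time savings to a 60-minute project.

    Arguments are render times in seconds for the same test clip in each
    editor; clip_minutes normalizes for tests that used clips longer than
    one minute (e.g., the 2.5-minute multicam test).
    """
    saved_sec_per_source_min = (fcp_render_sec - premiere_render_sec) / clip_minutes
    total_sec_saved = saved_sec_per_source_min * 60  # over a 60-minute project
    return total_sec_saved / 60                      # seconds -> minutes

# Hypothetical example: FCP X takes 4:30 and Premiere Pro 2:15 on a
# one-minute clip, so an hour-long project saves 135 minutes of rendering.
print(minutes_saved_per_hour(270, 135))
```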
AVCHD Tests
For the AVCHD tests, I used footage of a local rodeo shot with my Canon Vixia HFS10 (Figure 3, below). The first test involved a straight export of one minute of footage; the next involved three layers of footage, all one-minute long. The top clip started at 100% opacity, and transitioned to 0 over the duration of the clip; the second clip did the reverse, starting at zero and transitioning to 100%. The bottom clip remained at 100% opacity. All test clips were color-corrected.
Figure 3. And you thought your job was stressful! Test AVCHD footage from a local rodeo.
Table 2 (below) shows the results, which were most significant with a simple project, with the gap narrowing as the project became more complex.
Table 2. AVCHD test results.
XDCAM EX Tests
The footage used in this test came courtesy of NVIDIA, delivered with a reviewer’s guide for its GPU technology. The first test involved a straight render of one minute of XDCAM EX 1080p 23.976 fps footage, with color and brightness correction applied, as well as a sharpen effect. All these effects were applied to all clips in these three tests.
For the second test, I placed two shots of a music video side by side, while the third test showed four videos, each in its respective quadrant (Figure 4, below). To render the footage from all these tests, I used a 23.976 fps export preset in both programs.
Figure 4. The XDCAM EX footage I gleaned from an NVIDIA reviewers’ kit.
Table 3 (below) tells the tale. Most scaling activities, as well as all applied effects, are GPU accelerated, which likely is the reason that CS6’s comparative performance increased with scene complexity, as we saw with the DSLR footage in Table 1. With the video wall project, you would save well over two hours of rendering time.
Table 3. Sony XDCAM EX results.
JVC GY-HM700U Tests
The video used in this test was part of a ballet audition DVD that I produced for the dancer. I shot in 720p60 mode with a JVC GY-HM700U, which encodes into H.264 format and stores the video in an MPEG-2 wrapper that both programs edit natively.
In the first test, I encoded the color corrected footage with a simple lower-third title overlay (Figure 5, below). The second test involved a Gaussian blur filter that changed value over the 1-minute clip, starting at 0 and finishing at 10.
Figure 5. Footage from the JVC GY-HM700U.
Table 4 (below) shows a significant advantage in the single stream project that increases with project complexity.
Table 4. Results using H.264-encoded footage from the JVC GY-HM700U.
Multicam ProRes Tests
I wanted to run one multicam test, so I borrowed 720p 23.976 ProRes 422 footage from the DVD that comes with Mitch Jacobson’s excellent book, Mastering MultiCamera Techniques: From Preproduction to Editing and Deliverables. While it’s probably unusual for a Premiere Pro editor to be working with ProRes, it’s certainly not unheard of, and I work with ProRes sources frequently in my consulting and general production work.
In this series of tests, I used the footage shown in Figure 6 (below) from a Paul McCartney project (there’s also SD footage from an Elton John concert), and in the first test, created a 2.5-minute multi-cam clip in each editor and changed camera angles every 15 seconds. For the second test, I created a one-minute project in each editor, then overlaid one clip over another, adjusting the top clip to 30% opacity.
In the final test, I stacked five tracks over the bottom track and created five picture-in-picture effects, each 35% of original size, so there was some overlap. I applied a drop shadow to the clips in Premiere Pro, and a border to the clips in Final Cut Pro, since there is no native drop shadow in Final Cut Pro X (or border effect in Premiere Pro).
Figure 6. ProRes footage from a Paul McCartney project.
Table 5 (below) shows the results. Note that the first project was 2.5 minutes long, which throws off the math in the 60-minute column compared to the other one-minute projects. Again, we see some benefit with a simple, single-stream project, and the advantage increases with project complexity.
Table 5. Results from ProRes footage.
These results matter primarily to streaming producers and apply to a much lesser degree, if at all, to long-form producers. Within the streaming media production community, however, these tests reveal that Premiere Pro CS6 encodes faster than FCP X, and can shave hours of rendering time off longer, more complex projects.