The source material is heavily mixed. It comprises DV AVI in PAL, HDV 1080i PAL, XDCAM-EX HQ PAL, Canon 422 MXF, RED 4K and AVCHD 1080i NTSC. The source is exported to H.264-BR HDTV 1080i 29.97. The timeline is loaded with effects and transitions, many of them keyframed with Bézier curves, and up to 6 tracks are in use. This means that on export there is some scaling and rendering, as well as field-order reversal from lower field first (LFF) to upper field first (UFF) for some sources.

Here we are talking about H.264 compression, one of the most complex and taxing codecs for a computer these days. The main difference compared to MPEG2 is that the CPU needs many more steps to process the data, because the decoding is more complex. The CPU load is much higher, and each block of data stays with the CPU longer before it is handed back to RAM, even when hyper-threading works particularly well. Meanwhile, the next data to be processed are loaded into RAM. When the CPU is finished with the first block of data, it hands it back to RAM and loads the next set. This traffic comprises both algorithm data and frame data, so there is a lot of it on the road from RAM via cache to the CPU and back again. Occasionally that traffic halts, and just as with real traffic jams, the cause is not always clear.
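The back-and-forth between RAM and CPU described above can be sketched as a simple queueing model. This is a deterministic toy simulation, not a measurement: per time step, RAM delivers a fixed number of data blocks and the CPU finishes a fixed number, and all the rates are illustrative values chosen to show the effect, nothing more. When decoding takes more steps per block (the H.264 case), the CPU's service rate drops below the arrival rate and a backlog, the traffic jam, builds up.

```python
# Toy, deterministic model of the RAM -> CPU -> RAM pipeline.
# Per time step, `arrival` blocks of data land in RAM and the CPU
# finishes at most `service` blocks. All rates are illustrative.

def queue_depth(steps, arrival, service):
    """Return the backlog (queue length) after each simulation step."""
    depth, history = 0, []
    for _ in range(steps):
        depth += arrival               # new blocks loaded into RAM
        depth -= min(depth, service)   # blocks the CPU can process
        history.append(depth)
    return history

# MPEG2-like: the CPU keeps up with the data stream -> no backlog.
print(queue_depth(5, arrival=4, service=5))   # [0, 0, 0, 0, 0]
# H.264-like: more decoding steps per block -> the backlog keeps growing.
print(queue_depth(5, arrival=4, service=3))   # [1, 2, 3, 4, 5]
```

The point of the sketch: a small, steady shortfall in processing rate does not cause an occasional hiccup, it causes a backlog that grows without bound until something else (a lighter stretch of the timeline, a faster stage) drains it.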

It can be that all the logical cores are already busy crunching, that the CPU cache is exhausted, that the memory controller is saturated, or that RAM is still waiting for data from the pagefile, among a whole lot of other reasons.

In contrast to the MPEG2-DVD test, where all material had to be scaled down from 1920 x 1080 or 1440 x 1080 to 720 x 480, this test only requires scaling for some of the DV and HDV material. Fewer data packets are therefore handed off to the GPU for MPE scaling, reducing the latency on the route from RAM to GPU, back to RAM, and on to disk.

Because of the high compression efficiency of the H.264 codec, the discriminating factor is the speed with which algorithm data and frames are handed over to the CPU and its cache.

The basic ingredients here are RAM speed, the number of cores, and the clock speed.

H.264 encoding is like a highway: the number of lanes represents the number of CPU cores, the speed limit represents the clock and memory speed, and the number of vehicles on the highway represents the amount of CPU cache in use. More lanes (cores) are better, a higher speed limit (clock and memory speed) improves traffic flow, and the more vehicles (cache in use), the greater the chance of traffic jams.
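The "more lanes are better, but with diminishing returns" intuition can be made concrete with Amdahl's law: if only a fraction p of the encode parallelizes across cores, the speedup from n cores is 1 / ((1 - p) + p / n). The 0.9 parallel fraction below is an assumption picked for illustration, not a measured property of any particular H.264 encoder.

```python
# Amdahl's law: speedup from n cores when a fraction p of the work
# is parallelizable. p = 0.9 is an assumed, illustrative value.

def speedup(cores, parallel_fraction=0.9):
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores -> {speedup(n):.2f}x")
```

Under this assumption, doubling the cores never doubles the throughput, and no number of lanes can push the speedup past 1 / (1 - p): the serial part of the work is the permanent bottleneck, just like the on-ramp that every vehicle has to pass through.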

Interchanges, which represent dual-processor setups, can also cause traffic jams.

Dual processors need to monitor each other continuously and communicate about progress, which is like halving the number of lanes on the highway. That often causes traffic jams, unless traffic is light. The number of cores is still the number of lanes: the more, the better.
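The dual-processor penalty can be bolted onto the same Amdahl-style sketch by treating cross-socket coordination as extra serial work. Both the 0.9 parallel fraction and the 10% sync penalty are assumed, illustrative values, this is a toy model of the argument above, not a benchmark of any real machine.

```python
# Toy model: dual-socket setups add a fixed coordination cost to the
# serial portion of the work. parallel_fraction = 0.9 and
# sync_penalty = 0.10 are assumed, illustrative values.

def effective_speedup(cores, parallel_fraction=0.9,
                      sockets=1, sync_penalty=0.10):
    p = parallel_fraction
    serial = (1.0 - p) + (sync_penalty if sockets > 1 else 0.0)
    return 1.0 / (serial + p / cores)

print(effective_speedup(8))               # 8 cores, one socket
print(effective_speedup(8, sockets=2))    # same 8 cores split over two sockets
print(effective_speedup(16, sockets=2))   # 16 cores over two sockets
```

In this model the same eight cores are noticeably slower when split across two sockets, and it takes a significantly higher core count before the second socket pays for its own coordination overhead, which matches the "unless traffic is light" caveat above.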