- Last Updated on Wednesday, 06 May 2015 14:09
Keep your ears and eyes open for SSD developments.
Currently SSD's are hampered in their performance by the typical SATA connection. Their capacities keep growing more interesting and their prices continue to fall, so from that perspective they are becoming more competitive with conventional HDD's while remaining the faster storage solution. But there are new developments around the corner that may make SSD's the storage solution of the future.
NVMe is the new magic word.
NVMe stands for 'Non-Volatile Memory Express' and is specifically designed to overcome the limited bandwidth of SATA.
The SATA interface has earned its spurs, but the standard was designed around traditional spinning hard drives. These drives are inherently slow, because the read and write heads of a hard disk can simply only be in one place at a time. These restrictions do not apply to solid state drives. Flash memory can read and write in parallel very well, but because of the relatively slow SATA interface, much of the potential of SSD's goes unused.
The intended successor to the SATA bus is NVM Express, an acronym for Non-Volatile Memory (Host Controller Interface) Express. As the name implies, NVMe was specifically designed and built for solid state memory. The standard uses PCI-Express interconnects, which makes it scalable. With only a single PCIe lane, an NVMe-connected SSD already has nearly a gigabyte per second of bandwidth, compared to SATA's 600 megabytes per second. With several lanes, NVMe reaches a multiple of the SATA interconnect bandwidth. For more background on this new interface, see NVM Express, the Optimized PCI Express® SSD Interface.
With vastly lower latencies and minimal power consumption, this may well be the storage solution of the future.
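The bandwidth arithmetic above can be sketched as follows. The per-lane figure is approximate (PCIe 3.0 carries roughly 985 MB/s per lane after 128b/130b encoding overhead); the numbers are illustrative, not spec-exact:

```python
# Rough comparison of SATA vs NVMe-over-PCIe bandwidth.
# Figures are approximate effective rates after encoding overhead.
SATA_6G_MB_S = 600           # SATA III effective ceiling, ~600 MB/s
PCIE3_PER_LANE_MB_S = 985    # PCIe 3.0, ~985 MB/s per lane

def nvme_bandwidth(lanes):
    """Approximate aggregate bandwidth of an NVMe SSD using `lanes` PCIe 3.0 lanes."""
    return lanes * PCIE3_PER_LANE_MB_S

for lanes in (1, 2, 4):
    bw = nvme_bandwidth(lanes)
    print(f"PCIe 3.0 x{lanes}: ~{bw} MB/s ({bw / SATA_6G_MB_S:.1f}x SATA 6G)")
```

Even a single lane beats the SATA ceiling; an x4 link offers more than six times SATA's effective bandwidth.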
M.2: high speed in small package.
The M.2 interface first gained prominence with Apple's MacBook Air, which uses an M.2 SSD. An M.2 SSD looks a bit like an elongated mSATA SSD, but a simpler and more relatable comparison is that it looks very much like a stick of gum, albeit with a different connector. Apple's M.2 SSD directly illustrates the diversity of the M.2 interface: Apple chose a 2-lane PCIe connection, which did not lead to optimal speeds and latencies. All the newer M.2 SSD's use 4 PCIe lanes. Typical (April 2015) M.2 SSD's are Samsung's XP941 and SM951. The standard SM951 is rated at 2150 MB/s read and 1550 MB/s write when used in a PCIe x4 Gen3 socket. Bill has actually measured a write rate of 1500 MB/s from Premiere with PPBM8.
M.2 PCI Express sockets
M.2 can also handle PCI-Express interconnects (and USB, for completeness), which leads to higher speeds and lower latencies, as with NVMe. Two types of sockets are possible, 2 and 3, plus a combination that supports both. Socket 2 has the advantage of flexibility: it can be used both for SSD's and for other cards, such as network cards. However, its speed is limited to a PCI-Express x2 interface. Socket 3 cannot be used for network cards, but offers an x4 interface and improved performance, and would be the primary choice for SSD's. Almost all X99 motherboards and many newer laptops have sockets for one of these devices. Current examples of this M.2 technology are:
- Plextor M6e PX-G256M6e M.2 2280 256GB PCI-Express 2.0 x2 Solid State Drive (SSD)
- Samsung XP941 MZHPU256HCGL M.2 2280 256GB PCI Express 2.0 x4 Enterprise SSD - OEM
- Samsung SM951 MZHPV256HDGL M.2 2280 256GB PCI Express 3.0 x4 Enterprise SSD - OEM
- (Next Generation) Samsung SM951 NVMe MZVPV256HDGL M.2 2280 512GB PCI Express 3.0 x4 Enterprise SSD - OEM
These four M.2 drives are also available in 512 GB versions. The first three drives are not NVMe SSD's yet. In Bill's X99 testing with PPBM8 (where the motherboard M.2 socket is PCIe 3.0 x4), the write rates of these three are, respectively, 618 MB/s, 927 MB/s and 1483 MB/s. A new version of the SM951 has been announced: it is the long-promised NVMe version. Confusingly, it will also be called the SM951, but it does have different part numbers (MZVPV512HDGL/MZVPV256HDGL/MZVPV128HDGL). Rumor has it that these next-generation versions might start becoming available in June.
There are PCIe adapter cards to install these M.2 devices in desktops without M.2 sockets on the motherboard. If your system has PCIe Gen 3 available, make sure you get a Gen 3 card, but be aware that not all cards deliver maximum performance.
There can be problems using some of these drives as boot devices; you need support for them in the UEFI BIOS.
In desktops, where dimensions are less important, a PCI-Express SSD (not the M.2 form factor) can also be connected with a cable. There is a choice between a SATA Express cable and a so-called SFF-8639 cable. The SATA Express cable provides PCI Express x2 and is the cheapest solution, but offers limited performance. SFF-8639 cables are more expensive because they need shielding and a clock signal, but in return offer PCI Express x4 speeds. The best choice for cheap implementations seems to be a SATA Express cable with a PCIe x2 interface, while a socket 3 connector without a cable gives the highest performance.
Keep in mind that PCIe SSD's can use more power than SATA SSD's, up to 15W in the case of PCIe x4 (on the 3.3V rail, almost 5A), and they have to be designed to dissipate the heat effectively.
We'll keep you informed about developments. Meanwhile, if you are in the market for an SSD, do have a serious look at Samsung's V-NAND technology. Specifically, the 845/850 Pro look very promising, since they use 24 and 32 layers respectively of 3D NAND flash, which extends the life expectancy of the SSD thanks to the 40 nm process.
There is still life in conventional HDD's
HGST, a WD subsidiary, today (07-09-2014) announced the Ultrastar C10K1800, a 1.8 TB conventional 2.5" disk with 10,000 RPM and 128 MB cache, using a 12 Gb SAS interface and supporting the 4K format. Sequential transfers are claimed to be around 23% faster than previous generations. Bang-for-the-buck-wise, these disks may still be the most attractive option for large raid arrays today. See Enterprise Ultrastar C10K-1800 Drives
Typical Sustained Transfer Rates depending on the Disk Interface

| Interface | HDD < 40% Fill Rate | HDD > 70% Fill Rate | SSD | Theoretical Bandwidth |
|---|---|---|---|---|
| SATA 6G | 150 - 170 MB/s | 90 - 110 MB/s | 300 - 530 MB/s | 6 Gb/s |
| SATA 3G | 150 - 170 MB/s | 90 - 110 MB/s | 150 - 250 MB/s | 3 Gb/s |
| USB 3* | 80 - 100 MB/s | 60 - 80 MB/s | - | 5 Gb/s |
| FW 800 | 55 - 60 MB/s | 45 - 50 MB/s | - | 800 Mb/s |
| FW 400 | 35 - 40 MB/s | 30 - 35 MB/s | - | 400 Mb/s |
| USB 2* | 20 - 25 MB/s | 20 - 25 MB/s | - | 480 Mb/s |

* Maximum effective transfer rates. Can be lower with multiple USB devices on the same port.
Noteworthy in the above table is that conventional HDD's do not profit from a SATA 6G connection. HDD's are too slow to benefit from the increased bandwidth of the 6 Gb connection, quite in contrast to SSD's, which should always be connected to a SATA 6G port to use their potential speed at all times. USB is a shared connection, which means that the more devices are attached to a single port, the bigger the discrepancy between theoretical bandwidth and effective transfer rate. In theory USB 3 should be faster than SATA 3G, but in real life that does not prove to be true, due to the shared nature of the connection and the inherent overhead it carries. Theoretical bandwidth is purely theoretical and has almost no relevance to real-life editing requirements.
Obviously, one should always try to use SATA ports for disk connections, and for externals eSATA is preferred over USB 3. Second, one should get big disks in order to prevent 'fill rate' performance degradation.
The other thing that can have a significant impact on disk performance is the codec used during editing.
Difficult codecs - from a disk performance point of view - are for instance uncompressed, Lagarith, UT, Cineform, P2, ProRes (which uses QT32Server), RED 4K, EPIC 5K and the like, which take huge amounts of disk space. It does not matter whether these codecs are computationally difficult or not; the sheer amount of data to be moved to and from disk can easily stifle disk throughput. See the following table, but keep in mind that Cineform may belong in the same cell as AVC-Intra, and that DSLR's can also offer easier codecs in the camera settings that put less strain on the CPU:
This overview is a great help, but editing style has a major impact on what is required for a good disk setup. The overriding factors that influence disk requirements boil down to these:
- Number of tracks in use. The more tracks in use, the higher the disk requirements are in terms of sustained transfer rate. If you use 9 cameras in a multicam session, your sustained transfer rate should be 9 times higher than with a single camera to achieve the same responsiveness. You can do the math yourself.
- Average scene duration or number of scene changes. The shorter the duration of a scene, the quicker the next scene must be fetched from disk for fluid playback. Latency and average access time become more critical as the number of scene changes within a certain amount of time increases.
- The combination of both factors above. It is a lot easier on the disk setup to edit a 6 camera multicam session of a presentation with average clips durations of 4-6 seconds, than a 3 camera session with clip changes every single second.
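The "do the math yourself" for the multicam example can be sketched like this; the per-stream bitrate is an illustrative assumption, not a figure from the article:

```python
# Required sustained read rate scales linearly with the number of
# simultaneous camera tracks being played back.
def required_read_rate_mb_s(tracks, codec_mbit_s):
    """Sustained read rate (MB/s) needed to play `tracks` simultaneous
    streams of a codec with the given bitrate in Mbit/s."""
    return tracks * codec_mbit_s / 8  # 8 bits per byte

# Example: AVCHD at ~24 Mbit/s per stream (illustrative figure)
print(f"1 camera:  {required_read_rate_mb_s(1, 24):.0f} MB/s")
print(f"9 cameras: {required_read_rate_mb_s(9, 24):.0f} MB/s")
```

Nine cameras need exactly nine times the single-camera rate, and that is before any overhead for scrubbing, audio or OS housekeeping.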
There are no figures indicated in the above table, because it depends very much on the codec in use and the nature and duration of the scene changes, but you get the drift, right?
Practical Disk Setup
Before going into the number of disks and the allocation of the various kinds of files, like media, projects and media cache, it is important to recognize what the OS does while you are editing. It is not simply reading program files and their related .DLL files; a lot more is going on without you knowing about it. Unfortunately that comprises both reading from and writing to the OS disk, something we would very much like to avoid because of the half-duplex nature of the disk interface. If it were only about reading, that would be fine, but unfortunately it is not. Windows not only reads .EXE and .DLL files from disk, it also writes to disk for its own administration. Think of stuff like:
- Using the page-file, which Windows requires on the C: drive in the case of crashes. Even if you have installed a static page-file on another drive, Windows will create a page-file on the C: drive for memory dumps in the case of a crash, so you may as well create one on the boot drive from the start. As for the size of the page-file, I recommend a sum total of installed memory plus page-file of at least 48 GB, with a minimum static size of 1 or 2 GB, even if you exceed the sum of 48 GB.
- Updating the various event logs, like system events, application events, administration events and the like,
- Updating the user profile,
- Updating the file allocation tables,
- Updating the access and modification timestamps for files accessed, and
- A whole bunch of other housekeeping tasks.
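The page-file sizing rule from the list above (installed RAM plus page-file totaling at least 48 GB, with a static minimum of a couple of GB) can be expressed as a small sketch:

```python
def recommended_pagefile_gb(installed_ram_gb, target_total_gb=48, minimum_gb=2):
    """Static page-file size per the rule above: RAM + page-file should
    total at least `target_total_gb`, but never drop below `minimum_gb`."""
    return max(target_total_gb - installed_ram_gb, minimum_gb)

print(recommended_pagefile_gb(16))  # 32 GB
print(recommended_pagefile_gb(32))  # 16 GB
print(recommended_pagefile_gb(64))  # 2 GB minimum, even above 48 GB of RAM
```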
It is a pity you can't redirect this information to another drive, but that's life. It is also a pity that some programs, like Adobe applications, persist in writing to the boot disk all the time with no clear way to change that. Therefore it makes sense to use a dedicated SSD exclusively for OS & program files: an SSD because of the many small changes the OS makes to files, where latency and access time are all-important for responsiveness, and with recent price drops there is nothing to stop us using an SSD as a boot drive. Luckily it does not need to be huge. SSD's don't suffer from 'fill rate' degradation as HDD's do, and 120/128 GB is more than enough space on the boot disk, even with multiple versions of the Master Collection installed.
A full installation of Windows 8.1 plus CS6 Master Collection, plus several plug-ins like MagicBullet Suite (including Looks, Colorista, etc), the whole suite of Pixelan plug-ins, Color Finesse, SurCode and the like, extensive libraries of utilities and miscellaneous stuff does not take more than 35 GB max (even with humongous HP printer software installed) on a relatively clean system, including a 2 GB page-file. Of course, if you have a bigger page-file, the used space is much bigger, so keep that in mind.
That is, if one has turned off hibernation. If it has not been turned off and the 'hiberfil.sys' file has not been removed, that file can grow to a humongous 60 GB. To remove it, turn off all sleep modes in the power settings and:
Open cmd.exe as administrator and from the command prompt type "powercfg.exe -h off" without the quotes and press Enter. Then exit.
Conclusion: SSD 128+ GB as boot disk C: for OS & programs and page-file.
With all the reading and writing going on on the boot disk, it is understandable that you need one or more dedicated disks for video editing, especially with the bandwidths required when clip durations are short, the number of tracks exceeds one, and the codec is more complex than DV. The kinds of files used during editing are, in order of their need for speed:
- Media cache & media cache database files, created when importing media into a project. They contain indexed, conformed audio and peak files for waveform display. Typically small files, but lots of them, so in the end they still occupy lots of disk space.
- Preview (rendered) files, created when the time-line is rendered for preview purposes (the red bar turns green). Read all the time when previewing the time-line.
- Project files, including project auto-save files, that are constantly being read and saved as auto-save files and written when saving your edits.
- Media files, the original video material ingested from tape or card based cameras. Typically long files, only used for reading, since PR is a non-destructive editor.
- Export files, created when the time-line is exported to its final delivery format. These files are typically only written once and often vary in size from several hundred KB to tens of GB.
When you are in doubt about which category of files to put on which kind of disk, especially when using both SSD's and HDD's, keep in mind that the speed advantage of SSD's over HDD's is most noteworthy with the media cache & media cache database. These files are frequently accessed, they are small and there are many of them, so reducing latency and seek times and increasing transfer rates pays off by putting these on an SSD rather than on a HDD, even if it is a raid0. Export files can go to the slowest volume on your system, since you only export once. To help you decide, I have added priority rank-numbers for speed, with 1 for the fastest volume and 5 for the least speed-demanding category.
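The five file categories and their order of need for speed can be summarized like this (category names paraphrase the list above; the ranks follow the listing order, 1 being the most speed-demanding):

```python
# Speed-priority ranks for the five file categories:
# 1 = belongs on the fastest volume, 5 = least speed-demanding.
SPEED_PRIORITY = {
    1: "Media cache & media cache database",
    2: "Preview (rendered) files",
    3: "Project & auto-save files",
    4: "Media files",
    5: "Export files",
}

def fastest_first():
    """Category names ordered from most to least speed-demanding."""
    return [SPEED_PRIORITY[rank] for rank in sorted(SPEED_PRIORITY)]

for rank in sorted(SPEED_PRIORITY):
    print(rank, SPEED_PRIORITY[rank])
```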
Before going into the number of disks required for comfortable editing, let's make it clear what the distinction is between disks and volumes.
A disk is a physical SSD or hard drive. If that physical drive is not partitioned and has been formatted as a single NTFS drive with a single drive letter, then a disk is equal to a volume. If the physical disk has been partitioned, then you have different volumes on the same physical disk, each with its own drive letter. That is a bad idea. Partitioning does not increase the physical space and does not prevent or alleviate the half-duplex nature of SATA disks; it only causes more 'wear-and-tear' on the mechanical parts of HDD's and reduces performance and life expectancy, so it is a no-go for video editing. Multiple volumes on a single physical disk should not be used at all. However, if several physical disks are 'spanned', they appear to the OS as one single volume with only one drive letter, and that volume benefits from increased storage space, increased performance and reduced 'wear-and-tear' on the member disks. This gets us into the realm of RAID-ing disks, to be delved into at a later stage. For the time being, let's consider a disk and a volume to be the same, so when I say volume, it simply means a single non-partitioned disk.
Ideally, you would have a dedicated volume for each of the 5 file categories described above, since that ensures that simultaneous reading from and writing to the same volume is largely prevented, so the half-duplex problem of the interface is avoided as much as possible. But that also means at least 6 volumes (disks) are required: one for the OS and 5 for the video-related files. That simply is not always possible or affordable. With fewer volumes available, one has to make concessions and combine certain file categories on a single volume in such a way that the performance hit is minimal.
What happens when you have fewer than 6 disks / volumes? You combine certain file types on the same volume, for instance project files and export files. That entails a higher fill rate on that volume and thus more 'fill rate' degradation, lowering its sustained transfer rate. In addition, the overhead from the OS increases, because the Windows housekeeping tasks, like updating file allocation tables and access and modification time-stamps for each file used, grow with the extra number of modified files. Of course the main bottleneck still is the half-duplex problem: waiting for reading to finish before writing can occur, and vice versa.
Why did I combine project and export files in the example above? Because the project files are relatively small, as are the export files, so when you combine them on a single volume the performance degradation from 'fill rate' is relatively small, especially if you consider that while the export files are being written to the volume, the project file is no longer accessed; the work has already been done. An alternative approach might be to combine media cache, previews and exports on the same volume, because neither previews nor the media cache are used during export, so exports are the only (write) activity on that volume. These kinds of considerations ripple through to the table below, which shows how one could allocate the various types of files to the available disks or volumes.
The reason for only two volumes in the case of 5 disks or more is purely ease of use. The parity raids are fast enough to handle the half-duplex limitations, and it is easier to use only two drive letters instead of a whole bunch of letters for different volumes.
As happens often in life, it starts with a KISS (Keep It Stupidly Simple) and ends with real LOVE (Lots Of Video Editing), like below:
My own setup
My own setup, using Windows 8.1 Enterprise and CS6 Master Collection, balances simplicity and performance:
You can disregard the two network locations, since they are virtual volumes on a VMWare server, the space indications mean zilch because of the virtual nature of these volumes, and the Folders point to these volumes.
Crystal Disk Mark gives these results for the various volumes, so pay attention to the Intel versus Marvell performance discrepancy.
SSD or HDD ?
Now that we have seen how to allocate the various file types to different volumes, the question arises: should you use SSD's or conventional HDD's for the volumes other than C:? Remember that for C: an SSD is better than a HDD, but does that also apply to the rest of the volumes? Well, in benchmark tests we have not seen any noticeable difference between SSD and HDD in terms of performance, but logic dictates that there should be a small difference when the volume used for the media cache is an SSD, because of the much faster access time and lower latency compared to a conventional disk. If it is noticeable at all, it is specifically on the volume with the media cache and media cache database, due to the large number of small files that need to be accessed and written. This argument does not apply to the media volume, with its limited number of very big files.
Advantages of SSD over conventional HDD are lower energy consumption, less heat, less noise, faster access and higher sustained transfer rates. Non-SandForce-based modern SSD's show much less 'steady state' deterioration than earlier models and do not suffer from 'fill rate' degradation as HDD's do. But the major drawbacks of SSD's are the price per GB, which even after all the price drops is still around 20 times higher than for HDD's, and their rather limited size in comparison to modern HDD's. Both still suffer from the half-duplex nature of the SATA interface, so even if there were a speed advantage in using one SSD combining two file types versus two dedicated HDD's with one file type each, the benefit would at most be marginal because of the interface limitations and Windows housekeeping tasks, while the space limitation and higher price of SSD's may become a distinct disadvantage.
My take on this question is that I prefer two HDD's over a single SSD. It costs less, gives more space, prevents the half-duplex problems and there is no practical performance difference between the two options.
Final thoughts on Disk Setup
Disk setup is often treated as a neglected stepchild. In my experience one often spends only 10 - 15% of the total system cost on the disk I/O system and then complains about stuttering playback, lagging responsiveness and the like. This is quite understandable, especially with people rather new to editing. They are used to buying a system advertised as a monster machine with only one or two disks, and while those claims may be accurate for a gaming machine or a regular office machine, they simply do not hold true for an NLE system. Video editing requires a lot more muscle than gaming, office applications or photo editing, and that is often overlooked. If you take into consideration the complexity of the codecs used and the video editing style, it should by now be clear there is much more to the disk I/O system than meets the eye.
In a separate article, I will tell you all about RAID arrays and how they can improve your Disk I/O system and help to remove bottlenecks from your system, but for now let's limit it to the basics.
- On all 1150 / 1155 / 1156 platforms (Sandy Bridge, Ivy Bridge and Haswell) a dedicated raid controller is out of the question, so you cannot use a parity raid in a practical sense, unless you accept severe performance penalties.
- On all 2011 platforms (Sandy Bridge-E and Xeon E5) a dedicated raid controller can enhance system performance in a major way, but costs $$$.
Irrespective of the platform you have or intend to buy, for a solid balanced system expect to invest around 30 - 45% of the total system cost, excluding keyboard, monitors and the like, in the Disk I/O system, more than the combination of CPU, memory and motherboard, and more than 3 times the cost of the video card.
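The budget guideline above can be turned into a quick calculation; the 35% default and the example total are illustrative assumptions, not figures from a specific build:

```python
# Indicative budget split for a balanced NLE system: 30-45% of total
# system cost (excluding keyboard, monitors, etc.) on the disk I/O subsystem.
def disk_io_budget(total_cost, share=0.35):
    """Suggested disk I/O spend for a given total system cost."""
    if not 0.30 <= share <= 0.45:
        raise ValueError("share outside the 30-45% guideline")
    return total_cost * share

print(f"${disk_io_budget(3000):.0f} of a $3000 build at 35%")
print(f"${disk_io_budget(3000, 0.45):.0f} at the top of the range")
```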
Typical - indicative - component costs for a balanced system currently look like this for a 2011 platform, including raid controller:
or for a 1150 / 1155 / 1156 platform without a raid controller, since it makes no sense on these platforms: