Keep your ears and eyes open for SSD developments.

Currently, SSD's are hampered in their performance by the typical SATA connection. Their capacities keep growing and their prices continue to fall, so from that perspective they become ever more competitive with conventional HDD's as the faster storage solution. But there are new developments around the corner that may make SSD's the storage solution of the future.

NVMe is the new magic word.

NVMe stands for 'Non-Volatile Memory Express' and is specifically designed to overcome the bandwidth limitations of SATA.

The SATA interface has certainly earned its spurs, but the standard was designed around traditional spinning hard drives. These drives are inherently slow, because the read/write heads of a hard disk can only be in one place at a time. These restrictions do not apply to solid state drives: flash memory can read and write in parallel, but because of the relatively slow SATA interface, that potential of SSD's goes unused.

The intended successor to the SATA bus is NVM Express, an acronym for Non-Volatile Memory (Host Controller Interface) Express. As the name implies, NVMe was specifically designed and built for solid state memory. The standard uses PCI Express interconnects, which makes it scalable. With only a single PCIe lane, an NVMe-connected SSD already has nearly a gigabyte per second of bandwidth, compared to SATA's 600 megabytes per second. With several lanes, NVMe reaches a multiple of the SATA interconnect bandwidth. For more background on this new interface, see NVM Express, the Optimized PCI Express® SSD Interface.
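As a rough illustration of that lane scaling (the per-lane figure is an approximate effective PCIe 3.0 rate after encoding overhead, not an exact specification value):

```python
# Rough effective bandwidth comparison: SATA III vs. NVMe over PCIe 3.0.
# Figures are approximate usable throughput, for illustration only.
SATA3_MB_S = 600           # SATA 6 Gb/s tops out around 600 MB/s effective
PCIE3_LANE_MB_S = 985      # one PCIe 3.0 lane carries roughly 985 MB/s

def nvme_bandwidth(lanes):
    """Approximate usable bandwidth of an NVMe SSD using `lanes` PCIe 3.0 lanes."""
    return lanes * PCIE3_LANE_MB_S

for lanes in (1, 2, 4):
    ratio = nvme_bandwidth(lanes) / SATA3_MB_S
    print(f"x{lanes}: ~{nvme_bandwidth(lanes)} MB/s ({ratio:.1f}x SATA III)")
```

Even a single lane already beats the whole SATA bus; an x4 link is more than six times faster.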

With vastly lower latencies and minimal power consumption, this may well be the storage solution of the future.

M.2: high speed in a small package.

The M.2 interface first gained attention with Apple's MacBook Air, which uses an M.2-style SSD. It looks a bit like an extended mSATA SSD (a simpler and more familiar comparison: it looks very much like a stick of gum), but has a different connector. The M.2 SSD from Apple directly illustrates the diversity of the M.2 interface: Apple chose a type 2 (2 PCIe lanes) connection, which did not lead to optimal speeds and latencies. All the newer M.2 SSD's use 4 PCIe lanes. Typical (April 2015) M.2 SSD's are Samsung's XP941 and SM951. The standard SM951 is rated at 2150 MB/s read and 1550 MB/s write when used in a PCIe x4 Gen3 socket. Bill has actually measured a write rate of 1500 MB/s from Premiere with PPBM8.

Stick of gum

M.2 PCI Express sockets

The M.2 interface can also handle PCI Express interconnects (and USB, for completeness), which in theory leads to higher speeds and lower latencies, as with NVMe. There are two socket types, 2 and 3, plus a combination that supports both. Socket 2 has the advantage of flexibility; it can be used both for SSD's and for other cards, such as network cards. However, its speed is limited to a PCI Express x2 interface. Socket 3 cannot be used for network cards, but offers an x4 interface and improved performance, and would be the primary choice for SSD's. Almost all X99 motherboards and many newer laptops have a socket for one of these devices. Current examples of this M.2 technology are:

  • Plextor M6e PX-G256M6e M.2 2280 256GB PCI-Express 2.0 x2 Solid State Drive (SSD)
  • Samsung XP941 MZHPU256HCGL M.2 2280 256GB PCI Express 2.0 x4 Enterprise SSD - OEM
  • Samsung SM951 MZHPV256HDGL M.2 2280 256GB PCI Express 3.0 x4 Enterprise SSD - OEM
  • (Next Generation) Samsung SM951 NVMe MZVPV256HDGL M.2 2280 256GB PCI Express 3.0 x4 Enterprise SSD - OEM

These four M.2 drives are also available in 512 GB versions. The first three drives are not NVMe SSD's yet. In Bill's X99 testing with PPBM8 (where the motherboard M.2 socket is PCIe 3.0 x4), the write rates of these three are, respectively, 618 MB/s, 927 MB/s and 1483 MB/s. A new version of the SM951 has been announced: it is the long-promised NVMe version. Confusingly, it will also be called the SM951, but it does have different part numbers (MZVPV512HDGL / MZVPV256HDGL / MZVPV128HDGL). Rumor has it that these next-generation versions might start becoming available in June.

There are PCIe adapter cards to install these M.2 devices in desktops without M.2 sockets on the motherboard. If your system has PCIe Gen 3 available, make sure you get a Gen 3 card, but be aware that not all cards deliver maximum performance.

There can be problems using some of these as boot devices; you need support for them in the UEFI BIOS.

In desktops, where dimensions are less important, a PCI Express SSD (not the M.2 style) can also be connected with a cable. There is a choice between a SATA Express cable and a so-called SFF-8639 cable. The SATA Express cable provides PCI Express x2 and is the cheapest solution, but offers limited performance. SFF-8639 cables are more expensive because they need shielding and a clock signal, but in return offer PCI Express x4 speeds. The best choice for a cheap implementation seems to be a SATA Express cable with a PCIe x2 interface, while a cable-less socket 3 connection gives the highest performance.

Keep in mind that PCIe SSD's can use more power than SATA SSD's, up to 15W in the case of PCIe x4 (on the 3.3V rail, almost 5A), and they have to be designed to dissipate the heat effectively.
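The "almost 5 A" figure follows directly from the quoted numbers, as a quick check shows:

```python
# Current drawn on the 3.3 V rail by a PCIe x4 SSD at its 15 W ceiling.
power_w = 15.0   # maximum power for PCIe x4, as quoted above
rail_v = 3.3     # supply rail voltage
current_a = power_w / rail_v
print(f"{current_a:.2f} A")  # about 4.5 A, i.e. "almost 5 A"
```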

We'll keep you informed about developments. Meanwhile, if you are in the market for an SSD, do have a serious look at Samsung's V-NAND technology. Specifically, the 845/850 Pro look very promising, since they use 24/32 layers of 3D NAND flash respectively, which extends the life expectancy of the SSD due to the 40 nm process.

There is still life in conventional HDD's

HGST, a WD subsidiary, today (07-09-2014) announced the Ultrastar C10K1800, a 1.8 TB conventional 2.5" disk with 10,000 RPM and a 128 MB cache, using a 12 Gb/s SAS interface and supporting the 4K sector format. Sequential transfers are claimed to be around 23% faster than the previous generation. BFTB-wise, these disks may still be the most attractive option for large raid arrays today. See Enterprise Ultrastar C10K-1800 Drives.

Typical Sustained Transfer Rates depending on the Disk Interface

| Disk Interface | HDD < 40% Fill Rate | HDD > 70% Fill Rate | SSD            | Theoretical Bandwidth |
|----------------|---------------------|---------------------|----------------|-----------------------|
| SATA 6G        | 150 - 170 MB/s      | 90 - 110 MB/s       | 300 - 530 MB/s | 6 Gb/s                |
| SATA 3G        | 150 - 170 MB/s      | 90 - 110 MB/s       | 150 - 250 MB/s | 3 Gb/s                |
| USB 3*         | 80 - 100 MB/s       | 60 - 80 MB/s        |                | 5 Gb/s                |
| FW 800         | 55 - 60 MB/s        | 45 - 50 MB/s        |                | 800 Mb/s              |
| FW 400         | 35 - 40 MB/s        | 30 - 35 MB/s        |                | 400 Mb/s              |
| USB 2*         | 20 - 25 MB/s        | 20 - 25 MB/s        |                | 480 Mb/s              |

* Maximum effective transfer rates. Can be lower with multiple USB devices on the same port.

Noteworthy in the above table is that conventional HDD's do not profit from a SATA 6G connection. HDD's are too slow to benefit from the increased bandwidth of the 6 Gb/s connection, quite in contrast to SSD's, which should always be connected to a SATA 6G port to make full use of their potential speed. USB is a shared connection, which means that the more devices are attached to a single port, the bigger the discrepancy between theoretical bandwidth and effective transfer rate. In theory USB3 should be faster than SATA 3G, but in practice that does not prove true, due to the shared nature of the connection and the inherent overhead it carries. Theoretical bandwidth is purely theoretical and has almost no relevance to real-life editing requirements.
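To make the table concrete, here is a small sketch of how long moving a 100 GB media folder would take at typical sustained rates (mid-range values picked from the table; your drives will vary):

```python
# Time to move 100 GB at typical sustained rates from the table above.
# Rates are illustrative mid-range picks, not guarantees.
rates_mb_s = {
    "SATA 6G SSD": 400,               # mid-range of 300 - 530 MB/s
    "SATA 6G HDD (<40% full)": 160,   # mid-range of 150 - 170 MB/s
    "USB 3 HDD": 90,                  # mid-range of 80 - 100 MB/s
    "USB 2 HDD": 22,                  # mid-range of 20 - 25 MB/s
}

def transfer_minutes(size_gb, rate_mb_s):
    """Minutes needed to move `size_gb` gigabytes at `rate_mb_s` MB/s."""
    return size_gb * 1024 / rate_mb_s / 60

for name, rate in rates_mb_s.items():
    print(f"{name}: {transfer_minutes(100, rate):.0f} min")
```

The same folder that an SSD handles in a few minutes keeps a USB 2 disk busy for well over an hour.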

Obviously, one should always try to use SATA ports for disk connections, and for externals eSATA is preferred over USB3. Second, one should get big disks, in order to prevent 'fill rate' performance degradation.

The other thing that can have a significant impact on disk performance is the codec used during editing.

Other difficult codecs - from a disk performance point of view - are, for instance, uncompressed, Lagarith, UT, Cineform, P2, ProRes (which uses QT32Server), RED 4K and EPIC 5K and the like, which take huge amounts of disk space. It does not matter whether these codecs are computationally difficult or not; the sheer amount of data to be moved to and from disk can easily stifle disk throughput. See the following table, but keep in mind that Cineform may belong in the same cell as AVC Intra, and that DSLR's can also offer easier codecs in the camera settings that put less strain on the CPU:

This overview is a great help, but editing style has a major impact on what is required for a good disk setup. The overriding factors that influence disk requirements boil down to these:

  • Number of tracks in use. The more tracks in use, the higher the disk requirements are in terms of sustained transfer rate. If you use 9 cameras in a multicam session, your sustained transfer rate should be 9 times higher than with a single camera to achieve the same responsiveness. You can do the math yourself.
  • Average scene duration or number of scene changes. The shorter the duration of a scene, the quicker the next scene must be fetched from disk for fluid playback. Latency and average access time become more critical as the number of scene changes within a certain amount of time increases.
  • The combination of both factors above. It is a lot easier on the disk setup to edit a 6 camera multicam session of a presentation with average clip durations of 4-6 seconds than a 3 camera session with clip changes every single second.

There are no figures indicated in the above table, because it depends very much on the codec in use and the nature and duration of the scene changes, but you get the drift, right?
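The track-count rule of thumb above is simple arithmetic: required sustained rate is roughly the per-stream data rate times the number of simultaneous streams. A sketch, with illustrative per-stream rates (the figures below are rough assumptions, not measured values):

```python
# Required sustained read rate ~= simultaneous tracks x per-stream data rate.
# Per-stream rates in MB/s are rough, illustrative assumptions.
CODEC_MB_S = {
    "DV": 3.6,
    "AVCHD 24 Mbps": 3.0,
    "ProRes 422 HQ 1080p": 27.5,
}

def required_rate(codec, tracks):
    """Sustained MB/s needed to play `tracks` simultaneous streams of `codec`."""
    return CODEC_MB_S[codec] * tracks

# A 9-camera ProRes multicam session needs roughly nine times one stream:
print(f"{required_rate('ProRes 422 HQ 1080p', 9):.0f} MB/s sustained")
```

Compare that result against the sustained rates in the table above and it is clear why multicam work quickly outgrows a single HDD.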

Practical Disk Setup

Before going into the number of disks and the allocation of various kinds of files, like media, projects and media cache, it is important to recognize what the OS does while you are editing. It is not simply reading program files and their related .DLL files; a lot more is going on without you knowing about it. And unfortunately that comprises both reading from and writing to the OS disk, something we would very much like to avoid because of the half-duplex nature of the disk interface. If it were only about reading, that would be great, but unfortunately it is not. Windows not only reads .EXE and .DLL files from disk, it also writes to disk for its own administration. Think of things like:

  • Using the page-file, which Windows requires on the C: drive in the case of crashes.
    • Even if you have installed a static page-file on another drive, Windows will create a C: drive page-file in the case of a crash for memory dumps, so you may as well create one on the boot drive from the start. As for the size of the page-file, I recommend a sum total of installed memory plus page-file of at least 48 GB, with a minimum static size of 1 or 2 GB, even if you exceed the sum of 48 GB.
  • Updating the various event logs, like system events, application events, administration events and the like,
  • Updating the user profile,
  • Updating the file allocation tables,
  • Updating the access and modification timestamps for files accessed, and
  • A whole bunch of other housekeeping tasks.
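The sizing rule from the first bullet (installed RAM plus page-file totaling at least 48 GB, with a 1-2 GB static minimum) can be written as a small helper. This is a sketch of my recommendation, not a Windows requirement:

```python
def recommended_pagefile_gb(installed_ram_gb, target_total_gb=48, static_min_gb=2):
    """Page-file size so that RAM + page-file >= target, never below the static minimum."""
    return max(target_total_gb - installed_ram_gb, static_min_gb)

print(recommended_pagefile_gb(16))  # 16 GB RAM -> 32 GB page-file
print(recommended_pagefile_gb(64))  # already past 48 GB total -> keep the 2 GB minimum
```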

It is a pity you can't redirect this information to another drive, but that's life. It is also a pity that some programs, like Adobe applications, persist in writing to the boot disk all the time, with no clear way to change that. Therefore it makes sense to use a dedicated SSD exclusively for the OS & program files: an SSD because of the many small changes the OS makes to files, where latency and access time are all-important for responsiveness, and with recent price drops there is nothing to stop us from using an SSD as boot drive. Luckily, it does not need to be huge. SSD's don't suffer from 'fill rate' degradation as HDD's do, and 120/128 GB is more than enough space on the boot disk, even with multiple versions of the Master Collection installed.

A full installation of Windows 8.1 plus the CS6 Master Collection, plus several plug-ins like Magic Bullet Suite (including Looks, Colorista, etc.), the whole suite of Pixelan plug-ins, Color Finesse, SurCode and the like, plus extensive libraries of utilities and miscellaneous stuff, does not take more than 35 GB (even with humongous HP printer software installed) on a relatively clean system, including a 2 GB page-file. Of course, if you have a bigger page-file, the used space is much bigger, so keep that in mind.

That is, if one has turned off hibernation. If it has not been turned off and the 'hiberfil.sys' file has not been removed, that file can grow to a humongous 60 GB. To remove it, turn off all sleep modes in the power settings and:

Run cmd.exe as administrator and at the command prompt type:
"powercfg.exe -h off" without the quotes and press Enter. Then exit.

Conclusion: SSD 128+ GB as boot disk C: for OS & programs and page-file.

With all the reading and writing going on on the boot disk, it is understandable that you need a (number of) dedicated disk(s) for video editing, especially with the bandwidths required when clip durations are short, the number of tracks exceeds one, and the codec is more complex than DV. The kinds of files used during editing are, in order of their need for speed:

  1. Media cache & Media cache database files, created on importing media into a project. They contain indexed, conformed audio and peak files for waveform display.
    Typically small files, but lots of them, so in the end they still occupy lots of disk space.
  2. Preview (rendered) files, created when the time-line is rendered for preview purposes, the red bar turned to green. Read all the time when previewing the time-line.
  3. Project files, including project auto-save files, that are constantly being read and saved as auto-save files and written when saving your edits.
  4. Media files, the original video material ingested from tape or card based cameras. Typically long files, only used for reading, since PR is a non-destructive editor.
  5. Export files, created when the time-line is exported to its final delivery format. These files are typically only written once and often vary in size from several hundred KB to tens of GB.

When you are in doubt about which category of files to put on which kind of disk, especially when using both SSD's and HDD's, keep in mind that the speed advantage of SSD's over HDD's is most notable with the media cache & media cache database. These files are accessed frequently, they are small and there are many of them, so reducing latency and seek times and increasing transfer rates pays off by putting these on an SSD rather than on a HDD, even if the latter is a raid0. Export files can go to the slowest volume on your system, since you only export once. To help you decide, I have added priority rank-numbers for speed, with 1 for the fastest volume and 5 for the least speed-demanding category.
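Those priority ranks can be captured in a simple mapping that orders file categories from most to least speed-demanding (a sketch of the ranking described above):

```python
# Speed-priority ranks: 1 = put on the fastest volume, 5 = least demanding.
SPEED_RANK = {
    "media cache & database": 1,
    "previews": 2,
    "projects": 3,
    "media": 4,
    "exports": 5,
}

def by_speed_need(categories):
    """Order file categories from most to least speed-demanding."""
    return sorted(categories, key=SPEED_RANK.get)

print(by_speed_need(["exports", "media", "media cache & database"]))
```

Assign your fastest remaining volume to whatever comes first in that order.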

Before going into the number of disks required for comfortable editing, let's make it clear what the distinction is between disks and volumes.

A disk is a physical SSD or hard drive. If that physical drive has not been partitioned and has been formatted as a single NTFS drive with a single drive letter, then a disk is equal to a volume. If the physical disk has been partitioned, then you have different volumes on the same physical disk, each with its own drive letter. That is a bad idea. Partitioning does not increase the physical space, it does not prevent or alleviate the half-duplex nature of SATA disks, and it only causes more 'wear-and-tear' on the mechanical parts of HDD's, reducing performance and life expectancy, so it is a no-go for video editing. Multiple volumes on a single physical disk should not be used at all. However, if several physical disks are 'spanned', they appear to the OS as one single volume with only one drive letter, and that volume benefits from increased storage space, increased performance and reduced 'wear-and-tear' on that volume and thus on the member disks. This gets us into the realm of RAID-ing disks, to be delved into at a later stage. For the time being, let's consider a disk and a volume to be the same, so when I say volume, it simply means a single non-partitioned disk.

Ideally, you would have a dedicated volume for each of the 5 file categories described above, since that largely prevents simultaneous reading from and writing to the same volume, so the half-duplex problem of the interface is avoided as much as possible. But that also means at least 6 volumes (disks) are required: one for the OS and 5 for the video-related material. That simply is not always possible or affordable. With fewer volumes available, one has to make concessions and combine certain file categories on a single volume in such a way that the performance hit is minimal.

What happens when you have fewer than 6 disks / volumes? You combine certain file types on the same volume, for instance project files and export files. That entails a higher fill rate on that volume and thus more 'fill rate' degradation, lowering its sustained transfer rate. In addition, the overhead from the OS increases, because the Windows housekeeping tasks, like updating file allocation tables and access and modification timestamps for each file used, grow with the extra number of modified files. Of course, the main bottleneck is still the half-duplex problem: waiting for reading to finish before writing can occur, and vice versa.

Why did I combine project and export files in the example above? Because the project files are relatively small, as are the export files, so when you combine them on a single volume the performance degradation from 'fill rate' is relatively small, especially if you consider that while the export file is being written to the volume, the project file is no longer accessed: the work has already been done. An alternative approach might be to combine media cache, previews and exports on the same volume, because neither previews nor the media cache are used during export, so exports are the only (write) activity on this volume. These kinds of considerations ripple through to the table below, which shows how one could allocate the various types of files to the available disks or volumes.
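A sketch of that combining logic: with fewer volumes than file categories, fold compatible categories together on shared volumes. The groupings below are illustrative examples in the spirit of the text, not a definitive allocation table:

```python
# Example layouts following the combining logic above, keyed by the number
# of data volumes available (the OS always keeps its own disk, C:).
# Groupings are illustrative, not prescriptive.
ALLOCATIONS = {
    1: {"D": ["media cache", "previews", "projects", "media", "exports"]},
    2: {"D": ["media cache", "previews"],
        "E": ["projects", "media", "exports"]},
    3: {"D": ["media cache"],
        "E": ["previews", "projects"],
        "F": ["media", "exports"]},
}

def allocation(total_disks):
    """Suggested layout for `total_disks` disks, including the OS disk."""
    return ALLOCATIONS[total_disks - 1]  # one disk is reserved for the OS

print(allocation(4))
```

The guiding idea stays the same at every size: keep categories that are read and written at the same moment on different volumes.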

The reason for only two volumes in the case of 5 disks or more is purely ease of use. The parity raids are fast enough to handle the half-duplex limitations, and it is easier to use only two drive letters instead of a whole bunch of letters for different volumes.

As happens often in life, it starts with a KISS (Keep It Stupidly Simple) and ends with real LOVE (Lots Of Video Editing), like below:

 

My own setup

My own setup, using Windows 8.1 Enterprise and CS6 Master Collection, balances simplicity and performance:

You can disregard the two network locations, since they are virtual volumes on a VMWare server, the space indications mean zilch because of the virtual nature of these volumes, and the Folders point to these volumes.

Crystal Disk Mark gives these results for the various volumes, so pay attention to the Intel versus Marvell performance discrepancy.

 

SSD or HDD ?

Now that we have seen how to allocate the various file types to different volumes, the question arises: should you use SSD's or conventional HDD's for the volumes other than C:? Remember that for C: an SSD is better than a HDD, but does that also apply to the rest of the volumes? Well, in benchmark tests we have not seen any noticeable difference between SSD and HDD in terms of performance, but logic dictates that there should be a small difference when the volume used for the media cache is an SSD, because of its much faster access time and lower latency compared to a conventional disk. If it is noticeable at all, it is specifically on the volume with the media cache and media cache database, due to the large number of small files that need to be accessed and written. This argument does not apply to the media volume, which holds a limited number of very big files.

Advantages of SSD's over conventional HDD's are lower energy consumption, less heat, less noise, faster access and higher sustained transfer rates. Non-SandForce based modern SSD's show much less 'steady state' deterioration than earlier models and do not suffer from 'fill rate' degradation as HDD's do. But the major drawbacks of SSD's are the price per GB, which is still around 20 times higher than for HDD's even after all the recent price drops, and their rather limited size in comparison to modern HDD's. Both still suffer from the half-duplex nature of the SATA interface, so even if there were a speed advantage to one SSD combining two file types, versus two dedicated HDD's with one file type each, the benefit would at most be marginal because of the interface limitations and the Windows housekeeping tasks, while the space limitation and the higher price of the SSD may become a distinct disadvantage.

My take on this question is that I prefer two HDD's over a single SSD. It costs less, gives more space, prevents the half-duplex problems and there is no practical performance difference between the two options.

Final thoughts on Disk Setup

Disk setup is often treated as a prodigal son. In my experience, one often spends only 10 - 15% of the total system cost on the Disk I/O system and then complains about stuttering playback, lagging responsiveness and the like. This is quite understandable, especially for people rather new to editing. They are used to buying a system that is advertised as a monster machine with only one or two disks, and while those claims may be accurate for a gaming machine or a regular office machine, they simply do not hold true for an NLE system. Video editing requires a lot more muscle than gaming, office applications or photo editing, and that is often overlooked. If you take into consideration the complexity of the codecs used and the video editing style, it should by now be clear that there is much more to the Disk I/O system than meets the eye.

In a separate article, I will tell you all about RAID arrays and how they can improve your Disk I/O system and help to remove bottlenecks from your system, but for now let's limit it to the basics.

  • On all 1150 / 1155 / 1156 platforms (Sandy Bridge, Ivy Bridge and Haswell) a dedicated raid controller is out of the question, so in a practical sense you cannot use a parity raid, unless you accept severe performance penalties.
  • On all 2011 platforms (Sandy Bridge-E and Xeon E5) a dedicated raid controller can enhance system performance in a major way, but costs $$$.

Irrespective of the platform you have or intend to buy, for a solid, balanced system expect to invest around 30 - 45% of the total system cost (excluding keyboard, monitors and the like) in the Disk I/O system: more than the combination of CPU, memory and motherboard, and more than 3 times the cost of the video card.
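As a worked example of that guideline (the total build cost below is a hypothetical figure, purely for illustration):

```python
# Disk I/O budget for a balanced NLE build, per the 30 - 45% guideline above.
def disk_budget(system_cost, low=0.30, high=0.45):
    """Return the (low, high) range to spend on the Disk I/O system."""
    return (system_cost * low, system_cost * high)

lo, hi = disk_budget(3500)   # hypothetical $3,500 build
print(f"${lo:.0f} - ${hi:.0f} for the Disk I/O system")
```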

Typical - indicative - component costs for a balanced system currently look like this for a 2011 platform, including raid controller:

or for a 1150 / 1155 / 1156 platform without a raid controller, since it makes no sense on these platforms:

 

Comments
  • I am very thankful for this great site and its valuable content. Still, as a person who used to work with off-the-shelf PC's and is now building his first custom PC, I am still not sure about the disk setup with the components I have so far:

    Samsung 850 pro 128 GB - OS, programs
    Samsung 950 pro 512 GB - cache, previews
    Areca 5028t2 (6x 4TB Red pro, RAID3) - media and project files
    Samsung 840 pro 512 GB - exports

    I still have a 950 Pro that just arrived - how could it be used most effectively?

    Thanks in advance,

    Tobias

  • Guest - Justin

    I was wondering what the best way to set up these drives with premiere and also the scratch disk set up. I have a WD 2TB 7200 with the OS and programs currently on it. Also have a SAMSUNG SM951 M.2 128GB PCI-Express 3.0 Internal Solid State Drive and 2 Seagate 2 TB 7200.

  • Guest - David

    Would this work well for a three disk set-up?
    C: 256GB SSD; OS, Programs, Page File
    D: 500GB HDD; Cache, Preview files, Exports
    E: 8TB RAID 10 Server connected through 1Gb/s ethernet; media, project files

    thanks,
    David

  • Hey There,

    I'm close to ordering all my components to build an editing rig around the $2,000 range. I'll be using DSLR, Gopro Hero 4 and GH4 footage (so several different codecs, high frame rate and 4k footage if that matters). I have everything down in terms of parts I want except for my hard drive configuration.

    Here is my (latest) layout based on my research:
    Drive C (SSD): OS, Programs, Page files
    Drive S (SSD): Media cache, preview and rendered export files
    Drive M (Raid0, HDD): music, photos, videos, project files
    Drive J (HDD): backup of drive M

    The way I figure, I need my C drive stuff and my media cache type files in smaller drives that are fast: SSD. I want a fast but big drive for my media: Raid0 HDD. Based on this, I either want to have a separate drive (J) for backup or turn drive M into a Raid 10, thus taking advantage of the speed of Raid 0 with the security of Raid 1, eliminating the need for drive J with a compromise of using one additional hard drive (Raid 10 = 4 HDD, otherwise 2 HDD for M and 1 HDD for J)

    I will be on the X99 motherboard platform if that matters, too.

    Is the above setup the right way to go?

    - jack

  • Guest - Tomasz

    Hey Marc,

    I have no problem throwing down an extra 144 for the EVO if it is a couple dollar more than the 132 cost on Amazon. It is weird that I can't take it out to save money though. Anyway, I can keep the RAM at 16GB right now and upgrade to 32 at a later time and get something a little better than whatever no name brand they have listed. I do agree about the hardware, but it seems like everywhere I look, it's the same case, that or I'm just not looking in the right places.

    As for software, I use After effects heavily, but I want to go more into cinema 4D/3ds max and start incorporating it into my work. Both programs are important for me to use together.

  • Guest - Tomasz

    Hey Marc,

    I'm honestly not sure? It just gives me a generic SSD, without any information other than that. The laptop that I'm looking at can be found here: http://www.agearnotebooks.com/msiwt60-2ok-3k-615us.html?ref=lexity&_vs=google&_vm=productsearch&gclid=Cj0KEQiAiuOlBRCU-8D6idaPz_UBEiQAzTagNCAYqtLk35h1O0kgtoE9YD3KBw_glkkxnkaFVyFpK44aAjKb8P8HAQ#tabs

    It's a bit pricey, so I'm trying to weight my options right now and see what I can use now and upgrade later.

    I'd like to have that TB drive just for storage though, my last system only had 500 GB and I went through that pretty quickly.

  • This is a really strange configurator. You can choose a "128GB Solid State Drive mSATA" or add $144 for a 250 GB Samsung EVO. This EVO costs $132:
    http://www.amazon.com/Samsung-Electronics-mSATA-0-85-Inch-MZ-MTE250BW/dp/B00HWHVOC2/

    Where is the saving through the removed drive?! And it offers very old mSATA drives that aren't available anymore (e.g. PM851). Finally I don't like it if I don't know which hardware is installed. Thats the reason why I bought a cheap version of my notebook and replaced the wireless card (upgrade to 802.11ac), added faster and bigger RAM and installed Samsung SSDs.

    Which software do you use most? Cinema 4D performs better with Quadro GPU because of OpenGL:
    http://www.xbitlabs.com/articles/graphics/display/nvidia-quadro-k5000_6.html#sect1

    But for Adobe a GTX performs better and I could imagine that the GTX 880M will be as fast as the K3100M for C4D, but would be twice as fast for Adobe. Finally a GTX 780M SLI or GTX 880M SLI would be beast like an Alienware 18, but it would be much bigger than a 15 incher ;)
    http://www.notebookcheck.net/Review-Alienware-18-Notebook.102566.0.html

  • Guest - Tomasz

    Thank you for all the information!

    This is my first time dealing with an SSD and HDD setup so some of this is a little confusing for me, but my situation is that I'm buying a new laptop (MSI WT60-2OK) for freelance motion graphics with After effects, cinema 4D, and some video editing. I need a laptop because I travel a good amount for work.

    What I'm getting has a 128GB mSATA SSD + 1TB 7200RPM (SATA III, 6 GB) and before I start installing everything I wanted to ask, do I install my OS, Adobe suite, and Cinema 4D into my SSD, and render out and place all my files and exports into my HDD. Or would it be better to install everything on my HDD and set up my caches to the SSD?

  • How fast is the mSATA drive? If it's under 400 MB/sec, you should sell both drives and buy one 256 or 512 GB SSD. The complete system will be faster with one drive than with this two-drive combination.

    Two benchmark examples:
    Samsung Evo mSATA 490 MB/sec
    Samsung Pro SATA 500 MB/sec

    If you consider using both drives as they are, install the OS on the mSATA SSD. Never use the HDD for anything other than storage for media and backups. It is much slower than an SSD.

  • I don't really understand why you suggest this 2-disk setup:
    C: OS, Programs, Pagefile, Media Cache
    D: Media, Projects, Previews, Exports

    If I render a project it reads the "Media" and writes the "Exports" and if I read a "Preview" nothing else will be written, but if a "Preview" is written (rendered) it reads the "Media" so it should be this setup I think:
    C: OS, Programs, Pagefile, Media Cache, Exports, Previews
    D: Media, Projects

    Or was your suggestion based on a fast drive (c) and a slow drive (d). I would use two SSD drives.

  • Media cache and previews are accessed all the time during editing. Since the disks are hampered by the half-duplex nature of the SATA connection, it makes sense to spread the disk activity as much as possible, so separate media cache and previews by using both disks. It does not make any difference whether both are HDD's or SSD's; they still suffer from the half-duplex nature of the connection.
    Exports are located on the 2nd drive because temp storage is located on the first drive and because writing the export file is one long sequential write from temp locations. So, again, distribute the load as much as possible. Your approach would not be optimal, because media is read from disk 2, encoded and put in temp storage on disk 1, then collected from disk 1 and written to disk 1.

  • media is read from disk 2, encoded and put in temp storage on disk 1, then collected from disk 1 and written to disk 1.

    Ok, I didn't know that the media is totally copied to the temp folder.

  • It is not copied. It is encoded to destination format. Those chunks are copied to temp storage. When all that is finished, those chunks are read from temp storage, aggregated and written to disk.

  • When I encode to a destination disk/folder, using H.264, I see two files being created in the destination folder, then when rendering is complete, they are (presumably) muxed together in the same folder to give me my final file. What is being written to a temp location on my system disk, and/or is this still true for CC 2014? How would one find this out, other than running SysInternals' Process Monitor or knowing someone inside Adobe?

  • Brian,

You would have to ask an engineer at Adobe for the exact answer to this question, but it looks to me quite similar to how ZIP files are downloaded: first, temp files are created as .PART files, and when the download is complete, they are renamed to .ZIP files.
    Adobe uses a similar approach. When encoding to export location <drive>/<directory>/<...>/<filename>, Premiere creates one or more temp files (depending on mux settings) in that same directory containing parts of the final encoded result, like the .PART files of a ZIP download, and when encoding is finished, those temp files are converted to the final destination output in .MP4 or .M4V format.

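The temp-then-rename pattern described above can be sketched in Python. This is a hypothetical illustration of the general technique, not Adobe's actual implementation; the function name and `.part` suffix are assumptions for the example:

```python
import os
import tempfile

def write_atomically(path: str, data: bytes) -> None:
    """Write data to a .part temp file in the destination directory,
    then rename it into place once the write is complete, so a
    half-finished file never appears under the final name."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".part")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp_path, path)  # rename is atomic on the same volume
    except BaseException:
        os.remove(tmp_path)  # clean up the partial file on failure
        raise
```

Creating the temp file in the destination directory (rather than a system temp folder) matters: `os.replace` is only atomic when source and destination are on the same volume.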
  • Thank you for this article, it's helped me realise the importance of disks. I'm looking to upgrade my setup and from what I can tell I should have:
    1 disk for OS/Programs/Pagefile, then 5 more for the rest of the video stuff (media cache, previews, project, media and exports).
    I understand how the disks reduce bottlenecks but am still trying to get my head around RAID setups. Do you double the five disks for the RAID?

    Thank you for helping,

    Robert

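On the RAID question above: you do not simply double the disk count; each RAID level trades a different amount of raw capacity for redundancy or speed. A rough sketch of the usable-capacity arithmetic for equal-size disks (an editorial illustration, not from the article):

```python
def usable_capacity_tb(level: int, n_disks: int, disk_tb: float) -> float:
    """Usable capacity for common RAID levels, assuming equal-size disks.
    RAID 0: striping, no redundancy       -> n disks of capacity
    RAID 1: mirrored pair                 -> 1 disk of capacity
    RAID 3/5: one disk's worth of parity  -> n - 1 disks of capacity
    RAID 6: two disks' worth of parity    -> n - 2 disks of capacity
    """
    if level == 0:
        return n_disks * disk_tb
    if level == 1:
        if n_disks != 2:
            raise ValueError("RAID 1 mirrors a pair of disks")
        return disk_tb
    if level in (3, 5):
        if n_disks < 3:
            raise ValueError("RAID 3/5 needs at least 3 disks")
        return (n_disks - 1) * disk_tb
    if level == 6:
        if n_disks < 4:
            raise ValueError("RAID 6 needs at least 4 disks")
        return (n_disks - 2) * disk_tb
    raise ValueError(f"unsupported RAID level {level}")
```

So, for example, three 1 TB disks in RAID 5 give 2 TB of usable space while surviving one disk failure.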
  • Guest - Evin

Hey Harm, is there a difference between getting SAS drives or SATA drives for the RAID system with an Areca card? I notice they are usually a couple of dollars more, but do they offer an increase in speed due to the connection?

  • Guest - Jof Davies

    To the Author,

After upgrading my rig to an Asus P9X79 mobo with an i7-4930K and 48 GB of RAM, I notice that my exports do not utilise all of the CPU and RAM, typically only 30% and 50%. I'm assuming the bottleneck may lie with my disk setup, and I am now considering a RAID configuration. Would you agree that this is a good way to increase export speed? If so, could you recommend a particular RAID 3 controller?

    Thanks,

    from Northampton, UK
• I have a disk setup question. I know that the disk being exported to can be the slowest disk, but how slow is too slow? I have projects and exports on a 3 x 1 TB RAID 5 volume with a read speed of 350 MB/s and a write speed just south of 30 MB/s. So the question is whether this write speed is too slow for exporting. I saw a video by Dave Dugdale suggesting the export process is pretty much only bottlenecked by the CPU, as the write rate to the export disk is around 1 MB/s. Thanks in advance and great site!

    from Dallas, TX, USA
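A back-of-envelope check for the question above: even high-bitrate delivery formats only need a few MB/s of sustained writes, so the encode itself is almost always the bottleneck. A sketch of the arithmetic (the 50 Mbps figure is an assumed example bitrate, and the safety factor is an arbitrary cushion):

```python
def disk_keeps_up(export_bitrate_mbps: float, write_speed_mb_s: float,
                  safety_factor: float = 2.0) -> bool:
    """True if the disk's sequential write speed comfortably exceeds
    the sustained write rate an export at this bitrate requires."""
    required_mb_s = export_bitrate_mbps / 8.0  # megabits -> megabytes
    return write_speed_mb_s >= required_mb_s * safety_factor

# A 50 Mbps H.264 export writes roughly 6.25 MB/s sustained,
# so a volume that writes 30 MB/s is nowhere near the bottleneck.
```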
• If capacity is not a factor, surely using high-speed, high-capacity SSDs is a better option, given the cost of a decent RAID card?

    For example, I could purchase 3 x 512 GB Samsung 840 Pro SSDs for a total cost of £834.00.

    However, if I were to buy 4 x 1 TB Seagate Constellation drives and an Areca RAID card, it might cost me about the same but, on paper, be slower.

    Surely the SSD setup, in a non-RAID configuration, provides better performance than the Areca option?

  • I should mention that the SSD option is appealing to me because:

    1) I don't like noisy PCs
    2) I don't like the downtime you get when there's an issue with RAID
    3) I work on one project at a time so 3x 512GB is more than enough for my needs

  • Thanks for the articles, very helpful.

I plan to use the #3 setup with three disks:

    1. OS / Software / Pagefile = Samsung 840 Pro 256 GB SSD
    2. Media / Projects = Samsung 840 EVO 1 TB SSD
    3. Media Cache / Previews / Exports = Samsung 840 EVO 120 GB SSD

    A few questions: does it make a real difference whether I use an 840 EVO or 840 Pro as disk #1? Will disk #2 become the bottleneck of the system if I use a 1 TB WD Black instead of the 1 TB 840 EVO? Is the size of disk #3 good, or should it be larger?

• Josh, given the poor write speed of the EVO drives and the sizes of your SSDs, it makes much more sense to use:

    The 120 GB EVO for OS & programs. You will not use more than around 35 GB plus the size of your page file on this disk.
    The 256 GB Pro for media cache / previews, because it writes much faster than the EVO and has more capacity.

• Thanks for the reply. I don't have those drives yet; that was the list of what I was planning to get. I will also have some games on the OS drive, which is why I chose the 256 GB instead of the 120 GB for that. But maybe 120 GB for the OS could do too.

    Can you give me your thoughts on the WD Black as a media/projects disk?

• The WD Black has a very good reputation for single use or in striped arrays, but not so much in parity arrays, where enterprise disks are preferred.
    In general it is advisable to keep games on a different machine, not on the video editing machine.

• John T. Smith made a remark on the Adobe forum, quoting a post I made in September 2011 and a reply from June 2012 in which I talked about the write degradation of SSDs, which could seriously reduce write performance over time. While that was true at the time, newer generations show much less write degradation today. SandForce controllers still exhibit it to some degree, but much less than in the past, and Samsung and Marvell controllers have made big steps forward, largely solving this issue in combination with well-functioning TRIM support. Even when a modern SSD is used for frequent read/write activity, like media cache and previews, the write degradation is pretty small.
