What is a RAID?

RAID is an acronym for "Redundant Array of Inexpensive Disks". The concept originated at the University of California, Berkeley in 1987 and was intended to create large storage capacity from smaller disks, without the need for the very large, very reliable disks of that time, which often cost tenfold what smaller disks did. Today prices of hard disks have fallen so much that it is often more attractive to buy a single 1 TB disk than two 500 GB disks. That is why RAID is now usually described as "Redundant Array of Independent Disks".

The idea behind RAID is to have a number of disks co-operate in such a way that they look like one big disk. Note that 'spanning' is not in any way comparable to RAID; it is just a way, like inverse partitioning, to extend the base partition across multiple disks, without changing the method of reading and writing to that extended partition.

Why use a RAID?

Now, with today's lower disk prices, why would a video editor consider a RAID array? There are two reasons:

  1. Redundancy (or security)
  2. Performance

Notice that it can be a combination of both reasons; it is not an 'either/or' choice.

Does a video editor need RAID?

No, if the above two reasons, redundancy and performance, are not relevant. Yes, if either or both of them are.

Re 1. Redundancy

Every mechanical disk will eventually fail, sometimes on the first day of use, sometimes only after several years. When that happens, all data on that disk are lost, and the only solution is to get a new disk and recreate the data from a backup (if you have one) or through tedious and time-consuming work. If that does not bother you and you can spare the time to recreate the lost data, then redundancy is not an issue for you. Keep in mind, though, that disk failures often occur at inconvenient moments: on a weekend when the shops are closed and you can't get a replacement disk, or when you face a tight deadline. Redundancy is a method to reconstruct the original data if one or more disks fail, so there is no complete data loss.

Re 2. Performance

Opponents of RAID will often say that any modern disk is fast enough for video editing, and they are right, but only to a certain extent. As the fill rate of a disk goes up, performance goes down, sometimes by 50%. As the number of activities on the disk goes up, like accessing (reading or writing) the pagefile, media cache, previews, media, project file and output file, performance goes down the drain. The more tracks you have in your project, the more strain is put on your disk: 10 tracks require 10 times the bandwidth of a single track. The more applications you have open, the more your pagefile is used. This is especially apparent on systems with limited memory.

The following chart shows how fill-rates on a single conventional disk will impact performance:

The performance degradation is very clear. As a conventional disk fills up, performance decreases significantly, quite in contrast to an SSD, which shows results like this:

Notice that the average read transfer is only around 120 MB/s on a conventional HDD versus more than 500 MB/s on an SSD. Now look at what a RAID array can do in terms of performance, using conventional HDDs:

The average read transfer increases to over 2300 MB/s and there is no longer any performance degradation as the fill rate goes up. That is only about performance. But add the factor of redundancy or safety: the much slower single HDD loses all its data when the disk fails, while the RAID array, at least in this case, can lose up to 6 disks at the same time without any data loss. All the data can be automatically reconstructed from the remaining member disks. Sure, during the rebuild of the data performance drops significantly, but it is still much better than a single disk.

Remember that I said earlier that the idea behind RAID is to have a number of disks co-operate in such a way that they look like one big disk. That means a RAID will not fill up as fast as a single disk and will not experience the same performance degradation.

RAID basics

Now that we have established the reasons why people may consider RAID, let's have a look at some of the basics.

Single or Multiple?

There are three methods to configure a RAID array: mirroring, striping and parity check. These are called levels, and levels are subdivided into single or multiple levels, depending on the methods used. A single level RAID0 is striping only, while a multiple level RAID15 is a combination of mirroring (1) and parity check (5). Multiple levels are designated by combining two single levels, like a multiple level RAID10, which is a combination of single level RAID1 with single level RAID0.

Hardware or Software?

The difference is quite simple: hardware RAID controllers have their own processor and usually their own cache, while software RAID controllers use the CPU and RAM on the motherboard. Hardware controllers are faster but also more expensive. For RAID levels without parity check, like RAID0, RAID1 and RAID10, software controllers are quite good on a fast PC.

A very important issue to keep in mind when deciding on a RAID configuration is the interface used, and this applies to internal and external RAIDs alike. If the disks are attached to SATA or eSATA ports, they are very limited in their bandwidth; connected over USB3 they are even more limited; but connected to a dedicated PCIe controller over SFF-8088 or SFF-8087 mini-SAS cables, they really show their full potential, especially if PCIe 3.0 x8 controllers are used. These offer far better transfer rates than any other connection, with 4 times the bandwidth of eSATA per connection. If one uses eSATA or USB3 connections for external drives, realize that the connector is a non-locking type, so the risk of data corruption if the cable comes loose is a serious drawback. SFF-8088 and SFF-8087 mini-SAS cables are of the locking type, so lost connections are not an issue.

The common Promise and Highpoint cards are all software controllers that (mis)use the CPU and RAM memory. Real hardware RAID controllers all use their own IOP (I/O Processor) and cache (ever wondered why these hardware controllers are expensive?).

There are two kinds of software RAIDs. One is controlled by the BIOS/drivers (like Promise/Highpoint) and the other is solely OS dependent. The first kind can be booted from; the second can only be accessed after the OS has started. In performance terms they do not differ significantly.

On 115x platforms with a Sandy Bridge, Ivy Bridge or Haswell CPU, there is no practical possibility to install a dedicated RAID controller, because the chipset design limits the number of available PCIe lanes to 16, all of which are used by the dedicated video card. Well, you can install a dedicated RAID card, but the severe drawback is that the video card is throttled back from PCIe x16 to PCIe x8, resulting in a performance loss of 10 - 15%. Using on-board software RAID5 on these platforms is also ill-advised: without a dedicated IOP a parity RAID is just too slow to be comfortable, especially when rebuilding, and it carries too much overhead in CPU utilization on a quad core system, which really drags down performance.

If you happen to use QuickTime, ProRes or DNxHD formats that use the QuickTime 32 Server extension, you are in serious trouble, because the 32 bit nature of that extension limits the use of RAM to 4 GB, even when 64 GB of memory is installed. That simply means the system is so restricted that it is nonsense to even consider a dedicated RAID controller. The very limited memory space is a far greater bottleneck than any single disk system could ever be, let alone a RAID assisted system. Better to convert anything QuickTime related to an editable format that runs in a 64 bit environment.

For the technically inclined: Cluster size, Block size and Chunk size

In short: Cluster size applies to the partition and Block or Stripe size applies to the array.

With a cluster size of 4 KB, data are distributed across the partition in 4 KB parts. Suppose you have a 10 KB file: three clusters will be occupied, 4 KB + 4 KB + 2 KB. The remaining 2 KB of the last cluster is called slack space and cannot be used by other files. With a block size (stripe) of 64 KB, data are distributed across the array's disks in 64 KB parts. Suppose you have a 200 KB file: the first 64 KB is located on disk A, the second 64 KB on disk B, the third 64 KB on disk C and the remaining 8 KB on disk D. Here there is no slack space, because the block size is subdivided into clusters. When working with audio/video material a large block size is faster than a small one; when working with smaller files a smaller block size is preferred.
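The cluster and stripe arithmetic above can be sketched in a few lines of Python. This is purely illustrative; the function names and the 4 disk layout are my own invention, not any controller's API:

```python
# Illustrative sketch: how a file occupies clusters on a partition
# and how it is striped in blocks across the disks of an array.

def cluster_usage(file_kb, cluster_kb=4):
    """Return (clusters used, slack space in KB) for a file on a partition."""
    clusters = -(-file_kb // cluster_kb)        # ceiling division
    slack = clusters * cluster_kb - file_kb     # unusable tail of last cluster
    return clusters, slack

def stripe_layout(file_kb, block_kb=64, disks=("A", "B", "C", "D")):
    """Return a list of (disk, KB written) pieces for a striped file."""
    layout, remaining, i = [], file_kb, 0
    while remaining > 0:
        piece = min(block_kb, remaining)
        layout.append((disks[i % len(disks)], piece))
        remaining -= piece
        i += 1
    return layout

print(cluster_usage(10))    # 10 KB file in 4 KB clusters -> (3, 2)
print(stripe_layout(200))   # 200 KB file -> 64+64+64+8 KB over disks A-D
```

Running it against the examples from the text: a 10 KB file occupies three 4 KB clusters with 2 KB of slack, and a 200 KB file spreads as 64 + 64 + 64 + 8 KB over disks A to D.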

Sometimes you have an option to set the 'chunk size', depending on the controller. It is the minimal size of a data request from the controller to a disk in the array and is only useful when striping is used. Suppose you have a block size of 16 KB and you want to read a 1 MB file: the controller needs to read 64 blocks of 16 KB. With a chunk size of 32 KB the first two blocks will be read from the first disk, the next two blocks from the next disk, and so on. If the chunk size is 128 KB, the first 8 blocks will be read from the first disk, the next 8 blocks from the second disk, etcetera. Smaller chunks are advisable with smaller files; larger chunks are better for larger (audio/video) files.
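A minimal sketch, again in Python with made-up names (not modelled on any real controller), of how the chunk size determines which disk serves which block:

```python
# Which disk serves a given block when reads are grouped into chunks?

def disk_for_block(block_no, chunk_kb=32, block_kb=16, num_disks=4):
    """0-based index of the disk holding block number block_no (0-based)."""
    blocks_per_chunk = chunk_kb // block_kb   # blocks read per disk per turn
    return (block_no // blocks_per_chunk) % num_disks

# 1 MB file = 64 blocks of 16 KB. With 32 KB chunks, blocks 0-1 come
# from disk 0, blocks 2-3 from disk 1, and so on.
print([disk_for_block(n) for n in range(8)])   # [0, 0, 1, 1, 2, 2, 3, 3]

# With 128 KB chunks, each disk serves 8 consecutive blocks per turn.
print([disk_for_block(n, chunk_kb=128) for n in range(16)])
```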

RAID Levels

For a full explanation of various RAID levels, look here: http://www.acnc.com/04_01_00/html

What are the benefits of each RAID level for video editing, and what are the risks of each level, in terms of achieving better redundancy and/or better performance? I will try to summarize them below.

RAID0

The Band-AID of RAID. There is no redundancy! The risk of losing all data is multiplied by the number of disks in the array: a 2 disk array carries twice the risk of a single disk, and an X disk array carries X times the risk of losing it all.

A RAID0 is perfectly OK for data that you will not worry about losing, like the pagefile, media cache, previews or rendered files. It may be a hassle if you have media files on it, because a failure requires recapturing or re-copying from the camera's media card, but it is not the end of the world. It would be disastrous for project files.

Performance wise, a RAID0 is almost X times as fast as a single disk, X being the number of disks in the array.
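As a sanity check on the 'X times the risk' statement, here is a small Python sketch. It assumes independent disk failures and a hypothetical 3% per-disk failure rate over some period (both numbers are mine, for illustration); for small failure rates the exact result is close to the simple X-times rule of thumb:

```python
def raid0_failure_probability(p_single, num_disks):
    """Probability that a RAID0 array loses data, assuming each member
    disk fails independently with probability p_single over the period."""
    # The array survives only if every disk survives.
    return 1 - (1 - p_single) ** num_disks

# Hypothetical 3% per-disk failure rate:
for x in (1, 2, 4, 8):
    exact = raid0_failure_probability(0.03, x)
    print(x, round(exact, 4), "rule of thumb:", round(0.03 * x, 4))
```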

RAID1

The RAID level for the paranoid. It gives no performance gain whatsoever, but it gives you redundancy, at the cost of a disk. Even if you are meticulous about backups and make them all the time, RAID1 may still be a better solution, because you can never forget to make a backup and you can restore instantly. Remember that backups require a disk as well. This RAID1 level can only be advised for the C drive, IMO, if you do not have any trust in the reliability of modern-day disks. It is of no use for video editing.

RAID3

The RAID level for video editors. There is redundancy! Thanks to the dedicated parity disk, there is only a small performance hit when rebuilding the array after a disk failure. There is quite a performance gain achievable, but the drawback is that it requires a hardware controller from Areca. You could do worse, but it is the Rolls-Royce amongst hardware controllers, and it is priced like the car.

Performance wise it will achieve around 85% of (X-1) times a single disk on reads and 60% of (X-1) on writes, X being the number of disks in the array. So with a 6 disk array in RAID3, you get around 0.85 x (6-1) = 425% of the performance of a single disk on reads and 300% on writes.
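The same rule of thumb, expressed as a tiny Python helper (the 85%/60% factors are the article's heuristic, and the function name is made up for illustration):

```python
def raid3_throughput(single_disk_mbs, num_disks, read=True):
    """Rule-of-thumb RAID3 throughput: ~85% of (X-1) member disks on
    reads, ~60% of (X-1) on writes, relative to one disk's speed."""
    factor = 0.85 if read else 0.60
    return factor * (num_disks - 1) * single_disk_mbs

# A 6 disk RAID3 built from 120 MB/s disks:
print(raid3_throughput(120, 6))              # ~510 MB/s on reads
print(raid3_throughput(120, 6, read=False))  # ~360 MB/s on writes
```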

RAID5 & RAID6

The RAID levels for non-video applications, with distributed parity. Distributed parity makes for a somewhat severe performance hit in case of a disk failure. The double parity of RAID6 makes it ideal for NAS applications.

The performance gain is slightly lower than with a RAID3. RAID6 requires a dedicated hardware controller. RAID5 can be run on a software controller, but the CPU overhead negates the performance gain to a large extent, and rebuild times in case of a disk failure are extremely long.

RAID10

The RAID level for paranoids in a hurry. It delivers the same redundancy as RAID1, but since it is a multilevel RAID combined with a RAID0, it delivers twice the performance of a single disk at four times the cost, apart from the controller. The main advantage is that you can have two disk failures at the same time without losing data, but what are the chances of that happening?

RAID30, 50 & 60

Just striped arrays of RAID3, 5 or 6, which doubles the speed while keeping redundancy at the same level.

EXTRAS

RAID level 0 is striping, RAID level 1 is mirroring and RAID levels 3, 5 & 6 are parity check methods. For parity check methods, dedicated controllers offer the possibility of defining a hot-spare disk: an extra disk that does not belong to the array but is instantly available to take over from a failed disk in the array. Suppose you have a 6 disk RAID3 array with a single hot-spare disk and one disk fails. What happens? The data on the failed disk are reconstructed to the hot-spare in the background, while you keep working with negligible impact on performance. In mere minutes your system is back at the performance level it was at before the disk failure. Sometime later you take out the failed drive, replace it with a new drive and define that as the new hot-spare.

As stated earlier, dedicated hardware controllers use their own IOP and their own cache instead of using the memory on the mobo. The larger the cache on the controller, the better the performance, but the main benefit of cache memory is in handling random R+W activities. For sequential activities, as in video editing, it does not pay to use more than 2 GB of cache.

REDUNDANCY (or security)

Not using RAID entails the risk of a drive failing and losing all data. The same applies to RAID0 (or, better said, AID0), only multiplied by the number of disks in the array.

RAID1 or 10 overcomes that risk by offering a mirror, an instant backup in case of failure, at high cost.

RAID3, 5 or 6 offers protection against disk failure (1 disk for RAID3 & 5, 2 disks for RAID6) by reconstructing the lost data in the background while you continue your work. This is further enhanced by the use of hot-spares, a double insurance.

PERFORMANCE

RAID0 offers the best performance increase over a single disk, followed by RAID3, then RAID5 and finally RAID6. RAID1 does not offer any performance increase.

Hardware RAID controllers offer the best performance and the best options (like adjustable block/stripe size and hot-spares), but they are costly.

SUMMARY

If you only have 3 or 4 disks in total, forget about RAID. Set them up as individual disks, or, the better alternative, get more disks for better redundancy and better performance. What does it cost today to buy an extra disk, compared to the downtime you face when a single disk fails?

If you have room for at least 4 disks, apart from the OS disk, consider a RAID3 if you have an Areca controller; otherwise consider a RAID5.

If you have even more disks, consider a multilevel array by striping a parity check array to form a RAID30, 50 or 60.

If you can afford the investment, get an Areca controller with a battery backup module (BBM) and 2 GB of cache. Avoid software RAIDs as much as possible, especially under Windows.

RAID, if properly configured, will give you added redundancy (or security) to protect you from disk failure while you continue working, and it will give you increased performance.

See the difference in performance between a single Samsung 840 Pro SSD and a single Corsair Performance Pro SSD, both on an Intel SATA 6G port, a single Corsair Performance Pro SSD on a Marvell 6G port, and a large RAID30 array on an Areca ARC-1882 controller. It clearly shows the limitations of the Marvell controller on the motherboard:

Rebuilding a Raid array

Sustained transfer rates are a major factor in determining how 'snappy' your editing experience will be when editing multiple tracks. For single track editing most modern disks are fast enough, but when editing complex codecs like AVCHD, DSLR, RED or EPIC, when using uncompressed or AVC-Intra 100 Mbps codecs, or when using multi-cam or multiple tracks, the sustained transfer speed can quickly become a bottleneck and limit that 'snappy' feeling during editing.

For that reason many use RAID arrays to remove that bottleneck from their systems, but this also raises the question:

What happens when one or more of my disks fail?

Actually, it is simple. Single disks or single level striped arrays will lose all data. That means you have to replace the failed disk and then restore the lost data from a backup before you can continue your editing. This situation can become extremely bothersome if you consider the following scenario:

At 09:00 you start editing, you finish by 17:00, and you have a backup scheduled at 21:00, like every day. At 18:30 one of your disks fails, before your backup has been made. All your work from that day is lost, including your auto-save files, so a complete day of editing is irretrievably lost. You only have the backup from the previous day to restore your data, and even that cannot be done before you have installed a new disk.

This kind of scenario is not unheard of, and even worse, it usually happens at the most inconvenient time, like on a Saturday afternoon before a long weekend when you can only buy a new disk on Tuesday... (sigh).

That is the reason many opt for a mirrored or parity array, despite the much higher cost (a dedicated RAID controller, extra disks and lower performance than a striped array). They buy safety, peace of mind and a more efficient workflow.

Consider the same scenario as above and again one disk fails. No worries, be happy! No data are lost at all, and you can continue editing, making the last changes of the day. Your planned backup will proceed as scheduled and the next morning you can continue editing, after having replaced the failed disk. All your auto-save files are intact as well.

The chances of two disks failing simultaneously are extremely slim, but if cost is no object and safety is everything, some consider using a RAID6 array to cover that eventuality. See the article quoted at the top.

Rebuilding data after a disk failure

In the case of a single disk or striped array, you have to use your backup to rebuild your data. If the backup is not current, you lose everything you did after the last backup.

In the case of a mirrored array, the RAID controller writes all data from the mirror to the newly installed disk. Consider it a disk copy from the mirror to the new disk. This is a fast way to get back to full speed, with no need to dig out your (possibly older) backup and restore the data. Since the controller does this in the background, you can continue working on your time-line.

In the case of parity RAIDs (3/5/6), one has to make a distinction between distributed parity RAIDs (5/6) and the dedicated parity RAID (3).

Dedicated parity, RAID3

If a disk fails, the data can be rebuilt by reading all remaining disks (all but the failed one) and writing the rebuilt data only to the newly replaced disk, so writing to a single disk is enough to rebuild the array. There are actually two possibilities that affect the rebuild of a degraded array. If the dedicated parity drive failed, the rebuild is a matter of recalculating the parity info (relatively easy) by reading all remaining data and writing the parity to the new dedicated disk. If a data disk failed, the data need to be rebuilt from the remaining data and the parity, and this is the most time-consuming kind of rebuild of a degraded array.

Distributed parity, RAID5 or RAID6

If a disk fails, the data can be rebuilt by reading all remaining disks (all but the failed one), rebuilding the data, recalculating the parity information and writing both data and parity to the replacement disk. This is always time-consuming.
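Under the hood, the parity that these rebuilds rely on is plain XOR arithmetic (RAID6 adds a second, more complex parity on top). A minimal Python sketch, not any controller's actual firmware, of how a lost block is rebuilt from the survivors:

```python
from functools import reduce

def parity(blocks):
    """XOR parity over equal-length byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data disks plus one dedicated parity disk (RAID3-style layout).
data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Disk 1 fails; XOR-ing the surviving data blocks with the parity
# block reproduces the lost block exactly.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

This is also why a rebuild must read every surviving member disk: each lost byte is recomputed from the corresponding bytes on all the others.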

The impact of 'hot-spares' and other considerations

When an array is protected by a hot-spare and a disk drive in that array fails, the hot-spare is automatically incorporated into the array and takes over from the failed drive. When an array is not protected by a hot-spare and a disk drive fails, you remove and replace the failed disk drive; the controller detects the new disk drive and begins to rebuild the array.

If you have hot-swappable drive bays, you do not even need to shut down the PC: simply slide out the failed drive and replace it with a new disk. Remember, when a drive has failed and the RAID is running in 'degraded' mode, there is no further protection against data loss, so it is imperative to replace the failed disk at the earliest moment and rebuild the array to a 'healthy' state.

Rebuilding a 'degraded' array can be done automatically or manually, depending on the controller in use, and often you can set the priority of the rebuilding process higher or lower, depending on the need to continue regular work versus the speed with which the array must be returned to its 'healthy' status.

What are the performance gains to be expected from a raid and how long will a rebuild take?

The most important column in the table below is the sustained transfer rate. It is indicative; there is no guarantee that your RAID will achieve exactly the same results. That depends on the controller, the on-board cache and the disks in use. The more tracks you use in your editing, the higher your resolution and the more complex your codec, the more you will need a high sustained transfer rate, and that means more disks in the array.

Sidebar: While testing a new time-line for the PPBM6 benchmark, using a large variety of source material, including RED and EPIC 4K, 4:2:2 MXF, XDCAM HD and the like, the required sustained transfer rate for simple playback of a pre-rendered time-line was already over 300 MB/s, even at 1/4 resolution playback, because of the 4:4:4:4 full quality deBayering of the 4K material. That simply means a single disk, as in the example above, with an average transfer rate of around 120 MB/s, cannot keep up with the transfer rate required for fluid playback. It will stutter. Even with 2 disks in a striped RAID0 array you will not have sufficient disk I/O speed for fluid playback, and the only solution is a move to a dedicated controller with a large parity array to keep up with these data requirements.

The above table clearly shows that adding disks to an array is a simple and safe way to increase sustained transfer rates to levels that can meet the most demanding editing tasks imaginable.

Final thoughts

With the increasing popularity of file based formats, the importance of backups of your media cannot be stressed enough. In the past one always had the original tape if disaster struck, but no longer. You need regular backups of your media and projects. With single disks and (R)AID0 you run the risk of complete data loss, because of the lack of redundancy. Backups cost extra disks and extra time to create, and to restore in case of disk failure.

The need for backups with mirrored RAIDs is far smaller, since there is complete redundancy. Sure, mirrored RAIDs require double the number of disks, but you save on the number of backup disks and you save the time to create and restore backups.

In the case of parity RAIDs, the need for backups is greater than with mirrored arrays but smaller than with single disks or striped arrays, and with hot-spares the need for backups is further reduced. Initially, a parity array may look like a costly endeavour: the RAID controller and the number of disks make it expensive. But consider what you get: more speed, more storage space, easier administration, fewer backups, less time spent on those backups, and the ability to continue working in case of a drive failure, even if somewhat sluggishly. The peace of mind it brings is often worth more than continuing with single disks or striped arrays.

In case of a Controller Failure

you have a serious problem, just as you would with a motherboard failure. The controller needs replacement, but very often it does not stop there: the integrity of the RAID array may be severely compromised and your data may be lost permanently. I have used on-board RAID, Promise software cards, 3Ware hardware cards and several Areca cards, and in all my years with computers I have never experienced a RAID controller failure. I have had motherboard failures, corrupted SSDs, failed hard drives and video cards, but never a failed controller card. That does not mean it cannot happen, but the chances are very slim. If it does happen, you need your backups to restore your data, since you can no longer trust the parity info in your array.

 
  • I want to upgrade my current system to RAID/SAS storage. I was thinking about the Areca ARC-1883X PCI-Express 3.0 x8 SAS controller.
    I am currently running the following:
    Gigabyte GA-X79-UP4
    Xeon E5-1650 v2
    64GB RAM
    Samsung 850 Pro 512GB SSD
    GTX 760
    Premiere Pro CC2014
    Would I benefit from this RAID controller and an 8 drive enclosure? If so, what SAS/eSATA drives and drive enclosure would be a good starting point? Looking to start with 12-16 TB.


    Thanks

  • Harm:

    First off, thank you for this great blog and thank you for the detailed reply regarding the new Areca card 4GB vs 8GB a few months back. It was hard to spend the money, but now that I see the results, there is no trace of pain left. I have completed my machine and followed your advice to the "T". I love my 'warrior' machine and experience full editing in full HD resolution every day. I set up the RAID5 with a hot-spare and enjoy speed and peace of mind all day long. Thank you again for this great work of dedication, so that us novices can really partake without spending enormous amounts of money. Still spent about 6K in total, but I feel in complete control of my machine, able to address any issues that may come up. I truly salute you for this body of work you have created.

    from Hollywood, Los Angeles, CA, USA
  • Guest - Man

    RAID 1 gives read performance boost.

  • Man, that is not true; it is impossible, both in theory and in practice. RAID1 has the same speed as a single disk, or even slightly slower because of the mirror.

  • Guest - LAguest

    Hypothetically if I had at least one raid0 drive, and a few other drives, which drive would benefit most with raid0, from the following list:

    D1: OS, Programs
    D2: PageFile and Premiere Pro Media Cache
    D3: Previews and Exports
    D4: Media and Projects

    Is there a different drive breakdown that would be better than the one above? Would it be beneficial to have two raid0's instead of one?

  • D2

  • Guest - DirkD.LA

    Hi Bill/Harm. Awesome, as always. Wondering what your take is on the new Areca ARC-1883ix series 4GB and 8GB card. Is it worth spending the money and upgrade from the ARC-1882ix series?

  • Hi Dirk,

    As you know, I have the Areca ARC-1882ix/24-4G as well, currently populated with a 24 disk RAID30 (3 x 7R3 striped plus 3 global hot-spares) with Seagate Constellation ES drives, and for the time being I have added a 4 disk RAID0 with Samsung 850 Pro SSDs.
    In due time I will upgrade to an i7-5960X on a X99-WS mobo, when the price of DDR4-2800 comes down a bit. Of course I considered where I want to go eventually, but for now I only have 3 SSDs on the mobo, 1 multicard reader, 1 BDR, a 6 disk SSD cage (4 populated) and 25 HDDs. So my intention is to retrofit my case and ADD an Areca ARC-1883ix-24/8G: keep my 24 disk RAID30 HDD array on the 1882 and ADD a 12 or 16 disk RAID30 SSD array on the new 1883, plus the 4 SSD R0.

    The retrofitting is easy: just remove one 5 x 3.5 hot-swappable bay, move 5 HDDs to empty spaces on the right side of my case (15 HDDs on the right side and 10 on the left side) and add 2 more IcyDock MB996 SSD cages in the place where the 5 bay HDD cage was. That gives me 22 SSD slots on the left side of my case, plus the 2nd BDR that I temporarily took out. Leaves me some room to grow.

    However, my idea is to ADD the 1883 and use it solely for SSDs, while leaving the 1882 and the HDDs intact. Selling the 1882 is not easy, the market for these controllers is very thin, and the upgrade to the 1883 gives only a marginal performance gain if you use large arrays, like I do.

    Hope these considerations help.

  • Al Bergstein commented on the lack of coverage of what a controller failure implies for a RAID array. I will add this to the main article shortly.

  • Hi Harm,

    Continuing our conversation from the external drive page here... it makes sense, as it now concerns RAID. So I opted for the Mediasonic ProBox and I have 4 x 7200 rpm 1 TB drives in a striped config, connected to my PC via USB3. I've run some speed tests and I am only getting 175-199 MB/s max, and significantly slower when using eSATA. Should I easily be hitting 250-300 MB/s?

    Furthermore, I am going to reset my chunk size, but I am getting some conflicting information here... This guy is saying smaller chunk size when working with video...? http://www.computerweekly.com/RAID-chunk-size-the-key-to-RAID-striping-performance

    Your infinite wisdom is much appreciated Harm.

    Mike
