Online storage calculator and RAID question

Kingsley

Young grasshopper
Joined
Jan 1, 2016
Messages
40
Reaction score
1
I've spent quite a while getting my home-brew system vetted before I install it. I've been using a single-drive NAS for testing, which I'm going to convert to a music drive once I get my camera storage settled. I stumbled across this online space requirement calculator that seems very handy.

http://www.video-insight.com/support/calc/calc.php

So I've decided, given my quality and storage requirements, that I will likely run four 4TB drives in a RAID 5 configuration. It strikes me that the odds of a drive failure are likely low with WD Purple drives, but then one never knows.
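
For anyone curious, here is roughly the math such a calculator is doing (the bitrate below is my own assumption for a typical H.264 stream, not a figure from the site):

```python
# Rough surveillance storage estimate: bitrate x cameras x retention.
# The bitrate is an assumed value for a typical H.264 stream, not a
# figure taken from the linked calculator.

def storage_tb(cameras: int, mbps_per_camera: float, days: float,
               duty_cycle: float = 1.0) -> float:
    """Terabytes needed; duty_cycle < 1.0 models motion-only recording."""
    seconds = days * 24 * 3600 * duty_cycle
    total_bits = cameras * mbps_per_camera * 1e6 * seconds
    return total_bits / 8 / 1e12

# e.g. eight 1080p cameras at ~4 Mbps, 30 days of continuous recording:
print(f"{storage_tb(8, 4.0, 30):.1f} TB")  # ~10.4 TB, close to the 12 TB
# usable that four 4TB drives in RAID 5 would provide
```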

Are other folks running RAID storage? Why or why not?
 

nayr

IPCT Contributor
Joined
Jul 16, 2014
Messages
9,326
Reaction score
5,325
Location
Denver, CO
4TB x 4 in RAID 5 = a 12TB array; with the unrecoverable read error rates of modern disks, you're statistically likely to see the rebuild fail on a RAID 5 of this size.. only fools continue to think this provides any benefit.

RAID 5 is dead: http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/

Have you ever tried to rebuild a multi-terabyte RAID? You're looking at weeks, perhaps months of degraded operation while it rebuilds.. you can do a complete system restore from backup and be back online in a fraction of the time it takes to rebuild ANY level of RAID.
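
To put rough numbers on that (the rates here are assumptions; rebuild speed varies wildly with the controller and the live recording load):

```python
# Restore vs. rebuild, back of the envelope. Controllers throttle a
# rebuild hard while the array keeps serving live writes, so the
# effective rebuild rate can be a small fraction of raw disk speed.
# Both rates below are assumptions for illustration.

def days_at(terabytes: float, mb_per_sec: float) -> float:
    return terabytes * 1e12 / (mb_per_sec * 1e6) / 86400

ARRAY_TB = 12
print(f"restore from backup @ 150 MB/s: {days_at(ARRAY_TB, 150):.1f} days")  # ~0.9
print(f"throttled rebuild   @   5 MB/s: {days_at(ARRAY_TB, 5):.0f} days")    # ~28
```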

For big data, the game is different.. RAID setups provide speed and high availability, but no real protection anymore.. nothing decent anyway; backups are more important than ever.
 

Kingsley

Young grasshopper
Joined
Jan 1, 2016
Messages
40
Reaction score
1
Well, this fool originally thought that the risk of an intrusion being recorded at the same time as a drive failure was quite remote, but from reading the article it sounds like the odds of a disk failure followed by a failed rebuild are far greater than the intrusion-plus-failure scenario (though I have no formula or data to back that up).
 

nayr

IPCT Contributor
Joined
Jul 16, 2014
Messages
9,326
Reaction score
5,325
Location
Denver, CO
A mirror is really the only good way to get any protection against a drive failure, and with video storage, mirroring is quite expensive..

but if you're going for redundancy at that level, you might as well just have a 2nd NVR recording all the same things as the first.. why do it half-assed if you're spending that kind of money on redundancy?

Many IP cameras nowadays have internal SD storage; while recording externally to an NVR they can also record to the local SD card.. the capacity is obviously much less, but it can provide a cheap backup.. in my experience you can get about a day of 1080p or 3 days of D1 on 64GB of local storage.. as long as you grab the video files before they get overwritten, it's something.
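
The back-of-envelope math behind those numbers (the bitrates are assumptions for typical camera streams):

```python
# How long a camera's local SD card lasts before old footage gets
# overwritten. The bitrates are assumed typical values, not measured.

def sd_retention_days(card_gb: float, mbps: float) -> float:
    return card_gb * 1e9 * 8 / (mbps * 1e6) / 86400

print(f"1080p @ 6 Mbps on 64GB: {sd_retention_days(64, 6.0):.1f} days")  # ~1 day
print(f"D1    @ 2 Mbps on 64GB: {sd_retention_days(64, 2.0):.1f} days")  # ~3 days
```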

Many NVRs can also save the stream to 2 separate disks, so if some cameras are more important than others, they can have redundant storage without making everything redundant.
 

wseaton

n3wb
Joined
Jun 20, 2015
Messages
23
Reaction score
0
I'm a data center guy who's had to deal with RAID for darn near 20 years. Avoid RAID 5 at all costs. RAID 6 is much better, but it's out of reach of most consumer storage devices and requires 5 or more drives to be worthwhile. RAID 1 (a straight mirror) is preferred for consumer devices, and is pretty much bulletproof.

Drives fail....they are better than they were 10 years ago, but they still fail. RAID is basically an insurance policy against a single drive failing. My business accounts running security cameras require it, because the added cost of mirrored drives is worth it against the risk that they'll need the footage right when a drive has failed.
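
The capacity trade-off in rough numbers (a simple sketch that ignores filesystem overhead and assumes identical drives):

```python
# Usable capacity for N identical drives under common RAID levels.

def usable_tb(level: int, drives: int, size_tb: float) -> float:
    if level == 0:                       # stripe, no redundancy
        return drives * size_tb
    if level == 1:                       # mirrored pairs (RAID 1/10)
        return drives * size_tb / 2
    if level == 5:                       # one drive's worth of parity
        return (drives - 1) * size_tb
    if level == 6:                       # two drives' worth of parity
        return (drives - 2) * size_tb
    raise ValueError("unsupported RAID level")

for lvl in (0, 1, 5, 6):
    print(f"RAID {lvl}: {usable_tb(lvl, 4, 4.0):.0f} TB usable from 4 x 4TB")
# RAID 0: 16, RAID 1: 8, RAID 5: 12, RAID 6: 8 -- with only 4 drives,
# RAID 6 costs as much capacity as mirroring, which is why it only
# starts to pay off at 5 or more drives.
```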
 

Rockford622

Getting the hang of it
Joined
Feb 19, 2016
Messages
188
Reaction score
33
What's with all the hate toward RAID 5 anyway? Everybody seems to treat it like the plague. I use it at home with no problems and keep backups of everything just in case. I really don't like the idea of giving up 50% of my disk space for a RAID 6 or RAID 1 solution.

If it's that bad, it shouldn't still be implemented.

So, spill the beans.
 

nayr

IPCT Contributor
Joined
Jul 16, 2014
Messages
9,326
Reaction score
5,325
Location
Denver, CO
SATA drives are commonly specified with an unrecoverable read error (URE) rate of 1 in 10^14 bits. That means that roughly once every 100,000,000,000,000 bits, the disk will very politely tell you that, so sorry, it really, truly can't read that sector back to you. One hundred trillion bits is about 12 terabytes.

The key point that seems to be missed in many of the comments is that when a disk fails in a RAID 5 array and it has to rebuild, there is a significant chance of an unrecoverable read error during the rebuild (BER/UER). Because there is no longer any redundancy, the array cannot recover from that error and the rebuild fails. This does not depend on whether you are running Windows or Linux, hardware or software RAID 5; it is simple mathematics. An honest RAID controller will log this and generally abort, allowing you to restore undamaged data from backup onto a fresh array.
Read the article I linked; in a nutshell, the closer you get to a 12TB RAID 5 array, the higher your odds of a failed rebuild.. at 12TB of RAID 5 you've statistically reached the point of expecting a URE on every rebuild: http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/
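
A quick sketch of that math (assuming the quoted 1-in-10^14 spec and independent bit errors, which is a simplification):

```python
# Odds of hitting at least one unrecoverable read error (URE) while
# reading back an entire degraded array during a RAID 5 rebuild.
# Assumes the spec-sheet rate of 1 URE per 1e14 bits and independent
# errors -- a simplification, but it shows the trend.

URE_RATE = 1e-14  # errors per bit read (typical consumer SATA spec)

def rebuild_failure_odds(read_tb: float) -> float:
    """Chance of at least one URE when reading read_tb terabytes."""
    bits = read_tb * 1e12 * 8
    return 1 - (1 - URE_RATE) ** bits

for tb in (2, 6, 12, 24):
    print(f"{tb:>2} TB read -> {rebuild_failure_odds(tb):.0%} chance of a URE")
# 12 TB comes out around 60% per rebuild attempt -- in other words you
# should expect roughly one URE per 12 TB read, which is why the article
# treats a clean rebuild at that size as a coin flip you will eventually lose.
```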

It's no longer implemented seriously; I haven't seen RAID 5 used in production in the last decade.. I work in/for datacenters, and we started walking away from RAID 5 a long time ago.. the writing's been on the wall, sorry you didn't get the memo.
 

Rockford622

Getting the hang of it
Joined
Feb 19, 2016
Messages
188
Reaction score
33
Ok, just read the article.

If the reason a RAID 5 array fails to rebuild after a disk failure is a URE...how is the URE a function of the overall size of the volume and not a function of the individual disks?

And isn't the 10^-14 rate just a spec, not a guarantee?

I just replaced all 4 x 2TB disks in my NAS with 4 x 5TB drives in RAID 5, and not only rebuilt the entire array but also expanded the volume to ~13.5TB with no issues. Am I a ticking time bomb?
 

nayr

IPCT Contributor
Joined
Jul 16, 2014
Messages
9,326
Reaction score
5,325
Location
Denver, CO
Yes, you are; your 'no problems' are just short-lived.. the day you have to rebuild that array is the day the problems come.. if every single-disk failure means starting a new array and restoring from backup anyway, why not just stripe them all from the beginning and actually get some decent performance, instead of shitty speeds and no real redundancy?

And yes, there's no guarantee.. it's statistics; you could have a drive with twice the spec'd error rate, and then what?
 

Rockford622

Getting the hang of it
Joined
Feb 19, 2016
Messages
188
Reaction score
33
I really don't want the entire system to crap itself when a single disk fails, as it would in RAID 0.

Speed? Don't care...it's a NAS(torage), not a local boot drive.

When you said above:
4TB x 4 in RAID 5 = a 12TB array

You still need to treat the disks individually in terms of URE probability. It's not a cumulative effect of many large drives added together in a single large volume. Just because the volume is 12TB does not mean you are virtually guaranteed a URE. You are still just dealing with individual 4TB drives.

Unless I misunderstand something here.
 

nayr

IPCT Contributor
Joined
Jul 16, 2014
Messages
9,326
Reaction score
5,325
Location
Denver, CO
It's cumulative; you only need 1 URE on any disk to fail the rebuild.. so if the rebuild has to read 12TB worth of disks, you have enough sectors that a URE is statistically expected.. and even if the odds are somehow skewed in your favor, what is it.. worse than 50/50? That's still pure shit..

Your end volume size is irrelevant.. this happens at a lower level than the volume.

You seem to have no experience recovering/rebuilding/working with a RAID. The last time I had a 10TB RAID failure (RAID 6), the controller said it would take 3 weeks to rebuild.. so you're looking at 3 weeks of degraded operation, with a likely failure to rebuild anyway.. or you just abort, create a new array, and restore from backup in a day or so.

Your entire system is going to crap itself on a single disk failure in RAID 5 anyway, since the rebuild will likely fail.. so again, what's the problem with RAID 0?
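
To make the per-disk vs. cumulative point concrete (same spec-sheet assumptions as the sketch above):

```python
# Per-disk vs. cumulative URE odds for a degraded 4 x 4TB RAID 5:
# the rebuild must read every bit of all 3 surviving disks, and the
# per-disk success probabilities multiply.

URE_RATE = 1e-14          # errors per bit read (spec-sheet figure)
BITS_PER_DISK = 4e12 * 8  # one 4TB drive

p_disk_ok = (1 - URE_RATE) ** BITS_PER_DISK  # one disk reads clean: ~73%
p_rebuild_ok = p_disk_ok ** 3                # all 3 survivors clean: ~38%

print(f"one 4TB disk reads clean:   {p_disk_ok:.1%}")
print(f"all 3 survivors read clean: {p_rebuild_ok:.1%}")
# Each disk individually looks fine, but the rebuild only succeeds if
# *every* surviving disk reads clean -- the exposure scales with total
# bits read, not with the size of any one disk.
```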
 

Rockford622

Getting the hang of it
Joined
Feb 19, 2016
Messages
188
Reaction score
33
My only experience rebuilding a RAID is with my Synology NAS; it took around 12 hours per 5TB drive, the entire system was "normal" at that point, and it was usable even before it got back to normal. Can't say that about a disk failure in RAID 0.

The entire system will continue to run fine in RAID 5 if one disk fails.

For a home user with an adequate backup, I think RAID 5 is fine. Just my opinion. I didn't spend almost $800 on 20TB worth of drives to end up with only ~9TB of usable space.
 