tekio

RAID1 or RAID6?


Just wanted to get some feedback on this:

 

I have implemented a fault-tolerant / high-availability solution for vSphere. Two servers running critical VMs are RAID 6, one primary and the other secondary, on pretty much the same physical hardware.

 

My implementation will use a Linux share to store snapshots, replication data, onsite backup data, and anything else needed for replication/backups. The entire share will be backed up to the cloud for disaster recovery.

 

I want Linux to boot off a standard SATA drive, with the entire drive holding the Linux filesystem. I'm getting a RAID card and two 4TB HDs for the FT/HA data share. This will be a standard desktop-class system (16GB RAM, dual-core 3.6GHz Pentium).

 

I'm thinking about having the FT/HA share mounted on a RAID 1 array. I was thinking RAID 1 because hot-swappable is not really vital here (there are three other copies of the data), and if the system fails (operating system, CPU, RAM, MoBo, etc.) I could easily swap the RAID card and HDs into another box, one that I could buy from any local computer store.

 

Just worried about being able to get to the data in case of a hardware failure (since it's not server-class CPU, RAM, etc.). Is RAID 1 a good idea in this scenario?

 

Thoughts???


Go with 1. Any RAIDs I've ever set up have been level 1, mainly due to their simplicity/ease of implementation and for the reasons you gave.

You should be able to get the data off either of the drives, since RAID 1 is a simple mirroring scheme. I used to prep replacement disks with FDISK/FORMAT while they were attached as a separate USB drive, then boot into Winblows and copy the filesystems over with XCOPY32 before sticking the new one into the set.

Fuck it, that was a long time ago. Anyway, when in doubt stick with level 1. Occam's razor: the simplest implementation is usually the most appropriate one.


I think it depends more on the OS/RAID card whether RAID 1 is hot-swappable or not. Since both drives contain all the data, in theory you could hot-swap as long as there was always at least one drive in the array, regardless of whether you'd swapped the second one in or out.

 

The main thing steering me away from the higher-end RAID levels is that the data isn't just a clone: each drive holds part of the data plus part of another drive's data, so you can't directly read a single disk on its own. If there were a massive system failure, say water spilled all over it and fried multiple parts of the box, and one of the drives was fine, with RAID 1 you could stick that drive in any machine that accepts it and it would have all the data. With the striped levels, files may be split between that drive and the others, so some files could be corrupted or simply not all there on whatever single drive survives; even the data that is on it might not be usable. With RAID 1 there is never a possibility of that.
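For what it's worth, if the share ends up on Linux software RAID (mdadm), pulling the data off a lone RAID 1 member on another box is roughly this; the device and mount names here are made up:

    # Assemble the array in degraded mode from the single surviving mirror member.
    mdadm --assemble --run /dev/md0 /dev/sdb1

    # Mount it read-only to copy the data off safely.
    mkdir -p /mnt/recovery
    mount -o ro /dev/md0 /mnt/recovery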


Yeah. I'm just thinking of it this way: if anything on the Linux box goes out, I'd just need to grab any workstation, download Linux, and swap in at least one drive and the RAID card.

 

Maybe even proactively replace each RAID 1 HD every two years or so.
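If this ends up on Linux software RAID (mdadm), that proactive swap could even be done live without ever dropping to a single copy; a rough sketch, with hypothetical device names:

    # Add the new drive as a spare, then copy onto it while the old member stays active.
    mdadm /dev/md0 --add /dev/sdc1
    mdadm /dev/md0 --replace /dev/sdb1 --with /dev/sdc1

    # Watch the copy, then pull the retired member once it finishes.
    cat /proc/mdstat
    mdadm /dev/md0 --remove /dev/sdb1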

 

 

EDIT: I just see RAID 5 or RAID 6 more as keeping working systems up, while RAID 1 is better for preserving the data itself, in theory. Well... unless both HDs go out. :-P

Edited by tekio

Not sure; again, it probably depends on the OS/RAID card, but I don't see a reason why you couldn't repair a RAID 1 in a live setting, other than severe performance degradation while it's rebuilding: reads run at roughly half speed the whole time, since the surviving drive's reads are what fill up the other drive. With RAID 5 or 6 the performance would still degrade, but not as much or for as long, since the data is spread among all of the drives.

 

If you were really worried about it, there is always the option of more drives. You could have a backup script that syncs the RAID 1 to a third drive once a week or so, and leave that drive unused/powered down otherwise; in the event of a failure you'd only have to redo that week's file changes and you'd have a working pair again. Or the obvious route: a triple mirror used at all times, in which case all three drives would have to fail before you lose anything, whereas with RAID 5 just two failures and you would probably lose a significant portion of the pool, depending on how the striping is set up.
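A rough sketch of that weekly copy, assuming the mirror is mounted at /srv/ftha and the third drive carries the filesystem label coldspare (both names are made up):

    #!/bin/sh
    # weekly-mirror-backup.sh: mount the normally idle third drive, sync the
    # RAID 1 share onto it, then unmount so it sits unused the rest of the week.
    set -e
    mount /dev/disk/by-label/coldspare /mnt/coldspare
    rsync -a --delete /srv/ftha/ /mnt/coldspare/
    umount /mnt/coldspare

Drop it into cron (a weekly entry in /etc/cron.d, or a copy in /etc/cron.weekly) and it takes care of itself.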


I just went through this decision process myself with my new VM host. Four drive bays, four drives... how to RAID them?

 

FWIW, I ended up with:

 

- RAID1 /boot volume, with four partitions mirrored (should be ultra redundant, lol)

- RAID6 root volume, across four partitions

- Built-in swap partition striping

- Boot code installed on all four drives

 

I chose software RAID since I don't want to be stuck having to find the exact same controller if this one dies. Plug the drives into any system with enough SATA ports and go. The RAID1 /boot partition lets me boot directly without any special bootloader hacks.
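In case it's useful, the mdadm commands for that sort of layout look roughly like this; the device names, partition numbers, and metadata version are assumptions rather than exactly what I ran:

    # /boot: four-way RAID 1 mirror. Metadata 1.0 keeps the md superblock at the
    # end of the partition, so the bootloader sees a plain filesystem at the start.
    mdadm --create /dev/md0 --level=1 --raid-devices=4 --metadata=1.0 /dev/sd[abcd]1

    # Root: RAID 6 across the four big partitions (survives any two drive failures).
    mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[abcd]2

    # Boot code on every drive, so the box still boots with any single drive missing.
    for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do grub-install "$d"; done

    # Swap striping needs no md at all: give each swap partition the same priority
    # in /etc/fstab (e.g. "pri=1") and the kernel interleaves across them.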

 

Everything I've read leads me to believe that RAID5 is not practical for large drives: a rebuild has to read every remaining disk end to end, so the chance of hitting an unrecoverable read error before it finishes gets uncomfortably high. I do still run RAIDZ with three disks on FreeBSD, because that system isn't running anything critical and can be taken offline if it needs to do a rebuild.


I like the software RAID1 idea. EDIT: forgot it's gonna be on Linux. :-/

 

I will look at setting up software RAID 1 under Linux.
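A starting point for a plain two-disk mdadm mirror, with the device names, mount point, and config path as assumptions (they vary by distro):

    # Mirror the two 4TB drives and put a filesystem on the array.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext4 /dev/md0
    mkdir -p /srv/ftha
    mount /dev/md0 /srv/ftha

    # Record the array so it reassembles on boot (path is /etc/mdadm.conf on some distros).
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    # If the share is exported to the vSphere hosts over NFS, an /etc/exports line
    # along these lines would do (the subnet is hypothetical):
    # /srv/ftha  192.168.1.0/24(rw,sync,no_root_squash)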

 

 

 

It's FT/HA data for vSphere. Not sure I/O will be too much of an issue, but I will look into it. Thanks again Dins and Glitch...

Edited by tekio

And scratchy. :-P


Thanks for the help, everyone who replied. Attached is a basic diagram of how I set this up (10_GBPS_Diagram001.png). Just got it working today. Sorry, my diagramming skills suck, but only I need to look at them. Haha.


I think RAID 6 is better because it gives you more usable capacity the more disks you add; the rest of the space goes to two drives' worth of parity rather than a full mirror. If a RAID 6 array comprises four disks, only 50% of that space is usable, but the usable fraction grows as you add more drives.
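A quick worked example, assuming 4TB drives:

    RAID 6 usable capacity = (N - 2) x drive size
      4 x 4TB: (4 - 2) x 4TB =  8TB usable (50%)
      6 x 4TB: (6 - 2) x 4TB = 16TB usable (67%)
    RAID 1 mirror: 1 x drive size, no matter how many copies
      2 x 4TB: 4TB usable (50%), and adding drives only adds more copies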


We're using RAID 6 on servers and I think it takes too much of a performance hit... writing is sloooooow, since every small write has to update two parity blocks on top of the data. Maybe I'm exaggerating, but not by much. I am thinking about doing away with RAID 6 at work... Windows backups from disk are just too slow. However, Veeam works fine since it copies at the VM layer and isn't as abstracted through the VM and RAID array... We have a pretty snazzy RAID card, too. :-(
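A quick way to sanity-check whether the array itself is the bottleneck; the test path is hypothetical, and direct I/O keeps the page cache from flattering the number:

    # Sequential write test that bypasses the page cache.
    dd if=/dev/zero of=/srv/backup/ddtest bs=1M count=4096 oflag=direct conv=fsync
    rm /srv/backup/ddtest

    # Per-device utilization and wait times while a backup runs (from sysstat).
    iostat -x 1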

