Build Your Own Arcade Controls Forum
Main => Everything Else => Topic started by: squirrellydw on January 31, 2008, 02:20:58 pm
-
I am going to be building a raid 5 server soon, my first one, and I have a few questions.
I plan on starting with 3 drives, then adding 2 more for a total of 5. I will probably also back everything up occasionally to an external drive. I am going to use SATA2 drives for the raid. I plan on using an enlight server chassis http://us.enlightcorp.com/Product/Product_Detail.aspx?ID=189 and
http://us.enlightcorp.com/Product/Product_Detail.aspx?ID=201
I will also be using a raid card, maybe the 3ware someone recommended on here. Is there anything else you can recommend?
Can I/should I use raid 6 and what would be involved with that?
-
Couple of things to think about before you start:
If you're not going to fully populate the array when you first build it (ie only put 3 drives out of a possible 8) and plan to expand, make sure you buy a RAID card that supports "online expansion". I believe the 3ware 9000 series of cards supports this, but I'm not sure about the 8000 series (older SATA). I know the 7000 series (PATA) doesn't support this (it's what I use).
I like that you're thinking of using hot-swap drive caddies - VERY good way to go. Your chassis choice is fine as long as you're not looking to go crazy with the number of drives. Make sure that you have good air cooling for the drive blocks. THIS (http://www.newegg.com/Product/Product.aspx?Item=N82E16811119092&Tpk=RC-810-SSN1%2bStacker%2b810) is a popular case for building non-rack mount storage arrays.
With only 3-5 drives, power shouldn't be too much of a problem, just make sure you buy a good quality PSU.
I'm not all that familiar with RAID6, but the main thing it offers over RAID5 is an additional layer of fault tolerance. This is more useful in arrays with larger numbers of drives, as it will take the equivalent of a full drive's storage (in addition to the 1 used by RAID5) to achieve this. So for a 3 drive array, 2 of the drives would be parity data - not that useful. Unless you're planning to eclipse 12 drives or so, RAID5 will be fine.
I'm a fan of 3ware, but Areca makes nice cards that have been getting really good reviews by DIY data center builders.
-
Thanks. If I go raid 6 and have 5 500gig drives, I would really only have 1.5TB of space, correct? What Windows OS do you suggest?
I was also looking at this card since it does raid 6
http://www.newegg.com/Product/Product.aspx?Item=N82E16816103010
-
Correct on the 5 drives -> 1.5Tb thing. For RAID5 it's (N-1)*(drive capacity) and for RAID6 it's (N-2)*(drive capacity).
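If you want to sanity check the numbers before buying drives, here's a quick back-of-the-envelope sketch (plain Python; the drive count and the ~0.465TB formatted size are just example figures):

    # Rough usable-capacity estimate for RAID5/RAID6 (example numbers only)
    def usable_tb(num_drives, drive_tb, level):
        parity = {"raid5": 1, "raid6": 2}[level]  # drives' worth of space lost to parity
        if num_drives <= parity:
            raise ValueError("need more drives than parity disks")
        return (num_drives - parity) * drive_tb

    # 5 x "500GB" drives, roughly 0.465TB each once formatted:
    print(round(usable_tb(5, 0.465, "raid5"), 2))  # ~1.86
    print(round(usable_tb(5, 0.465, "raid6"), 2))  # ~1.4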
I don't know too much about the recent Adaptec cards, but that one looks good; 8 port SATA, online expansion, etc. Not a bad price either at less than $500.
Your OS choice is entirely going to be up to what you plan for the machine to do beyond file-serving. I run linux on my 2 file server raid boxes, but win2k/2k3 server would be fine, or heck, the new windows home server might be interesting to check out. :dunno
-
I plan on using it mainly for music, video and photos. I want everything backed up in case a drive dies, and I want to be able to access it from 2 to 3 computers.
-
For that much cash, why not simply buy a commercial unit?
I have an Infrant ReadyNAS. It does drive additions and you can even replace the whole array (to increase the size) one disk at a time. I used custom built solutions for years, but I got fed up with the hassle.
I filled mine with 4 500GB disks. These disks are actually only 465GB so in total I get something like 1.3TB.
-
I'd be tempted to buy a Drobo unit
http://www.drobo.com/
-
Is Drobo any good? What if it dies, how do you get your data?
-
I'd be tempted to buy a Drobo unit
http://www.drobo.com/
Drobo is only USB though isn't it?
Besides I'd be scared to hand my data to a proprietary system too.
-
There's a new DroboShare addon that essentially makes it a NAS device. I've never used one of these units, and your questions regarding recovery are good ones. Same thing with a proprietary RAID card though; what happens if your raid card dies?
I'm not saying I would definitely buy one, but when considering cost, it's worth looking into. I think a member here has a drobo unit...can't remember who though.
-
I think Goz has a Drobo, but I could be wrong.
Are there any nice commercial units out there that will handle 5 or 8 drives and RAID5? That's my main sticking point as I've been contemplating a server build for a while now, but I want a 5 drive array and most home NAS devices are 4 bay devices, and it seems like a good deal of them don't do RAID5.
I may have to just bite the bullet like you are and buy a dedicated card though.
-
I think Goz has a Drobo, but I could be wrong.
I think you're right, it's Goz
Are there any nice commercial units out there that will handle 5 or 8 drives and RAID5?
None that I know of that are affordable, but I haven't looked in several months.
-
I'm keeping an eye on THIS (http://www.avsforum.com/avs-vb/showthread.php?p=12993466#post12993466) thread over at AVS. A guy is building a 24Tb RAID6 box....ok, it won't be 24Tb after his parity drives, but JEEZ!!
Makes my 5Tb over 2 servers in RAID5 look puny.
edit: fixed messed up link
-
I think I can build mine with 5 SATA drives for right around $1000 and most of that is the drives and the raid card
-
Thought of another question. Do I need a separate HD for the OS? So if I start out with 3 drives in a raid 5 I need another one for the OS correct? Then all the data like my music, movies and photos are kept on the raid array right?
-
If you're using a dedicated RAID card there's not much point in using a separate OS disk.
Do a search for RAID in this forum and you'll find a lot of other good advice. It seems to come up a few times a year.
-
I prefer a separate OS drive but it's not necessary. The array is more "portable" if you dedicate it to data only and don't have the OS on it. Let's say the motherboard dies on your array; you can pull the card and drives and put them in another system and be back up and running in no time...
-
I prefer a separate OS drive but it's not necessary. The array is more "portable" if you dedicate it to data only and don't have the OS on it. Let's say the motherboard dies on your array; you can pull the card and drives and put them in another system and be back up and running in no time...
If your OS is on the array (ie no separate OS disk) and the new computer has a separate OS disk then it should still work fine right?
-
Sure, but it's cleaner to have it separate. Also, consider if your boot sector gets corrupted and you need to manipulate the partitions; if the boot drive is separate, you can change it at will with no worry about compromising your data array.
Just my approach, either way is valid
-
I set it up once with a separate system drive (using soft raid), but I always kept worrying about the system drive becoming corrupt.
With soft raid I would use a separate OS drive (you probably have to have at least a separate partition then anyway) and with hardware raid I'd simply install the OS on the array.
A big problem I still have is with backups. How do you backup 1.3TB? Well I have it only filled to 900GB or so, but still. My ReadyNAS does automatic backups to an external drive, but they aren't big enough. I could set up a second raid unit, but that's kinda costly.
-
I have set up three different file servers (two Raid 1 and one Raid 5) and in all of them I had a different drive for the OS. Keeping the OS and Data separate makes it a lot simpler for maintenance and data migration. You could keep an image of the OS on the raid array (or just on a DVD) and move that down. :)
-
thanks, I like the separate OS idea
-
I have set up three different file servers (two Raid 1 and one Raid 5) and in all of them I had a different drive for the OS. Keeping the OS and Data separate makes it a lot simpler for maintenance and data migration. You could keep an image of the OS on the raid array (or just on a DVD) and move that down. :)
You don't need a separate drive to separate OS and data.
-
A big problem I still have is with backups. How do you backup 1.3TB?
try backing up 5Tb :dunno
The redundancy of the RAID array is some insurance, but nothing can stave off catastrophic failure.
As for the OS on the array or on another drive, it's really a matter of personal preference. If you're looking for high availability and low downtime, you'd keep your OS on a separate drive. If the system crashes, you can easily move the entire array to another machine. If the OS drive crashes, you can swap in a new drive and be back up and running (assuming you have a clean mirror copy of the OS sitting around like me ;) ).
nothin wrong with putting the OS on the array, just not how I would do it.
-
I went with RAID 5 so I can stop worrying about backups. It seems silly to use single external drives to backup fault-tolerant arrays. If you don't feel comfortable with RAID 5, spend a little more for RAID 6. If you're really worried about more than two drives failing at once, you can add as many redundant drives as you like to the array and not have to worry about filling up slow USB drives.
If you're worried about the RAID card dying, buy a spare. Using RAID for safety is all about spending the money to put together a system with a level of risk you can be comfortable with.
I've been running the same 8-drive array for over two years now and the only problem I've had was a failed OS drive. After building a new one (the soft array was fine and synced up with no data loss) I cloned it so that won't be a problem again. The only high quality part I'm using is the power supply. The array drives are all refurbs, the motherboard, ram, case, etc were all cheap stuff, and the OS drives have all been xbox orphans. I'm sure I'll replace the whole array with bigger drives long before I ever lose one.
-
I've seen a RAID 5 array refuse to rebuild itself after a drive failure and I've known at least two or three other people (professionals) who have personally had the same experience. I've seen them rebuild just fine a dozen times, but . . . they're not fool proof. Nothing is. I don't know how many times I'd check the integrity of tape backups and find corrupt ones.
With that said, I don't have any formal backup routine at home. My OS (along with the My Documents folders, etc.) is on a RAID 1 mirror, and all my media is on a cheap, 1.5 TB (effective) Buffalo Terastation. I don't really trust the Terastation too terribly much. It seems a little bit flaky. If the power goes out and the UPS allows its power to surge for a split second, it will spend the next 12 hours "resynching" the drives, during which time the performance is awful and streaming media from it frequently hiccups. PITA. I don't trust it to keep all my stuff 100% safe, but it brings my risk down to levels that I'm reasonably comfortable with, and I don't have to lift a finger for it so I'm pretty happy. Anyway, 90% of the data on it is pirated so I technically have no right to it anyway. Easy come, easy go.
-
I've seen a RAID 5 array refuse to rebuild itself after a drive failure and I've known at least two or three other people (professionals) who have personally had the same experience. I've seen them rebuild just fine a dozen times, but . . . they're not fool proof. Nothing is. I don't know how many times I'd check the integrity of tape backups and find corrupt ones.
No strategy is perfect; a few years ago, I had a software raid5 array completely go. 2 drives died simultaneously and that was that. Last year, I had a hardware raid5 array suffer from some serious data corruption and loss. One drive was getting kicked from the array for ata timeouts (benign problem, usually power related) but simultaneously I had a REAL hardware failure on a different drive but because the array was degraded, the controller was more conservative about kicking the real failed drive. I couldn't rebuild the array because the bad drive would lock up before I could get the "good but kicked" drive back in the mix. Because the array was still functional as long as the "bad" drive didn't freeze up, I was able to transfer about 80% of the data off.
My next system will probably be a larger array either in a RAID6 or RAID5 + hotspare setup....
-
OK, I understand raid 5 and 6 pretty well but what does the hotspare do? I will probably use the adaptec card with a raid 6.
-
A big problem I still have is with backups. How do you backup 1.3TB?
As for the OS on the array or on another drive, it's really a matter of personal preference. If you're looking for high availability and low downtime, you'd keep your OS on a separate drive. If the system crashes, you can easily move the entire array to another machine. If the OS drive crashes, you can swap in a new drive and be back up and running (assuming you have a clean mirror copy of the OS sitting around like me ;) ).
nothin wrong with putting the OS on the array, just not how I would do it.
Not trying to be argumentative here, but I really do not see why it makes it easier to move the array to another computer.
Why do you think it's easier or requires less maintenance to have the OS on another drive?
When I had a hardware RAID card I put the OS on a partition of the array. If I want to move the array to another computer it would have the OS on a drive already, so I simply don't mount the OS partition of the array. A separate partition is practically a separate drive, yet a partition is safeguarded by the RAID and a single drive is not. I don't see any advantage in an extra OS drive, what am I missing?
When you add an extra drive for the OS you actually add another point of failure. Even worse, you add another non-redundant point of failure.
-
OK, I understand raid 5 and 6 pretty well but what does the hotspare do? I will probably use the adaptec card with a raid 6.
A hotspare drive is just a drive "in waiting" in case a failure happens. If a drive fails in a RAID array, rather than waiting for user intervention to swap out a bad drive, the hot spare is swapped in and the array is rebuilt. It's an additional (-1) in the whole scheme of things, and adds an additional level of "safety".
Not trying to be argumentative here, but I really do not see why it makes it easier to move the array to another computer.
Why do you think it's easier or requires less maintenance to have the OS on another drive?
Seriously, if you want to put your OS on the array, then do it. If you were building a scalable data center you wouldn't. It's all about abstracting the "system" from the "storage". I have 3 RAID5 arrays on 2 systems. Depending on what computer has the RAID cards and drives installed, those arrays could live on any computer. The OS and the storage arrays are independent.
It's not that I don't consider the OS worthwhile to make redundant, it's that I consider the OS to be a separate entity. In fact I keep a "bare metal" backup of 3 systems: one each for my storage machines and one for my application server....
-
I went with RAID 5 so I can stop worrying about backups. It seems silly to use single external drives to backup fault-tolerant arrays. If you don't feel comfortable with RAID 5, spend a little more for RAID 6. If you're really worried about more than two drives failing at once, you can add as many redundant drives as you like to the array and not have to worry about filling up slow USB drives.
There are several reasons for backups. I move the external drives to a different location. The RAID system will be lost in case of theft or fire. RAID does not protect you there.
Also I have had major disruptions in basically every RAID array I have ever owned. I've had simultaneous drive failures twice. Once the whole array was lost, and another time I could repair one drive (I put it in another computer and repaired it with a Seagate toolkit), but it took me a while before I was brave enough to try that. I was afraid it might break the array. I've also had OS failures after Linux upgrades rendering the array inoperative. With the extra backup at least I was able to still use the files.
So the external backup is to protect against physical damage and total array failure. You should never rely on RAID alone.
BTW an important thing is that you get a warning when something starts to fail. That's what I have always been missing in my own built RAID solutions. The first time I had a double drive failure, I simply didn't know one drive was broken until finally two drives failed and the thing stopped working.
Although even on the commercial solution I had a double drive failure. I had set up e-mail warnings on my ReadyNAS, but after I had changed my network settings the raid array was unable to send me an e-mail about the drive failures. Still it has saved me from possibly damaging situations several times already.
Of course you can have a daily check of the logfiles, but who does that? If you are even able to see the errors in a logfile. I've had 2 hardware RAID solutions that did not log errors. One needed an RS232 tool to read the status of the array and another relied on blinking LEDs (on the board ... inside the PC!!!)
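For a home-built box, even a dumb little cron script goes a long way toward that "tell me before two drives are dead" goal. Just a sketch of the idea — it assumes a Linux machine with smartmontools installed, disks the OS can see directly (a hardware RAID card usually hides its member disks and has its own tools), and a mail server on localhost; the drive list and addresses are made up:

    # Nightly drive health check: run "smartctl -H" on each disk and email if anything fails.
    import subprocess, smtplib
    from email.message import EmailMessage

    DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]   # whatever your array members are

    def unhealthy(drive):
        out = subprocess.run(["smartctl", "-H", drive],
                             capture_output=True, text=True).stdout
        return "PASSED" not in out   # smartctl prints PASSED when the overall check is OK

    bad = [d for d in DRIVES if unhealthy(d)]
    if bad:
        msg = EmailMessage()
        msg["Subject"] = "RAID box: drive trouble on " + ", ".join(bad)
        msg["From"] = "server@home.lan"
        msg["To"] = "me@example.com"
        msg.set_content("smartctl reported a problem - check the array before it gets worse.")
        smtplib.SMTP("localhost").send_message(msg)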
-
It's all about abstracting the "system" from the "storage". I have 3 RAID5 arrays on 2 systems. Depending on what computer has the RAID cards and drives installed, those arrays could live on any computer. The OS and the storage arrays are independent.
I realize that you need to keep the OS and the data separate, but I don't think a separate drive is logically more separated than a separate partition. It's just one digit difference in the mount table.
I've actually had my OS drive fail. That sure took the array down. I guess it was my own fault, because I thought I could get away with using some old drive. But still, any drive can fail.
Actually the ReadyNAS installs the OS on one of the drives too. It's mostly soft RAID after all. Although if that drive fails it installs it from flash again onto another drive. So there shouldn't be downtime if the OS drive fails, but of course if the OS drive fails you never know for sure everything keeps on working.
-
OK, I understand raid 5 and 6 pretty well but what does the hotspare do? I will probably use the adaptec card with a raid 6.
A hotspare drive is just a drive "in waiting" in case a failure happens. If a drive fails in a RAID array, rather than waiting for user intervention to swap out a bad drive, the hot spare is swapped in and the array is rebuilt. It's an additional (-1) in the whole scheme of things, and adds an additional level of "safety".
Thanks, I like that idea also. I probably won't start building this for another 3 months, but at least I know more now and can really look at and understand what I need. I want my data to be as safe as possible.
-
BTW an important thing is that you get a warning when something starts to fail. That's what I have always been missing in my own built RAID solutions. The first time I had a double drive failure, I simply didn't know one drive was broken until finally two drives failed and the thing stopped working.
Although even on the commercial solution I had a double drive failure. I had set up e-mail warnings on my ReadyNAS, but after I had changed my network settings the raid array was unable to send me an e-mail about the drive failures. Still it has saved me from possibly damaging situations several times already.
Of course you can have a daily check of the logfiles, but who does that? If you are even able to see the errors in a logfile. I've had 2 hardware RAID solutions that did not log errors. One needed an RS232 tool to read the status of the array and another relied on blinking LEDs (on the board ... inside the PC!!!)
My 3ware 9550sx will email me with any problem that occurs.
-
OK, I understand raid 5 and 6 pretty well but what does the hotspare do? I will probably use the adaptec card with a raid 6.
A hotspare drive is just a drive "in waiting" in case a failure happens. If a drive fails in a RAID array, rather than waiting for user intervention to swap out a bad drive, the hot spare is swapped in and the array is rebuilt. It's an additional (-1) in the whole scheme of things, and adds an additional level of "safety".
Thanks, I like that idea also. I probably won't start building this for another 3 months, but at least I know more now and can really look at and understand what I need. I want my data to be as safe as possible.
Wouldn't it make more sense to use RAID 6 then?
RAID 6 would use that extra disk for extra protection of the array instead of it sitting idle.
-
I assume you can use a hotspare with RAID 5 or 6. In any case, whether you're using RAID 5 or 6, you have to replace a drive that goes bad. The hotspare means that you've got a replacement drive ready so when one drive fails, the replacement drive automatically comes online and the array is rebuilt, without any intervention required. That's what I'm guessing from what I've read here, anyway. I don't actually know anything. ;D
-
I assume you can use a hotspare with RAID 5 or 6. In any case, whether you're using RAID 5 or 6, you have to replace a drive that goes bad. The hotspare means that you've got a replacement drive ready so when one drive fails, the replacement drive automatically comes online and the array is rebuilt, without any intervention required. That's what I'm guessing from what I've read here, anyway. I don't actually know anything. ;D
Indeed, but when you are prepared to put an extra drive in there anyway, RAID 6 would use that extra drive so you have two drives for parity. That means 2 drives can fail and the array is not vulnerable while it's rebuilding after a single drive failure.
Guess it's not that different when the hot spare comes in quick enough, and RAID 6 would probably be slower at writing. I'd say RAID 6 is safer though. Even a single extra bad block during or before rebuilding can destroy the entire array with RAID 5.
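To put a rough number on that worry: consumer SATA drives are typically specced at about 1 unrecoverable read error per 10^14 bits read, and a RAID 5 rebuild has to read every surviving drive end to end. A back-of-the-envelope sketch (assuming the datasheet error rate holds and errors are independent, which real drives only roughly are):

    # Rough odds of hitting an unrecoverable read error while rebuilding a degraded RAID5.
    import math

    URE_RATE  = 1e-14     # typical consumer SATA spec: ~1 error per 1e14 bits read
    DRIVE_TB  = 0.5       # 500GB drives
    SURVIVORS = 4         # drives read in full during the rebuild of a 5-drive array

    bits_read = SURVIVORS * DRIVE_TB * 1e12 * 8      # TB -> bytes -> bits
    p_hit = 1 - math.exp(-URE_RATE * bits_read)      # Poisson approximation
    print(f"~{p_hit:.0%} chance of at least one read error during the rebuild")

With those numbers it works out to roughly a 1-in-7 chance per rebuild, which is exactly the scenario where RAID 6's second parity saves you.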
-
My 3ware 9550sx will email me with any problem that occurs.
My 7000 series cards do as well.
I'm not all that well versed on the advantage of RAID5 + hotspare vs RAID6; my cards don't support RAID6. I don't even use a hotspare right now, I keep a cold spare ready to swap in.
-
My 3ware 9550sx will email me with any problem that occurs.
My 7000 series cards do as well.
I'm not all that well versed on the advantage of RAID5 + hotspare vs RAID6; my cards don't support RAID6. I don't even use a hotspare right now, I keep a cold spare ready to swap in.
I thought that squirrellydw said that his RAID card supported RAID 6.
I have never used RAID 6 either, but with RAID 5 I'm always one step away from a heart attack when the array is rebuilding. If a bad block is found during rebuilding then you potentially could have a double drive failure. RAID 6 shouldn't have a problem with that.
I guess it depends on how quickly it detects a bad block. I always assumed it would only see a block failure when it accessed a specific block. That would mean a bad block could exist on a drive for a long time before it is detected. It might only be while rebuilding the array after a drive failure that it stumbles on a previously undetected bad block on one of the other drives. I suspect that this happened at least once to me. The system reported a single drive failure and when it was rebuilding it suddenly reported a double failure.
I actually mostly use hardware mirroring. I think I have had a dozen or so RAID 5 configurations on our web and file servers over the last 10 years and I got so fed up with all the RAID 5 hassle (backup problems and recovery problems) that I mostly switched to mirroring now.
I have mirroring (double hotswap unit) on my webservers and workstation (on my notebook even) and only use raid 5 on the file server. Mirroring is so much easier to recover and does not depend on any hardware or specific configurations. Besides, I just swap out a disk and I have a backup. I prefer using that in situations where I don't need a lot of storage.
BTW another issue with backups is protection against deletes and/or accidental changes (ie stupidity or viruses). The ReadyNAS has a nice backup method where it keeps a previous version of all files. On my webservers I use rdiff-backup to keep all the different versions for over a month or so.
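In case anyone wants to copy that setup: rdiff-backup keeps the newest copy as a plain mirror plus reverse diffs for history, so one nightly job covers both "a drive died" and "I deleted it by accident". A minimal sketch only — the paths, the 5-week retention and the assumption that rdiff-backup is installed are all mine:

    # Nightly versioned backup of the share, keeping about 5 weeks of history.
    import subprocess

    SRC  = "/mnt/array/share"        # data living on the RAID array
    DEST = "/mnt/external/backup"    # the external/backup drive

    subprocess.run(["rdiff-backup", SRC, DEST], check=True)
    subprocess.run(["rdiff-backup", "--remove-older-than", "5W", DEST], check=True)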
-
Yes, I am planning on raid 6 plus 1 hotspare
-
Yes, I am planning on raid 6 plus 1 hotspare
Oh wow. That's some serious overhead then though. Then you have 3 disks that don't add to the total storage space.
-
If you're going to have three drives that don't add to the actual disk space why not go with 3 Raid 1 arrays. That way if you have a double drive failure you can still have the option of sending your drive out to a recovery center to get it back. I'm biased though. I use Raid 1 for myself exclusively. So if you only have a hammer everything looks like a nail :)
As a late addition to a previous question: there are a couple of good reasons to keep your OS and data on different drives, that's why the enterprise level places do it :) For instance, our company recently wanted to do a large OS update (win2k server to win2k3 server). They took out the old OS drive and put in a new OS drive with the OS already installed. If the new OS didn't take, they simply had to put the old OS drive back in.
Another reason to separate the data and OS layers is to keep the data 'clean'. If you have drives with data and program files then the data drives have a lot more non-data files in them... cluttering them up.
Finally there's speed to think of. Your OS uses one drive (usually. I've seen some people Raid 1 their OS (separate from their data) but I think that's overkill) and the data read is bottlenecked by the drive. If you're performing an OS task and trying to get data off the drive then you're impacting the data retrieval performance.
-
I don't plan on doing raid 6 plus 1 right away, but will add to it in the future
-
There are a couple of good reasons to keep your OS and data on different drives, that's why the enterprise level places do it :)
...
Your OS uses one drive (usually. I've seen some people Raid 1 their OS (separate from their data) but I think that's overkill)
When you use a single disk for the OS (as opposed to a mirrored OS array) you cannot really claim "it's what enterprise level places do" anymore, because that's not what they use :P It would create a single point of failure and they would not do that.
The choice is between a single OS drive or an OS partition protected by RAID. In that case I'm pretty sure any enterprise level place would opt for the OS as a partition on the RAID array. Besides, this isn't an enterprise environment, but a home environment. Although with RAID 6 and a hot spare ...
But indeed, if you want to be able to install a new OS by swapping disks, or if you are worried about performance of the server (which I personally wouldn't be in the case of a file server, but still), then keeping the OS on a separate disk would be a better idea. I don't quite get the "clutter" point.
-
Enterprise level places most certainly DO keep their data and OS on different drives. I didn't say enterprise places keep their OS on a single drive.
This may not be an enterprise solution, but a home solution can still emulate a professional one. Our 'generic' file servers at work are 6 disk solutions: 4 data drives all in Raid 1 and 2 OS drives in Raid 1. These are in a clustered environment as well... This is extreme for the home. What isn't extreme is a pared down version of this: 2 data drives in Raid 1 and a single OS drive (with images on a disk or on the other file server).
File retrieval performance is also a pretty large worry for a file server in general. If you're working with small files and your primary concern is data protection then it becomes less of one, but if you're editing movies or opening .net solutions... it becomes a larger one.
-
I will be using it to access music, movies, and photos. I am using it as a media server.
-
Enterprise level places most certainly DO keep their data and OS on different drives. I didn't say enterprise places keep their OS on a single drive.
And I didn't say that they don't keep their OS separate. I said that they don't do it on a single drive.
-
If this is a media server (movies, large picture files, etc.) then I would separate my OS and data. If this is an MP3, JPG kinda media server then you should be able to put both the OS and data on the same drive array.
Patrickl: Since we're both saying the same thing let's not derail the thread.
-
If this is a media server (movies, large picture files, etc.) then I would separate my OS and data. If this is an MP3, JPG kinda media server then you should be able to put both the OS and data on the same drive array.
Why would that matter?
Patrickl: Since we're both saying the same thing let's not derail the thread.
We don't agree. You simply misread my post. You make a recommendation that goes against the principle of using a redundant array (ie introducing a non-redundant part), so I think it has relevance to the thread.
-
Fair enough, we don't agree. I didn't misread your post. You were wrong. I said enterprise solutions separate their OS and data drives. I'm right.
If you're streaming large files over the network (like movies, large picture files, etc.) then separating your data and OS would improve performance. If you don't then you may not have a high enough read rate and the movie will skip.
-
Fair enough, we don't agree. I didn't misread your post. You were wrong. I said enterprise solutions separate their OS and data drives. I'm right.
You missed the point again. You said that there is no need to mirror the OS drive. An enterprise solution never has an OS disk that isn't (at least) mirrored.
I only asked you to explain why you felt it that important to separate the OS and data that you would even destroy the redundancy of the system just to achieve this separation. That doesn't mean I disagree with separation. I just think that redundancy is the key in a redundant system, while you seem to think that ease of OS installation by swapping disks is what's more important.
Anyway, talking about a truly massive enterprise solution, see the Google drive failure analysis (PDF) (http://research.google.com/archive/disk_failures.pdf). I found it pretty informative. Surprising too since they state that drive temperature didn't seem to matter much.
They also say that SMART errors don't always warn ahead of disk failures/errors. Still a pretty high percentage is predicted. It's good to check the logs for those or have them e-mailed to you.
-
agree to disagree or agree, I don't care. I get the point you are both trying to make and I appreciate all your comments and suggestions. I am going to start with 3 drives in a raid 5 then go to a raid 6 then add a hot spare. The OS will be on a separate drive.
Any other suggestions??
-
I know I said it before, but it deserves re-stating:
buy a good power supply.
single most important component in a large disc array, even more so than the controller. A super expensive controller card with a crappy PSU will be less reliable than a bunch of disks in JBOD using software paired with a high quality PSU.
-
I know I said it before, but it deserves re-stating:
buy a good power supply.
single most important component in a large disc array, even more so than the controller. A super expensive controller card with a crappy PSU will be less reliable than a bunch of disks in JBOD using software paired with a high quality PSU.
Indeed. I've had at least three PSU's die on me (in workstations though). Although oddly enough those were actually quality PSU's.
I ran a HP Netserver LH3000 as file RAID server and it actually had redundant power supplies. That was a cool machine. Unfortunately it made more noise (and wind) than two vacuum cleaners too :P
Don't underestimate the failure rate of hard drives used in a 24/7 application. If you read the manufacturers specs you'd expect a drive failure rate of 1% (for enterprise quality disks) and 3% (for consumer grade disks). Unfortunately in real life studies (like the Google study I pointed to earlier) they report annual failure rates of 6% and up to 12%.
Also check the warranty. I always return a broken disk if it breaks within the warranty period. Keep the box that you received the disks in, because they often have strict rules about the packaging of returned disks. I found that I often get a bigger disk back than what I returned. In one case I had to pay an administration fee (Seagate). A few months ago I returned two disks that started showing SMART errors and I got new ones for those too.
Also think about whether you want to spin down the drives or not, and think about using enterprise disks (which are specifically designed for 24/7 RAID solutions) as opposed to consumer grade disks (which are designed for 8/5 office hour applications).
-
OK, a few more questions now.
What is JBOD
What is considered an enterprise disk
what do you mean by spin down the drives
Thanks
-
OK, a few more questions now.
What is JBOD
Not exactly what Boykster meant there, but it's 'Just a Bunch Of Disks'. Guess he means that the whole solution is always as strong as the weakest link. That's also why I'm not in favor of a non-redundant OS disk.
What is considered an enterprise disk
Enterprise disks are designed to work in 24/7 applications. They have a higher MTBF rating (ie less likely to break) to offset the fact that they will be running a lot more in a 24/7 application than the 8/5 application of a desktop computer.
In the days of ATA (now called PATA) these only existed as SCSI disks, but these days you can buy them with a SATA or SAS interface.
Problem is that these disks are more expensive (quite a lot actually), but they should be better suited to the task. Google uses consumer grade disks though. Guess they don't think it's worth the extra money.
For instance the Seagate Barracuda 7200.11 (http://www.seagate.com/docs/pdf/datasheet/disc/ds_barracuda_7200_11.pdf) versus the Barracuda ES.2 (http://www.seagate.com/docs/pdf/datasheet/disc/ds_barracuda_es_2.pdf).
what do you mean by spin down the drives
When there is no disk activity you can make the disks stop spinning. This saves power, reduces heat and can prolong the life of the drive (especially a consumer disk which is built to run during office hours only).
This is also one of those areas where you will hear different opinions, mostly depending on how old the info is that the person you talk to is using. In the past, disks had to keep on running all the time because starting them up could make the motor seize and ruin the drive. The same can still happen, but it's less likely these days.
Disks are actually rated for a number of spin-down/spin up cycles too.
BTW putting the OS on the array makes it about impossible to let the array spin down, since the OS often writes something to disk. Since you are not putting the OS on the array, spinning down the disks is something you could consider.
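If you do go the spin-down route, the usual knob on Linux is hdparm's -S standby timeout. A sketch only — the device names are invented, it needs root, and it only applies to disks the OS can see directly (most hardware RAID cards manage spin-down themselves, if at all):

    # Ask the data drives to spin down after ~30 minutes of inactivity (hdparm -S 241).
    # For -S, values 1-240 are multiples of 5 seconds; 241-251 are multiples of 30 minutes.
    import subprocess

    DATA_DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]   # array members, NOT the OS drive

    for drive in DATA_DRIVES:
        subprocess.run(["hdparm", "-S", "241", drive], check=True)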
:edit: spell disaster
-
Cool, I have both a 7200.11 and an ES drive. I never realized what the difference was.
-
OK, a few more questions now.
What is JBOD
Not exactly what Boykster meant there, but it's 'Just a Bunch Of Disks'. Guess he means that the whole solution is always as strong as the weakest link. That's also why I'm not in favor of a non-redundant OS disk.
That's what I was referring to; I guess saying 'a bunch of disks in JBOD' is redundant. What I should have said was a bunch of disks in RAID0 spanning, or just attached as individual drives. All data, no redundancy.
I agree on not considering MTBF an insurance policy, and on keeping packing materials for your drives. I now exclusively have been buying Seagate drives since they offer a 5 year warranty. I'm in the "don't spin down" camp, but there are valid arguments on both sides of the fence on that one.
-
I now exclusively have been buying Seagate drives since they offer a 5 year warranty.
Did you have to pay an administration fee? I really hated that. I bought 4 Seagates. 2 of them failed within a few months and they wanted something like a 50 euro administration fee.
A few months ago I sent some Maxtor drives back (to Seagate since they bought Maxtor) and I didn't have to pay.
I'm in the "don't spin down" camp, but there are valid arguments on both sides of the fence on that one.
I'm on both sides and I use both depending on the situation.
Looking at the annual failure rates that Seagate reports for using a desktop drive 8/5 versus an enterprise disk used 24/7 shows that the enterprise drive is twice as likely to fail. So a drive with almost double the MTBF is still twice as likely to die when it is used 4 times as much. That shows that at least Seagate thinks that running the drive twice as many hours ages the drive twice as much, ie spinning it down will prolong its life (rough math at the end of this post).
At 8 watts a drive, with 5 drives that adds up to 40 watts. It's not spectacular, but still.
On the other hand spinning down and up too often can kill the disk too.
Spinning a drive back up can take 15 seconds. It's annoying if you have to wait for that.
So on my webservers I don't spin down and my office file server does spin down.
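The rough math I mentioned: an MTBF rating converts to an annualized failure rate as roughly (powered-on hours per year) / MTBF. A sketch with made-up but ballpark-typical datasheet numbers (the Google paper shows the real world doing quite a bit worse):

    # Convert an MTBF rating into a rough annualized failure rate for a given duty cycle.
    import math

    def afr(mtbf_hours, hours_per_week):
        hours_per_year = hours_per_week * 52
        return 1 - math.exp(-hours_per_year / mtbf_hours)   # exponential-lifetime assumption

    # Illustrative numbers only, not from any specific drive's datasheet:
    print(f"desktop drive used 8/5:     {afr(700_000, 40):.1%}")     # ~0.3%
    print(f"enterprise drive used 24/7: {afr(1_200_000, 168):.1%}")  # ~0.7%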
-
Nope, no fee in the US at least to RMA a seagate within the warranty period. You do have to pay return shipping for a tracked carrier and have to have proper packaging.
I generally go for an advance replacement that costs $19.99 but they ship the drive via 2-day delivery and include a pre-paid return shipping label with the required packaging.