Workflow server
lilshawn:
...or an SSD RAID0...
:dunno
mystic96:
I would stay away from SSD, unless you're talking enterprise SSD... but if the company is worried about a few grand for a server, then I doubt you'll get approval to spend 2-3x that just for a couple of disks so it's a moot point :)
I would imagine a RAID 1 for just 6 users would be fine, or maybe a new RAID 1 and migrate those images over to separate the IO. Stay away from RAID 0 unless it's throwaway data, or you don't mind recovering from <insert media type> backups. Even over a dedicated gigabit backup network, if you're talking 100+ GB, be prepared to be down for at least half a day.
Kahlid is spot on though. You can't really fix an issue unless you properly diagnose it. Do a few days' worth of Perfmon for disk IO and review. It may be possible to do a single RAID 1 for both OS + data, or you may find it prudent to separate the OS and data spindles. As far as servers are concerned, you should pretty much never, ever build one without RAID redundancy. It may save a few hundred bucks up front (I believe the current MSRP for an HP-branded 300GB DP 6Gb SAS drive is right at $300, but that's MSRP), but the potential loss of data and/or employee productivity could cost many times that.
Server talk on an arcade forum, I'm in heaven :applaud:
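If you want a quick-and-dirty look at disk IO before setting up Perfmon properly, something along these lines works as a rough sketch (assumes Python with the psutil package installed; the interval and sample count are arbitrary, not recommendations):

```python
# Rough disk IO sampler - assumes the 'psutil' package is installed.
# Samples system-wide disk counters every few seconds so you can eyeball
# whether the box is actually IO-bound before buying hardware.
import time
import psutil

INTERVAL = 5    # seconds between samples (illustrative)
SAMPLES  = 12   # one minute total; run much longer in practice

prev = psutil.disk_io_counters()
for _ in range(SAMPLES):
    time.sleep(INTERVAL)
    cur = psutil.disk_io_counters()
    reads  = cur.read_count  - prev.read_count
    writes = cur.write_count - prev.write_count
    rmb = (cur.read_bytes  - prev.read_bytes)  / 1e6
    wmb = (cur.write_bytes - prev.write_bytes) / 1e6
    print(f"{reads/INTERVAL:6.1f} r/s  {writes/INTERVAL:6.1f} w/s  "
          f"{rmb/INTERVAL:6.1f} MB/s read  {wmb/INTERVAL:6.1f} MB/s write")
    prev = cur
```

Sustained high IOPS with low MB/s usually points at random IO, which is where a single spindle (or a plain RAID 1) falls over first.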
MonMotha:
RAID 1 generally does not improve performance. It can improve READ performance, but only at the expense of not catching errors until you do a scrub, and most implementations do simultaneous reads of all members to verify integrity before passing the data on.
You need striping to get improved performance. This means RAID 0 (which will have reliability issues), RAID 5 (not recommended for more than 3-4 disks with modern, high-capacity drives, as unrecoverable read errors have gotten relatively common even on "enterprise" kit), or RAID 6 (minimum 4 drives). A 4-5 element RAID 6 array of relatively fast drives will have OK sequential throughput, comparable to a mid-range consumer SSD, but still somewhat poor random access performance, which seems to be a consideration here.
My experience has been that the reliability of decent consumer grade SSDs (e.g. modern Intel) has been drastically understated. They seem to do at least as well as consumer spinning metal, if not better. They do tend to inexplicably "just up and completely die" relatively randomly, whereas spinning metal tends to exhibit a more recognizable and incremental failure in many cases, but the frequency of failure seems no worse than spinning metal except on POS low end models.
You can also of course get "enterprise" SSDs. I've seen 2TB models for ~$2000, which is what a decent array of 4-5 "enterprise" HDDs will run you, albeit for less capacity, and the performance will totally kill any spinning metal array you can make for that price. There's a reason they come as 8x PCIe cards. I'm not sure that the reliability is any better than consumer models, though, but that's generally true of "enterprise" HDDs too (they just have better error reporting since the consumer ones are crippled for marketing reasons).
All this said, yes, do some actual diagnostics and figure out what the issue is. From the description, it's very likely disk IO, but it's not overly hard to figure out for sure.
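To put a rough number on the unrecoverable read error point above (back-of-envelope only; the 1-in-1e14-bits rate is a typical consumer spec-sheet figure, and the drive count and capacity are just example numbers):

```python
# Back-of-envelope odds of hitting at least one unrecoverable read error (URE)
# while rebuilding a degraded RAID 5 array. Numbers are assumptions taken from
# typical drive spec sheets, not measurements.
URE_RATE  = 1e-14   # errors per bit read (consumer spec; enterprise is often 1e-15)
DRIVE_TB  = 4       # capacity of each surviving drive (example)
SURVIVORS = 7       # an 8-drive RAID 5 rebuild reads the other 7 in full

bits_read = SURVIVORS * DRIVE_TB * 1e12 * 8
p_clean   = (1 - URE_RATE) ** bits_read   # chance every bit reads back fine
print(f"Bits read during rebuild: {bits_read:.2e}")
print(f"Probability of at least one URE: {1 - p_clean:.1%}")
```

With those assumptions the rebuild has roughly a 9-in-10 chance of tripping over at least one URE, which is why big-drive RAID 5 gets called a time bomb.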
mystic96:
--- Quote from: MonMotha on March 21, 2013, 11:44:05 pm ---You can also of course get "enterprise" SSDs. I've seen 2TB models for ~$2000, which is what a decent array of 4-5 "enterprise" HDDs will run you, albeit for less capacity, and the performance will totally kill any spinning metal array you can make for that price. There's a reason they come as 8x PCIe cards. I'm not sure that the reliability is any better than consumer models, though, but that's generally true of "enterprise" HDDs too (they just have better error reporting since the consumer ones are crippled for marketing reasons).
--- End quote ---
I can't say with 100% assurance that this is true of all makes, but EMC's enterprise SSD drives actually have double the stated capacity. The second set of chips is left in a low-power state, and when an in-use chip hits its max write count, that data is moved to a standby chip and the original is disabled, pointers are updated, etc., etc. The big thing about enterprise-class drives is that they are meant to be spun 95%+ of the time, and their MTBFs are rated accordingly.
Any particular article you can point me to regarding the RAID 5 issue you speak of? I'm having a really hard time digesting that, having never experienced it myself.
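Purely to illustrate that spare-chip remapping idea (a toy model, not how any vendor's firmware actually works; the names and write limit are made up):

```python
# Toy model of the spare-chip remapping described above: when a chip hits its
# write limit, its data moves to a standby chip and the pointer is updated.
# Entirely illustrative - real wear levelling works at page/block level in firmware.
MAX_WRITES = 3   # absurdly low, just to make the example visible

class ToySSD:
    def __init__(self, active_chips, spare_chips):
        self.chips  = {c: {"data": None, "writes": 0} for c in active_chips}
        self.spares = list(spare_chips)             # powered-down standby chips
        self.map    = {c: c for c in active_chips}  # logical -> physical pointer

    def write(self, logical, data):
        phys = self.map[logical]
        chip = self.chips[phys]
        if chip["writes"] >= MAX_WRITES:            # chip worn out: migrate
            new_phys = self.spares.pop(0)
            self.chips[new_phys] = {"data": chip["data"], "writes": 0}
            self.map[logical] = new_phys            # update the pointer
            phys, chip = new_phys, self.chips[new_phys]
        chip["data"] = data
        chip["writes"] += 1

ssd = ToySSD(active_chips=["A"], spare_chips=["B"])
for i in range(5):
    ssd.write("A", f"payload {i}")
print(ssd.map)   # {'A': 'B'} - the worn chip was retired and remapped
```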
kahlid74:
--- Quote from: mystic96 on March 22, 2013, 08:56:29 am ---
--- Quote from: MonMotha on March 21, 2013, 11:44:05 pm ---You can also of course get "enterprise" SSDs. I've seen 2TB models for ~$2000, which is what a decent array of 4-5 "enterprise" HDDs will run you, albeit for less capacity, and the performance will totally kill any spinning metal array you can make for that price. There's a reason they come as 8x PCIe cards. I'm not sure that the reliability is any better than consumer models, though, but that's generally true of "enterprise" HDDs too (they just have better error reporting since the consumer ones are crippled for marketing reasons).
--- End quote ---
I can't say with 100% assurance that this is true of all makes, but EMC's enterprise SSD drives actually have double the stated capacity. The second set of chips is left in a low-power state, and when an in-use chip hits its max write count, that data is moved to a standby chip and the original is disabled, pointers are updated, etc., etc. The big thing about enterprise-class drives is that they are meant to be spun 95%+ of the time, and their MTBFs are rated accordingly.
Any particular article you can point me to regarding the RAID 5 issue you speak of? I'm having a really hard time digesting that, having never experienced it myself.
--- End quote ---
Basically what MonMotha is saying is that RAID 5 has had flaws since its origin, which were masked by small drive sizes. With 1-4 TB drives now, a RAID 5 of 8 disks, each with 4 TB, is a time bomb waiting to go off. The likelihood of a second failure (or an unrecoverable read error) hitting during a rebuild is considerable because of how much data each drive holds.
By comparison, if you had EMC walk in the front door to design your new system, they would use a max of 6 drives per RAID 6 group and then stripe those groups into RAID 60 super groups for your data.
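The usable-capacity math for that kind of layout is easy to sanity check (drive size and group count below are just example numbers):

```python
# Usable capacity and fault tolerance for a RAID 60 layout built from several
# RAID 6 groups (max 6 drives each) striped together. Example numbers only.
DRIVE_TB   = 4   # per-drive capacity (example)
GROUP_SIZE = 6   # drives per RAID 6 group (two of them hold parity)
GROUPS     = 2   # number of RAID 6 groups striped into the RAID 60 set

total_drives = GROUP_SIZE * GROUPS
usable_tb    = (GROUP_SIZE - 2) * GROUPS * DRIVE_TB   # RAID 6 loses 2 drives/group
print(f"{total_drives} drives, {usable_tb} TB usable "
      f"({usable_tb / (total_drives * DRIVE_TB):.0%} efficiency)")
print(f"Survives up to {2 * GROUPS} failures if they land 2 per group, "
      f"but only 2 guaranteed in the worst case")
```

You give up a third of the raw capacity in that example, but any single group can lose two drives during a rebuild without taking the data with it.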