Anyone here use NAS (Network Attached Storage) devices?
wesbrown18:
I admit that I'm a bit of an outlier in this regard, but here's my home office NAS:
* OpenIndiana
* AMD Opteron 4334 (6 cores, 3.1GHz)
* SuperMicro MBD-H8SCM Socket C32 motherboard
* 32GB of ECC RAM
* LSI 9201-16i HBA
* 1x 64GB Crucial M4 as system disk (root)
* 2x 20GB Intel 313's as a write log disk (ZIL)
* 1x 256GB Samsung 840 Pro as a cache disk (L2ARC)
* 12x 2TB SATA WD Greens (will replace one by one with Seagate Constellations)
* 2x 10gbps Infiniband network connections
* 2x 1gige network connections
This gives me an effective 24TB of space that I can push at 10gbps across my network. Ironically, my cabinet doesn't use any of that -- I don't believe in having every game made to man available right on the cabinet, so it has a 64GB SSD. I do have a collection of games stored on there.
drventure:
Damnation. That's a hell of a home server!
wesbrown18:
--- Quote from: drventure on April 30, 2013, 06:07:08 pm ---Damnation. That's a hell of a home server!
--- End quote ---
Heh. Yeah. :) I do it this way so that the rest of my test and development machines don't need any local disks. I have a pair of 12-core AMD 1U servers with Infiniband. They are diskless: they bootstrap over TFTP and then access their disks via RDMA over Infiniband. The nice thing about this is that with ZFS, I can snapshot images before making changes. Kind of like VMware's snapshots of VMs, but with real hardware.
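For anyone curious, that snapshot workflow is just stock ZFS commands; a rough sketch (the dataset name "internal/dev01" is made up for illustration, not my real layout):

--- Code: ---# Take a point-in-time snapshot before touching a machine's image
zfs snapshot internal/dev01@pre-upgrade

# If the change goes sideways, roll the whole image back
zfs rollback internal/dev01@pre-upgrade

# Or clone the snapshot into a writable copy for another box
zfs clone internal/dev01@pre-upgrade internal/dev01-test
--- End code ---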
kahlid74:
--- Quote from: wesbrown18 on April 30, 2013, 05:33:22 pm ---I admit that I'm a bit of an outlier in this regards, but here's my home office NAS:
* OpenIndiana
* AMD Opteron 4334 (6 cores, 3.1GHz)
* SuperMicro MBD-H8SCM Socket C32 motherboard
* 32GB of ECC RAM
* LSI 9201-16i HBA
* 1x 64GB Crucial M4 as system disk (root)
* 2x 20GB Intel 313's as a write log disk (ZIL)
* 1x 256GB Samsung 840 Pro as a cache disk (L2ARC)
* 12x 2TB SATA WD Greens (will replace one by one with Seagate Constellations)
* 2x 10gbps Infiniband network connections
* 2x 1gige network connections
This gives me an effective 24TB of space that I can push at 10gbps across my network. Ironically, my cabinet doesn't use any of that -- I don't believe in having every game made to man available right on the cabinet, so it has a 64GB SSD. I do have a collection of games stored on there.
--- End quote ---
I'm glad you're replacing those Green drives; that LSI card doesn't allow spin down, so you've effectively got a time bomb in that NAS/SAN. I'm assuming striped disks, since you say you've got 24TB of space? Kind of dangerous for that large a chunk of data, but as long as you've got it backed up / are okay with a complete loss of data, no worries.
Good ole infiniband.
wesbrown18:
--- Quote from: kahlid74 on May 01, 2013, 09:54:53 am ---I'm glad you're replacing those Green drives; that LSI card doesn't allow spin down, so you've effectively got a time bomb in that NAS/SAN. I'm assuming striped disks, since you say you've got 24TB of space? Kind of dangerous for that large a chunk of data, but as long as you've got it backed up / are okay with a complete loss of data, no worries.
Good ole infiniband.
--- End quote ---
Real capacity is in the 14TB range -- I'm not crazy enough to run it in a JBOD striped configuration:
--- Code: ---NAME       USED   AVAIL  REFER  MOUNTPOINT
internal   9.38T  4.88T  258G   /internal
--- End code ---
Actually, the WD Green drives already have spin down/timeout disabled. They were giving me performance issues with the constant spinning down -- with that many disks, it becomes a cascading spin down/spin up problem. Disabling spin down on each individual drive was a ---smurfing--- pain in the ass, as I had to hook each one up to a motherboard where the utility could see it from DOS.
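If anyone else has to do this: the DOS utility for the Greens' head-park/idle timer is usually WD's wdidle3. I'm going from memory on the exact switches, so double-check before running it:

--- Code: ---wdidle3 /R      # report the current idle3 timer
wdidle3 /S300   # set it to 300 seconds, or
wdidle3 /D      # disable the timer entirely
--- End code ---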
You're absolutely right about it still being a time bomb. Striped? Not exactly. They're configured as a pair of RAIDZ2's, so I can sustain up to two drive failures per RAIDZ2. Regardless, I am replacing them with non-Green drives as personal funds allow. That's the nice thing about ZFS: I can replace the drives incrementally. I'll probably replace them with 3TB or 4TB Seagate Constellations -- once all the drives in a RAIDZ2 are replaced, I get the upgraded capacity.
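The capacity math, for anyone following along: each 6-disk RAIDZ2 gives up two disks' worth of space to parity, so with 2TB drives it works out like this:

```shell
# Usable space per 6-disk RAIDZ2 vdev: (6 disks - 2 parity) * 2TB
per_vdev=$(( (6 - 2) * 2 ))
# Two such vdevs in the pool
total=$(( per_vdev * 2 ))
echo "${total}TB raw usable"   # prints "16TB raw usable"
```

The real-world figure lands closer to 14TB once base-2 vs base-10 drive sizes and ZFS overhead take their cut, which matches my `zfs list` output above.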
Topology:
--- Code: ---ZPOOL
2 drive ZIL (20GB SSD)
1 drive L2ARC (256GB SSD)
6 drive RAIDZ2 - 4 data, 2 parity
6 drive RAIDZ2 - 4 data, 2 parity
--- End code ---
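Rebuilding that topology from scratch would look roughly like this -- the device paths are placeholders in Solaris-style cXtYdZ form, not my actual ones, and `log mirror` is what gives you the mirrored ZIL pair:

--- Code: ---# Illustrative only -- device names are placeholders
zpool create internal \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
  raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
  log mirror c2t0d0 c2t1d0 \
  cache c2t2d0

# Swap a Green for a Constellation in place; resilver runs automatically
zpool replace internal c1t0d0

# Let a fully-replaced vdev grow into the larger drives
zpool set autoexpand=on internal
--- End code ---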
The funny thing is? I used to have a regular non-ECC AMD motherboard in there. The lack of ECC RAM was causing ZFS parity errors on the drives, which I blamed on the WD Greens. The errors disappeared when two DIMMs in a row died and I replaced them. I will never again build a ZFS server on this scale without ECC RAM.
That's why I replaced it with an Opteron-class motherboard and CPU with ECC RAM.
Infiniband is kind of like a high-end sports car in that it needs constant tuning. I built a 40gbps Infiniband fabric for the HPC cluster at my $DAYJOB, and there is so much lore involved. But we push 8 gigabytes a second of data, continuously, across the entire fabric.
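For the curious, the tuning loop mostly starts with checking link state and measured throughput. `ibstat` ships with the OFED tools and `ib_write_bw` comes from the perftest package; the hostname below is a placeholder:

--- Code: ---# Verify each HCA port is Active and linked at the expected rate
ibstat

# Raw RDMA write bandwidth between two nodes
ib_write_bw                # run on the server node first
ib_write_bw server-node    # then point the client at it
--- End code ---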