
Workflow server


mystic96:

I think everybody you are talking to is stuck in 2007. Back then, URE rates were spec'd at one error per 10^13 or 10^14 bits read. Nowadays it's one per 10^15 or 10^16. These are those numbers in our speak:
10^13 bits = 1.13 TiB
10^14 bits = 11.3 TiB
10^15 bits = 113 TiB
10^16 bits = 1.13 PiB

This is per drive, not per array. Your assertion also assumes that the drives are 100% full. Maybe it's an east-coast/west-coast thing, but again I've never heard a single person bring this up as a legitimate concern. The numbers speak for themselves and the only legit article I was able to find to back up your side was from 2007 talking about 2009+. That is to say, it was basically some dude in North Scottsdale being a Nostradumbass.
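To put some numbers on this, here's a rough sketch of where those TiB figures come from and what a URE spec implies for a large read like a rebuild. The function names and the 10 TB rebuild example are mine, and the model (UREs independent per bit, so a clean read follows an exponential) is a simplification of how vendors actually spec this:

```python
import math

def bits_to_tib(bits: float) -> float:
    """Bits read per expected URE, expressed in TiB (2^40 bytes)."""
    return bits / 8 / 2**40

def p_clean_read(tb_read: float, ure_exponent: int) -> float:
    """Probability of reading `tb_read` decimal TB with no URE,
    for a drive rated 1 error per 10^ure_exponent bits read."""
    bits_read = tb_read * 1e12 * 8
    return math.exp(-bits_read / 10**ure_exponent)

for k in (13, 14, 15, 16):
    print(f"10^{k} bits = {bits_to_tib(10**k):.2f} TiB")

# e.g. a rebuild that reads 10 TB from one 10^14-rated drive:
print(f"clean-rebuild odds: {p_clean_read(10, 14):.3f}")
```

Even under this pessimistic model, a 10^15- or 10^16-rated drive makes the per-drive URE risk during a rebuild small, which is the point about the 2007-era numbers being stale.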

You bring up a good point about performance differences. RAID5/6 are better for writes and RAID1/10 are better for reads. The problem I have with RAID6 is that DP write performance is worse than RAID5's, and P+Q is only marginally better (we're talking typically less than a 5% increase in throughput) for double the parity overhead, and that overhead bites when it matters most: rebuilds. Taking URE-related failures off the table... I would take an increase in capacity over a marginal increase in write performance.
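The capacity side of that trade-off is easy to quantify. A quick sketch using the standard usable-capacity formulas; the eight-drive, 4 TB configuration is just a hypothetical example:

```python
# Usable capacity per layout, to put numbers on the
# capacity-vs-redundancy trade-off discussed above.

def usable_tb(n_drives: int, drive_tb: float, layout: str) -> float:
    if layout == "raid5":    # capacity of one drive lost to parity
        return (n_drives - 1) * drive_tb
    if layout == "raid6":    # two drives lost to parity (P+Q or DP)
        return (n_drives - 2) * drive_tb
    if layout == "raid10":   # mirrored pairs: half the raw capacity
        return n_drives / 2 * drive_tb
    raise ValueError(f"unknown layout: {layout}")

n, size = 8, 4.0  # e.g. eight 4 TB drives
for layout in ("raid5", "raid6", "raid10"):
    print(f"{layout}: {usable_tb(n, size, layout):.0f} TB usable")
```

With eight drives, RAID5 gives you one extra drive's worth of capacity over RAID6 and nearly double what RAID10 does, which is the capacity argument in a nutshell.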

Enterprise SSDs are truly nuts throughput-wise, but I still question their longevity. And as you point out, in some cases they are no faster than spinning drives. I'm just not sold on them as an enterprise solution yet, although that video that was posted in the IT history thread is going to raise some eyebrows on Monday :)

Crap, time to go camping. Have a good weekend man, to be continued?

