The project continues..:) I was able to take the RAM, CPUs, and H310 HBA (reflashed to base LSI IT mode) out of the 420 and put them into the 520. The rest of the parts are inbound: 8 x 6TB SAS HDDs, an SD card adapter, and 8 x 3.5″ HDD caddies. I have been able to mess with the 520 a bit more too. I got all of the firmware packages updated (on something like the 520 there are multiple boards with firmware on them), and I have tweaked the BIOS settings to what I want (H/T disabled, extra data safety for the RAM, balanced power usage for performance, etc.). I am still formulating my plans for everything. I know I am going to be de-racking all of the other servers and then re-racking them. I just realized I have to get rails for the 420 so I can rack it…poopies. That's low on the list of priorities, as I need to get the 520 up and running so I can start figuring out what happens next with the other gear. At one point I was burning nearly 300 watts 24/7. I have been averaging a third of that, and I am hoping to stay around the same power burn with the 520.
What are the immediate plans for the new 520? It is going to be my new storage/virtualization server. As I talked about in part 1, the 520 is going to have a greatly expanded storage capability over the 310. The 310's hard drives are cabled in: if one of them fails, I have to power down the server, slide it out, open up the machine, change the drive, and reverse the process. The 310 is also limited to 4 drives. I need more drives for greater storage, higher redundancy, or both. I am actually going to balance the two: I am going to run all 8 drives in ZFS mirror vdevs. Think of it as a series of RAID 1 mirrors striped together. This means I lose 50% of my raw storage to redundancy overhead. However, it has some serious advantages:
- A drive failure is much faster to recover from. You only have to rebuild from the other drive in the mirror, not the entire array, which makes rebuilds much, much faster.
- Rebuilds do not bring the entire machine to a screeching halt.
- Rebuilds do not pound the entire array, so they do not raise the specter of another drive failing mid-rebuild.
- You can sustain multiple single-drive failures, as long as each one occurs in a separate mirrored pair.
- This arrangement has the highest performance of any array type that offers actual redundancy.
How about cons? Oh, there are some big ones to keep in mind:
- Because this is a stripe of mirrors, if any one mirror loses both of its drives, the entire pool collapses.
- The “storage efficiency” is rather low at 50%.
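The trade-offs in the lists above fall straight out of the arithmetic. Here is a quick sketch, using the drive count and size from the parts list in this post:

```python
# Sketch of the 8-drive stripe-of-mirrors pool described above
# (four 2-way mirror vdevs striped together, RAID 10 style).
drives = 8
drive_tb = 6                  # 6TB SAS drives from the parts list

raw_tb = drives * drive_tb    # 48TB raw
usable_tb = raw_tb // 2       # mirroring costs 50% -> 24TB usable

vdevs = drives // 2           # four mirrored pairs
# The pool survives as long as no single mirror loses BOTH drives:
best_case_failures = vdevs    # up to 4 failed drives, one per mirror
worst_case_failures = 2       # two failures in the SAME mirror kill the pool

print(raw_tb, usable_tb, best_case_failures)  # 48 24 4
```

So the same 50% overhead that hurts storage efficiency is what buys the fast, low-impact rebuilds and the (best-case) multi-drive failure tolerance.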
Why am I choosing the mirrored vdevs? I am going to have 48TB of raw storage. Once I set things up, I'll have 24TB of usable storage. Since ZFS likes to stay under 80% usage, when all is said and done that 24TB will leave around 19TB effectively available. Keep in mind how the RAM calculations work with ZFS: you need 8GB for the OS (FreeNAS), and then it's a general rule of 1GB of RAM per TB of disk. Since ZFS RAM requirements are calculated against the raw storage and not the usable storage, 48GB is not enough. It will run, but it will run better with more RAM. The 520 can handle up to 384GB of RAM, so I will start with the current 48GB and upgrade from there; my first jump will add 2 x 32GB of RAM, and I tend to eventually fully populate a server to its maximum RAM. Once I get the machine put together, I'll make another post with more detailed information about the 520..:)
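For anyone following along, the capacity and RAM numbers above check out with a few lines of arithmetic (the 8GB-plus-1GB-per-TB figure is the general rule of thumb quoted in this post, not a hard requirement):

```python
# RAM rule of thumb: a base amount for the OS plus ~1GB per TB of RAW disk.
raw_tb = 48                     # RAM rule uses raw capacity, not usable
os_ram_gb = 8                   # FreeNAS baseline from the rule above
recommended_gb = os_ram_gb + raw_tb * 1
print(recommended_gb)           # 56 -> the current 48GB falls a bit short

# Usable space after mirror overhead and the ~80% fill guideline:
usable_tb = raw_tb / 2          # 24TB after mirroring
effective_tb = usable_tb * 0.8  # ~19TB before hitting the 80% ceiling
print(effective_tb)             # 19.2
```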