In July 2015, I bought a used Supermicro server from eBay to replace depot, so that depot could in turn replace my web server. I wanted Xeons with at least 32G of ECC RAM, 12 or more hot-swap drive bays, a simple HBA (no RAID) so I could run ZFS, and a 2U or 4U rack-mount case with mounting rails.
I wound up with dual L5640 CPUs, 48G of RAM, 12 hot-swap drive bays, and a 2U case with redundant 800W power supplies. The eBay listing was incorrect about the backplane; I wanted direct-attach so I could fully utilize 8 SAS lanes, but the chassis came with a BPN-SAS-826EL1 expander backplane that only uses 4 SAS lanes. I'm not going to sweat it; I'll get by just fine with 4 SAS lanes, and the cabling is cleaner.
Below is a list of hardware I've received, with prices.
Part | Description | P/N | Qty. | Unit Price | Total |
---|---|---|---|---|---|
motherboard | supermicro X8DTN+ | X8DTN+ | 1 | $599.99 | $599.99 |
ZFS drive 5 | Western Digital DC HC510 10TB SATA drive | HUH721010ALE604 | 1 | $288.44 | $288.44 |
ZFS drive 6 | Western Digital DC HC510 10TB SATA drive | HUH721010ALE604 | 1 | $288.44 | $288.44 |
ZFS drive 2 | HGST 7k4000 4TB Deskstar NAS | | 1 | $179.99 | $179.99 |
ZFS drive 1 | HGST 7k4000 4TB Deskstar | | 1 | $169.99 | $169.99 |
ZFS drive 3 | HGST 7k4000 4TB Deskstar | | 1 | $169.99 | $169.99 |
ZFS drive 4 | HGST 7k4000 4TB Deskstar NAS | | 1 | $169.99 | $169.99 |
SSD OS drive | Crucial MX100 512GB | CT512MX100SSD1 | 1 | $159.99 | $159.99 |
front bezel | Supermicro MCP210826010B filtered front bezel | MCP210826010B | 1 | $15.39 | $15.39 |
case | Supermicro CSE-826 with BPN-SAS-826EL1 backplane | CSE-826 | 1 | $0.00 | $0.00 |
power supply | Supermicro PWS-801-1R | PWS-801-1R | 2 | $0.00 | $0.00 |
CPU | Xeon L5640 2.26GHz hexacore | L5640 | 2 | $0.00 | $0.00 |
RAM | 48G ECC DDR3 | | 1 | $0.00 | $0.00 |
8-port SATA card | LSI 9211-8i with IT firmware revision P19 | 9211-8i | 1 | $0.00 | $0.00 |
Total | | | | | $2042.21 |
I will eventually purchase more hard drives.
Today, I have a single ZFS pool built from mirrored vdevs. It stores backups of the other machines on my LAN, plus some media files for Plex. In terms of hot-swap bay locations, it looks like this:
As of Nov 4, 2019, the pool looks like this after adding the third mirror vdev (and starting a fresh Time Machine backup of my laptop):
```
NAME              SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs1             14.5T  2.76T  11.7T        -         -    13%    19%  1.00x  ONLINE  -
  mirror         3.62T  1.86T  1.76T        -         -    31%    51%
    gpt/gpzfs1_0     -      -      -        -         -      -      -
    gpt/gpzfs1_1     -      -      -        -         -      -      -
  mirror         3.62T   905G  2.74T        -         -    22%    24%
    gpt/gpzfs1_2     -      -      -        -         -      -      -
    gpt/gpzfs1_3     -      -      -        -         -      -      -
  mirror         7.25T  13.3G  7.24T        -         -     0%     0%
    gpt/gpzfs1_4     -      -      -        -         -      -      -
    gpt/gpzfs1_5     -      -      -        -         -      -      -
```
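For reference, a pool like this is built by listing each mirrored pair on the zpool create command line. Here is a minimal sketch using the first two pairs of GPT labels above (illustrative, not necessarily the exact command I originally ran):

```
# zpool create zfs1 \
    mirror /dev/gpt/gpzfs1_0 /dev/gpt/gpzfs1_1 \
    mirror /dev/gpt/gpzfs1_2 /dev/gpt/gpzfs1_3
```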
My intent is to always use mirrored vdevs in ZFS pools. This makes upgrades easier and more incremental, and it makes resilvering both safer and faster when a drive fails.
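To sketch what such an incremental upgrade looks like (the _new labels here are hypothetical), each disk in one mirror gets swapped in turn; once both resilvers finish, the vdev grows on its own if autoexpand is set:

```
# zpool set autoexpand=on zfs1
# zpool replace zfs1 gpt/gpzfs1_0 gpt/gpzfs1_0_new
  ... wait for zpool status to show the resilver is done ...
# zpool replace zfs1 gpt/gpzfs1_1 gpt/gpzfs1_1_new
```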
I currently do not need a dedicated ZIL device (SLOG), since my sync write load is light. While I am using NFS, it isn't serving any VMs forcing truckloads of synchronous writes. If/when I need one, I'll look at an Intel S3700 SSD.
Given my current intended usage, it's unlikely that I'll need an SSD for L2ARC either. Right now, my usage is mainly backups, which are effectively write-only; reads will only occur when I need to do a restore, and a read cache is of no benefit in that case.
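For the record, both are one-line additions if I ever change my mind; the GPT labels below are hypothetical:

```
# zpool add zfs1 log /dev/gpt/slog0
# zpool add zfs1 cache /dev/gpt/l2arc0
```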
The current disk geometry:
```
        NAME              STATE     READ WRITE CKSUM
        zfs1              ONLINE       0     0     0
          mirror-0        ONLINE       0     0     0
            gpt/gpzfs1_0  ONLINE       0     0     0
            gpt/gpzfs1_1  ONLINE       0     0     0
          mirror-1        ONLINE       0     0     0
            gpt/gpzfs1_2  ONLINE       0     0     0
            gpt/gpzfs1_3  ONLINE       0     0     0
          mirror-3        ONLINE       0     0     0
            gpt/gpzfs1_4  ONLINE       0     0     0
            gpt/gpzfs1_5  ONLINE       0     0     0

errors: No known data errors
```
Years ago I had sworn off Seagate due to some issues we had at work with Barracuda drives. That was a long time ago, but I wish I had kept them off my list. Sadly, Microcenter no longer carries enterprise-grade drives at the local store.
I'm going with tried-and-true Ultrastar drives. I've used them on and off since the days when they were still made by IBM, then by Hitachi, then HGST, and now Western Digital. Since the real power savings (and HelioSeal) start with the 10TB model, I'm buying a pair of 10TB drives. This costs a little more money, but gets me about 2TB more space and a 2.5-million-hour MTBF. I might put these in their own pool as a mirrored vdev, to avoid adding another point of failure to the existing pool. If I do, I'll likely migrate some of my backup datasets to the new mirror.
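If I do go the separate-pool route, the migration would look roughly like this; the new pool name, GPT labels, and dataset names are just placeholders, not my actual layout:

```
# zpool create zfs2 mirror /dev/gpt/gpzfs2_0 /dev/gpt/gpzfs2_1
# zfs snapshot -r zfs1/backups@migrate
# zfs send -R zfs1/backups@migrate | zfs receive -u zfs2/backups
```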
When I installed the new drives in their hot-swap bays, they showed up as da3 and da4. I then did this:
```
# gpart create -s gpt da3
# gpart create -s gpt da4
# gpart add -t freebsd-zfs -l gpzfs1_2 -b1M -s3725G da3
# gpart add -t freebsd-zfs -l gpzfs1_3 -b1M -s3725G da4
# zpool add zfs1 mirror /dev/gpt/gpzfs1_2 /dev/gpt/gpzfs1_3
```
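To sanity-check the result, something like this confirms the partition labels and the expanded pool (output omitted here):

```
# gpart show -l da3 da4
# zpool status zfs1
# zpool list zfs1
```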
At any rate, UPS monitoring has been migrated to depot. Since depot doesn't have a DB9 serial port, I'm using a USB-to-serial adapter.
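I won't document the monitoring setup itself here, but as a rough sketch, assuming Network UPS Tools (NUT) from ports and an APC-style smart-serial UPS (the UPS name, driver, and device node below are assumptions, not my actual config), the adapter shows up as /dev/cuaU0 and the driver points at it:

```
# cat >> /usr/local/etc/nut/ups.conf <<'EOF'
[rackups]
        driver = apcsmart
        port = /dev/cuaU0
EOF
# sysrc nut_enable="YES"
# service nut start
```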
I need to put a Samsung 850 Pro SSD in depot. It's still running the OS from a now-ancient WD VelociRaptor drive that has seen better days. depot could use speedier I/O there now that it will be my web server, and I'd prefer the reliability of a Samsung 850 Pro versus a spinning drive.