Daniel's Home Computing: kiva
Last Modified Nov 10, 2019
Manuals.
Pictures.

In July 2015, I bought a used Supermicro server from eBay to replace depot, so that depot could in turn replace my web server. I wanted Xeons with at least 32G of ECC RAM, 12 or more hot-swap drive bays, a simple HBA (no RAID) so I could run ZFS, and a 2U or 4U rack-mount case with mounting rails.

I wound up with dual L5640 CPUs, 48G of RAM, 12 hot-swap drive bays, and a 2U case with redundant 800W power supplies. The eBay listing was incorrect with respect to the backplane: I wanted direct-attach so I could fully utilize 8 SAS lanes, but the chassis came with a BPN-SAS-826EL1 expander backplane that uses only 4. I'm not going to sweat it; I'll get by just fine with 4 SAS lanes, and the cabling is cleaner.

Below is a list of hardware I've received, with prices.

kiva hardware


I will eventually purchase more hard drives.

Today, I have a single ZFS pool built from mirrored vdevs. It stores backups of other machines on my LAN and some media files for Plex. In terms of hot-swap bay locations, it looks like this:

As of Nov 4, 2019, the pool looks like this (zpool list -v) after adding the third mirror vdev (and starting a fresh Time Machine backup of my laptop):

NAME               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs1              14.5T  2.76T  11.7T        -         -    13%    19%  1.00x  ONLINE  -
  mirror          3.62T  1.86T  1.76T        -         -    31%    51%
    gpt/gpzfs1_0      -      -      -        -         -      -      -
    gpt/gpzfs1_1      -      -      -        -         -      -      -
  mirror          3.62T   905G  2.74T        -         -    22%    24%
    gpt/gpzfs1_2      -      -      -        -         -      -      -
    gpt/gpzfs1_3      -      -      -        -         -      -      -
  mirror          7.25T  13.3G  7.24T        -         -     0%     0%
    gpt/gpzfs1_4      -      -      -        -         -      -      -
    gpt/gpzfs1_5      -      -      -        -         -      -      -
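
For reference, names like gpt/gpzfs1_4 above are GPT partition labels. Purely as a sketch (assuming FreeBSD's gpart, which the gpt/ device paths suggest; the da4/da5 device names are made up), labeling a pair of disks and building a pool of mirrors looks roughly like this:

# Give each disk a GPT label so the pool refers to stable names
# rather than raw device numbers (device names are hypothetical).
gpart create -s gpt da4
gpart add -t freebsd-zfs -l gpzfs1_4 da4
gpart create -s gpt da5
gpart add -t freebsd-zfs -l gpzfs1_5 da5

# A pool of mirrors is just a list of mirror vdevs at creation time.
zpool create zfs1 \
    mirror gpt/gpzfs1_0 gpt/gpzfs1_1 \
    mirror gpt/gpzfs1_2 gpt/gpzfs1_3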

My intent is to always use mirrored vdevs in ZFS pools. This makes upgrades easier and more incremental, and it makes resilvering both safer and faster when a drive fails.
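
Roughly, that workflow looks like this; the gpzfs1_2_new and gpzfs1_0_big labels are hypothetical:

# Grow the pool incrementally by adding another mirrored pair.
zpool add zfs1 mirror gpt/gpzfs1_4 gpt/gpzfs1_5

# Replace a failed drive; only the affected mirror has to resilver.
zpool replace zfs1 gpt/gpzfs1_2 gpt/gpzfs1_2_new

# Upgrade a mirror's capacity in place: attach a larger disk, wait for
# the resilver to finish, detach the smaller one, then repeat for its
# partner.
zpool set autoexpand=on zfs1
zpool attach zfs1 gpt/gpzfs1_0 gpt/gpzfs1_0_big
zpool detach zfs1 gpt/gpzfs1_0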

I currently do not need a dedicated ZIL device (SLOG), since my sync write load is light. While I do use NFS, it isn't serving VMs that force truckloads of synchronous writes. If/when I need one, I'll look at an Intel S3700 SSD.
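
If that changes, adding one is a single command; a sketch, with a hypothetical gpt/slog0 label:

# Add a dedicated log device (SLOG) so synchronous writes land on the
# SSD instead of the spinning disks; "zpool remove zfs1 gpt/slog0"
# takes it back out if it turns out not to help.
zpool add zfs1 log gpt/slog0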

Given my current intended usage, it's unlikely that I'll need an SSD for L2ARC. Right now the pool mostly just absorbs backups, which is almost all writes; reads only happen when I need to do a restore, and a read cache is of no benefit there.
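
Should that change, a cache device is just as easy to add and drop; another sketch, with a hypothetical gpt/l2arc0 label:

# Add an SSD partition as a read cache; L2ARC contents are disposable,
# so the device can be removed at any time without risk to the pool.
zpool add zfs1 cache gpt/l2arc0
zpool remove zfs1 gpt/l2arc0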

The current disk geometry:


Diary