In early November of 2012, I started piecing together a rackmount computer to act as my backup and media server. The intent here was to have a machine with at least 8 hot-swap bays in a rackmount chassis that would accommodate at least an ATX motherboard. It will run the latest version of FreeBSD-stable, configured with one or more ZFS pools.
I wound up with a MicroATX motherboard, and am using an i5 2405S CPU to help keep the power and heat down. This is still mostly overkill CPU-wise, but I didn't want to run into scenarios where I didn't have the CPU to do things on the machine. I also wanted to be able to use 32 gigabytes of RAM, since ZFS is somewhat RAM-hungry. The desire to use at least 32 gigabytes of RAM ruled out low-power CPUs like the Intel Atom processors I use in ria and www.
Below is a list of hardware I've received, with prices.
Part | Description | P/N | Qty. | Unit Price | Total |
---|---|---|---|---|---|
CPU | Core i5-2405S Sandy Bridge 2.5GHz LGA 1155 Quad-Core processor | BX80623I52405S | 1 | $219.99 | $219.99 |
case | Rosewill RSV-L4411 | RSV-L4411 | 1 | $179.99 | $179.99 |
RAM | G.SKILL Ares Series 32GB (4x8GB) 240-Pin DDR3 1333 (PC3 10666) | F3-1333C9Q-32GAO | 1 | $129.99 | $129.99 |
motherboard | ASRock H77 Pro4-M | H77 Pro4-M | 1 | $89.99 | $89.99 |
8-port SATA card | IBM M1015 | M1015 | 1 | $80.00 | $80.00 |
test drive | Western Digital Velociraptor 600GB 10,000 rpm SATA 6.0Gb/s 3.5" hard drive | WD6000HLHX | 1 | $69.99 | $69.99 |
power supply | Seasonic S12II 430B 430W 80-Plus Bronze | S12II 430B | 1 | $59.99 | $59.99 |
SFF-8087 to SATA cable | 3Ware CBL-SFF8087OCF-10M 1-meter SFF-8087 to SATA cable | CBL-SFF8087OCF-10M | 2 | $14.69 | $29.38 |
Total | | | | | $859.32 |
I still need to purchase more hard drives.
I currently have a Samsung 840 Pro 512G as my boot drive. A pair of HGST Deskstar 4TB drives are configured as a mirror and acting as backup for other hosts on my LAN. In terms of disk location, it looks like this in the chassis:
The ZFS pool:
```
% zpool status
  pool: zfs1
 state: ONLINE
  scan: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        zfs1              ONLINE       0     0     0
          mirror          ONLINE       0     0     0
            gpt/gpzfs1_0  ONLINE       0     0     0
            gpt/gpzfs1_1  ONLINE       0     0     0

errors: No known data errors
```
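For the record, a mirror like this can be assembled with GPT labels along these lines. This is a sketch, not the exact commands I ran: the device names `ada1`/`ada2` are placeholders, and the 1 MB alignment is just a reasonable choice; only the labels come from the pool status above.

```shell
# Hypothetical device names; the GPT labels match the pool status above.
gpart create -s gpt ada1
gpart add -t freebsd-zfs -l gpzfs1_0 -a 1m ada1
gpart create -s gpt ada2
gpart add -t freebsd-zfs -l gpzfs1_1 -a 1m ada2

# Build the two-way mirror from the labeled partitions.
zpool create zfs1 mirror gpt/gpzfs1_0 gpt/gpzfs1_1
```

Using the `gpt/` labels instead of raw device names means the pool survives devices being renumbered when drives move between bays.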
Given my current intended usage, it's unlikely that I'll need an SSD for L2ARC or a SLOG (dedicated ZIL). Right now, my usage is just backups. That's essentially write-only; reads will only occur when I need to do a restore, and cache is of no benefit in that case.
The current disk geometry:
I have an IBM M1015 installed so I can utilize all of the hot-swap drive bays. I have it in the PCIe x16 slot of the ASRock H77 Pro4-M motherboard, since I have no need for discrete graphics in a rackmounted server that lives in my basement. The card is x8 electrically, and the ASRock H77 Pro4-M has no x8 slots. I could have placed it in an x4 slot, but with the x16 slot unoccupied, it made sense to use it.
More information on using the IBM M1015 in IT (Initiator Target) mode can be found here. Most of the information was gleaned from various sources on the web, but I wrote up my own version so I don't have to hunt it down or encounter dead links if/when I need the information again.
Part | Description | P/N | Qty. | Unit Price | Total |
---|---|---|---|---|---|
raidz2 pool drives | Western Digital Red WD30EFRX 3.0TB IntelliPower 64M cache SATA 6.0Gb/s | WD30EFRX | 6 | $170.00 | $1020.00 |
120mm fans | Noctua NF-F12 PWM | NF-F12 PWM | 3 | $21.00 | $63.00 |
80mm fans | Noctua NF-R8 | NF-R8 | 2 | $15.00 | $30.00 |
SSD for ZIL | | | 1 | $0.00 | $0.00 |
SSD for L2ARC | | | 1 | $0.00 | $0.00 |
serial cable | USB to serial cable to connect to Powerware 5125 UPS | | 1 | $0.00 | $0.00 |
Total | | | | | $1113.00 |
Thanks to Warren Block for his concise write-up on partitioning with GPT at http://www.wonkity.com/~wblock/docs/html/disksetup.html
I'm still working on installing the ports I need on depot.
In the process, I discovered that GetLocalInterfaces() in my libDwm library did not work correctly on FreeBSD 9.1. So I had to fix it. It now works correctly and sitetrafficd is now running on depot, recording IP traffic totals and TCP handshake round-trip times.
```
da0 at mps0 bus 0 scbus0 target 0 lun 0
da0: Fixed Direct Access SCSI-5 device
da0: 150.000MB/s transfers
da0: Command Queueing enabled
da0: 70911MB (145226112 512 byte sectors: 255H 63S/T 9039C)
```

I then issued `camcontrol stop da0` to stop the drive, and removed the drive. This also went fine:

```
mps0: mpssas_alloc_tm freezing simq
mps0: mpssas_remove_complete on handle 0x0009, IOCStatus= 0x0
mps0: mpssas_free_tm releasing simq
(da0:mps0:0:0:0): lost device - 0 outstanding, 1 refs
(pass1:mps0:0:0:0): passdevgonecb: devfs entry is gone
(da0:mps0:0:0:0): removing device entry
```

Hence I believe I'm all set for adding drives to be used with ZFS.
I installed smartmontools from ports, and enabled smartd in /etc/rc.conf. I configured the disks I want monitored in /etc/periodic.conf. I then started smartd.
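The configuration amounts to just a couple of lines. A sketch, with placeholder device names (the smartmontools port installs a periodic script that reads `daily_status_smart_devices`):

```
# /etc/rc.conf -- run the smartd daemon at boot
smartd_enable="YES"

# /etc/periodic.conf -- report SMART status in the daily periodic mail
# (device names here are placeholders; list the drives to monitor)
daily_status_smart_devices="/dev/ada0 /dev/ada1"
```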
The SFF-8087 to SATA forward breakout cables arrived. I installed them and zip-tied them neatly around the perimeter of the case so they don't have an adverse effect on the airflow.
I put depot's case top on, installed the modified handles and put depot back in the rack. I believe I'm ready for pool drives.
I wound up flashing the IT firmware to the IBM M1015 by temporarily installing it in the motherboard of my hackintosh. I used FreeDOS and the sas2flsh utility, with 2108it.bin and mpt2sas2.rom as input files for sas2flsh. I had to do this because depot's motherboard has a UEFI BIOS (and hence sas2flsh would not work on it), and I could not get the EFI shell to work, so I could not use sas2flsh.efi. Once flashed, I put it back in depot, and as near as I can tell, it's all OK. From dmesg:
```
mps0: port 0xe000-0xe0ff mem 0xf7dc0000-0xf7dc3fff, 0xf7d80000-0xf7dbffff irq 16 at device 0.0 on pci1
mps0: Firmware: 09.00.00.00, Driver: 14.00.00.01-fbsd
mps0: IOCCapabilities: 1285c
```

And camcontrol sees it:

```
dwm@depot:/home/dwm% sudo camcontrol devlist -v
scbus0 on mps0 bus 0:
...
```
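The flashing itself boiled down to a few sas2flsh invocations from the FreeDOS prompt. A sketch from memory, so treat it as a guide rather than gospel; the exact steps can vary by firmware revision (`-o` enables the advanced operations):

```shell
sas2flsh -o -e 6                            # erase the existing (IR) flash
sas2flsh -o -f 2108it.bin -b mpt2sas2.rom   # write IT firmware and boot ROM
sas2flsh -listall                           # verify what the card reports
```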
I have a plan to modify the handles on the RSV-L4411 case so I can get it fully into the rack on the rails without conflict. I modified one of them and it works.
I removed the feet from the Rosewill RSV-L4411. I'm a bit surprised feet were included on a rackmount case, though I guess they'd be handy if I were using a shelf instead of sliding rails. I'll put them in the Middle Atlantic drawer that's installed in my rack in case I ever need them.
I installed the power supply, motherboard and temporary boot drive in the Rosewill RSV-L4411. It appears that the IBM M1015 card will arrive tomorrow, at which point I'll install it in the motherboard and try to get it working. Amazon has not yet shipped my 1-meter SFF-8087 to SATA forward breakout cables, so I don't know if I'll receive them this week, despite it being a Prime order.
Well, the Rosewill rack rails suck. The way the brackets are made, combined with the design of the handles on the RSV-L4411, means the case can't go all the way in to be flush with the rack and the rack screws will not reach. I'll have to modify things to make it work. Makes me wish I had just saved up for a Supermicro 3U case.
I need cables for the IBM M1015 card. I don't think 18" cables will suffice for a super-clean installation in the Rosewill RSV-L4411 case, since the case is 25" deep. So I ordered a pair of 1-meter 3Ware CBL-SFF8087OCF-10M cables from Amazon. Once the IBM M1015 card and the cables arrive, I can finish the internal work on depot and hence be ready to install drives into the hot-swap bays. I may pull some of the 1TB drives from other machines and create a small zpool to test things. But my first production pool will be 6 drives in raidz2. I will later probably add a second 6-drive raidz2 pool, and use a Samsung 840 Pro Series SSD as my boot drive.
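When the drives arrive, creating that pool should be a one-liner along these lines (a sketch: the pool name and device names are placeholders, not what I'll necessarily use):

```shell
# Hypothetical pool and device names. A 6-drive raidz2 gives the
# capacity of 4 drives and survives any two simultaneous failures.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
```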
It's worth noting that the critical I/O bottleneck for this machine will be the gigabit ethernet. It won't be doing anything itself other than acting as a network storage server, mostly for backups of all of my other machines and maybe for audio and video storage.
I installed nut for UPS monitoring. Of course I haven't bought new batteries for either of my Powerware 5115 UPS units yet. So for now I'll just be using it in slave mode; depot will be plugged in to the Powerware 5125, whose master is www.
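Slave mode just means pointing upsmon at the master. The configuration is essentially one line; a sketch, with the UPS name, user, and password as placeholders (only the master host www comes from my setup):

```
# /usr/local/etc/nut/upsmon.conf on depot (the slave)
# MONITOR <upsname>@<masterhost> <powervalue> <user> <password> <type>
MONITOR pw5125@www 1 monuser mypass slave
```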
The motherboard has UEFI BIOS, and it looks like it's very speedy. That'll be nice for the occasions where I need to reboot. It will let me boot from a USB thumb drive (I checked), and it sees all of the RAM modules so at least I know they're all good as far as being recognized. The stock CPU cooler runs very quiet when the CPU is idle, which is good news. The power supply is very quiet at idle too. Not that it matters... I know the real noise from this machine will emanate from the hard drives and case fans.
The dd of the memstick image completed, and I put the USB thumb drive in one of depot's rear ports. It booted fine and I performed the installation of FreeBSD 9.1-RC3. Everything went smoothly. Note that I consider this whole activity temporary... the 600G VelociRaptor drive is much larger than I need for a boot drive. But this is a good test of running a boot drive from the ASMedia SATA ports so that I can use the H77 SATA ports for raidz2 drives.
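For reference, writing the memstick image goes something like this. The target device name is an assumption; check dmesg for the real one before writing, since dd will happily clobber a data disk:

```shell
# WARNING: of= must be the USB stick, not a data disk.
dd if=FreeBSD-9.1-RC3-amd64-memstick.img of=/dev/da8 bs=64k conv=sync
```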
I created a new kernel configuration (/sys/amd64/conf/depot) which just adds device coretemp to the GENERIC kernel. I want to be able to monitor CPU core temperatures.
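The config file itself is tiny, since a kernel config can include GENERIC and just layer changes on top. This is the shape of it; the ident string is my choice:

```
# /sys/amd64/conf/depot
include GENERIC
ident   depot

# CPU core temperature sensors, readable via sysctl dev.cpu.N.temperature
device  coretemp
```

Then it's the usual `make buildkernel KERNCONF=depot && make installkernel KERNCONF=depot` from /usr/src.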
So, I'm all set to do some testing once the new case arrives. Of course I don't have array drives yet. I'll be buying those piecemeal over the next few months.
The ASRock H77 Pro4-M motherboard, 32G of DDR3 1333 memory, the Seasonic S12II 430B power supply (430 watts), and a Western Digital WD6000HLHX 600G VelociRaptor hard drive arrived. All of these parts except the hard drive are for depot.
I installed the CPU and the memory in the ASRock H77 Pro4-M motherboard. I have not yet installed the CPU cooler, only because I'm not sure I'll use the stock one and the motherboard won't fit back in its box with the CPU cooler on it. No sense in installing the cooler until I have a case to put it in.
The StarTech 25U rack arrived via freight shipping today, and it appears to be intact. I'll know when I take it all out of the box and try to put it together.
I quickly put together the StarTech 25U rack. It's all good. The only indication that it's refurbished: a few paint scratches and chips on some of the aluminum braces. It's a non-issue for me since this isn't a piece of furniture, it's a tool. It's not going to be placed in living space, but it looks very good. More importantly, it's much more functional for my purposes than my existing full-height racks. The one drawback: I can't see moving it out of the home on the casters when it's loaded... they're not strong enough. It's also worth noting that the first rack space is not at the floor of the rack, it's about 1.5" above it. That means I probably can't use my Powerware 5125 on the floor of the rack with no rails. I'm calling this a non-issue since I have other options.
I've decided that I want the CSB HRL634WF2 batteries from atbatt.com for my Powerware 5115 UPS units. Free shipping for orders over $75, and these particular batteries are 9Ah, designed for long life and 260 full cycles.
I'm leaning toward an IBM M1015 card for ZFS. It will give me 8 SATA ports, and is readily available on eBay for cheap. To reflash the card to IT-mode, I can use the instructions here: http://lime-technology.com/forum/index.php?topic=12767.msg124393#msg124393
Once the new rack and server parts arrive, I'll basically be set for the next several years with respect to home computing, with the exception of an HTPC. I'll probably buy a Mac mini for my HTPC.