Due to a firmware problem in the Seagate IronWolf Pro 8TB drives that makes them incompatible with ZFS on FreeBSD, I returned them over the weekend and ordered a pair of Ultrastar DC HC510 10TB drives. I’ve had phenomenal results from Ultrastars in the past, and as near as I can tell they’ve always been very good enterprise-grade drives regardless of the owner (IBM, Hitachi, HGST, Western Digital). The Ultrastars arrived today, and I put them in the zfs1 pool:
# zpool list -v
NAME               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs1              16.3T  2.13T  14.2T        -         -    10%    13%  1.00x  ONLINE  -
  mirror          3.62T  1.53T  2.09T        -         -    29%    42%
    gpt/gpzfs1_0      -      -      -        -         -      -      -
    gpt/gpzfs1_1      -      -      -        -         -      -      -
  mirror          3.62T   609G  3.03T        -         -    19%    16%
    gpt/gpzfs1_2      -      -      -        -         -      -      -
    gpt/gpzfs1_3      -      -      -        -         -      -      -
  mirror          9.06T  1.32M  9.06T        -         -     0%     0%
    gpt/gpzfs1_4      -      -      -        -         -      -      -
    gpt/gpzfs1_5      -      -      -        -         -      -      -
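For the record, bringing the new pair in looked roughly like the sketch below: GPT-partition and label each disk, then add them to the pool as a new mirror vdev. The GPT labels match the listing above; the raw device names da4 and da5 are placeholders for whatever the Ultrastars actually attached as on this system.

# gpart create -s gpt da4          (da4/da5 are hypothetical device names)
# gpart add -t freebsd-zfs -l gpzfs1_4 -a 1m da4
# gpart create -s gpt da5
# gpart add -t freebsd-zfs -l gpzfs1_5 -a 1m da5
# zpool add zfs1 mirror gpt/gpzfs1_4 gpt/gpzfs1_5

Adding a vdev this way grows the pool immediately, which is why the new mirror shows essentially nothing allocated yet in the listing above.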
Everything seems good. Note that the scrub repair of 33.8G was due to me pulling the IronWolf drives from the chassis with the system live (after having removed them from the pool). This apparently caused a burp on the backplane, which was fully corrected by the scrub.
# zpool status
  pool: zfs1
 state: ONLINE
  scan: scrub repaired 33.8G in 0 days 04:43:10 with 0 errors on Sun Nov 10 01:45:59 2019
remove: Removal of vdev 2 copied 36.7G in 0h3m, completed on Thu Nov  7 21:26:09 2019
        111K memory used for removed device mappings
config:

        NAME              STATE     READ WRITE CKSUM
        zfs1              ONLINE       0     0     0
          mirror-0        ONLINE       0     0     0
            gpt/gpzfs1_0  ONLINE       0     0     0
            gpt/gpzfs1_1  ONLINE       0     0     0
          mirror-1        ONLINE       0     0     0
            gpt/gpzfs1_2  ONLINE       0     0     0
            gpt/gpzfs1_3  ONLINE       0     0     0
          mirror-3        ONLINE       0     0     0
            gpt/gpzfs1_4  ONLINE       0     0     0
            gpt/gpzfs1_5  ONLINE       0     0     0

errors: No known data errors
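For completeness, the sequence behind the "remove:" and "scan:" lines above was roughly: evacuate the old IronWolf mirror with device removal, then run a scrub once the backplane burp turned up. A hedged sketch, assuming the departed vdev was mirror-2 (consistent with the "Removal of vdev 2" note and the gap between mirror-1 and mirror-3):

# zpool remove zfs1 mirror-2       (mirror-2 assumed; check zpool status for the actual vdev name)
# zpool scrub zfs1
# zpool status zfs1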