Given the ignorance I’ve seen in some forums with respect to the need for 10 gigabit networking at home, I decided it was time to blog about it.
The blanket argument that no one at home needs 10 gigabit networking, as I've seen posted in discussions on arstechnica and other sites, is ignorant. The same arguments were made when we got 100baseT, and again when we got 1000baseT at consumer pricing. They were proven wrong then, and they'll be proven wrong for 10G, whether it's 10GbaseT, SFP+ DAC, SFP+ SR optics, or something else.
First, we should disassociate the WAN connection (say Comcast, Cox, Verizon, whatever) from the LAN. If you firmly believe that you don’t need LAN speeds that are higher than your WAN connection, I have to assume that you either a) do very little within your home that doesn’t use your WAN connection or b) just have no idea what the heck you’re talking about. If you’re in the first camp, you don’t need 10 gigabit LAN in your home. If you’re in the second camp, I can only encourage you to learn a bit more and use logic to determine your own needs. And stop telling others what they don’t need without listening to their unique requirements.
There are many of us with needs for 10 gigabit LAN at home. Let’s take my needs, for example, which I consider modest. I have two NAS boxes with ZFS arrays. One of these hosts some automated nightly backups, a few hundred movies (served via Plex) and some of my music collection. The second hosts additional automated nightly backups, TimeMachine instances and my source code repository (which is mirrored to the first NAS with ZFS incremental snapshots).
At the moment I have 7 machines that run automated backups over my LAN. I consider these backups critical to my sanity, and they’re well beyond what I can reasonably accomplish via a cloud storage service. With data caps and my outbound bandwidth constraints, nightly cloud backups aren’t an option. Fortunately, I am not in desperate need of offsite backups, and the truly critical stuff (like my source code repository) is mirrored in a lot of places and occasionally copied to DVD for offsite storage. I’m not sure what I’ll do the day my source code repository gets beyond what I can reasonably burn to DVD but it’ll be a long while before I get there (if ever). If I were to have a fire, I’d only need to grab my laptop on my way out the door in order to save my source code. Yes, I’d lose other things. But…
Fires are rare. I hope to never have one. Disk failures are a lot less rare. As are power supply failures, fan failures, etc. This is the main reason I use ZFS. But at 1 gigabit/second network speeds, the network is a bottleneck for even a lowly single 7200 rpm spinning drive doing sequential reads. A typical decent single SATA SSD will fairly easily reach 4 gigabits/second. Ditto for a small array of spinning drives. NVMe/M.2/multiple SSD/larger spinning drive array/etc. can easily go beyond 10 gigabits/second.
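The arithmetic here is easy to check. A rough sketch, using typical sustained sequential read speeds (the device numbers below are assumptions, not benchmarks of my hardware):

```python
# Compare sustained read throughput of common storage devices
# against a 1 gigabit/second network link.

GBIT = 1e9  # bits per second

# Rough sustained sequential read speeds, in bytes/second (assumed, typical):
devices = {
    "7200 rpm HDD": 180e6,   # ~180 MB/s
    "SATA SSD":     500e6,   # ~500 MB/s
    "NVMe SSD":     3000e6,  # ~3 GB/s
}

link_bits = 1 * GBIT  # 1 gigabit Ethernet

for name, byte_rate in devices.items():
    bit_rate = byte_rate * 8  # bytes/s -> bits/s
    ratio = bit_rate / link_bits
    print(f"{name}: {bit_rate / GBIT:.1f} Gbit/s ({ratio:.1f}x a 1 Gbit link)")
```

Even the lone spinning drive comes out above 1 gigabit/second; the SATA SSD lands right at the 4 gigabits/second mentioned above, and NVMe blows past 10.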
Why does this matter? When a backup kicks off and saturates a 1 gigabit/second network connection, that connection becomes a lot less usable for other things. I’d prefer the network connection not be saturated, and that the backup complete as quickly as possible. In other words, I want to be I/O bound in the storage subsystem, not bandwidth bound in the network. This becomes especially critical when I need to restore from a backup. Even if I have multiple instances of a service (which I do in some cases), there’s always one I consider ‘primary’ and want to restore as soon as possible. And if I’m restoring from backup due to a security breach (hasn’t happened in 10 years, knock on wood), I probably can’t trust any of my current instances and hence need a restore from backup RIGHT NOW, not hours later. The faster a restoration can occur (even if it’s just spinning up a VM snapshot), the sooner I can get back to doing real work.
Then there’s just the shuffling of data. Once in a while I mirror all of my movie files, just so I don’t have to re-rip a DVD or Blu-Ray. Some of those files are large, and a collection of them is very large. But I have solid state storage in all of my machines and arrays of spinning drives in my NAS machines. Should be fast to transfer the files, right? Not if your network is 1 gigabit/second… your average SATA SSD will be 75% idle while trying to push data through a 1 gigabit/second network, and NVMe/M.2/PCIe solid state will likely be more than 90% idle. In other words, wasting time. And time is money.
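The "75% idle" and "90% idle" figures fall out of the same kind of arithmetic. A quick sketch, assuming a best-case 1 GbE payload rate of roughly 118 MB/s (real-world is lower with protocol overhead) and the same assumed device speeds as before:

```python
# How idle is a fast drive when a 1 Gbit/s link is the bottleneck?

link_payload = 118e6   # bytes/s, optimistic 1 GbE payload rate (assumption)

drives = [
    ("SATA SSD", 500e6),    # ~500 MB/s sustained read (assumption)
    ("NVMe SSD", 3000e6),   # ~3 GB/s sustained read (assumption)
]

for name, rate in drives:
    idle = 1 - link_payload / rate  # fraction of drive bandwidth going unused
    print(f"{name}: ~{idle:.0%} idle while feeding a 1 Gbit link")
```

The SATA SSD sits around three-quarters idle, and the NVMe drive is nearly entirely idle, which is exactly the wasted time described above.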
So, some of us (many of us if you read servethehome.com) need 10 gigabit networking at home. And it's not ludicrously expensive anymore, and prices will continue to drop. While I got an exceptional deal on my Netgear S3300-52X-POE+ main switch ($300), I don't consider it a major exception. New 8-port 10GbaseT switches are here for under $1000, and SFP+ switches are half that price new (say a Ubiquiti ES-16-XG, which also has 4 10GbaseT ports). Or buy a Quanta LB6M for $300 and run SR optics.

Right now I have a pair of Mellanox ConnectX-2 EN cards for my web server and my main NAS, which I got for $17 each. Two 3-meter DAC cables for $15 each from fs.com connect these to the S3300-52X-POE+ switch's SFP+ ports. In my hackintosh desktop I have an Intel X540-T2 card, which is connected to one of the 10GbaseT ports on the S3300-52X-POE+ via cat6a shielded keystones and cables (yes, my patch panels are properly grounded). I will eventually change the X540-T2 to a less power-hungry card, but it works for now and it was $100. I expect to see more 10GbaseT price drops in 2018. And I hope to see more options for mixed SFP+ and 10GbaseT in switches.

We're already at the point where copper has become unwieldy, since cat6a (esp. shielded) and cat7 are thick, heavy cables. And cat8? Forget about running much of that, since it's a monster size-wise. At 10 gigabits/second, it already makes sense to run multimode fiber for EMI immunity, distance, raceway/conduit space, no code violations when co-resident with AC power feeds, etc. Beyond 10 gigabits/second, which we'll eventually want and need, I don't see copper as viable. Sure, copper has traditionally been easier to terminate than fiber. But that's partly because consumers couldn't afford or justify fiber, so it remained a non-consumer technology. Today it's easier to terminate fiber than it's ever been, and it gets easier all the time.
And once you're pulling a cat6a or cat8 cable, an OM4 fiber cable with a dual LC connector already attached will fit through almost the same spaces, with no field termination needed at all. That's the issue we're facing with copper. Much like the issues with CPU clock speeds, we're reaching the limits of what can reasonably be run on copper over typical distances in a home (where cable routes are often far from the shortest path from A to B). In a rack, SFP+ DAC (Direct Attach Copper) cables work well. But once you leave the rack and need to go through a few walls, the future is fiber. And it'll arrive in our homes faster than some people expect. Just check what it takes to send 4K raw video at 60fps. Or to backhaul an 802.11ac Wave 2 WiFi access point without creating a bottleneck on the wired network. Or the time required to send that 4TB full backup to your NAS.
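The 4K example is worth working through. A minimal sketch, assuming uncompressed 8-bit RGB (24 bits per pixel; real pipelines use different pixel formats, so treat this as an order-of-magnitude estimate):

```python
# Bandwidth needed for uncompressed 4K video at 60 frames/second,
# assuming 24 bits per pixel (8-bit RGB).

width, height = 3840, 2160
bits_per_pixel = 24
fps = 60

bits_per_second = width * height * bits_per_pixel * fps
print(f"Raw 4K60: {bits_per_second / 1e9:.1f} Gbit/s")
```

That works out to roughly 12 gigabits/second, i.e. beyond even a single 10 gigabit link before you add any overhead, and hopelessly beyond 1 gigabit.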
OK, I feel better. 🙂 I had to post about this because it’s just not true that no one needs 10 gigabit networking in their home. Some people do need it.
My time is valuable, as is yours. Make your own decisions about what makes sense for your own home network based on your own needs. If you don't have any pain points with your existing network, keep it! Networking technology is always cheaper next year than it is today. But if you can identify pain that's caused by bandwidth constraints on your 1 gigabit network, and the pain warrants an upgrade to 10 gigabit (even if only between 2 machines), by all means go for it! I don't know anyone who's ever regretted a well-considered network upgrade.
Note that this post came about partly due to some utter silliness I've seen posted online, including egregiously incorrect arithmetic. One of my favorites was from a poster on arstechnica who repeatedly (as in dozens of times) claimed that no one needed 10 gigabit ethernet at home because he could copy a 10 TB NAS to another NAS in 4 hours over a 1 gigabit connection. So be careful what you read on the Internet, especially if it involves numbers… it might be coming from someone with faulty arithmetic who has certainly never actually copied 10 terabytes of data over a 1 gigabit network in 4 hours (hint… it would take almost 24 hours with the network all to itself, and longer with other traffic on the link).
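The correct arithmetic takes four lines, assuming a perfectly dedicated link with zero protocol overhead (real transfers are slower):

```python
# How long does it take to move 10 TB over a dedicated 1 Gbit/s link?

data_bits = 10e12 * 8   # 10 TB expressed in bits
link = 1e9              # 1 Gbit/s, assuming zero overhead (best case)

seconds = data_bits / link
hours = seconds / 3600
print(f"{hours:.1f} hours")   # ~22.2 hours, not 4
```

Over 22 hours at absolute best case, more than five times the poster's claim.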
I'd be remiss if I didn't mention other uses for 10 gigabit ethernet. Does your household do a lot of gaming via Steam? You'd probably benefit from having a local Steam cache with 10 gigabit connectivity to the gaming machines. Are you running a bunch of Windows 10 instances? You can pull down updates to one machine and distribute them from there to all of your Windows 10 instances, and the faster, the better. Pretty much every scenario where you need to move large pieces of data over the network will benefit from 10 gigabit ethernet. You have to decide for yourself if the cost is justified.

In my case, I've installed the bare minimum (4 ports of 10 gigabit) that alleviates my existing pain points. At some point in the future I'll need more 10 gigabit ports, and as long as it's not in the next few months, it'll be less expensive than it is today. But if you could use it today, take a look at your inexpensive options. Mellanox ConnectX-2 EN cards are inexpensive on eBay, and even the newer cards aren't ludicrously expensive. If you only need 3 meters or less of distance, look at using SFP+ DAC cables. If you need more distance, look at using SR optical transceivers and fiber with Mellanox cards or an Intel X520-DA2 (or newer), or 10GbaseT (Intel X540-T2 or X540-T1 or newer, or a motherboard with on-board 10GbaseT). You have relatively inexpensive switch options if you're willing to buy used on eBay and only need a few ports at 10 gigabit, or if you're a techie willing to learn to use a Quanta LB6M and can put it somewhere it won't drive you crazy (it's loud).