
October 27, 2017

You may need 10 gigabit networking at home

by dwm — Categories: Computing

Given the ignorance I’ve seen in some forums with respect to the need for 10 gigabit networking at home, I decided it was time to blog about it.

The blanket argument that 10 gigabit networking at home is needless for ANYONE, as I’ve seen posted in discussions on arstechnica and other sites, is ignorant. The same arguments were made when we got 100baseT, and again when we got 1000baseT at consumer pricing. And they were proven wrong, just as they will be for 10G, whether it’s 10GbaseT, SFP+ DAC, SFP+ SR optics, or something else.

First, we should disassociate the WAN connection (say Comcast, Cox, Verizon, whatever) from the LAN. If you firmly believe that you don’t need LAN speeds that are higher than your WAN connection, I have to assume that you either a) do very little within your home that doesn’t use your WAN connection or b) just have no idea what the heck you’re talking about. If you’re in the first camp, you don’t need 10 gigabit LAN in your home. If you’re in the second camp, I can only encourage you to learn a bit more and use logic to determine your own needs. And stop telling others what they don’t need without listening to their unique requirements.

There are many of us with needs for 10 gigabit LAN at home. Let’s take my needs, for example, which I consider modest. I have two NAS boxes with ZFS arrays. One of these hosts some automated nightly backups, a few hundred movies (served via Plex) and some of my music collection. The second hosts additional automated nightly backups, Time Machine instances and my source code repository (which is mirrored to the first NAS with ZFS incremental snapshots).

At the moment I have 7 machines that run automated backups over my LAN. I consider these backups critical to my sanity, and they’re well beyond what I can reasonably accomplish via a cloud storage service. With data caps and my outbound bandwidth constraints, nightly cloud backups aren’t an option. Fortunately, I am not in desperate need of offsite backups, and the truly critical stuff (like my source code repository) is mirrored in a lot of places and occasionally copied to DVD for offsite storage. I’m not sure what I’ll do the day my source code repository gets beyond what I can reasonably burn to DVD, but it’ll be a long while before I get there (if ever). If I were to have a fire, I’d only need to grab my laptop on my way out the door in order to save my source code. Yes, I’d lose other things. But…

Fires are rare. I hope to never have one. Disk failures are a lot less rare. As are power supply failures, fan failures, etc. This is the main reason I use ZFS. But, at 1 gigabit/second network speeds, the network is a bottleneck for even a lowly single 7200 rpm spinning drive doing sequential reads. A typical decent single SATA SSD will fairly easily reach 4 gigabits/second. Ditto for a small array of spinning drives. NVMe/M.2/multiple SSD/larger spinning drive array/etc. can easily go beyond 10 gigabits/second.

Why does this matter? When a backup kicks off and saturates a 1 gigabit/second network connection, that connection becomes a lot less usable for other things. I’d prefer the network connection not be saturated, and that the backup complete as quickly as possible. In other words, I want to be I/O bound in the storage subsystem, not bandwidth bound in the network. This becomes especially critical when I need to restore from a backup. Even if I have multiple instances of a service (which I do in some cases), there’s always one I consider ‘primary’ and want to restore as soon as possible. And if I’m restoring from backup due to a security breach (hasn’t happened in 10 years, knock on wood), I probably can’t trust any of my current instances and hence need a restore from backup RIGHT NOW, not hours later. The faster a restoration can occur (even if it’s just spinning up a VM snapshot), the sooner I can get back to doing real work.

Then there’s just the shuffling of data. Once in a while I mirror all of my movie files, just so I don’t have to re-rip a DVD or Blu-Ray. Some of those files are large, and a collection of them is very large. But I have solid state storage in all of my machines and arrays of spinning drives in my NAS machines. Should be fast to transfer the files, right? Not if your network is 1 gigabit/second… your average SATA SSD will be 75% idle while trying to push data through a 1 gigabit/second network, and NVMe/M.2/PCIe solid state will likely be more than 90% idle. In other words, wasting time. And time is money.
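To put rough numbers on that (my assumptions: ~500 megabytes/second of sequential throughput for a decent SATA SSD, and ~125 megabytes/second of usable payload on a saturated 1 gigabit link):

1 gigabit/second          ≈ 125 megabytes/second of payload
SATA SSD sequential read  ≈ 500 megabytes/second
125 / 500 = 25% utilized, i.e. the SSD sits roughly 75% idle
a 40 GB movie mirror: 40,000 MB / 125 MB/s ≈ 320 seconds over 1 gigabit
                      40,000 MB / 500 MB/s ≈  80 seconds when disk-bound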

So, some of us (many of us if you read servethehome.com) need 10 gigabit networking at home. And it’s not ludicrously expensive anymore, and prices will continue to drop. While I got an exceptional deal on my Netgear S3300-52X-POE+ main switch ($300), I don’t consider it a major exception. New 8-port 10GbaseT switches are here for under $1000, and SFP+ switches are half that price new (say a Ubiquiti ES-16-XG, which also has four 10GbaseT ports). Or buy a Quanta LB6M for $300 and run SR optics.

Right now I have a pair of Mellanox ConnectX-2 EN cards for my web server and my main NAS, which I got for $17 each. Two 3-meter DAC cables for $15 each from fs.com connect these to the S3300-52X-POE+ switch’s SFP+ ports. In my hackintosh desktop I have an Intel X540-T2 card, which is connected to one of the 10GbaseT ports on the S3300-52X-POE+ via cat6a shielded keystones and cables (yes, my patch panels are properly grounded). I will eventually change the X540-T2 to a less power-hungry card, but it works for now and it was $100. I expect to see more 10GbaseT price drops in 2018, and I hope to see more options for mixed SFP+ and 10GbaseT in switches.

We’re already at the point where copper has become unwieldy, since cat6a (especially shielded) and cat7 are thick, heavy cables. And cat8? Forget about running much of that, since it’s a monster size-wise. At 10 gigabits/second, it already makes sense to run multimode fiber for EMI immunity, distance, raceway/conduit space, no code violations when co-resident with AC power feeds, etc. Beyond 10 gigabits/second, which we’ll eventually want and need, I don’t see copper as viable. Sure, copper has traditionally been easier to terminate than fiber, but in part that’s because the consumer couldn’t afford or justify fiber, and hence it remained a non-consumer technology. Today it’s easier to terminate fiber than it’s ever been, and it gets easier all the time. And once you’re pulling a cat6a or cat8 cable, you can almost fit an OM4 fiber cable with a dual LC connector on it through the same spaces, and not have to field terminate at all. That’s the issue we’re facing with copper: much like the issues with CPU clock speeds, we’re reaching the limits of what can reasonably be run on copper over typical distances in a home (where cable routes are often far from the shortest path from A to B). In a rack, SFP+ DAC (Direct Attach Copper) cables work well. But once you leave the rack and need to go through a few walls, the future is fiber. And it’ll arrive in our homes faster than some people expect. Just check what it takes to send 4K raw video at 60fps. Or to backhaul an 802.11ac Wave 2 WiFi access point without creating a bottleneck on the wired network. Or the time required to send that 4TB full backup to your NAS.

OK, I feel better. 🙂 I had to post about this because it’s just not true that no one needs 10 gigabit networking in their home. Some people do need it.

My time is valuable, as is yours. Make your own decisions about what makes sense for your own home network based on your own needs. If you don’t have any pain points with your existing network, keep it! Networking technology is always cheaper next year than it is today. But if you can identify pain that’s caused by bandwidth constraints on your 1 gigabit network, and the pain warrants an upgrade to 10 gigabit (even if only between 2 machines), by all means go for it! I don’t know anyone who’s ever regretted a network upgrade that was well considered.

Note that this post came about partly due to some utter silliness I’ve seen posted online, including egregiously incorrect arithmetic. One of my favorites was from a poster on arstechnica who repeatedly (as in dozens of times) claimed that no one needed 10 gigabit ethernet at home because he could copy a 10 TB NAS to another NAS in 4 hours over a 1 gigabit connection. So be careful what you read on the Internet, especially if it involves numbers… it might be coming from someone with faulty arithmetic who certainly hasn’t ever actually copied 10 terabytes of data over a 1 gigabit network in 4 hours (hint… it would take almost 24 hours if it has the network all to itself, longer if there’s other traffic on the link).
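For the record, the arithmetic:

10 TB = 80 terabits = 80,000,000 megabits
80,000,000 megabits / 1,000 megabits/second = 80,000 seconds ≈ 22.2 hours at 100% of line rate
allow ~5% for TCP/IP and ethernet framing overhead and you’re at roughly 23.4 hours
copying 10 TB in 4 hours would require a sustained ~5.6 gigabits/second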

I’d be remiss if I didn’t mention other uses for 10 gigabit ethernet. Does your household do a lot of gaming via Steam? You’d probably benefit from having a local Steam cache with 10 gigabit connectivity to the gaming machines. Are you running a bunch of Windows 10 instances? You can pull down updates to one machine and distribute them from there to all of your Windows 10 instances, and the faster, the better. Pretty much every scenario where you need to move large pieces of data over the network will benefit from 10 gigabit ethernet. You have to decide for yourself if the cost is justified. In my case, I’ve installed the bare minimum (4 ports of 10 gigabit) that alleviates my existing pain points. At some point in the future I’ll need more 10 gigabit ports, and as long as it’s not in the next few months, it’ll be less expensive than it is today.

But if you could use it today, take a look at your inexpensive options. Mellanox ConnectX-2 EN cards are inexpensive on eBay, and even the newer cards aren’t ludicrously expensive. If you only need 3 meters or less of distance, look at using SFP+ DAC cables. If you need more distance, look at using SR optical transceivers and fiber in Mellanox cards or an Intel X520-DA2 (or newer), or 10GbaseT (an Intel X540-T2 or X540-T1 or newer, or a motherboard with on-board 10GbaseT). You have relatively inexpensive switch options if you’re willing to buy used on eBay and only need a few ports at 10 gigabit, or you’re a techie willing to learn to use a Quanta LB6M and can put it somewhere where it won’t drive you crazy (it’s loud).

October 27, 2017

mcperf: a multithreaded bandwidth tester

I’ve been really dismayed by the lack of decent simple tools for testing the available bandwidth between a pair of hosts above 1 gigabit/second. Back when I didn’t have any 10 gigabit connections at home, I used iperf and iperf3. But I now have several 10 gigabit connections on my home network, and since these tools don’t use multithreading effectively, they become CPU bound (on a single core) before they reach the target bandwidth. Tools like ssh and scp have the same problem; they’re single threaded and become CPU bound long before they saturate a 10 gigabit connection.

When I install a 10 gigabit connection, whether it’s via SFP+ DACs, SFP+ SR optics or 10GbaseT, it’s important that I’m able to test the connection’s ability to sustain somewhere near line rate transfers end-to-end. Especially when I’m buying my DACs, transceivers or shielded cat6a patch cables from eBay or any truly inexpensive vendor. I needed a tool that could saturate a 10 gigabit connection and report the data transfer rate at the application level. Obviously due to the additional data for protocol headers and link encapsulation, this number will be lower than the link-level bandwidth, but it’s the number that ultimately matters for an application.

So, I quickly hacked together a multithreaded application to test my connections at home. It will spawn the requested number of threads (on each end) and the server will send data from each thread. Each thread gets its own TCP connection.
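To illustrate the idea, here’s a minimal sketch of the receive side (this is NOT the actual mcperf source, and it omits the libDwmAuth authentication): one TCP connection per thread, a shared byte counter, and a once-per-second aggregate bandwidth report.

// minimal sketch: N receiver threads, each with its own TCP connection
#include <arpa/inet.h>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>
#include <atomic>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <thread>
#include <vector>

static std::atomic<uint64_t>  g_bytes(0);

static void Receiver(const char *host, const char *port)
{
  struct addrinfo  hints, *res = nullptr;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family   = AF_UNSPEC;
  hints.ai_socktype = SOCK_STREAM;
  if (getaddrinfo(host, port, &hints, &res) != 0)
    return;
  int  fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
  if ((fd >= 0) && (connect(fd, res->ai_addr, res->ai_addrlen) == 0)) {
    char     buf[1 << 16];
    ssize_t  n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)   // count payload bytes
      g_bytes += n;
  }
  if (fd >= 0)
    close(fd);
  freeaddrinfo(res);
}

int main(int argc, char *argv[])
{
  if (argc < 3)
    return 1;                                      // usage: host port [threads]
  int  numThreads = (argc > 3) ? atoi(argv[3]) : 4;
  std::vector<std::thread>  threads;
  for (int i = 0; i < numThreads; ++i)
    threads.emplace_back(Receiver, argv[1], argv[2]);
  for (uint64_t last = 0; ; ) {                    // report until interrupted
    std::this_thread::sleep_for(std::chrono::seconds(1));
    uint64_t  now = g_bytes.load();
    printf("bandwidth: %.3f Gbits/sec\n", (now - last) * 8 / 1e9);
    last = now;
  }
  return 0;                                        // not reached
}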

For a quick hack, it works well.


dwm@www:/home/dwm% mcperf -t 4 -c kiva
bandwidth: 8.531 Gbits/sec
bandwidth: 8.922 Gbits/sec
bandwidth: 9.069 Gbits/sec
bandwidth: 9.148 Gbits/sec
bandwidth: 9.197 Gbits/sec
bandwidth: 9.230 Gbits/sec
bandwidth: 9.253 Gbits/sec
bandwidth: 9.269 Gbits/sec
bandwidth: 9.283 Gbits/sec

Given that I don’t create servers without strong authentication, even ones that will only run for 10 seconds, I’m using the PeerAuthenticator from libDwmAuth for authentication. There’s no encryption of the data being sent, since it’s not necessary here.

Of course this got me thinking about the number of tools we have today that just don’t cut it on a 10 gigabit network. ssh, scp, ftp, fetch, etc. Even NFS code has trouble saturating a 10 gigabit connection. It seems like eons ago that Herb Sutter wrote “The Free Lunch Is Over”. It was published in 2005. Yet we still have a bunch of tools that are CPU bound due to being single-threaded. How are we supposed to take full advantage of 10 gigabit and faster networks if the tools we use for file transfer, streaming, etc. are single-threaded and hence CPU bound well before they reach 10 gigabits/second? What happens when I run some fiber at home for NAS and want to run 40 gigabit or (egads!) 100 gigabit? It’s not as if I don’t have the CPU to do 40 gigabits/second; my NAS has 12 cores and 24 threads. But if an application is single-threaded, it becomes CPU bound at around 3.5 gigabits/second (roughly 437 megabytes/second) on a typical server CPU core. 🙁 Sure, that’s better than 1 gigabit/second, but it’s less than what a single SATA SSD can do, and much less than what NVMe/M.2/striped SATA SSDs et al. can do.

We need tools that aren’t written as if it’s 1999. I suspect that after I polish up mcperf a little bit, I’m going to work on my own replacement for scp so I can at least transfer files without being CPU bound at well below my network bandwidth.

May 22, 2017

short flurry of ssh login attempts blocked by mcblockd

mcblockd added quite a few networks during a 20-minute period today. I don’t have an explanation for the ssh login attempts all coming in during this period, but it’s nice to see that mcblockd happily blocked all of them.

While this is by no means a high rate of attempts, it’s higher than what I normally see.

May 22 11:32:10 ria mcblockd: [I] Added 185.129.60/22 (DK) to ssh_losers for 180 days
May 22 11:32:11 ria mcblockd: [I] Added 89.234.152/21 (FR) to ssh_losers for 180 days
May 22 11:32:45 ria mcblockd: [I] Added 46.233.0/18 (BG) to ssh_losers for 180 days
May 22 11:33:00 ria mcblockd: [I] Added 216.218.222/24 (US) to ssh_losers for 30 days
May 22 11:33:05 ria mcblockd: [I] Added 199.87.154/24 (CA) to ssh_losers for 30 days
May 22 11:33:15 ria mcblockd: [I] Added 78.109.16/20 (UA) to ssh_losers for 180 days
May 22 11:33:18 ria mcblockd: [I] Added 89.38.148/22 (FR) to ssh_losers for 180 days
May 22 11:33:26 ria mcblockd: [I] Added 65.19.167/24 (US) to ssh_losers for 30 days
May 22 11:34:05 ria mcblockd: [I] Added 62.212.64/19 (NL) to ssh_losers for 180 days
May 22 11:35:54 ria mcblockd: [I] Added 190.10.0/17 (CR) to ssh_losers for 180 days
May 22 11:37:16 ria mcblockd: [I] Added 192.42.116/22 (NL) to ssh_losers for 180 days
May 22 11:38:33 ria mcblockd: [I] Added 199.249.223/24 (US) to ssh_losers for 30 days
May 22 11:38:37 ria mcblockd: [I] Added 173.254.216/24 (US) to ssh_losers for 30 days
May 22 11:39:48 ria mcblockd: [I] Added 128.52.128/24 (US) to ssh_losers for 30 days
May 22 11:39:51 ria mcblockd: [I] Added 64.113.32/24 (US) to ssh_losers for 30 days
May 22 11:40:32 ria mcblockd: [I] Added 23.92.27/24 (US) to ssh_losers for 30 days
May 22 11:40:50 ria mcblockd: [I] Added 162.221.202/24 (CA) to ssh_losers for 30 days
May 22 11:42:42 ria mcblockd: [I] Added 91.213.8/24 (UA) to ssh_losers for 180 days
May 22 11:43:37 ria mcblockd: [I] Added 162.247.72/24 (US) to ssh_losers for 30 days
May 22 11:44:34 ria mcblockd: [I] Added 193.110.157/24 (NL) to ssh_losers for 180 days
May 22 11:44:38 ria mcblockd: [I] Added 128.127.104/23 (SE) to ssh_losers for 180 days
May 22 11:45:50 ria mcblockd: [I] Added 179.43.128/18 (CH) to ssh_losers for 180 days
May 22 11:45:55 ria mcblockd: [I] Added 89.144.0/18 (DE) to ssh_losers for 180 days
May 22 11:46:29 ria mcblockd: [I] Added 197.231.220/22 (LR) to ssh_losers for 180 days
May 22 11:46:44 ria mcblockd: [I] Added 195.254.132/22 (RO) to ssh_losers for 180 days
May 22 11:46:54 ria mcblockd: [I] Added 154.16.244/24 (US) to ssh_losers for 30 days
May 22 11:47:52 ria mcblockd: [I] Added 87.118.64/18 (DE) to ssh_losers for 180 days
May 22 11:48:51 ria mcblockd: [I] Added 46.165.224/19 (DE) to ssh_losers for 180 days
May 22 11:50:13 ria mcblockd: [I] Added 178.17.168/21 (MD) to ssh_losers for 180 days
May 22 11:50:47 ria mcblockd: [I] Added 31.41.216/21 (UA) to ssh_losers for 180 days
May 22 11:50:55 ria mcblockd: [I] Added 62.102.144/21 (SE) to ssh_losers for 180 days
May 22 11:51:19 ria mcblockd: [I] Added 64.137.244/24 (CA) to ssh_losers for 30 days
May 22 11:52:28 ria mcblockd: [I] Added 80.244.80/20 (SE) to ssh_losers for 180 days
May 22 11:52:42 ria mcblockd: [I] Added 192.160.102/24 (CA) to ssh_losers for 30 days
May 22 11:53:06 ria mcblockd: [I] Added 176.10.96/19 (CH) to ssh_losers for 180 days
May 22 11:55:38 ria mcblockd: [I] Added 77.248/14 (NL) to ssh_losers for 180 days
May 22 11:56:20 ria mcblockd: [I] Added 199.119.112/24 (US) to ssh_losers for 30 days
May 22 11:56:32 ria mcblockd: [I] Added 94.142.240/21 (NL) to ssh_losers for 180 days

May 10, 2017

China is a lousy netizen

There’s no one even close in terms of ssh login attempts. In a span of two weeks, mcblockd has blocked 47 million more addresses from China. That doesn’t mean I’ve seen 47 million IP addresses in login attempts. It means that China has a lot of address space being used to probe U.S. sites.

Brazil is in second place, but they’re behind by more than a decimal order of magnitude. Below are the current top two countries being blocked by mcblockd, by quantity of address space.

% mcblockc getactive ssh_losers

...

  Addresses covered per country:
    CN 149,911,680
      /10 networks:   10 (41,943,040 addresses)
      /11 networks:   21 (44,040,192 addresses)
      /12 networks:   38 (39,845,888 addresses)
      /13 networks:   26 (13,631,488 addresses)
      /14 networks:   23 (6,029,312 addresses)
      /15 networks:   26 (3,407,872 addresses)
      /16 networks:   14 (917,504 addresses)
      /17 networks:    4 (131,072 addresses)
      /18 networks:    1 (16,384 addresses)
      /19 networks:    1 (8,192 addresses)
      /21 networks:    2 (4,096 addresses)
      /22 networks:    2 (2,048 addresses)
      /25 networks:    1 (128 addresses)
    BR 14,170,112
      /10 networks:    1 (4,194,304 addresses)
      /11 networks:    3 (6,291,456 addresses)
      /12 networks:    1 (1,048,576 addresses)
      /13 networks:    3 (1,572,864 addresses)
      /14 networks:    3 (786,432 addresses)
      /15 networks:    1 (131,072 addresses)
      /17 networks:    2 (65,536 addresses)
      /18 networks:    1 (16,384 addresses)
      /19 networks:    5 (40,960 addresses)
      /20 networks:    2 (8,192 addresses)
      /21 networks:    5 (10,240 addresses)
      /22 networks:    4 (4,096 addresses)

I seriously doubt that Chinese citizens have anything to do with these attempts. I’m told that the Great Firewall blocks most ssh traffic on port 22. Not to mention that China’s Internet connectivity is somewhere near 95th in the world in terms of available bandwidth, so it’d be terribly painful for an ordinary user to use ssh or scp from China to my gateway. I think I can assume this is all government-sponsored probing.

April 26, 2017

mcblockd has been busy

The mcblockd automation has been running for roughly one week. It’s been fairly busy automatically blocking those trying to crack my ssh server. Below is some of the output from a query of the active blocked networks (the summary information for the top 10 countries by the number of addresses being blocked). Interesting to note that the automation has blocked a huge swath of addresses from China. State-sponsored cyberattacks?

% mcblockc getactive ssh_losers

...

  Addresses covered per country:
    CN 102,263,808
      /10 networks:    8 (33,554,432 addresses)
      /11 networks:   17 (35,651,584 addresses)
      /12 networks:   21 (22,020,096 addresses)
      /13 networks:   11 (5,767,168 addresses)
      /14 networks:   14 (3,670,016 addresses)
      /15 networks:    9 (1,179,648 addresses)
      /16 networks:    6 (393,216 addresses)
      /18 networks:    1 (16,384 addresses)
      /19 networks:    1 (8,192 addresses)
      /21 networks:    1 (2,048 addresses)
      /22 networks:    1 (1,024 addresses)
    KR 7,864,320
      /10 networks:    1 (4,194,304 addresses)
      /11 networks:    1 (2,097,152 addresses)
      /12 networks:    1 (1,048,576 addresses)
      /13 networks:    1 (524,288 addresses)
    IN 7,340,032
      /10 networks:    1 (4,194,304 addresses)
      /12 networks:    2 (2,097,152 addresses)
      /13 networks:    1 (524,288 addresses)
      /14 networks:    2 (524,288 addresses)
    BR 7,252,992
      /11 networks:    3 (6,291,456 addresses)
      /13 networks:    1 (524,288 addresses)
      /14 networks:    1 (262,144 addresses)
      /15 networks:    1 (131,072 addresses)
      /17 networks:    1 (32,768 addresses)
      /19 networks:    1 (8,192 addresses)
      /21 networks:    1 (2,048 addresses)
      /22 networks:    1 (1,024 addresses)
    FR 6,782,976
      /10 networks:    1 (4,194,304 addresses)
      /11 networks:    1 (2,097,152 addresses)
      /15 networks:    1 (131,072 addresses)
      /16 networks:    5 (327,680 addresses)
      /18 networks:    2 (32,768 addresses)
    AR 4,524,032
      /12 networks:    1 (1,048,576 addresses)
      /13 networks:    2 (1,048,576 addresses)
      /14 networks:    8 (2,097,152 addresses)
      /15 networks:    2 (262,144 addresses)
      /16 networks:    1 (65,536 addresses)
      /21 networks:    1 (2,048 addresses)
    JP 4,227,072
      /10 networks:    1 (4,194,304 addresses)
      /17 networks:    1 (32,768 addresses)
    RU 3,484,672
      /13 networks:    2 (1,048,576 addresses)
      /14 networks:    5 (1,310,720 addresses)
      /15 networks:    6 (786,432 addresses)
      /16 networks:    2 (131,072 addresses)
      /17 networks:    4 (131,072 addresses)
      /18 networks:    2 (32,768 addresses)
      /19 networks:    5 (40,960 addresses)
      /22 networks:    3 (3,072 addresses)
    IT 3,280,896
      /11 networks:    1 (2,097,152 addresses)
      /12 networks:    1 (1,048,576 addresses)
      /15 networks:    1 (131,072 addresses)
      /20 networks:    1 (4,096 addresses)
    TW 2,637,824
      /12 networks:    2 (2,097,152 addresses)
      /13 networks:    1 (524,288 addresses)
      /18 networks:    1 (16,384 addresses)

...

April 26, 2017

Looking at ‘Synners’ (TCP SYN data)

One of the many sets of data I collect with mcflow on my gateway is traffic counters for TCP SYN packets I receive but do not SYN ACK. I keep the source IP address, the destination port, and of course timestamps and counters. This type of data generally represents one of three things: probing for vulnerable services which I don’t run, probing for services I do run but block from offenders, or probing for botnet-controlled devices.
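For the curious, here’s one way such data could be collected. This is a hedged sketch using libpcap (not mcflow itself, which does much more): it remembers inbound bare SYNs and forgets a flow once a SYN ACK answers it, so whatever remains in the table was never SYN ACKed. It assumes plain Ethernet framing and IPv4, and “em0” is a placeholder interface; a real collector would also keep timestamps and byte counters and periodically flush the table.

#include <pcap.h>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <map>
#include <tuple>

// key: src addr, dst addr, src port, dst port (network byte order)
typedef std::tuple<uint32_t,uint32_t,uint16_t,uint16_t>  FlowKey;

static std::map<FlowKey,uint64_t>  g_unanswered;

static void Callback(u_char *, const struct pcap_pkthdr *, const u_char *pkt)
{
  const u_char  *ip  = pkt + 14;                  // skip Ethernet header
  const u_char  *tcp = ip + ((ip[0] & 0x0f) * 4); // skip IPv4 header (IHL)
  uint32_t  src, dst;
  uint16_t  sport, dport;
  memcpy(&src, ip + 12, 4);    memcpy(&dst, ip + 16, 4);
  memcpy(&sport, tcp, 2);      memcpy(&dport, tcp + 2, 2);
  uint8_t  flags = tcp[13];
  if ((flags & 0x12) == 0x02)                     // bare SYN: remember it
    ++g_unanswered[FlowKey(src,dst,sport,dport)];
  else if ((flags & 0x12) == 0x12)                // SYN ACK answers the reverse flow
    g_unanswered.erase(FlowKey(dst,src,dport,sport));
}

int main()
{
  char     errbuf[PCAP_ERRBUF_SIZE];
  pcap_t  *p = pcap_open_live("em0", 96, 0, 1000, errbuf);
  if (! p) {
    fprintf(stderr, "%s\n", errbuf);
    return 1;
  }
  struct bpf_program  fp;
  // only SYNs and SYN ACKs ever reach the callback
  pcap_compile(p, &fp, "(tcp[13] & 0x12) == 0x02 or (tcp[13] & 0x12) == 0x12",
               1, PCAP_NETMASK_UNKNOWN);
  pcap_setfilter(p, &fp);
  pcap_loop(p, -1, Callback, nullptr);            // until interrupted
  return 0;
}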

The table below shows the top 10 ports for the current week. In the case of ssh and http, I do run those services but mcblockd automatically blocks those who violate my configured policies. I do not run a telnet server anywhere (my IoT devices are of my own design and use ECDH, 2048-bit RSA keys and AES128). I also do not run MS SQL Server or rdp (Remote Desktop). I have no Windows hosts, and if I did, I certainly wouldn’t expose MS SQL Server or Remote Desktop.

Ports 7547 and 5358 are known to be used by Mirai and its descendants. Port 7547 is also the port commonly used by broadband ISPs for TR-069 (CWMP) management of home routers, and a TR-064 flaw reachable on that port is what the Mirai variants exploit.

Port              Packets    Bytes
22 (ssh)            22116  1168688
23 (telnet)          3740   152784
80 (http)            1601    99216
1433 (ms-sql-s)      1279    52288
81                    917    38016
7547                  515    20620
3389 (rdp)            199     8792
5358                  195     8148
2323                  181     7384
8080                  154     6700

Below is a table showing the SYNs I didn’t SYN ACK by country. This is just the top 10. Note that the top two have large swaths of their IP address space automatically blocked by mcblockd for violating my configured policies. They’re also known state sponsors of cyberattacks, and the evidence is pretty clear here. Much (but not all) of the US stuff is research scanning.

Country                    Packets   Bytes
RU (Russian Federation)      17394  864024
CN (China)                    6038  319116
US (United States)            3077  169932
NL (Netherlands)              1160   47580
TH (Thailand)                  603   33480
UA (Ukraine)                   467   20612
KR (Korea)                     462   19380
BR (Brazil)                    426   18708
FR (France)                    341   17828
TR (Turkey)                    281   11756

What is perhaps interesting about this data: the lines drawn during WWII and the Cold War don’t appear to have changed. I find this very sad. I’m just a tiny single user running a very modest home network, yet I’m a target of Russia and China. And my network is likely much more secure than the average home network. I assume this means that all of us are being probed all of the time, and some of us are probably regularly compromised. I think we (meaning the entire industry) need to consider completely banning telnet and doing something real about securing IoT devices.

April 20, 2017

mcblockd automation progress

So far, so good. Nice to see this in the logs while I’m working on updates to mcblockd. This shows lines from my auth.log with the corresponding actions invoked in mcblockd. The key takeaway: nearly instantaneous response to login attempts from countries where I have the policy set to low tolerance, and the expected response for “US” networks where I have the tolerance set a little higher.

The way this works…

A mcblocklog process receives all auth.log entries via a pipe from syslogd. It uses a list of regular expressions (in a plain text file) to match offending lines in the log, then posts matched IP addresses to mcblockd as ‘logHit’ requests. Unlike my previous setup, which periodically parsed entire logs, this happens in real time. mcblockd asks dwmrdapd for prefix and country information, then applies the configured policy. Depending on the policy for the network, mcblockd may instantly add an entry to its database and the pf table, or wait for the policy to be violated (a number of hits over a configured time period). For foreign countries, I have the policy set to trigger on a single offending line, hence mcblockd will immediately add an entry to the pf table. For the U.S., I have the policy set to 5 hits in 7 days. These are experimental settings at the moment; it’s likely I’ll change them.
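For illustration only, here’s a hedged sketch of that pipeline, with the authenticated ‘logHit’ request to mcblockd reduced to a stub (hypothetical code, not the real mcblocklog): read log lines from stdin, apply regexps loaded from a plain text file, and emit each offending IP.

#include <fstream>
#include <iostream>
#include <regex>
#include <string>
#include <vector>

// stand-in for the authenticated 'logHit' request to mcblockd
static void PostLogHit(const std::string & ipAddr)
{
  std::cout << "logHit " << ipAddr << '\n';
}

int main(int argc, char *argv[])
{
  // one pattern per line, each capturing the offending IP in group 1,
  // e.g.:  Invalid user \S+ from (\d{1,3}(?:\.\d{1,3}){3})
  std::vector<std::regex>  patterns;
  std::ifstream  pf((argc > 1) ? argv[1] : "patterns.txt");
  for (std::string line; std::getline(pf, line); )
    if (! line.empty())
      patterns.emplace_back(line);

  for (std::string line; std::getline(std::cin, line); ) {
    std::smatch  m;
    for (const auto & rgx : patterns) {
      if (std::regex_search(line, m, rgx) && (m.size() > 1)) {
        PostLogHit(m[1].str());
        break;
      }
    }
  }
  return 0;
}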

Also part of the configured policy is how long an entry will live in the pf tables, in days. For countries which have no business connecting to my network, the expiration is set much longer than it is for my own country. This is a commonly desired feature in an IPS (Intrusion Prevention System). Another part of the policy is a ‘widest mask’ setting, which lets me avoid blocking huge swaths of address space from a country to which I want to grant a bit of leniency (say the U.S. and Canada in my case).
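To make that concrete, the policy pieces described above amount to something like the following (hypothetical syntax for illustration; this is not mcblockd’s actual configuration format):

# hypothetical illustration, not mcblockd's real config syntax
policy US {
  hits        5        # trigger after 5 hits...
  window      7d       # ...within 7 days
  expiration  30d      # entry lives 30 days in the pf table
  widestMask  /24      # leniency: never block wider than a /24
}
policy default {       # low tolerance for everyone else
  hits        1        # a single offending log line triggers
  expiration  180d
  widestMask  /10
}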

Probably worth noting that if an address is already covered in the pf tables, mcblockd does nothing.

Also worth noting that the service is secured with libDwmAuth, using ECDH and 2048-bit RSA keys during authentication, then AES128 in GCM mode after authentication.

While the log entries below are for ssh, I have a similar process for web logs and mail server logs.

Apr 19 05:33:30 ria sshd[7695]: error: maximum authentication attempts exceeded
                    for root from 81.100.183.189 port 43973 ssh2 [preauth]
Apr 19 05:33:30 ria mcblockd[1854]: [I] Added 81.96/12 (GB) to ssh_losers

Apr 19 06:09:50 ria sshd[7752]: error: maximum authentication attempts exceeded
                    for root from 36.36.254.10 port 60635 ssh2 [preauth]
Apr 19 06:09:50 ria mcblockd[1854]: [I] Added 36.36/16 (CN) to ssh_losers

Apr 19 09:22:37 ria sshd[8123]: error: maximum authentication attempts exceeded
                    for root from 123.96.0.151 port 60583 ssh2 [preauth]
Apr 19 09:22:37 ria mcblockd[1854]: [I] Added 123.96/15 (CN) to ssh_losers

Apr 19 09:29:38 ria sshd[8129]: Did not receive identification string from 34.205.143.181
Apr 19 09:29:43 ria sshd[8130]: Invalid user support from 34.205.143.181
Apr 19 09:29:43 ria sshd[8130]: Postponed keyboard-interactive for invalid user
                    support from 34.205.143.181 port 53145 ssh2 [preauth]
Apr 19 09:29:43 ria sshd[8130]: error: PAM: authentication error for illegal user
                    support from 34.205.143.181
Apr 19 09:29:43 ria sshd[8130]: Failed keyboard-interactive/pam for invalid user
                    support from 34.205.143.181 port 53145 ssh2
Apr 19 09:29:44 ria mcblockd[1854]: [I] Added 34.205.143/24 (US) to ssh_losers

Apr 19 14:11:40 ria sshd[8666]: error: maximum authentication attempts exceeded
                    for root from 200.73.205.204 port 45585 ssh2 [preauth]
Apr 19 14:11:40 ria mcblockd[1854]: [I] Added 200.73.200/21 (EC) to ssh_losers

Apr 19 14:51:48 ria sshd[9272]: Invalid user admin from 77.39.72.192
Apr 19 14:51:48 ria mcblockd[1854]: [I] Added 77.39.0/17 (RU) to ssh_losers

Apr 19 15:31:18 ria sshd[17218]: Invalid user admin from 193.105.134.184
Apr 19 15:31:18 ria mcblockd[1854]: [I] Added 193.105.134/24 (SE) to ssh_losers

Apr 19 15:34:02 ria sshd[18020]: error: maximum authentication attempts exceeded
                    for root from 85.90.198.244 port 44202 ssh2 [preauth]
Apr 19 15:34:02 ria mcblockd[31598]: [I] Added 85.90.192/19 (UA) to ssh_losers

Apr 19 15:58:13 ria sshd[23696]: error: maximum authentication attempts exceeded
                    for root from 156.213.133.233 port 58400 ssh2 [preauth]
Apr 19 15:58:13 ria mcblockd[31598]: [I] Added 156.192/11 (EG) to ssh_losers

Apr 19 16:04:49 ria sshd[23785]: error: maximum authentication attempts exceeded
                    for root from 171.50.175.114 port 46884 ssh2 [preauth]
Apr 19 16:04:49 ria mcblockd[31598]: [I] Added 171.48/12 (IN) to ssh_losers

Apr 19 16:39:23 ria sshd[23858]: Invalid user support from 181.211.93.159
Apr 19 16:39:23 ria mcblockd[31598]: [I] Added 181.211/16 (EC) to ssh_losers

Apr 19 16:59:10 ria sshd[23914]: Did not receive identification string from 
                    128.40.46.124
Apr 19 16:59:10 ria mcblockd[31598]: [I] Added 128.40/15 (GB) to ssh_losers

Apr 19 18:19:24 ria sshd[24599]: error: maximum authentication attempts exceeded
                    for root from 178.216.100.130 port 52035 ssh2 [preauth]
Apr 19 18:19:24 ria mcblockd[31598]: [I] Added 178.216.96/21 (UA) to ssh_losers

Apr 19 19:21:43 ria sshd[24873]: Invalid user admin from 200.121.233.88
Apr 19 19:21:43 ria mcblockd[31598]: [I] Added 200.121/16 (PE) to ssh_losers

Apr 19 23:12:25 ria sshd[30989]: error: maximum authentication attempts exceeded
                    for root from 131.161.55.11 port 42822 ssh2 [preauth]
Apr 19 23:12:25 ria mcblockd[31598]: [I] Added 131.161.52/22 (HN) to ssh_losers

Apr 20 00:08:10 ria sshd[31282]: error: maximum authentication attempts exceeded
                    for root from 167.250.75.214 port 4837 ssh2 [preauth]
Apr 20 00:08:10 ria mcblockd[31598]: [I] Added 167.250.72/22 (BR) to ssh_losers

Apr 20 00:22:31 ria sshd[31674]: Did not receive identification string from
                    218.93.17.146
Apr 20 00:22:31 ria mcblockd[31598]: [I] Added 218.64/11 (CN) to ssh_losers

Apr 20 00:25:41 ria sshd[31691]: Invalid user admin from 60.178.126.100
Apr 20 00:25:41 ria mcblockd[31598]: [I] Added 60.160/11 (CN) to ssh_losers

Apr 20 00:38:12 ria sshd[31715]: Invalid user ubnt from 119.191.105.117
Apr 20 00:38:12 ria mcblockd[31598]: [I] Added 119.176/12 (CN) to ssh_losers

Apr 20 00:45:53 ria sshd[31733]: Invalid user admin from 123.170.99.10
Apr 20 00:45:53 ria mcblockd[31598]: [I] Added 123.160/12 (CN) to ssh_losers

Apr 20 01:39:27 ria sshd[31845]: error: maximum authentication attempts exceeded
                    for root from 119.193.140.196 port 60716 ssh2 [preauth]
Apr 20 01:39:27 ria mcblockd[31598]: [I] Added 119.192/11 (KR) to ssh_losers

April 20, 2017

dwmrdapd nearing production-ready: RDAP cache for IDS/IPS applications

I’ve been working on a new IP to country mapping service to be used by my IDS/IPS tools. This post is about the server portion, named dwmrdapd.

dwmrdapd provides a simple service to map an IP address to its registered prefix (in one of the NICs, i.e. ARIN, RIPE, AFRINIC, LACNIC, APNIC) and its registered country. It maintains a small custom database of the mappings in order to provide a quick response to most queries. When an entry is not found in the database, or the requested entry is more than 30 days old, dwmrdapd will make a new RDAP query to the RDAP server of the corresponding NIC (Network Information Center).
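In sketch form, the lookup logic amounts to something like this (hypothetical names and stubs, not the actual implementation): serve from the local database unless the entry is missing or stale, in which case refresh it via RDAP first.

#include <ctime>
#include <string>

struct PrefixEntry {
  std::string  prefix;       // e.g. "35.1/16"
  std::string  country;      // e.g. "US"
  time_t       lastUpdated;
};

// trivial stubs standing in for the database and the per-NIC RDAP client
static bool Lookup(const std::string &, PrefixEntry &)     { return false; }
static bool RdapQuery(const std::string &, PrefixEntry &)  { return false; }
static void Store(const PrefixEntry &)                     { }

static const time_t  k30Days = 30 * 24 * 3600;

bool GetMapping(const std::string & addr, PrefixEntry & entry)
{
  if (Lookup(addr, entry) && ((time(0) - entry.lastUpdated) < k30Days))
    return true;                    // fresh entry in the local database
  if (RdapQuery(addr, entry)) {     // else ask the owning NIC's RDAP server
    entry.lastUpdated = time(0);
    Store(entry);                   // cache for subsequent queries
    return true;
  }
  return false;
}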

I’ve been using the service to apply policy to the networks automatically blocked by my firewall. As of this week, I can call it near production-ready.

Most of the trickery in implementing this service revolved around dealing with ARIN’s poor RDAP service. The first problem was that they pad IP octets with leading zeros in startAddress and endAddress, which leads all of the standard string-to-address functions to interpret the numbers as octal. That was relatively easy to handle with a simple regular expression fix. The second problem is that ARIN doesn’t populate the country value. Why, I don’t know. The workaround is to parse all of the vcardArrays for a card with an adr label, then parse the label looking for a country name, then map that country name to a 2-letter country code. The latest version of dwmrdapd does this, but it’s still a bit hokey. Some ARIN RDAP responses contain many vcard entries, with different countries. There doesn’t seem to be a science to the entries, hence I prioritize non-U.S. cards and fall back to “US” as the country code only as a last resort.
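For illustration, a regular expression fix along those lines (my assumption of the shape of it, not necessarily the exact expression dwmrdapd uses): strip leading zeros from each octet so a value like “035.001.010.000” can’t be read as octal by inet_aton() and friends.

#include <iostream>
#include <regex>
#include <string>

static std::string NormalizeOctets(const std::string & addr)
{
  // at the start of each octet, drop zeros that precede another digit
  static const std::regex  rgx("(^|\\.)0+(\\d)");
  return std::regex_replace(addr, rgx, "$1$2");
}

int main()
{
  std::cout << NormalizeOctets("035.001.010.000") << '\n';  // 35.1.10.0
  return 0;
}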

The service itself is secured with libDwmAuth using ECDH, RSA 2048-bit keys and AES128 in GCM mode once authentication is complete. Key management is very similar to that used by ssh, which makes it easy for me to use on my local hosts.

Inside the encryption is just simple JSON. Example output from the simple client:

% dwmrdapc 35.1.1.1
[
   {
      "country" : "US",
      "countryName" : "United States of America",
      "ipv4addr" : "35.1.1.1",
      "lastChanged" : "2014-09-23 18:00",
      "lastUpdated" : "2017-04-20 15:26",
      "prefix" : "35.1/16"
   }
]

This isn’t exactly a new kind of service. Going back to the late 1990s, we’ve had IP geolocation services. But I wanted something free, tightly secured, and automatically updated on an on-demand basis. I also wanted something small data-wise; I don’t need latitude/longitude, etc. And I also wanted to take a look at the RDAP services from the NICs.

I did look at some other freely available sources of data, one of them being ipdeny.com. While their data is useful for bootstrapping (and I have a program to bootstrap dwmrdapd’s initial database from their country ‘zone files’), I’ve found it lacking in correctness. Possibly through no fault of their own: NIC data is messy, especially if you’re fetching it via WHOIS, but even the RDAP data can be very sloppy (cough, ARIN, cough) or abysmally slow (LACNIC).

There are also the IRR (Internet Routing Registry) datasets, but they’re not uniform and there’s less participation than some of us would like to see.

April 7, 2017

Refactoring and adding to libDwmAuth

I’ve been working on some changes and additions to libDwmAuth.

I had started a round of changes to the behind-the-scenes parts of the highest-level APIs to make managing authorized users and MitM prevention easier. However, in the end I felt like I was following the wrong course because my first solution involved too many round trips between client and server and some significant key generation overhead since I was using ephemeral 2048-bit RSA keys.

I’m now using ECDH for the first step. I have a working implementation with unit tests, using Crypto++. Unfortunately I’m still waiting for curve25519 to show up in Crypto++, but in the meantime I’m using secp256r1 despite its vulnerabilities.
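Here’s a minimal Crypto++ sketch of the ECDH agreement step, under the assumption that it follows the textbook flow (this is not the libDwmAuth implementation): both sides generate an ephemeral key pair on secp256r1, exchange public values, and derive the same shared secret.

#include <cryptopp/eccrypto.h>
#include <cryptopp/oids.h>
#include <cryptopp/osrng.h>
#include <cryptopp/secblock.h>
#include <cassert>

using namespace CryptoPP;

int main()
{
  AutoSeededRandomPool  rng;
  ECDH<ECP>::Domain  client(ASN1::secp256r1()), server(ASN1::secp256r1());

  SecByteBlock  clientPriv(client.PrivateKeyLength()), clientPub(client.PublicKeyLength());
  SecByteBlock  serverPriv(server.PrivateKeyLength()), serverPub(server.PublicKeyLength());
  client.GenerateKeyPair(rng, clientPriv, clientPub);
  server.GenerateKeyPair(rng, serverPriv, serverPub);

  // each side sends its public value to the other, then...
  SecByteBlock  clientShared(client.AgreedValueLength());
  SecByteBlock  serverShared(server.AgreedValueLength());
  client.Agree(clientShared, clientPriv, serverPub);
  server.Agree(serverShared, serverPriv, clientPub);
  assert(clientShared == serverShared);   // same secret on both ends
  return 0;
}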

I also have a rudimentary scheme for MitM prevention that is very similar to that used by OpenSSH, and client and server authentication based on RSA keys (2048 bits at the moment). I have a known_services file that’s similar to OpenSSH’s known_hosts, and an authorized_keys file that’s similar to the same for OpenSSH. This allows fairly easy management on both client and server side for my applications.

Obviously I also have a public/private key generator application.

August 21, 2015

mcblock examples

I recently wrote about mcblock, a new utility I created to help manage my pf rules. Thus far the most useful part has been the automation of rule addition by grokking logs.

For example, it can parse auth.log on FreeBSD and automatically add entries to my pf rule database. And before adding the entries, it can show you what it would do. For example:

# bzcat /var/log/auth.log.0.bz2 | mcblock -O - 
109.24.194.41        194 hits
  add 109.24.194/24 30 days
103.25.133.151         3 hits
  add 103.25.133/24 30 days
210.151.42.215         3 hits
  add 210.151.42/24 30 days

What I’ve done here is uncompress auth.log.0.bz2 to stdout and pipe it to mcblock to see what it would do. mcblock shows that it would add three entries to my pf rule database, each with an expiration 30 days in the future. I can change the number of days with the -d command line option:

# bzcat /var/log/auth.log.0.bz2 | mcblock -d 60 -O -
109.24.194.41        194 hits
  add 109.24.194/24 60 days
103.25.133.151         3 hits
  add 103.25.133/24 60 days
210.151.42.215         3 hits
  add 210.151.42/24 60 days

By default, mcblock uses a threshold of 3 entries from a given offending IP address in a log file. This can be changed with the -t argument:

# bzcat /var/log/auth.log.0.bz2 |  mcblock -t 1 -O - 
109.24.194.41        194 hits
  add 109.24.194/24 30 days
103.25.133.151         3 hits
  add 103.25.133/24 30 days
210.151.42.215         3 hits
  add 210.151.42/24 30 days
31.44.244.11           2 hits
  add 31.44.244/24 30 days

If I’m happy with these actions, I can tell mcblock to execute them:

# bzcat /var/log/auth.log.0.bz2 | mcblock -t 1 -A -

And then look at one of the entries it added:

# mcblock -s 31.44.244/24
31.44.244.0/24     2015/08/21 - 2015/09/20

This particular address space happens to be from Russia, and is allocated as a /23. So let’s add the /23:

# mcblock -a 31.44.244/23

And then see what entries would match 31.44.244.11:

# mcblock -s 31.44.244.11
31.44.244.0/23     2015/08/21 - 2015/09/20

The /24 was replaced by a /23. Let’s edit this entry to add the registry and the country, and extend the time period:

# mcblock -e 31.44.244/23
start time [2015/08/21 04:37]: 
end time [2015/09/20 04:37]: 2016/02/21 04:37
registry []: RIPE
country []: RU
Entry updated.

And view again:

# mcblock -s 31.44.244.11
31.44.244.0/23     2015/08/21 - 2016/02/21 RIPE     RU