Refactoring and adding to libDwmAuth

I’ve been working on some changes and additions to libDwmAuth.

I had started a round of changes to the behind-the-scenes parts of the highest-level APIs to make managing authorized users and MitM prevention easier. In the end, though, I felt I was on the wrong course: my first solution involved too many round trips between client and server, and significant key generation overhead since I was using ephemeral 2048-bit RSA keys.

I’m now using ECDH for the first step, and have a working implementation with unit tests, using Crypto++. Unfortunately I’m still waiting for curve25519 to show up in Crypto++, so in the meantime I’m using secp256r1 despite the concerns surrounding it.
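
Though the libDwmAuth internals aren't shown here, a minimal sketch of ECDH agreement with Crypto++ over secp256r1 looks roughly like this (a standalone illustration, not my actual code):

#include <cassert>
#include <cryptopp/eccrypto.h>
#include <cryptopp/oids.h>
#include <cryptopp/osrng.h>

using namespace CryptoPP;

int main()
{
  AutoSeededRandomPool  rng;
  ECDH<ECP>::Domain     dh(ASN1::secp256r1());

  //  Each side generates an ephemeral keypair.
  SecByteBlock  cliPriv(dh.PrivateKeyLength()), cliPub(dh.PublicKeyLength());
  SecByteBlock  srvPriv(dh.PrivateKeyLength()), srvPub(dh.PublicKeyLength());
  dh.GenerateKeyPair(rng, cliPriv, cliPub);
  dh.GenerateKeyPair(rng, srvPriv, srvPub);

  //  Each side combines its private key with the peer's public key.
  SecByteBlock  cliShared(dh.AgreedValueLength());
  SecByteBlock  srvShared(dh.AgreedValueLength());
  if (! dh.Agree(cliShared, cliPriv, srvPub) ||
      ! dh.Agree(srvShared, srvPriv, cliPub)) {
    return 1;
  }
  //  Both sides now hold the same secret, which would then be run
  //  through a KDF to derive session keys.
  assert(cliShared == srvShared);
  return 0;
}

Compared to generating an ephemeral 2048-bit RSA key, the keypair generation above is cheap, which is the main reason for the switch.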

I also have a rudimentary scheme for MitM prevention that is very similar to the one used by OpenSSH, plus client and server authentication based on RSA keys (2048 bits at the moment). I have a known_services file that’s similar to OpenSSH’s known_hosts, and an authorized_keys file that serves the same role as OpenSSH’s. This allows fairly easy management on both the client and server side for my applications.

Obviously I also have a public/private key generator application.

mcblock examples

I recently wrote about mcblock, a new utility I created to help manage my pf rules. Thus far the most useful part has been the automation of rule addition by grokking logs.

For example, it can parse auth.log on FreeBSD and automatically add entries to my pf rule database. Before adding the entries, it can show you what it would do:

# bzcat /var/log/auth.log.0.bz2 | mcblock -O - 
109.24.194.41        194 hits
  add 109.24.194/24 30 days
103.25.133.151         3 hits
  add 103.25.133/24 30 days
210.151.42.215         3 hits
  add 210.151.42/24 30 days

What I’ve done here is uncompress auth.log.0.bz2 to stdout and pipe it to mcblock to see what it would do. mcblock shows that it would add three entries to my pf rule database, each with an expiration 30 days in the future. I can change the number of days with the -d command line option:

# bzcat /var/log/auth.log.0.bz2 | mcblock -d 60 -O -
109.24.194.41        194 hits
  add 109.24.194/24 60 days
103.25.133.151         3 hits
  add 103.25.133/24 60 days
210.151.42.215         3 hits
  add 210.151.42/24 60 days

By default, mcblock uses a threshold of 3 hits from a given offending IP address in a log file. This can be changed with the -t option:

# bzcat /var/log/auth.log.0.bz2 | mcblock -t 1 -O -
109.24.194.41        194 hits
  add 109.24.194/24 30 days
103.25.133.151         3 hits
  add 103.25.133/24 30 days
210.151.42.215         3 hits
  add 210.151.42/24 30 days
31.44.244.11           2 hits
  add 31.44.244/24 30 days

If I’m happy with these actions, I can tell mcblock to execute them:

# bzcat /var/log/auth.log.0.bz2 | mcblock -t 1 -A -

And then look at one of the entries it added:

# mcblock -s 31.44.244/24
31.44.244.0/24     2015/08/21 - 2015/09/20

This particular address space happens to be from Russia, and is allocated as a /23. So let’s add the /23:

# mcblock -a 31.44.244/23

And then see what entries would match 31.44.244.11:

# mcblock -s 31.44.244.11
31.44.244.0/23     2015/08/21 - 2015/09/20

The /24 was replaced by a /23. Let’s edit this entry to add the registry and the country, and extend the time period:

# mcblock -e 31.44.244/23
start time [2015/08/21 04:37]: 
end time [2015/09/20 04:37]: 2016/02/21 04:37
registry []: RIPE
country []: RU
Entry updated.

And view again:

# mcblock -s 31.44.244.11
31.44.244.0/23     2015/08/21 - 2016/02/21 RIPE     RU

mcblock: new code for pf rule management from a ‘lazy’ programmer

Good programmers are lazy. We’ll spend a good chunk of time writing new/better code if we know it will save us a lot of time in the future.

Case in point: I recently completely rewrote some old code I use to manage the pf rules on my gateway. Why? Because I had been spending too much time doing things that could be done automatically by software with just a small bit of intelligence. Basically, I codified the things I’d been doing manually. And also because I’m lazy, in the way that all good programmers are lazy.

Some background…

I’m not the type of person who fusses a great deal about the security of my home network. I don’t have anything to hide, and I don’t need very many services. However, I know enough about Internet security to be wary and to at least protect myself from the obvious, and I prefer to keep out the hosts that have no need to access anything on my home network, including my web server. A very long time ago, I fell victim to an SSH-v1 issue: someone from Romania set up an IRC server on my gateway while I was on vacation in the Virgin Islands. I don’t like someone else using my infrastructure for nefarious purposes.

At the time, it was almost humorous how little the FBI knew about the Internet (next to nothing). I’ll never forget how puzzled the agents were at my home when I was explaining what had happened. The only reason I had called them was that the perpetrator managed to get a credit card number from us (presumably by a man-in-the-middle attack) and used it to order a domain name and hosting services. At the time I had friends with fiber taps at the major exchanges, and managed to track down some of his traffic and eventually a photo of him and his physical address (and of course I had logged a lot of the IRC traffic before I completely shut it down). It didn’t do me any good, since he was a Russian minor living in Romania. My recollection is hazy, but I think this was circa 1996. I know it was before SSH-v2, and that I was still using Kerberos where I could.

Times have changed (that was nearly 20 years ago). But I continue to keep a close eye on my Internet access. I will never be without my own firewall with all of the flexibility I need.

For a very long time, I’ve used my own software to manage the list of IP prefixes I block from accessing my home network. Way back when, it was hard: we didn’t have things like pf. But all the while I’ve had some fairly simple software to help me manage the block list, along with simple log-grokking scripts to tell me what looks suspicious.

Way back when, the list was small. It grew slowly for a while, but today it’s pretty much non-stop. And I don’t think of myself as a desirable target. Which probably means that nearly everyone is under regular probing and weak attack attempts.

One interesting thing I’ve observed over the last 5 years or so: the cyberwarfare battle lines could almost be drawn from a very brief lesson on WWI, WWII and the Cold War, with maybe a smattering of foreign policy SNAFUs and socialism/communism versus capitalism and East versus West. In the last 5 years, I’ve primarily seen China, Russia, Italy, Turkey, Brazil and Colombia address space in my logs, with a smattering of former Soviet bloc countries, Iran, Syria and a handful of others. U.S.-based probes are a trickle in comparison. It’s really a sad commentary on the human race, to be honest. I would wager that the countries in my logs are seeing the opposite directed at them: most of their probes and attacks are likely originating from the U.S. and its old WWII and NATO allies. Sigh.

Anyway…

My strategy

For about 10 years I’ve been using code I wrote that penalizes repeat attackers by doubling their penalty time each time their address space is re-activated in my blocked list. This has worked well; the gross repeat offenders wind up being blocked for years, while those who only knock once are only blocked for a long enough time to thwart their efforts. Many of them move on and never return (meaning I don’t see more attacks from their address space for a very long time). Some never stop, and I assume some of those are state-sponsored, i.e. they’re being paid to do it. Script kiddies don’t spend years trying to break into the same tiny web site nor years scanning gobs of broadband address space. Governments are a different story with a different set of motivations that clearly don’t go away for decades or even centuries.
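
The doubling scheme itself is trivial. A minimal sketch of the idea (hypothetical names, not mcblock's actual code):

#include <chrono>

//  Hypothetical: a first offense gets 30 days; each re-activation of an
//  offender's address space doubles the prior penalty.
std::chrono::hours NextPenalty(std::chrono::hours priorPenalty)
{
  constexpr std::chrono::hours  initialPenalty(24 * 30);
  return (priorPenalty < initialPenalty) ? initialPenalty
                                         : priorPenalty * 2;
}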

The failings

The major drawback to what I’ve been doing for years: too much manual intervention, especially adding new entries. It doesn’t help that there is no standard logging format for various externally-facing services and that the logging isn’t necessarily consistent from one version to the next.

My primary goal was to automate the drudgery and replace the SQL database with something lighter and speedier, while leveraging code and ideas that have worked well for me. I created mcblock as a simple set of C++ classes and a single command-line application whose purpose is to grok logs and automatically add to my pf rules.

Automation

  • I’m not going to name all the ways in which I automatically add offenders, but I’ll mention one: I parse auth.log.0.bz2 every time newsyslog rolls over auth.log. This is fairly easy on FreeBSD; see the entry regarding the R flag and path_to_pid_cmd_file in the newsyslog.conf(5) manpage (a hypothetical example appears after this list). Based on my own simple heuristics, those who've been offensive will be blocked for at least 30 days, longer if they're repeat offenders, and I will soon add policy to permit more elaborate qualifications. What I have today is fast and effective, but I want to add some feeds from my probe detector (which reports on those probing ports on which I have nothing listening) as well as from pflog. I can use those things today to add entries or re-instantiate expired entries, but I want to be able to extend the expiration time of existing active entries for those who continue to probe for days despite not receiving any response packets.
  • My older code used an SQL database, which was OK for most things but made some operations difficult on low-power machines. For example, I like to be able to automatically coalesce adjacent networks before emitting pf rules; it makes the pf rules easier to read. For example, if I already have 5.149.104/24 in my list and I add 5.149.105/24, I prefer emitting a single rule for 5.149.104/23. And if I add 5.149.105/24 but I have an inactive (expired) rule for 5.149.104/22, I prefer to reactivate the 5.149.104/22 rule rather than add a new rule specifically for 5.149.105/24. My automatic additions always use /24's, but once in a while I will manually add wider rules knowing that no one from a given address space needs access to anything on my network or the space is likely being used for state-sponsored cyberattacks. Say Russian government address space, for example; there's nothing a Russian citizen would need from my tiny web site and I certainly don't have any interest in continuous probes from any state-sponsored foreign entity.
  • Today I'm using a modified version of my Ipv4Routes class template to hold all of the entries. Modified because my normal Ipv4Routes class template uses a vector of unordered_map under the hood (to allow millions of longest-match IPv4 address lookups per second), but here I need ordering and a smaller memory footprint for my pf rule generation. While it's possible to reduce the memory footprint of unordered_map by increasing the load factor, it defeats the purpose (slows it down) when your hash key population isn't well-known, and you still wind up with no ordering. Ordering allows the coalescing of adjacent prefixes to proceed quickly, so my modified class template uses map in place of unordered_map. Like my original Ipv4Routes class template, it has separate maps for each prefix length, hence there are 33 of them. Of course I don't have a use for /0, but it's there. I also typically don't have a use for the /32 map, but it's there too. Having the prefix maps separated by netmask length makes it easy to understand how to find wider and narrower matches for a given IP address or prefix, and hence to write code that coalesces or expands prefixes (a simplified sketch appears after this list). And it's more than fast enough for my needs: it will easily support hundreds of thousands of lookups per second, and I don't need it to be anywhere near that fast. I only had to change a couple of lines of my existing Ipv4Routes class template to make it work, and then added the new features I needed.
  • I never automatically remove entries from the new database. That's because historical information is useful and the automation can re-activate an existing but expired entry that might be a wider prefix than what I would allow automation to do without such information. While heuristics can do some of this fairly reliably, expired entries in the database serve as additional data for heuristics. If I've blocked a /16 before, seeing nefarious traffic from it again can (and usually should) trigger reactivation of a rule for that /16. And then there are the things like bogons and private space that should always be available for reactivation if I see packets with source addresses from those spaces coming in on an external interface.
  • Having this all automated means I now spend considerably less time updating my pf rules. Formerly I would find myself manually coalescing the database, deciding when I should use a wider prefix, reading the daily security email from my gateway to make sure I wasn't missing anything, etc. Since I now have unit tests and a real lexer/parser for auth.log, and pf entries are automatically updated and coalesced regularly, I can look at things less often and at my leisure while knowing that at least most of the undesired stuff is being automatically blocked soon after it is identified.
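
Regarding the newsyslog hook in the first bullet above: with the R flag, newsyslog runs a command after rotating the log instead of signaling a daemon. A hypothetical newsyslog.conf entry might look like this (the script path is made up; see newsyslog.conf(5) for the exact field semantics):

# logfilename        mode count size when  flags path_to_pid_cmd_file
/var/log/auth.log     600  7    *    @T00  JCR   /usr/local/sbin/mcblock_authlog.sh

The J flag bzip2-compresses the rotated log, which is why the script can expect to find auth.log.0.bz2.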
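
And a simplified sketch of the per-prefix-length map idea from the third bullet (hypothetical and much reduced from the real Ipv4Routes class template):

#include <array>
#include <cstdint>
#include <map>

struct Entry { /* expiration, registry, country, ... */ };

class PrefixTable
{
public:
  void Add(uint32_t addr, uint8_t len)
  { _maps[len][addr & Mask(len)] = Entry(); }

  //  Find the widest prefix containing addr by walking /0 toward /32.
  bool WidestMatch(uint32_t addr, uint8_t & lenOut) const
  {
    for (uint8_t len = 0; len <= 32; ++len) {
      if (_maps[len].count(addr & Mask(len))) {
        lenOut = len;
        return true;
      }
    }
    return false;
  }

private:
  static uint32_t Mask(uint8_t len)
  { return len ? (0xFFFFFFFFu << (32 - len)) : 0; }

  //  One ordered map per netmask length, /0 through /32.
  std::array<std::map<uint32_t,Entry>,33>  _maps;
};

With this layout, coalescing reduces to cheap checks: two /24s can merge into a /23 when both halves of the /23 are present in the /24 map, and so on up the lengths.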

Good programmers are lazy. A few weekends of work is going to save me a lot of time in the future. I should've cobbled this up a long time ago.

depot’s backup space is now ZFS mirror

Last night I installed an HGST Deskstar NAS 4TB drive in depot to pair with the existing HGST Deskstar 4TB drive. I saved the existing data to a ZFS pool on kiva, then wiped the existing HGST Deskstar drive: unmounted the filesystem, deleted the partition, deleted the partitioning scheme.

If you’re doing this for the first time on FreeBSD 10.1 or later, don’t forget to enable ZFS (loading of the kernel module) and tell the system to mount ZFS pools at boot.

Enable ZFS kernel module at boot by adding to /boot/loader.conf:

zfs_load="YES"

Tell the system to mount ZFS pools at boot by adding to /etc/rc.conf:

zfs_enable="YES"

If you haven’t rebooted after changing /boot/loader.conf, you can load the kernel module manually:

# kldload zfs

Before getting started, I changed the default ashift setting to be more amenable to 4k drives:

# sysctl vfs.zfs.min_auto_ashift=12
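
To have that setting survive a reboot, it can also be added to /etc/sysctl.conf:

vfs.zfs.min_auto_ashift=12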

I then created my ZFS pool. First I created the GPT partitioning scheme on each drive:

# gpart create -s gpt ada0
# gpart create -s gpt ada4

I then created a partition on each, leaving 1 gigabyte of space unused:

# gpart add -t freebsd-zfs -l gpzfs1_0 -b1M -s3725G ada0
# gpart add -t freebsd-zfs -l gpzfs1_1 -b1M -s3725G ada4

I then created the pool:

# zpool create zfs1 mirror /dev/gpt/gpzfs1_0 /dev/gpt/gpzfs1_1

I created my filesystem hierarchy. For now I only need my backups mount point. Since FreeBSD now has lz4_compress enabled by default, I can use lz4 compression. lz4 is considerably faster than lzjb, especially on incompressible data.

# zfs create -o compression=lz4 zfs1/backups

I then copied back the original data that was on the single HGST Deskstar 4TB drive. Since I had disabled Time Machine on my desktop computer in order to move its backups to the ZFS mirror, I re-enabled Time Machine and manually asked it to perform a backup. It worked fine and completed in less than 2 minutes, since I hadn’t changed much on my desktop machine.

First ZFS pool now on kiva

I finally got around to creating the first ZFS pool on my new-to-me server (kiva). At the moment, this particular pool is for backups of other machines.

I am using the 4TB HGST Deskstar drive I bought a little while ago, and a 4TB HGST Deskstar NAS I bought today. Once installed in hot-swap bays, they showed up as da1 and da2.

I created the GPT partitioning scheme on each:

# gpart create -s gpt da1
# gpart create -s gpt da2

I created a partition on each, leaving 2 gigabytes of space unused. It’s not uncommon for a replacement drive to have slightly less space than the original, and I don’t want to be caught in a jam if one of the drives fails and I need to use a different type of drive as a replacement. 2 gigabytes seems like a lot of space, but in the grand scheme of this ZFS host, it’s nothing. On FreeBSD, there is no performance penalty for using partitions for ZFS versus whole disks. This approach lets me wait to buy a replacement disk, which means I don’t have spare disks sitting around with their warranty periods ticking away unused. I would always have spare drives on hand in a production environment, but at home it makes sense (especially for backups) to wait for a drive to show some trouble before purchasing its replacement. 4TB drives are readily available locally.

# gpart add -t freebsd-zfs -l gpzfs1_0 -b1M -s3724G da1
# gpart add -t freebsd-zfs -l gpzfs1_1 -b1M -s3724G da2

So now I see:

% gpart show da1
=>        34    7814037101  da1  GPT  (3.6T)
          34          2014       - free -  (1.0M)
        2048    7809794048    1  freebsd-zfs  (3.6T)
  7809796096       4241039       - free -  (2.0G)

% gpart show da2
=>        34    7814037101  da2  GPT  (3.6T)
          34          2014       - free -  (1.0M)
        2048    7809794048    1  freebsd-zfs  (3.6T)
  7809796096       4241039       - free -  (2.0G)

I created the pool:

# zpool create zfs1 mirror /dev/gpt/gpzfs1_0 /dev/gpt/gpzfs1_1

I created my filesystem hierarchy. For now I only need my backups mount point. Since FreeBSD now has lz4_compress enabled by default, I can use lz4 compression. lz4 is considerably faster than lzjb, especially on incompressible data.

# zfs create -o compression=lz4 zfs1/backups

And since I had not yet enabled ZFS on kiva, I added to /boot/loader.conf:

zfs_load="YES"

And added to /etc/rc.conf:

zfs_enable="YES"

After copying over 38 gigabytes of backups from another host, I have this:

% zpool list -v
NAME               SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs1              3.62T  23.9G  3.60T         -     0%     0%  1.00x  ONLINE  -
  mirror          3.62T  23.9G  3.60T         -     0%     0%
    gpt/gpzfs1_0      -      -      -         -      -      -
    gpt/gpzfs1_1      -      -      -         -      -      -

lz4 compression yielded a 37% reduction in disk space for these backups. That’s quite reasonable.
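
If you're curious about the exact figure, ZFS tracks it per dataset; a 37% reduction corresponds to a compressratio of roughly 1.59x:

% zfs get compressratio zfs1/backups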

A friend asked me why I was using a mirror. The simple answer is that it’s more reliable than raidzN, and more easily expanded. This machine has 12 hot-swap drive bays, and I don’t expect to need all of them anytime soon (if ever). While a raidzN is more space-efficient, it’s not easily expanded and when one drive from a batch fails, others are often not far behind. Resilvering a raidzN is hard on all of the drives involved, and it’s not uncommon to have another disk fail during a resilvering. Resilvering a raidzN is slower than resilvering a mirror, and array performance suffers dramatically during resilvering of a raidzN. If/when I need to add more space to the pool, I can simply buy two more drives and add another mirror to the pool.
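
That expansion is a one-liner. Assuming two new drives partitioned and labeled gpzfs1_2 and gpzfs1_3 (hypothetical labels following the scheme above), it would be something like:

# zpool add zfs1 mirror /dev/gpt/gpzfs1_2 /dev/gpt/gpzfs1_3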

It’s worth noting that ZFS is not a substitute for backups. Here I am using ZFS to store backups of other machines, and it’s very useful for this use case.

Computer reallocation: depot is now web server

I’ve mostly finished migrating my web server to depot.

I did this to gain RAM and CPU (mostly the former). My old web server was an Atom D510 based machine with 4G of RAM. Most of the time this wasn’t a huge issue, but it was holding me back from putting more of my own software on it and I couldn’t easily run multiple jails or bhyve. depot has 32G of RAM and an i5-2405S, which should be sufficient for my needs for a while.

It’s worth noting that the big hog on my web server is mysql. I’m looking to get rid of it, but that means replacing my blog, since WordPress requires mysql. Having already written my own gallery software to replace gallery3, I have no reason to believe I can’t replace WordPress with something of my own that is simpler and consumes fewer resources. I’m also growing tired of the security issues that regularly crop up with WordPress; I’m certain I can produce something more secure.

New (to me) server up and running: kiva

I recently bought a server from eBay to take over the duties of my storage server (depot); depot will become my web server. I needed to do this because my web server was an Atom D510 based system, and I need more RAM than an Atom D510 can address. I also wanted ECC memory in my storage server, since I’m about to start using ZFS pools and can use the integrity provided by ECC.

The server I bought is overkill for my current needs, but it was inexpensive because it’s older technology. It is a Supermicro X8DTN+ motherboard with a pair of Xeon L5640 CPUs and 48G of registered ECC RAM (six 8G sticks). It’s in a Supermicro SC826 chassis with a BPN-SAS-826EL backplane. This wasn’t the backplane I wanted; the eBay description was incorrect. However, it’ll work for my needs since I don’t really need more than 4 SAS lanes. As an upside, the cabling is cleaner than SFF to SATA breakout cables. I’m using an LSI 9211-8i PCIe x4 HBA to connect the backplane to the motherboard.

As an aside, I’ve never owned a machine with 12 CPU cores that can run 24 hyperthreads. While the L5640 runs at a paltry 2.26GHz, it is very handy to be able to run gmake -j24 when doing software development. I’m using a Crucial MX100 512G SSD as my OS drive, because it was inexpensive (an Amazon Prime Day deal). I would normally choose a Samsung 850 Pro, but I couldn’t justify the price for setting up this machine. I can always change it later. At any rate, compiles of my software are speedy on this machine, which means I can get back to finishing my BGP-4 development along with some other things.

The new machine is named kiva (thanks to Julie for the name!). Other than the Crucial MX100, it has an HGST 4TB Deskstar that will host backups of other machines. Backups of kiva are currently going to another HGST 4TB Deskstar in depot. kiva is running FreeBSD 10.2-BETA2 (i.e. 10.1-STABLE on its way to 10.2). It is mounted in the rack, but I’ll likely change its position later.

Upgrade in progress

I’ve been working on upgrading my web server this week. This was no small task since I migrated from FreeBSD 8.4-STABLE to FreeBSD 10.1-STABLE, from apache 2.2 to apache 2.4, from Wt 3.2.0 to Wt 3.3.4, php 5.5 to php 5.6, etc.

The operating system upgrade went smoothly (I build from source since I run a custom kernel configuration) other than one glitch during pkg2ng.

The apache upgrade was more work since the configuration has changed a bit. It’s done and working.

It took me a bit to bring my changes from wt-3.2.0 into wt-3.3.4. All of these changes were in the Chart classes, but there had been some refactoring I had to accommodate. That work is done, and I’m rebuilding my apps to use wt-3.3.4.

I am going to abandon gallery3 and deploy my dwmgallery software very soon. Uploading is much more graceful with my software, and gallery3 was abandoned over a year ago. As a bonus, my software does not need mysql. I will likely eventually ditch WordPress too, only because I’d like to ditch mysql. All in the name of more efficient computing: I’d like to keep using my low-power server (Intel Atom D510) for as long as possible. I will eventually move to a Xeon E3-12XX, only to gain addressable memory in ECC form.

depot: new backup/media server

I’ve slowly but surely been working on a new server in my rack. The intent of this server is to consolidate copies of some of my backups, and provide a place to store media files (music, movies, etc.). This new machine is known as depot.

The intent is to run FreeBSD 9.1 and ZFS. I will likely start with a single pool of 6 drives in raidz2, and later add a second pool.

As part of this process, I’ve migrated to a StarTech RK2536BKF rack.

Secured my mail server

I’m nearly done configuring my mail server. Last night I got sendmail configured to use STARTTLS, and to require it for SMTP AUTH. I no longer allow cleartext passwords, so I can feel safe using my iPhone to send mail through my server when I’m not at home. Not enforcing STARTTLS wasn’t a big deal for my desktop, since it’s on a secure wired LAN with my mail server, but there are times when I want to use my iPhone and laptop to send mail while away from home, hence I need to enforce crypto for SMTP AUTH.
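
For reference, the relevant pieces of a sendmail .mc configuration look something like this (the certificate paths are examples, not my actual setup). The 'p' in confAUTH_OPTIONS is what refuses plaintext AUTH mechanisms unless a security layer such as STARTTLS is active:

define(`confCACERT_PATH', `/etc/mail/certs')
define(`confCACERT', `/etc/mail/certs/cacert.pem')
define(`confSERVER_CERT', `/etc/mail/certs/cert.pem')
define(`confSERVER_KEY', `/etc/mail/certs/key.pem')
define(`confAUTH_OPTIONS', `A p')
define(`confAUTH_MECHANISMS', `CRAM-MD5 DIGEST-MD5 LOGIN PLAIN')
TRUST_AUTH_MECH(`CRAM-MD5 DIGEST-MD5 LOGIN PLAIN')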

All works fine using Mail on my hackintosh, Mail on my MacBook Pro, Outlook on my hackintosh, and of course my iPhone. I need to write up everything I did so I can repeat it in the future if necessary.