Biktrix Ultra FS Pro 3 first ride

I put together my Biktrix Ultra FS Pro 3 and went for a night ride. I’ve yet to make any suspension adjustments, but this was about making sure everything was functional.

I put almost 10 miles on it, in the dark. The Armageddon light is nice to have since it’s always on the bike and runs from the main battery. But as expected, the Outbound Hangover on my helmet is a must-have: it lets me throw light farther ahead and aim it where I want. I have a Specialized Stix Elite 2 tail light on my helmet, and another on a reflector mount on the bike’s rear rack.

I think I’ve determined that I want longer handlebars and a shorter stem: longer bars mostly because of the super tenacious grip of the 4″ fat tires and the weight of the bike (it’s very heavy), and a shorter stem for more direct steering input. I think a 50mm Deity Copperhead would be a good stem choice. The stock handlebars are 700mm, which to me is too short just from a cornering leverage perspective. Obviously there are tradeoffs here, but I think something in the 740mm to 750mm range would be more appropriate for me, despite my height being only 5’8″.

I put more air in the front fork after the ride, from the top. I’m a bit over 50 psi there now. I will later adjust the bottom and the rebound, but they’re usable for now.

For road riding, I think the rear shock adjustment is OK for now.

For what it’s worth, I almost didn’t even notice the Pedaling Innovations Catalyst One pedals. What I can say is that they’re grippy with the Etnies Camber shoes, and I had no foot pain at all. I think the real story will be revealed the day I take a spin with Vans skate shoes (floppy sole). But I’m pretty sure they’re going to be nice long-term. My climbing stance feels OK.

First time on an e-bike: pedaling while standing with pedal assist on is VERY different, and probably not something I’ll do often. We’ll see how long it takes me to adjust. Part of the issue here is the delay in the pedal assist; this isn’t a Shimano or Bosch system. I suspect that when I really want to stand and hammer on the pedals, I’ll do it with no pedal assist. There’s always the throttle when I run out of legs.

Finally ordered an electric bicycle: Biktrix Ultra FS Pro 3

I hemmed and hawed over this for months, losing most of the riding season in the process. But… I finally ordered a Biktrix Ultra FS Pro 3.

A very long time ago, I was a BMX racer. It was this influence that I had to fight to make the right decision.

I still own a 24″ SE Racing Quadangle, as well as a rigid Klein mountain bike from the 1990s. However, I never ride them, because the reality is that old injuries make the ride unpleasant fairly quickly. They’re just not bikes I can take on multi-hour rides. I want sensitive front suspension to alleviate my right wrist pain, and rear suspension to keep the tire on the ground when I’m traversing rough dirt roads here in Michigan.

Another sensible conclusion: I want to be able to ride in the snow. That dictated a fat-tire bike.

So I ordered the bike with a 26″x4″ setup. I added the Wren rear hub for longevity with the powerful mid-drive motor. I added a second battery for more range. I added the rear fender and rack kit, as well as a dropper seat post. I added the Armageddon headlight since we have short days here in the winter. I added the Wren inverted fork, mostly because I could not find any useful reviews of the Biktrix inverted fork. And I upgraded to the Magura MT5e brakes.

I separately ordered studded 26″x4″ tires for the winter.

I also ordered Pedaling Innovations Catalyst One pedals. These are oddball pedals, much bigger front-to-back than normal pedals. I won’t know how I feel about them until I’ve put some miles on them.

Gear-wise, I currently own a single helmet: a Specialized Mode. It was inexpensive, is a good commuter-type helmet, and meets the Dutch NTA 8776 standard for e-bike helmets. Given that much of my riding will be on dirt roads with some traffic, I wanted a helmet that was qualified for e-bikes and also had an easy way to mount a light. The Mode comes with a nifty tool-free mount for a Stix Elite 2 tail light (which I have mounted). I have a second Stix Elite 2 tail light with a separate rear reflector mount, and a Stix Elite 2 headlight with an arm/leg band mount. I also have a headlight for my helmet: an Outbound Hangover. The Armageddon on the bike is bright and emits a pretty wide beam, but nothing beats being able to point a light with your head.

I ordered inexpensive gloves from Amazon. I recently received some new Etnies Camber shoes, which I like but we’ll see how they do on the pedals. I of course already have some Vans BMX shoes (high tops as well as slip-ons) for casual rides. Not sure yet what I’ll use in the winter.

I also have a new Specialized wind jacket with lots of zip pockets. It’s not waterproof (the front closes with snaps, not a zipper), and it has no hood. I have two Cleverhood rain jackets for rain, but to be honest I don’t expect to do a lot of rain riding on the dirt roads; the amount of sticky clay that winds up on the bike when it’s raining is high.

Modernizing mcpigdo

My custom garage door opener appliance is running FreeBSD 11.0-ALPHA5 on a Raspberry Pi 2B. It has worked fine for about 8 years now. However, I want to migrate it to FreeBSD 13.2-STABLE, move from libDwmAuth to libDwmCredence, and just bring the code current.

The tricky part is that I never quite finished packaging up my device driver for the rotary encoders, and it was somewhat experimental (hence the alpha release of FreeBSD). But as of today, it appears I have the rotary encoder device driver working fine on FreeBSD 13.2-STABLE on a Raspberry Pi 4B. The unit tests for libDwmPi are passing, and I’m adding to them and doing a little cleanup, so I’ll be able to maintain it longer-term.

I should note that the reason I went with FreeBSD at the time was pretty simple: the kernel infrastructure for what I needed to do was significantly better than Linux’s. That may or may not be true today, but for the moment I have no need to look at doing this on Linux. The only non-portable code here is my device driver, and it’s relatively tiny (including the boilerplate).

Looking back at this project, I should have built a few more of them, hardware-wise. The Raspberry Pi 2B is more than powerful enough for the job, and given that I put it inside a sealed enclosure, the lower power consumption versus a 4B is nice. I’m pretty sure my mom would appreciate one of these, if only by virtue of being able to open her garage doors with her phone or watch. The hardware (the Pi and the HAT I created) has been flawless; I’ve had literally zero issues despite it living in a garage with no climate control (so it’s seen plenty of -10F and 95F days). It just works.

However, today I could likely do this in a smaller enclosure, thanks to PoE HATs. Unfortunately not the latest official Raspberry Pi PoE HAT, because its efficiency is abysmal (it generates too much heat). If I bump the Pi to a 4B, I’ll probably stick with a separate fanless PoE splitter. I’ll need a new one, since the power connector has changed.

The arguments for moving to a Pi 4B:

  • future-proofing. If I want to build another one, I’m steered toward the Pi 4B simply because it’s what I can buy and what’s current.
  • faster networking (1G versus 100M)
  • more oomph for compiling C and C++ code locally
  • Some day, the Pi 2B is going to stop working. I’ve no idea when that day might be; 8 years in Michigan weather has probably taken a significant toll, but on the other hand it could last another 20 years. There are no electrolytic capacitors, I’m using it headless, and none of the USB ports are in use.

The arguments against it:

  • higher power consumption, hence more heat
  • the Pi 2B isn’t dead yet

I think it’s pretty clear that during this process, I should try a Pi 4B. The day will come when I’ll have to abandon the 2B, and I’d rather do it on my own timeline. There’s no harm in keeping the 2B in a box while I try a 4B. Other than the PoE splitter, it should be a simple swap. Toward that end, I ordered a 4B with 4G of RAM (I don’t need 8G here). I still need to order a PoE splitter, but I can probably scavenge an original V2 PoE HAT from one of my other Pis and stack it with stacking headers.

Over the weekend I started building FreeBSD 13.2-STABLE (buildworld) on the Pi 2B and, as usual, hit the limits. The problem is that 1G of RAM isn’t sufficient to utilize the 4 cores. It’s terribly slow even when you can use all 4 cores, but if you start swapping to a microSD card… it takes days for ‘make buildworld’ to finish. And since I have a device driver I’m maintaining for this device, I expect to rebuild the kernel somewhat regularly and build the world occasionally. This is the main motivation for bumping to a Raspberry Pi 4B with 4G of RAM. It’s possible it’ll still occasionally start swapping with ‘make -j4 buildworld’, but the cores are faster, and I rarely see a single instance of the compiler or llvm-tblgen go over 500M (though it does happen). Four jobs at roughly 500M apiece is only about 2G, so I think 4G is sufficient to avoid swapping during a full build.
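When the 4B arrives, verifying that should be easy; something like this (sh syntax, base system tools only, arbitrary interval) running in another shell during a build will show whether it dips into swap:

# watch for swap use while ‘make -j4 buildworld’ runs
while true; do date; swapinfo -h; sleep 300; done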

Update Aug 26, 2023: duh. A while after I first created mcpigdo, it became possible to do what I need to do with the rotary encoders from user space. With FreeBSD 13.2, I can configure interrupts on the GPIO pins and be notified via a number of means (per gpioc(4), as I understand it: blocking read(2), poll(2)/select(2), kqueue(2) or SIGIO). I’m going to work on changing my code to not need my device driver. This is good news, since I’ve had some problems with my very old device driver despite refactoring, and I don’t have time to keep maintaining it. Moving my code to user space will make it more portable going forward, though it’ll still be FreeBSD-only. It will also allow for more flexibility.

Striping 4 Samsung 990 Pro 2TB on Ubuntu 22.04

On Prime Day I ordered four Samsung 990 Pro 2TB NVMe SSDs to install in my Threadripper machine. I’ve had an unopened Asus Hyper M.2 x16 Gen4 card for years, waiting for drives; I just never got around to finishing the plan for my Threadripper machine.

The initial impression is positive. Just for fun, I striped all 4 of them and put an ext4 filesystem on the group to grab some out-of-the-box numbers.
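The dm-0 device in the disk stats below gives away that this is device-mapper under the hood. For reference, the stripe setup looks roughly like this LVM sketch (hedged: the volume group and logical volume names are placeholders, though /hyperx matches my mount point, and the default 64K stripe size is implied):

sudo pvcreate /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
sudo vgcreate hyperx /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
# 4-wide stripe using all available extents
sudo lvcreate -i 4 -l 100%FREE -n scratch hyperx
sudo mkfs.ext4 /dev/hyperx/scratch
sudo mount /dev/hyperx/scratch /hyperx

First up: a simple read test, which yielded more than 24 gigabytes/second. Nice.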

dwm@thrip:/hyperx/dwm% fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting

TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.28
Starting 1 process
TEST: Laying out IO file (1 file / 2048MiB)

TEST: (groupid=0, jobs=1): err= 0: pid=6333: Wed Jul 19 02:11:19 2023
  read: IOPS=25.2k, BW=24.6GiB/s (26.4GB/s)(10.0GiB/407msec)
    slat (usec): min=27, max=456, avg=38.46, stdev=21.03
    clat (usec): min=174, max=10736, avg=1206.67, stdev=443.19
     lat (usec): min=207, max=11193, avg=1245.21, stdev=460.96
    clat percentiles (usec):
     |  1.00th=[  971],  5.00th=[ 1020], 10.00th=[ 1037], 20.00th=[ 1057],
     | 30.00th=[ 1074], 40.00th=[ 1074], 50.00th=[ 1074], 60.00th=[ 1090],
     | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1975], 95.00th=[ 2024],
     | 99.00th=[ 2245], 99.50th=[ 2278], 99.90th=[ 7832], 99.95th=[ 9241],
     | 99.99th=[10421]
  lat (usec)   : 250=0.05%, 500=0.24%, 750=0.26%, 1000=1.76%
  lat (msec)   : 2=88.58%, 4=8.87%, 10=0.21%, 20=0.03%
  cpu          : usr=2.71%, sys=96.06%, ctx=144, majf=0, minf=8205
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=24.6GiB/s (26.4GB/s), 24.6GiB/s-24.6GiB/s (26.4GB/s-26.4GB/s), io=10.0GiB (10.7GB), run=407-407msec

Disk stats (read/write):
    dm-0: ios=151773/272, merge=0/0, ticks=47264/0, in_queue=47264, util=83.33%, aggrios=10240/21, aggrmerge=30720/63, aggrticks=3121/2, aggrin_queue=3124, aggrutil=76.09%
  nvme3n1: ios=10240/21, merge=30720/63, ticks=3146/3, in_queue=3149, util=76.09%
  nvme4n1: ios=10240/21, merge=30720/63, ticks=3653/3, in_queue=3657, util=76.09%
  nvme1n1: ios=10240/21, merge=30720/63, ticks=2504/3, in_queue=2507, util=76.09%
  nvme2n1: ios=10240/21, merge=30720/63, ticks=3182/2, in_queue=3184, util=76.09%

A short while later, I ran a simple write test. Here I see more than 13 gigabytes/second.

dwm@thrip:/hyperx/dwm% fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.28
Starting 1 process

TEST: (groupid=0, jobs=1): err= 0: pid=6682: Wed Jul 19 02:15:31 2023
  write: IOPS=13.7k, BW=13.4GiB/s (14.4GB/s)(10.0GiB/746msec); 0 zone resets
    slat (usec): min=35, max=297, avg=69.19, stdev=14.38
    clat (usec): min=48, max=9779, avg=2242.89, stdev=738.00
     lat (usec): min=105, max=9837, avg=2312.18, stdev=740.08
    clat percentiles (usec):
     |  1.00th=[ 1549],  5.00th=[ 2040], 10.00th=[ 2057], 20.00th=[ 2073],
     | 30.00th=[ 2089], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2114],
     | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2278], 95.00th=[ 3195],
     | 99.00th=[ 6456], 99.50th=[ 8979], 99.90th=[ 9503], 99.95th=[ 9634],
     | 99.99th=[ 9765]
   bw (  MiB/s): min=13578, max=13578, per=98.92%, avg=13578.00, stdev= 0.00, samples=1
   iops        : min=13578, max=13578, avg=13578.00, stdev= 0.00, samples=1
  lat (usec)   : 50=0.01%, 100=0.02%, 250=0.09%, 500=0.15%, 750=0.16%
  lat (usec)   : 1000=0.19%
  lat (msec)   : 2=1.06%, 4=96.87%, 10=1.46%
  fsync/fdatasync/sync_file_range:
    sync (nsec): min=180, max=180, avg=180.00, stdev= 0.00
    sync percentiles (nsec):
     |  1.00th=[  181],  5.00th=[  181], 10.00th=[  181], 20.00th=[  181],
     | 30.00th=[  181], 40.00th=[  181], 50.00th=[  181], 60.00th=[  181],
     | 70.00th=[  181], 80.00th=[  181], 90.00th=[  181], 95.00th=[  181],
     | 99.00th=[  181], 99.50th=[  181], 99.90th=[  181], 99.95th=[  181],
     | 99.99th=[  181]
  cpu          : usr=36.11%, sys=53.56%, ctx=9861, majf=0, minf=14
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=13.4GiB/s (14.4GB/s), 13.4GiB/s-13.4GiB/s (14.4GB/s-14.4GB/s), io=10.0GiB (10.7GB), run=746-746msec

Disk stats (read/write):
    dm-0: ios=0/135825, merge=0/0, ticks=0/12864, in_queue=12864, util=87.00%, aggrios=0/10276, aggrmerge=0/30823, aggrticks=0/1124, aggrin_queue=1125, aggrutil=82.63%
  nvme3n1: ios=0/10275, merge=0/30822, ticks=0/1109, in_queue=1109, util=82.20%
  nvme4n1: ios=0/10276, merge=0/30826, ticks=0/1000, in_queue=1001, util=82.63%
  nvme1n1: ios=0/10276, merge=0/30822, ticks=0/1366, in_queue=1367, util=82.63%
  nvme2n1: ios=0/10277, merge=0/30825, ticks=0/1022, in_queue=1023, util=82.20%

It’s worth noting that I don’t consider this configuration a good idea for anything other than scratch space (perhaps for DL training data sets, etc.); 4 striped drives is, as my friend Ben put it, risky. I of course trust SSDs more than spinning rust here, and historically I’ve had no failures with Samsung SSDs, but… that’s a hard thing to judge from just my personal observations and from where the industry has gone. I still have Samsung SATA SSDs from the 830 and 840 series, and they’re still healthy. But… we’ve gone from SLC to MLC to TLC to QLC, losing a hair of reliability (and a chunk of warranty) at each step. And I’d be remiss if I didn’t mention Samsung’s botched firmware in the last two generations (980 and 990). In fact, I’m annoyed that 2 of the 4 drives I received have old firmware that I’ll need to update.
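As for the firmware updates, I’ll probably skip Samsung’s bootable ISO. My understanding (an assumption I haven’t verified against these particular drives) is that Samsung publishes 990 Pro firmware to the LVFS, in which case fwupd on Ubuntu can handle it:

sudo fwupdmgr refresh      # fetch current metadata from the LVFS
sudo fwupdmgr get-updates  # list devices with pending firmware
sudo fwupdmgr update       # apply; NVMe firmware updates typically want a reboot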

Raspberry Pi PoE+ is a typo (should be PoS)

Seriously. ‘S’ is adjacent to ‘E’ on a QWERTY keyboard.

I knew the official PoE+ HATs were pieces of poop before I bought them. This isn’t news. You don’t have to look hard to find Jeff Geerling’s comments, Martin Rowan’s comments, or the many others who’ve complained. I had read them literally years before I purchased.

I decided to buy 4 of them despite the problems, for a specific purpose (4 rack mounted Pi4B 8G, all powered via PoE alone). I’ve had those running for a few days and they’re working. They’re inefficient, but so far they work.

I also ordered 2 more, with the intent of using one of them on a Raspberry Pi 4B 8G in an Anidees Pro case and keeping the other as a spare. Well, within literally 36 hours, one of them was dead. I believe it destroyed itself via heat. And therein lies part of the problem. I’ll explain what I casually observed, since I wasn’t taking measurements.

I ran the Pi from USB-C for about a day, without the PoE HAT installed. It was in the Anidees Pro case, fully assembled. It was fine, idling around 37.4C and not seeming to go above 44C when running high loads (make -j4 on some medium-sized C++ projects; a prominent workload for me). Solid proof that for my use, the Anidees Pro case works exactly as intended. The case is a big heatsink. Note that I have the 5mm spacers installed for the lid, so it’s open around the entire top perimeter.

I then installed the PoE+ HAT, with extension headers and the correct length standoffs that are needed in the Anidees Pro case. Note that this activity isn’t trivial; the standoffs occupy the same screw holes as the bottom of the case (from opposite directions), and an unmodified standoff is likely to bottom out as it collides with the end of the case bottom screw. You can shorten the threaded end of the standoff, or do as I did and use shorter standoffs and add a nut and washers to take up some of the thread. I don’t advise shortening the screws for the bottom of the case.

I plugged in the PoE ethernet from my office lab’s 8-port PoE switch, which has been powering the 4 racked Pis for a few days, and observed the expected horrible noise noted by others. Since I expected it, I immediately unplugged the USB-C power. I continued installing software and started compiling and installing my own software (libDwm, libDwmCredence, mcweather, DwmDns, libDwmWebUtils, mcloc, mcrover, etc.). It was late, so I stopped here. On my way out of the home office, I put my hand on the Pi. It was much warmer than when running from USB-C; in fact, uncomfortably warm. I checked the CPU temperature with vcgencmd; it was under 40C. Hmm. I was tired, so I decided to leave it until the next day and see what happened.

In the morning the Pi had no power. I unplugged and plugged both ends of the 12″ PoE cable. Nothing.

It turns out that the PoE+ HAT is dead. Less than 48 hours of runtime. As near as I can tell, it cooked itself. The PoE port on the ethernet switch still works great. The Pi still works great (powered from USB-C after the dead PoE+ HAT was removed).

I find this saddening and unacceptable. “If it’s not tested, it’s broken.” Hey Eben: it’s broken. No, literally, it’s broken. And it looks to not even be smoke tested. In fact, I’d say it’s worse than the issues with Rev. 1 of the original PoE HAT. This is a design problem, not a testing problem; in other words, the problem occurred at the very beginning of the process, which means it passed through all of engineering. And this issue lies with leadership, not the engineers.

So not only have you gone backward, you’ve gone further back than you were with Rev. 1 of the PoE HAT. And you discontinued the only good PoE HAT you had? Now I’m just left with, “Don’t believe anything, ANYTHING, from the mouth of Eben Upton.”

I’m angry because my trust of Raspberry Pi has been eroding for years, and this is just another lump of coal. We hobbyists were basically screwed for 3 years on availability of all things Pi 4, and you’re still selling a PoE HAT that no one should use.

I’ve been saying this for a couple of years now: there is opportunity for disruption here. While I appreciate the things the Raspberry Pi Foundation has done, I’m starting to feel like I can’t tell anyone that the Pi hardware landscape is great. In fact for many things, it has stagnated.

For anyone for whom 20 US dollars matters, do NOT buy an official PoE+ HAT. Not that it matters much… it’s June 2023 and it’s still not trivial to find something to use it with (a Raspberry Pi 3 or 4).

There comes a time when a platform dies. Makers get lazy after the ecosystem builds around them. I’m wondering if I’m seeing that on the horizon for the Raspberry Pi.

More Raspberry Pi? Thanks!

Years ago I bought a fairly simple 1U rack mount for four Raspberry Pi Model 4B computers. Then the COVID-19 pandemic happened, and for years it wasn’t possible to find Model 4Bs with 4G or 8G of RAM at anything less than scandalous scalper prices. So the inexpensive rack mount sat collecting dust for years.

This month, June 2023, I was finally able to buy four Model 4B Raspberry Pis with 8G of RAM, at retail price ($75 each). Hallelujah.

I also bought four of the PoE+ HATs, which IMHO suck compared to the v2 of the original PoE HAT: the efficiency is terrible at the tiny loads I have on them (no peripherals); they consume a lot more power and waste it as heat. I don’t need to repeat what’s been written elsewhere by those who’ve published measurements. There also appears to be a PoE-to-USB-C isolation issue, but fortunately for me, I won’t have anything plugged into the USB-C on these Pis.

The plan is to put these four Pis in the wall-mounted switch rack in the basement. They’re mostly going to provide physical redundancy for services I run that don’t require much CPU or network and storage bandwidth. DNS, DHCP, mcrover and mcweather, for example.

I am using Samsung Pro Endurance 128G microSD cards for longevity. If I needed more and faster I/O, I’d be using a rack with space for M.2 SATA per Pi, but I don’t need it for these.

I’ve loaded the latest Raspberry Pi OS Lite 64-bit on them, configured DHCP and DNS for them (later I’ll configure static IPs on them), and started installing the things I know I want/need. They all have their PoE+ HATs on, and are installed in the rack mount. I’ll put the mount into the rack this weekend. The Pis are named grover, nomnom, snoopy and lassie.

Separately, I ordered 2 more Raspberry Pis (same model 4B with 8G of RAM), two more PoE+ HATs and 2 cases: an Argon ONE v2 and an Anidees AI-PI4-SG-PRO. Both of these turn a significant part of the case into a heatsink.

The Argon ONE v2 comes with a fan and can’t use the PoE+ HAT, but it can accept an M.2 SATA add-on. I’m planning to play with this one in the master bedroom, connected to the TV. It’s nice that it routes everything to the rear of the case; that makes it much easier to use in an entertainment center context.

I believe the Anidees AI-PI4-SG-PRO will allow me to use a PoE+ HAT, but I’ll need extension headers, which I’ll order soon. I’ve liked my other Anidees cases, and I think this latest one should be the best I’ve had from them. They’re pricey, but premium.

It’s nice that I can finally do some of the work I planned years ago. Despite my hope that I’d see RISC-V equivalents by now, the reality is that the Pi has a much larger ecosystem than any of the alternatives. It’s still the go-to for several things, and I’m happy.

Jan 16, 2023 Unacknowledged SYNs by country

Jan 14, 2023 Unacknowledged SYNs by country

It’s sometimes interesting to look at how different a single day might be versus the longer-term trends. And to see what happens when you make changes to your pf rules.

I added all RU networks I was blocking from ssh to the list blocked for everything. I also fired up a torrent client on my desktop.
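Mechanically, that’s just a pf table operation; roughly like the following, hedged since my actual table names and file paths differ:

# add the RU prefixes already collected for ssh to the block-everything table
pfctl -t blocked_all -T add -f /etc/pf/ru_prefixes
pfctl -t blocked_all -T show | wc -l    # sanity-check the table size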

RU moving up the list versus the previous 5 days is no surprise; a good portion of traffic I receive from RU is port scanning. But I’ll have to look to see what caused the CZ numbers to climb.

I think the only interesting thing about the torrent client is that I should do something to track UDP in a manner similar to how I track TCP. If I have a torrent client running, I wind up with a lot of UDP traffic (much of it directed at port 6881 on this day), and I respond with ICMP port unreachable. To some extent this is a burden on my outbound bandwidth, but on the other hand it will allow me to add an easy new tracker to mcflowd: “to whom am I sending ICMP port unreachables?”. Of course, UDP is trivially spoofed, so I don’t truly know the source of the UDP.
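Until mcflowd grows that tracker, a quick approximation at the gateway is to watch the outbound unreachables with tcpdump (em0 is a placeholder interface here, and -Q needs libpcap direction support; otherwise add a ‘src’ match on my own address):

# ICMP type 3 is destination unreachable; code 3 is port unreachable
tcpdump -ni em0 -Q out 'icmp[icmptype] == 3 and icmp[icmpcode] == 3'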

Source IPv4 address space blocked from ssh, by country

Below is a chart showing the number of IPv4 addresses blocked from accessing ssh on my network, by country, for the 25 countries with the most address space blocked. It has changed over the years, but the U.S. is now #2 where it wasn’t in 2017 (it wasn’t even in the top 20 back then). What hasn’t changed: China remains dominant.

It’s worth noting that my automation blocks address space based on access attempts. A prefix doesn’t wind up in the list unless the automation has seen failed access attempts from that prefix. Of course, registry policy and address allocation determine prefix width, as well as aggregation. No one gets blocked forever, but repeat offenders are blocked longer. Bear in mind that ssh on my network is intended for a single user: me. If I’m not in China at the moment, ssh need not be accessible from China.

This next chart comes from mcflowd data. One of the things it tracks is unacknowledged SYN packets that hit my gateway, per source IPv4 address. I can combine this data with IP-to-country information to get an idea of which countries are the predominant prowlers. It shows how often different countries are either probing for a service I don’t run or trying to hit a service that I’ve blocked some of their IPv4 address space from accessing. A metaphor: the number of times they knocked on my door and I purposely didn’t answer (I did not reply with a SYN-ACK). I call this the chump factor chart; many of the address spaces that contribute to it have been the same for 5 years.

Note that this second chart is from a period of less than 5 days. Anyone with a public Internet connection should not kid themselves into thinking they’re not being probed, constantly.

This is how you splinter the Internet. Make us fed up enough with your traffic that’s indistinguishable from traffic with criminal intent, and we block you. Works great for authoritarian governments that don’t want their citizenry communicating with the free world, and for those with other motives too. 🙁

The good news for me is that I have automation that’s pretty flexible in configuration and input sources (the log parser, for example, can be used on a number of different log formats as long as they’re text and contain offending IP addresses). It saves the data I need, and isn’t a significant resource consumer on my gateway. It’s very secure, using my Credence library for ECDH, authentication and authorization (which under the hood is using libsodium at the moment). I have a reasonably robust IP to country service, which updates itself via RDAP. Sadly the registries are a disaster saga, so occasionally I wind up reloading with GeoLite or similar data. But since I only use the country data to determine how long and how wide a prefix will be blocked, and not if a prefix will be blocked, it’s mostly inconsequential. It’s just useful to be able to see where the nefarious traffic is coming from, through a geopolitical lens.

Hey U.S. (my own country): we shouldn’t throw stones while we live in a glass house. And if there’s anything we should do about big tech, I’d say regulating the weaponization of massive cloud computing resources would be a good start. Where do a lot of the U.S. probes come from? Amazon EC2, Google, Microsoft, DigitalOcean, linode, Oracle. The same holds true for probes from Canada and other parts of the western world. Some of these are legitimate research probes. However, to a large extent they’re indistinguishable from nefarious activity. And besides, I pay for my bandwidth at the end of a thin straw we call broadband in the U.S. I don’t want this traffic, yet I pay for it.

Sports thoughts of the day

Jan 5, 2023

Texas fired Chris Beard after a felony domestic violence charge. Maybe Texas Tech and Texas can now agree that Chris Beard might be a scumbag. And probably hangs out with scumbags.

Here’s the sign I want to see at games from both schools: “Chris Beard bites!!!” See the police report. Yikes.

There are scumbags in all walks of life. But I’d like to see us be less forgiving of scumbags on big stages in positions of leader/teacher/mentor, and of any man who bites his domestic partner in anger. That’s just crazy, right? As my friend Andy put it, “Biting Hall of Fame: Marv Albert, Mike Tyson, Hannibal Lecter.” All bat-poop-crazy in their own way.

Kudos to Texas for doing the right thing here. Boos for taking 3 weeks to do it.

I’d like the coach of my alma mater (Juwan Howard) to be fired for his inability to maintain a professional demeanor. He hasn’t advanced to scumbag status though, at least not yet. There’s the rub: yet. I don’t hate Howard, I just don’t trust him. Ticking time bomb. I miss John Beilein, a lot.

I put Harbaugh in the untrustworthy bucket too. He’s been there a long time, but the hypocritical righteousness while violating NCAA rules is what really rubs me the wrong way. Not to mention the stupidity: if you bought a hamburger for a recruit at the Brown Jug, it’s downright stupid to lie about it to an investigative committee. We’re not talking about fancy food here; Wendy’s makes better burgers. In fact, at my age, you’d have to pay me to eat a burger at the Brown Jug. It’s a college campus dive. I spent my fair share of time at the Brown Jug as a student, mostly eating eggs and pancakes late at night after a long study session with friends. But it’s not a place you take someone to impress them or sway them. It’s not even the institution that Crazy Jim’s or the Fleetwood is, never mind a fantastic food place like Zingerman’s.