Category: FreeBSD

July 18, 2016

Raspberry Pi garage door opener: part 2

I have a fully functioning software suite now for my garage door opener. I have been using a small simulator program on the Raspberry Pi to pull the pins up and down (using the pullup and pulldown resistors). Tonight I plugged in one of the actual rotary encoders, and it works fine. And now that I think about it, I don’t really need the optocouplers on the inputs, since I’m using encoders with NPN open collector outputs. All I need to do is enable the pull-up resistors. This is also true for the garage door closed switches. Hence I am going to draw up a second board with a lot fewer components. The component cost wasn’t significant for the board I have now, but it’ll save on my effort to populate the board. By dropping the optocouplers, I will eliminate 15 components. And technically I could probably eliminate the filtering capacitors too since the encoder cable is shielded. That would eliminate 4 more components.

I hate the amount of board space required for the relays, but I need them. I considered using MOSFETs or Darlingtons, but I decided it was just a bad idea to tie the Raspberry Pi ground to my garage door opener’s ground pin. It’d be a recipe for ground loop disasters. The relays keep the Raspberry Pi isolated. I am using relay drivers to drive the relays, which just saves on component count and board space.

I have a decent web interface now, which runs on my web server and communicates with the Raspberry Pi (encrypted). I have yet to implement the separate up/down logic, but since the web interface shows the movement of the door, it’s not strictly necessary. Door activation works, and I can see whether the door is opening or closing.

My code on the Raspberry Pi learns the door travel from a full open/close cycle, so the graphic in the web interface accurately reflects how far the door is open.
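The idea is simple in principle: record the encoder count at full closed and full open once, then map the live count onto that range. A minimal sketch of that mapping (names and structure are mine, not the actual server code):

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical sketch: learn total door travel from one full cycle,
// then map the current encoder count to a percent-open value for the UI.
class DoorTravel {
public:
    // Called after observing one full cycle from closed to open.
    void Learn(long countsClosed, long countsOpen) {
        closed_ = countsClosed;
        travel_ = countsOpen - countsClosed;
    }

    // Percent open, clamped to [0, 100]; returns 0 until travel is learned.
    int PercentOpen(long count) const {
        if (travel_ == 0) { return 0; }
        long pct = ((count - closed_) * 100) / travel_;
        return static_cast<int>(std::clamp(pct, 0L, 100L));
    }

private:
    long closed_ = 0;
    long travel_ = 0;
};
```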

July 4, 2016

rotary encoder driver for FreeBSD on Raspberry Pi

by dwm — Categories: FreeBSD, Software Development

I’m working on a rotary encoder driver for FreeBSD 11.0 on the Raspberry Pi.

Why?

I’m tired of consuming CPU to poll GPIO pins for the rotary encoders in my garage door opener project, and as yet there isn’t a mechanism for dealing with GPIO interrupts from user space on FreeBSD. Even with the elegant kqueue mechanism, I don’t really need to push edge interrupts to user space. All I really want is rotary encoder state transitions.
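For reference, the state-transition decoding a quadrature encoder driver has to do is compact. This is a generic sketch (not the driver code): channels A and B are 90° out of phase, so each valid transition of the 2-bit Gray code moves the count by ±1, and illegal double transitions can be ignored.

```cpp
#include <cassert>
#include <cstdint>

// Minimal quadrature decoder sketch. State is the 2-bit value (A<<1)|B;
// each valid Gray-code transition steps the count by +1 or -1, and
// invalid transitions (both channels changing at once) yield 0.
class QuadratureDecoder {
public:
    QuadratureDecoder(bool a, bool b) : state_(encode(a, b)) {}

    // Feed the new channel levels; returns -1, 0 or +1.
    int Update(bool a, bool b) {
        static const int8_t table[4][4] = {
            //       to: 00  01  10  11      from:
            /* 00 */   {  0, +1, -1,  0 },
            /* 01 */   { -1,  0,  0, +1 },
            /* 10 */   { +1,  0,  0, -1 },
            /* 11 */   {  0, -1, +1,  0 }
        };
        uint8_t next = encode(a, b);
        int step = table[state_][next];
        state_ = next;
        count_ += step;
        return step;
    }

    long Count() const { return count_; }

private:
    static uint8_t encode(bool a, bool b) { return (a << 1) | b; }
    uint8_t state_;
    long count_ = 0;
};
```

One full forward cycle (00 → 01 → 11 → 10 → 00) adds 4 to the count; the reverse sequence subtracts 4.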

There seems to be a quirk in the setup of GPIO interrupts on the BCM283[56] under FreeBSD. From a very quick scan of the data sheet, I should be able to have separate handlers for rising and falling edges. But FreeBSD doesn’t allow it. Specifically, gpio_alloc_intr_resource() fails for the second interrupt. I have worked around it for now, but I will hopefully find time to revisit this issue later.

I have the interrupt handling working; I tested it using a simple shell script to manipulate the pullup and pulldown resistors. I have two encoders configured, since I need one for each of my garage doors.


gpiorotenc0: on ofwbus0
gpiorotenc0: inputs on gpio0 pin 27, gpio0 pin 22
gpiorotenc1: on ofwbus0
gpiorotenc1: inputs on gpio0 pin 23, gpio0 pin 24
...
Jul 4 06:54:48 rpi2 kernel: gpiorotenc0: channel A value 1
Jul 4 06:54:48 rpi2 kernel: gpiorotenc0: channel A value 0
Jul 4 06:54:48 rpi2 kernel: gpiorotenc0: channel B value 1
Jul 4 06:54:48 rpi2 kernel: gpiorotenc0: channel B value 0
Jul 4 06:54:48 rpi2 kernel: gpiorotenc1: channel A value 1
Jul 4 06:54:48 rpi2 kernel: gpiorotenc1: channel A value 0
Jul 4 06:54:48 rpi2 kernel: gpiorotenc1: channel B value 1
Jul 4 06:54:48 rpi2 kernel: gpiorotenc1: channel B value 0

May 25, 2016

Raspberry Pi garage door opener

by dwm — Categories: embedded, FreeBSD, Software Development

In my spare time, I’ve been working on a Raspberry Pi garage door opener project.

The main goal is the ability to open or close my garage door from my iPhone or CarPlay. Unlike the existing projects I’ve seen, my solution has a rotary encoder on each door in addition to a door closed switch. The rotary encoders allow me to see if the door is moving, and hence close or open the door with a single button press. That is, instead of a direction-ignorant ‘Activate’ button, I can have separate ‘Open’ and ‘Close’ buttons that will do the right thing, even if the door initially moves in the wrong direction. And I can handle two doors with one Raspberry Pi. I could handle more, but I only need two.

Unlike some of the commercial solutions, my solution does not require cloud services. It will still work when my Internet connection is down. If I’m in WiFi range with my phone, I can open and close my garage doors. If my Internet connection is up, I’ll be able to do the same from anywhere in the world.

It is more secure than the average IoT solution. The authentication protocol uses 2048-bit RSA as the base encryption, over which I send an AES256-encrypted randomly generated challenge. The client must decrypt the challenge using their private RSA key, then decrypt the AES256-encrypted challenge with the shared secret key, then encrypt with my server’s public key and send the challenge response. Post-authentication, the session uses AES256. Sessions will typically be short-lived, and we never exchange any secrets. The server will automatically generate a new 2048-bit RSA key each week, which should prevent anyone from cracking the public key crypto and using it in my lifetime (I don’t see a usable quantum computer on the horizon, and the NSA would just break my door down instead of cracking my garage door opener).

I’m nearly done with the server side that runs on the Raspberry Pi. I’m running FreeBSD 11 on the Raspberry Pi, and my server is multithreaded in order to be able to monitor all of the sensors and be responsive to clients at the same time. It is written in modern C++ since that’s been my server language of choice for… 18 years? All unit tests are done for the code that is completed, and I have tested the server with a simple client. I have also tested the sensor inputs, but I’ve yet to send out my piggyback I/O board design for manufacturing. The industrial rotary encoders will arrive soon, so I’ll probably order my PCBs sometime in the next few weeks.

In order to make the iOS side of things easier, client/server messages are encapsulated in JSON. Since I’m using jsoncpp on the server side, it’s trivial to write a Qt app for the desktop or a Wt app for my web server if desired, since I’d just use jsoncpp in those cases too.
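To give a flavor of the encapsulation, a door-status message might look something like this (the field names here are purely illustrative, not the actual protocol):

```json
{
  "msgType": "doorStatus",
  "door": 0,
  "state": "opening",
  "percentOpen": 42
}
```

JSON keeps the wire format trivially parseable on iOS, in Qt, and in Wt alike, without hand-rolled serialization on any side.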

Crypto is still ugly to do with Swift, but I’ll manage. I’m going to try to keep my Objective-C and other non-Swift code to a minimum on the iOS side of things.

I intend to allow configuration from a client. The server code to support this is not complete, but the protocol work is in place.

Probably worth noting that I created a separate libDwmAuth project so I can reuse the authentication and encryption in other IoT projects. No cloud needed, no clear text flying around my wired or wireless networks.

April 24, 2016

creating FreeBSD packages without ports

by dwm — Categories: FreeBSD, Software Development

This weekend I spent some time working on creating FreeBSD packages of my software.

Background…

Way back when, I would use epm. It worked up until FreeBSD switched to pkgng, which was a long time ago. It looks like the author of epm has no interest in updating epm to create native packages on FreeBSD. Probably in no small part because pkgng changed things dramatically. It does make me wonder why epm is still in the ports tree, since its primary facility has not worked on FreeBSD for a long time.

At any rate, pkgng’s ‘pkg create …’ needs a manifest file in order to do what I need. It’s a relatively simple file, though it appears that it’s overly forgiving of missing/present quotes and commas (i.e. the grammar isn’t very rigorous). I’ll blame YAML here.
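For the curious, a minimal manifest for ‘pkg create’ looks roughly like the following (all values here are placeholders, and depending on pkg version some fields may be optional):

```
name: "dwm-example"
version: "1.0.0"
origin: "local/dwm-example"
comment: "Example package"
desc: "A hypothetical minimal manifest for pkg create."
maintainer: "dwm@example.org"
www: "http://www.example.org/"
prefix: "/usr/local"
```

As noted above, pkg is fairly forgiving about quoting and trailing commas in this file, which makes the grammar feel looser than it should be.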

While I’d like to use libpkg to do what I need, the important data structures are in a private header file. Which presumably means it’s subject to change. And there appears to be no good way to get at the contents of a manifest without using the private header file. Though I’ve yet to look at using libucl to parse a manifest file.

I wrote my own FreeBSDPkg::Manifest class and helper classes, along with a lexer/parser for manifest files using flex and bison. The parser can populate a Manifest object from a manifest file. The manifest can then be manipulated as desired, and emitted to an ostream. For my own software packages, this will allow me to create a skeletal manifest file from the build, then further populate it with my files, then create a native FreeBSD package. What I have today works on libDwm, and the next release of libDwm will include the classes and some supporting applications.

August 21, 2015

mcblock examples

I recently wrote about a new utility I created to help manage my pf rules, called mcblock. Thus far the most useful part has been the automation of rule addition by grokking logs.

For example, it can parse auth.log on FreeBSD and automatically add entries to my pf rule database. Before adding the entries, it can show you what it would do:

# bzcat /var/log/auth.log.0.bz2 | mcblock -O - 
109.24.194.41        194 hits
  add 109.24.194/24 30 days
103.25.133.151         3 hits
  add 103.25.133/24 30 days
210.151.42.215         3 hits
  add 210.151.42/24 30 days

What I’ve done here is uncompress auth.log.0.bz2 to stdout and pipe it to mcblock to see what it would do. mcblock shows that it would add three entries to my pf rule database, each with an expiration 30 days in the future. I can change the number of days with the -d command line option:

# bzcat /var/log/auth.log.0.bz2 | mcblock -d 60 -O -
109.24.194.41        194 hits
  add 109.24.194/24 60 days
103.25.133.151         3 hits
  add 103.25.133/24 60 days
210.151.42.215         3 hits
  add 210.151.42/24 60 days

By default, mcblock uses a threshold of 3 entries from a given offending IP address in a log file. This can be changed with the -t argument:

# bzcat /var/log/auth.log.0.bz2 |  mcblock -t 1 -O - 
109.24.194.41        194 hits
  add 109.24.194/24 30 days
103.25.133.151         3 hits
  add 103.25.133/24 30 days
210.151.42.215         3 hits
  add 210.151.42/24 30 days
31.44.244.11           2 hits
  add 31.44.244/24 30 days

If I’m happy with these actions, I can tell mcblock to execute them:

# bzcat /var/log/auth.log.0.bz2 | mcblock -t 1 -A -

And then look at one of the entries it added:

# mcblock -s 31.44.244/24
31.44.244.0/24     2015/08/21 - 2015/09/20

This particular address space happens to be from Russia, and is allocated as a /23. So let’s add the /23:

# mcblock -a 31.44.244/23

And then see what entries would match 31.44.244.11:

# mcblock -s 31.44.244.11
31.44.244.0/23     2015/08/21 - 2015/09/20

The /24 was replaced by a /23. Let’s edit this entry to add the registry and the country, and extend the time period:

# mcblock -e 31.44.244/23
start time [2015/08/21 04:37]: 
end time [2015/09/20 04:37]: 2016/02/21 04:37
registry []: RIPE
country []: RU
Entry updated.

And view again:

# mcblock -s 31.44.244.11
31.44.244.0/23     2015/08/21 - 2016/02/21 RIPE     RU

August 21, 2015

mcblock: new code for pf rule management from a ‘lazy’ programmer

Good programmers are lazy. We’ll spend a good chunk of time writing new/better code if we know it will save us a lot of time in the future.

Case in point: I recently completely rewrote some old code I use to manage the pf rules on my gateway. Why? Because I had been spending too much time doing things that could be done automatically by software with just a small bit of intelligence. Basically codifying the things I’ve been doing manually. And also because I’m lazy, in the way that all good programmers are lazy.

Some background…

I’m not the type of person who fusses a great deal about the security of my home network. I don’t have anything to hide, and I don’t have a need for very many services. However, I know enough about Internet security to be wary and to at least protect myself from the obvious. And I prefer to keep out the hosts that have no need to access anything on my home network, including my web server. And a very long time ago, I was a victim of an SSH-v1 issue and someone from Romania set up an IRC server on my gateway while I was on vacation in the Virgin Islands. I don’t like someone else using my infrastructure for nefarious purposes.

At the time, it was almost humorous how little the FBI knew about the Internet (next to nothing). I’ll never forget how puzzled the agents were at my home when I was explaining what had happened. The only reason I had called them was because the perpetrator managed to get a credit card number from us (presumably by a man-in-the-middle attack) and used it to order a domain name and hosting services. At the time I had friends with fiber taps at the major exchanges and managed to track down some of his traffic and eventually a photo of him and his physical address (and of course I had logged a lot of the IRC traffic before I completely shut it down). Didn’t do me any good since he was a Russian minor living in Romania. My recollection is hazy, but I think this was circa 1996. I know it was before SSH-v2, and that I was still using Kerberos where I could.

Times have changed (that was nearly 20 years ago). But I continue to keep a close eye on my Internet access. I will never be without my own firewall with all of the flexibility I need.

For a very long time, I’ve used my own software to manage the list of IP prefixes I block from accessing my home network. Way back when, it was hard: we didn’t have things like pf. But all the while I’ve had some fairly simple software to help me manage the list of IP prefixes that I block from accessing my home network and simple log grokking scripts to tell me what looks suspicious.

Way back when, the list was small. It grew slowly for a while, but today it’s pretty much non-stop. And I don’t think of myself as a desirable target. Which probably means that nearly everyone is under regular probing and weak attack attempts.

One interesting thing I’ve observed over the last 5 years or so… the cyberwarfare battle lines could almost be drawn from a very brief lesson on WWI, WWII and the Cold War, with maybe a smattering of foreign policy SNAFUs and socialism/communism versus capitalism and East versus West. In the last 5 years, I’ve primarily seen China, Russia, Italy, Turkey, Brazil and Colombia address space in my logs, with a smattering of former Soviet bloc countries, Iran, Syria and a handful of others. U.S. based probes are a trickle in comparison. It’s really a sad commentary on the human race, to be honest. I would wager that the countries in my logs are seeing the opposite directed at them: most of their probes and attacks are likely originating from the U.S. and its old WWII and NATO allies. Sigh.

Anyway…

My strategy

For about 10 years I’ve been using code I wrote that penalizes repeat attackers by doubling their penalty time each time their address space is re-activated in my blocked list. This has worked well; the gross repeat offenders wind up being blocked for years, while those who only knock once are only blocked for a long enough time to thwart their efforts. Many of them move on and never return (meaning I don’t see more attacks from their address space for a very long time). Some never stop, and I assume some of those are state-sponsored, i.e. they’re being paid to do it. Script kiddies don’t spend years trying to break into the same tiny web site nor years scanning gobs of broadband address space. Governments are a different story with a different set of motivations that clearly don’t go away for decades or even centuries.
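The doubling policy itself is tiny; a sketch of the idea (the names here are mine, not the actual code's):

```cpp
#include <cassert>

// Sketch of the penalty-doubling policy described above: each time a
// prefix re-offends and its entry is re-activated, its block period
// doubles relative to the previous one.
class PenaltyClock {
public:
    explicit PenaltyClock(int baseDays = 30) : baseDays_(baseDays) {}

    // Block length in days for the given offense count (1 = first offense).
    long DaysForOffense(int offenseCount) const {
        long days = baseDays_;
        for (int i = 1; i < offenseCount; ++i) {
            days *= 2;   // doubled on each re-activation
        }
        return days;
    }

private:
    int baseDays_;
};
```

A first-time offender gets the 30-day base period, while a fifth-time offender is blocked for 480 days; gross repeat offenders quickly end up blocked for years.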

The failings

The major drawback to what I’ve been doing for years: too much manual intervention, especially adding new entries. It doesn’t help that there is no standard logging format for various externally-facing services and that the logging isn’t necessarily consistent from one version to the next.

My primary goal was to automate the drudgery and replace the SQL database with something lighter and speedier, while leveraging code and ideas that have worked well for me. I created mcblock as a simple set of C++ classes and a single command-line application to serve the purpose of grokking logs and automatically adding to my pf rules.

Automation

  • I’m not going to name all the ways in which I automatically add offenders, but I’ll mention one: I parse auth.log.0.bz2 every time newsyslog rolls over auth.log. This is fairly easy on FreeBSD, see the entry regarding the R flag and path_to_pid_cmd_file in the newsyslog.conf(5) manpage. Based on my own simple heuristics, those who've been offensive will be blocked for at least 30 days. Longer if they're repeat offenders, and I will soon add policy to permit more elaborate qualifications. What I have today is fast and effective, but I want to add some feeds from my probe detector (reports on those probing ports on which I have nothing listening) as well as from pflog. I can use those things today to add entries or re-instantiate expired entries, but I want to be able to extend the expiration time of existing active entries for those who continue to probe for days despite not receiving any response packets.
  • My older code used an SQL database, which was OK for most things but made some operations difficult on low-power machines. For example, I like to be able to automatically coalesce adjacent networks before emitting pf rules; it makes the pf rules easier to read. For example, if I already have 5.149.104/24 in my list and I add 5.149.105/24, I prefer emitting a single rule for 5.149.104/23. And if I add 5.149.105/24 but I have an inactive (expired) rule for 5.149.104/22, I prefer to reactivate the 5.149.104/22 rule rather than add a new rule specifically for 5.149.105/24. My automatic additions always use /24's, but once in a while I will manually add wider rules knowing that no one from a given address space needs access to anything on my network or the space is likely being used for state-sponsored cyberattacks. Say Russian government address space, for example; there's nothing a Russian citizen would need from my tiny web site and I certainly don't have any interest in continuous probes from any state-sponsored foreign entity.
  • Today I'm using a modified version of my Ipv4Routes class template to hold all of the entries. Modified because my normal Ipv4Routes class template uses a vector of unordered_map under the hood (to allow millions of longest-match IPv4 address lookups per second), but I need ordering and also a smaller memory footprint for my pf rule generation. While it's possible to reduce the memory footprint of unordered_map by increasing the load factor, it defeats the purpose (slows it down) when your hash key population isn't well-known and you still wind up with no ordering. Ordering allows the coalescing of adjacent prefixes to proceed quickly, so my modified class template uses map in place of unordered_map. Like my original Ipv4Routes class template, I have separate maps for each prefix length, hence there are 33 of them. Of course I don't have a use for /0, but it's there. I also typically don't have a use for the /32 map, but it's also there. Having the prefix maps separated by netmask length makes it easy to understand how to find wider and narrower matches for a given IP address or prefix, and hence write code that coalesces or expands prefixes. And it's more than fast enough for my needs: it will easily support hundreds of thousands of lookups per second, and I don't need it to be anywhere near as fast as it is. But I only had to change a couple of lines of my existing Ipv4Routes class template to make it work, and then added the new features I needed.
  • I never automatically remove entries from the new database. That's because historical information is useful and the automation can re-activate an existing but expired entry that might be a wider prefix than what I would allow automation to do without such information. While heuristics can do some of this fairly reliably, expired entries in the database serve as additional data for heuristics. If I've blocked a /16 before, seeing nefarious traffic from it again can (and usually should) trigger reactivation of a rule for that /16. And then there are the things like bogons and private space that should always be available for reactivation if I see packets with source addresses from those spaces coming in on an external interface.
  • Having this all automated means I now spend considerably less time updating my pf rules. Formerly I would find myself manually coalescing the database, deciding when I should use a wider prefix, reading the daily security email from my gateway to make sure I wasn't missing anything, etc. Since I now have unit tests and a real lexer/parser for auth.log, and pf entries are automatically updated and coalesced regularly, I can look at things less often and at my leisure while knowing that at least most of the undesired stuff is being automatically blocked soon after it is identified.
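The coalescing check at the heart of the 5.149.104/24 + 5.149.105/24 → 5.149.104/23 example above is a simple bit test: two prefixes of length L merge into one of length L−1 exactly when they differ only in bit L. A sketch (this is the idea, not the Ipv4Routes implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <utility>

// Sketch of adjacent-prefix coalescing: two prefixes of the same length
// L collapse into a single prefix of length L-1 when they agree on the
// first L-1 bits and differ in bit L.
using Prefix = std::pair<uint32_t, int>;  // (network address, mask length)

std::optional<Prefix> Coalesce(const Prefix &a, const Prefix &b)
{
    if (a.second != b.second || a.second == 0) {
        return std::nullopt;
    }
    int len = a.second;
    // Netmask for the one-bit-wider prefix (guard the len==1 case,
    // where a shift by 32 would be undefined).
    uint32_t widerMask = (len == 1) ? 0 : (~0u << (32 - (len - 1)));
    if ((a.first & widerMask) == (b.first & widerMask) && a.first != b.first) {
        return Prefix{a.first & widerMask, len - 1};
    }
    return std::nullopt;
}
```

With separate ordered maps per prefix length, candidates for this test are always adjacent map entries, which is why the coalescing pass proceeds quickly.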

Good programmers are lazy. A few weekends of work is going to save me a lot of time in the future. I should've cobbled this up a long time ago.

December 22, 2012

Site Health page now shows UPS status and history

I am now collecting UPS status data from my UPS, and there is a new plot on the Site Health page that displays it. I still need to make these plots work more like those on the Site Traffic page, but having the UPS data for battery charge level, UPS load, expected runtime on battery and power consumption is useful to me. I currently have 3 computers plus some other gear running from one UPS, but soon will move a few things to a second UPS to increase my expected on-battery runtime a bit.

May 9, 2012

Measuring TCP round-trip times, part 5: first round of clean-ups

I added the ability to toggle a chart’s y-axis scale between linear and logarithmic. This has been deployed on the Site Traffic page.

Code cleanup… when I added the round-trip time plot, I wound up creating a lot of code that is largely a duplicate of code I had for the traffic plot. Obviously there are differences in the data and the presentation, but much of it is similar or the same. Tonight I started looking at trickling common functionality into base classes, functions and possibly a template or two.

I started with the obvious: there’s little sense in having a lot of duplicate code for the basics of the charts. While both instances were of type Wt::Chart::WCartesianChart, I had separate code to set things like the chart palette, handle a click on the chart widget, etc. I’ve moved the common functionality into my own chart class. It’s likely I’ll later use this class on my Site Health page.

May 6, 2012

Measuring TCP round-trip times, part 4: plotting the data

by dwm — Categories: FreeBSD, Software Development, Web Development

I added a new plot to the Site Traffic page. This is just another Wt widget in my existing Wt application that displays the traffic chart. Since Wt does not have a box plot, I’m displaying the data as a simple point/line chart. There’s a data series for each of the minimum, 25th percentile, median, 75th percentile and 95th percentile. These are global round-trip time measurements across all hosts that accessed my web site. In a very rough sense, they represent the network distance of the clients of my web site. It’s worth noting that the minimum line typically represents my own traffic, since my workstation shares an ethernet connection with my web server.

Clicking on the chart will display the values (in a table below the chart) for the time that was clicked. I added the same function to the traffic chart while I was in the code. I also started playing with mouse drag tracking so I can later add zooming.

May 5, 2012

Measuring TCP round-trip times, part 3: data storage classes

I’ve completed the design and initial implementation of some C++ classes for storage of TCP round trip data. These classes are simple, especially since I’m leveraging functionality from the Dwm::IO namespace in my libDwm library.

The Dwm::WWW::TCPRoundTrips class is used to encapsulate a std::vector of round trip times. Each round trip time is represented by a Dwm::TimeValue (class from libDwm). I don’t really care about the order of the entries in the vector, since a higher-level container holds the time interval in which the measurements were taken. Since I don’t care about the order of entries in the vector, I can use mutating algorithms on the vector when desired.

The Dwm::WWW::TCPHostRoundTrips class contains a std::map of the aforementioned Dwm::WWW::TCPRoundTrips objects, keyed by the remote host IP address (represented by Dwm::Ipv4Address from libDwm). An instance of this class is used to store all round trip data during a given interval. This class also contains a Dwm::TimeInterval (from my libDwm library) representing the measurement interval in which the round trip times were collected.

Both of these classes have OrderStats() members, which fetch order statistics from the encapsulated data. I’m hoping to develop a box plot class for Wt in order to display the order statistics.
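Since entry order in the vector doesn't matter, an order statistic can be fetched with a mutating selection algorithm rather than a full sort. A sketch of how such a fetch might look (this is my illustration, not the actual OrderStats() implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch: fetch a single order statistic (nearest-rank style) from a
// vector of round-trip samples. std::nth_element partially reorders the
// vector, which is fine here since sample order is irrelevant, and it
// runs in O(n) on average instead of the O(n log n) of a full sort.
double Percentile(std::vector<double> &samples, double p)
{
    // p in [0,1]: 0.0 = minimum, 0.5 = median, 1.0 = maximum.
    size_t idx = static_cast<size_t>(p * (samples.size() - 1));
    std::nth_element(samples.begin(), samples.begin() + idx, samples.end());
    return samples[idx];
}
```

Calling this for p = 0, 0.25, 0.5, 0.75 and 0.95 yields exactly the five series plotted on the Site Traffic page.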

© 2020 rfdm blog
All rights reserved