New IPv4 and IPv6 container templates

I’ve spent a little time working on some new slimmed-down C++ containers keyed by IPv4 addresses, IPv6 addresses, IPv4 prefixes, and IPv6 prefixes. The containers keyed by prefixes allow longest-match searching by address, as one would expect.

My main objective here was to minimize the amount of code I need to maintain, by leveraging the C++ standard library and existing classes and class templates in libDwm. A secondary objective was to make sure the containers are fast enough for my needs. A third objective was to make the interfaces thread safe.

I think I did OK on the minimal-code front. For example, DwmIpv4PrefixMap.hh is only 102 lines of code (I haven’t added I/O functionality yet). DwmIpv6PrefixMap.hh is 185 lines of code, including I/O functionality. Obviously they leverage existing code (Ipv4Prefix, Ipv6Prefix, et al.).

The interfaces are thread safe. I’m in the process of switching them from mutex and lock_guard to shared_mutex and shared_lock/unique_lock.

Performance-wise, it looks pretty good. I’m using prefix dumps from routeviews to have realistic data for my unit tests. On my Threadripper 3960X development machine running Ubuntu 20.04:


 % ./TestIpv4AddrMap -p
 831,915 addresses, 7,380,956 inserts/sec
 831,915 addresses, 16,641,961 lookups/sec
 831,915 addresses, 9,032,736 removals/sec
 831,915 addresses, 8,249,196 inserts/sec (bulk lock)
 831,915 addresses, 54,097,737 lookups/sec (bulk lock)
 831,915 addresses, 9,489,272 removals/sec (bulk lock)
 831,918/831,918 passed

 % ./TestIpv4PrefixMap -p
 901,114 prefixes, 6,080,842 prefix inserts/sec
 901,114 prefixes, 14,639,881 prefix lookups/sec
 901,114 addresses, 5,105,259 longest match lookups/sec
 901,114 prefixes, 6,378,710 prefix inserts/sec (bulk lock)
 901,114 prefixes, 25,958,230 prefix lookups/sec (bulk lock)
 901,114 addresses, 5,368,727 longest match lookups/sec (bulk lock)
 1,802,236/1,802,236 passed

 % ./TestIpv6AddrMap -p
 104,970 addresses, 11,360,389 inserts/sec
 104,970 addresses, 15,206,431 lookups/sec
 104,970 addresses, 9,159,685 removals/sec
 104,970 addresses, 12,854,518 inserts/sec (bulk lock)
 104,970 addresses, 20,434,105 lookups/sec (bulk lock)
 104,970 addresses, 10,302,286 removals/sec (bulk lock)
 104,976/104,976 passed

 % ./TestIpv6PrefixMap -p
 110,040 prefixes, 11,181,790 prefix lookups/sec
 110,040 prefixes, 1,422,403 longest match lookups/sec
 440,168/440,168 passed


What is ‘bulk lock’? The interfaces allow the caller to acquire a shared or unique lock and then perform multiple operations while holding it. As the numbers above show, this doesn’t make a huge difference for insertion or removal of entries, where the time is dominated by work other than locking and unlocking. It does make a significant difference for exact-match lookups. One must be careful with the bulk interfaces to avoid deadlock, of course, but they are useful in some scenarios.

The best part, IMHO, is that these are fairly thin wrappers around std::unordered_map. That means I don’t have my own hash table or trie code to maintain, and I can count on std::unordered_map behaving in a well-defined manner, since it’s part of the C++ standard library. This is not the fastest means of providing longest-match lookups, but from my perspective as maintainer, it’s a small amount of code and fast enough for my needs.