Order placed March 19. A Prime item.
It’s now March 27. USPS hasn’t even updated the tracking in 6 days.
Roughly 50% of my orders that wind up in USPS’ hands get lost.
I don’t think I’ve ever seen such a disingenuous paragraph in a full-page newspaper ad as this one from Facebook in their ongoing attack on Apple:
Apple’s change will limit their ability to run personalized ads. To make ends meet, many will have to start charging you subscription fees or adding more in-app purchases, making the internet much more expensive and reducing high-quality free content.
Let’s be clear here. For one… there is literally NO free content on Facebook. And very little of it is high-quality. That which is, does not come from Facebook. They are not a company of journalists and writers.
Newsflash for those who’ve been under a rock for the last 25 years… the Internet has always been expensive. The real issues here:
1) who’s profiting?
2) in what currency?
3) is the transaction clear and transparent?
There are many companies profiting from the existing model of ‘free’ Internet. But it’s not small businesses. It’s Google, Facebook and a trove of others (Apple included).
On the currency and transparency… Facebook is far and away the worst here. They despise transparency. Apple wants to expose their users to what Facebook is collecting from you, the product, and let you choose whether or not you’d like to participate. You can choose to opt in. Facebook is worried that many will opt out once they understand what Facebook is doing. Not an unjust fear, but it’s yet to be seen how it will play out.
Apple’s motivation is coming from its customers. They (and I’m one of them) want these options. They’re one of the reasons we choose to buy iOS devices instead of Android devices. I don’t want targeted advertising. In fact, at this point I’ve been using the Internet for 30 years and I’m essentially blind to all online advertising; my brain has a highly-trained ad-ignoring filter. I don’t want large corporations tracking my every move online. Especially without transparency. Heaven forbid that I be willing to pay Apple for a device that allows me to protect some of my privacy!
Facebook’s motivations are at least partly coming from their customers too. But you, the end user, are NOT their customer. The advertisers are their customers. You are their product. I don’t quite get why Facebook tries to deny this; without you (the end user) and all the data they collect on you… they have no product to sell to advertisers. They’d have to change their business model. Perhaps charge a subscription fee. And for most of us… Facebook is definitely not something we’d knowingly pay ‘real’ money to use. But if you’re a Facebook user, you ARE paying for it. With your privacy and your time. And possibly your mental health. And maybe even your data plan.
And Facebook knows this to be true.
Beyond hurting apps and websites, many in the small business community say this change will be devastating for them too, at a time when they face enormous challenges. They need to be able to effectively reach the people most interested in their products and services to grow.
LMAO. “Hurting apps and websites”. Could you be more ambiguous? Oh, I see… you mean facebook.com. Sorry, I forgot for a moment that Google and Facebook have _decimated_ many small businesses as well as some large ones (news broadcasters, journalists, ad agencies, large newspapers, local sign makers…).
Again… you, the end user, are not the customer. The advertisers are the customers.
Forty-four percent of small to medium businesses started or increased their usage of personalized ads on social media during the pandemic, according to a new Deloitte study. Without personalized ads, Facebook data shows that the average small business advertiser stands to see a cut of over 60% in their sales for every dollar they spend.
In other words… once users understand what Facebook is doing, most will opt out?
I’ve spent a little bit of time working on some new slimmed-down C++ containers keyed by IPv4 addresses, IPv6 addresses, IPv4 prefixes and IPv6 prefixes. The containers that are keyed by prefixes allow longest-match searching by address, as would be expected.
My main objective here was to minimize the amount of code I need to maintain, by leveraging the C++ standard library and existing classes and class templates in libDwm. A secondary objective was to make sure the containers are fast enough for my needs. A third objective was to make the interfaces thread safe.
I think I did OK on the minimal code front. For example, DwmIpv4PrefixMap.hh is only 102 lines of code (I haven’t added I/O functionality yet). DwmIpv6PrefixMap.hh is 185 lines of code, including I/O functionality. Obviously they leverage existing code (Ipv4Prefix, Ipv6Prefix, et al.).
The interfaces are thread safe. I’m in the process of switching them from mutex and lock_guard to shared_mutex and shared_lock/unique_lock.
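To make the idea concrete, here is a minimal sketch of the general shape of such a wrapper, using shared_mutex so readers can proceed concurrently while writers get exclusive access. This is illustrative only (a map keyed by a raw 32-bit address, with names of my own choosing), not the actual libDwm interface:

```cpp
#include <cstdint>
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <unordered_map>

// Illustrative sketch (not the actual libDwm code) of a thread-safe map
// keyed by a 32-bit IPv4 address.  Readers take a shared_lock so many
// lookups can run concurrently; writers take a unique_lock.
template <typename V>
class Ipv4AddrMap {
public:
  void Add(uint32_t addr, const V & value) {
    std::unique_lock lock(_mtx);    // exclusive: mutates the map
    _map[addr] = value;
  }
  std::optional<V> Find(uint32_t addr) const {
    std::shared_lock lock(_mtx);    // shared: read-only access
    auto it = _map.find(addr);
    if (it != _map.end()) { return it->second; }
    return std::nullopt;
  }
  bool Remove(uint32_t addr) {
    std::unique_lock lock(_mtx);
    return _map.erase(addr) > 0;
  }
private:
  mutable std::shared_mutex           _mtx;   // mutable: locked in const members
  std::unordered_map<uint32_t,V>      _map;
};
```

The win over a plain mutex shows up under read-heavy workloads, where multiple threads can hold the shared lock simultaneously.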
Performance-wise, it looks pretty good. I’m using prefix dumps from routeviews to have realistic data for my unit tests. On my Threadripper 3960X development machine running Ubuntu 20.04:
% ./TestIpv4AddrMap -p
831,915 addresses, 7,380,956 inserts/sec
831,915 addresses, 16,641,961 lookups/sec
831,915 addresses, 9,032,736 removals/sec
831,915 addresses, 8,249,196 inserts/sec (bulk lock)
831,915 addresses, 54,097,737 lookups/sec (bulk lock)
831,915 addresses, 9,489,272 removals/sec (bulk lock)
831,918/831,918 passed
% ./TestIpv4PrefixMap -p
901,114 prefixes, 6,080,842 prefix inserts/sec
901,114 prefixes, 14,639,881 prefix lookups/sec
901,114 addresses, 5,105,259 longest match lookups/sec
901,114 prefixes, 6,378,710 prefix inserts/sec (bulk lock)
901,114 prefixes, 25,958,230 prefix lookups/sec (bulk lock)
901,114 addresses, 5,368,727 longest match lookups/sec (bulk lock)
1,802,236/1,802,236 passed
% ./TestIpv6AddrMap -p
104,970 addresses, 11,360,389 inserts/sec
104,970 addresses, 15,206,431 lookups/sec
104,970 addresses, 9,159,685 removals/sec
104,970 addresses, 12,854,518 inserts/sec (bulk lock)
104,970 addresses, 20,434,105 lookups/sec (bulk lock)
104,970 addresses, 10,302,286 removals/sec (bulk lock)
104,976/104,976 passed
% ./TestIpv6PrefixMap -p
110,040 prefixes, 11,181,790 prefix lookups/sec
110,040 prefixes, 1,422,403 longest match lookups/sec
440,168/440,168 passed
What is ‘bulk lock’? The interfaces allow one to get a shared or unique lock and then perform multiple operations while holding the lock. As seen above, this doesn’t make a huge difference for insertion or removal of entries, where the time is dominated by operations other than locking and unlocking. It does make a significant difference for exact-match searches. One must be careful using the bulk interfaces to avoid deadlock, of course. But they are useful in some scenarios.
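The bulk-lock idea can be sketched like this. The method names here (ReadLock, FindNoLock) are my own placeholders, not the real interface:

```cpp
#include <cstdint>
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <unordered_map>

// Sketch of the 'bulk lock' pattern: the caller takes the lock once,
// then performs many operations via unlocked accessors.  Illustrative
// names only; not the real libDwm interface.
template <typename V>
class BulkLockMap {
public:
  // Caller holds the returned lock for the duration of a batch of reads.
  std::shared_lock<std::shared_mutex> ReadLock() const {
    return std::shared_lock(_mtx);
  }
  // Precondition: caller holds a lock obtained from ReadLock().
  std::optional<V> FindNoLock(uint32_t addr) const {
    auto it = _map.find(addr);
    if (it != _map.end()) { return it->second; }
    return std::nullopt;
  }
  void Add(uint32_t addr, const V & value) {
    std::unique_lock lock(_mtx);
    _map[addr] = value;
  }
private:
  mutable std::shared_mutex       _mtx;
  std::unordered_map<uint32_t,V>  _map;
};
```

Locking once and probing many times eliminates a lock/unlock pair per lookup, which lines up with the bulk-lock lookup numbers being roughly 2X to 3X the per-call numbers while inserts and removals barely move.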
The best part, IMHO, is that these are fairly thin wrappers around std::unordered_map. Meaning I don’t have my own hash table or trie code to maintain, and I can count on std::unordered_map behaving in a well-defined manner due to it being part of the C++ standard library. It is not the fastest means of providing longest-match lookups. However, from my perspective as maintainer… it’s a small bit of code, and fast enough for my needs.
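For the curious: one common way to get longest-match out of plain hash maps is to keep one map per prefix length and probe from most-specific to least-specific. A sketch under that assumption (the real Ipv4PrefixMap code may do it differently):

```cpp
#include <array>
#include <cstdint>
#include <optional>
#include <unordered_map>

// Longest-prefix match built on std::unordered_map: one map per prefix
// length, probed from /32 down to /0.  The first hit is the longest
// match.  Illustrative sketch, not the libDwm implementation.
class Ipv4LongestMatch {
public:
  void Add(uint32_t addr, int len, int value) {
    _maps[len][addr & Mask(len)] = value;
  }
  std::optional<int> FindLongest(uint32_t addr) const {
    for (int len = 32; len >= 0; --len) {
      const auto & m = _maps[len];
      if (m.empty()) { continue; }        // no prefixes of this length
      auto it = m.find(addr & Mask(len));
      if (it != m.end()) { return it->second; }
    }
    return std::nullopt;
  }
private:
  static uint32_t Mask(int len) {
    // Avoid the undefined 32-bit shift by 32 when len == 0.
    return len ? (0xFFFFFFFFu << (32 - len)) : 0u;
  }
  std::array<std::unordered_map<uint32_t,int>,33> _maps;
};
```

The worst case is 33 hash probes per lookup, which is why this approach trails a tuned trie, but it keeps the code small and leaves all the hash-table correctness to the standard library.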
I recently assembled a new workstation for home. My primary need was a machine for software development, including deep learning. This machine is named “thrip”.
Having looked hard at my options, I decided on AMD Threadripper 3960X as my CPU. A primary driver was of course bang for the buck. I wanted PCIe 4.0, at least 18 cores, at least 4-channel RAM, the ability to utilize 256G or more of RAM, and to stay in budget.
By CPU core count alone, the 3960X is over what I needed. On the flip side, it’s constrained to 256G of RAM, and it’s also more difficult to keep cool than most CPUs (280W TDP). But on price-per-core, and overall performance per dollar, it was the clear winner for my needs.
Motherboard-wise, I wanted 10G ethernet, some USB-C, a reasonable number of USB-A ports, room for 2 large GPUs, robust VRM, and space for at least three NVMe M.2 drives. Thunderbolt 3 would have been nice, but none of the handful of TRX40 boards seem to officially support it (I don’t know if this is an Intel licensing issue or something else). The Gigabyte board has the header and Wendell@Level1Techs seems to have gotten it working, but I didn’t like other aspects of the Gigabyte TRX40 AORUS EXTREME board (the XL-ATX form factor, for example, is still limiting in terms of case options).
I prefer to build my own workstations. It’s not due to being particularly good at it, or winding up with something better than I could get pre-built. It’s that I enjoy the creative process of selecting parts and putting it all together.
I had not assembled a workstation in quite some time. My old i7-2700K machine has met my needs for most of the last 8 years. And due to a global pandemic, it wasn’t a great time to build a new computer. The supply chain has been troublesome for over 6 months now, especially for some specific parts (80+ Titanium PSUs of 1000W and above, for example). We’ve also had a huge availability problem for the current GPUs from NVIDIA (RTX 3000 series) and AMD (Radeon 6000 series). And I wasn’t thrilled about doing a custom water-cooling loop again, but I couldn’t find a worthy quiet cooling solution for Threadripper and 2080ti without going custom loop. Given the constraints, I wound up with these parts as the guts:
It’s all in a Lian Li PC-O11D XL case. I have three 360mm radiators, ten Noctua 120mm PWM fans, an EK Quantum Kinetic TBE 200 D5 PWM pump, PETG tubing and a whole bunch of Bitspower fittings.
My impressions thus far: it’s fantastic for Linux software development. It’s so nice to be able to run ‘make -j40‘ on large C++ projects and have them complete in a timely manner. And thus far, it runs cool and very quiet.
A bit of history…
I started my computing career at NSFNET at the end of 1991, which then became ANSnet. In those days, we had a home-brewed network monitoring system. I believe most or all of it was originally the brainchild of Bill Norton. Later there were several contributors: Linda Liebengood, myself, and others. The important thing for today’s thoughts: it was named “rover”, and its user interface philosophy was simple but important: “Only show me actionable problems, and do it as quickly as possible.”
To understand this philosophy, you have to know something about the primary users: the network operators in the Network Operations Center (NOC). One of their many jobs was to observe problems, perform initial triage, and document their observations in a trouble ticket. From there they might fix the problem, escalate to network engineering, etc. But it wasn’t expected that we’d have some omniscient tool that could give them all of the data they (or anyone else) needed to resolve the problem. We expected everyone to use their brains, and we wanted our primary problem reporter to be fast and as clutter-free as possible.
For decades now, I’ve spent a considerable amount of time working at home. Sometimes because I was officially telecommuting, at other times just because I love my work and burn midnight hours doing it. As a result, my home setup has become more complex over time. I have 10 gigabit ethernet throughout the house (some fiber, some Cat6A). I have multiple 10 gigabit ethernet switches, all managed. I have three rackmount computers in the basement that run 7×24. I have ZFS pools on two of them, used for nightly backups of all networked machines, source code repository redundancy, Time Machine for my macOS machines, etc. I run my own DHCP service, an internal DNS server, web servers, an internal mail server, my own automated security software to keep my pf tables current, Unifi, etc. I have a handful of Raspberry Pis doing various things. Then there’s all the other devices: desktop computers in my office, a networked laser printer, Roku, AppleTV, Android TV, Nest thermostat, Nest Protects, WiFi access points, laptops, tablet, phone, watch, Ooma, etc. And the list grows over time.
Essentially, my home has become somewhat complex. Without automation, I spend too much time checking the state of things or just being anxious about not having time to check everything at a reasonable frequency. Are my ZFS pools all healthy? Are all of my storage devices healthy? Am I running out of storage space anywhere? Is my DNS service working? Is my DHCP server working? My web server? NFS working where I need it? Is my Raspberry Pi garage door opener working? Are my domains resolvable from the outside world? Are the cloud services I use working? Is my Internet connection down? Is there a guest on my network? A bandit on my network? Is my printer alive? Is my internal mail service working? Are any of my UPS units running on battery? Are there network services running that should not be? What about the ones that should be, like sshd?
I needed a monitoring system that worked like rover; only show me actionable issues. So I wrote my own, and named it “mcrover”. It’s more of a host and service monitoring system than a network monitoring system, but it’s distributed and secure (using ed25519 stuff in libDwmAuth). It’s modern C++, relatively easy to extend, and has some fun bits (ASCII art in the curses client when there are no alerts, for example). Like the old Network Operations Center, I have a dedicated display in my office that only displays the mcrover Qt client, 24 hours a day. Since most of the time there are no alerts to display, the Qt client toggles between a display of the next week’s forecast and a weather radar image when there are no alerts. If there are alerts, the alert display will be shown instead, and will not go away until there are no alerts (or I click on the page switch in the UI). The dedicated display is driven by a Raspberry Pi 4B running the Qt client from boot, using EGLFS (no X11). The Raspberry Pi4 is powered via PoE. It is also running the mcrover service, to monitor local services on the Pi as well as many network services. In fact the mcrover service is running on every 7×24 general purpose computing device. mcrover instances can exchange alerts, hence I only need to look at one instance to see what’s being reported by all instances.
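The core philosophy is simple enough to sketch in a few lines: run every check, and surface only the failures. This is an illustrative fragment in the spirit of that design, not mcrover’s actual code:

```cpp
#include <functional>
#include <string>
#include <vector>

// One monitored item: a name and a predicate that returns true when
// the thing being checked is healthy.
struct Check {
  std::string            name;      // e.g. "zfs1 pool", "internal DNS"
  std::function<bool()>  healthy;
};

// Run every check, keep only the failures.  An empty result means
// "show nothing": the operator's attention is only taken when there
// is an actionable issue.
std::vector<std::string> RunChecks(const std::vector<Check> & checks)
{
  std::vector<std::string> alerts;
  for (const auto & c : checks) {
    if (! c.healthy()) {
      alerts.push_back(c.name);
    }
  }
  return alerts;
}
```

In practice each predicate would wrap something real (a zpool status query, a DNS resolution, an HTTP probe), and the result list drives what the display shows; the point is that healthy checks produce no output at all.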
This has relieved me of a lot of sys admin and network admin drudgery. It wasn’t trivial to implement, mostly due to the variety (not the quantity) of things it’s monitoring. But it has proven itself very worthwhile. I’ve been running it for many months now, and I no longer get anxious about not always keeping up with things like daily/weekly/monthly mail from cron and manually checking things. All critical (and some non-critical) things are now being checked every 60 seconds, and I only have my attention stolen when there is an actionable issue found by mcrover.
So… an ode to the philosophy of an old system. Don’t make me plow through a bunch of data to find the things I need to address. I’ll do that when there’s a problem, not when there isn’t a problem. For 7×24 general purpose computing devices running Linux, macOS or FreeBSD, I install and run the mcrover service and connect it to the mesh. And it requires very little oomph; it runs just fine on a Raspberry Pi 3 or 4.
So why the weather display? It’s just useful to me, particularly in the mowing season where I need to plan ahead for yard work. And I’ve just grown tired of the weather websites. Most are loaded with ads and clutter. All of them are tracking us. Why not just pull the data from tax-funded sources in JSON form and do it myself? I’ve got a dedicated display which doesn’t have any alerts to display most of the time, so it made sense to put it there.
The Qt client using X11, showing the weather forecast.
The Qt client using X11, showing the weather radar.
The curses client, showing ASCII art since there are no alerts to be shown.
Apple silicon has arrived for the Mac. Not in my hands, but it has arrived.
Wow. I’m hesitant to call it revolutionary, simply because they’ve been on this path for over a decade. But I’m wowed for a number of reasons.
From the benchmarks I’ve seen, as well as the reviews, the performance isn’t what has wowed me. Yes, it’s impressive. But we had seen enough from iPhones to iPad Pros to know full well what we could expect from Apple’s first generation of their own SoC for the Mac. And they hit the marks.
I think what had the most profound impact on me was just the simple fact that they delivered on a promise to themselves, their users and their company. This wasn’t a short road! In this day and age, there are almost no technology companies that can stick the landing on a 10-year roadmap. Heck, many tech companies abandon their products and users in the span of a few years. Apple quietly persevered. They didn’t fall prey to hubris and conceit. They didn’t give us empty promises. They kept plugging away behind the scenes while Intel and others floundered, or overpromised and underdelivered, or just believed that the x86 architecture would be king forever. And much of this work happened after the passing of Steve Jobs. So to those who thought Apple would flounder without him… I think you’ve been wrong all along.
It’s not like I didn’t see this coming; it’s been rumored for what seems like forever. But I hadn’t really reflected on the potential impact until it arrived. Some background…
I’m a Mac user, and I love macOS. But I’m a software developer, and the main reason I love macOS is that it’s a UNIX. I like the user interface more than any other, but I spend most of my time in a terminal window running emacs, clang++, etc. Tasks served well by any UNIX. For me, macOS has been the best of two worlds. I shunned OS 9; I loved the Mac, but OS 9 fell short of my needs. When OS X arrived, I was on board. Finally an OS I could use for my work AND heartily recommend to non-techies. And the things I liked about NeXT came along for the ride.
The other reason I’ve loved Macs: the quality of Apple laptops has been exceptional for a very long time. With the exception of the butterfly keyboard fiasco and the still-mostly-useless Touch Bar (function keys are WAY more useful for a programmer), I’ve been very happy with my Mac laptops. Literally nothing else on the market has met my needs as well as a Macbook Pro, going back longer than I can remember.
But now… wow. Apple just put a stake in the ground that’s literally many miles ahead of everyone else in the personal computing space. It’s akin to the Apollo moon landing. We all saw it coming, but now the proof has arrived.
To be clear, the current M1 lineup doesn’t fit my needs. I’m not in the market for a Macbook Air or a 13″ Macbook Pro. I need a screen larger than 13″, and some of my development needs don’t fit a 16G RAM limitation, which also rules out the M1 Mac Mini (as does the lack of 10G ethernet). And like any first generation product, there are some quirks that have yet to be resolved (issues with some ultra wide monitors), missing features (no eGPU support), etc. But… for many users, these new machines are fantastic and there is literally nothing competitive. Just look at the battery life on the M1 Macbook Air and Macbook Pro 13″. Or the Geekbench scores. Or how little power they draw whether on battery or plugged into the wall. There’s no fan in the M1 Macbook Air because it doesn’t need one.
Of course, for now, I also need full x64 compatibility. I run Windows and other VMs on my Macs for development purposes, and as of right now I can’t do that on an M1 Mac. That will come if I’m to believe Parallels, but it won’t be native x64, obviously. But at least right now, Rosetta 2 looks reasonable. And it makes sense versus the original Rosetta, for a host of reasons I won’t delve into here.
Where does this leave Intel? I don’t see it as significant right now. Apple is and was a fairly small piece of Intel’s business. Today, Intel is in much bigger trouble from AMD EPYC, Threadripper, Threadripper Pro and Ryzen 3 than Apple silicon. That could change, but I don’t see Apple threatening Intel. Apple has no products in Intel’s primary business (servers). Yes, what Apple has done is disruptive, in a good way. But the long-term impact is yet to be seen.
I am looking forward to what comes next from Apple. Something I haven’t been able to say about Intel CPUs in quite some time. Don’t get me wrong; I’m a heavy FreeBSD and Linux user as well. Despite the age of x86/x64, we do have interesting activity here. AMD Threadripper, EPYC and Ryzen 3 are great for many of my needs and have put significant pressure on Intel. But I believe that once Apple releases a 16″ Macbook Pro with their own silicon and enough RAM for my needs… there will literally be nothing on the market that comes even close to what I want in a laptop, for many years. It will be a solid investment.
For the long run… Apple has now finally achieved what they’ve wanted since their inception: control of their hardware and software stack across the whole product lineup. Exciting times. Real competition in the space that’s long been dominated by x86/x64, which will be good for all of us as consumers. But make no mistake: Apple’s success here isn’t easily duplicated. Their complete control over the operating system and the hardware is what has allowed them to do more (a LOT more) with less power. This has been true on mobile devices for a long time, and now Apple has brought the same synergies to bear on the PC market. As much as I appreciate Microsoft and Qualcomm SQ1 and SQ2 Surface Pro X efforts, they are far away from what Apple has achieved.
One thing that continues to befuddle me about what’s being written by some… things like “ARM is now real competition for x86/x64”. Umm… ARM’s relevance hasn’t changed. They license reference core architectures and instruction sets. Apple is not building ARM reference architectures. If ARM was the one deserving credit here, we’d have seen similar success for Windows and Linux on ARM. ARM is relevant. But to pretend that Apple M1 silicon is just a product of ARM, and that there’s now some magic ARM silicon that’s going to go head-to-head with x86/x64 across the industry, is pure uninformed folly. M1 is a product of Apple, designed specifically for macOS and nothing else. All of the secret sauce here belongs to Apple, not ARM.
I’ve also been seeing writers say that this might prompt Microsoft and others to go the SoC route. Anything is possible. But look at how long it took Apple to get to this first generation for the Mac, and consider how they did it: mobile first, which brought unprecedented profits and many generations of experience. Those profits allowed them to bring in the talent they needed, and the very rapid growth of mobile allowed them to iterate many times in a fairly short span of time. Wash, rinse, repeat. Without the overhead of owning the fab. And for what many have considered a ‘dead’ market (personal computers). Yes, PC sales have on average been on a steady decline for some time. But the big picture is more complex; it’s still the case that a smartwatch isn’t a smartphone, a smartphone isn’t a tablet, a tablet isn’t a laptop, a laptop isn’t a desktop, most desktops are not workstations, a workstation isn’t a storage server, etc. What we’ve seen is the diversification of computing. The average consumer doesn’t need a workstation. Many don’t need a desktop, and today they have other options for their needs. But the desktop and workstation market isn’t going to disappear. We just have a lot more options to better fit our needs than we did when smartphones, tablets, ultrabooks, etc. didn’t exist.
I’ve always been uneasy with those who’ve written that Apple would abandon the PC market. The Mac business, standalone, generated 28.6 billion U.S. dollars in 2020. That would be at spot 111 on the Fortune 500 list. Not to mention that Apple and all the developers writing apps for Apple devices need Macs. The fact that Apple’s desktop business is a much smaller portion of their overall revenue isn’t a product of it being a shrinking business; it’s 4X larger in revenue than it was 20 years ago. The explosive growth in mobile has dwarfed it, but it has continued to be an area of growth for Apple. Which is not to say that I haven’t bemoaned the long delays between releases of Apple professional Mac desktops, not to mention the utter disaster of the 2013 Mac Pro. But Apple is notoriously tight-lipped about their internal work until it’s ready to ship, and it’s clear now that they wisely directed their resources at decoupling their PC fates from Intel. None of this would have happened if Apple’s intent was to abandon personal computers.
So we enter a new era of Apple. Rejoice, whether you’re an Apple user or not. Innovation spurs further innovation.
Due to a firmware problem in the Seagate IronWolf Pro 8TB drives that makes them incompatible with ZFS on FreeBSD, I returned them over the weekend and ordered a pair of Ultrastar DC HC510 10TB drives. I’ve had phenomenal results from Ultrastars in the past, and as near as I can tell they’ve always been very good enterprise-grade drives regardless of the owner (IBM, Hitachi, HGST, Western Digital). The Ultrastars arrived today, and I put them in the zfs1 pool:
# zpool list -v
NAME               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs1              16.3T  2.13T  14.2T        -         -    10%    13%  1.00x  ONLINE  -
  mirror          3.62T  1.53T  2.09T        -         -    29%    42%
    gpt/gpzfs1_0      -      -      -        -         -      -      -
    gpt/gpzfs1_1      -      -      -        -         -      -      -
  mirror          3.62T   609G  3.03T        -         -    19%    16%
    gpt/gpzfs1_2      -      -      -        -         -      -      -
    gpt/gpzfs1_3      -      -      -        -         -      -      -
  mirror          9.06T  1.32M  9.06T        -         -     0%     0%
    gpt/gpzfs1_4      -      -      -        -         -      -      -
    gpt/gpzfs1_5      -      -      -        -         -      -      -
Everything seems good. Note that the scrub repair of 33.8G was due to me pulling the IronWolf drives from the chassis with the system live (after having removed them from the pool). This apparently caused a burp on the backplane, which was fully corrected by the scrub.
# zpool status
  pool: zfs1
 state: ONLINE
  scan: scrub repaired 33.8G in 0 days 04:43:10 with 0 errors on Sun Nov 10 01:45:59 2019
remove: Removal of vdev 2 copied 36.7G in 0h3m, completed on Thu Nov  7 21:26:09 2019
        111K memory used for removed device mappings
config:

        NAME              STATE     READ WRITE CKSUM
        zfs1              ONLINE       0     0     0
          mirror-0        ONLINE       0     0     0
            gpt/gpzfs1_0  ONLINE       0     0     0
            gpt/gpzfs1_1  ONLINE       0     0     0
          mirror-1        ONLINE       0     0     0
            gpt/gpzfs1_2  ONLINE       0     0     0
            gpt/gpzfs1_3  ONLINE       0     0     0
          mirror-3        ONLINE       0     0     0
            gpt/gpzfs1_4  ONLINE       0     0     0
            gpt/gpzfs1_5  ONLINE       0     0     0

errors: No known data errors
I purchased two Seagate IronWolf Pro 8TB drives at MicroCenter today. They’ve been added to the zfs1 pool on kiva.
# gpart create -s gpt da5
# gpart create -s gpt da6
# gpart add -t freebsd-zfs -l gpzfs1_4 -b1M -s7450G da5
# gpart add -t freebsd-zfs -l gpzfs1_5 -b1M -s7450G da6
# zpool add zfs1 mirror /dev/gpt/gpzfs1_4 /dev/gpt/gpzfs1_5
# zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zfs1 14.5T 2.76T 11.7T - - 14% 19% 1.00x ONLINE -
mirror 3.62T 1.87T 1.75T - - 33% 51%
gpt/gpzfs1_0 - - - - - - -
gpt/gpzfs1_1 - - - - - - -
mirror 3.62T 910G 2.74T - - 24% 24%
gpt/gpzfs1_2 - - - - - - -
gpt/gpzfs1_3 - - - - - - -
mirror 7.25T 1.05M 7.25T - - 0% 0%
gpt/gpzfs1_4 - - - - - - -
gpt/gpzfs1_5 - - - - - - -
It appears that Amazon’s greed is continuing to degrade the customer experience.
Many areas of the US have had AMZL US become the primary logistics and delivery service for Amazon Prime shipments over the last couple of years. The problem is that for many of us, it has largely eliminated the incentive for Prime membership: fast, ‘free’ shipping. Of course it’s NOT free, since Prime membership is not free. But the bigger problem is that for many of us, AMZL US is dreadfully bad compared to USPS, UPS, FedEx or DHL.
I’d estimate that most of the Prime orders delivered to my home via AMZL US have not met the Prime promise. They’re often not on time, or even close to on time. 2 days becomes 4 to 7 days. On many occasions the final tracking record says “handed to resident” when in fact that was not true (no one was home, package was left in driveway). And recently, a package arrived with the outer box intact, and the product’s box (inside the outer box) intact but EMPTY. And all of my recent AMZL deliveries have been late by at least a day. Today’s notice is typical of what I’ve seen lately:
Note that ‘Wednesday’ is 5 days after I ordered. This is a small Prime item (would easily fit in my mailbox and hence USPS would be inexpensive), as was the item that didn’t arrive (the empty box shipment). And these are just a couple of the recent issues. Less than 20 percent of my AMZL shipments have been completely logistically correct. All of my recent shipments have found their way into the abyss above; delayed on the day they were supposed to arrive, at which point they can’t tell you when it will arrive or when they’ll even ATTEMPT to deliver it. This isn’t how FedEx, UPS or even the USPS do things. They have actual logistics, while AMZL apparently does not. AMZL can not reliably deliver packages on time, nor reliably track them. And of course, the day they expected to deliver it was a Sunday. Umm, I don’t need or even want Sunday deliveries. Especially if it triggers the “we no longer know when we’ll deliver your package” tracking status when that Sunday passes.
This is what happens when a company decides it would like to leverage its increasingly monopolistic position to make higher margins. As near as I can tell from stories from AMZL drivers, former AMZL logistics employees, other customers and my own experiences, AMZL is a logistics morass. And the last mile, arguably the most critical, is essentially slave labor. As the old adage goes, you get what you pay for. This isn’t open capitalist markets choosing the winner; as near as I can tell, there are almost no customers who prefer AMZL over USPS, UPS, FedEx, DHL, etc. And there are many stories of customers dropping their Prime membership because they can’t control who is used as a courier and get stuck with AMZL. This is Amazon deciding that they’d like to squeeze a few more pennies from their business by underpaying for courier services. Who suffers initially? Amazon Prime customers, and those who think they might build a profitable business delivering products sold by Amazon (good luck; the last time I ran the numbers, it was worse than trying to make a living as an Uber driver). Who suffers in the long run? Amazon and its shareholders. When Walmart and BestBuy start looking like significantly better options to your customers, you know you’re in the running in the race to the bottom.
This isn’t a bottom-up problem. While I’m sure there are some bad apples in the driver and contract courier company ranks, the real cause is much more likely the pricing demanded by Amazon. This is an Amazon initiative, and from my narrow view, very poorly implemented. I’m quickly becoming an alienated customer, and it’s been made clear that they don’t really give a rat’s ass about it since I can’t blacklist their AMZL service. Prime is now mostly a contract that’s regularly broken by one party (Amazon).
It’s unlikely that I’ll renew my Prime membership when it comes due. Nearly everything I buy from Amazon is available elsewhere, with free shipping from a RELIABLE courier service, and often at a lower price. Since 2005 I’ve been preferring Amazon because my Prime membership yielded fast, reliable delivery (via UPS or FedEx). That’s no longer true, and I have no recourse other than ditching my Prime membership and shopping elsewhere. Amazon doesn’t allow me to blacklist their lousy AMZL delivery service. The time alone that I’ve spent chasing down AMZL delivery issues costs more than the annual Prime subscription.
Today I spent $240 at my local BestBuy on an item I had intended to buy from Amazon (exact same price). And I generally hate BestBuy. But… walking out of a brick and mortar with product in hand is orders of magnitude better than waiting 5 to 7 days for something to POSSIBLY arrive and paying extra (Prime membership fee) for that ‘privilege’.
Who will get more of my business? For technology, the usual suspects: NewEgg, B&H, Adorama, Microcenter and BestBuy. For tools, Home Depot, Lowe’s, Menard’s, Performance Line Tool Center, Tooltopia and others.
To be clear, I’m not oblivious to the problems of scale with respect to Amazon delivery. And I’m _far_ from a Luddite; I strongly believe in technological advancement and don’t need a human to hand me a package. But AMZL has been around since 2014 and it still sucks for many of us. I didn’t sign up for this experiment; I signed up (and paid) for 2-day delivery. If you ask me, a smarter move on Amazon’s part would have been to make AMZL the free (as in Prime membership not required) delivery service, keep the reliable courier services as the only ones used for Prime deliveries, and invest more in making AMZL viable before forcing it on customers who are paying extra for 2-day delivery service.
Given the ignorance I’ve seen in some forums with respect to the need for 10 gigabit networking at home, I decided it was time to blog about it.
The argument against 10 gigabit networking at home for ANYONE, as I’ve seen posted in discussions on arstechnica and other sites, is ignorant. The same arguments were made when we got 100baseT, and again when we got 1000baseT at consumer pricing. And they were proven wrong, just as they will be for 10G whether it’s 10GBaseT, SFP+ DAC, SFP+ SR optics, or something else.
First, we should disassociate the WAN connection (say Comcast, Cox, Verizon, whatever) from the LAN. If you firmly believe that you don’t need LAN speeds that are higher than your WAN connection, I have to assume that you either a) do very little within your home that doesn’t use your WAN connection or b) just have no idea what the heck you’re talking about. If you’re in the first camp, you don’t need 10 gigabit LAN in your home. If you’re in the second camp, I can only encourage you to learn a bit more and use logic to determine your own needs. And stop telling others what they don’t need without listening to their unique requirements.
There are many of us with needs for 10 gigabit LAN at home. Let’s take my needs, for example, which I consider modest. I have two NAS boxes with ZFS arrays. One of these hosts some automated nightly backups, a few hundred movies (served via Plex) and some of my music collection. The second hosts additional automated nightly backups, TimeMachine instances and my source code repository (which is mirrored to the first NAS with ZFS incremental snapshots).
At the moment I have 7 machines that run automated backups over my LAN. I consider these backups critical to my sanity, and they’re well beyond what I can reasonably accomplish via a cloud storage service. With data caps and my outbound bandwidth constraints, nightly cloud backups aren’t an option. Fortunately, I am not in desperate need of offsite backups, and the truly critical stuff (like my source code repository) is mirrored in a lot of places and occasionally copied to DVD for offsite storage. I’m not sure what I’ll do the day my source code repository gets beyond what I can reasonably burn to DVD, but it’ll be a long while before I get there (if ever). If I were to have a fire, I’d only need to grab my laptop on my way out the door in order to save my source code. Yes, I’d lose other things. But…
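The upstream math is what kills cloud backups. A rough sketch with assumed numbers (a hypothetical 500 GB nightly backup over a 10 Mb/s upstream link; your sizes and speeds will differ):

```python
# Time to push one nightly backup to the cloud, ignoring protocol overhead.
# Both figures below are hypothetical, for illustration only.
backup_bits = 500e9 * 8  # assumed 500 GB backup, expressed in bits
upstream_bps = 10e6      # assumed 10 Mb/s upstream

hours = backup_bits / upstream_bps / 3600
print(f"{hours:.0f} hours")  # ~111 hours: several days for one night's backup
```

Even before data caps enter the picture, a "nightly" backup that takes days to upload isn't a nightly backup.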
Fires are rare. I hope to never have one. Disk failures are a lot less rare. As are power supply failures, fan failures, etc. This is the main reason I use ZFS. But, at 1 gigabit/second network speeds, the network is a bottleneck for even a lowly single 7200 rpm spinning drive doing sequential reads. A typical decent single SATA SSD will fairly easily reach 4 gigabits/second. Ditto for a small array of spinning drives. NVME/M.2/multiple SSD/larger spinning drive array/etc. can easily go beyond 10 gigabits/second.
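The comparison is easy to sketch. Using typical (assumed, not measured) sequential throughput figures for each class of device:

```python
# Ballpark sequential throughput of common storage vs. a 1 Gb/s LAN link.
# Device numbers are rough, typical figures, not measurements.
GBIT = 1e9  # bits per second

devices = {
    "7200 rpm HDD (sequential)": 180e6 * 8,  # ~180 MB/s
    "SATA SSD": 550e6 * 8,                   # ~550 MB/s
    "NVMe SSD": 3000e6 * 8,                  # ~3 GB/s
}

link = 1 * GBIT
for name, bps in devices.items():
    limited = "yes" if bps > link else "no"
    print(f"{name}: {bps / GBIT:.1f} Gb/s, bottlenecked by GbE: {limited}")
```

Every row comes out faster than 1 Gb/s, so on gigabit ethernet even the spinning drive spends part of its time waiting on the network.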
Why does this matter? When a backup kicks off and saturates a 1 gigabit/second network connection, that connection becomes a lot less usable for other things. I’d prefer the network connection not be saturated, and that the backup complete as quickly as possible. In other words, I want to be I/O bound in the storage subsystem, not bandwidth bound in the network. This becomes especially critical when I need to restore from a backup. Even if I have multiple instances of a service (which I do in some cases), there’s always one I consider ‘primary’ and want to restore as soon as possible. And if I’m restoring from backup due to a security breach (hasn’t happened in 10 years, knock on wood), I probably can’t trust any of my current instances and hence need a restore from backup RIGHT NOW, not hours later. The faster a restoration can occur (even if it’s just spinning up a VM snapshot), the sooner I can get back to doing real work.
Then there’s just the shuffling of data. Once in a while I mirror all of my movie files, just so I don’t have to re-rip a DVD or Blu-Ray. Some of those files are large, and a collection of them is very large. But I have solid state storage in all of my machines and arrays of spinning drives in my NAS machines. Should be fast to transfer the files, right? Not if your network is 1 gigabit/second… your average SATA SSD will be 75% idle while trying to push data through a 1 gigabit/second network, and NVMe/M.2/PCIe solid state will likely be more than 90% idle. In other words, wasting time. And time is money.
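To put numbers on those idle percentages: a minimal sketch, assuming ~117 MB/s of usable TCP throughput on gigabit ethernet (a typical real-world figure, not a measurement) and the same assumed drive speeds as above:

```python
# Fraction of time a drive sits idle while feeding a 1 Gb/s link.
# Assumes ~117 MB/s usable GbE throughput and typical drive speeds.
link_MBps = 117
for name, drive_MBps in [("SATA SSD", 550), ("NVMe SSD", 3000)]:
    idle = 1 - link_MBps / drive_MBps
    print(f"{name}: ~{idle:.0%} idle")  # SATA ~79%, NVMe ~96%
```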
So, some of us (many of us if you read servethehome.com) need 10 gigabit networking at home. And it’s not ludicrously expensive anymore, and prices will continue to drop. While I got an exceptional deal on my Netgear S3300-52X-POE+ main switch ($300), I don’t consider it a major exception. New 8-port 10GbaseT switches are here for under $1000, and SFP+ switches are half that price new (say a Ubiquiti ES-16-XG which also has 4 10GbaseT ports). Or buy a Quanta LB6M for $300 and run SR optics. Right now I have a pair of Mellanox ConnectX-2 EN cards for my web server and my main NAS, which I got for $17 each. Two 3-meter DAC cables for $15 each from fs.com connect these to the S3300-52X-POE+ switch’s SFP+ ports. In my hackintosh desktop I have an Intel X540-T2 card, which is connected to one of the 10GbaseT ports on the S3300-52X-POE+ via cat6a shielded keystones and cables (yes, my patch panels are properly grounded). I will eventually change the X540-T2 to a less power-hungry card, but it works for now and it was $100. I expect to see more 10GbaseT price drops in 2018. And I hope to see more options for mixed SFP+ and 10GbaseT in switches.

We’re already at the point where copper has become unwieldy, since cat6a (esp. shielded) and cat7 are thick, heavy cables. And cat8? Forget about running much of that since it’s a monster size-wise. At 10 gigabits/second, it already makes sense to run multimode fiber for EMI immunity, distance, raceway/conduit space, no code violations when co-resident with AC power feeds, etc. Beyond 10 gigabit/second, which we’ll eventually want and need, I don’t see copper as viable. Sure, copper has traditionally been easier to terminate than fiber. But in part that’s because the consumer couldn’t afford or justify the need for it and hence fiber was a non-consumer technology. Today it’s easier to terminate fiber than it’s ever been, and it gets easier all the time.
And once you’re pulling a cat6a or cat8 cable, you can almost fit an OM4 fiber cable with a dual LC connector on it through the same spaces and not have to field terminate at all. That’s the issue we’re facing with copper. Much like the issues with CPU clock speeds, we’re reaching the limits of what can reasonably be run on copper over typical distances in a home (where cable routes are often far from the shortest path from A to B). In a rack, SFP+ DAC (Direct Attach Copper) cables work well. But once you leave the rack and need to go through a few walls, the future is fiber. And it’ll arrive in our homes faster than some people expect. Just check what it takes to send 4K raw video at 60fps. Or to backhaul an 802.11ac Wave 2 WiFi access point without creating a bottleneck on the wired network. Or the time required to send that 4TB full backup to your NAS.
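That 4K figure is easy to check. A back-of-the-envelope sketch, assuming 3840x2160 at 60 fps with 10-bit 4:2:2 sampling (20 bits per pixel; other raw formats will differ):

```python
# Uncompressed 4K video bitrate, assuming 10-bit 4:2:2 (20 bits/pixel).
width, height, fps, bits_per_pixel = 3840, 2160, 60, 20
bps = width * height * fps * bits_per_pixel
print(f"{bps / 1e9:.2f} Gb/s")  # 9.95 Gb/s: enough to saturate a 10 GbE link
```

One uncompressed 4K60 stream essentially fills a 10 gigabit link by itself, and full RGB or higher bit depths push past it.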
OK, I feel better. 🙂 I had to post about this because it’s just not true that no one needs 10 gigabit networking in their home. Some people do need it.
My time is valuable, as is yours. Make your own decisions about what makes sense for your own home network based on your own needs. If you don’t have any pain points with your existing network, keep it! Networking technology is always cheaper next year than it is today. But if you can identify pain that’s caused by bandwidth constraints on your 1 gigabit network, and the pain warrants an upgrade to 10 gigabit (even if only between 2 machines), by all means go for it! I don’t know anyone who’s ever regretted a network upgrade that was well considered.
Note that this post came about partly due to some utter silliness I’ve seen posted online, including egregiously incorrect arithmetic. One of my favorites was from a poster on arstechnica who repeatedly (as in dozens of times) claimed that no one needed 10 gigabit ethernet at home because he could copy a 10 TB NAS to another NAS in 4 hours over a 1 gigabit connection. So be careful what you read on the Internet, especially if it involves numbers… it might be coming from someone with faulty arithmetic who certainly hasn’t ever actually copied 10 terabytes of data over a 1 gigabit network in 4 hours (hint… it would take about 22 hours if the transfer has the link all to itself, longer if there is other traffic).
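For anyone who wants to check that arithmetic themselves, a quick sketch (assuming the link runs at full line rate with no protocol overhead, which is generous):

```python
# Best-case time to copy 10 TB over a dedicated 1 Gb/s link.
data_bits = 10e12 * 8  # 10 TB expressed in bits
line_rate = 1e9        # 1 Gb/s, assuming zero overhead and no other traffic

hours = data_bits / line_rate / 3600
print(f"{hours:.1f} hours")  # 22.2 hours, nowhere near 4
```

Real-world TCP overhead and competing traffic only push that number higher.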
I’d be remiss if I didn’t mention other uses for 10 gigabit ethernet. Does your household do a lot of gaming via Steam? You’d probably benefit from having a local Steam cache with 10 gigabit connectivity to the gaming machines. Are you running a bunch of Windows 10 instances? You can pull down updates to one machine and distribute them from there to all of your Windows 10 instances, and the faster, the better. Pretty much every scenario where you need to move large pieces of data over the network will benefit from 10 gigabit ethernet.

You have to decide for yourself if the cost is justified. In my case, I’ve installed the bare minimum (4 ports of 10 gigabit) that alleviates my existing pain points. At some point in the future I’ll need more 10 gigabit ports, and as long as it’s not in the next few months, it’ll be less expensive than it is today. But if you could use it today, take a look at your inexpensive options. Mellanox ConnectX-2 EN cards are inexpensive on eBay, and even the newer cards aren’t ludicrously expensive. If you only need 3 meters or less of distance, look at using SFP+ DAC cables. If you need more distance, look at using SR optical transceivers in Mellanox cards or Intel X520-DA2 (or newer) and fiber, or 10GbaseT (Intel X540-T2 or X540-T1 or newer, or a motherboard with on-board 10GbaseT). You have relatively inexpensive switch options if you’re willing to buy used on eBay and only need a few ports at 10 gigabit, or you’re a techie willing to learn to use a Quanta LB6M and can put it somewhere where it won’t drive you crazy (it’s loud).