I've been doing what I do for more than three decades, and much of it at home. I know exactly what I need in my home office space at any given point. Saying things like, "You're going to put all that computer stuff in there?!" is willful ignorance (in the dictionary sense of the word). I dismiss it as such. If you don't want to know what I need or why, there's no constructive discussion to be had.
A funny thing is that the computing stuff I could do without is the stuff most people really want: streaming devices and gaming machines. I might play 10 minutes of computer pinball once every 3 months. Ditto for television.
I have quite a bit more computing stuff than someone whose livelihood doesn't depend on it. But most of it isn't in living space; it's in racks in the basement. The basic needs haven't changed for my whole career: workstation, laptop, UPS, robust backups, fast and reliable network connections, a networked printer. Some things come and go, and some things get upgraded (technology moves at a blistering pace, and keeping up isn't optional for me). I don't buy computing stuff based on want very often; nearly all of it is based on need.
One of the interesting things that has happened over time is the diversification of computing. Today, your average worker doesn't need a workstation; their laptop is more than sufficient for their computing. They might need an external monitor, keyboard and mouse to be at their highest productivity, but their laptop easily meets their core needs. For another large set of people, a tablet meets their needs (a current iPad Pro is significantly faster than any laptop of just a few years ago). I know these people, as do you... they're the majority (by far). It has nothing to do with any sort of tech savvy; it's just the fact that the market has been able to outpace the average user's needs with smaller, portable devices. But there remains a minority of us who create these devices and many other things in the modern world, who still need workstations, servers, etc., and whose entire life's work is purely digital artifacts. I'm in this camp. And it's funny... workstation and desktop sales haven't dropped off (hello, we're still here :)), it's just that mobile has outpaced them. PC sales were roughly 300 million units in 2020.

A workstation and some surrounding infrastructure isn't optional for me (if it were, I wouldn't own it). When your life's work is something you can't physically touch, smell or feel, it's easy to be careless, and as a consequence, lose everything in a single, predictable and inevitable failure. Or just lose a month's worth of work (big $$$). Spinning rust (disk drives) tends to fail catastrophically if not monitored and replaced. When solid state storage fails, it's typically even worse. I'm a professional; I do what's necessary to avoid inevitable, preventable failures. Almost all of my infrastructure at home is about keeping robust backups for all of my work. Every piece of software I've written outside of work is automatically backed up over the network every night. As is everything I need to quickly recover from any storage failure at home.
If you don't have unattended backups of your work, you're either not a professional (i.e. no one suffers but you if a storage failure occurs) or you're living VERY dangerously. This is one of today's foundations of being a software developer, creative artist (photography, film, music, digital art), engineer, writer, et al. If your work is all in one place and one place only, and that place has a lifespan known with certainty to be MUCH shorter than the average human lifespan... you're literally begging for a traumatic, life-changing event. I bemoan this as much as anyone, but I also acknowledge and address it as the reality. I am prepared for most of the inevitable (storage and other computer failures) here.
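An unattended nightly backup doesn't have to be elaborate. A minimal sketch of the idea (host names, paths and schedule here are purely illustrative, not my actual setup) is a pull-style rsync job that keeps dated, hard-linked snapshots:

```shell
# On the backup host, run nightly from cron, e.g. in /etc/cron.d/nightly-backup:
#   15 2 * * * backup /usr/local/sbin/pull-backup.sh

# pull-backup.sh: pull the source tree, hard-linking unchanged files
# against yesterday's snapshot so each day costs only the delta.
today=$(date +%Y-%m-%d)
rsync -a --delete \
      --link-dest=/backups/src/latest \
      workstation:/home/me/src/ "/backups/src/$today/"

# Point "latest" at the snapshot we just made.
ln -sfn "$today" /backups/src/latest
```

The hard-link trick is what makes keeping weeks of daily snapshots cheap; restoring is just a copy from any dated directory.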
There's also the reality that some of my stuff isn't significantly different than hundreds of millions of other homes. My smartphone is on the net. So is my watch, my iPad, my laptop, my Android TV, my Roku TV, my Apple TV, the PS4, my thermostat, all of my smoke/CO alarms, printers, my garage door opener... none of them atypical in a modern (post-2000) home. Reality... for most consumers, all of these devices are under 24-hour attack from the outside, each and every day. It's invisible to most people until they get hit with something that demands their attention (ransomware, for example). It's not invisible to me, and my network is hence robustly protected from the Internet. This means hardware and software dedicated to keeping my network safe. It's not optional for me any more than it would be for a large enterprise. I can't choose to fly by the seat of my pants without incurring very significant risk. I need to access my home network from outside my home on a regular basis, which means I have to be just as careful as any other business with remote access. And my setup is more secure than most I've seen in enterprise space, in part because I understand it deeply but mostly because my attack surface can be kept quite small. It's much smaller than the average residential consumer with UPnP enabled on their gateway. It's a sad reality that most people have no idea how vulnerable they are to network attacks. We're in this weird place where "That only happens to other people and big businesses" is a common thought despite it being far from reality. If you're on a battlefield with dense scatterfire everywhere, it should not be comforting to think, "There are more desirable targets." The attackers don't think that way, and most are completely unaware of the worth of their target until after they've penetrated the target's defenses. They kill all the targets and don't know their worth until they've cleaned the victims' pockets.
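For the curious, the heart of "small attack surface" is a default-drop inbound policy at the edge. This is just an illustrative nftables fragment, not my actual ruleset (which is more involved); it shows the shape of the thing:

```shell
# Illustrative default-drop inbound firewall policy (nftables).
# Nothing unsolicited from the outside gets in; only replies to
# traffic we initiated, loopback, and basic ICMP are accepted.
nft -f - <<'EOF'
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept   # replies to our own connections
    iif "lo" accept                       # local loopback traffic
    ip protocol icmp accept               # basic diagnostics (ping, PMTU)
    # everything else inbound is silently dropped
  }
}
EOF
```

Remote access then gets punched through deliberately (a VPN or a single hardened service), rather than by letting devices open ports for themselves the way UPnP does.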
A very long time ago, I was a victim of credit card fraud. To this day I'm not certain how the attacker got my credit card number. I assumed an insecure HTTP transaction of some kind (it was still the early days of the Web... 1998). But even afterward, they were completely oblivious to who they'd attacked; they didn't care. They had attacked my ssh1 gateway via a zero-day vulnerability. Sadly for them, they set up a tiny IRC server on my gateway which I found quickly (tripwire). I was able to monitor their chat (in Russian), then shut it down after collecting IRC handles and IP addresses. And... restore from a clean backup (which I believe was a Zip drive) and disable sshd (I still had Kerberos). What the attacker apparently didn't know was that I was a network researcher, for a group with live fiber taps at all of the major U.S. exchanges. So we decided it'd be an interesting sleuthing activity to see what we could ascertain from sampled data at the exchanges. Just IP addresses, from which I was able to find the attacker's mail server, ftp server and web server. From there I was able to get the name and many photos of the actual attacker, a Russian teenager. All from his public ftp server and web server. When I provided this information to the FBI, they were understandably clueless; they were not Internet-savvy yet. It didn't matter much anyway; the FBI's jurisdiction doesn't cross borders.
Things haven't changed for the better here. There are more of us to target and the average consumer is more vulnerable to attack than the average Web user was in 1998. The primary reason: a much larger attack surface (many more devices).
Long story short... I'm happy to make compromises. And I want the den to look as nice as possible. But there are some things for which I can't compromise because the risk to my livelihood is very real. So as long as I'm in a spot where the den is the only place in the house I can work from effectively while remodelling other rooms, it will have what I need in it. Elegance is of lower priority right now.
I botched my Sweetwater order in one spot: I meant to order 2 Rode phantom 48V to 4V adapters, but I only ordered one. This might be a good thing, since I now see a review mentioning a 170Hz hum. I'll test the one I ordered before ordering another. Of course, most of the ones I've seen available don't appear to have good build quality (PCB held in place just by soldering the XLR pins to solder pads on the PCB, etc.). Since I'm already suffering from the poor build quality of the microphone and headphone jacks on my PC, I'd prefer to avoid poor build quality here.

I get it. But no one knows what I want to do; I don't know, so how could anyone else know? All I really know is my basic needs. It's not like I haven't put thought into this; my career spans more than three decades and I'm passionate about my work. Much of what I use has not changed significantly. Some needs don't go away; UPS, for example. My Internet doesn't go away during a power outage (unless it lasts more than 45 minutes). In fact none of my computing goes away. That's important to me and has been for 30 years.

On the other hand, some things have changed significantly. Way back when, I had one device on the Internet (my workstation). First via modem, then via ISDN. Later I had a few workstations, via T1. Then broadband and multiple devices. Today... I don't actually have a count, but let's just say my network no longer fits in a /26 prefix; there are more than 64 devices on my home network at some point or another. And if you work in technology, this isn't crazy. Think about it. My watch is on the WiFi (and cellular). So is my smartphone. So is my iPad. My Rokus, my Apple TV, my Android TV, my thermostat, my smoke/CO alarms, my garage door opener... the list just keeps growing. I've had to upgrade the network as time has progressed, in order to use the technology to my advantage. But not for these devices, for the most part.
The significant network upgrades have been for my work, not my convenience or leisure. I need my source code repositories to be fast. And I need them to be backed up, rigorously and robustly. They hold much of my life's work, and I refer to it many, many times a day.
I don't have all my computing gear in the office. What can be racked in the basement, outside of living space, is in fact racked in the basement. My gateway/firewall, my web server, my backup hosts, the main 48-port 1 gigabit PoE switch for networking and power throughout the house, the 16-port 10 gigabit switch for high speed from important hosts (workstations, laptops, backup hosts, web server, gateway/firewall), UPS, cable modem, patch panels. It's not pretty in the basement, but it's quite tidy. My basement is unfinished, and a ceiling would hide everything I've done.
The handful of uglies I need in my office space: my desktop computer/monitor/keyboard/trackpad. My laptop, though 99% of the time I use it in clamshell mode. The Threadripper 3960X workstation (thrip), because it's far and away the fastest machine in the house and hence I use it every day for my work. A 10 gigabit ethernet switch for high speed connections to the basement (for shared filesystems and backups). UPS to keep my desktop, thrip and ethernet switch from unexpected power-offs. I like having my network printer in my office, but it could definitely be somewhere else.
So... I need some near-the-floor rack space for UPS and ethernet switch(es). Let's say 12U since that's what I have right now. Let's assume we might need another 2U for another UPS (Eaton 5P shallow model, probably). I'd put that UPS in a separate rack since they're ludicrously heavy (I can't afford lithium ion UPS). So for rack space, let's just assume two 12U racks, the equivalent of two of my existing MDV-R12. This is my best first tack, only because 12U is amenable to under-desk. I don't need a tremendous amount of depth like I need in the basement rack; the UPS is the deepest item, followed by the Middle Atlantic drawers which are 15.5" deep and would be the predominant rack tenant. I already own these drawers. They are mechanically rugged and if you don't hit them with a hammer, they look nice. What I'm tired of is the look of the MDV-R12 racks. They're music studio furniture. I want something of real wood, not laminated MDF. And not held together with fasteners that have to be covered up with ugly plastic plugs. And not larger than needed. Two of a custom cabinet (say 24" deep) is likely sufficient. So we're talking about roughly a 20.5" by 24" footprint on the floor for each one. Back to back with some space for cords and ventilation, let's say 20.5" x 54". That could be under a pair of 30" desktop surfaces while being recessed under the desktops by 3". And if I assume the second one has no electronics (all drawers) except for an Eaton 5P1500RC, that one could actually be only 20" deep. But... on top of these I'd like bookshelves. And I'd prefer they were 19" wide by about 12" deep per column, interior dimensions. Two columns would be 40.25" to 41" wide depending on whether or not I made each column standalone. I'd rather go with three, which would be 60" to 61.5" wide. That's a near ideal match for back-to-back 30" desktop surfaces. Why 19" interior width?
So I can rackmount the audio gear, and because 19" is pretty standard for a lot of electronic stuff (nearly all home audio gear, for example). I don't expect to have anything beyond some small amount of audio gear racked, so there will only be rack rails at the bottom, presumably the center column. I know I need 4U or 5U, so I bought rails for 6U. The idea in my head at the moment is 6U of rack space at the bottom of the center column for Behringer UMC404HD (my desktop's audio), Behringer UMC204HD (Julie's computer audio), Behringer MDX2600 (one channel for each of us), Behringer FBQ1502HD (one channel for each of us) and Furman power conditioner. I'd like cubbies of about 8U height on each of the left and right columns, with pegs for us to hang headphones/headset, inside the columns (hidden).
As far as desk surface goes... it's always the more the better, but I need at least 60" width. My monitor is about 36" wide, and these days I can't imagine working with less desktop (I started using dual 24" monitors in the early 1990s; I can't go back to a single small monitor without destroying my productivity). Speakers are roughly 6" wide, so I'm at 48" with just my monitor and small speakers. What I haven't figured out is how much depth I really need. My current desk is much deeper than what I want in the den, but at the same time my comfortable monitor viewing distance says I might need a little more than 30". How do I accommodate that? It's food for thought at the moment.
Then there's the desk(s). I've done quite a bit of looking and haven't really found a good solution off-the-shelf yet. There are some industrial-looking things I like (think steampunk), if I want to go single-surface. They'll accommodate a large top and are height-adjustable (functional industrial crank). But I'm not convinced cosmetically, and I've no idea what Julie might think. And some are ungodly heavy or out of my price range (think tens of thousands of dollars). On the other hand, I could picture a not-ugly decor that would work with it. The Middle Atlantic drawers I have, for example, would match up well if they're in nice wood cabinets. Add a nice globe, maybe a replica antique telescope, a sextant, aviation-inspired chairs, etc. All the bookshelves lined with Julie's books and a few of mine (mainly the Donald Knuth books and a few others). Think cast iron legs but real wood tops. Timeless, durable stuff.
It's worth noting that I did do some scouring for antique library tables. Unfortunately I haven't hit on anything. Vaunted law libraries aren't abundant. I did find some cast iron legs I like on Etsy. https://www.etsy.com/listing/915643496/28-dining-height-rustic-brown-cast-iron
Of course I still need decent audio. I do listen to music from my computer once in a while, and occasionally even when I'm working (something light that just helps tune out background noises, or a developer podcast). More importantly, as exposed by COVID-19 and work from home, I need a better setup for WebEx, Teams and friends. And I listen to tech podcasts at my desk; things like embedded.fm, Security Now, etc. are more useful to me when I can easily look at the web for references. My main problem of late has been the fact that my old desktop's microphone and headphone jacks have already been repaired once, but the headphone jack is on the blink yet again. They're poorly made, and I want an external audio device that is built for professional use and not tied to the PC (it can be used with my laptop and any future PC). In addition, the ADC for the microphone isn't great, and the headphone output is pretty much abysmal. I don't want a desktop or boom microphone; I like having the Antlion microphone on my headphones of choice. It's not fancy, and has some booming issues as well as handling thumps. But it's easy to move out of sight when I'm not using it, and doesn't clutter my office. Hence I'd like a little bit of processing of the microphone signal before it gets to my computer, to make up for the unbalanced connection from the microphone (hopefully).
I'm a former sound engineer. For the human voice, to me the minimum processor chain is a gate/compressor/limiter (other features a bonus) and an equalizer. It helps me greatly reduce handling thumps and proximity bias, even out the volume and allows de-essing. These aren't huge dial spins. But when you get it dialed in reasonably well, it's a night and day difference versus no processing.
My other consideration: my significant other. She works from home more than I do in a normal year, and she needs quiet. So I expect to be wearing headphones more often than I do right now. More importantly, with both of us working in the same space, there will occasionally be conference call conflicts and the like. I need to be able to do what I can to isolate our sounds when necessary. And finally, she records lectures a lot. She's been getting by with the built-in microphone on her old 2013 MacBook Pro, and the microphone array on a newer 16" Intel MacBook Pro is better, but neither provides any real isolation and I consider them last-resort. She hasn't really experienced the difference between what she's using now and a semi-professional recording setup. She might not even notice a big difference. But some of her students most definitely will, and I'd like her to have a similar setup to mine where she doesn't need to open DAW software just to make her voice sound good. Once I dial in the gate/compressor/limiter/de-esser/etc., she won't need to change it. She'll just pick up her headset, adjust one or two knobs (mix and volume), and go.
So... after a lot of research, I ordered some stuff from Sweetwater. Some PreSonus Eris 3.5" powered monitors (the smallest I could go with decent sound), which are less than half the size of my current speakers and dirt cheap ($99/pair). A matching PreSonus 8" powered subwoofer, so I no longer need a rackmount amplifier. A Behringer MDX2600 compressor/expander/gate/de-esser (1U, 2 channels... one for my microphone, one for my significant other's microphone). A Behringer FBQ1502HD equalizer (1U, 2 channels, 15-band... I'd prefer 31-band, but don't want the space consumption of a 2U equalizer). A Behringer UMC404HD 4x4 USB audio interface at the center of things... for my microphone, headphones, inserts through the compressor and EQ (only for microphone inputs), and output to the subwoofer and monitors. I got a UMC204HD for my significant other, which will also insert through the compressor and equalizer (second channel). Neither of us needs the UMC404HD, but the UMC204HD is bus-powered only. That should work fine from a laptop or any modern PC with good USB-C ports. My PC is very old and already has a lot of USB ports occupied; I'm leery of using another bus-powered device with it. So I got the single UMC404HD mainly to get a device that's not powered from USB.
I also ordered a Furman power conditioner with pull-out lamps and a rear gooseneck lamp. It's the plainest-looking one I could find that has very good power conditioning and protection. It's similar to the one in my UPS and switch rack, it just doesn't have the voltage LEDs. It still has the 133V RMS overvoltage protection that I've found to be fabulous for protecting my gear (it has tripped a few times during storms). No ugly branding logo like my old Stanton units, and the pull-out lights are brighter and don't get hot (they're LED). So technically I can go from 8U of stuff to 3U plus the audio interfaces. For now I've ordered some 6U rails so I can mount things. At the moment I'm not sure what I'll be doing mounting-wise; I only know that I don't want it to be an eyesore. I've been thinking one large work surface for myself and my significant other, facing each other, with this stuff to the side of us. Ideally it'd just be integrated into bookshelves and hence inconspicuous. None of this stuff should generate much heat, thankfully. I need to sit down with Shapr3D for a while and come up with something.
Anyway, I'm sort of excited. For one, I'm tired of fighting with the flaky audio jacks on my PC; the sound interface and processors will solve an immediate problem. I'm tired of having 8U of rack directly in front of me with gear I've had for almost 20 years, including the ART SLA-1 that gets warm/hot (that's with a new fan). I'm tired of having really tall speakers on my desk that aren't intended for near-field listening. I'm tired of having a subwoofer that's bigger than I need. I'm tired of not having decent control of what gets to my PC from my microphone (and yes I use DAW software when I must, but it's dramatic overkill for my daily use).
The hard part is going to be figuring out how I keep the den from looking like a corporate office. Despite the fact that I don't want my Middle Atlantic desk in visible living space, I can't argue against its utility. A ton of space (84" wide), a huge overbridge, 8U of rack space, mounted power strip, cable routing, modesty panel to hide things... it's been good to have. Not to mention that I have three matching MDV-R12 units. But I don't want it in the den. And my significant other feels even more strongly about it, and I don't blame her. But... I have stuff that I need in order to work from home. And I work from home even if I spend full days in the office; I like what I do, it's my primary hobby and not just my profession. My significant other... despite her many hours a day of computer use, her needs are different than mine. She'd love desks that were designed in the pre-computer era (or a century earlier than that) and they'd work for her. And I won't argue with her on the looks; she's right and it's also subjective. But I need to come up with some kind of compromise. The den is the place in the house where I need to be simply because it has 10 gigabit fiber connections to the rack in the basement and it's well-isolated sound-wise from the bedrooms. I don't need to be blasting audio, but I can't give up my loud mechanical keyboards. I also normally have a desktop, a workstation and a laptop that I use daily. While I am hoping to be able to migrate to a Mac Mini soon (I need 64G of RAM so the current M1 Mac Mini doesn't work for me), I'll still need the Threadripper 3960X machine in the office. And it's not small, nor pretty. But it's where I do Windows and Linux work, and where I have a 2080 Ti GPU that I occasionally need. Not to mention it has 256G of RAM and is easily the fastest machine in the house; it saves me time, sometimes a LOT of time.
Nothing worth doing is easy.
So, back to the chase. I got lucky and found an EVGA RTX 2080 Ti FTW3 Ultra on eBay from a local seller with an EK Quantum Vector FTW3 waterblock pre-installed, with a backplate and still including the stock cooler and fans. It will suit my needs (machine learning) for at least the next year, and since it'll be watercooled I can keep it in thrip even if I later install a big 3-slot card. And none of my money went to NVIDIA, which makes me happy only in the sense that they don't deserve it. Their launches are essentially big money grabs; the dramatic mismatch between supply and demand isn't by accident.
Other than shooting myself in the foot with a typo in the BIOS for RAM timings, the hardware testing has gone well. CPU works, RAM works, NVMe SSD works, video works. And I can't seem to push the CPU beyond 52C or so, even with long blender benchmark runs that peg all cores for many minutes. The CPU cores idle at around 28C, with ambient around 21C. So the cooling solution works great for the CPU. It's yet to be seen what happens when I put a modern GPU in the loop.
	NAME              STATE     READ WRITE CKSUM
	zfs1              ONLINE       0     0     0
	  mirror-0        ONLINE       0     0     0
	    gpt/gpzfs1_0  ONLINE       0     0     0
	    gpt/gpzfs1_1  ONLINE       0     0     0
	  mirror-1        ONLINE       0     0     0
	    gpt/gpzfs1_2  ONLINE       0     0     0
	    gpt/gpzfs1_3  ONLINE       0     0     0
	  mirror-3        ONLINE       0     0     0
	    gpt/gpzfs1_4  ONLINE       0     0     0
	    gpt/gpzfs1_5  ONLINE       0     0     0

	errors: No known data errors
Years ago I had sworn off Seagate due to some issues we had at work with Barracuda drives. That was a long time ago, but I wish I had kept them off my list. Sadly, Microcenter no longer carries enterprise-grade drives at the local store.
I'm going with tried-and-true Ultrastar drives. I've used them on and off since the days they were still made by IBM, then later made by Hitachi, then HGST and now Western Digital. Since the real power saving (and HelioSeal) starts with the 10TB model, I'm buying a pair of 10TB. This costs a little bit more money, but gets me about 2TB more space, and a 2.5 million hour MTBF. I might put these in their own pool as a mirrored vdev, to avoid adding another point of failure to the existing pool. If I do this, I'll likely migrate some of my backup datasets to the new mirror.
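If I do put the new pair in its own pool, the steps would look roughly like this. This is only a sketch, assuming FreeBSD (which the gpt/ labels in the existing pool suggest); the device names, label names and dataset names are hypothetical:

```shell
# Label the two new 10TB drives following the existing GPT-label convention.
gpart create -s gpt da6
gpart add -t freebsd-zfs -l gpzfs2_0 da6
gpart create -s gpt da7
gpart add -t freebsd-zfs -l gpzfs2_1 da7

# Create a separate pool from the new pair as a single mirrored vdev,
# so a failure here can't take the existing pool with it.
zpool create zfs2 mirror gpt/gpzfs2_0 gpt/gpzfs2_1

# Migrate a backup dataset tree with send/receive.
zfs snapshot -r zfs1/backups@move
zfs send -R zfs1/backups@move | zfs receive -u zfs2/backups
```

Keeping the new mirror as its own pool trades some flexibility for isolation, which is the point when the datasets in question are backups.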
I was forced to wipe everything and start over. I decided to update openjdk, mongodb and the UniFi controller in the process. I then adopted all of my Ubiquiti UniFi devices and updated the firmware on all of them. Everything is working.
The 2U blank plate arrived as well, along with the sound deadening material. I installed a piece of the deadening material on the Middle Atlantic QFP-2, and installed the 2U blank in the back of the MDV-R12. I installed the AC Infinity T7-N and set the speed to 2 for now. The US-24 in the MDV-R12 is now at 48C and the US-16-XG is at 43C.
The AC Infinity AC fan controller arrived. I installed it in the rack to control the speed of the QFP-2 fans. It works fine for my purposes for now.
I'm still waiting for a black 2U panel, sound deadener and the AC Infinity T7-N to arrive. It is likely that I'll disassemble the T7-N when it arrives, to determine what kind of work would be involved in replacing the fans with Noctua PWM fans. I don't need the high speed of the AC Infinity fans, nor the static pressure capabilities of 38mm deep fans. What I need is near-silent operation, and longevity. The AC Infinity fans are rated at 67,000 hours of operation. Noctua fans are rated at 150,000 hours of operation. From what I've seen of the AC Infinity fans, I don't believe their advertised noise numbers. They're using what look to be fairly standard dual ball bearing fans, yet they rate the panel at 36 dBA max when pushing 220 CFM. That's 55 CFM per fan. Four 80mm dual ball bearing fans pushing 55 CFM each are HIGHLY unlikely to produce only 36 dBA of noise. That MIGHT be the number for a single fan, though even that would be reasonably impressive versus other 80mm x 38mm fans. But there isn't a snowball's chance in hell that all 4 fans can run at full speed and emit 36 dBA TOTAL. I think their marketing material is misleading.
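The back-of-envelope acoustics behind that claim: n equal, incoherent noise sources sum to the single-source level plus 10·log10(n) dB. So if each fan really emitted 36 dBA, four of them would total about 42 dBA; conversely, for the panel to total 36 dBA, each fan would have to be around 30 dBA:

```shell
# Equal incoherent sources combine as L_total = L_one + 10*log10(n).
# Four fans at 36 dBA each:
awk 'BEGIN { printf "%.1f dBA\n", 36 + 10 * log(4) / log(10) }'
# Per-fan level required for a 36 dBA four-fan total:
awk 'BEGIN { printf "%.1f dBA\n", 36 - 10 * log(4) / log(10) }'
```

A 30 dBA rating for an 80mm x 38mm dual ball bearing fan pushing 55 CFM would be remarkable, which is why I read the spec as per-fan at best.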
I removed the fans and installed some silicone O-rings between the fans and the mounting plate. I also installed some rubber washers between the mounting plate and the rack rails. This helped quite a bit.
I ordered some 80 mil butyl sound deadener to install on the mounting plate. I also ordered an AC Infinity AC fan controller, which will allow me to reduce the speed of the fans. I don't need the full 100 CFM the fans move at full speed. $18 from Amazon. I also ordered an AC Infinity T7-N for intake. I'll mount this between the US-16-XG and the US-24 in the front of the rack. It's entirely possible this will make things louder, even with the fans on low or off. That's because the US-16-XG's fans seem to always run near 50% and it's not quiet. The QFP-2 brought the reported temperature of the US-24 down from 67C to 64C or so, and disabling ports I'm not using brought it down to 57C. I'm a bit surprised by the latter; unused ports should not require much power and this isn't a PoE switch. But its fan doesn't come on at all until 70C (from what I've read); I've not yet seen it come on except when I run a fan diagnostic from the command line. My hope is that running the T7-N on low will help cool both switches and perhaps bring down the speed of the US-16-XG fans, since they're noisy 40mm critters.
I also ordered a blank 2U panel to block more of the rear of the rack, in order to force more air to be drawn from the front of the rack. I also installed keystone blanks in the rear patch panels for the same reason.
I repainted my office UPS' front panel. 20+ years ago I had painted it blue, it was time for a change. It's now black. This is a Best Power Fortress 1425, 3U rack mounted UPS. It's built like a tank, and continues to work flawlessly despite decades of full-time use. I believe I've replaced the batteries 4 times. It lacks modern features like LCD display, but that's fine with me. I'm considering ordering a custom front battery cover for it from Front Panel Express, mostly so I can have a panel that can be any color without losing the part number labelling (I'd have them engrave that on the cover). Eventually I'll have to replace it, probably with a 2U Eaton 5P.
I installed a small terminal block and ground wire going to the ground screw on my UPS, and wired the ground wires from the rear patch panels (shielded type) to the terminal block.
I'm waiting for a Middle Atlantic QFP-2 fan panel to arrive from eBay. It was supposed to arrive yesterday, but it appears it won't be here until Monday.
I installed a pair of Middle Atlantic 8U rack rails in the back of the MDV-R12. These are for the patch panels I am installing in the rear, and to hold a fan panel.
The new Sapphire Pulse RX580 card continues to work well as a replacement for my old GTX570. The GTX570 was causing crashes multiple times per day (when waking from sleep), probably due to crappy Nvidia drivers (the latest). I've had no such issues with the RX580. This was an inexpensive experiment which has worked out well. I can keep my hackintosh for a while longer before building a new desktop computer. I don't game on my desktop, but I will later want to use OpenCL for deep learning, at which point I'll use a new VEGA card with a higher core count CPU than I have right now.
I need a Ubiquiti AC In-Wall Pro in the den, since it's essentially completely blocked from the 5GHz channels from my UAP-AC-Pro. Julie needs good WiFi connectivity in the den, and I do too. The question is where to put it where it won't be blocked by furniture. I haven't figured out how I'm going to lay out the room after I tile the floor.
The new fan should help keep the Mellanox ConnectX-2 card cool.
Topology map from the Unifi controller:
The four OM-4 fiber connections for the den are now terminated in keystones in the patch panel.
OM-4 fiber patch cables for the ports in the den should arrive this week from fs.com. It'll be a huge relief to have that part of the cabling done so I can finally button up the ports in the den. Next week the Mellanox ConnectX-2 card for ria should arrive so it can be wired into the US-16-XG via an SFP+ DAC cable.
I updated the firmware on the US-16-XG, the US-48-500W and my UAP-AC-Pro.
It looks like the Ubiquiti US-16-XG switch will arrive on Wednesday. I need to order the fiber for the runs to the den.
It looks like the SFP+ ports don't reconnect to my Mellanox cards via DAC after a switch reboot. This might be due to the fact that my DAC cables are coded for Netgear, I don't know. But I need more anyway, so I ordered some that are coded for Ubiquiti from Amazon. Namely the 10Gtek ones. I ordered three of the 2 meter ones (for kiva, www and ria) and two .5 meter ones (to connect the US-48-500W to the US-16-XG when it arrives). I also ordered a pair of 10Gtek SFP+ 10G MMF transceivers and a pair of FiberCablesDirect .5 meter OM4 LC-LC fiber patch cables for the fiber runs to the den. All of this was ordered from Amazon.
I ordered a Mellanox ConnectX-2 card from eBay to put in ria, after checking that ria contained a riser card for the x16 PCIe slot. I separately ordered a low-profile bracket for it since I couldn't find any single-port cards with a low-profile bracket. Once this is installed in ria and the US-16-XG arrives, my whole rack in the basement will be at 10 gigabits/second.
I have not decided on a 10 gigabit switch for the den. However, at the moment the QNAP QSW-1208-8C-US is appealing.
My initial diagnosis was a toasted NTC thermistor (current inrush limiter) and fuse on the power supply board. I ordered parts from Mouser and replaced them, but that didn't fix it. There's a short downstream, presumably one of the FETs (drain to source). Unfortunately I don't really have time right now to further diagnose it, and the manner in which the power supply board is populated would make it significant surgery. All of the FETs and power diodes are attached to non-trivial heatsinks in a manner that makes them impossible to replace individually. The screw heads face components like large filter capacitors, and can't be reached with screwdrivers due to proximity. The components also have thermal adhesive holding them to the heatsinks. The heatsinks also are not planar. So in order to remove one FET or power diode, I'd have to desolder a slew of them and pull the whole set out as one piece, or remove all of the components that are in the way. This is how you build a non-repairable power supply, sigh.
Given that I don't have time for this right now, I ordered a new Ubiquiti US-48-500W switch to replace it. Fewer features switching-wise, but nicer management since I already run their controller for my WiFi access points. I also finally ordered a Ubiquiti US-16-XG, since it appears that they finally resolved the issues with 10GbaseT compatibility. The US-48-500W will become my aggregation switch for gigabit ethernet ports and the US-16-XG will become my core 10 gigabit switch. I've been shopping for over a year for the right 10 gigabit switch for my core, and the US-16-XG was my initial target. But then I saw all the issues with early revisions not talking to typical Intel 10GbaseT NICs. Unfortunately I really want a mix of SFP+ (for DACs to machines in my rack and fiber to rooms in the house that need high bandwidth) and 10GbaseT. For a while I looked at using a Mikrotik CRS317, but it has no 10GbaseT ports, and their SFP+ 10G RJ45 transceivers work but get hot, which makes me question their long-term reliability. While we all know that today's 10GbaseT NICs consume a decent amount of power, they're easier to cool than an SFP+ transceiver.
Recently the QNAP QSW-1208-8C-US became available, but it's unmanaged and I've seen no reviews other than one from the complete moron Robbie (SPANdotCOM) youtuber (why do such people even bother putting their videos on youtube?). Yes, I could buy a Catalyst, ProCurve or a Netgear Mxxxx switch, but they're expensive new and for my home network, I don't need all of their features. I want some management, and PoE is a requirement. I'd definitely like working LACP, but realistically I don't need L3 features behind my gateway/firewall. I also looked at the fs.com switches and some others, but the Ubiquiti should meet my needs and be easy to manage. And hopefully, the power supply design is a little more robust. Of course, my now-dead Netgear was bought for $300 on eBay, but had performed flawlessly (and looks brand new) until last week.
Oddly enough, for the last 5+ years, I've been debating replacing this UPS (which is for my main desktop) and others. The CyberPower units are intriguing price- and feature-wise, but I'm fairly reluctant to buy them for a few reasons. One, the negative reviews on Amazon are for DOA units and failures after 3 weeks to 9 months. They're also all made in China, as many things are these days. But probably most importantly... I've never seen a CyberPower UPS in a datacenter or even an enterprise environment. Small enterprise has tended to use APC (which isn't what it used to be), and datacenters and large enterprise lean toward ABB, Delta and Eaton. A major driver for datacenters is reliability, and recently, the availability of lithium ion in place of lead acid. The lithium ion solutions are expensive and not available in large capacities yet, but lead acid is a real PITA maintenance-wise since battery replacement is required every 3 to 5 years in most cases.
While I'm just a home network user, I consider a UPS fairly critical. It's the last piece of equipment I want to fail, since its failure leaves all of my equipment (and data) vulnerable. I've owned the same brand of UPS for over 20 years, and they've been reliable except for one that was a casualty of the aforementioned lightning strike. Originally Best Power, then Powerware, and now Eaton... all the same brand, they've just been bought/renamed twice. When I worked for a large ISP in the 1990s, we bought nothing but Best Power UPSes. And hell, I've had one that has worked for over 20 years in far-from-ideal conditions!
So, what's the right thing to do? Save a hundred dollars and gamble with a CyberPower unit, or go with a known quantity with a very long history of quality products? I think this one's easy: buy an Eaton and not have to worry about it for at least a decade.
For my desktop setup, I think the right answer is the Eaton 5P1500RC or the Eaton SP1500RT.
I went to Lowe's for some plywood, a piece of acrylic sheet, strap hinges and hasps to make the door for the patch panel and switch enclosure.
I cut the 4 pieces of plywood for the door and drilled the pocket holes. I then glued and screwed them together, using clamps and an engineering square to keep it square while driving the pocket hole screws. I then cut two stiles (one wider than the other to accommodate a handle) and two rails and ran them through the router table. I'd normally use symmetric stiles, but I wanted to maximize visibility into the cabinet. I glued and pocket-hole screwed the stiles and rails to the new door. I sanded it and put the first coat of polyurethane on it. I also cut the acrylic for the door's window.
I installed an inexpensive 3" cabinet pull on the top hinged lid of the enclosure. I have an identical one for the front door of the enclosure.
All that remains for the enclosure is attaching the strap hinges and the hasps that will keep it closed. The hasps I got are low quality but were inexpensive and will be used very infrequently. Of course I still need to install the acrylic in the door after I put a finish on the door. There isn't room for 1/4" quarter-round due to the thickness of the acrylic, so I'll probably just use caulk or a few dabs of hot glue. I could use something stronger if desired since I don't expect to ever need to replace it. On the other hand, it's acrylic so it will scratch and possibly break. I do have room to wedge something about the thickness of a penny into the groove from the rail and stile router bits, so I might just cut an oak shim and use that with no adhesive.
I'm still waiting on shielded cat6a patch cables for the connections from the switch to ria's ethernet interfaces. They should be here this week.
For some as-yet-unknown reason, my Raspberry Pi garage door opener has been working fine with the new switch. Of course, it has been intermittent for months, and I assumed it was the cheap PoE splitter I used inside the enclosure for the Raspberry Pi. But perhaps it's due to the length of the ethernet run and the resulting losses, and perhaps the new switch, being 802.3at capable, cures it. I doubt it; I still suspect the PoE splitter. The new switch reports only 6W total power right now (powering my Ubiquiti UAP-AC-Pro and the Raspberry Pi garage door opener). I have a much more robust replacement PoE splitter, but it will not fit inside the enclosure.
I've concluded that I need a door on the front of my switch and patch panel enclosure, to help prevent dust. It will also give me a place to mount the wiring diagram. Other than a draw latch, I have what I need on hand, unless I want an acrylic front.
I labelled all of the ethernet cables from the existing keystones. I shut down my Raspberry Pi garage door opener, then removed the existing enclosure. I then installed the new enclosure on the wall and connected the grounding terminal block to my service ground block. I installed the existing keystones in the patch panel. I then installed the new switch and enough patch cables to make the network operational again.
The correct short patch cables won't arrive until Tuesday. I still need to order cat6a shielded jacks and bulk cable.
dwm@www:/home/dwm% iperf -c 10.5.5.1 -i 5
------------------------------------------------------------
Client connecting to 10.5.5.1, TCP port 5001
TCP window size: 32.8 KByte (default)
------------------------------------------------------------
[  3] local 10.5.5.2 port 13139 connected with 10.5.5.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 5.0 sec  5.47 GBytes  9.39 Gbits/sec
[  3]  5.0-10.0 sec  5.47 GBytes  9.39 Gbits/sec
[  3]  0.0-10.0 sec  10.9 GBytes  9.39 Gbits/sec
It's worth noting that at this point, typical transfers between these two machines are files, i.e. to/from spinning disks or SATA SSDs. Hence I'm still limited by what the I/O subsystem can do. In the case of a SATA to SATA transfer to and from SSD, that's roughly 4 gigabits/second. Then there are application constraints... scp is slow due to ssh being single-threaded and completely consuming a single core at less than 2 gigabits/second. For NFS, I'm not CPU bound so it's a little better. But for a single SATA III device, I'm limited to around 4.8 gigabits/second. In other words, without going to many spindles or SSDs, I'm not going to hit 10 gigabits/second. This is OK in my mind. The focus right now is to be able to go above 1 gigabit/second for things like backups, SVN checkouts, etc. This is network upgrades, I'm not yet addressing I/O issues and likely won't need to any time soon. But I want the network to be ready when I someday wind up with faster I/O subsystems.
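A sanity check on that 4.8 gigabits/second figure, for my own future reference: SATA III signals at 6 gigabits/second on the wire, but 8b/10b encoding means only 8 of every 10 line bits are data, which is where the ceiling comes from:

```shell
# SATA III line rate, in Mb/s
line_rate=6000
# 8b/10b encoding carries 8 data bits per 10 line bits
data_rate=$(( line_rate * 8 / 10 ))
echo "${data_rate} Mb/s usable"   # prints "4800 Mb/s usable"
```

Protocol overhead shaves that down a bit further in practice, which matches the real-world numbers I see.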
The new switch has 48 1000baseT ports, two dedicated 10GbaseT ports and two dedicated SFP+ ports. I'll use the SFP+ ports with DAC cables for kiva and www, and 10GbaseT for the den despite the fact that I don't currently have anything in the den that can do 10 gigabit.
A nice thing about the new switch is that it will allow me to retire two switches, at least for now. I had a 16-port Zyxel as my main switch and a 16-port Netgear with 8 PoE ports to drive PoE devices like my main WiFi AP and my Raspberry Pi garage door opener. But the Zyxel is unmanaged, and hence I couldn't do any link aggregation between the switches. Not a huge problem for the two devices on it right now, but adding a camera or anything that used real bandwidth was going to create a bottleneck. The new switch has 48 1G ports, and all are PoE+ capable. The total PoE power budget is 390 watts, which I don't expect to ever need.
A bad thing about the new switch is that I need to do some rearranging in my rack to accommodate it. It is deeper than my existing switch. Since I have my switch in the back of the rack, it needs to be in a position where it doesn't conflict with the opposite device in the front of the rack. The new switch's air flow is side-to-side, which I also need to take into account. But the big issue at the moment is that my Middle Atlantic rack drawer is in the front of the rack, directly opposite where I'd like to place the new switch.
In preparation, I moved one of my NMB fan panels in the back of the rack and then moved my KVM and the rackmount LCD/keyboard down 1U in the front of the rack. I haven't moved the Middle Atlantic drawer yet but will do that once the new switch arrives.
The new hardware is in the rack and doing its duty. Since it's only 1U, some rack space was made available. I installed two of my Middle Atlantic 2U rackmount drawers so I have space to store tools, extra fans and spare hard drives.
It is interesting that it might turn out that the new machine is roughly equivalent in power consumption to the old machine. If true, I'll chalk it up to newer CPUs having wider and more effective SpeedStepping, an efficient motherboard, SSD instead of a spinning drive for root, and fewer spinning drives.
The machine I ordered is a used 1U server from eBay. A Supermicro CSE-815TQ-600WB chassis, X9SCi-LN4F motherboard, Xeon E3-1270v2 CPU, 32G of RAM. I ordered a Supermicro MCP-220-00043-0N drive caddy in order to install a Samsung 850 Pro in one of the hotswap bays as the root drive. I may later change to ZFS on root, and it's possible I'll move the Samsung 850 Pro outside of the hot swap bays. I also ordered a Supermicro CSE-PTFB-813LB front bezel. I don't need the lock, but I do need the filter. I have a similar bezel on kiva, and it's very useful in my environment (my unfinished basement).
I don't really need the oomph of an E3-1270v2 CPU, and hence I might replace it with an E3-1265Lv2 to save power (and heat/noise). The one area where CPU has been an issue on ria's current hardware is compiling the kernel or 'make buildworld'. So it will be nice to have a significant bump in performance when I'm upgrading/updating. And I'll be happy to have additional gigabit ethernet ports, since it gives me some flexibility I currently do not have. Cordoning off my WiFi and IoT devices, for example. And having a separate network to connect to kiva if desired.
The new hardware won't be here until next week. Hopefully the failing hard drive in ria will last that long. I already have the Samsung 850 Pro loaded from a good ria backup in case I need to swap it in before the new hardware arrives.
I wound up writing zeros to the whole drive (5 hours), then recreating the Data partition and copying again. The Current_Pending_Sector count went to 0, and Reallocated_Sector_Ct remains at 0. I will keep an eye on things. This drive is used, and has 20,000 hours on it (2.28 years). The first drive (Data Backups) also has 20,000 hours on it. In both cases, that's fewer hours than the drive that was replaced, and the replaced drives were not enterprise drives.
When I installed the new drives in their hot-swap bays, they showed up as da3 and da4. I then did this:
# gpart create -s gpt da3
# gpart create -s gpt da4
# gpart add -t freebsd-zfs -l gpzfs1_2 -b1M -s3725G da3
# gpart add -t freebsd-zfs -l gpzfs1_3 -b1M -s3725G da4
# zpool add zfs1 mirror /dev/gpt/gpzfs1_2 /dev/gpt/gpzfs1_3
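After adding a vdev like this, I like to sanity-check before trusting it with data; zpool add is effectively permanent, since there's no removing a vdev afterward on the FreeBSD of this era. Assuming the same device, label and pool names as above, a quick verification pass looks like:

```
# gpart show -l da3 da4
# zpool status zfs1
# zpool list zfs1
```

gpart show -l confirms the GPT labels landed on the right drives, zpool status should show the new mirror ONLINE with both gpt labels, and zpool list should show the added capacity.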
I intend to power everything using PoE (power over ethernet). Today I received a Netgear ProSAFE JGS516PE gigabit ethernet switch with 8 PoE ports that I'll be installing in my rack in the basement. I have plans for some PoE IP cameras and other PoE devices, so the switch made more sense than a single injector with a power brick. Multiple power bricks don't scale, and are a very messy solution.
To power the Raspberry Pi from PoE, I'm using a small PoE to microUSB device that I'll put inside a larger enclosure with the Raspberry Pi 2. This enclosure will also house the indicator LEDs and door activation buttons. The door activation buttons are just for convenience; I often want to be able to open or close the garage door without crossing the garage, and the buttons will allow me to do that while standing near the doors.
I want better information about my garage door than just closed or not closed, without resorting to using a camera. For this reason, not only do I want a switch to tell me the door is closed, but I want an optical encoder to tell me when the door is in motion (including direction). I'm almost tempted to design the whole opener so I can detect when the door stops from force, but I can do most of that with just the encoder and tapping the beam detectors. If the door reverses while closing, it either means someone commanded it to reverse (which I can detect) or it means there was an obstruction.
Ideally I'd be able to get approval for using my app with CarPlay, but I'm a long way from looking at that option.
I've started writing the code for the garage door state machine, and I'm writing the unit tests as I go.
In the long term, I need to replace the motherboard and CPU on ria with something with a bit more oomph. What I really want is a system with an Atom C2750 or C2758. This would allow me to use more RAM, and give me more ethernet ports so I can create a separate LAN for the connection to kiva.
Thanks to Warren Block for his concise write-up on partitioning with GPT at http://www.wonkity.com/~wblock/docs/html/disksetup.html
At any rate, UPS monitoring has been migrated to depot. Since depot doesn't have a DB9 serial port, I am using a USB to serial adapter.
I need to put a Samsung 850 Pro SSD in depot. It's still running the OS from a now-ancient WD VelociRaptor drive that has seen better days. depot could use speedier I/O there now that it will be my web server, and I'd prefer the reliability of a Samsung 850 Pro versus a spinning drive.
The main motivation for this change: RAM. My existing web server has 4G of RAM and an Intel Atom D510 which can't address more memory. Most of the time this isn't a big issue, but over the years the memory utilization on my server has crept upward to the point where I now see a few hundred megabytes of swap occupied (though very little paging activity). The second motivation is CPU. While my new gallery software is MUCH speedier and more efficient than gallery3, I'd like my bulk photo uploads to be a little snappier. Depot's i5-2405S processor will provide the oomph I need. I know this because depot was the host I used to develop my gallery software.
The used server arriving on Monday is a Supermicro X8DTN+ motherboard in a Supermicro SC826 2U chassis. It contains a pair of Xeon L5640 CPUs and 48G of RAM. The chassis has 12 hot-swap drive bays. I don't intend to fully occupy the drive bays, but I did buy an LSI 9211-8i flashed to IT mode for ZFS. I have a new Crucial MX100 512G flash drive that I'll use for the OS drive, and I bought 2 Supermicro hot-swap cages for 2.5" drives; one for the OS drive and one for an SSD SLOG if I decide to add one. Today, I don't have high sync loads and hence I don't need an SLOG. That may change in the future, hence the second 2.5" hot-swap cage.
The L5640 CPUs are overkill for my needs. I may buy L5630 CPUs to replace them, in the interest of reducing power consumption. I like the idea of having 12 cores for the times I need to do something intensive, but these are older Xeons and not as power-efficient as say a new E3 or E5 Xeon. But this really boils down to up-front cost; I got the server with chassis, rails, motherboard, CPUs and RAM for $600. The LSI 9211-8i was $129. The Crucial MX100 was $159. I will add a pair of 4TB disk drives (ZFS mirror) after I've loaded the operating system.
I continue to muck around with my Clover configuration. Once every couple of days, it will crash when trying to wake it up. I'm hoping that my latest configuration changes will resolve the issue, but we'll see. Some of it is probably due to the U1L BIOS on the motherboard, but there's nothing newer from Gigabyte that is UEFI.
I am using the new lightning cables I got from Amazon and so far I like them. They are from IXCC. They're a little bit beefier than the Apple cables, but they don't conflict with the edge of my Pad & Quill case.
I'm still working on installing the ports I need on depot.
I have yet to figure out why I can't completely boot from my OS backup drive. I do have full backups running to it now using the latest Carbon Copy Cloner. And I did install Clover on it, in the EFI partition like my primary boot drive. Same drivers, kernel extensions, etc. And I can boot the OS from that drive when the bootloader runs from my primary drive. It's possible that there's something wonky about my BIOS, I don't know yet. At the moment I'm not going to worry about it too much. The new primary boot drive is a Samsung 850 Pro which will likely outlive the rest of this machine.
The only thing left is to get a decent USB 3.0 hub and to get Bluetooth working without using the IOGear USB Bluetooth dongle, so I can have Handoff work. I ordered a WiFi card setup from osxwifin.com, which will resolve the issue with the card I have now (which was bogusly sold as an Apple card; it's not!).
I believe I have iMessages working again. The issue was that I had forgotten to transfer my ROM setting from Mountain Lion.
It looks like SMS via my iPhone works. Yay! AirDrop works too, in both directions (desktop to phone, phone to desktop). I love you Apple!
Further difficulties ensued once I had a Yosemite base installation. I had to install the Nvidia Web drivers and add PatchVBiosBytes to get my GTX570 to run at full resolution. Then I shot myself in the foot trying to get the Bluetooth on my WiFi card to work, and had to use single-user mode from my Clover USB stick to repair what I had done (modifying an Info.plist for Bluetooth).
I have yet to dig into the issues with iCloud, but it should not be difficult.
So at the moment, I'm almost back to where I was with Mountain Lion, except for iCloud/iMessages and Bluetooth. I still need to dig into the Nvidia issue of running at full tilt all of the time.
I hemmed and hawed over which SSD to buy for over a month. The 850 Pro is not cheap. But I've been very happy with my other Samsung SSD drives, and in the end it boiled down to reliability. The 850 Pro carries a 10 year warranty, well above any others in the industry (most are 3 years, some are 5 years). Recovering from a dead drive eats more of my time than the entire price of the drive is worth.
I also bought a Sandisk Extreme 32G flash drive. This will replace one of my older 16G drives that I intend to use for the Clover and Yosemite bootstrap. I already have a 16G Sandisk Extreme that I use for work, and I've been very happy with it (it's speedy).
Finally, I bought another Razer Naga mouse to replace my existing one. Razer sucks quality-wise; my old mouse was used less than 100 hours total (I almost never use it or even plug it in since I much prefer my Apple trackpad). The old mouse lights up and the buttons work, but the sensor doesn't. The only reason I bought another Razer is that MicroCenter doesn't have a lot of wired mice in the store that work well with OS X. I don't want Bluetooth or RF when installing a new OS; I need the reliability of a wired USB mouse.
I'm glad I did the upgrade. By moving from bind to unbound for DNS, I'm no longer seeing memory utilization issues. This saves me real money since I no longer need to consider bumping the memory here. The memory for this machine is not cheap since it's now outdated. I'm seeing 3.7G free memory out of 4G, with all the services I need running.
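For my own future reference, the unbound setup that replaced bind is tiny; a minimal caching-resolver unbound.conf is along these lines (the addresses here are placeholders, not my actual network):

```
server:
    interface: 127.0.0.1
    interface: 192.168.1.1
    access-control: 127.0.0.0/8 allow
    access-control: 192.168.1.0/24 allow
    hide-identity: yes
    hide-version: yes
```

unbound's memory footprint stays bounded by its cache settings, which is a big part of why it behaves so much better than bind did on this little Atom box.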
I'm thrilled to have basically the same C++ compiler environment as my OS X machines. A modern C++ compiler with full C++11 and C++14 support and better licensing than GPLv3.
I cleaned up most of the old ports and reinstalled those I need. I have to admit that I was sad to see the ludicrous number of dependencies now included by installing emacs-nox11. But I can't live without my favorite editor for the last 20 years. I updated to isc-dhcp43-server, and just rebooted to make sure that all of my changes are good.
I installed GenericUSBXHCI.kext from here. I now have working USB 3.0 ports from my motherboard. Awesome! I need to do this on mom's hackintosh and again on my own when I install Yosemite.
My old Sunfire Ultimate Receiver started flaking out last night. It would appear to shut down randomly, and would not properly respond to controls on the front panel. I don't know why. I tried the factory reset mechanism (hold the TONE DOWN button and the POWER button until 'RESETTING TO FACTORY DEFAULTS' appears), but it did not help. So I took it apart expecting to find a popped electrolytic capacitor or the like. I found nothing wrong, and I was impressed with the overall design (pretty neat how all of the gazillion FETs are attached to the bottom of the chassis, and the bottom of the chassis is a thick piece of aluminum with threaded holes as well as PEM nuts). I wish I had taken pictures. I cleaned it with compressed air and reassembled it. It's now working fine, but I don't believe my cleaning fixed it. I'm not going to sweat it too much; I'm long overdue for a new receiver (one with HDMI inputs), along with a new TV.
Speaking of old A/V gear... my TV has no HDMI inputs, only component video. And it'll do 720p and 1080i, but not 1080p. I'm using a ViewHD HDMI to component video converter for the Apple TV. It works well so far.
I added a second option for a USB 3.0 hub to my wish list. It's the Satechi UH3-10P, which has 10 USB 3.0 ports plus a charging port.
I also ordered Sanyo eneloop AA batteries with a 4-location charger so I can start using them for my Magic Trackpad. I've been meaning to switch to rechargeables here and elsewhere for ages, to save money and reduce the amount of batteries I recycle. I am thrilled with the Energizer Ultimate lithium so far, but they're expensive and I would rather just use good-quality NiMH rechargeables.
I ordered Sanyo eneloop AA batteries with a 4-location charger so mom can start using them for her Magic Trackpad.
I also ordered a Satechi Premium 4-port Aluminum USB 2.0 hub for her, with a 3-meter extension cable. The extension cable will let her plug it into the back of the machine if she doesn't want to plug it into her Apple keyboard. The advantage of this hub over others is purely cosmetic: it matches her Apple keyboard and trackpad. The disadvantage is that it's bus-powered, so it can't be effectively used for anything that draws a lot of current. It should be fine for thumb drives and the like.
Worth noting that the molex power header on this card SUCKS. It's very flimsy, and the holes for its latches are way too big so the latches will not stay in place. Hence the header bends away from the board when trying to install the cable, or after installation if the cable is routed downward. As a result, I wound up drilling a small hole through the PCB to allow a cable tie to be installed to hold the header in place.
The FL1009-based USB 3.0 card caused the machine to not finish booting. Got to the gray screen, then it hung. Might have been due to not having the molex plug powered. I didn't power it because it's going to take a little work to make the cabling tidy. I left it out for now, I'll get one working in my machine and then duplicate the work on mom's machine.
Mom's Mac Mini was running OS X 10.4. There's a long list of things I need to show her. I don't expect her to remember everything from a crash course, but she'll likely remember a faint inkling of their existence and be able to find and use them herself once she knows they exist. It's not like there isn't a lot of helpful information online, and she can always call me for help.
For some reason I thought mom had a wired ethernet connection to her Verizon internet connection. She doesn't; she's using wireless because she had her LTE modem placed in one bedroom but moved her computer to a different bedroom. So she needs a wireless card for the hackintosh. I ordered a TP-LINK TL-WDN4800 since it supposedly works out of the box. If that turns out to not be true, I will likely just run some wired ethernet for her with drops from the attic and wallplates.
I also ordered some Fresco Logic FL1009-200 USB 3.0 PCI Express cards. These supposedly work out of the box with Mountain Lion. While they're slower than some other cards, they're still much faster than a typical USB 2.0 port.
Since I've been having trouble with lynx2mac's ethernet driver for some time (it sometimes fails miserably when waking from sleep), I've switched to using the Realtek driver for Lion. So far, it's working fine.
This is the second time I've received the wrong item from online ordering in the last several months. Maybe brick-and-mortar stores have a fighting chance if companies like Amazon and NewEgg can't seem to send the items that were ordered. I know I've recently moved to buying locally at MicroCenter whenever possible for this reason. Paying sales tax and driving there on my way home from work is still less hassle than having to repackage an item and go to the UPS store and have my funds tied up just because an online vendor can't get its act together.
On a positive note, I picked up the Magic Trackpad from the Apple Store at Twelve Oaks Mall. As usual, a very pleasant retail experience. In fact it has me leaning toward getting a Mac wired keyboard for this machine. It's a lot cheaper, it takes up less desk space, and if mom doesn't like it, I'll keep it for myself as my spare.
I created a new SSDT with a turbo overclock to 4GHz. It works fine, and I never saw CPU temperatures get out of hand running all cores at full tilt. They levelled off around 55C.
I set up Carbon Copy Cloner, and ran the first backup.
I set up Hyperdock.
I also bought an iogear GBU521 USB Bluetooth adapter, since it supports Bluetooth 4.0 and reportedly works better than the Rocketfish RF-MRBTAD I have been using. This appears to be true; though I had to re-pair my Magic Trackpad after putting the GBU521 in place, it looks like I don't have to wait 30 seconds after wake-from-sleep to use my trackpad now. It starts working immediately after the USB hub in my das keyboard is initialized.
Finally, I bought a Samsung 840 Pro Series 256 gigabyte SSD for the second hackintosh I'm building for mom. It was on sale at $224.99, as was the Fractal Design Arc Midi case I bought for her. I also bought an EVGA GT640 video card for her. She's not going to be playing any games, so it'll work just fine.
I finished migrating to the new Samsung 840 Pro SSD. One of the things I had forgotten... my installation settings for Clover. I used GPT boot0hfs, following the instructions for creating a FAT-32 filesystem on the EFI partition.
Samsung 840 Pro performance... I'm seeing 467 MB/sec write, 515 MB/sec read. Not dramatically different than the Samsung 830, but a bit faster and I now have space to breathe.
Once done, I reconfigured my CarbonCopyCloner scheduled tasks to back up the new SSD. I also started a TimeMachine backup to my MyBookLive.
One strange thing happened in this process... my das keyboard is no longer able to make the BIOS stop before it jumps to the bootloader. I suspect it has something to do with having the new USB to Bluetooth adapter plugged in to the hub on the das keyboard. I will have to figure it out shortly. Since I needed to be able to get to the boot menu in the BIOS while setting up the new SSD, I plugged my old Unicomp keyboard into one of the front panel USB ports. That worked fine.
In the process, I discovered that GetLocalInterfaces() in my libDwm library did not work correctly on FreeBSD 9.1. So I had to fix it. It now works correctly and sitetrafficd is now running on depot, recording IP traffic totals and TCP handshake round-trip times.
da0 at mps0 bus 0 scbus0 target 0 lun 0
da0: Fixed Direct Access SCSI-5 device
da0: 150.000MB/s transfers
da0: Command Queueing enabled
da0: 70911MB (145226112 512 byte sectors: 255H 63S/T 9039C)

I then issued camcontrol stop da0 to stop the drive, and removed the drive. This also went fine:

mps0: mpssas_alloc_tm freezing simq
mps0: mpssas_remove_complete on handle 0x0009, IOCStatus= 0x0
mps0: mpssas_free_tm releasing simq
(da0:mps0:0:0:0): lost device - 0 outstanding, 1 refs
(pass1:mps0:0:0:0): passdevgonecb: devfs entry is gone
(da0:mps0:0:0:0): removing device entry

Hence I believe I'm all set for adding drives to be used with ZFS.
I installed smartmontools from ports, and enabled smartd in /etc/rc.conf. I configured the disks I want monitored in /etc/periodic.conf. I then started smartd.
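For future reference, the pieces involved are small; the file contents below are a from-memory sketch, and the device names will of course vary per machine:

```
# /etc/rc.conf
smartd_enable="YES"

# /etc/periodic.conf -- include SMART status in the daily periodic mail
daily_status_smart_devices="/dev/ada0 /dev/da0"

# /usr/local/etc/smartd.conf -- monitor all attributes, mail on trouble
/dev/ada0 -a -m root
/dev/da0 -a -m root
```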
The SFF-8087 to SATA forward breakout cables arrived. I installed them and ziptied them neatly around the perimeter of the case so they don't have an adverse effect on the air flow.
I put depot's case top on, installed the modified handles and put depot back in the rack. I believe I'm ready for pool drives.
I wound up flashing the IT firmware to the IBM M1015 by temporarily installing it in the motherboard of my hackintosh. I used FreeDOS and the sas2flsh utility, using 2108it.bin and mpt2sas2.rom as input files for sas2flsh. I had to do this because depot's motherboard has a UEFI BIOS (and hence sas2flsh would not work on it). I could not get the EFI shell to work, and hence could not use sas2flsh.efi. Once flashed, I put it back in depot, and as near as I can tell, it's all OK. From dmesg:
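For my own notes, the flash step itself under FreeDOS boiled down to a single command; the exact flags are in the linked instructions, so treat this as a from-memory sketch:

```
sas2flsh -o -f 2108it.bin -b mpt2sas2.rom
```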
mps0: port 0xe000-0xe0ff mem 0xf7dc0000-0xf7dc3fff, 0xf7d80000-0xf7dbffff irq 16 at device 0.0 on pci1
mps0: Firmware: 09.00.00.00, Driver: 14.00.00.01-fbsd
mps0: IOCCapabilities: 1285c

And camcontrol sees it:

dwm@depot:/home/dwm% sudo camcontrol devlist -v
scbus0 on mps0 bus 0:
...
I have a plan to modify the handles on the RSV-L4411 case so I can get it fully into the rack on the rails without conflict. I modified one of them and it works.
I removed the feet from the Rosewill RSV-L4411. I'm a bit surprised feet were included on a rackmount case, though I guess they'd be handy if I were using a shelf instead of sliding rails. I'll put them in the Middle Atlantic drawer that's installed in my rack in case I ever need them.
I installed the power supply, motherboard and temporary boot drive in the Rosewill RSV-L4411. It appears that the IBM M1015 card will arrive tomorrow, at which point I'll install it in the motherboard and try to get it working. Amazon has not yet shipped my 1-meter SFF-8087 to SATA forward breakout cables, so I don't know if I'll receive them this week, despite it being a Prime order.
Well, the Rosewill rack rails suck. The way the brackets are made, combined with the design of the handles on the RSV-L4411, means the case can't go all the way in to be flush with the rack and the rack screws will not reach. I'll have to modify things to make it work. Makes me wish I had just saved up for a Supermicro 3U case.
I need cables for the IBM M1015 card. I don't think 18" cables will suffice for a super-clean installation in the Rosewill RSV-L4411 case, since the case is 25" deep. So I ordered a pair of 1-meter 3Ware CBL-SFF8087OCF-10M cables from Amazon. Once the IBM M1015 card and the cables arrive, I can finish the internal work on depot and hence be ready to install drives into the hot-swap bays. I may pull some of the 1TB drives from other machines and create a small zpool to test things. But my first production pool will be 6 drives in raidz2. I will later probably add a second 6-drive raidz2 pool, and use a Samsung 840 Pro Series SSD as my boot drive.
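Sketched as commands (device names hypothetical until the drives actually exist):

```shell
# First production pool: 6-drive raidz2, survives any two drive failures.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# Later growth: a second 6-drive raidz2 vdev added to the same pool.
zpool add tank raidz2 da6 da7 da8 da9 da10 da11
```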
It's worth noting that the critical I/O bottleneck for this machine will be the gigabit ethernet. It won't be doing anything itself other than acting as a network storage server, mostly for backups of all of my other machines and maybe for audio and video storage.
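The arithmetic behind that claim (the single-drive ~150 MB/s figure is my rough assumption for current 7200 rpm drives):

```shell
# Gigabit ethernet moves at most 1000 Mb/s = 125 MB/s, and in practice
# about 940 Mb/s of TCP goodput, i.e. roughly 117 MB/s.
wire=$((1000 / 8))
tcp=$((940 / 8))
echo "wire ceiling: ${wire} MB/s, practical TCP: ${tcp} MB/s"
# A single modern 7200 rpm drive sustains on the order of 150 MB/s, and
# raidz2 reads stripe across several drives, so the disks will outrun
# the gigabit link by a wide margin.
```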
I installed nut for UPS monitoring. Of course I haven't bought new batteries for either of my Powerware 5115 UPS units yet. So for now I'll just be using it in slave mode; depot will be plugged in to the Powerware 5125, whose master is www.
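On the slave side, the nut configuration amounts to essentially one upsmon.conf line; the UPS name, account and password below are placeholders:

```
# /usr/local/etc/nut/upsmon.conf on depot (the slave):
MONITOR pw5125@www 1 monuser secret slave
```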
The motherboard has UEFI BIOS, and it looks like it's very speedy. That'll be nice for the occasions where I need to reboot. It will let me boot from a USB thumb drive (I checked), and it sees all of the RAM modules so at least I know they're all good as far as being recognized. The stock CPU cooler runs very quiet when the CPU is idle, which is good news. The power supply is very quiet at idle too. Not that it matters... I know the real noise from this machine will emanate from the hard drives and case fans.
The dd of the memstick image completed, and I put the USB thumb drive in one of depot's rear ports. It booted fine and I performed the installation of FreeBSD 9.1-RC3. Everything went smoothly. Note that I consider this whole activity temporary... the 600G VelociRaptor drive is much larger than I need for a boot drive. But this is a good test of running a boot drive from the ASMedia SATA ports so that I can use the H77 SATA ports for raidz2 drives.
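The memstick write itself is the usual dd pattern; /dev/da0 below is hypothetical (check dmesg for the thumb drive's actual device node). The same pattern can be dry-run against plain files:

```shell
# The real thing (hypothetical device node -- verify before running!):
#   dd if=FreeBSD-9.1-RC3-amd64-memstick.img of=/dev/da0 bs=64k
# Safe dry run of the identical pattern using ordinary files:
dd if=/dev/zero of=img.bin bs=64k count=4 2>/dev/null
dd if=img.bin of=copy.bin bs=64k 2>/dev/null
cmp img.bin copy.bin && echo "images match"
```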
I created a new kernel configuration (/sys/amd64/conf/depot) which just adds device coretemp to the GENERIC kernel. I want to be able to monitor CPU core temperatures.
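The config itself is tiny; FreeBSD also lets a custom config include GENERIC, which is equivalent to copying it and adding one line:

```
# /sys/amd64/conf/depot
include GENERIC
ident   depot
device  coretemp
```

Built and installed from /usr/src with make buildkernel KERNCONF=depot followed by make installkernel KERNCONF=depot.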
So, I'm all set to do some testing once the new case arrives. Of course I don't have array drives yet. I'll be buying those piecemeal over the next few months.
I need to order some rack thumbscrews for my Middle Atlantic fan panels. I can get them directly from www.rackrelease.com.
The ASRock H77 Pro4-M motherboard, 32G of DDR3 1333 memory, the Seasonic SS12II 430B power supply (430 watts), and a Western Digital WD6000HLHX 600G VelociRaptor hard drive arrived. All of these parts except the hard drive are for depot.
I installed the CPU and the memory in the ASRock H77 Pro4-M motherboard. I have not yet installed the CPU cooler, only because I'm not sure I'll use the stock one and the motherboard won't fit back in its box with the CPU cooler on it. No sense in installing the cooler until I have a case to put it in.
The StarTech 25U rack arrived via freight shipping today, and it appears to be intact. I'll know when I take it all out of the box and try to put it together.
I quickly put together the StarTech 25U rack. It's all good. The only indication that it's refurbished: a few paint scratches and chips on some of the aluminum braces. It's a non-issue for me since this isn't a piece of furniture, it's a tool. It's not going to be placed in living space, but it looks very good. More importantly, it's much more functional for my purposes than my existing full-height racks. The one drawback: I can't see moving it out of the home on the casters when it's loaded... they're not strong enough. It's also worth noting that the first rack space is not at the floor of the rack, it's about 1.5" above it. That means I probably can't use my Powerware 5125 on the floor of the rack with no rails. I'm calling this a non-issue since I have other options.
I've decided that I want the CSB HRL634WF2 batteries from atbatt.com for my Powerware 5115 UPS units. Free shipping for orders over $75, and these particular batteries are 9Ah and designed for long life, rated at 260 full cycles.
I removed the batteries from my old Axxium 1500 2U rackmount UPS. They can be replaced with the same batteries I used in my Powerware 5125: Werker WKA12-9F2. I hate the pricing at BatteriesPlus... they want $42 each and I need 4 of them. The only advantage to getting them locally versus online is warranty and recycling of the old batteries. atbatt.com has much better pricing (more than $10 less per battery if I compare the exact same battery: a Genesis NP7-12) and has free shipping for orders over $75. Of course, I'm not all that excited about reviving my Axxium 1500 at the moment, since its only connection is a serial port. The Powerware 5115 units that I own have USB ports (though they are not HID compliant). It's also worth noting that since my load is quite low, I don't need additional UPS at the moment. Even if my load doubles with the new server, I'll still be at 20% load. This makes me think that I should wait a bit before buying more UPS batteries, and instead continue working toward finishing depot's build. Most of my power issues are of very short duration (seconds) or very long duration (many hours). I can't cover many hours no matter what I do with UPS, but I can cover 20 minutes rather easily with only my Powerware 5125. If there's a place I could use more UPS at the moment, it'd be in my office since my desktop easily consumes more power than my servers (dual monitors, power-hungry GPU, etc.).
I'm leaning toward an IBM M1015 card for ZFS. It will give me 8 SATA ports, and is readily available on eBay for cheap. To reflash the card to IT-mode, I can use the instructions here: http://lime-technology.com/forum/index.php?topic=12767.msg124393#msg124393
Once the new rack and server parts arrive, I'll basically be set for the next several years with respect to home computing, with the exception of an HTPC. I'll probably buy a Mac mini for my HTPC.
One problem I noticed when working on the rack: the keyboard port for my Belkin KVM is intermittent. When the new rack arrives, I'll take the KVM apart and see if I can replace that port.
Since I had not looked at my Powerware 5115 UPS units in a long time, I checked... they're the 1400VA/1000W models, which is good news. I can get new batteries for them and be in very good shape for UPS. At the moment my power consumption in the rack isn't very high since the two machines that run 7x24 are both miniITX systems with low power consumption (though I haven't measured it). Then there's the cable modem, the gigabit ethernet switch, the KVM and KVM keyboard/monitor. These are all low power devices since the KVM monitor goes to sleep when I'm not using it. My plan is to put ria on one of the Powerware 5115 units, www on the other Powerware 5115 unit, and everything else on the Powerware 5125 (including the new backup server).
The reason I'm rearranging right now: I want my layout to be planned well in advance of receiving the new rack, so I can just move everything once the new rack arrives and not have to think about it at that time. This will minimize my downtime and let me put the old rack on craigslist shortly thereafter.
I need to print a cheat sheet on adhesive-backed vinyl so I'll remember the hot-key sequences for my KVM. When I move to a smaller rack, it won't be easy to use the channel changing button on the front of my Belkin KVM when the keyboard/monitor is extended for use.
I pulled the batteries from my Powerware 5125 2U rackmount UPS since they need to be replaced. They're Portalac PX12090. Batteries Plus shows a 7.2Ah battery as the replacement. I think that sucks, since the PX12090 is a 9Ah rated battery. Hence I think the more appropriate replacement is the Werker WKA12-9F2. I bought some of these for my APC XS 1300 not long ago, so I took one out of the XS 1300 to measure it. They will fit. The WKA12-9F2 is $42 at the local store. I need four of them, so the total comes to $168.
I also need new batteries for my Powerware 5115 1U rackmount UPS units. The Werker WKA6-7.2F will fit, but it's rated for only 7.2Ah versus the 9Ah Panasonic UP-RW0645CH1 batteries that are in them right now. Each of my 1U rackmount UPS units takes 6 of these batteries. The Werker WKA6-7.2F is $24 at my local Batteries Plus, so the total comes to $288 plus tax. At the moment I don't really need both of these UPS units online, so I could just buy 6 batteries for $144 plus tax.
My second rack is holding 16U of Middle Atlantic rackmount drawers right now, along with my old waveform generator and oscilloscope. I intend to replace that with a shorter, shallower rack. The Middle Atlantic rackmount drawers are only 15" deep, so using a deep computer rack is a tremendous waste of space. Buying a pair of shorter, shallower racks will let me sell the second 42U rack as well as both of my SKB road cases (currently holding more Middle Atlantic rackmount drawers). Though I might keep the SKB road cases, they're sort of nice for holding 12U of Middle Atlantic rackmount drawers.
So, what do I want as a shorter rack for my computers? I need full depth here since I intend to put a 4U case in it as my backup/storage server. A 24U would be just about ideal. The unfortunate thing... shipping. Most places won't ship a rack to a residence for a reasonable price, because they use freight shipping. So for now I may just buy a 21U Royal Racks A/V rack for my Middle Atlantic drawers, and keep one of my tall racks until later.
A Norco C-24U would serve my purposes. The drawback: the casters are a joke. Better would be a Startech 25U 36-inch knock-down rack, which is similar to my existing racks but shorter and with mesh rear door. Another option would be to make my own rack out of hardwood plywood with a butcher-block top. It would of course be heavy, but I already own very good casters for the job.
This is one of those areas where I'm irked that Apple continues to use HFS and abandoned ZFS over licensing issues. HFS remains fairly easy to corrupt, journaled or not. A bad DIMM shouldn't result in the loss of the filesystem. I should also be able to force a filesystem to be mounted read-write if I can mount it read-only. I was able to mount the filesystem on the SSD as read-only, but for the life of me I could not figure out how to mount it read-write. That meant I had to erase it just so I could make it a target for Carbon Copy Cloner.
I hope I can return the bad DIMM to MicroCenter; I don't really have time for an RMA. A new module is $45 and in stock.
The slowness of my OS clone drive is annoying. It's a Western Digital Caviar Green. Not so bad for backups and restores, but it's somewhat painful to use as a boot drive, especially compared to the SSD. I really should replace it with a 7,200 or 10,000 rpm drive. But as long as my backups continue to work and I don't completely lose my SSD, I'm O.K.
At the moment I know that I'm somewhat constrained by the slowness of the Lnx2Mac ethernet driver I'm using in my hackintosh. It has no support for jumbo frames, which puts some constraints on my network bandwidth. In fact the Realtek 8111 chipset doesn't support jumbo frames. If I want to do something about it, it appears my best option is an HP NC360T 412648-B21 from eBay. I don't really want to occupy my second PCIEX16 slot with it, so I'd need to slightly modify the card to fit in one of the PCIEX1 slots. Not a big deal, and I'd normally say it's not worth it except for the fact that I'd really like to migrate my whole LAN to jumbo frames. The HP card also has support for checksum offload, and the hardware supports TCP segmentation offload though I'm not sure the OS X driver uses it.
The issue here isn't whether or not I can saturate my LAN; I can, as shown with iperf:
dwm@www:/home/dwm% iperf -c hackintosh -i 1
------------------------------------------------------------
Client connecting to hackintosh, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local 192.168.168.59 port 44223 connected with hackintosh port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   108 MBytes   907 Mbits/sec
[  3]  1.0- 2.0 sec   112 MBytes   942 Mbits/sec
[  3]  2.0- 3.0 sec   112 MBytes   942 Mbits/sec
[  3]  3.0- 4.0 sec   112 MBytes   941 Mbits/sec
[  3]  4.0- 5.0 sec   112 MBytes   941 Mbits/sec
[  3]  5.0- 6.0 sec   112 MBytes   942 Mbits/sec
[  3]  6.0- 7.0 sec   112 MBytes   941 Mbits/sec
[  3]  7.0- 8.0 sec   112 MBytes   941 Mbits/sec
[  3]  8.0- 9.0 sec   112 MBytes   941 Mbits/sec
[  3]  9.0-10.0 sec   112 MBytes   940 Mbits/sec
[  3]  0.0-10.0 sec  1.09 GBytes   937 Mbits/sec
The issue here is the amount of work the CPU needs to do on both sides. It's a high packet rate. I think I'll live with it for now, since I'm mostly only concerned about data transfer to what are slow disk drives. And at the moment I'm just happy that the new switches seem to work just dandy and are not a factor in my network bandwidth. And I'm happy to know that the wiring I did in the house years ago is working perfectly; TCP at 940 Mbits/sec is fantastic.
Fingers crossed that the time it takes for my level 0 backups to complete will be reduced significantly.
I'm still saving up for a NAS box, and deciding what I really need. Ideally, something that will last at least 5 years. I think there's no question that I'll run FreeBSD and use ZFS, but the downside of using a ZFS RAID-Z configuration is the RAM required. I can't just use a MiniITX motherboard with only 2 RAM slots, since it's likely too restrictive RAM-wise. I think I want 32G of RAM, and ideally I'd have 6 SATA III ports on the motherboard. Not that I expect many individual drives to exceed SATA II rates, but SATA II is a fairly old spec at this point and 5 years from now it'll probably be considered antiquated for hard drives. It already is for SSDs.
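The 32G target follows from the common ZFS rule of thumb of roughly 1 GB of RAM per TB of pool (a guideline, not a hard requirement), plus plenty of headroom for the ARC; the drive size below is hypothetical:

```shell
# Hypothetical 6 x 3 TB raidz2: two drives' worth of space goes to parity.
data_drives=$((6 - 2))
usable_tb=$((data_drives * 3))
echo "usable: ${usable_tb} TB -> ~${usable_tb} GB RAM by the rule of thumb"
```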
We'll see how I feel about this setup after I've used the new setup for a week or two. I do still dislike how cramped some areas of the keyboard seem compared to my old Unicomp buckling spring keyboard. The control keys in particular, since I'm a 20-year emacs user and use emacs keybindings in a number of places (not just in emacs).
In any event, I'll be thrilled to replace the keycaps with the ones from wasdkeyboards.com. It will resolve my issues and I can go back to not having a lamp shining on my keyboard.
I also ordered the wire keycap puller and a set of the 40A o-ring dampeners with .4mm travel reduction. I may or may not use the o-ring dampeners, depending on how they feel. My hope is that they reduce the shock to my right hand when I bottom out the keys, without adversely affecting the feel of the keyboard. I think I'll like them, but if not I just won't use them. At a minimum they'll probably be useful in training me to not bottom out the keys.
This is the first keyboard I've owned with Cherry MX blue key switches. My first impression is that I like it a lot. It has a much lighter touch than my Unicomp, and is considerably quieter. However, as expected, I hate the printing on the keycaps; it's too small, and I prefer capital letters over small letters. I'm also certain that the printing is going to wear off quickly, like every white-on-black laser-etched-and-filled keyboard I've owned. There's a reason Unicomp only offered it for a very short time; the longevity is abysmal compared to dye-sublimated keycaps or even light-colored keys with just laser etching.
Short story made long... as much as I love my buckling spring keyboards like the one I use now, there are times when I want something slightly lighter in touch and a little bit quieter. I still want clicking for touch typing, just a slightly lighter touch. There isn't a chance in hell I'm ever going to use a rubber-dome type keyboard for daily use, I've been using buckling spring keyboards like the IBM Model M for over 20 years and IMHO non-mechanical keyboards just suck to type on for long periods. There's really no comparison in terms of keyboard quality. Rubber-dome keyboards are inexpensive, but they also suck. In fact any keyboard that requires full depression of the keys sucks for typing. I can live with the keyboard on my MacBook Pro, but I'd never want to use it for long coding sessions.
Many years ago, I was a passenger in a car accident that damaged a bunch of cartilage in my right hand. I also had two reconstructive surgeries on my right thumb for a Bennett's fracture. For this reason, I can't tolerate keyboards that require me to bottom out the keyswitch to have a character transmitted. They cause daily pain, not to mention the frustration of their lack of longevity. A quality mechanical keyboard will last through decades of daily use, and not require one to bottom out the keys to get a character. They also never produce double hits.
If I were buying for an operating system other than OS X, I'd probably buy a keyboard with Topre switches. They're probably the nicest available for typing, and they're available in 55g and 45g weights. However, I've not seen one made for the Mac, so they have the wrong labelling and are missing some of the special functions on the top row of function keys. I've been living with such issues for a long time using my Unicomp buckling spring keyboards, but I want to have the functions as I'm now using my hackintosh as my primary desktop. I also want the extra USB ports on the daskeyboard; one will hold my USB to Bluetooth micro-adapter for my magic trackpad, and the other will either be used for USB flash drives or for a Mobee Power Bar to keep my Magic Trackpad charged at all times. I've been using AA alkaline batteries in my Magic Trackpad, and it consumes a pair of them about once a month. Given that I essentially never move it and it sits right next to my keyboard, I could use a 6" USB to micro USB cable plugged into the keyboard to keep my Magic Trackpad charged at all times. On the rare occasion I need to use my Razer Naga mouse (mostly when playing a game), I'll unplug it. That happens less than once a month. The same injury that causes me to suffer when using a rubber dome keyboard causes significantly more suffering when using a mouse; it's the reason that I used a Logitech Wireless Trackman Marble for over 10 years before changing my primary desktop to OS X.
It's worth noting that the USB ports on the daskeyboard are fairly low current. There's no chance of using one to charge an iPhone or iPad in a reasonable amount of time. But it should work fine to power the Mobee Power Bar since I'll leave it plugged in all of the time; it'll essentially just be trickle charging the Mobee Power Bar round-the-clock. Even if it doesn't, it'll be nice to have my USB to Bluetooth micro-adapter literally inches from my Magic Trackpad; it completely eliminates any potential problems with reception.
I know the daskeyboard surface attracts dust and fingerprints like a magnet. I have a cover designed in CAD to address that issue, which will be cut from anodized aluminum. I may or may not powdercoat it with a textured powder.
I've gone back to using the LaCie PXHCD driver (from the LaCie web site, not from MultiBeast 4.6.x) and the Lnx2Mac ethernet driver. I do sometimes have to unplug/replug my Razer Naga mouse after sleep/wake, but the OS doesn't crash and I don't lose my network.
I bought Carbon Copy Cloner. It works well for what I need, and I intend to keep using it.
I ran the first backup of my OS drive. It took about 37 minutes, 30 minutes of which was data transfer. So in the worst case (SSD completely full), I expect a full backup to take approximately 67 minutes (right now my SSD is only half full). That's significantly faster than a full backup to my low-power machine in the basement.
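The worst-case figure is just fixed overhead plus doubled transfer time:

```shell
total=37; transfer=30                # minutes, measured at half full
overhead=$((total - transfer))       # scanning, setup, etc.
worst=$((overhead + transfer * 2))   # a full SSD means twice the data
echo "worst case: ${worst} minutes"
```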
I installed the Chimera bootloader on the backup drive, then booted from it by selecting it in my BIOS. It booted fine. It was slow as molasses compared to my SSD, but that was expected. What matters is that it works.
As a result, I repartitioned my 500G drive that formerly contained my Lion installation, and will now use it as a data drive. I started transferring my music collection from my old FreeBSD desktop, and when it's done I'll pull the two 1TB drives out of that machine to use for backups in the hackintosh. I moved my iTunes Media folder to the 500G drive.
sudo perl -pi -e 's|\x75\x30\x89\xd8|\xeb\x30\x89\xd8|' /System/Library/Extensions/AppleRTC.kext/Contents/MacOS/AppleRTC
This basically just causes the code to jump over the checksum updates.
I also bumped my turbo clocks to 42 on all cores. I rarely run my CPU hard, but when I do I want it to run at 4.2GHz. 4.2GHz is mild and safe, given that I have the Corsair H100. My Geekbench score is back to 15,144.
I updated my VMWare Fusion to 4.1.3 in order to run it on Mountain Lion.
One issue I haven't chased down and fixed... it appears that I don't really have power management of the GPU. HWMonitor is reporting it at above 50C all of the time. Duh, that's basically what I expect when using MacPro 3,1 as my system definition. I'm going to stick with it for now, since my sleep/wake is working better.
I also picked up the Samsung 830 256G SSD and another 8G USB stick for UniBeast 1.5.1. I installed UniBeast, and proceeded to install Mountain Lion on the Samsung 830 256G SSD.
I had a few glitches along the way, mostly with my DSDT not working initially. I think this was due to Safari not saving it properly (tonymac should ditch the stupid menu-only selection on the website). Obviously I had to install my SSDT to get SpeedStep to work correctly. I also had to install the AppleRTC Patch for CMOS Reset, else my CMOS would be reset at every reboot. And unlike Lion, I decided to use toleda's new patched AppleHDA and ALC889 for audio, which works just dandy.
One very cool thing about Mountain Lion... my GTX570 video card just works, without trying to install the OpenCL enabler from MultiBeast. It also appears to be glitch-free after sleep and wake. I hope this remains true, since I continued to have problems with wake from sleep under Lion.
I've decided on my additional case lighting. I want the NFLS-SS-A300-1/2M-S (amber) and NFLS-SS-WW300-1/2M-S (warm white) from superbrightleds.com. The white strip will go on the top of the case and the amber will go in the bottom. I wouldn't mind having white in both places, but I don't think I want the brightness of the white hitting my eyes when I walk by my hackintosh.
My APC Back-UPS XS 1300 continues to show 0 minutes of runtime, and I can't seem to find the cable for it. Not that it really matters; PowerChute isn't available for anything but very old versions of OS X, and from what I've read the APC units don't really work with OS X. The unit functions, in terms of the inverter coming on when I pull the plug from the wall, but I have no way of having my hackintosh shut down when runtime gets low. I've decided I'm going to buy a new CyberPower OR1500PFCRT2U from Amazon. It's a 2U rackmount, line-interactive with pure sine output, for a reasonable price ($410). Importantly, it has an HID-compliant USB port that works with OS X.
I bought new batteries for my APC Back-UPS XS 1300. Normally I only use this to power my monitors, but right now I'm also using it to power the hackintosh. Despite the fact that its output is not a sine wave, it's better than taking the power bumps that seem to be frequent this month. I need to buy new batteries for several more of my UPS systems (all rackmounted systems). At the moment the important one is the Best Fortress 1425 that I normally use for my desktop. I need 4 batteries for it, the cost will be around $160.
My iPhone wasn't being seen by iTunes. Per suggestion in the tonymacx86 forum, I tried the IOUSBFamily Rollback, but it made no difference. I am still loading the NEC USB3 driver. What does seem to work... using one of the powered USB 3.0 ports on the back.
I put the fourth coat of polyurethane on the rolling platform. This is the final coat. I wish I had something else that needed a finish... I had to destroy the lid on the can of polyurethane to get it open because it was glued shut, but it's still at least 2/3 full. At any rate... the platform is nothing special but looks nicer than I expected and is definitely highly functional. I almost wish the slots would be visible, since they're the most interesting looking feature.
I finished sanding the rolling platform and put the first coat of polyurethane on it.
Just for kicks, I tried a system definition of MacPro 5,1 to see if it made any difference with my sleep/wake issues. I copied my original smbios.plist to ~/Hackintosh/smbios.plist.3,1 first. Good thing I did; the 5,1 smbios.plist caused a kernel panic. So instead I went back to 3,1 and no overclocking in my BIOS. For the moment, it seems to have resolved my sleep/wake issues. I doubt it will stay that way; I've done this before and eventually I had wake issues.
I put a second coat of polyurethane on the rolling platform. Blondewood plywood and poplar absorb finishes like crazy, especially if they've been sitting in your home for years. It'll need a third coat and possibly a fourth.
I assembled the wood part of a rolling platform using some scrap 3/4" thick blondewood plywood I had left over from making my bed years ago. I trimmed the edges with 1.5" x .75" poplar. Nothing special here, I tacked it together with my finish nailer and wood glue. I then plunge-cut 23 slots in it with my miter saw, spaced 1/2" apart, which is enough to allow some air to be pulled in from underneath the platform by the bottom intake fan and the power supply fan. Of course, the case has feet too, so it can draw air in even if I had not cut the slots. I rough sanded the top and sides, then installed the casters. I brought it into my office for a test fitment, it's near perfect. I'll finish sanding it tomorrow and start putting polyurethane on it. It'll be nice to have a reasonably good setup for cooling and maintenance.
I'm still debating what to do about the filtering of the radiator fans. It's really just obstruction since I'm using them as exhaust. If I change them to intake, it really messes up the whole air flow in the case. For now I'm leaving it alone, since my CPU temperatures are fine. I think my only reasonable option is to cut out the filter above the radiators, but I'm reluctant to do it because the filter material isn't removable from its frame.
Yesterday I bought some cheap casters to make my rolling platform. I need to get moving on a design for it. The easy thing to do is use plywood I already have, but I'm leaning toward using perforated stainless steel.
I still have not completely resolved my sleep/wake issues. They may never be resolved for this motherboard. Probably serves me right for choosing a Gigabyte board and GTX570 video card. On the upside, it's really not unreasonable for me to just shutdown and restart. That will be even more true when I install an SSD. Since I power down my monitors at night, I don't need to be concerned about the power consumption there. And of course I never put my old FreeBSD workstation to sleep because it didn't have sleep functionality (server type motherboard).
I replaced the Corsair H100 fans with the Noctua NF-F12 PWM fans. My core temperatures look fine, and the machine is very quiet.
The Noctua NF-F12 PWM fans will replace the Corsair fans on the Corsair H100 radiator. The Noctua NF-F12 PWM fans are almost as effective as the Corsair fans, but much quieter and have a 150,000 hour MTBF.
The Xigmatek XLF-F1453 fans will replace the Fractal Design front intake fans. I don't really trust the Fractal Design fans to last very long, and I'm certain that the Xigmatek are quiet since I already have some in the case.
After much fiddling and head-scratching, I went back to using the analog audio output of my motherboard. I eventually found a post in a forum on how to reset the SRC2496: hold the EMPHASIS and COPY buttons while powering up. I'll be darned, that appears to have fixed it. However, I don't trust it long-term. I believe it's over 10 years old; I think I bought it in 1998. The DAC in it is really nothing special, and I know the analog stages are low-grade. The main reason I bought it way back when was for digital conversion, and to avoid induced noise on the cable from my PC by moving the DAC close to my pre-amp and amplifier. At the time I was using an m-audio Revolution 7.1 card (which I wish I could still use, it was a nice card for the price). In my current home office, I don't really have noise sources to worry about, so I could just use the analog output of my hackintosh.
On an entirely different subject... I am stunned at how well everything has been working on my hackintosh. Doing roughly the same things I was doing on my FreeBSD desktop, I see SpeedStep keeping my CPU cores at 1.6GHz most of the time. That's with iTunes playing music, my usual plethora of browser and terminal windows open, etc. I see a good amount of RAM free most of the time. And obviously since my CPUs are running at low speed most of the time, my core temperatures are very low. The GUI is very responsive and I'm running HyperDock with window previews enabled and it's smooth as butter. I really couldn't be happier for what I spent. And now that I have TimeMachine working to one of my FreeBSD servers, I don't have to feel like it's all a temporary experiment.
I am intending to build a rolling platform to hold my tower. It will have some venting to allow free movement of air into the power supply and bottom intake fan. Right now I have the case propped off of the floor because its feet sink into the carpet, rendering those intake fans useless.
I created the bootable OS X Lion USB stick using one of my old 8 gigabyte sticks. I need to pull one of my monitors from my current desktop, plug in a keyboard and see if the new system will actually boot.
I bought a single-link DVI cable from Best Buy (otherwise known as your local rip-off store), and a Rocketfish USB to Bluetooth adapter. Not sure the Rocketfish will work, but it's worth a try since it's the only thing I could find locally at 6:30 PM on a Sunday. Time to hook things up and see where I get...
Well, first hitch... UniBeast is not smart enough to emit an error when the target doesn't have a Master Boot Record. It reports success, but you have a USB stick that can't be booted. I repartitioned my 8G USB stick, and reran UniBeast. UniBeast took about half an hour; you can watch the progress by running df once in a while. I then ejected the USB stick from my MacBook Pro, and put it in one of the USB ports I have on the front of my case. Success! However, I had to fight a bit with keyboards and mice. I don't have a USB mouse that works with the bootloader, so I wound up using an old Unicomp USB keyboard with built-in pointing stick and mouse buttons.
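If you want to keep an eye on UniBeast's progress the same way, a tiny df wrapper is enough. /Volumes/USB is where my stick mounts; substitute your own mount point.

```shell
# Print the percentage used on a mounted volume. The -P flag keeps df
# from wrapping long device names onto a second line, so tail -1 is safe.
usage() {
  df -hP "$1" | tail -1 | awk '{print $5 " used on " $6}'
}

# While UniBeast is writing, poll once a minute, e.g.:
#   while sleep 60; do usage /Volumes/USB; done
usage /
```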
The first thing to do once it's possible: open Utilities->Disk Utility and format the new hard drive (use Mac OS Extended (Journaled)).
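The same format can be done from Terminal with diskutil if you prefer; "disk1" below is just whatever identifier `diskutil list` shows for the new drive, and the volume name is up to you.

```shell
# Identify the new drive first:
diskutil list
# Then erase it as Mac OS Extended (Journaled) with a GUID partition map.
# CAUTION: this wipes the whole disk; double-check the identifier.
diskutil eraseDisk JHFS+ "Macintosh HD" GPT disk1
```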
A long time and some fumbling around later, I have a basic setup working on one monitor (I haven't loaded the real 570 drivers yet, nor the CUDA drivers). Sleep works, which is very nice. My Apple Magic Trackpad works with the Rocketfish USB Bluetooth adapter, though it takes a little bit to come to life after sleep. Dual monitors are now working fine, and my Cinebench score shows 43 frames/sec for OpenGL.
Next up is to see if I can get digital audio (I need to buy a new, long optical cable). For now, I moved the output of my digital equalizer to the output of my Rane BB 44X and plugged the analog output of my motherboard to the input of the Rane BB 44X and that works fine. I can button things up and put the Samsung 830 SSD on my list for a new install. Awesome!
Next month I'll order a keyboard and the Samsung SSD. In the meantime I need to document all of my settings from MultiBeast, including the latest SSDT tweak that was needed to make my CPU cores run at the correct speed. My current Geekbench score is 14,132.
I installed the Seasonic 660W power supply in the Fractal Design Arc Midi case. I also installed the Gigabyte GA-Z68X-UD3H-B3 motherboard and the Xigmatek XLF-F1453 case fans, one in the bottom of the case and one replacing the original Fractal Design fan in the rear. The Fractal Design case fan that was in the rear was moved to the front of the case so that I have two intake fans in the front. I did this because I'm pretty sure I'll eventually be replacing the Fractal Design fans; from what I've read, they don't last a long time.
I installed the i7 2700K CPU on the motherboard.
I installed the Corsair fans on the Corsair H100 radiator. I suspect I will eventually replace these fans with something else, but it depends on how well they hold up. I know they're going to be loud. But in the end I'll probably replace the whole H100 setup anyway; I'd like a thicker radiator and I'm leery of leaks.
last pid:  1473;  load averages:  0.00,  0.16,  0.15    up 0+00:07:59  20:58:05
55 processes:  1 running, 53 sleeping, 1 stopped
CPU 0:  0.4% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.6% idle
CPU 1:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 2:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 3:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 69M Active, 31M Inact, 97M Wired, 252K Cache, 31M Buf, 3723M Free
Swap: 4096M Total, 4096M Free
I'm starting to dislike Micro Center. The failed SO-DIMM's packaging appeared to have been opened before, presumably because they put returns right back on the shelf. The customer service in the store is horrible: very long waits, combative staff, and a waiting area that's way too small. In addition, on weekends the parking lot is completely full, with 10 to 15 cars circling around in search of a parking spot. I had to park illegally in the apartment complex next door. Waste my time, and you lose me as a customer. Especially since your price on this SO-DIMM was $15 more than Newegg's.
I installed the second SO-DIMM and put www into the rack.
I also made some tweaks to dwmsitemenu to make it consider web page files in the same directory as sibling candidates.
I'm re-installing some ports, since wordpress caused an older version of mysql-server to be installed. I need to re-install gallery3 and wordpress after mysql-server-5.5.9 is installed, and also install mysql++-mysql55-3.0.8.
Obviously all dependencies were also built and installed automatically.
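For the record, a from-ports reinstall of that stack looks roughly like this. The port origins below are from memory, so verify them (e.g. with `make search name=mysql55` under /usr/ports) before running anything.

```shell
# Sketch, assuming these port origins. Build the right MySQL server
# first, then the ports that depend on it.
cd /usr/ports/databases/mysql55-server && make install clean
cd /usr/ports/databases/mysql++        && make install clean
cd /usr/ports/www/wordpress            && make install clean
```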
I then updated the ports tree and the FreeBSD source using csup. Then the usual:
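"The usual" on my FreeBSD boxes is the standard world-and-kernel rebuild after a source update; roughly this, with GENERIC standing in for whatever kernel config the machine actually uses:

```shell
cd /usr/src
make buildworld
make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
shutdown -r now
# After rebooting (ideally into single-user mode):
cd /usr/src
make installworld
mergemaster        # merge updated config files into /etc
```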
In the meantime, I cut the hole for the Noctua NF-R8 80mm fan to cool the hard drives. I installed that fan, plus the second Noctua NF-R8. I replaced the original case fan with the Noctua NF-B9. I installed the Supermicro MBD-X7SPA-H-O motherboard with the first Patriot PSD22G6672S 2G SO-DIMM, and installed the Lite-On iHAS324-98B SATA DVD writer. I also installed the Western Digital WD1500HLFS 150GB VelociRaptor 10,000 rpm SATA hard drive, and the Western Digital WD1001FALS 1TB Caviar 7,200 rpm SATA hard drive.
I powered up the machine and started loading FreeBSD 8.2.
One scare I had during installation: the header for the front panel IDC cable is not keyed. I assumed the wrong orientation, which put the motherboard in a state where it would not power up or send a power-up signal to the power supply, even after the front panel IDC cable was oriented correctly. The only solution I found to this problem was to use the "Force Power" jumper on the motherboard. This powered everything up fine, and it now powers up fine with the jumper removed.
The good experience with ria is what led me to buy the same hardware for www (axle's replacement).
Software-wise, I updated to FreeBSD 8.2.
I cut the hole for the 80mm Noctua fan and installed it, and replaced the original 92mm case fan with the 92mm Noctua. The CPU temperature under full load (gmake -j6 on libDwm) on both cores crept up to 40C, which is OK. When going back to idle, core2 went to ambient and core1 was 5C over ambient. I'm not worried about the processor, and now I'm not worried about the hard drives. It's worth noting that this machine in its target use will use very little CPU.
I then dropped ria in place, shut down the old gateway, and updated DNS. I tested things via my iPhone (using the new IP address), and all looks OK.
I started level 0 backups from axle to ria's new hard drives. This gives me a backup in case something goes awry with reallocation of the old gateway as my new desktop.
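For these I use plain dump(8) piped over ssh; here's the shape of one filesystem's backup (the host and destination path are illustrative, not my exact layout).

```shell
# Level 0 dump of /usr from axle, written to one of ria's new drives.
# -L snapshots the live filesystem, -a writes until end of media,
# -u records the dump in /etc/dumpdates, -f - writes to stdout.
dump -0Lauf - /usr | ssh ria 'cat > /backups/axle/usr.dump'
```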
Before I start modding the case, I need to look at the BIOS settings for fan speeds. I have a bad feeling that the default setting has the fans running at 50% duty cycle.
I updated the ports tree, but only because the gcc 4.6 port had a showstopper C++ bug with std::pair, a result of work being done toward C++0x compliance with constexpr. This was/is bug 46701.
I'm now building gcc 4.6. Temperatures during stage3:
% sysctl -a | grep temper
dev.cpu.0.temperature: 37.0C
dev.cpu.1.temperature: 37.0C
dev.cpu.2.temperature: 34.0C
dev.cpu.3.temperature: 34.0C
I think I've finished configuring pf for firewalling. This is a nice change from using ipfw and natd, though I continue to hate pf's configuration syntax.
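For comparison, this is the general shape of a minimal pf.conf for a gateway like this one. The interface names and network are placeholders, not my actual config.

```
# /etc/pf.conf -- minimal NAT gateway sketch
ext_if = "em0"
int_if = "em1"
lan    = "192.168.1.0/24"

scrub in all

# NAT outbound LAN traffic to the external address.
nat on $ext_if from $lan to any -> ($ext_if)

block in log all
pass out quick keep state
pass in on $int_if from $lan to any keep state
pass in on $ext_if proto tcp to ($ext_if) port { 22, 80 } keep state
```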
I also configured named, and now I'm acting as slave for the roots to reduce some of my outbound DNS traffic.
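Slaving the roots just means a zone "." of type slave pointing at root servers that permit AXFR. In named.conf it looks something like this; the master addresses shown are the ones that historically allowed transfers, so check before relying on them.

```
zone "." {
    type slave;
    file "slave/root.slave";
    masters {
        192.5.5.241;     // f.root-servers.net
        192.228.79.201;  // b.root-servers.net
    };
};
```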
I like the SuperMicro case for the price. The front is very well ventilated for airflow. There is a LOT of room when using a MiniITX board. The power supply kinda looks sub-par: one fan, no vents on the bottom, not much venting in the rear. However, given that this should be a fairly low power machine, it should be fine. The case is much larger than I need, but that's a non-issue. One drawback: the right side panel doesn't come off, or at least not easily (I haven't looked at removing it yet). That means my 5.25" to 3.5" hard drive bay adapter is probably useless since I can't screw it to the bay on the right side (only on the left). For now I've put all 4 drives in the drive cage. Realistically, I don't really need 5 hard drives; 3 of these are just going to be used for backups of other machines. I expect to need to replace the Seagate in the not-too-distant future; this drive has had negative reviews.
I installed everything I have, and started the FreeBSD 8.2-BETA installation on the WD VelociRaptor drive. I created a 4G swap partition, though it probably should've been bigger. I only created a 2G /var, since I don't generally need crash dumps.