Considering Some Upgrades
Each of the contemporary servers in the farm has had some recent upgrades, usually due to age-related failures. In my mind, some of them need a little more.
There are four servers in my farm. Two are too old to do anything with, and two are fairly newly rebuilt.
The ancient one is an old Sunfire T1000, with a 64-thread CPU, 64GB of RAM, and a bunch of 146GB SCSI drives shoved into a RAID with ZFS on top, making a little more than a TB of total storage. I loved this little box when it came out, and when I acquired mine about 10 years ago, there weren't any other 64-thread CPUs out there, and 64GB of RAM was cost prohibitive. That was also about when the last version of Solaris for SPARC was released, and when the Linux-on-SPARC offerings died off. There used to be a great open source collection of binaries (or instructions to build your own... I'm a fan of from-source software), but the site now requires subscriptions, and the options available to build your own aren't what they were before, largely because so many of the needed dependencies are behind one of the many paywalls. It chugs along nicely, mostly for the databases it can host (like the one this blog site uses), leveraging that fat RAM for buffers and cache. Everything I used to need to know to do on SPARC is no longer required, since Oracle killed all of that off in favor of Linux.
Related, but a bit more current and still useful, is the Sunfire X4600 M2 I found shortly after acquiring the T1000. It packs eight quad-core CPUs, a hot 128GB of RAM, and a smaller bank of the 146GB SCSI disks. It came with a Fibre Channel storage card that I thought I might use to attach bigger storage, but so far it's done fine leveraging the storage shared over NFS from the other servers. I had thought to run Solaris on this AMD-powered server too, and did for a little bit, but its firmware is a generation too old, which blocks the latest Solaris and limits the drives to the smaller disks (I do have a small stack of 300GB drives I was going to use, but instead settled for the spares I'd gotten in the bundle purchased for the T1000). This one instead runs Linux, and has been the core of my container hosting, leaning on all those cores and all that RAM. As much as I throw at it, though, it's generally chugging along idle, really only making a fuss when it first boots and all the fans and things are tested.
When I got those servers, the only other one on my network was a handed-down dual-core, a little desktop turned mail and web server that also ran some other little things. It's got just a single 500GB drive, having given up its larger storage to the last server in the farm. After a little PSU or motherboard failure, it got a boost to an OK quad-core CPU and now rocks 16GB of RAM. Plenty for the little tasks it does.
I've also handed my former desktop down to the server farm. It's what I used to use on my desk when I wasn't doing work on my Mac. It's a modern 16-core CPU with 32GB of RAM (and a couple of empty slots still), and a couple of drives with a couple of TB each. This hasn't found a full-on server use yet, except that it leverages its abundant (compared to the others) storage to act as the backup server. It slurps from the other systems using a little rsync and hard link magic to make daily backups that don't take up extra space for copies of unchanged files, then does age-based removal of old enough versions of files, keeping its usage around 50% of the available space.
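For the curious, the "magic" is mostly rsync's --link-dest option: point it at yesterday's snapshot and any unchanged file becomes a hard link to the old copy instead of a new one. Here's a minimal sketch of the idea in Python; the host names, paths, and fixed 30-day retention are placeholders (my real script prunes toward a disk-usage target rather than a hard cutoff), so treat it as an illustration rather than my actual backup job.

```python
#!/usr/bin/env python3
"""Daily hard-link backups via rsync --link-dest, with age-based pruning.

Sketch only: host names, paths, and the retention window are made up,
and it assumes root ssh keys are already set up to each source host.
"""
import shutil
import subprocess
from datetime import date, timedelta
from pathlib import Path

BACKUP_ROOT = Path("/backups")      # hypothetical local backup volume
HOSTS = ["mailhost", "bighost"]     # hypothetical rsync-over-ssh sources
KEEP_DAYS = 30                      # prune snapshots older than this


def backup_host(host: str) -> None:
    today = date.today().isoformat()
    yesterday = (date.today() - timedelta(days=1)).isoformat()
    dest = BACKUP_ROOT / host / today
    prev = BACKUP_ROOT / host / yesterday
    dest.parent.mkdir(parents=True, exist_ok=True)

    cmd = ["rsync", "-ax", "--delete"]
    if prev.is_dir():
        # Unchanged files become hard links to yesterday's copy: no extra space.
        cmd.append(f"--link-dest={prev}")
    cmd += [f"{host}:/", str(dest)]
    subprocess.run(cmd, check=True)


def prune_host(host: str) -> None:
    cutoff = date.today() - timedelta(days=KEEP_DAYS)
    for snap in sorted((BACKUP_ROOT / host).iterdir()):
        if snap.is_dir() and date.fromisoformat(snap.name) < cutoff:
            # Deleting an old snapshot only drops its links; newer copies survive.
            shutil.rmtree(snap)


if __name__ == "__main__":
    for h in HOSTS:
        backup_host(h)
        prune_host(h)
```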
I use Docker containers on each of those non-SPARC boxes, managed through the lazy interface of Portainer after using Rancher for a long time; I switched when Rancher made its big move to Kubernetes instead of plain Docker management. Portainer seems to be shifting focus to Kubernetes too, so I either need to find another simple Docker management UI, or move to Kubernetes. I keep meaning to build a little Kubernetes cluster to end this chase and also remove some of the manual "where do I want this to run" deciding I do now. That manual effort isn't much, especially compared to the stuff needed to get Kubernetes going, but I'm (the right kind of) lazy, so if I can make the machines manage that, I'll probably be better off for it, right?
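As a taste of what that bookkeeping amounts to, here's a rough sketch using the Docker SDK for Python to ask each host what it's running; the host names and SSH endpoints are made up for the example, and Portainer does the same sort of thing with a nicer face on it.

```python
#!/usr/bin/env python3
"""List which containers are running on which Docker host.

Sketch only: host names and daemon endpoints are placeholders.
Requires the `docker` Python SDK (pip install docker) and daemons
reachable over SSH (or however yours are actually exposed).
"""
import docker

# Hypothetical endpoints for the two container hosts.
HOSTS = {
    "bighost": "ssh://root@bighost",
    "desktop": "ssh://root@desktop",
}

for name, url in HOSTS.items():
    client = docker.DockerClient(base_url=url)
    for c in client.containers.list():  # running containers only
        image = c.image.tags[0] if c.image.tags else c.image.short_id
        print(f"{name:10s} {c.name:30s} {image}")
    client.close()
```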
I've got a small collection of M.2 SSDs I've been given for being a loyal shopper at my local MicroCenter. They're not huge, but they're plenty large enough to move the boot partitions off the other drives, freeing space on the other storage while probably giving a bit of a performance boost reading the seldom-changing OS files. I could add those to the non-Sun machines. I'd also like to update the old HDD in the mail/file server. There's about 300GB of mail and stuff on it, between the e-mail, log analytics, and some other could-probably-be-cleaned crufty stuff.
With a little care and attention, I could take the server farm in a few different directions.
I could re-position the databases and things running on the T1000 and pass that system on. It's a nice low-power system that is pretty responsive. Its ZFS over RAID is nice from a data-loss-prevention perspective, but it's also a lot of drives to store and monitor just in case of failure. I'm not interested in any kind of return from the system, so I'd probably offer it up to any other hobbyist who might be interested, maybe taking a fun spin through wiping the drives and installing a fresh copy of Solaris without any of my oddities or zones.
I could also eliminate the 8-CPU server. It's pretty big, although the space doesn't bother me. It seems like it'd be a big power sucker, but it's only drawing 100W according to the UPS it's plugged into, except for that big burst at boot time. Its 32 cores are spread across 8 CPUs, which evidently takes a lot of parallel work to take advantage of. Given the bursty and intermittent nature of the work the server is doing, most of it is handled by the first CPUs and threads in the chain anyway. It's got four gigabit network ports and an always-on management system, but my other servers have multiple NIC ports too, and while I do have my internal and public networks on different interfaces, I don't need four on any system (yet). It's a lot of RAM, but even with ~20 Docker containers running, and the other things that servers do, it's only reporting 8GB in use (and 6GB of that is cache)! Its CPU rarely blips over a few percent, and that's usually when I'm updating containers, so it's copying and stopping and starting things.
The quad-core mail and web server is only using 8GB of its 16GB, with 8GB as cache. It runs some of the older sites I haven't turned into containers, as well as a couple that are served from containers there. I've got a couple of "rich" sites that leverage PHP, Perl, JSP and Servlets, and a few more things, using Apache modules instead of reverse proxies. I'll get them worked out, but haven't yet. It runs its mail servers (SMTP and POP/IMAP) and SpamAssassin outside of containers, because the accounts are generally system users instead of a DB of any kind, and the interactions on one server are easier than the single-purpose model I try to adhere to in containers. It also runs my CrowdSec, Splunk, and other log analyzers, because it takes in the rsyslog feeds from the other servers and routers to feed them; these could move to the other, more robust server.
Really, the 16-core can do all the things and not break a sweat. It's the piggiest system, hogging 28GB of its 32GB of memory, but 22GB of that is cache, and it's running a GUI because it was functioning as a desktop previously. Sitting idle, as it is now, it churns at about 10% CPU all the time, making sure that desktop is ready for someone to interact with it!