New Server Serving Web Pages
Recently I acquired a giant server that I plan to provision as a replacement for my hand-me-down workstation-turned-server. This is the first test of serving pages through that server, and so far it's going pretty smoothly.
The current web server is a few-years-old 1.8GHz Intel Core 2 Duo with 8GB RAM and a few TB across a few hard drives. The new web server is a few-years-old Sun (pre-Oracle...so that old) SPARC Enterprise T5220, with an 8-core 1.2GHz UltraSPARC T2, 64GB of RAM, and a pair of 146GB hard drives (but 6 unused SAS bays).
The idea was to spread out the work done on the current server, which includes a dozen web sites, some with backing app servers, a few database servers, a mail server with anti-spam filtering, and the other system services like file, log, and time. The server also runs web log analyzers and other analytics and diagnostics.
Getting this first service running was not my first attempt at utilizing the new server.
First, the new server, still being an old server (technically past end-of-life at Oracle, superseded by a version two generations newer, the T4, or its also-sold successor, the T5), had some issues with support and firmware. I managed to work them out, got a firmware upgrade, and installed the free Oracle Solaris 11.2 on it. Check off the first task: machine runs and installs.
I then spent some time playing with zones and re-familiarizing myself with Solaris. I occasionally check out Solaris or its OpenIndiana counterpart in VMs, but not often with any seriousness. I found the packages I figured I'd need and threw the "it works" Apache page out there. On my private net. Easy.
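For reference, the path from a fresh Solaris 11.2 install to that "it works" page is short. A minimal sketch, with "testzone" as my own placeholder name:

    # create, install, and boot a zone (run from the global zone)
    zonecfg -z testzone create
    zoneadm -z testzone install
    zoneadm -z testzone boot

    # log in and set up Apache inside the zone
    zlogin testzone
    pkg install web/server/apache-22
    svcadm enable svc:/network/http:apache22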
Then I thought to give OpenStack a spin. One reason for getting such a big box was to virtualize everything, and OpenStack on Solaris still leverages Solaris Zones, so it seemed like a management win. Getting OpenStack running from the ground up is hard. I tried adding the packages to my installed Solaris, and even to a re-installed Solaris; ultimately I used the bundled install from Oracle. The documentation, tutorials, and bundle miss a few things, like initial network configuration, with most of the documentation (as with too much documentation, including, often, documentation I write) skipping over the tough parts. After much difficulty getting the OpenStack virtual networking working on one NIC, I discovered a terse footnote implying that running it across more than one NIC wasn't supported. That won't fly, so I begrudgingly reinstalled raw Solaris.
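For the record, "adding the packages" by hand amounted to one group package; it was the configuration after that, the networking especially, that went sideways. (I'm going from memory on the package name, so treat it as approximate.)

    # Solaris 11.2 bundles OpenStack as an IPS group package
    pkg install cloud/openstack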
This time I tried to minimize my interaction with the global zone, intent on getting everything working in zones, each as streamlined and specialized as I can make it. The idea, described simplistically, is a "network in a box": zones on this big box acting as separate systems. There are some big differences between the way I can configure this and the way the current system is configured.
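A rough sketch of that layout, one zone per service; the zone names and the /tank pool are my own placeholders (the pool itself comes up in the storage discussion below):

    # each zone gets its own root dataset and, via the default
    # template's anet resource, its own VNIC
    for z in web mail db; do
        zonecfg -z $z "create; set zonepath=/tank/zones/$z"
        zoneadm -z $z install
        zoneadm -z $z boot
    done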
Storage is the biggest problem with the new server. Essentially I have one 146GB drive to spare to host the zones and the zones' data.
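So the plan for that one drive looks something like this; the device name and quota sizes are guesses:

    # one pool on the spare disk, one dataset per zone, quotas to
    # keep any single zone from eating the whole 146GB
    zpool create tank c0t1d0
    zfs create tank/zones
    zfs create -o quota=20g tank/zones/web
    zfs create -o quota=10g tank/zones/mail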
I do have six unused drive bays, plus I could recover the one spare bay and replace its drive with a larger one. The system uses SAS drives, though, which aren't as prevalent as the SATA drives used by the other systems. They also don't seem to come as large, and, since the system's a little old, not as fast either.
In general, I believe, these systems were used for processing, not storage, with some form of NAS or SAN providing the storage. That's what I'll try to do with the old server: convert it to a storage device. Well, the old server or another server (possibly one of the few unplugged systems lying around) could provide the network storage. A difficulty there is that it seems best if the storage server serves ZFS.
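If the storage box does end up serving ZFS, the sharing side is pleasantly small. A sketch in Solaris syntax, with placeholder names; a Linux or BSD machine running OpenZFS would look much the same:

    # on the storage server: create a dataset and export it over NFS
    zfs create tank/zonedata
    zfs set sharenfs=on tank/zonedata

    # on the T5220: mount it
    mount -F nfs oldserver:/tank/zonedata /mnt/zonedata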
Probably the easiest piece to move is the mail server. I run my own mail server, often to my own chagrin, more as a tool to maintain and develop skills than because I think I can do it better. There are a dozen or so domains that filter through my server. Most filter into one or a few e-mail addresses on the server, and the rest all forward to other mail servers; for example, anything sent to this domain is filtered by my server to remove UBE and then forwarded to my personal GMail account.
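I haven't named the MTA, so take this as a sketch assuming Postfix, with placeholder domains and accounts; the filter-then-forward behavior is a virtual alias map:

    # /etc/postfix/virtual: deliver some addresses locally,
    # catch-all-forward an entire domain elsewhere
    info@example.com        localuser
    @example.org            someone@gmail.com

    # compile the map and point Postfix at it
    postmap /etc/postfix/virtual
    postconf -e 'virtual_alias_maps = hash:/etc/postfix/virtual'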
Humorously, most of what the mail server does is reject connections or messages. It uses SpamAssassin as well as anti-spam lists like SpamCop to reject them. Further, I've got an enormous ban list covering countries known for spamming, from which I'm pretty sure I'm not expecting legitimate e-mail. Also, I use fail2ban to provide realtime updates to my filter for hosts that send too much mail flagged by one of the other mechanisms but that aren't already permanently banned.
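Again assuming Postfix, the rejection layers stack up in main.cf, with fail2ban watching the log behind them. The blocklist zones are real; the thresholds are just the sort of values I'd start from:

    # main.cf fragment: reject known-bad clients outright
    smtpd_recipient_restrictions =
        permit_mynetworks,
        reject_unauth_destination,
        reject_rbl_client bl.spamcop.net,
        reject_rbl_client zen.spamhaus.org

    # /etc/fail2ban/jail.local: ban hosts that keep trying anyway
    [postfix]
    enabled  = true
    findtime = 3600
    maxretry = 5
    bantime  = 86400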
The mail server handles all of the alerts the system generates. It also accepts and forwards any mail generated by apps, such as when comments are added to this blog. It allows any other system on the network to forward mail to the Internet. And, with a little security, it allows other servers to relay mail through it, such as when I reply from my GMail account as a mail user of this domain.
There are at least two difficulties with separating other apps and services from the mail server. The first is identifying the server, which is usually "localhost", and, with that, securing the connection; allowing all LAN addresses to relay should cover that (but it needs to be verified). The second is storage: if I move the mail server to the new box, it will demand some storage.
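The first difficulty amounts to a couple of lines, again in hypothetical Postfix terms; "mail.lan" and the subnet are placeholders:

    # main.cf on the mail zone: let the whole LAN relay
    mynetworks = 127.0.0.0/8 192.168.1.0/24

    # main.cf on each app zone: hand all outbound mail to the mail zone
    relayhost = [mail.lan]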
Mail, though, is not a main purpose of the servers. For the most part the mail server simply provides filtering and routing, and very little e-mail remains on the server for direct consumption. It's quite likely that I'll continue with a centralized mail server not on the big box, but instead on one with adequate storage attached to it.
Database service is another concern. The databases aren't large, but they're used by many servers and apps, so it's really about managing all of the connection strings. There are storage concerns with the databases as well, especially with the frequent back-ups.
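The least painful approach I can see is to stop saying "localhost" anywhere and give the database zone one stable name, so each connection string changes exactly once. The hostname and address below are placeholders:

    # an /etc/hosts entry (or better, a LAN DNS record) for the db zone
    192.168.1.20    db.lan

    # each app's connection string then changes once, e.g. from
    #   mysql://appuser@localhost/appdb
    # to
    #   mysql://appuser@db.lan/appdb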
Web serving is one of the main reasons for the new server. The serving software is quick to install and configure. Getting the web pages and apps moved is pretty simple, if detailed. With the database service already in mind, the next biggest concerns are the logging and log analysis.
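Moving a site is mostly a virtual host block per domain plus its document root. A sketch using the Solaris Apache 2.2 layout, with placeholder names:

    # httpd.conf fragment in the web zone
    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot /sites/example.com
        ErrorLog /var/apache2/2.2/logs/example.com-error_log
        CustomLog /var/apache2/2.2/logs/example.com-access_log combined
    </VirtualHost>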
With the web servers all on one machine, it's easy to combine their logs for analysis. It's easy for the analysis tools to get to the log files, and to write reports back to the web servers for quick access. With the web servers separated, that takes a bit of reconfiguring.
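One simple version of that reconfiguring: pull the logs to wherever the analyzer runs. Assuming rsync over SSH between the zones, with placeholder hostnames:

    # crontab on the analysis zone: collect each web zone's logs nightly
    0 2 * * * rsync -az web1:/var/apache2/2.2/logs/ /logs/web1/
    15 2 * * * rsync -az web2:/var/apache2/2.2/logs/ /logs/web2/

The reports could travel back the same way, or the analyzer's output directory could simply be one more share the web servers mount.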
There's more on my notepad, but that should give an idea of the tasks ahead. The good news is that the first tests work!