Without checking for conflicts or deprecated packages, I allowed the server to do the distribution upgrade it had been pestering me about, moving from Ubuntu 21.04 to 21.10. Sadly, the major miss is that PHP 7.4 isn't available in 21.10, but it's required for this blog software and some other PHP things running on the server. I was faced with a choice: downgrade the OS, or dig in and finally break the entangled server bits into proper Docker components, leveraging reverse proxies and other connections on the server to make it look like it always has.
That this is displaying again, and that I've made this new entry, is a testament to getting it done!
Honestly, rebuilding the server from a fresh previous-version install had some appeal. I just happened to get a free 240GB SSD from MicroCenter, and considered installing and configuring that as the boot drive with the OS runtime, then rebuilding the rest from the other drives. But that seemed like a lot of work. I do have the drives partitioned so that the data and configs are mostly on other drives, but that's a lot of trust in "mostly," and a big hope that reinstalling on the same drives wouldn't leave artifacts leading to other incompatibilities.
In the end, I decided that it'd probably be more trouble than it's worth, so I plugged away part-time during the day, finding bits of examples between meetings at work (learning all the way!), and then dug in full-time after everyone else turned in.
I started in a fresh folder with a blank Dockerfile, and found the ubuntu:hirsute image I wanted to base things on. I added some boilerplate to make Docker happy, then threw in the minimal apt commands to get the required bits to support Apache 2.4 and PHP 7.4. I spent too long looking at solutions for doing a git clone as part of the Docker build, and decided I'd leave that for the command line when I need to rebuild. The rest is all software config after that.
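The starting point looked something like the sketch below. This isn't my exact file — the package names and the foreground-Apache boilerplate are assumptions about a typical minimal setup — but it captures the shape of it:

```dockerfile
# A minimal sketch, not the exact Dockerfile. Ubuntu 21.04 (hirsute)
# still carries PHP 7.4 in its repos, which is the whole point.
FROM ubuntu:hirsute

# Keep apt from prompting during the image build
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y \
    apache2 \
    php7.4 \
    libapache2-mod-php7.4 \
    php7.4-mysql \
 && rm -rf /var/lib/apt/lists/*

EXPOSE 80

# Run Apache in the foreground so the container stays alive
CMD ["apachectl", "-D", "FOREGROUND"]
```

The git clone of the blog software happens outside the build, from the command line, so it's not shown here.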
This blog is running on b2evolution, which is sadly being abandoned. I guess it's been a good run, and will be time to figure out how to move everything to somewhere else, or make my own, or abandon the past and start over...but in the meantime, since I know the software works, and until the containers don't, I guess I can continue this way.
That's important, 'cause b2evolution is really easy to configure when new versions come out. There's a single config file naming the few details needed to reach the database, and I have some custom bits in my .htaccess to handle redirections for things I moved around (that the software didn't help with). So I copied that file from the broken server to my Dockerfile folder and added the COPY command to put it in the right place in the container. Similar little bits tell Apache where the documents are, and handle that I've got my blogs in the /blogs/ path instead of the root.
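In Dockerfile terms, those bits amount to a few COPY lines plus an Apache tweak. File names and paths here are assumptions for illustration:

```dockerfile
# Assumed names and locations — the real files may live elsewhere
COPY _basic_config.php /var/www/html/blogs/conf/_basic_config.php
COPY .htaccess         /var/www/html/blogs/.htaccess
COPY blog-site.conf    /etc/apache2/sites-available/000-default.conf
```

And the vhost side, roughly, to serve under /blogs/ and keep the custom .htaccess redirects working:

```apache
DocumentRoot /var/www/html
Alias /blogs/ /var/www/html/blogs/
<Directory /var/www/html/blogs>
    AllowOverride All
    Require all granted
</Directory>
```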
I thought I'd be clever and do the right thing by moving as much of that configuration as possible outside the container. It seemed simple enough to pass the credentials to the container as environment variables and have the server read them. That part got messy, because Apache and PHP don't see system environment variables quite the same way. I spun down a rabbit hole of PHP discovery, trying to find ways to get a PHP app to read system variables through Apache, and to do it in a way the container could leverage at runtime instead of build time, but it got too hard. It seemed I'd either have to modify the software to fetch the values in different ways, or add scripts to rewrite Apache files at start-up, neither of which seemed very reliable or low-effort.
For a bit I tried a file I could bake in to leverage the Apache SetEnv directive, but exclude from git, so I could have a little peace of mind. I couldn't swing the right combination of data and variable passing to get the values read by the PHP app, though.
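For anyone curious, the general shape of what I was attempting looks like this — DB_PASSWORD is an assumed variable name, and this sketch glosses over exactly why it didn't pan out for me:

```apache
# In theory, forward a variable from Apache's own environment into the
# PHP request environment (mod_env's PassEnv directive):
PassEnv DB_PASSWORD

# ...or bake a value into an Apache include file kept out of git:
# SetEnv DB_PASSWORD changeme
```

On the PHP side the app would then read it with getenv('DB_PASSWORD') or $_SERVER['DB_PASSWORD']. The catch is getting the value from `docker run -e` all the way through Apache's startup environment to the PHP process — that's the link in the chain I never got reliable.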
In the end I made a version of the container with my values baked in. I cringed especially hard at having the credentials in git, but it works, so I'll deal with changing credentials if I get around to making the external configuration work.
I had to spend a little time getting HTTPS to work, as the blog software knows the site is served over SSL and forces redirects when a request arrives without it. This was "hard" only because of the way Let's Encrypt tracks the current certificates, and because I know that every few months (at least) the server is going to need a restart to fetch the new certs when they're renewed. Bridge to cross later...fingers crossed I'll remember what the note on the whiteboard means. Maybe this post will remind me.
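The container's SSL config ends up pointing at the Let's Encrypt live directory — in my case via a bind mount of the host's /etc/letsencrypt into the container, which is why a restart picks up renewed certs. A sketch, with example.com standing in for the real domain:

```apache
# Assumed paths — Let's Encrypt keeps symlinks to the current certs
# under live/<domain>/, so these stay valid across renewals; the
# container just has to restart to re-read them.
SSLEngine on
SSLCertificateFile    /etc/letsencrypt/live/example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
```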
Then came the reverse proxy parts. Because of how the software works, I knew it behaved with a curl command, but it kept sending redirects to the proper domain instead of living at localhost for the duration. Setting up an Apache reverse proxy is really straightforward. The SSL caused problems because the domain name didn't match, but it took just a little searching to turn off the name matching. I can figure that out later, too, if needed; it's less important to me that the web server agree the container's name matches the certificate than that the connection passes the data through.
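The proxy vhost came out something like this — hostname and paths are assumptions, but SSLProxyCheckPeerName is the directive that turns off the name matching:

```apache
# Sketch of the front-end vhost; requires mod_proxy, mod_proxy_http,
# and mod_ssl (a2enmod proxy proxy_http ssl).
<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem

    SSLProxyEngine On
    # The container's cert won't match its internal hostname, so skip the check
    SSLProxyCheckPeerName off

    # Pass the original Host header through so the app stops redirecting
    # to the canonical domain
    ProxyPreserveHost On
    ProxyPass        /blogs/ https://blog-container/blogs/
    ProxyPassReverse /blogs/ https://blog-container/blogs/
</VirtualHost>
```

ProxyPreserveHost is worth calling out: without it, the backend sees the container's name as the requested host, which is one reason an app that knows its own domain keeps issuing redirects back to it.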
A few runs to find the couple of other directives I needed to fix, and voilà! The Docker container is running the blog software, connected to the old database! It's running right now on my development box, so the media and cache are messed up, but as soon as I'm done with this test post, I'm going to turn that off, reconfigure the reverse proxy, and deploy the container on the proper container server, with a real copy of the media and a place for the cache to be rebuilt.
Fingers crossed that nothing critical breaks in the meantime. There are a couple of other blogs on this server that leverage the same software, and a couple of other PHP apps that might not like losing the PHP 7 bits. I guess I'll be tech-deep this weekend.