Blog Analytics Stalled
The system generally runs. I have a number of external monitors now, like https://uptimerobot.com, that watch my various websites and alert me if the Internet fails or a service dies. So when those alerts don't go off, I tend to trust all is well.
I do peek at my log analytics and metrics occasionally, and it's usually this time of year I have that face-palm moment where I realize I've forgotten to tell my Splunk instance to look at the new log files.
I use cronolog to separate my various logs into year, month, or even day folders and files. Busy servers, like the web server, get per-day files inside separate month folders inside separate year folders, for example. For a more concrete example, the access logs from the writing of this blog post will be logged in the 2023/01 folder, in the 04.log file.
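If you're curious what that looks like, it's just a piped log in the Apache config, roughly like this (the path and log format here are illustrative, not copied from my actual config):

    CustomLog "|/usr/sbin/cronolog /var/log/httpd/%Y/%m/%d.log" combined

cronolog reads the log lines on stdin, expands the %Y/%m/%d template against the current date, and creates the year and month folders as needed.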
The files existed, but the free version of Splunk I use isn't set to watch the root of the log folders (to avoid taking in too much data at once and running out of license), so every January I need to tell it to look in the new year's folder, 2023 this year. I hadn't done it yet, but it's done now.
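Adding the new folder is just another monitor input, either through the web UI or as an inputs.conf stanza along these lines (the path, index, and sourcetype here are illustrative, not my real values):

    [monitor:///var/log/httpd/2023]
    sourcetype = access_combined
    index = web

The equivalent from the command line is something like "splunk add monitor /var/log/httpd/2023".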
Only the logs that aren't aggregated through Splunk forwarders, the way the Docker instances are, need to be added on the Splunk server itself. I have a small set of log files on servers not yet ported to Docker instances. The mail server, for example, still isn't in a container because it ties into system users. There's also a chroot-ed Apache server that hasn't been moved yet, and it still logs that way. That one runs the forwarding proxy for this site and serves some of the static elements, although most of the other things are in app engines inside Docker containers.
All is well in log land now.