PHP Fix Finished
Well, is software ever finished?
After days of trying, reading weird blog entries and sometimes-unhelpful documentation, I finally found the bits necessary to allow Apache to pass environment variables to a PHP app. Not only that, but I made a shared project on GitHub and published the containers on Docker Hub. I spent almost as much time trying to get the GitHub Action to work well enough to publish the containers when a release is tagged. I swear I had it done; then I made a fresh repository (to hide my 15 failed attempts) with what had worked, and it failed PUSHING TO ITS OWN CONTAINER REPOSITORY. I'm finished for now, but after a refresh, I'll try again. Maybe I don't need to publish to both the GitHub Container Registry and Docker Hub, except that I said I would. I can push the containers from my desktop, but that's less cool.
Further, I'm eating my own dog food! This blog software is running in one of those containers, as configured with the instructions in the README document!
What did I end up finding out? I'll try not to be too boring.
Of course, for security, we want to protect our passwords and some other configuration details. In the container world, this usually means moving the secrets out of the app and into the container environment. Specifically for this case, I'm running Docker containers, so entering those details via Docker keeps them out of the container image. When the container runs, Docker injects the passed values into the environment inside the container. A simple example:
docker run --name example -e KEY=value some/example
As it appears, an environment variable named "KEY" is set to the value "value" when the container runs. A different instance of the same container can have a different value for KEY.
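Without Docker at hand, the same per-process behavior can be seen in a plain shell (a minimal sketch using the placeholder KEY from above):

```shell
# Set KEY only for the child process, the way `docker run -e KEY=value`
# sets it only inside that one container.
KEY=value sh -c 'echo "first: $KEY"'    # prints "first: value"

# A second "instance" of the same command can carry a different value.
KEY=other sh -c 'echo "second: $KEY"'   # prints "second: other"
```

Each child gets its own copy of the environment, which is exactly why two containers from the same image can behave differently.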
Once the value is in the environment, it's straightforward for scripts and apps to pick it up. In a shell, you'd just say $KEY to get "value" wherever that's needed. Similarly, the shell-like variable syntax works in Apache configs, so an Apache config might have this simple example:
ServerName ${SERVER_NAME}
When that is run with a command like this example:
docker run --name example -e SERVER_NAME=example.com some/example
Then Apache will expand that to the expected "ServerName example.com" as it starts. Small caveat there: the variables aren't available to the entire system. I struggled briefly while debugging this, because the Apache service started without the variables, but the apachectl command had them. This only hurt when I was connected to the containers, editing inside, and trying a "service apache2 reload" instead of an "apachectl graceful" to test my changes (before rebuilding the containers with them). The commands are fundamentally "the same," asking the running Apache workers to read the configurations again, except that they use different environments when doing so. Under Docker, the "right" way to run Apache is with the "apachectl -D FOREGROUND" command anyway, as it keeps the container alive as long as the Apache service is running. So it just took some discipline to use the right command (or groaning when I did it wrong).
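In Dockerfile terms, that foreground pattern looks something like this (a sketch, not the repo's actual Dockerfile; the base image and packages here are illustrative assumptions):

```dockerfile
FROM debian:bookworm-slim

# Install Apache and mod_php (illustrative; the real image installs more).
RUN apt-get update && \
    apt-get install -y apache2 libapache2-mod-php && \
    rm -rf /var/lib/apt/lists/*

# Run Apache in the foreground so the container stays alive, and so the
# -e variables passed to `docker run` are visible to the Apache workers.
CMD ["apachectl", "-D", "FOREGROUND"]
```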
The real problem came with PHP scripts running in the Apache app. Running a PHP script from the command line, the variables showed up just fine, but through Apache they were lost. This matters because this blog software is written in PHP. After some hunting, a very simple solution revealed itself: the PassEnv directive. Adding this simple bit to the Apache config makes Apache pass the named values on into the environment PHP runs in.
PassEnv KEY
Note that the name of the variable is passed, not its value or expansion. Then our Docker command with -e KEY=value makes that value available for ${KEY} expansion in PHP scripts.
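On the PHP side, once PassEnv has done its part, the value can be read like any other environment variable (a sketch using the placeholder KEY from above; getenv() is one of a few ways PHP exposes the environment):

```php
<?php
// getenv() returns false when the variable wasn't passed through,
// so a fallback is wise for local runs without Docker.
$key = getenv('KEY') ?: 'default-value';
echo "KEY is: " . $key . "\n";
```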
If you check out the GitHub repo, you can see these used in the right spots in the PHP configuration script and the Apache configuration script that build the container. Together they make a simple, coordinated mechanism: different instances of the same Docker container each behave differently depending on the variables passed in.
The way it should be.
I have one small bit of work left with regard to the Alias directive. This instance of the blog doesn't run at the app root, for example, so I use the Alias directive to point /blogs/ at the software folder. In some experiments, I got my Apache config to work with -e ALIAS=/blogs/ passed in. However, in the latest attempt, the version on Docker Hub now, it doesn't seem to work, even though the code is the same as my last success.
Still, as documented in the repo, I was able to work around it by replacing the config file in a Dockerfile that extends that one, with the literal /blogs/ in place and none of the <If...> stuff around it. It really is a stretch to expect Apache to allow conditional directives based on environment values. I mean, it'd be neat. I could do some script trickery, maybe adding a check at startup so the container puts different configurations in place based on which variables are present, but that takes away from some of the "unaltered" promise.
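For reference, the replacement config in the extending Dockerfile boils down to a hard-coded directive like this (a sketch; the filesystem path is a stand-in for wherever the blog software actually lives):

```apache
# Literal mapping with no <If> wrapper: /blogs/ points at the software folder.
Alias /blogs/ /var/www/html/blog/
<Directory /var/www/html/blog/>
    Require all granted
</Directory>
```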
So that's two things. Fix the deploy action, and make Alias work with variables.
I'll fix it. It feels incomplete, and now that it's in the world (unless I delete the repo and start it fresh again...), I should make it whole before I move on.
Then the next thing will be fixing the reverse proxy I use. It's neat because it can detect when containers start and stop, and if they carry the right variables, it loads and unloads them from its nginx configuration. But it's flawed: while it can pair with someone's Let's Encrypt container, it mashes things around and doesn't do the same kind of DNS-based auth that I use. It should be the case that a container could say "this is my certificate path," and the reverse proxy could use that path to find the cert, so incoming requests can be SSL. For now I've fronted everything with a CDN that offers SSL certs (it's Cloudflare; you can find out by looking at the cert...I wasn't trying to keep it a secret), but it then uses HTTP connections to the server. That's fine, but I'd rather SSL all the things, especially since it's pretty easy now.
But that's the next thing...I think I'm back in infrastructure hobby spaces...