Ugh! I use Portainer to help manage my Docker containers running on the big server. I haven't done all of the things I want to there, but hobbies, right?
I know everyone will wonder, "why not Kubernetes?" We use Kubernetes at work, wrapped with an admin tool. It's great. But it also requires multiple nodes and a level of configuration that's overkill for my situation.
Portainer largely allows using "raw" Docker, with a management tool deployed in a container. It basically wraps Docker configurations in web views, and executes Docker commands with information filled in on web forms. It still uses Docker to maintain the volumes, networks, and images. Really, aside from a little permission configuration on the container host, it's not invasive at all.
As nice as it is, it has some small flaws. It is a work in progress. Despite being on v2, it's still hitting the high points and hard parts, and has a lot to finish in the weeds. As an open source project, though, it's possible to help make changes in their GitHub repo.
So, where did my time suck go?
I'd noticed this before when trying to rebuild some containers through their UI. Any environment variables loaded from the container are remembered somewhere in the Portainer bowels. Whether you add them through the UI or they're provided by the image, they get put there. That's fine, except that if you refresh the image, the remembered values override what's in the container. That's great for remembering the variables you provide yourself, but it gets troublesome when the values change between image versions, because the stale remembered ones aren't replaced by what's in the new image. The only workaround is to delete the container and rebuild it entirely, or to remember to delete the stale variables when rebuilding.
I've gotten used to that. When I make changes, or just want a new image from GitHub, I remove the settings I know aren't specifically mine.
This weekend I decided to add health checks to some containers. It started because I lazily noticed that a container that talks to a MongoDB container was "running fine," but wasn't interacting with the DB. It turned out to not be "running fine" at all: a failure in some of its endpoints was quickly fixed by updating some versions of things, but even after that, the data wasn't flowing. That turned out to be a firewall rule failure (sigh), but the lesson made me want a sanity check. I created a simple health-check endpoint in the app that runs a simple DB query and returns success if it's happy. I should have done it originally, but hobby projects, right?
The endpoint worked great, so I added a HEALTHCHECK bit to my Dockerfile and rebuilt and redeployed the container. Then I saw that curl wasn't installed in the image, and that I'd made a typo in the HEALTHCHECK command, forgetting the port number, so I fixed that and redeployed. The command worked when I tried it in the console, but would never succeed in the Portainer UI. A little poking around in the configuration viewer, and there was the old command. The timing settings I had configured (pause 30s to start, check every 5m, with a 1s time-out) weren't there either; it was 30s across the board. I can deal with that, but the whole thing is a little frustrating.
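For reference, the Dockerfile ends up looking something like this. The timings are the ones from above; the base image, port, and endpoint path are assumptions (a Debian-based image, 8080, and `/healthz`):

```dockerfile
# curl isn't in most slim base images, so install it for the probe.
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# Wait 30s before the first check, probe every 5m, and fail any probe
# that takes longer than 1s. Don't forget the port number.
HEALTHCHECK --start-period=30s --interval=5m --timeout=1s \
    CMD curl -f http://localhost:8080/healthz || exit 1
```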
I deleted the instance, created a new one, and voilà, it worked!
A quick poke at the issues in the Portainer GitHub, and it seems people want this configurable element exposed. This and the environment variables should be pretty similar to fix, so I think I might tackle them both.
For the environment variables, I think the Portainer-stored data just needs to remember which are user-added, and maybe allow refreshing the others on re-creation. That seems pretty straightforward.
I'm going to try to tack a health-check section onto the create/edit form, and display the values on the view page. It should be possible to make it so that (re)creating a container allows adding or updating health-check values, including those timings. It'll be fun, as a solid blend of Go and Node.js, neither of which I'm great at. I've forked the repo, and hope to have something to PR back in a week or so.