OpenSolaris Zone Setup
I've got OpenSolaris running on the new system. I'm setting up some zones to do some isolated work, and have found a couple of nail-biters.
Zones have been around for a while. Sometimes called "containers," they offer (at least conversationally) a kind of virtual machine that leverages the fact that the OS is already running on the system, and sort of "merges" the base file system with a private one. It does something similar in memory, isolating each zone from the base system.
Because running zones aren't entire faux machines, the way they are with true virtual machines like VMware or even Sun's VirtualBox, a system can potentially host more zones than it could full-on virtual machines.
I'm planning to use the zones for simple process isolation, putting the web server in one, the exposed mail server in another, and even running some other applications in zones that don't have public Internet addresses. As much an exercise for me as a proper functional use of the technology.
Setting up a zone is pretty straightforward. This isn't meant to be a tutorial on how to do it, nor was it really intended to be as much of an introduction as it has already become. I like to use Webmin to do the dirty work of zone creation for me: someone else has already worked out the necessary keywords and turned them into simple web forms, and besides, I don't always have shell access but can usually get to Webmin. I've done this without these hiccups on Solaris (not OpenSolaris) systems. Either they're different enough, or there have been enough changes, or I've simply forgotten enough, that I spent all weekend getting it right when it felt like it should have taken minutes.
One of the first things a zone needs is a private place for its files: at the most basic, just a folder it will treat as its root. One new thing (at least on OpenSolaris) is that zones need to live on ZFS file systems.
ZFS has been around for a while too, and is one of those things I keep meaning to get around to playing with. I guess I had filed the whole file system in a different category than it deserves: I thought of it as a multi-drive or error-correcting system, à la RAID.
I seldom have the time or resources to put together any kind of fault-tolerant system; usually it's one or two "big" hard drives in a PC-style system configured to work as a server, as is the case with the system I'm configuring now. The machine in question has one 500GB drive; I just let the OS installer use the whole thing as it saw fit. It made a 6GB swap partition (equal to the starting RAM in the machine...I'm swapping the 2x1GB sticks for 2x2GB sticks as soon as my pal hands 'em down), then created a ZFS pool out of the rest, from which it mounts / and /export, and everything else goes in those.
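For the curious, the layout the installer created is easy to inspect with the standard ZFS commands; nothing here is specific to my setup beyond the default rpool name:
zpool list
zfs list -r rpool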
This made me figure, then, that anything I put on those file systems would also be on ZFS, which strictly speaking is true. That is, if I created a folder /export/zones, then since /export is on ZFS, /export/zones would be on ZFS too. Evidently, however, what needed to happen is that the zone's root folder has to sit directly under a mounted ZFS file system, not just somewhere down inside one.
It took a few spins around the web to find out that what I hadn't done, but needed to do, was create a ZFS mount point in my system's only pool. I made the root of my zone tree into its own ZFS file system, too, which may not strictly be required; the folder may not need to be its own file system, but it does need to be a mount point.
zfs create rpool/export/zones
zfs set mountpoint=/zones rpool/export/zones
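To double-check that the new file system landed where intended, zfs will report the mount point back; this is purely a verification step, not something the zone tools require:
zfs get mountpoint rpool/export/zones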
Then I was allowed to create the zone roots in /zones without any more warnings. This let me make zone1 in /zones/zone1 and zone2 in /zones/zone2. Well, my names are different, but you get the point.
The next bit that got me, and I'm guessing it happened this way before but I've forgotten, is that after installing the zones, they wouldn't finish booting. I could attach using the zlogin command, but some of the standard services, notably ssh, weren't starting. I banged my head against this for a bit before I stumbled upon the solution.
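If you hit the same symptom, one handy check (just a suggestion, not part of the fix itself) is to ask the service manager inside the zone which services are stuck and why; zlogin can run it without a full interactive session, with zone1 standing in for whatever the zone is called:
zlogin zone1 svcs -xv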
Before I share the solution, I'll detail the problem. The root of the problem was that I was using Webmin to create the zones. That in itself isn't horrible; it skips some of the typing, though in the end it didn't save me much. After setting the parameters in Webmin, you hit the "install" button and the zone gets set up; then you hit the "boot" button and the zone starts. What I missed doing it that way was the warning I saw after I tried the "install" from the command line:
Next Steps: Boot the zone, then log into the zone console
(zlogin -C) to complete the configuration process
Whoops. I either missed this in Webmin's little window, or it never showed it to me. During the set-up there are a couple of points where it could have: one scrolls a bunch of information, the other shows a basic progress bar, and I guess I'm guilty of not paying close attention to either.
I uninstalled the zone, which cleaned its files away, removing my attempts to work around the "failure" to finish booting. I let it install fully again, then booted it and connected to the console immediately after. It happens fast, so it was already waiting at a prompt for me; some bad input and it repeated the prompt. A better way is to boot and log in with one command, so it happens at system speed instead of my typing speed (e.g., zoneadm -z zoneName boot && zlogin -C zoneName).
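For reference, the uninstall-and-retry cycle is just a couple of zoneadm calls; the -F on the uninstall skips the confirmation prompt, so use it deliberately:
zoneadm -z zoneName uninstall -F
zoneadm -z zoneName install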
I could see that the zone was waiting to run through some final configuration. This configuration is largely the same as installing the base OS: it asks for the timezone, some information about the NICs, the keyboard layout, and the root password. This is the part I've either forgotten (it's a one-liner to get into and a bunch of "next" hits to get through) or that has changed since the last time I configured a zone on straight-up Solaris 10.
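Once that walkthrough is finished the zone ends up in the running state, and zoneadm will confirm where every zone on the box stands at any point:
zoneadm list -cv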
So the step-by-step from the command line is basically this (all done as a super-user):
zfs create rpool/export/zones
zfs set mountpoint=/zones rpool/export/zones
mkdir /zones/zoneName
chmod 700 /zones/zoneName
zonecfg -z zoneName
zoneadm -z zoneName verify
zoneadm -z zoneName install
zoneadm -z zoneName boot && zlogin -C zoneName
Of course, change zoneName and the paths to match the desired configuration. Creating the mount point only needs to be done once, unless each zone is going to end up under a different mount point. If desired, the zonecfg and zoneadm bits can be done in Webmin, but don't forget to attach to the console and stay there until the initial setup is complete.
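Since the zonecfg step is interactive, here's a rough sketch of what a bare-bones session might look like; everything after the first line is typed at the zonecfg prompt, and the autoboot setting is optional (a plain create picks up the default template):
zonecfg -z zoneName
create
set zonepath=/zones/zoneName
set autoboot=true
commit
exit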
Maybe later I'll go into the details of configuring zones. The only other interesting thing I did here was give some of them static IP addresses on both the public Internet NIC and the LAN NIC (the machine has two).
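For what it's worth, the network part of that also lives in zonecfg, at least for the default shared-IP zones. The interface names and addresses below are made up purely for illustration; substitute the real NICs and networks:
zonecfg -z zoneName
add net
set physical=e1000g0
set address=203.0.113.10/24
end
add net
set physical=e1000g1
set address=192.168.1.10/24
end
commit
exit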
For now, at least I've got a reminder so I can avoid getting stung again by the "zone won't configure or boot" problem that bit me this weekend.