Laptop Recovered from Magic-button Destruction
After earlier bonking the wrong button while powering up the laptop, grimacing upon learning that doing so destroys non-Windows partitions, and working to recover from it all, the system is back.
As mentioned, hitting the button caused the laptop to mess up my hard drive's partition table, destroying the partition on which I'd had the bulk of the OS stuff, but leaving behind my /home partition, and therefore most of my actual work. I typically partition a fair amount for the OS, about double the RAM for swap, and the rest for /home; in this case 16GB for OS, 8GB for swap, and a little less than 90GB for /home fill my 120GB drive. The damage not only destroyed that first partition but also rewrote the partition table incorrectly, losing the other partitions, and squirted a little 3GB partition at the end, stomping a bit on the /home partition (hopefully on an unused part).
Thankfully, the Disk Utilities that come on the Ubuntu LiveCD could see past that and still mount the partition containing /home. Curiously, I couldn't edit the partition table to correct the damage; the error message reported that the partition table had overlapping partitions. Mounting the partition and poking around, it seemed that the data was all there--all of it that I carefully checked, anyway, including some important and recent work.
I thought I was on top of it when I plugged in a generous (for this purpose) 320GB external USB drive and started copying the files from the mounted partition. Immediately I noticed complaints that the "preserve" I'd asked cp to do couldn't happen...of course, it occurred to me: the USB drive had one big VFAT partition, and VFAT can't store Unix ownership and permissions. Undaunted, I let the copy continue.
After almost an hour, a "file too large" failure caught my eye, but I let the copy continue until it finished. Curious, I ran another cp, this time in update mode to see what had gone un-copied; only a few dozen files, so not too shabby. A number of large files had been skipped, along with files having either very long names or FAT-unfriendly characters. Of course, I realized, copying from an ext3 (or was it ext4?) partition onto FAT would run into exactly that trouble.
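The update pass works because of cp's -u flag, which copies only files missing from (or newer than) the destination. A small rehearsal with made-up paths shows the idea:

```shell
# First pass copies everything; a file added afterward stands in for
# one the real copy skipped. The -u second pass picks up only that one.
mkdir -p src dst
echo one > src/a.txt
echo two > src/b.txt
cp -rv src/. dst/           # first pass: copies a.txt and b.txt
echo three > src/c.txt      # pretend this file failed the first time
cp -ruv src/. dst/          # update pass: copies only c.txt
```

The -v output of the second run is effectively the list of stragglers, which is how the few dozen skipped files showed themselves.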
Knowing I couldn't leave those large files behind as they were the hard drive images for my virtual machines, and not really wanting to lose some of the other files, I started considering other ways to do it.
I figured I could make a tar archive file with everything in it, but really, archiving the 70GB of data would just produce a slightly-less-than 70GB file without compression. I first thought of doing this over SSH to a server that could handle it, but dismissed that idea because bandwidth would be the weak link. I could have waited until I got home and used the Gig-E, but I was impatient.
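For the record, the over-the-network idea is just tar writing to stdout with ssh carrying the stream. Here plain cat stands in for the ssh leg so the sketch runs locally, and all the names are made up:

```shell
# tar -c to stdout ("-f -"), piped to whatever stores it; over SSH the
# pipe would instead be:  | ssh user@server 'cat > backup.tar'
mkdir -p work
echo data > work/notes.txt
tar -cpf - work | cat > backup.tar
tar -tf backup.tar          # list the archive contents to confirm
```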
A quick "man tar" and I found how to easily split the archive into smaller files by specifying the "tape length." I tried this with the simple command tar --tape-length=SIZE -cMpvf ARCHIVE.tar FILES. One caveat: --tape-length counts in units of 1024 bytes, not megabytes, so a 3GB volume needs a SIZE of 3145728 (3x1024x1024). Technically it was working, but at the end of each volume it prompted me to "change the tape," which allowed me to give it a new file name for the next bit. Since I'd chosen to break the files into 3GB bits (not wanting to figure out the largest size that would stay safely under the 4GB file-size limit of FAT), I realized this would require a bit of attention (26 bits, it turned out).
After a bit of an Internet search, I found the key I needed to have tar just create the split files for me. Using something akin to the following, tar will automatically move on to the next file when the current one (specified by TARPATHANDNAME) reaches SIZE units of 1024 bytes. If more than 100 volumes are expected, make that number bigger. List whatever FILES or folders to add to the archive, as normal. Change the other parameters as desired, too, but -c is needed to create the archive, and -M is necessary to tell it to make a multi-volume archive; the -p is useful to retain permissions, and the -v is good to see what's going on while it churns.
tar --tape-length=SIZE -cMpv --file=TARPATHANDNAME-{1..100}.tar FILES
Note the key is the use of --file= instead of -f. Because bash brace expansion duplicates the whole word around the braces, the file name pattern expands into a separate option for each volume: --file=TARPATHANDNAME-1.tar, --file=TARPATHANDNAME-2.tar, and so on. tar then uses those names in order as each volume fills, until it reached 26 in my case, or about 75GB of files. (With -f, only the first expanded name would be bound to the option.)
I used the reverse but similar command (-x instead of -c) to extract the archive after I completed the install. The bits are below.
tar -xMpv --file=TARPATHANDNAME-{1..NUMBEROFARCHIVES}.tar
Like creating the archive, the -M is necessary to tell tar it's reading a multi-volume archive, and the -x is necessary to extract (though -t could instead be used to just list the files); the -v is again helpful to see that it's working while it churns, and the -p will again try to preserve the timestamps and permissions. Unlike creating the archive, the number of volumes is known by now, so the actual count can be used in place of NUMBEROFARCHIVES.
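The whole round trip can be rehearsed at small scale. Here the tape length is 1024 (that is, 1MiB, since the unit is 1024 bytes), so about 2MiB of data spans three volumes; the --file options are written out longhand for portability, but under bash --file=vol-{1..4}.tar expands to exactly this list:

```shell
# Create ~2MiB of data, split it into 1MiB volumes, then restore it.
mkdir -p data
dd if=/dev/zero of=data/big.bin bs=1024 count=2048 2>/dev/null
tar --tape-length=1024 -cM \
    --file=vol-1.tar --file=vol-2.tar --file=vol-3.tar --file=vol-4.tar data
mkdir -p restore
tar -xM -C restore \
    --file=vol-1.tar --file=vol-2.tar --file=vol-3.tar --file=vol-4.tar
cmp data/big.bin restore/data/big.bin && echo "round trip OK"
```

Unused trailing volume names (vol-4.tar here) are simply never created, so overshooting the count is harmless; undershooting brings back the "change the tape" prompt.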
It took another two hours to finally create the 26 archive files. I then kicked off the install process from the LiveCD. I tried one more time to edit the partition table, with no success. I then repartitioned the whole thing using my normal model, let the installer run, and rebooted to a happily empty system. I selected a couple of the restricted drivers (video and WiFi), ran the OS updates and rebooted. I ran the tar extraction noted above, which took much less time than the create, but still approached an hour. I extracted the files into a separate directory, and then took a little time to copy the important stuff to my new home folder. I ran through and deleted some of the obviously unnecessary stuff, freeing about 20GB more space. Later, if I don't find anything else that needs copying, I'll remove the remainder of the restored stuff and the archive files.
For now, the machine is back and running as well as desired.