We have a situation where we need to keep a ZIP archive of some data files available on our Ubuntu Linux server so that our satellite offices can grab the information through slower data lines. Problem is, the underlying files change 2-3 times a day. What’s a quick, efficient way to only rebuild the ZIP archive file on our Linux system if a file’s changed, but leave it as-is if everything’s stayed the same?
I really like this sort of question because there are so many different ways to solve it. You could, for example, just brute-force rebuild the ZIP archive every few hours, but that's a pretty inelegant solution that's bound to waste a lot of computing cycles, though that might not be a big deal. The bigger deal is that it could also leave your remote offices stuck with corrupted archive files because a new build started halfway through their latest transfer, which is pretty much a worst-case scenario, I'm sure.
The cornerstone of this solution is a short shell script that uses "test" to ascertain whether the data source files have been updated (or, in the language of the script, are newer than the ZIP archive file). If they have, build the ZIP archive under a different filename, and once the archive-and-compress process is done, rename the new file to the standard archive name.
The basic logic is:

if newest source file is newer than the archive
  rebuild archive to a temp file
  mv temp file to archive
endif
Now, to make that code, we’ll want to check the “test” man page, which informs us that:
file1 -nt file2
True if file1 exists and is newer than file2.
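A quick sanity check of that operator in action (the filenames here are just examples, and the sleep is only to guarantee the timestamps differ on filesystems with one-second resolution):

```shell
touch old.txt            # created first
sleep 1
touch new.txt            # created second, so its mtime is later
if [ new.txt -nt old.txt ] ; then
  echo "new.txt is newer"
fi
```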
I have a similar situation with an archive I'm maintaining, so the first step is to figure out which files we want to test against. In my case it's 26 files, so a chain of if-then-else statements would be crazy ugly. But how to ascertain which file is newest?
The solution is so simple it’s eerie! Just use “ls”: ls -t | head -1 gives you the most recently modified (touched) file in the directory. Since I am working with XML files it makes sense to constrain this just a little bit, so I’ll use something more akin to ls -t *.xml | head -1 instead.
If I had an explicit list of files to check, it'd be easy to set a variable that contains all the names.
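For example, a sketch with hypothetical filenames (yours will differ), feeding that list straight into the same "ls -t" trick:

```shell
# Explicit list of files to monitor (these names are just examples)
filenames="customers.xml orders.xml inventory.xml"

# Deliberately unquoted so the shell splits the list into separate arguments;
# ls -t sorts newest first, head -1 keeps only the first entry
newestfile="$(ls -t $filenames | head -1)"
echo "$newestfile"
```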
So let’s put it all together and see what we get:
target="search-database"   # target filename for search db ZIP archive
newestfile="$(ls -t *.xml | head -1)"
if [ "$newestfile" -nt "$target.zip" ] ; then
  # time to rebuild the archive
  zip "$target" *.xml
fi
That's basically all you need: make sure that "newestfile" accurately picks up which of your set of source files is newest (and if you use a list of files, just substitute the list for the explicit pattern, like newestfile="$(ls -t $filenames | head -1)").
The only issue remaining in the above code is the potential problem of the archive being slowly rebuilt while a remote site is downloading it at the same time. Not good. To avoid that, build to an interim file, then rename it into place once it's complete (the rename is effectively instantaneous):

interim="search-database-new"   # temporary build name
if [ "$newestfile" -nt "$target.zip" ] ; then
  # time to rebuild the archive
  zip "$interim" *.xml
  mv "$interim.zip" "$target.zip"
fi
What's nice about this approach is that it has a very low processor footprint, so it's going to have minimal impact if you have the script run every hour or two via a cron job, which is what I do. In fact, my script is a bit more complex because I also take advantage of the "-x" flag to "zip" that lets me exclude a specific file from the build, as in zip archive * -x '*.zip' (quoting the pattern so that zip, not the shell, does the matching).
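For the cron side of it, an entry along these lines would do the trick; the directory and script name here are hypothetical, so substitute your own:

```
# Run the rebuild check at the top of every second hour; cd first so the
# *.xml glob and the zip command operate in the data directory
0 */2 * * * cd /var/www/searchdata && ./rebuild-archive.sh
```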
Cooper, I think it's an old dog, new tricks problem: I still default to "tar" even though it was originally written to stream data to mag tapes. Yeah, it's that old. 🙂
In your Unix & Unix SysAdmin books, you covered this same kind of thing, but I cannot remember if you used TAR or CPIO. Which would be better?