2008-11-15

Note on using webdav idisk for experiment data

One of the problems I had before with setting up a script to manage the experimental scenario and data on a webdav idisk was that I hadn't stumbled across how to automatically mount & unmount the filesystem. It turns out that this is very easy.

First, create a directory to use as the mountpoint. For example, assuming you are in a writable directory, use something like this:

mkdir mnt

Next, use a command of this form:

/sbin/mount_webdav -s http://idisk.mac.com/groups.labname mnt

("groups.labname" should be replaced with the actual name of your idisk)

If the login info is not in your keychain, the command will ask you for it. You might consider putting it in the keychain for convenience. Note that any user (i.e., any RA who will access the database) must have access to the idisk.

Note that it is possible for a subdirectory to be mounted directly with a command like this:

/sbin/mount_webdav -s http://idisk.mac.com/groups.labname/Databases/Thisdatabase mnt

However, in this case, a separate keychain entry will be needed to log in. It would normally be simpler to mount the root of the idisk and navigate its hierarchy programmatically.

When you are finished, a simple "umount mnt" unmounts it.

Note that this method works even if the idisk has already been mounted in the standard location or somewhere else.
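
For scripting, the whole mount/copy/unmount cycle takes only a few lines. Here is a minimal sketch, using the same placeholder idisk name as above and a made-up datafile and destination folder:

mkdir -p mnt
if /sbin/mount_webdav -s http://idisk.mac.com/groups.labname mnt ; then
    cp subj12-data.txt mnt/Documents/Databases/Thisdatabase/
    umount mnt
fi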

2008-11-13

Combining image and rsync backups

This is from a note on the Apple server list.

The procedure is to start out with an asr image of the root volume using the -erase flag. This is now a bootable volume.

Then, each night, perform an rsync on this volume, using -delete and other flags so that all changes are written to the volume. This is now an updated, but still fully bootable volume.

This is almost equivalent to doing a nightly clone. It is food for thought. Here are some related issues:

  • Unless you want to bring the system down to single user mode every night, you are still going to have to deal with open database files during the rsync.
  • Assuming that those files have been dealt with, would it be faster just to do the image dump every night?
  • Is the target system more likely to be bootable in the event of a catastrophe mid-copy if rsync is used?
  • One of the biggest advantages of using rsync is the ability to do snapshots. Clearly, the original image could be used as the starting point for nightly snapshots, but then the image itself would never be updated. Is there some way for the root hierarchy to be the most recent snapshot, with previous versions stored elsewhere? (See below)
It seems to me that what might be needed here is for there to be two rsyncs each night. The first one is to update the bootable image from the working image, as in the original hint. The second one, to be done when the first one is finished, is to make a snapshot of the bootable image.

In other words, the backup would go like this:
  1. Before any backup, go through the process of dumping all system databases.
  2. For the first (labor-intensive) backup, use asr to make a complete copy of the base system. The system should be as quiescent as possible for this, and the system database dumps should contain the current database contents. The backup volume should be considerably larger than the working volume.
  3. Dump the databases and use rsync to update the smaller backup volume. Be careful to exclude the /snapshots directory on the backup volume; this should be untouched by this run of rsync.
  4. Then use rsync again to create a new snapshot of the non-snapshot regions of the backup volume (there is no need for an additional database dump). This will be stored in /snapshots on that volume (and so /snapshots will again be excluded).
Both runs of rsync could be run at fairly low priorities.
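
As a concrete sketch of steps 3 and 4, the two passes might look something like the following. The mountpoint, the flags, and the snapshot naming are all assumptions to be adapted (rsync 3 with --link-dest is assumed for the snapshot pass):

BK=/Volumes/Backup                  # hypothetical backup volume
STAMP=$(date +%Y%m%d)

# pass 1: bring the bootable clone up to date, leaving /snapshots alone
nice rsync -axAX --delete --exclude=/snapshots/ / "$BK/"

# pass 2: snapshot the freshly updated clone, hard-linking against the previous one
PREV=$(ls "$BK/snapshots" 2>/dev/null | tail -1)
nice rsync -axAX --exclude=/snapshots/ ${PREV:+--link-dest="$BK/snapshots/$PREV"} \
    "$BK/" "$BK/snapshots/$STAMP"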

Note that if this were to be done over the network via an rsync server, the second (snapshot) rsync run could be done locally on the server. Not sure if that would be worth it, though. For now, I'm assuming an attached drive.

Also, it would be possible for /snapshots to be a different drive. For example, with two internal drives, the second drive could be the mirror of the first, with a larger external firewire drive for the snapshots. In a fairly non-intensive application like in our lab, this use of the second internal drive would probably be better than mirroring RAID, which is how it is being used now.

In any case, we would want to avoid spotlight on the backup drives, and also we would want them to be mounted read-only except when actually being written during backups.

A variation of the above would be to split drive 1 between the backup-able part and a large non-backupable part, so that less space would be needed on drive 2.

2008-09-12

OS/X single-user-mode backups

I've been trying to take advantage of the script /etc/rc.server to do full backups of the boot drive. This file is present in OS/X Server, and it can be added to non-server systems. Basically, the script is run via /bin/sh early in the boot process, at a time similar to single-user mode. Only kernel drivers are present, which means that the internal hard drive and firewire drives are available, but not (at least not yet) USB drives. The boot drive is semi-mounted, in read-only mode. Semi-mounted means that its device is listed as "root_device", not as an actual drive.

The advantage of bringing a server down on a regular schedule for backups is that there are no open files, and the entire system drive is unwritable. This maximizes the thoroughness of the backup. Furthermore, there are some cases where backup programs such as asr(1) can use more efficient techniques for read-only drives than for read-write drives.

There are huge disadvantages, though. First and foremost is that while the server is down for backups, it can't provide whatever services it is responsible for. In the case of our servers, that's at least DHCP, DNS, OD, and file and web services. However, in a situation such as in our lab, where the amount of data is relatively small (probably less than 20-25 GB), the down-time will not be excessive, probably an hour or less.

The second class of disadvantages is almost a deal-breaker. Because OS/X has chosen to implement so much of its device and file-system interface code in terms of user-mode "frameworks" rather than kernel-mode drivers, hardly any of the command-line utilities are available for use in single-user mode! For example, diskutil, the main filesystem tool, is unavailable. Disktool doesn't work any better. Hdiutil will do some things, but cannot attach images or use the -srcfolder mode. It turns out that the best tool of the ones that will actually work in single-user mode is asr.

Asr works far faster when it is in "device" mode. In order to enter device mode, the target drive must be greater than or equal to the size of the source drive, the "-erase" flag must be used, and the source drive must be mounted in read-only mode. Asr works only with hfs-format drives. There is also "copy" mode, which is also fairly fast; it is used when device mode's criteria are not met.

The biggest problem with asr is that it copies everything, including the volume label, and there is no way to avoid this. Therefore, the copy ends up as a filesystem named the same as the boot drive. This can cause confusion. One would think that the solution would be to simply change the volume name after the clone operation. However, due to the impoverished runtime environment of single-user mode, there is no tool available to do it. The pdisk(1) program has a partition-labeling option that does work in single-user mode, but it turns out that this is not the same thing as the hfs volume label. When the system comes up, the clone will be mounted under the same name as the main drive with a " 1" suffix, like "Macintosh HD 1". If there are several backup partitions and nothing is done, they will all end up with the same name, like "... 1", "... 2", etc. (See update below.)

This can cause considerable confusion. The only "solution" I've been able to come up with is to write some information into the /tmp directory regarding the backup itself, and then once the system comes back up, use diskutil to rename the volume accordingly. A good way to do this is in crontab, with the "@reboot" time indicator.

As for a general strategy, it is important to have at least 2 backup partitions (2 drives would be better), so in case something bad happens, the previous backup would be available. Also, the backup partitions should be larger than the system disk.

At present, my servers both have 250GB RAID mirrors, and I have a 300GB firewire drive for each of them. As an initial test, I will simply do a single asr backup of the main drive onto the firewire drive--this is nearly as safe as having two partitions on the firewire drive. Later, I'll get another firewire drive for each one, and swap the drives.

UPDATE

Here's a kind of strange way to get around the problem of the volume name. Instead of trying to change, ex post facto, the name of the clone, why not change the volume name of the system disk? This can be run very early in the launchd process, and all it takes is "[sudo] diskutil rename / newname". Obviously, the name will have the boot time in it, or, more difficult, a sequence number related to the backup system. I think a timestamp of the form 200901171402 would be good. As for the rest of the string, why not simply look for the current name (you can get this from the "list -plist" diskutil command)? If the last 12 characters of the current name (which will be identical to that of the most recent backup) are digits, then they will be replaced; otherwise nothing will happen. If there is a reasonably short system name, like "Lab 13", then why not name the root drive something like "Lab 13 System Disk 200901171402"? Each time the system is rebooted, the timestamp will be updated. The volume names of the clones will contain the timestamp of the previous boot. Since the root volume, unlike other mounted volumes, doesn't get mounted as /Volumes/VolName, the change of name will have no effect on paths, etc.
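
A sketch of what that early-boot rename might look like (using "diskutil info" here rather than "list -plist" for brevity; the base name and timestamp format are just the examples above):

cur=$(diskutil info / | awk -F': *' '/Volume Name/ {print $2}')
stamp=$(date +%Y%m%d%H%M)
case "$cur" in
*[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] )
    diskutil rename / "${cur%????????????}$stamp"   # replace the previous 12-digit timestamp
    ;;
* )
    : # no trailing timestamp (e.g., first run); leave the name alone
    ;;
esac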

Plan 2: A slightly more satisfactory but more risky way to do a similar thing would be to change the name of the root volume (e.g., to "Lab 13 Clone 200901171402") before restarting the system for an automated backup. The name would always be changed back to a constant (e.g., "Lab 13", or to the part before " Clone...") early in the boot process.

In either case, there would need to be three scripts:
  1. The setup-and-reboot script, which would search for the target volumes, decide which one to write to next, and if there is one, store that information in /tmp/rc.autoclone or whatever. Under plan 2, this script will change the name of the root volume just before rebooting.
  2. The clone script, which expects the information in /tmp/rc.autoclone and, if it checks out, uses it to control the clone operation.
  3. The cleanup script, to be run very early in the boot process, to change the name of the root volume back to its normal name. In plan 1, the name will contain a timestamp; in plan 2, it will be an ordinary name like "Lab 13".
The first script can be run either from the command line or from a launchd/crontab entry. The second script must be run at the end of /etc/rc.server, and it must contain safety checks to prevent writing into the wrong medium. The third script would ordinarily be run very early in the boot sequence from launchd/crontab.

As always, the most dangerous possibility is that the clone will be written into the wrong place. Since diskutil doesn't work right in single-user mode, probably the safest way to handle this is to write a check string into the destination volume, for example, a file in its root directory called rc.autoclone identical to the one in /tmp. Also, there must be a directory created in /tmp called "mnt" as a place to mount the destination volume, so that /tmp/mnt/rc.autoclone can be compared to /tmp/rc.autoclone.

So in rc.server,
  1. Check for /tmp/rc.autoclone
  2. Check that the timeout interval has not passed (e.g., 5 minutes)
  3. Check for /tmp/mnt
  4. Get info on all current drives
  5. Search for the drive indicated in /tmp/rc.autoclone
  6. Mount the drive on /tmp/mnt and compare its version of rc.autoclone
  7. Unmount the target drive
  8. Perform the clone operation
  9. Done
In the after-boot crontab (plan 2),
  1. Get the current / volume name
  2. If it needs to be changed, change it
The setup script does:
  1. Do general checking to prevent being called at the wrong time (e.g., too soon)
  2. Make sure that at least one target volume is available
  3. Look through all possible target volumes to find the one with the oldest timestamp.
  4. Compute its ID string
  5. Create an rc.autoclone file containing the ID string
  6. Write the rc.autoclone into both /tmp and the root directory of the target volume
  7. Reboot
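
Here is a very rough sketch of what the clone script's safety check might look like. The format of rc.autoclone (shell assignments defining, say, TARGETDEV and a STAMP in epoch seconds), the five-minute timeout, and the exact asr invocation are all assumptions:

[ -f /tmp/rc.autoclone ] || exit 0                  # not a maintenance boot
[ -d /tmp/mnt ] || exit 0
. /tmp/rc.autoclone                                 # defines TARGETDEV, STAMP, ...
[ $(( $(date +%s) - STAMP )) -lt 300 ] || exit 0    # give up if the reboot took too long
/sbin/mount -t hfs "$TARGETDEV" /tmp/mnt || exit 1
if ! cmp -s /tmp/rc.autoclone /tmp/mnt/rc.autoclone ; then
    echo "autoclone: target check failed; skipping clone"
    /sbin/umount /tmp/mnt ; exit 1
fi
/sbin/umount /tmp/mnt
# finally, clone the read-only root onto the target (adjust flags for your asr version)
asr restore --source / --target "$TARGETDEV" --erase --noprompt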

2008-08-08

An example experimental project

This project, which we call "picolf6", consists of six experiments, four of which use Sensonics odor labels as stimuli. In addition, the counterbalancing and randomization are done within each of the two sets of three experiments. Therefore, they were handled by scripts rather than within Superlab.

One discovery we made about Superlab in the course of setting up picolf6 was that any time that a file system path is stored within Superlab, all symbolic links within the path are expanded and the path converted to an absolute path. This complicated several aspects of the process.

There are three types of stimuli used: pictures (stored as .jpg files), words (stored as .png files) and ticket numbers (stored as .png files). If you will refer to my recent blog entry on the sl4am hierarchy, you will see the layout we used here. Under the project directory, there is a Shared directory containing all of the .sl4 Superlab scenarios, and folders for all stimuli. The general idea is that the folder for each experiment (with subjects and groups) will have a link back to the scenario for that experiment in Shared, plus a "stim" folder containing links back to specific stimulus files within Shared.

The odor labels we used were mounted on standard 1"x2" event tickets and torn off one at a time to be "scratched and sniffed" and given to the subject. So, one of the things required is for there to be an unobtrusive ticket number displayed on the screen at the beginning of each trial. In Superlab, the only way to do that is to make a graphic with the numbers in the corners and then specify that Superlab scale it to fill the screen. We used the least significant digits of the tickets for this, so experiment 1 used tickets 1-30, exp 4 31-42, exp 5 43-54, and exp 6 55-66. Since this was always the same for all subjects, we set up folders in Shared named ticket[1456] and stored the .png files with the images there, named, e.g., 23.png.

One thing we learned the hard way was that it is best to set Superlab up so that the scenario is in a file hierarchy identical to where the experiment will be run. So for example, we simply specified /Users/.../Shared/ticket1 and Superlab could use the same relative path at runtime and find the stimulus folders.

The other stimuli were more complicated, because they had to be different for each subject. Superlab always accesses stimulus list folder contents in alphabetical order, so while setting up the experiment, we set up dummy folders containing files with names like w/img34.png (for word image event #34). Later, when setting up the hierarchy, we put links with those names to image files in Shared. So if for example on trial #34, a certain subject needs to see the word "alligator", stim/w/img34.png would be a link to ../../../../../Shared/words/alligator.png (for example).

Now, Superlab doesn't print the name of the file in a stimulus list folder, only its sequence number. So in order to figure out which stimuli were presented to a given subject without going back to the setup data, we inserted a dummy text event into each trial. These events were simply numbered like "@=1-34=@" for experiment 1, trial 34. As part of the setup sequence, we created a sed script, placed into each experiment folder, that translates that dummy code into a condition code for that trial, for example "alligator:old". This allows the actual stimuli that were used to be determined.
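
A fragment of such a generated sed script might look like this (the actual stimulus/condition pairs come from the setup run, so these two lines are purely illustrative); it is applied with something like "sed -f codes.sed logfile.txt > logfile.coded.txt":

s/@=1-34=@/alligator:old/
s/@=1-35=@/cabbage:new/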

We also placed a link to a Superlab scenario file in Shared into each experiment subfolder. However, we were not able to use symbolic links for this, since Superlab then assumes that the scenario file is actually in the Shared folder and all of the subject-specific links fail to work. As a work-around, we just used hard links for it, since it is a file rather than a directory (file system rules disallow hard-linking to directories).

Here is a tar archive (tgz) of the Korn shell scripts I used to set this experiment up. Note that you also will need to install some packages to generate the graphics versions of the textual stimuli. I also didn't include the picture stimuli we used.

2008-08-03

Multiple superlab stimulus folders

Apparently, when you set up an experiment in Superlab that uses external stimulus files or folders, it saves two pointers to the files: the absolute path at the time the file or folder was specified, and the path of the file or folder relative to the scenario file when it was saved. In fact, it appears that sometimes, if you use "save as" within Superlab to save a scenario, it recomputes the relative address(es) using the new scenario location. Therefore, as long as the relative path points to folders and/or files that exist, and the absolute path does NOT exist, things will work. However, relying on this will not always work.

However, if the absolute path is valid, then those files will be used instead of the local relative path, and the wrong stimuli will be presented. What is needed is some way to relocate the scenario file, relative to a certain version of the stimulus files.

Say for example you have 25 subject folders named 01-25, and they are all "sister" folders, that is, subfolders of the same superordinate folder. You might think that if you set up the scenario and stored the scenario in subject 01's folder, you could then copy the scenario into each of the others and use their stimulus set-up. But since the absolute address of subject 01's stimuli would still exist, I strongly suspect that they would be used instead of the relative address. What is needed is some way to relocate the paths in the scenario file so that they point to the new location.

If superlab can't find the stimuli, it asks the experimenter where they are. This could cause some fairly minor problems, but the larger issue is when it does find them, in the wrong place.

It turns out that it is possible simply to overlay the absolute paths (or as many of them as you want to relativize) with a sequence of X's of the same length, using some method such as a binary editor like bbe(1). Superlab is quite content to use the relative address. Note that if you should happen to save the scenario, new absolute addresses will be written in there.

It would be better if there were some way to identify the absolute addresses automatically, but since they can be located almost anywhere, that's a bit tricky. There are some other paths in there, such as the default logfile location. Maybe the best way is simply to specify them as part of the setup, since they will be known then, and do the overlay as part of building the project.
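
As an example of doing the overlay at build time, a binary-safe one-liner with perl (instead of bbe) could look like this; the path is whatever absolute prefix was baked in during setup, shown here as a made-up example:

ABS="/Users/someuser/Devel/picolf6"      # hypothetical build-time prefix
perl -0777 -pe 'BEGIN{$a=shift; $x="X" x length($a)} s/\Q$a\E/$x/g' "$ABS" \
    < scenario.sl4 > scenario-fixed.sl4

Since the replacement is the same length as the original, the file size (and all internal offsets) stays the same.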

UPDATE

One very important lesson: take the time to do all development of superlab scenarios in a directory hierarchy that matches how it will eventually be installed. This is a little less convenient in the beginning, but pays off hugely later on.

2008-07-21

Converting text to graphic files for Superlab

Since graphics files usually exist outside of the immediate Superlab scenario folder, and are not loaded until the experiment runs (and if the appropriate option is selected, not until just before a trial runs), it is possible to use an external script to re-randomize graphics files before running a particular subject.

The way it works is, you set up Superlab to use generic names for files, for example, "trial001.png", "trial002.png", and so on. You have your real stimuli in a different folder, with names like "elephant.png" and "COW.png". Then, when you are setting up for a particular experiment, you get rid of the dummy files and put in links to the real stimuli, in the appropriate order, for example, trial001.png --> elephant.png ; trial002.png --> COW.png. Obviously, you have to keep the mapping around so that the logfile can be patched up after the run, by replacing instances of "trial001.png" with "elephant.png" and so on.
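
A sketch of that per-subject setup, assuming a hypothetical order.txt listing the real stimulus basenames in presentation order, and a stimulus folder simply called "stimfolder":

i=0
while read stim ; do
    i=$((i+1))
    n=$(printf 'trial%03d.png' $i)
    rm -f "stimfolder/$n"
    ln -s "../RealStimuli/$stim.png" "stimfolder/$n"
    print "$n $stim.png" >> mapping.txt     # needed later to patch the logfile
done < order.txt

Patching the logfile afterwards is then just a matter of substituting each trialNNN.png for its mapped name, driven by mapping.txt.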

Obviously, you can't do this with text stimuli because text stimuli are internal to Superlab. I have written a utility to patch an existing scenario file to contain a different order of stimuli, but this method is very risky and isn't flexible enough. So the correct solution is to convert the text stimuli you want to use into graphics files.

There are many ways to do this, but one of the easiest to use from a scripting standpoint is called "a2png". This is a utility that can be downloaded from sourceforge. It needs either the cairo or the gdlib graphics library; one or the other has to be installed for a2png to build. Once it has been installed, you can use .ttf font files to create .png files from text strings. By default, the image is cropped to the font's cell size, and has a black background. There are a number of options to change the background, foreground, size, font, spacing, and so on. The .png files can be used directly by Superlab on both Windows and Mac systems, or they can be converted to jpeg or some other supported graphics format.

By default, if you give a2png the name of a .txt file containing a stimulus, it will create a .png file in the same folder with the same basename. So for example, "a2png ... elephant.txt" should result in an output file "elephant.png". If you don't want all those *.txt files, a2png will also accept standard input if the file is "-", and will write to X in "--output=X". So, an alternative way to create elephant.png is "print elephant | a2png ... --output=elephant.png -".

How to specify the right font

Now, at least on my system, a2png doesn't want to find the ttf fonts. The built-in font folder list is a poor match for the fonts I have installed on my system. So, what I do is to give the whole path to the .ttf file I want to use. You can find all the appropriate fonts on your system with "locate .ttf | grep ttf$". Also, there are thousands of truetype fonts (ttf) out there on the internet. It's probably better to give the whole path anyway. I like a sans-serif font for displaying stimuli, such as free-sans, which can be readily downloaded.

You can also mix text and pictures, and randomly switch, for example, which kind is on the left or the right of the display, just by setting the link appropriately before running.

UPDATE

There appears to be some kind of glitch in a2png such that the cropping that it does removes the bottom of each character on the last (i.e., only) line. There is a workaround of suffixing a '\n', but this adds too much space.

There is a completely different approach available with the classic netpbm package. This command line:

print JjKg_Xq \
| pbmtext -font ~/Downloads/bdffont/100dpi/helvR24.bdf \
| pnmcrop | ppmchange white black black white \
| pnmtopng > foo.png

isn't too bad. An alternative is:

pbmtextps -font=Helvetica JJKG_XQ \
| pnmcrop | ppmchange white black black white \
| pnmtopng > foo.png

So, one way or another, there will be a way to do this. Frankly, a2png produces prettier output. The pbmtextps output is quite fuzzy, while the pbmtext output depends on having the bdf fonts available, and in turn, they have limitations on size. Since a2png uses the Cairo graphics library, it can use ttf fonts and scale them, etc., very prettily. Hopefully I will find a fix for a2png.

UPDATE2

It is true that a2png produces more attractive lettering, but as it turns out, there is a very real application for the netpbm package here: setting up the experiment template. The scheme I am trying to use involves setting up a single experiment with all of the trials indexing external event files, usually images or images of text. By changing the names of these external files, you can change the stimuli presented to subjects with none of the limitations imposed by superlab. So, what I've been doing is to generate dummy stimuli to be used while testing. These are graphics containing text strings that make it easy to identify the order and type of the stimuli for debugging purposes.

In one of the experiments I'm setting up now, there are 30 640x480 pictures. Here is the shell function I'm using to create jpeg dummy files for them:

jpg640x480(){
ppmmake lightblue 640 480 \
    | ppmlabel -x $((320-(5*${#1}))) -y 240 -size 10 -background lightblue -color black -text "$1" \
    | pnmtojpeg > $2 2>/dev/null
}

I chose black on lightblue so they would contrast strongly with the white-on-black text stimuli.

UPDATE3

Well, there is a fairly easy way to get images cropped correctly with a2png that will work until the program is fixed somehow: use the --no-crop option in a2png, and crop the result using netpbm. For example:

print Somejunque \
| a2png --no-crop -s --overwrite --font-size=0.1 --output=uncropped.png \
; pngtopnm < uncropped.png \
| pnmcrop \
| pnmtopng > cropped.png

This yields the best of both worlds: flexible, high-quality text rendering plus correct cropping of the result.

2008-07-18

How to set up an sl4am project

Once the basic SuperLab experiments are running, the next step is to set up the hierarchy of files and folders. This can be done with a fairly simple script, given the names of the population and condition groups and the initial number of subjects in each cell. Options include setting up an empty Shared folder and links to it in each experiment folder. Empty rc files and flag.free files are created everywhere. Another option is to clone a new population group from an existing one; another is to look for empty rc files (this would be the sign that an external setup script didn't do its job completely).

However, once the hierarchy is all set up -- and it probably is a good idea to set up only the Try population group first, and clone the other groups from it -- the next step is to populate all of the experiment folders and to put actual code into the rc files. The best way to do this is to write a custom script. This script could use find(1) and be driven by the existing factors as an organizational approach.

Update:

Just stumbled across the automator(1) command. This should be very useful for running experiments, since it is a way to invoke Automator workflows from the command line. There are options to set variables and to pass input, including standard input, to the workflow. It is less clear how to take output from the workflow; probably temporary files will be needed.

A Useful Experimental Data Hierarchy


This is a scheme for storing experimental data, primarily with SuperLab, that should be easy to set up, easy to use, and easy to automate.

At several specific points throughout the hierarchy there are rc files: rc.sl4am, rc.proj, rc.subj, and rc.exp. These can contain arbitrary ksh code and can change the operation of sl4am, but are intended to customize the operation of sl4am by changing specific parameters or by altering the setup, runtime, and/or cleanup phases of a SuperLab scenario.

The SL4AM subfolder within the SuperLab Experiments folder is the default sl4am home folder. If an rc.sl4am file is present, it can redefine the variable SL4AMHOME in order to use a different home folder, for example, in a shared location on the local machine, like "/Users/Shared/Documents/SuperLab Experiments/SL4AM", or on a remote location, such as "/Volumes/groups.ourlab/Documents/SuperLab Experiments/SL4AM". If there is a separate root for the status flags and datafiles, that can be defined in rc.sl4am as "DATAROOT" (see below). If DATAROOT is defined, it should give the path to a folder that will be organized in parallel to the SL4AM folder.
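
For example, a minimal rc.sl4am pointing both the home folder and the data root at a group volume might contain nothing more than the following (the SL4AM-Data path is a made-up example):

    SL4AMHOME="/Volumes/groups.ourlab/Documents/SuperLab Experiments/SL4AM"
    DATAROOT="/Volumes/groups.ourlab/Documents/SuperLab Experiments/SL4AM-Data"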

Note that spaces in folder and file names should be avoided if possible below SL4AM. I'm trying to make the script immune to problems resulting from spaces in filenames, but it's safer (and much wiser) to avoid them.

Project roots are top-level subfolders in SL4AM. There can also be a top-level subfolder in SL4AM called Archival that is intended to store completed or inactive projects.

Except for some code at the beginning to select a project, sl4am is concerned only with the world within a single project root, and in fact, it uses cd to go there as soon as possible so that most paths within a project are relative to the project's root. While running SuperLab, sl4am changes temporarily into the folder of the specified scenario (see below). At the top level of a project root, there must exist a file called rc.proj that is sourced when sl4am starts a session in that project. A subfolder X of SL4AMHOME is a project root iff X/rc.proj exists.

Each project contains a subject hierarchy. The top level consists of population.group folders with dot-separated names, for example, Healthy.Elderly, Healthy.Young, Schiz.Elderly, Schiz.Young. Below each population.group folder are one or more condition.group folders, such as Set1.VisualFirst, Set1.AuditoryFirst, Set2.VisualFirst, Set2.AuditoryFirst. Below each condition.group folder are one or more numeric subject folders, for example 1, 2, 3. There is one subject folder for each subject to be tested in the project. Each subject folder must contain a flag file. It may be desirable for sl4am to create new subject folders automatically as needed; in any case, the names of these folders are simply numbers starting at 1. There also needs to be a way to ensure that sl4am will test the first subject in each condition before starting the numerically next subjects, and so on.

Note that even though some labs never test any population group other than college students, both group levels are required. The best way to handle this situation is to use two population group identifiers, one named "Try" and the other something like "YN" (young normal). The Try group is for testing the experimental setup, while the YN group is a reasonable label for college students. If at some point you need to add another population group, it will be very easy to do it. The presence of the "YN" level won't interfere with anything, and the "Try" pseudogroup is very useful for development and for training RAs.

The flag file is a simple advisory access-control mechanism. When an experiment is first set up, each slot's flag file is named "flag.free", indicating that any computer can run that subject slot. The file is renamed to "flag.user@en0", where "user" is the current user and en0 identifies the current host. (The figure says "fern", which is the name of one of our computers; this method is too variable.) This "checks out" the subject slot to the named individual. The parameter "en0" is set to the ethernet address of en0 (ifconfig en0 | grep ether | read junque ether junque) in order to identify the machine in a somewhat unambiguous way. An individual may relinquish a subject slot by renaming the flag file back to "flag.free", but the presence of datafiles will cause a multi-experiment sequence to pick up where it left off before. When a subject slot is complete and all data has been saved, the flag file must be renamed to "flag.completed".
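
In ksh, the checkout itself can be a single rename; a sketch, using a slot path from the example hierarchy above and the ethernet-address idiom just described:

slot=Healthy.Young/Set1.VisualFirst/3               # illustrative subject slot
ifconfig en0 | grep ether | read junque ether junque
if mv "$slot/flag.free" "$slot/flag.$USER@$ether" 2>/dev/null ; then
    print "checked out $slot"
else
    print "$slot is not free"
fi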

Note that if DATAROOT is defined, then sl4am will search there for flag.free slots, rename them to flag.user@en0 there, and will upload data there. The structure under DATAROOT is identical to the SL4AMHOME hierarchy, but it need not have any Shared folder, rc files, scenarios, or stimuli. In the event that subjects must be tested offline, one or more subject slots should be checked out in advance. When sl4am starts up, it will sweep all experiments looking for the flag "SYNC" in a subject-number folder. If it finds any, and if DATAROOT is accessible, it will synchronize the parallel structure there by uploading any data not already present, and renaming the flag file if necessary. Any time sl4am tries to upload data or change the flag file but DATAROOT is inaccessible, SYNC is created. After a successful upload, it is removed.

There also must be an rc.subj file in a subject-number folder, primarily designed to control running multi-experiment projects (for example, the order in which to run the experiments). This can be a zero-length file.

For each experiment, there is an experiment root under the subject-number folder. This folder contains all that is needed to run one SuperLab experiment: the scenario (.sl4) file; all stimuli needed (these will generally be links elsewhere, or in the case of all-text experiments, missing altogether); and the datafile (.txt) once the experiment has been run. There is also a mandatory rc.exp file, primarily intended to help customize the SuperLab run or the datafile processing afterwards. For example, this could give the subject some feedback about his performance. Sl4am cd's into the experiment root before running rc.exp and SuperLab.

Under each project folder, there can be an optional Shared subfolder. This contains all shared stimuli and/or sl4 files. For convenience, the setup script will install a Shared symbolic link in each experiment root that points back to the project-level Shared folder (if it exists). To link to it from a stimulus folder in an experiment root under a subject folder, just use

    ln -s ../Shared/SomeFolder/somefile.xxx stimset/somename.xxx

To link a scenario file in the experiment root to one in the Shared folder, use

    ln -s ./Shared/some-exp.sl4 some-exp.sl4

It is possible to fully populate this tree before beginning to run, but it is equally possible to use the rc scripts to fill things out at run time.

UPDATE

It really isn't hard to link relatively back to the Shared folder without the klutzy local Shared link. Here's how to do it. You create a dummy tree all the way down to the stim/stimset folder inside an experiment. Also, create a Shared folder with a stimset/xxx.png in it. Then cd down there and do "ls ..", "ls ../.." and so on until you get to (e.g.) "ls ../../../../../../Shared/stimset". Then test it with "ln -s ../../../../../../Shared/stimset/xxx.png yyy.png" or whatever. Once you've done this and gotten it to work, just make sure that your script does two things: use the right number of dotdots, and actually cd into the stimset folder to do the linkage:

(cd downpath; ln -s uppath/Shared/stimset/file.suff link.suff)

Note that, assuming the script is running in the project root, downpath will be like "pgrp/cgrp/subj/exp/stim/stimset", and uppath will be like "../../../../../..". As for why this is worth doing instead of just using absolute links, it's to allow experiments to be installed using simple methods like tar. With relative links, no adjustment is required; with absolute links, there would basically need to be a script run to adjust the links after installation in a new location.

2008-07-15

Subject checkout on shared volume

In our lab, we have five macbook pros that could theoretically be used all at once to test subjects in a single experiment. In the past, we have gotten into trouble when, due to experimenter error, a certain subject slot has been run on more than one computer. To get past that problem, we want to use a shared volume to contain the experiment setup hierarchy, and come up with some way for all of the computers to share that hierarchy. Obviously, there must be some method to prevent two computers from trying to use the same resources. The simplest way is to set a lock at the filesystem level, marking the subject as "taken", and to release the lock when done. However, the most straightforward way to do the sharing, using one of Apple's group iDisks, has no locking mechanism. You can't even make files read-only. The lockfile(1) program, when asked to create a lockfile on an iDisk, gives up and suggests praying instead.

I did come up with a locking mechanism. What you do is to use a reserved folder. A computer that wants to lock the resource waits until the folder is empty, then writes its ID into the folder (the ID could be, for example, the ethernet address of en0). After a short delay, the computer then checks to see if there is exactly one file in the folder, namely, its own ID. If so, then it has the lock. If there is more than one, then it removes its ID, waits a short but random period, and tries again. The only problem with this mechanism is that it is very slow, on an already slow filesystem like the iDisk.

After pondering this for a while, I thought of another approach. Instead of setting a lock before accessing the subject slot, you randomly choose the "next" subject to test, and then rename it to a name with your ID. For example, if the subject is called "12", and if your ID is aa.bb.cc.dd, then you would simply "mv 12 12-incomplete-aa.bb.cc.dd". Then wait a short time and see if "12-incomplete-aa.bb.cc.dd" exists. If it does, you now own subject 12; if not, try again. (If the locked name doesn't exist, it means that a race occurred and another computer locked it between the time you found it and the time you did the mv command.)

The random selection is somewhat important, but not critical. If you just go in a fixed order, all it means is that there is slightly greater probability that a given computer will have to try more than once to get a subject.

Once the subject is locked, testing proceeds. When it is complete, the name is changed again to, e.g., "12-complete-aa.bb.cc.dd". Note that it is still locked, in a sense, since it will not appear in the list for testing.
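
A ksh sketch of the whole checkout loop (the ID, the shared-volume path, and the two-second settle time are all arbitrary choices):

ID=aa.bb.cc.dd                                      # e.g., the en0 ethernet address
cd /Volumes/groups.labname/Experiment/subjects || exit 1
while : ; do
    set -A free $(ls | grep '^[0-9][0-9]*$')        # slots still named with bare numbers
    n=${#free[@]}
    [[ $n -eq 0 ]] && exit 1                        # nothing left to test
    subj=${free[RANDOM%n]}
    mv "$subj" "$subj-incomplete-$ID" 2>/dev/null
    sleep 2                                         # let the slow iDisk settle
    if [[ -e $subj-incomplete-$ID ]] ; then
        print "locked subject $subj" ; break
    fi
done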

One other brief note: it might make sense for each subject on the remote volume to be an archive, for example in tar.gz format. This would facilitate copying it onto the macbook pro. A question to be resolved is whether data is placed into the archive or somewhere else on the remote volume.

2008-07-09

SuperLabAutoMator: superlab + automator

We use Superlab 4 for some experiments we do in the lab, but it almost always seems that we need fancier randomization/counterbalancing than the program provides out of the box. Also, the dialog that the RAs must go through to deal with subject and group IDs, different scenario files for different conditions, and the right name to use for the logfile has resulted in errors and lost data in our lab. The traditional solution for this is scripting, and in the Macintosh world, many user-oriented scripts make use of the Automator utility. I'm currently setting up an experiment that requires a specific randomization and counterbalancing across three different procedures for 24 subjects. What I intend to do is to make a shell script embedded in an automator script called "SuperLabAutoMator" (or "slam" for short) that will do this in a generalized way. What SuperLabAutoMator does is to pop up a window asking the RA to select an experiment (each must be a subfolder of a standard folder, or if not there, of the same folder as the script). It then follows the instructions in the experiment subfolder, by running "prescript", "midscript", and "postscript", which are functions defined in the script.

In the experiment subfolder, there is optionally a file called "rc.slam" that can define the following objects:
  • name=xxx -- the subject ID to use (default = null)
  • group=xxx -- the subject's group (default = null)
  • scenario=ppp -- default is "scenario.sl4" in the experiment subfolder
  • logfile=ppp -- default is "logfile.txt" in the experiment subfolder
  • fifofile=ppp -- use as Superlab's logfile; filtered data should be written to $logfile by midscript
  • prescript() -- set up to run
  • midscript() -- interact with Superlab while running
  • postscript() -- run after midscript and Superlab have finished
All of these have default values. The rc.slam file will be sourced early, before asking for the subject ID, for example. This list could expand or shrink.
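
As an example, an rc.slam for a simple experiment might look like this; the group name and the two helper files (randomize-stims.ksh and codes.sed) are hypothetical:

group=Set1
scenario=scenario.sl4
logfile=logfile.txt

prescript(){
    # re-randomize the stimulus links for this subject before Superlab starts
    ksh ./randomize-stims.ksh "$name" "$group"
}

postscript(){
    # translate the dummy event codes in the logfile into condition codes
    sed -f codes.sed "$logfile" > "${logfile%.txt}-coded.txt"
}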

Midscript is to be run while Superlab is running. It can either fiddle with the logfile directly, or prescript can create a named pipe to be used by Superlab as the logfile, which midscript then opens for reading. The purpose of midscript is to handle cases where the actual stimuli to be presented must be changed as a function of performance. In most cases, it will not be needed. Note that if midscript is reading the logfile from a named pipe, it should also save the raw contents. One way to do this might be to use the tee(1) command; alternatively, midscript can simply save each line it reads. One unresolved problem is that Superlab brings up a user confirmation window if the logfile already exists, as it must for a named pipe. It would be nice to override that somehow.

The core of Superlabautomator will run Superlab from the executable file rather from the GUI, so that command-line options can be specified. After changing into the experiment subfolder, it will source "rc.slam".  Next, it will call prescript, and wait for it to finish. It will then set Superlab to run in the background, bring Superlab into the GUI foreground, and call midscript. After midscript completes and Superlab exits, SuperLabAutoMator will call postscript before exiting.

In general, it is a good idea for Superlab, at a minimum, to wait for several seconds before the first trial, or more typically, to display an instructions screen and wait for a response.

From the RA's point of view, all that will be required is to start SuperLabAutoMator, choose the correct experiment (only active ones should be available, and if only one is available, only a confirmation window comes up); choose the subject group from a short list (if more than one); choose the subject (only untested subject numbers will be available). The script will take care of setting things up for Superlab, running it, and dealing with the data, including filtering it, giving some feedback to the subject, and possibly storing it away in a centralized database.

When I get this running with the first experiment (no runtime interaction will be used, btw), I will post the SuperLabAutoMator app and the setup of the first experiment.

Note: while slam is a great name, it is already in use with a couple of different programs/utilities, so we will go with the GUI name SuperLabAutoMator and use slam as an internal shortcut (as in the rc.slam filename), and we can also pronounce the name optionally as slam.

2008-07-01

Sending email to root

It is pretty important that root gets asynchronous notification of problems detected by the maintenance system. However, it is obviously impossible to send email in single-user mode (sendmail not running, boot drive write-protected). Therefore, there are two methods that could be used to notify root of errors or just of system status. The basic idea is that the message be written someplace other than the boot drive and then mailed via a launch daemon once the system is back in multi-user mode.

First, when a backup is done, it will be done to a writable medium, namely a firewire drive. This drive will be mounted when the system is running, so a message posted in /Volumes/Snapshots can easily be sent on to root. This will be the main method of notifying root about snapshots and clones.

Second, there are times when Snapshots isn't available. In fact, one of the critical messages might be that it wasn't available so no snapshot was made. In this case, the best method is to use /var/log/system.log. The system startup saves the standard output and standard error from rc.server (and other boot programs) in system.log. This is in fact a more reliable way to send notifications.

The approach will be simply to write out banner lines like "org.bogs.rootmail begin" and "org.bogs.rootmail end" so that the daemon can simply extract the intermediate lines (if any) and mail them to root. In general, this should be limited to the bare minimum of lines. By default, system.log is cycled at midnight, and eight gzipped copies are kept around, which should be more than ample.

One thing this does is to complicate the daemon. Not only must it write out org.bogs.maintenance-mode, but it now has to send mail to root. This will require something to be run immediately after entering multi-user mode as well as something when it is time for more maintenance. Also, some kind of flag must be used to prevent multiple mailings, but I don't know what it should be. Maybe just a file ~root/.org.bogs.rootmail containing the timestamp of the last "org.bogs.rootmail end" line that was mailed out would be sufficient. That is, when the mailing script is run, it will send only more recent segments of system.log, and it will update the flag file.
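
A sketch of the extraction-and-mail step, ignoring the timestamp flag file for clarity (the banner strings are the ones proposed above):

awk '/org\.bogs\.rootmail begin/ { keep=1 ; next }
     /org\.bogs\.rootmail end/   { keep=0 ; next }
     keep' /var/log/system.log > /tmp/rootmail.$$
if [[ -s /tmp/rootmail.$$ ]] ; then
    mail -s "maintenance report from $(hostname)" root < /tmp/rootmail.$$
fi
rm -f /tmp/rootmail.$$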

Where to put maintenance scripts

This is slightly complicated, because system areas are of course sometimes overwritten by software updates. Here are some ideas.

I want to keep the changes to /etc/rc.server as minimal as possible. So I will add two lines at the very end of the file that will do two things in a conditional:

# org.bogs.maintenance
if [ -e /private/tmp/org.bogs.maintenance-mode ] ; then source /var/root/Scripts/maintenance.sh ; fi

The file /tmp/org.bogs.maintenance-mode is used to pass parameters to ~root/Scripts/maintenance.sh. If it is not present, then this is not a maintenance boot. If it is present, but if the maintenance.sh script is missing, an error message "no such file or directory" will be written to the log. Note that maintenance.sh runs in the same bash environment as rc.server. Also note that ~root is already a protected place, and Scripts should also be protected (mode 700).

Also note that /tmp will be erased at some point after the maintenance has completed and the system comes up multiuser.

The file maintenance.sh should not do very much; its role is to call other scripts or programs located in the same directory based on the contents of org.bogs.maintenance-mode.

All of the maintenance code will be maintained elsewhere, on another system, and be copied into a directory on the server and installed from there into /var/root and /etc/rc.server. In addition, the installation script will add or replace the last two lines of rc.server, and will also add the appropriate material to /Library/LaunchDaemons (which is where "system-wide daemons provided by the administrator" are supposed to go).

2008-06-30

Simulating UUID in rc.server

I've now done a reasonable amount of testing, and it seems clear that an "md5 -q" hash of the output of « pdisk DISK -dump | grep -v "/dev/disk" » can be used to identify a certain disk both while the system is running and the drive is mounted, and while the system is in single-user mode (in rc.server) and the drive is not mounted.

In effect, the hash functions very similarly to the UUID method that can be used in multi-user mode in fstab.

Furthermore, I have ascertained, as is only logical, that neither /private/tmp nor /Volumes has been cleaned up at the time that rc.server is running. (It's logical because the boot drive is still mounted read-only and no other drive is yet mounted.)

Therefore, a time-stamped info file placed into /tmp, containing the hash(es) of the drive(s) holding /Volumes/Clone and /Volumes/Snapshots, can be used reliably to find the appropriate drive(s) to mount in order to do the backups, and those drives can then be mounted in /Volumes.

2008-06-29

/dev entries

The latest bottleneck is in ascertaining the name in /dev of the disk to use for backup. The LABEL= and UUID= names don't appear to work from rc.server, and the /dev/disk? names are famously variable: suppose, for example, that someone plugged in a USB drive, then plugged in the Firewire backup drive, and later removed the USB drive. Even with a predictable assignment sequence, the disk number would change.

Just as a footnote, this is something that has always driven me crazy, in UNIX, in MS-DOS, and everywhere. Why can't there be a constant mapping between a slot and a drive?

Anyway, there is a program called pdisk(1) that may work. It prints the partition table from an attached drive whether it is mounted or not, and so it will probably work from rc.server. The table it prints out is like this:

Partition map (with 512 byte blocks) on '/dev/disk1'
#: type name length base ( size )
1: Apple_partition_map Apple 63 @ 1
2: Apple_Free 262144 @ 64 (128.0M)
3: Apple_HFS Untitled 104857600 @ 262208 ( 50.0G)
4: Apple_Free 262144 @ 105119808 (128.0M)
5: Apple_HFS Untitled 480690400 @ 105381952 (229.2G)
6: Apple_Free 16 @ 586072352

Device block size=512, Number of Blocks=312581808 (149.1G)
DeviceType=0x0, DeviceId=0x0

Note that while there are various useful clues, the actual volume name doesn't appear to be present. There is an option (-f) that is supposed to cause volume names to be printed, but this has no effect in early testing.

However, the swap-search example program that I found did something interesting with the output from pdisk. It used the command "md5 -q" to create a usable checksum from all of the output of pdisk except the first line (which gives the /dev name). This allows an easy way to recognize whether the appropriate disk is present and where it is in the device tree. The command it used was swaphash=`pdisk /dev/disk${swapdisk} -dump 2>/dev/null | grep -v '/dev/disk' | md5 -q`

So one way to do this would be for the scheduled script to look for the devices by their mount names /Volumes/Clone and /Volumes/Snapshots, use mount to find the current device names and partition numbers, and use pdisk to compute the hash, which would be stored somewhere volatile that is not yet cleaned up while rc.server runs (?), or maybe just in /etc. Then when the system is booted, a simple loop would be run to find the disk, it would be mounted, and the backup would proceed.

Incidentally, finding a directory like /tmp or /var/??? that gets cleaned up after rc.server is finished would be a great way to pass information to the maintenance script(s), since there's no other way that the info would be there unless it was placed there just before booting. Note that the script can't really remove it, since the boot drive is still read-only. In fact, this implies that /tmp can be used for this purpose. To be tested...

Well, I'll be testing this all out tomorrow so will report either in a comment here or in another entry.

Here is an example of finding all disks' hashes:
for x in /dev/disk[0-9] ; do
    /bin/echo $x `/usr/sbin/pdisk $x -dump 2>/dev/null | /usr/bin/grep -v "/dev/disk" | /sbin/md5 -q`
done

2008-06-28

snaps, rc.server, and firewire drives

I haven't found out much more, but here are two fairly critical facts.

Preliminaries: I have partitioned a 300GB firewire drive with a 50GB partition called "Clone" and a 250GB partition called "Snapshots". The plan is for Clone to contain a bootable clone of the main drive, produced by ditto once per week or so, and for Snapshots to contain the snapshots. Obviously, if I put more than 50GB on the main drive (which is a 250GB drive, by the way), this scheme won't work. In fact, I probably need a 1TB drive for this, with Clone equal in size to the main drive and the rest for snapshots. At some point, I will upgrade, but for now, I am using around 30GB on the main drive, so this will be good for testing and for use for quite a while.

The first amazing fact: based on my little test with mount, firewire drives are not mounted during the time rc.server is running. So in order to do my firewire backup, I will have to mount and then unmount them. I hope that the necessary driver is available at that time. I wonder what will happen if I leave them mounted read-only...

The second fact, and I should have known this, is that when the system eventually mounts the partitions, it starts running Spotlight on them (well, they are empty so this was sort of a no-op). Once I start putting data on them, I absolutely do not want Spotlight even to see them. In fact, when I'm not actually backing up to them (from rc.server), I want them always to be mounted read-only. So I need to research these two issues: getting Spotlight to skip them, and making sure they are mounted read-only by default (possibly by mounting them read-only in rc.server?).
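
The likely candidates for those two issues, still to be verified on the actual system, are mdutil and an update mount:

mdutil -i off /Volumes/Clone /Volumes/Snapshots     # keep Spotlight indexing off the backup volumes
mount -u -r /Volumes/Clone                          # remount read-only until backup time
mount -u -r /Volumes/Snapshots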

Hasta mañana.

2008-06-27

snaps and rc.server

Snaps is a Korn shell backup script for our servers. It is intended to do one rsync(1) snapshot per day of the entire boot drive onto a local firewire drive. In addition, it does a periodic clone of the boot drive using ditto(1). There are a couple of unusual aspects to this script.

First, it parameterizes the archiving of snapshots in an unusual way. It keeps a week's worth of daily snapshots (all of the numbers here are parameters). Then, it uses an exponential function to decide which older snapshots to delete. It will always keep one snapshot in each integral range, so one for 2^0 days, one for 2^1 days, one for 2^2 days, one for 2^3 days, and so on up to one for 2^8 days. Then, it always keeps backups that are older than 365 days. (Remember, all of these numbers can be tweaked.) This results in a kind of S-shaped function of the frequency of preserved snapshots per unit of time. Most systems like this do something similar, but use standard calendar periods, like so many daily, so many weekly, so many monthly snapshots. I thought that the exponential function would be more general than this, so that's what "snaps" uses.

Second, and this is what I'm wrestling with at this stage of development, I want to automate this script. However, there are several server databases that cannot be "live" when a snapshot is taken, and in general, a backup is much more valuable if the system is quiescent when it is done. My idea is to set up a periodic process that will reboot the system in the "wee hours" of the morning. The snapshot script will be run early in the boot process, before things really get started in the system.

It took me quite a while to figure out how to do this because of how launchd and launchctl work. There doesn't seem to be any way to get things to happen early enough. However, a perusal of the launchctl/launchd source revealed that there is a section at just the appropriate moment when a script called "/etc/rc.server" is executed if present. This is done right after single-user mode and has much the same context as single-user mode.

I just added some lines to the end of rc.server to see what the environment is. (Rc.server's standard output is placed into /var/log/system.log.) The commands I added were /sbin/mount and /bin/ps. Here is what was reported:


Note that almost nothing is running, just launchd, launchctl, and the shell, which is running /etc/rc.server. This is a quiescent system. Also note that the boot drive is still mounted read-only, which is ideal for the purposes of making a backup.

Here is the current version of snaps:


#!/bin/ksh
# ----------------------------------------------------------------------
# snaps -- maintain a set of filesystem snapshots
# the basic idea is to make rotating backup-snapshots of sourcedir
# onto a local volume whenever called. The philosophy is to put all of
# the configuration and logging information into the backup directory,
# so that snaps requires only that path to get going. The scurve filter
# causes an s-shaped frequency of preserved snapshots, keeping more
# recent snapshots and fewer old ones.
#
# Important note: HFS+ filesystems are apparently set to ignore ownership
# for all but the boot drive. This must be disabled using the Finder's
# Get Info panel. (Is there a way to check for this programmatically?)
#
# NOTE: rsync must be version 3 or better
# ----------------------------------------------------------------------
# Usage: snaps [-n] SNAPS_DIR [ROOT]

# -------shell function defs------------------------------------------

# compare the current time in secs to a list of dates
# if on return, ${snap[0]} = secs, then we need to do a backup, otherwise do nothing
# also, the old backups in rmrf need to be expunged. ante is the most recent previous
# backup, if any.
function scurve {
typeset secs age tmp x i

secs=$1 ; shift
tmp=$(perl -e "@x=sort { \$b <=> \$a } qw($*);print \"@x\",\"\\n\"")

if [[ "$tmp" == "" ]] ; then
unset snap
snap[0]=$secs
return
fi
for ante in $tmp ; do
break
done

((age=secs-ante)) # age in secs of most recent snap
if [[ age -le JOUR ]] ; then # too soon
return
fi

unset snap
unset arch
unset curr
unset rmrf
for x in $tmp ; do
((age=(secs-x)/JOUR)) # age in ticks
if [[ age -le 0 ]] ; then # too soon
print age $age secs $secs x $x
continue
fi
# take care of the current backups in "real time"
if [[ age -le CURR ]] ; then
curr="$curr${curr:+ }$x"
continue
fi
# also take care of the archival backups in "real time"
if [[ age -ge ARCH ]] ; then
arch="$arch${arch:+ }$x"
continue
fi
# now set the base of the exponential portion
((age-=CURR))
((i=1+floor(log(age)/log(BASE))))
if [[ "${snap[i]}" == "" ]] ; then # nothing in this slot yet
snap[i]=$x
elif [[ ${snap[i]} -gt $x ]] ; then # always keep the older one
rmrf="$rmrf${rmrf:+ }${snap[i]}"
snap[i]=$x
else # keep unless current
rmrf="$rmrf${rmrf:+ }$x"
fi
done
if [[ "${snap[0]}" == "" ]] ; then
snap[0]=$secs
fi
}

# errs and other log stuff all go to stderr
log(){
print -u2 -- "$where:$TO@$(date +%Y%m%d.%H%M%S) $(basename $ME .ksh): $*"
}
finish(){
if [[ -e snaps.log ]] ; then
mail -s"Snaps Status for $where:$TO" root < snaps.log
rm snaps.log
fi
exit $1
}
err(){
log "$*"
finish 1
}

nopt=0
rsyncopt(){
# append one option (which may contain spaces) as a single array element
RSYNC_OPTS[nopt++]="$*"
}

# ---------------------- basic parameters --------------

# NOTE: define RSYNC to a version that is 3.0.0 or newer
RSYNC=/opt/local/bin/rsync

# these are for error message purposes (see functions log & err)
ME=$0
where=$(hostname)

# limit PATH to /bin and /usr/bin (rsync is invoked via its full path)
PATH=/bin:/usr/bin

# ------------- args, file locations ----------------------------

case "$1" in
"-n" ) now=print ; dry="-n" ; shift ;;
* ) now= ; dry= ;;
esac

TO=$1

if [[ "$TO" == "" ]] ; then
err "Usage: snaps [-n] SNAPS_DIR]"
fi

# make sure we're running as root so we can start logging
if [[ `id -u` != 0 ]] ; then err "Not root" ; fi

if [[ ! -d $TO ]] ; then
err "No such directory $TO"
fi
eval `stat -s $TO`
if [[ $st_uid -ne 0 || $(($st_mode&0777)) -ne $((0755)) ]] ; then
err "$TO not mode 755 directory owned by root $st_uid $st_mode $(($st_mode&0777)) 0755"
fi

cd $TO

# set up errors from this point to be redirected to the log except for dry runs
# we do one log per backup and we store it in the snapshot folder as a record
# of that snapshot

if [[ "$now" == "" ]] ; then
if ! exec 2> snaps.log ; then
err "failed to write in $TO -- read only volume?"
fi
fi

log "Begin $dry"

# -------------- rsync parameters -------------
rsyncopt -vq # verbose error messages
rsyncopt -a # archive mode: -rlptgoD
rsyncopt -x # do not cross filesystem boundaries
rsyncopt --protect-args
rsyncopt --delete-excluded # implies --delete
rsyncopt -A # --acls
rsyncopt -X # --xattrs

# the makers of carbon copy cloner also recommend these options which are
# not available in the macports version of the program:
# rsyncopt --fileflags
# rsyncopt --force-change

# ------------ do some more checking -----------------

# NOTE: this needs to check for "Capabilities" <<<<<<<<<<<<<<<<<<<<<<
# insist on v. 3.X for working link-dest and xattrs
# if and when v. 4.X comes out, fix the script
case "$($RSYNC --version)" in
*'version '[012]'.'* ) err "$RSYNC is older than version 3.X" ;;
*'version '[456789]'.'* ) err "$RSYNC is newer than version 3.X" ;;
esac

# --------- the snapshots subdirectory ---------------

DD=$TO/snapshots
if [[ ! -d $DD ]] ; then
err "No such directory: $DD"
fi
eval `stat -s $DD`
if [[ $st_uid -ne 0 || $(($st_mode&0777)) -ne $((0755)) ]] ; then
err "$DD must be an rwx directory owned by root"
fi

# --------- configuration files -----------------
# they can be empty, but they must be uid0 and mode 0644

for x in config filter ; do
if [[ ! -f $TO/snaps.$x ]] ; then
err "No such file: $TO/snaps.$x"
fi
eval `stat -s $TO/snaps.$x`
if [[ $st_uid -ne 0 || $(($st_mode&0777)) -ne $((0644)) ]] ; then
err "$TO/snaps.$x not mode 0644 and owned by root"
fi
done

# ---------- use filter file if there is one -------
if [[ ! -s $TO/snaps.filter ]] ; then
rsyncopt "--cvs-exclude"
else
rsyncopt "--filter=. $TO/snaps.filter"
fi

# -----------------everything looks ok, let's get started--------------

# set defaults
ROOT="/"
VERSION=1
CURR=7
ARCH=731
JOUR=86400
BASE=2

# get overrides and other config info
# the only thing legal in this file is variable definitions
# of a few numeric or filepath parameters. to do comments, simply start
# the line with "#" or the word "rem" or "comment".
exec < snaps.config
while read x path ; do
for y in $path ; do
break
done
case $x in
"" ) continue ;;
ROOT )
ROOT="$path"
continue
;;
VERSION|CURR|ARCH|JOUR|BASE )
if [[ "$y" == "" || "$y" == *[^0-9.]* || "$x" != "$path" ]] ; then
err "Bad assignment in snaps.config line: \"$x\" \"$path\""
fi
eval "$x=$y"
continue
;;
comment|COMMENT|rem|REM ) continue ;;
"#"* ) continue ;;
* ) err "Unknown parameter in snaps.config line: \"$x\" \"$path\""
esac
done

# what time is it?
secs=$(date +%s)

# see if there is any work to do
unset snap
unset curr
unset arch
unset rmrf
unset ante

scurve $secs `ls snapshots`
if [[ ${snap[0]:-NIL} -ne $secs ]] ; then
log "Too soon"
exit 0
fi

# for log
df $TO

# remove unwanted snapshots if any
for x in $rmrf ; do
log "Unlinking $x"
$now rm -rf snapshots/$x
done

# if we crashed before, get rid of the remains
for x in *.partial ; do
if [[ -d $x ]] ; then
print "Unlinking $x for $where:$TO on `date`" >> snaps.log
$now rm -rf $x
fi
done

# is there a previous version to use with link-dest?
if [[ "$ante" != "" ]] ; then
rsyncopt "--link-dest=$TO/snapshots/$ante${ROOT:+/}$ROOT"
fi

# rsync from the system into the new snapshot
log "$RSYNC $dry "${RSYNC_OPTS[@]}" "$ROOT/" $TO/$secs.partial"
$RSYNC $dry "${RSYNC_OPTS[@]}" "$ROOT/" "$TO/$secs.partial"

# move the snapshot into place
$now mv "$secs.partial" snapshots/$secs

# update the mtime of the snapshot to reflect the snapshot time
$now touch snapshots/$secs

# and that's it.

df $TO

log "Completed $dry"

$now ln snaps.log snapshots/$secs

finish 0
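
For reference, given the parser above, a snaps.config might look like this (the values shown are just the built-in defaults, and snaps.filter can simply be left empty to fall back on --cvs-exclude):

# comments start with "#", "rem", or "comment"
ROOT /
CURR 7
ARCH 731
JOUR 86400
BASE 2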

What is this?

I spend a lot of time writing various kinds of scripts to support activities in our lab at UC Davis. They range from system administration scripts to scripts used to set up or analyse data from experiments to "helper" scripts for formatting various kinds of documents. Sometimes it helps to write about the scripting process, which I have generally done in the form of notes to myself, a kind of brainstorming and autodocumentation. I decided that it might be useful to do it online instead. That way, someone else might find something useful or something to avoid, and I might get a useful suggestion or two about it.

So in that spirit, I'm going to kick off this blog with various projects that are underway.
