OS/X Lab Scripts
Ongoing brainstorming/documentation/repository of scripts and other utility programs I've written or am writing for our lab. Note that there are no guarantees that what works in our lab will work in yours. Also, although I'm trying to be as accurate and thorough as possible, I may not take the time to go back and correct things that I used to think were correct but no longer are.
2014-09-29
Shared Items folder
What I wanted to do was to place a lab data folder within a user's folder. For years, I had had it under /Shared Items, but it turns out that it is simply easier to deal with things in OS/X when they are in user folders than when they are elsewhere in the directory hierarchy.
I created a “data” user and moved my data hierarchy into its home folder: « ~data/Data ». I marked this folder as shared, and then tried mounting it remotely. When I tried « afp://server.domain/Data », it failed. When I tried « afp://server.domain/Users/data/Data », it mounted the server's /Users folder (but at least it allowed access to the Data folder as « /Volumes/Users/data/Data »). But this is far from satisfactory.
It took me quite a lot of searching online, but finally a tangential remark led me to success.
It turns out that each user is also allowed to have a « Shared Items » folder that operates more or less the same as the one in the root directory of the system. This is parallel to the user-level Applications folder that can be used to install apps privately (but very few apps actually use this for some reason). Another example is that there are both user-level and root-level Library folders: these are both in heavy and regular use.
I now made a folder « ~data/Shared Items » and moved Data under it. I also made a symlink at « ~data/Data » pointing at « ~data/Shared Items/Data », mostly so scripts could avoid the space in the name. I then shared « /Users/data/Shared Items/Data ». Now it worked the same as it used to when it was in the system « /Shared Items » folder, yet it would now allow me to deal with « data » as a user with a big home folder, which in certain situations is very handy.
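For reference, the moves above can be sketched as shell commands. This sketch uses a throwaway directory so it is safe to try; on the real system the base would be /Users/data, and marking the folder as shared is still done through Finder's Get Info pane or the Sharing preference pane:

```shell
# stand-in for /Users/data; replace with the real home folder in practice
BASE=$(mktemp -d)
mkdir "$BASE/Data"            # the existing data hierarchy
mkdir "$BASE/Shared Items"    # the per-user Shared Items folder
mv "$BASE/Data" "$BASE/Shared Items/"
# symlink so scripts can avoid the space in "Shared Items"
ln -s "$BASE/Shared Items/Data" "$BASE/Data"
ls -ld "$BASE/Data"
```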
This is a very useful trick, I believe.
As for the space in « Shared Items », I find it unbelievable that Apple would do such a thing. I suppose it's not terrible that users are allowed to have spaces in filenames, but for a major system component to have a space, well, that's just unbelievable. SharedItems would have been fine, or even Shared_Items, but « Shared Items »? Pfui. And the same for things like « Application Support » and « Contextual Menu Items » (and yet, they got it right for tons of other ones: ColorSync, PrivilegedHelperTools, QuickTimeStreaming, ...).
2013-09-13
Waste less time (OS/X)
There is a very interesting browser extension called WasteNoTime that is available for Safari and Chrome. It lets you create a schedule during which limited or no access is allowed to certain web sites. I've found this to be a very valuable tool.
However, there are other web browsers around, and sometimes, even though it's time to get to work, the temptation to make just one more comment or to finish reading a very interesting article is too strong, and I end up firing up OmniWeb or Opera (which do not support WasteNoTime) in order to waste just a little more time. But this is pernicious, because “just a little more” can soon become hours. Oh ye of little will power.
Well, it would be great to have some kind of impediment that would at least make it less convenient to use those browsers (or certain other applications) during work hours.
So, here's how I decided to do it.
There is a command called “chmod” that can change permissions on files or folders. If certain permissions are not set on an application (e.g., /Applications/Opera.app), it cannot be executed. So my very simple idea is based on this script:
#!/bin/ksh
# Usage: lockBrowsers.ksh lock|unlock|ls
# Very simple locker for all web browsers that do not have any means of
# restricting time-wasting activities; to be called by cron. (Note that both
# Safari and Chrome have the WasteNoTime extension, which is better than this.)
# We do allow some time wasting a couple of days per week (but not on weekends).
# Current schedule:
#   Unlocked every day at 7PM
#   Locked every day at 10PM
#   Unlocked M, F at 6AM
APP=/Applications
set -A Waster Firefox OmniWeb Opera
case "$1" in
    lock   ) m="a-x" ;;
    unlock ) m="a+x" ;;
    ls     ) m=ls ;;
    *      ) print -u2 "Usage: $(basename $0) lock|unlock|ls" ; exit 1 ;;
esac
for (( i=0 ; i<${#Waster[*]} ; i++ )) ; do
    if [[ $m == ls ]] ; then
        ls -ld "$APP/${Waster[i]}.app"
    else
        chmod $m "$APP/${Waster[i]}.app"
    fi
done
exit 0
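To implement the schedule from the script's comments, crontab entries along these lines would work (the path to the script is an assumption; adjust it to wherever you keep yours). In cron's day-of-week field, 1 = Monday and 5 = Friday:

# unlocked every day at 7PM
0 19 * * * $HOME/bin/lockBrowsers.ksh unlock
# locked every day at 10PM
0 22 * * * $HOME/bin/lockBrowsers.ksh lock
# unlocked Monday and Friday at 6AM
0 6 * * 1,5 $HOME/bin/lockBrowsers.ksh unlock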
2012-08-04
BOX — A way to manage the Mountain Lion Inbox
At this point, instead of viewing your Inbox, you should view Box. You will see unread messages and Inbox messages at the top, followed by all red-flag messages, then orange-flag messages, then yellow-flag messages. Within each level, you'll get the same default sorting by priority and date, oldest first. Note that messages flagged at a lower priority (Green-Grey) can be viewed via the standard Flagged bookmark (this bookmark should also be set up to sort by Flags ▾, ascending). Normally, all messages in Hold will be flagged, but the Hold bookmark can be selected to see all current contents, flagged or not.
In addition to this, you should drag the root of your current mailbox archive hierarchy to the bookmark bar (in my case, this is currently named y2012). This should allow you to keep the mailboxes sidebar hidden most of the time.
When a message arrives, it will be at the top of the column. Here is a suggested sequence for dealing with a message in your Inbox.
- Delete it if you can, possibly after jotting down a quick reply (the rule is, less than two minutes).
- If you need to act on it, but it will take longer than two minutes, set a flag according to its priority. Then...
- If you're not sure where or whether you want to save it after you've dealt with it, drag it onto the Hold bookmark.
- If you already know where to save it, drag it onto your archive bookmark and down to the appropriate mailbox.
If you need to deal with a message on a certain date, drag it to iCal on that day, and consider setting an alert. If you tend not to look at iCal regularly, I recommend a utility called EmailMyCal from the App Store: it can email you your agenda, including a mention of pending email, each day. You will also receive (OS/X ≥ Mountain Lion) alerts about that message, depending on your configuration.
OK, that's the system. You really need to keep unread/Inbox messages low, or you won't see the flagged messages.
2012-06-06
BOX — [NOT] a simple inbox management scheme
Due to an extremely annoying lack in Mail.app, the critical “Date last viewed/Not in the last...” filter simply doesn't work. Sorry.
The scheme used these smart mailboxes:
- BoxToday (red)
- Box1day (orange)
- Box2day (yellow)
- Box3day (green)
- Box1week (blue)
- Box2week (purple)
- BoxAnyFlag (gray)
- BoxUnread
- BoxOld
- TMP
And these handling actions:
- Deal with it, remove its flag if any, and delete it
- Deal with it, remove its flag if any, and file it
- Flag it with a (different) priority and file it
USING iCal
In some cases, you don't want to just push back an email to some rough time in the future, you must deal with it by a specific but far-off deadline. The Box method is not for that. Instead, file the message and then drag the message to a date and time in iCal. This will make the reminder part of your regular calendar system.
2012-03-19
Post-iDisk backups
I selected the free service offered by CloudSafe GmbH as the replacement iDisk. They offer 2 GB for free. Their site is very secure in that all access is via https, and all data stored there is highly encrypted and must be decrypted through the use of a lengthy key. Also, they offer WebDAV over https to the data.
The free CloudSafe accounts can have up to three WebDAV mountable remote drives, called “safes”, each with its own encryption key and access rules. For the purposes of backup, I created a safe called “Backup”.
In order to use the remote drive, you first have to use CloudSafe's dashboard to enable WebDAV on the safe. When you do this, the system will display two critical codes. The first code is part of the address used to access the drive, and is a 10-digit number, as in « https://0123456789.webdav.cloudsafe.com/ ». The second code is used, along with the e-mail address you use to access your CloudSafe data online, to access (i.e., decrypt) the data; it consists of four six-character alphanumeric strings, like ACB123-DEF456-GHI789-JKLMN0.
When you have received those codes, the first thing to do is to use Finder's ⌘K (Connect to Server…) dialog to open the safe. It may be necessary to have some content in the safe for it to open correctly; in my case, I created a folder called Daily there. When Finder asks you to authenticate, enter the full https address as the server, the email address as the name, and the decryption string as the password. IMPORTANT: save this in your login keychain.
Now, some of what follows can be done differently if you prefer, but this is what I did.
I have a miniature partial unix-style file system called “usr” under Documents in my home directory. I put it there to keep it relatively unobtrusive and to avoid cluttering the main file system. In what follows, it is assumed that the folder “~/Documents/usr/libexec” exists to contain the script.
Next, the script itself:
#!/bin/ksh
# Backs up a list of folders or files to the CloudSafe Daily folder.
# The backups are done in subfolders of Daily as follows: there is a
# folder for every month (%m; 01-12) in every year (%Y). The backup is
# done there whenever the corresponding folder (%Y%m) doesn't exist. On
# all other days, the backup is done in a 7-day cycle based on the day
# of the week (%u; 1-7; Monday = 1). All previous contents (if any) are
# removed before each backup.
# NOTE: the CloudSafe file system is very simple and does not support
# links and so on, so nothing complicated should be backed up here. All
# sources are below $HOME. If it becomes necessary to back up more
# complicated filesystem structures, maybe we can back up using tar or
# a disk image.
Me=`basename "$0" .ksh`
# server info
SAFE=0123456789   # REPLACE THIS WITH YOUR SAFE'S INFORMATION
SERVER=webdav.cloudsafe.com
URL="https://$SAFE.$SERVER/Daily"
# mountpoint info
MNT=/Volumes
DEST="$MNT/Daily"
Year=`date +%Y`
Month=`date +%m`
Day=`date +%u`
# try a command n times or until success
function tryrep {
    typeset i ntry=$1 ; shift ; typeset cmd="$@"
    for (( i=0 ; i<$ntry ; i++ )) ; do
        if $cmd ; then return 0 ; fi
        sleep 10
    done
    return 1
}
log(){
    print -- "$Me: $*" | logger -s
}
err(){
    log "$*"
    exit 1
}
errum(){
    if tryrep 100 umount "$DEST" ; then
        sleep 5
        if [[ -d "$DEST" ]] ; then
            rmdir "$DEST"
        fi
    fi
    err "$*"
}
# the list of assets
set -A Src \
    Library/Keychains/personal.keychain \
    Library/Keychains/login.keychain
# mount volume
if ! mkdir "$DEST" ; then
    err "Mountpoint '$DEST' is in use or $MNT is unwritable"
fi
# assumes that authentication is in user's keychain & mount_webdav has access
if ! tryrep 10 /sbin/mount_webdav "$URL" "$DEST" ; then
    rmdir "$DEST"
    err "Failed to mount '$DEST'"
fi
log "Mounted '$URL' at '$DEST'"
# establish and zero the destination folder
if [[ ! -d "$DEST/$Year$Month" ]] ; then
    Dest="$DEST/$Year$Month"
else
    Dest="$DEST/$Day"
fi
rm -rf "$Dest"
mkdir "$Dest"
for (( i=0 ; i<${#Src[*]} ; i++ )) ; do
    where=$(dirname "${Src[i]}")
    mkdir -p "$Dest/$where"
    if ! cp -Rp "$HOME/${Src[i]}" "$Dest/$where" ; then
        errum "Copy returned an error (${Src[i]})"
    fi
    log "Copied '${Src[i]}' to '$Dest/$where'"
done
log "Backup complete"
if tryrep 100 umount "$DEST" ; then
    sleep 5
    if [[ -d "$DEST" ]] ; then
        rmdir "$DEST"
    fi
else
    err "Problem unmounting $DEST"
fi
log "Unmounted '$DEST', exiting"
exit 0
The version of the script above backs up only your main login keychain plus a “personal” keychain, but you can alter the « Src » array to contain what you want to include. These can be either files or folders. Note that they shouldn't include symlinks or Finder aliases, because those aren't supported in the CloudSafe filesystem.
Next, use the « crontab -e » command to create an entry in your personal crontab like this:
30 2 * * * ~/Documents/usr/libexec/cloudSafeDaily.ksh
In the example, this will run the above script at 2:30 AM every day. Take a look at the documentation in crontab(1) and crontab(5) for more information about how you can set this up to run.
Basically, the script tries (heroically) to mount your Backup safe at the indicated time. It figures out the year, month, and day of the week using the date(1) command. If there is no long-term backup yet for the current year and month (for example, /Volumes/Daily/201203), it uses that as the destination; otherwise, it uses the day of the week (for example, /Volumes/Daily/1). Then it copies the indicated data into the destination (after first removing whatever was there before), creating all folders in the paths as needed. In the example, it will create /Volumes/Daily/1/Library/Keychains/login.keychain along with the Library and Keychains folders. This folder creation is necessary to prevent files of the same name in different folders from overwriting each other.
This will allow you always to go back 7 days, plus it will keep one backup per month as long as you let it run.
It does not check for space, because the WebDAV filesystem doesn't support that feature correctly. So, it will keep going until you get an error, which shouldn't be a problem if you use this only for smallish files. If the script works normally, there will be a few lines of information written to the system log; if there are errors, a descriptive log entry will be made to help you try to pinpoint the problem.
Why did I make the login and personal keychains the default items to backup?
There is a bunch of critical information in the login keychain; plus, you can store text in there as encrypted secure notes. You can use this for all of your password information and various other important, secret information.
Note that secure notes do not unlock automatically by default, but some passwords do. Also note that the password for the login keychain is normally the same as your login password, and some feel that this is a security problem. If you think so, my advice is to create a second keychain file, which I call « personal.keychain », for example. Put things that are unlikely to be needed by programs in it, such as your secure notes and certain passwords and certificates, and give it its own, different password. I added it to the nightly backup on a line before « Library/Keychains/login.keychain » that says « Library/Keychains/personal.keychain \ », so both will be backed up. Note the backslash at the end of the non-final line: this is critical, since it continues the « Src » array onto the next line. Another option would be to remove the final « /login.keychain » from the existing line; this causes the entire Keychains folder to be backed up, no matter how many keychains you have in there (I didn't make that the default because sometimes a lot of useless files accumulate in the Keychains folder).
UPDATE: It turns out that in order for the crontab process to get access to the information in the keychain, it must be added to the System keychain, and access must not be restricted. This doesn't seem acceptable to me.
2009-07-07
/etc/profile
Classic sh shell.
The Bourne shell as described in the BSD 4.4 User's Reference Manual distinguishes between "interactive shells" (stdin is a terminal, or the -i flag was used), "login shells" (the 0th argument begins with '-', e.g., "-sh"), and other invocations. Login shells evaluate /etc/profile and .profile if they exist; non-login shells skip this step. Then, for every shell invocation, if the environment variable ENV is set, its contents are interpreted as a path that is then evaluated. Note that for non-login shells, ENV must already be in the environment; for login shells, it may be set in one of the profiles. Interactive shells can be identified by using « case $- in *i* ) ... ;; esac ».
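As a concrete sketch, the usual $- test looks like this (safe to drop into .profile or an ENV file):

```shell
# classic-sh idiom: the "i" flag appears in $- only in interactive shells
case $- in
    *i* ) interactive=yes ;;
    *   ) interactive=no  ;;
esac
echo "interactive=$interactive"
```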
Bash shell.
This is the default OS/X shell and the most widely used descendant of the Bourne shell. It behaves differently when it is invoked as "sh" versus "bash". In the former case, its startup is intended to emulate that of the classic sh (note that this mode is used in single-user mode and in many shell scripts intended to be widely compatible). For bash, an interactive shell is one whose stdin and stdout are connected to a terminal, or one where the -i flag was used. A login shell is one whose arg0 starts with - ("-bash", "-sh"), or where the --login (or -l) flag was used. When bash is invoked as "sh", it first evaluates /etc/profile and then ~/.profile, unless --noprofile is given. Note that --login can be used even with the "sh" invocation. At this point, "sh"-invoked bash enters "posix mode" (the --posix flag can also be used for this purpose). In posix mode, ENV is handled as with classic sh. When bash is invoked as "bash", it also evaluates /etc/profile, then the first existing file among ~/.bash_profile, ~/.bash_login, and ~/.profile (unless --noprofile was given). An interactive, non-login bash evaluates ~/.bashrc, unless --norc is given. A non-interactive bash evaluates $BASH_ENV if defined. Note that for interactive bash shells, $- will include i and PS1 will be set. In bash, the following variables will be set by the shell: BASH, BASH_VERSINFO (array), BASH_VERSION.
Korn shell.
This is an excellent extended version of sh which differs from bash in various ways. Ksh defines interactive shells the same way as bash, but interactivity has no effect on the startup files used. Login shells are defined as for sh: arg0 must begin with '-' (e.g., "-ksh"). Login shells evaluate /etc/profile if it exists, and then .profile (i.e., $HOME/.profile) if it exists. As for ENV, it is handled the same as in classic sh, except that if it is not set, $HOME/.kshrc will be evaluated if it exists. If the real and effective uid or gid do not match, /etc/suid_profile is used instead of ENV or $HOME/.profile (for interactive shells). Also, in interactive ksh $- contains i. In ksh, the variable KSH_VERSION will be set by the shell.
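A typical arrangement, then, is one line in ~/.profile so that every interactive ksh picks up a personal rc file via ENV (the file name here is just the conventional choice):

```shell
# point ENV at a per-user rc file; ksh evaluates it on each invocation
ENV="$HOME/.kshrc" ; export ENV
echo "$ENV"
```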
Single user mode
On standard UNIX-style systems, either /bin/csh or /bin/sh is used in single-user mode. If it's /bin/csh, we are already forked, but if it's /bin/sh, then there are consequences for /etc/profile, because it will generally be evaluated in single-user mode (under the current launchd in OS/X, it is invoked as /bin/bash, with arg0 set to "-sh"). Functionally similar invocations are probably the norm.
Some conclusions
Basically, /etc/profile will be evaluated for all logins. If ENV is set, then in some cases (but not all) it will be evaluated, and of course there are other shell-specific files that will also be evaluated in some cases, which we aren't concerned with here. There is no simple test to detect the currently running shell. One can use $0, but that doesn't distinguish true sh from one of the others masquerading as sh; however, that may not matter in many cases. Therefore, a simple case statement on $0 will work in most cases in /etc/profile. The situation in ENV is more complicated, because there could be an unknown amount of environment setting (e.g., for PS1) before ENV is run. In one case I know of, the login shell is ksh, it is detected correctly, and ksh-specific material is placed in PS1. If bash is then run interactively from the ksh login session, it *inherits* that PS1. The fix is to put material in ~/.bashrc to set up PS1 and to do whatever else differs between bash and ksh.
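A minimal sketch of such a case statement for /etc/profile follows. The branch bodies are placeholders; note that a login shell's arg0 usually carries a leading '-', which is stripped first:

```shell
# branch on the invoking shell's name; arg0 may look like "-bash", "-ksh", "-sh"
shellname=${0#-}            # strip the login-shell leading dash
shellname=${shellname##*/}  # strip any leading path
case $shellname in
    bash ) : ;;   # bash-specific setup (PS1 with bash escapes, etc.)
    ksh  ) : ;;   # ksh-specific setup
    sh   ) : ;;   # plain sh, or something masquerading as sh: keep it minimal
    *    ) : ;;
esac
echo "$shellname"
```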
2009-06-22
Scripting single-user mode
The best alternative I've come up with is to use actual single-user mode. It is possible to get into single-user mode from a script via this sequence (executed as root): « nvram boot-args=-s » ; « reboot ». At some point after single-user mode is entered, the command « nvram boot-args= » must be run in order to re-enable multi-user mode.
There is a script that is executed by the shell, and that can be hooked for the purpose of scripting maintenance in single-user mode: /etc/profile, the shared, system-wide start-up file for all shells in the sh family. However, since this location can (and should) be used to customize the shell environment at the system level for all users, it should be changed as "invisibly" as possible.
I prefer to deal with these issues as follows: I put one line at the top of /etc/profile that contains some fast heuristics and slower deterministic tests for single-user mode, which, if passed, result in a call to jidaemon (the script I want to run in single-user mode). The presence of this line at the top of /etc/profile is required; it can be checked by comparing « [[ "$THELINE" == `head -1 < /etc/profile` ]] ». The heuristics should all be based on the shell's internal environment, and should be as fast as possible, because /etc/profile is evaluated every time a login shell starts up. The heuristics are UID=0 and HOME=""; if those are true, the deterministic tests are « `sysctl -n kern.singleuser` » = 1 and -x /var/root/jidaemon. If those are also true, run /var/root/jidaemon. Within jidaemon, all those tests are repeated, and some additional tests are run: nvram boot-args matching *-s*, a read-only root, -f /tmp/just.imagine, and so on. Also, if -s is set in nvram boot-args, jidaemon must clear it while preserving any other flags. If any of these tests fail, jidaemon returns to its caller, and the only result (beyond clearing the boot-args -s flag) is a slight delay; the shell will continue and an interactive single-user session will begin. If jidaemon runs normally, it will restart the system when complete.
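Expanded for readability (the real version is a single line), the guard might look like the following. The jidaemon path, the empty-HOME heuristic, and the sysctl name are as described above; « id -u » is used here in place of the shell's $UID so the sketch runs under plain sh. Treat this as a sketch, not the exact production line:

```shell
# heuristics first (cheap), then deterministic tests; fire jidaemon only if
# every test passes. In a normal login session none of this should trigger.
fired=no
if [ "$(id -u)" = 0 ] && [ -z "$HOME" ] \
   && [ "$(sysctl -n kern.singleuser 2>/dev/null)" = 1 ] \
   && [ -x /var/root/jidaemon ]
then
    /var/root/jidaemon
    fired=yes
fi
echo "fired=$fired"
```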