CMake is a great tool for makefile generation, far better than the old arcane configure scripts, but its great out-of-source build support can lead to a common annoyance: constantly jumping back and forth between build and source directories, or juggling multiple build directories for a single source checkout. In my case, I frequently find myself forgetting exactly where the correct source tree is when I’m working in a build tree.

So, here’s a little fish function called `prompt_src` that you can add to your prompt so it always shows you the source directory for the build tree you’re in, along with the current git version you’re working from. The image above shows my OpenCV/build directory (the response is in yellow, indicating it’s not a Git source tree), and then my main application directory showing in red because it’s been modified, even though it’s on the develop branch.

    # src
    function prompt_src --description "Find details on this build's SOURCE dir"
        set -l cdir (pwd)
        while [ $cdir != "/" ]
            if [ -e $cdir/CMakeCache.txt ]
                set -l SourceDir (cat $cdir/CMakeCache.txt | grep SOURCE_DIR | head -n 1 | cut -d '=' -f 2)
                if [ -d $SourceDir/.git ]
                    set -l gitinfo (git --git-dir=$SourceDir/.git --work-tree=$SourceDir status -sb --untracked-files=no)
                    set -l branch (echo $gitinfo[1] | cut -b 4-)
                    set -l branch_color (set_color red)\[M\]
                    if test (count $gitinfo) -eq 1
                        set branch_color (set_color green)
                    end
                    echo \* Builds (set_color green)$SourceDir $branch_color \($branch\) (set_color normal)
                else
                    echo \* Builds (set_color yellow)$SourceDir (set_color normal)
                end
                return
            end
            set cdir (dirname $cdir)
        end
    end
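
The function doesn’t run by itself; you call it from your prompt. Here’s a minimal sketch of wiring it into `fish_prompt` in your `config.fish` (the `prompt_pwd` line is just a placeholder for whatever your usual prompt does):

```fish
function fish_prompt
    # Show the CMake source dir (and git state) for the current build tree
    prompt_src
    # ... the rest of your usual prompt, e.g.:
    echo -n (prompt_pwd) '> '
end
```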
If you spend much time in a terminal, be it on Mac or Linux, one thing you wind up doing often is creating and decompressing tarballs. Be they tgz, tbz, or just plain tar files, they’re the archive format of choice for folks working in *nix environments due to their universal support. Most commonly, the only way to get any real status information is to turn on "verbose" mode, which outputs each filename as it goes. That’s not terribly useful for large archives.

If you install the 'pv' tool, you can partner it with some command-line-fu to get nice progress bars. But why deal with that when you can write a fish function to do it for you!

Here's a script that decompresses a variety of formats:

    function expand -d 'Decompress a file' -a filename
        set -l extension (echo $filename | awk -F . '{print $NF}')
        switch $extension
            case tar
                echo "Un-tar-ing $filename..."
                pv $filename | tar xf -
            case tgz
                echo "Un-tar/gz-ing $filename..."
                pv $filename | tar zxf -
            case tbz
                echo "Un-tar/bz-ing $filename..."
                pv $filename | tar jxf -
            case gz
                echo "Un-gz-ing $filename..."
                pv $filename | gunzip -
            case bz
                echo "Un-bz-ing $filename..."
                pv $filename | bunzip2 -
            case zip
                echo "Un-zipping $filename..."
                unzip $filename
            case '*'
                echo "I don't know what to do with $extension files."
        end
    end
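
A note on how the dispatch works: the `awk` one-liner keeps only the last dot-separated field of the filename, so a `backup.tar.gz` is treated as `gz` (and gets only the one decompression pass). The same split, shown in plain POSIX shell:

```shell
# awk -F . splits the name on dots; $NF is the last field
echo "backup.tar.gz" | awk -F . '{print $NF}'
# prints: gz
```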

And here's a matching script for creating tarballs:

    function tarball -d "Create a tarball of collected files" -a filename
        if [ -e $filename ]
            echo "Sorry, $filename already exists."
            return 1
        end
        echo "Creating a tarball of $filename"
        set -l args $argv[2..-1]
        set -l size (du -ck $args | tail -n 1 | cut -f 1)
        set -l extension (echo $filename | awk -F . '{print $NF}')

        switch $extension
            case tgz
                tar cf - $args | pv -p -s {$size}k | gzip -c > $filename
            case tbz
                tar cf - $args | pv -p -s {$size}k | bzip2 -c > $filename
            case '*'
                echo "I don't know how to make a '$extension' file."
                return 1
        end

        set -l shrunk (du -sk $filename | cut -f 1)
        set -l ratio (math "$shrunk * 100.0 / $size")
        echo Reduced {$size}k to {$shrunk}k \({$ratio}%\)
    end
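
The size report at the end is plain percentage arithmetic. The same calculation in POSIX shell, with made-up numbers and fish’s `math` swapped for `awk`:

```shell
size=1024    # hypothetical uncompressed size in kilobytes
shrunk=256   # hypothetical compressed size in kilobytes
ratio=$(awk "BEGIN {printf \"%.1f\", $shrunk * 100.0 / $size}")
echo "Reduced ${size}k to ${shrunk}k (${ratio}%)"
# prints: Reduced 1024k to 256k (25.0%)
```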


I really love Xeno, but one of my biggest gripes is that even if I configure it to use Sublime Text as my editor, it opens the entire directory and not the project file. For most people that’s maybe not an issue, but for me it means a loss of indentation settings, spacing styles, clang options, and lots more.

So I wrote this script called "":

    #!/bin/bash
    found=0
    echo "Looking in $1"
    for s in `find $1 -maxdepth 1 -name \*.sublime-project`; do
        echo "Opening a sublime project: $s"
        subl -p $s
        found=1
    done

    if [ $found = 0 ]; then
        echo "No project found, opening $1"
        subl $1
    fi
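
The `-maxdepth 1` flag is what keeps `find` from wandering into subdirectories and opening project files from nested checkouts. A throwaway demonstration:

```shell
# Build a scratch directory with a top-level and a nested project file
dir=$(mktemp -d)
touch "$dir/demo.sublime-project"
mkdir "$dir/nested"
touch "$dir/nested/deep.sublime-project"

# Only the top-level project file is reported
find "$dir" -maxdepth 1 -name '*.sublime-project'
rm -rf "$dir"
```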

And then executed this from the command line:

xeno config core.editor ~/bin/

And tada! Now when I begin or resume an editing session with xeno, it defaults to opening any Sublime Project files it finds first! If none are found, then it just does the usual.

On a console, install 'pv' (available from homebrew and apt-get) and then use the following to get a nice progress bar with percentage progress and ETA when decompressing:

pv file.tgz | tar xzf - -C target_directory

And use this when compressing for the same:

SIZE=`du -sk folder-with-big-files | cut -f 1`
tar cvf - folder-with-big-files | pv -p -s ${SIZE}k | bzip2 -c > big-files.tar.bz2

Works with gzip instead of bzip too!
Spending lots of time in git lately, I thought I’d log my git environment here, since I keep having to replicate it on various machines.

git config --global rerere.enabled true

git config --global alias.lg "log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit"
git config --global push.default current
git config --global user.name "Randall Hand"
git config --global user.email ""
git config --global color.ui true
git config --global core.editor vim
git config --global core.autocrlf input
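
If you want to sanity-check a setting before it lands in your global config, `git config` can target a scratch file with `-f` instead of `--global`; a small sketch:

```shell
# Write the alias to a throwaway config file, then read it back
cfg=$(mktemp)
git config -f "$cfg" alias.lg "log --graph --abbrev-commit"
git config -f "$cfg" alias.lg
# prints: log --graph --abbrev-commit
rm -f "$cfg"
```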

And on a mac, I have a few more:

git config --global credential.helper osxkeychain
git config --global core.editor 'subl -w'
git config --global mergetool.sublime.cmd 'subl -w $MERGED'
git config --global mergetool.sublime.trustExitCode false
git config --global merge.tool sublime

Anyone else have any neat things?
Addition Dec-09:

git config --global branch.autosetuprebase always

And for any existing branches that you want to "convert" over to always rebase, execute this in bash:

for branch in $(git for-each-ref --format='%(refname)' -- refs/heads/); do git config branch."${branch#refs/heads/}".rebase true; done
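
The `${branch#refs/heads/}` expansion in that loop strips the matching prefix from the full ref name, leaving just the short branch name that the config key needs:

```shell
branch='refs/heads/develop'
# "#pattern" removes the shortest matching prefix
echo "${branch#refs/heads/}"
# prints: develop
```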

Here’s another little snippet for anyone interested… I wanted my Terminal prompt to show all active sessions, as an easy reminder that I may have left one open, but the output was a little dull. Being a Viz guy, what could I do to make it better? _Add Color!_ So below, find a little fish function that you can add to your script (or just call manually) for nicely colored output, as shown above.

    # xeno_list
    function xeno_list --description 'Colored xeno list results'
        for s in (xeno-list)
            set -l xen_id (echo $s | cut -d ':' -f 1)
            set -l xen_desc (echo $s | cut -d ':' -f 2-)

            set -l xen_array (echo $s | tr ' ' \n)
            set -l statuscolor (set_color green)
            set -l desccolor (set_color normal)
            if test $xen_array[-1] = "unsynced"
                set statuscolor (set_color red)
                set desccolor (set_color yellow)
            end
            echo $statuscolor\[$xen_id\] $desccolor $xen_desc (set_color normal)
        end
    end

A month ago or so, a friend of mine turned me onto a new unix shell called [FishShell](https://fishshell.com). It shares some similarities with other shells, but offers lots of really nice features. It has a vastly improved autocompletion feature (including an amazing tool that parses all your installed man pages and generates an autocompletion database). It took me a while to work out some of the syntax (no more `export PATH=A`, but rather `set -x PATH A`). I’ve switched all my machines over to it (Mac and Linux) and I’m loving it so far.

Then, the other day I found out about a great tool called xeno. It’s a tool that combines git and ssh into a single stream that lets you edit remote files with local editors. I hooked it up with Sublime (a quick `set -xU EDITOR subl` and `set -xU GIT_EDITOR 'subl -w'`), and it’s a great way to edit code on remote systems without having to use screen and such. And if your connection drops, no worries! It’s stored in git, and when it comes back it’ll resync.

So I spent some time merging the two, and built the following autocompletions for xeno; they support the major operations and fill in open sessions. Hope someone out there finds it useful!

    # xeno

    function __fish_xeno_available_sessions
        xeno-list | cut -d ':' -f 1
    end

    function __fish_xeno_needs_command
        set cmd (commandline -opc)
        if [ (count $cmd) -eq 1 -a $cmd[1] = 'xeno' ]
            return 0
        end
        return 1
    end

    function __fish_xeno_using_command
        set cmd (commandline -opc)
        if [ (count $cmd) -gt 1 ]
            if [ $argv[1] = $cmd[2] ]
                return 0
            end
        end
        return 1
    end
complete -f -c xeno -n '__fish_xeno_needs_command' -a 'list stop resume sync ssh edit'

complete -f -c xeno -n '__fish_xeno_needs_command' -a list --description 'List open sessions'
complete -f -c xeno -n '__fish_xeno_using_command list' -a '(__fish_xeno_available_sessions)'

complete -f -c xeno -n '__fish_xeno_needs_command' -a stop --description 'Shutdown a session'
complete -f -c xeno -n '__fish_xeno_using_command stop' -a '(__fish_xeno_available_sessions)'

complete -f -c xeno -n '__fish_xeno_needs_command' -a resume --description 'Resume a previously used session'
complete -f -c xeno -n '__fish_xeno_using_command resume' -a '(__fish_xeno_available_sessions)'

complete -f -c xeno -n '__fish_xeno_needs_command' -a sync --description 'Force a sync of data'
complete -f -c xeno -n '__fish_xeno_using_command sync' -a '(__fish_xeno_available_sessions)'

complete -f -c xeno -n '__fish_xeno_needs_command' -a ssh --description 'SSH to a host to prepare for editing'
complete -f -c xeno -n '__fish_xeno_needs_command' -a edit --description 'Edit a file'

For the last year I’ve basically given up on the OSX Calendar App and been using the Google Calendar website in a Pinned Tab in Chrome. Although I still keep OSX Calendar synced and up to date, it became a real problem that it didn’t support Google Hangouts invitations. As such, I found myself having to manually open Google Calendar before each meeting, hunting down the meeting and then joining the hangout. I basically had given up on ever finding a solution, but today Google finally won out.

On StackOverflow I found another individual with this problem, and someone had posted a clever combination of Automator, shell scripts, and AppleScript that makes it possible. It’s not perfect, but now I can drag events from OSX Calendar to a little icon in my dock that reprocesses them and adds the hangout URL back in under the URL field, making it a simple click-and-launch to get into hangouts.

For simplicity, the procedure (with my one modification) is:

  1. Create an Automator document of type Application
  2. Add a 'Get Specified Finder Items' step
  3. Add a 'Run Shell Script' step, and change the input to be passed As Arguments, not To Stdin.
  4. Copy the following into the text box:

      read url <<< $(cat "$1" | sed "s/$(printf '\r')\$//" | awk -F':' '/X-GOOGLE-HANGOUT/ {first = $2":"$3; getline rest; print (first)(substr(rest,2)); exit 1}';)
      read uid <<< $(cat "$1" | sed "s/$(printf '\r')\$//" | awk -F':' '/UID/ {print $2; exit 1}';)
      echo "$url"
      echo "$uid"
  5. Add a step of type 'Run AppleScript'

  6. Copy the following into the box replacing "myCalendar" with the name of your calendar:

      on run {input, parameters}
           set myURL to input's item 1
           set myUID to input's item 2
           set myCal to "myCalendar"
           tell application "Calendar"
                tell calendar myCal
                     set theEvent to first event whose uid = myUID
                     set (url of theEvent) to myURL
                end tell
           end tell
           return input
      end run
  7. Save the Application and add to your dock
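
For the curious, the shell step (step 4) pulls its two values out of the dropped .ics file by splitting lines on colons with `awk`. The UID extraction, for instance, behaves like this (sample data; simplified here to use a clean `exit` instead of the original's `exit 1`):

```shell
# A minimal .ics-like fragment with a UID line
printf 'BEGIN:VEVENT\nUID:1234-abcd\nEND:VEVENT\n' |
    awk -F':' '/^UID/ {print $2; exit}'
# prints: 1234-abcd
```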

I've been following the tool "ThinkUp" off and on for a while now. Gina Trapani, founder of Lifehacker long ago, created it as a hobby project, and it was described with lots of fuzzy phrases like "social media insights engine". I was never very sure what it was, but every now and then I would hear neat things about it.

I noticed last week that apparently they've decided to create a commercial entity now around it and launch a whole website where you could just access it, instead of the current model where you have to download and install it on your own LAMP stack after some configuration.

So I figured if they saw it worth that, then maybe I should experiment with it. So I downloaded it (version 2.0-beta8, available on GitHub) and set it up on a system of mine. After a failed start or two I got it set up and was rather unimpressed. I kept tinkering with it and just couldn't see why I should care. But I left it running...

In my head I kept thinking about it and figured that since it's already pulling in all my FourSquare, Twitter, and Facebook history, it could create a nice Timeline view. Similar to TimeHop, but rather than showing me today in the past, just showing me today. Then I could take the results in a nice interleaved temporal format and toss them in Evernote for archive. ThinkUp has no internal capability for this, but has a decent plugin architecture and rather simple MySQL format that should make building this easy.

Then, I got to thinking about what else I could add. I use FitBit, so I wondered if I could get my FitBit steps & sleep stats pulled in and merged into the timeline as well. And hey, how about MyFitnessPal to show what/when I ate. And then RunKeeper blended in as well to see distance/speed. Oh, and I could access EverNote to pull in when, what, and where notes were created. And if I could get access to my PlaceMe stream I could merge that in too.

In short, I was fantasizing about building a single tool that could build a daily "Day in my Life", showing me everything I did and everywhere I went on a minute-to-minute basis. It's kinda scary to think that I leave that much "digital dust" around the internet, but it's reality. I've already begun on the FitBit integration and have a basically functioning Crawler and graph generator that shows my daily steps down at the 5-minute increment. It needs a bit of integration work to clean it up, and then a smarty template and such to get it viewable inside ThinkUp in a reasonable way, but I intend to release it on my GitHub fork once it's ready.

Oh, and while I was tinkering with all this and doing other stuff, ThinkUp kept churning away with its hourly updates. And when I logged in the other day, I finally started to catch a glimpse of what it really does. It took a few days to gather enough data, but now it shows me some interesting statistics, like:
  • Most of my Tweets are questions
  • Most of my status updates are personal (containing "i", "me", "my", etc)
  • Some neat graphs about topic vs reply/like count
  • Interesting graphs about which of my friends most commonly respond
So I'm going to let it keep going. I figure it can't hurt to get some information like this while I integrate the rest of the functions I want.

Also, I might need to find a better place to host it for reliability. Right now it's just running on a system internal to my own network, so to get to it I have to establish an SSH tunnel and such. It works, and it's safe, so that's a plus, but it's then at the (somewhat flakey) mercy of my own network and hardware. I saw a post claiming you can theoretically now run ThinkUp on Google App Engine's new PHP support, but I can't find any information on how it actually performs or whether anyone's ever tried it. GAE has a 60-second limit on scripts which, in my experience, would cripple crawling, so I'm curious to know if anyone's tried it.

However, I am a bit concerned about the future of ThinkUp. The latest version available is 2.0-beta8, with no commits on the "main" branch in 2 months. I see lots of activity on forks, including a few important bugfixes that are critical to get it running with recent changes in Facebook, but none of them have been merged back to master. Surely if they're preparing to roll out a company they've fixed them internally. I fear that they may internally fork the project into an "Awesome for-pay version" and a "Less functional but open-source version that's always a bit out of date".

I guess only time will tell.......

Some of you may see that pic above and think it's from the weekend's Tropical Storm Karen, but you'd be wrong. That's actually just from a heavy rain about two weeks ago (which caused lots of problems I'm still dealing with).

We were all prepared for it to happen again this weekend as Tropical Storm Karen came through, but it (thankfully) wound up a letdown. It never made landfall, and we only got about an hour of rain from it in total.

All in all, 2013 has been a beautifully quiet Hurricane Season. Of course, that just means next year has the potential to be twice as bad.