Using SSH Keys on a USB Drive on Mac OS X

I keep my SSH private key on a USB thumb drive.

The idea is that I don't want my private key to be on the hard disk of any of the computers that I use.  I use several, and since I'm not watching all of them constantly, I don't want to leave my key lying around for automated attackers to pick up.

I load the key directly from the USB drive into my SSH agent, which then mlock()s it so it doesn't get put into swap.

This works just fine on Windows (with PuTTY) and Linux.  Unfortunately, Mac OS X has a nasty habit of mounting FAT volumes with free-for-all permissions, so when I try to load the key:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0777 for '/Volumes/GRAVITON/....id_rsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.

I thought that this was an intractable problem.  The only solution I'd found previously was to make a copy of the key, make a sparse disk image, and manually mount the sparse disk image.  However, this workaround has two problems:
  1. It's inconvenient.  I have to manually locate the disk image every time, double click it, etc.
  2. It's insecure.  If I ever allow other users to log in to any of my OS X machines, they can read the version of the key I'm not using on the FAT filesystem, even if only I can read the one on the HFS+ disk image.
Today, almost by accident, I discovered the real answer.

The daemon that mounts disks on OS X is called "diskarbitrationd".  I discovered this by running across some OpenDarwin documentation which explains that you can configure this daemon by putting a line into /etc/fstab.

First you need a way to identify the device in question.  None of the suggested mechanisms for determining the device UUID worked for me, so I used the device label instead.  This is probably desirable anyway, since at least you can tell when the label changes; if you move your key to a similar device, the UUID is different but you can't tell.

You can set the device label by mounting your USB drive, doing "Get Info" on it, editing the name in the "Name & Extension" section, and then hitting enter.  You should use an all-caps name, since when you re-mount the drive it will be all-caps again anyway.

You also need to know your user-ID.  The command 'id -u' will return it.

Then, you need to add a single line to /etc/fstab.  My drive's label is "GRAVITON", and my user-ID is 501, so it looks like this:

LABEL=GRAVITON none msdos -u=501,-m=700
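If you'd like to double-check the line before committing it, here's a small sketch that builds it from your own user-ID.  (The "GRAVITON" label is mine; substitute whatever you named your drive, and note that the sudo step at the end is commented out so you can review the output first.)

```shell
# Construct the diskarbitrationd fstab line from the current user-ID.
LABEL="GRAVITON"
MYUID=$(id -u)
LINE="LABEL=${LABEL} none msdos -u=${MYUID},-m=700"
echo "${LINE}"
# Once it looks right, append it for real:
#   echo "${LINE}" | sudo tee -a /etc/fstab
```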

Now, all you have to do is eject your drive and plug it in again.  Voila!

$ ssh-add /Volumes/GRAVITON/....keychain.id_rsa
Identity added: /Volumes/GRAVITON/....keychain.id_rsa (/Volumes/GRAVITON/...keychain.id_rsa)

Now you can securely carry your SSH key with you to macs, without breaking ssh-agent's intended protection.

The Joel Un-test

Joel Spolsky seems to like controversy, and I can see why.  Being a contrarian ideologue is pretty sweet.

Some people have been suggesting that the Joel Test should mention "100% unit test coverage" as well.  Personally, I think that's a great idea.  The industry is evolving, and automated testing is getting into the suite of tools that every competent programmer should be familiar with.

Joel disagrees, since 100% coverage is "a bit too doctrinaire about something you may not need".

For what it's worth, I don't completely disagree with Joel.  Some of the software that I work on doesn't have 100% test coverage, and that's okay.  I wrote it before I learned about unit testing.  I'm not freaking out and spending all of my time just writing tests for old code which apparently works.

However, we do have policies in place to add test coverage whenever we change anything.  Those policies stipulate that 100% coverage is a requirement for any new or changed code, so I consider myself a fan of 100% coverage and I generally think it's a good idea.  I do think it belongs on the Joel Test, or at least something like it.

I feel like my opinions are representative of a pretty substantial number of "agile" practitioners out there, so I'd just like to respond to a few points:

Joel mentions "SOLID principles", as if they're somehow equivalent to unit testing.  As if the sentiment that leads one to consider 100% test coverage a great idea leads one into a slavish architectural death-spiral, where any amount of "principles" are dogma if they have a sticker that says "agile" stuck to them.

Let me be clear.  I think that SOLID, at least as Joel's defined it, is pointlessly restrictive.  (I've never heard about it before.)  As a guy who spends a lot of time implementing complex state machines to parse protocols, I find "a class should have only one reason to change" a gallingly naive fantasy.  Most of the things that Joel says about SOLID are true, especially if you're using a programming language that forces you to declare types all over the place for everything.  (In Python, you get the "open" part of "OCP", and the "clients aren't forced" part of "ISP" for free.)  It does sound, in many ways, like the opposite of "agile".

So, since SOLID and unit testing are completely unrelated, I think we can abandon that part of the argument.  I can't think of anyone I know who likes unit testing and would demand slavish adherence to those principles.  I agree that it sounds like it came from "somebody that has not written a lot of code, frankly".

On the other hand, Joel's opinion about unit tests sounds like it comes from someone who has not written a lot of tests, frankly.

He goes on and on about how the real measure of quality is whether your code is providing value to customers, and sure you can use unit tests if that's working for you, but hey, your code probably works anyway.

It's a pretty weaselly argument, and I think he knew it, because he kept saying how he was going to get flamed.  Well, Mr. Spolsky, here at least that prediction has come true ;-).

It's weaselly because any rule on the Joel Test could be subjected to this sort of false equivalence.  For example, let's apply one of his arguments against "100%" unit testing to something that is already on the Joel Test: version control.

But the real problem with version control as I've discovered is that the type of changes that you tend to make as code evolves tend to sometimes cause conflicts. Sometimes you will make a change to your code that causes a conflict with someone else's changes. Intentionally. Because you've changed the design of something... you've moved a menu, and now any other developer's changes that relied on that menu being there... the menu is now elsewhere. And so all those files now conflict. And you have to be able to go in and resolve all those conflicts to reflect the new reality of the code.

This sounds really silly to anyone who has really used version control for any length of time.  Sure, sometimes you can get conflicts.  The whole point of a version control system is that you have tools to resolve those conflicts, to record your changes, and so on.

The same applies to unit tests.  You get failures, but you have tools to deal with the failures.  Sure, sometimes you get test failures that you knew about in advance.  Great!  Now, instead of having a vague intuition about what code you've broken intentionally, you actually have some empirical evidence that you've only broken a certain portion of your test suite.  And sure, now you have to delete some old tests and write some new tests.  But, uh... aren't you deleting your old code, and writing some new code?  If you're so concerned about throwing away tests, why aren't you concerned about throwing away the code that the tests are testing?

The reason you don't want to shoot for 90% test coverage is the same reason you don't want to shoot for putting 90% of your code into version control, or automating 90% of your build process into one step, or 90% of (etc.): you don't know where the bugs are going to crop up in your code.  After all, if we knew where the bugs were, why would we write any tests at all?  We'd just go to where the bugs are and get rid of them!

If you test 90% of your code, inevitably, the bugs will be in the 10% that you didn't test.  If you automate 90% of your build, inevitably the remaining non-automated 10% will cause the most problems.  Let's say getting the optimization options right on one particular C file is really hard.  Wouldn't it be easier to just copy the .o file over from Bob's machine every time you need to link the whole system, rather than encoding those options in some kind of big fancy build process that you'd just have to maintain, and maybe change later?

Joel goes on to make the argument that, if he were writing some software that "really needed" to be bulletproof, he'd write lots of integration tests that exercised the entire system at once to prove that it produced valid output.  That is a valid testing strategy, but it sort of misses the point of "unit" tests.

The point of unit tests — although I'll have to write more on this later, since it's a large and subtle topic — is to verify that your components work as expected before you integrate them.  This is because bugs are cheaper to fix the sooner you find them: the same argument Joel makes for writing specs.  And in fact if you read Mr. Spolsky's argument for writing specs, it can very easily be converted into an argument for unit testing:

Why won't people write unit tests? People like Joel Spolsky claim that it's because they're saving time by skipping the test-writing phase. They act as if test-writing was a luxury reserved for NASA space shuttle engineers, or people who work for giant, established insurance companies. Balderdash. ... They write bad code and produce shoddy software, and they threaten their projects by taking giant risks which are completely uncalled for.

You think your simple little function that just splits a URL into four parts is super simple and doesn't need tests because it's never going to have bugs that mysteriously interact with other parts of the system, causing you a week of debugging headaches?  WRONG.  Do you think it was a coincidence that I could find a link to the exact code that Joel mentions?  No, it's not, because any component common enough to make someone think that it's so simple that it couldn't possibly have bugs in it, is also common enough that there are a zillion implementations of it with a zillion bugs to match.
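To make that concrete, here is a hypothetical "simple little function" of exactly that shape (my own invention, not the code Joel linked), along with the kind of unit test that catches the bug lurking in it:

```python
def split_url(url):
    # A hypothetical "simple little function" that splits a URL into
    # (scheme, host, path, query).  Surely too simple to need tests?
    scheme, rest = url.split("://", 1)
    host, _, path = rest.partition("/")
    path, _, query = ("/" + path).partition("?")
    return scheme, host, path, query

# The obvious case works fine:
assert split_url("http://example.com/a/b?x=1") == (
    "http", "example.com", "/a/b", "x=1")

# ...but a unit test for the no-path case exposes a real bug: the query
# string stays glued to the host, and 'query' comes back empty.
assert split_url("http://example.com?x=1") == (
    "http", "example.com?x=1", "/", "")
```

Ten minutes of poking at it by hand would probably never try "http://example.com?x=1"; a test that enumerates the edge cases does.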

Unlike specs, which just let you find bugs earlier, tests also make finding (and fixing) a bug later cheaper.

Watching a test-driven developer work can be pretty boring.  We write a test.  We watch it fail.  We make it pass.  We check it in.  Then we write another test.  After a while of watching this, a manager will get itchy and say, Jeez!  Why can't you just go faster!  Stop writing all these darn tests already!  Just write the code!  We have a deadline!

The thing that the manager hasn't noticed here is that every ten cycles or so, something different happens.  We write a test.  It succeeds.  Wait, what?  Oops!  Looks like the system didn't behave like we expected!  Or, the test is failing in a weird way, before it gets to the point where we expect it to fail.  At this point, we have just taken five minutes to write a test which has saved us four hours of debugging time.  If you accept my estimate, that's 10 tests × 5 minutes, which is almost an hour, to save 4 hours.  Of course it's not always four hours; sometimes it's a minute, sometimes it's a week.
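One turn of that crank, sketched in code.  (The `parse_port` function and its tests are a made-up example of mine, not from any real project.)

```python
import unittest

def parse_port(text):
    """Parse a TCP port number from a string, rejecting nonsense."""
    port = int(text)
    if not 0 < port <= 65535:
        raise ValueError("port out of range: %d" % (port,))
    return port

class ParsePortTests(unittest.TestCase):
    def test_valid_port(self):
        # The routine step: write the test, watch it fail, make it pass.
        self.assertEqual(parse_port("8080"), 8080)

    def test_out_of_range(self):
        # The interesting step: before the range check existed, this
        # five-minute test failed, catching the bug long before it
        # could cost an afternoon of debugging.
        self.assertRaises(ValueError, parse_port, "70000")
```

Run with `python -m unittest` against the module; the point isn't this particular function, it's that the surprising failure shows up seconds after the mistake is made, not weeks later.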

If you're not paying attention, this was just a little blip.  The test failed twice, rather than once.  So what?  It's not like you wouldn't have caught that error eventually anyway!

Of course, nobody's perfect, so sometimes we make a mistake anyway and it slips through to production, and we need to diagnose and fix it later.  The big difference is that, if we have 100% test coverage, we already have a very good idea of where the bug isn't.  And, when we start to track it down, we have a huge library of test utilities that we can use to produce different system configurations.  A test harness gives us a way to iterate extremely rapidly to create a test that fails, rather than spinning up the whole giant system and entering a bunch of user input for every attempt at a fix.

This is the reason you don't just write giant integration tests first.  If you've got a test that just tells you "COMPILE FAILED", you don't know anything useful yet.  You don't know which component is broken, and you don't know why.  Individual unit tests with individual failures mean that you know what has gone wrong.  Individual tests also mean that you know each component works on its own before you insert it into your giant complex integrated compiler; if the whole thing dies, you at least have a consistent object that you know performs some operations correctly, which you can inspect and will almost always find in a sane internal state, even if it's not what the rest of the system expects.

Giant integration test suites can be hugely helpful on some projects, but unless you have a clear specification for the entire system, they can be unnecessary gold plating.  Unit tests are the bedrock of any automated testing strategy; you need to start there.

Unit tests seem like they take time, because you look at the time spent on a project and you see the time you spent writing the tests, and you think, "why don't I just take that part out?".  Then your schedule magically gets shorter on paper and everything looks rosy.

You can do that to anything.  Take your build automation out of your schedule!  Take your version-control server out of your budget!  Don't write a spec, just start coding!  The fact is, we pay for these tools in money and time because they all pay off very quickly.

For the most part, if you don't apply them consistently and completely, their benefits can quickly evaporate while leaving their costs in place.  Again, you can try this incomplete application with anything.  Automate the build, but only the compile, not the installer.  Use version control, but make uncommitted hand-crafted changes to your releases after exporting them.  Ignore your spec, and don't update it.

So put "100% test coverage" on your personal copy of the Joel Test.  You'll be glad you did.

One postscript I feel obliged to add here: like any tool, unit tests can be used well and used poorly.  Just like you can write bad, hard-to-maintain code, you can write bad, hard-to-maintain tests.  Doing it well and getting the maximum benefit for the minimum cost is a subtle art.  Of course, getting the most out of your version control system or written spec is also a balancing act, but unit tests are a bit trickier than most of these areas, and it requires skill to get good at them.  It's definitely worth acquiring that skill, but the learning is not free.  The one place that unit tests can take up more time than they save is when you need to learn some new subtlety of how to write them properly.  If your developers are even halfway decent, though, this learning period will be shorter than you think.  Training and pair-programming with advanced test driven developers can help accelerate the process, too.  So, I stand by what I said above, but there is no silver bullet.

You Got Your WindowMaker In My Peanut Butter

Electric Duncan mentioned Window Maker and Ubuntu yesterday, and it reminded me of my own callow youth.

Nowadays I'm a serious Compiz junkie, so I don't think I'll be switching back any time soon.  Personally, I wouldn't want to live without maximumize or the scale window title filter.  However, I can definitely see why one would want to: WindowMaker is lightning fast, as well as being very simple and streamlined.  When I do pair-programming that needs tools that won't run in Screen, I spin up a WindowMaker session in a VNC server.  Sharing my whole gigantic screen with all the whizzy effects is impractical over anything slower than a local 100 megabit connection.

One of the problems with switching to a different window manager for your main session these days, however, is that things unrelated to window management stop working.  Your keyboard settings no longer apply, your media no longer auto-mounts, GTK ignores your theme, your media keys stop working, and your panel disappears, along with ever-so-useful applets like Deskbar and the NetworkManager applet.

But, this need not be so.  GNOME will happily accommodate an alternate window manager.  All you need to do is make sure that WindowMaker and Nautilus don't fight over the desktop, and then tell GNOME to start WindowMaker.

Of course, your desktop won't be quite as lean as if you'd eschewed GNOME completely.  It's up to you to decide whether these features are worth a few extra megabytes of RAM.

First, run gconf-editor and turn off "/apps/nautilus/preferences/show_desktop".  This should make your desktop go blank.
http://www.twistedmatrix.com/users/glyph/images/content/blogowebs/gconf-editor-set-show-desktop.png
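If you'd rather not click through gconf-editor, the same setting can be flipped from a terminal (this assumes a GNOME 2 era desktop, where gconftool-2 is installed by default):

```shell
# Command-line equivalent of the gconf-editor step above: tell
# Nautilus to stop drawing the desktop so WindowMaker can have it.
gconftool-2 --type bool --set /apps/nautilus/preferences/show_desktop false
```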
Next, you need to go to "System → Preferences → Sessions", and hit "add" on the "Startup Programs" tab.  Add an entry for WindowMaker:
http://www.twistedmatrix.com/users/glyph/images/content/blogowebs/add-wmaker-as-startup-program.png
Now, all you need to do is log out!  You will, of course, want to tweak your panels a bit when you log back in, but that part's easy: right-click and season to taste.
http://www.twistedmatrix.com/users/glyph/images/content/blogowebs/party-like-its-1999.png

A Meandering Review of the Logitech Illuminated Keyboard

I haven't done a keyboard review in quite some time.  Partially this is because I've started getting only higher-quality keyboards, and so I've been getting them less frequently.  I can reliably destroy a cheap-o dome-switch keyboard in about 6 months, so now I only buy keyboards with mechanical or scissor switches.  (My rule of thumb is that if it doesn't tell me how many keystrokes the switches are rated for, I won't get it myself or put it on my wish list.  Typically the lowest number you'll get is "five million", which is a good deal higher than the two million that most dome switches can do.)

This Christmas, my grandmother kindly bought me a Logitech Illuminated Keyboard, which I've been using for the past few weeks.  I have to say I'm very happy with it.
http://www.logitech.com/repository/1170/jpg/9726.1.0.jpg

Tactile Response

First and foremost, of course, is the keyboard's feel.

I generally prefer aggressively clicky keyboards like the Das Keyboard or the venerable Unicomp EnduraPro (known in a previous life as the "Model M").  However, at home, these are not an option, as I have both limited vertical space underneath my monitor and a limited acoustic tolerance.  Some amount of "click" is a requirement, though, or the lack of feedback causes my hands to tense up and hurt.  Just a few days ago, Cyril Kowalski of techreport.com described my experience almost exactly, in his review of the Das Keyboard.  I feel this is really worth repeating:
So, because dome-switch keyboards don't let you hear or feel exactly how much force you need to depress a key, you might find yourself pushing too hard or too softly. That can mean either more fatigue or more typos. Some users try to alleviate those shortcomings with split ergonomic keyboards, which place your hands in a more natural position, but those don't really solve the feedback problem — although they can feel comfy enough to type on.

I don't have any statistics handy, but I can throw some anecdotal evidence at you. (Take that however you please.) I've been typing 2,000 words a day five days a week for around three years on a 1989 Model M, and my fingers, hands, or wrists never get tired. When I was using a Microsoft Natural Keyboard Pro and typing less each day, I suffered from finger pain and annoying wrist tingling on a regular basis. I actually type faster on the Model M, as well, even though my touch-typing technique hasn't changed.

While I've varied the keyboards I've used considerably more than Mr. Kowalski apparently has, my experience typing lots of hours on a mushy Microsoft Natural Keyboard, at Origin, was exactly the same.

So, does the Illuminated Keyboard stack up?  In a word, "yes".  But you all know that I wouldn't use one word where 500 will do.

With "illuminated" right in the name, one might think that this keyboard is a gimmicky one-trick pony.  I've definitely seen a few other keyboards where some marketing genius duct-taped a couple of 2¢ LEDs to the back of a crummy keyboard, spray-painted the word "GAMING" on the box, and marked it up by $50.  Even Logitech's own prior entry into the "illuminated" arena, the G15, suffered from this overfocus on bling.  Here, I'll have to amend my own review: while I was impressed at first that the G15 had reasonable tactile feedback, especially for a dome-switch keyboard, it degraded over time, as any dome-switch keyboard will.  The marketing copy talks about illumination and LCDs, but doesn't mention what type of key switch is used.

(While I'm trashing on the old model, I should also note that the "G15" that Logitech is selling today has been visibly upgraded in a number of ways, and may use a new key system as well.  Given that the new G19 costs $200, contains a USB 2.0 hub, a 320x240 color LCD, and a computer that runs Linux (I am not kidding), I am hoping that it doesn't ship with keys that will wear out after a few months.)

The Illuminated Keyboard, by contrast, dedicates half of its marketing copy to talking about the key switches.  Like the diNovo Edge (but at less than half the price) it uses the "PerfectStroke Key System".  Indeed, the keyboard's tactile response feels like an updated version of the Edge.

I also have an Edge, and I am quite happy with it too.  If anything, the keys on the Illuminated Keyboard are even better calibrated.  While scissor-switch keyboards are all fairly similar, I have managed to beat many of my own speed records with this keyboard, and a few brief experiments side-by-side with the Edge suggest that I can type as fast or slightly faster on the Illuminated Keyboard.  The keyboard I was most recently using on this computer was the Moshi Celesta (warning!  link contains obnoxiously huge animation, and plays music).  I can type at about the same raw speed on all the scissor keyboards I've tried (the Celesta, the IceKey, the Edge and this one).  However, I have a marginally, but consistently, lower average error rate with the "PerfectStroke"-based keyboards.

This tactile similarity gives me high hopes for the durability of the Illuminated Keyboard as well.  When I first got my Edge, I hammered on it as my primary keyboard for a good eight months.  This is more than enough time to kill lesser keyboards.  Then, we moved it to the media center, where Ying and I would still both use it daily.  As far as I know, it would have lasted another five years, but some part of the battery or the charger gave up the ghost and it would not recharge.  (No complaints there, though.  Logitech replaced the whole unit, free of charge, despite the fact that it was out of warranty.)

Illumination

So, it feels pretty good.  Now, on to the headline feature.  Is it illuminated?

Yes.  The illumination is fairly subtle even on its brightest setting.  It's white, not some neon fluorescent color.  It's not nearly as bright as many "gaming" keyboards.  However, it's also very even.  I'm not sure if they use the same trick that Déck does, backlighting every key individually, but there are no dim spots.

Actually, a better answer would be "only if you want it to be".  Regardless of whether you like backlighting — in fact, even if you find backlighting obnoxious — this is a pretty good keyboard.  It has a button which allows you to select a light level.  You can turn off the light as soon as your computer starts up, and leave it off.

Design

The form-factor and design of the keyboard are also satisfactory.  As you can see on Logitech's site, it's very thin and flat, and it has an integrated wrist-rest.  The texture of both the keys and the wrist-rest is slightly rubberized, which keeps my wrists comfortably in place and prevents my fingers from slipping onto adjacent keys when typing quickly.

The {caps,num,scroll}lock keys are vanishingly unobtrusive, but unlike the Edge's ill-considered "boop-BEEP" audio replacement for the LEDs, they are present and visible.

Of course, any keyboard review would be incomplete without a consideration of "special features".  Normally I find "multimedia keys" and unusual layout options a grating misfeature.  For example, on my Moshi Celesta, there is an "Eject" button immediately underneath "Page Down", which I would accidentally hit at least once a day.  To my surprise, the Illuminated Keyboard is the first one where I've really used the "multimedia" functions.  They are unobtrusive.  The only dedicated "special" keys are far to the right, where there are volume controls and the button used to adjust the keyboard's backlight.

Most of the multimedia keys are alternative meanings for F1-F12 and PrintScreen/Pause.  Much like on the Edge, an "FN" key replaces the right-windows key.  Holding FN while pressing a function key invokes its alternate meaning.  For example, there is a "previous track" icon above F10, so if I press FN-F10, my media player skips back a track.  Despite a similar setup on the Edge, I never really used the multimedia keys there, because it's awkward to move my right hand so I can hit "FN" with my thumb, then reach over with my left hand to hit the appropriate function.  On the Illuminated Keyboard, the FN key is considerably wider, and the functions that I actually want to use (Previous Track, Play/Pause Music, Next Track) are located on the right hand side of the keyboard, which allows me to easily hit them by moving only one hand.

Of course it didn't hurt that I discovered the "multimedia keys" plugin for my music player at about the same time.

The layout is a tiny bit nonstandard, but in a very useful way.  The seldom-used "insert" key has migrated north to a less prominent position on the "function" row.  In its place, the "del" key has expanded to take up two spots.  Again, I don't like layout tweaks, as they often do more harm than good, but this prevents a common and irritating accident, hitting the "insert" key when I intended to hit "delete".  (I don't know why this never happens with "Home" and "End" or "Page Up" and "Page Down", but it is a real problem.)  Aside from that, this is a bog-standard PC 105-key layout.

Annoyances

Obviously I'm pretty happy with this keyboard, but I always find the most useful part of any review the "why not" section.  So, what's wrong with this keyboard?  With this one it's a pretty short list, but it's not empty.
  1. Very occasionally, the space bar squeaks slightly.  I've had this problem on a number of different keyboards, since the wider spacebar necessarily needs a different switching mechanism, usually propped up by a small metal bar.  This isn't a huge bother.
  2. The plastic of the keyboard is bowed slightly, such that the rubber foot in the middle of the keyboard doesn't quite touch my desk when it's laid flat.  This means that the keyboard warps a little bit if you rest any weight on it.  This might even be intentional (some kind of ergonomic consideration?) but the slight warp seems like a flaw in otherwise quality construction.
  3. What I think are the "Instant Messaging", "Switch Window", and "Run" function buttons don't seem to register on Ubuntu.  I don't know if this is a problem with the keyboard, GNOME, Linux, or what, but I wish I didn't have to know.  (I was impressed to note that all the other keys seem to do something useful out of the box.)
  4. The Alt keys are a tiny bit too narrow for my taste.  Of course, being an Emacs user I have a strong bias towards having an overlarge meta key, so YMMV.  Most people who use the "control" key in the wrong position would probably disagree, as the slightly narrower Alt keys are that way because the "Ctrl" keys are nice and wide.  That said, looking at the keyboard I thought this would bother me, but in practice it hasn't.

Conclusions

The most obvious conclusion we can draw here is that I think and write about keyboards way too much.  Beyond that, the Illuminated Keyboard would get very high marks, if I did numeric grading here.  I think it will replace the MacAlly IceKey for my default keyboard recommendation.  It's got all the same properties (quiet, small, low-profile) which made that keyboard a good recommendation.  However, the construction is apparently higher-quality, and it has more function keys (which nonetheless stay out of your way if you don't use them).  The illumination is a nice touch: even us touch-typists can't necessarily remember where the "eject" button is in the dark.

Thanks, Grandma!

Commercial Break

As long as I'm doing all this blogging, there's a post I've been forgetting to do for months.  I'll keep it short and sweet:

At Divmod, we do consulting, including performance analysis, custom development, and open source maintenance.  If you have problems that involve Python, Twisted, or any Divmod open source project, you're unlikely to find better.

I usually handle inquiries, but at the moment, I'm working on some secret projects of my own.  If you're in the market for one of those things I mentioned, you should get in touch with JP Calderone.

(I am periodically amazed that people close to me don't know that we do consulting.  I need to remember to get out there and toot the horn every so often!)