The Joel Un-test

Joel Spolsky seems to like controversy, and I can see why.  Being a contrarian ideologue is pretty sweet.

Some people have been suggesting that the Joel Test should mention "100% unit test coverage" as well.  Personally, I think that's a great idea.  The industry is evolving, and automated testing is making its way into the suite of tools that every competent programmer should be familiar with.

Joel disagrees, since 100% coverage is "a bit too doctrinaire about something you may not need".

For what it's worth, I don't completely disagree with Joel.  Some of the software that I work on doesn't have 100% test coverage, and that's okay.  I wrote it before I learned about unit testing.  I'm not freaking out and spending all of my time just writing tests for old code which apparently works.

However, we do have policies in place to add test coverage whenever we change anything.  Those policies stipulate that 100% coverage is a requirement for any new or changed code, so I consider myself a fan of 100% coverage and I generally think it's a good idea.  I do think it belongs on the Joel Test, or at least something like it.

I feel like my opinions are representative of a pretty substantial number of "agile" practitioners out there, so I'd just like to respond to a few points:

Joel mentions "SOLID principles", as if they're somehow equivalent to unit testing.  As if the sentiment that leads one to consider 100% test coverage a great idea also leads one into a slavish architectural death-spiral, where any set of "principles" becomes dogma as long as it has a sticker that says "agile" stuck to it.

Let me be clear.  I think that SOLID, at least as Joel's defined it, is pointlessly restrictive.  (I'd never heard of it before.)  As a guy who spends a lot of time implementing complex state machines to parse protocols, I find "a class should have only one reason to change" a gallingly naive fantasy.  Most of the things that Joel says about SOLID are true, especially if you're using a programming language that forces you to declare types all over the place for everything.  (In Python, you get the "open" part of "OCP", and the "clients aren't forced" part of "ISP", for free.)  It does sound, in many ways, like the opposite of "agile".

So, since SOLID and unit testing are completely unrelated, I think we can abandon that part of the argument.  I can't think of anyone I know who likes unit testing and would also demand slavish adherence to those principles.  I agree that it sounds like it came from "somebody that has not written a lot of code, frankly".

On the other hand, Joel's opinion about unit tests sounds like it comes from someone who has not written a lot of tests, frankly.

He goes on and on about how the real measure of quality is whether your code is providing value to customers, and sure you can use unit tests if that's working for you, but hey, your code probably works anyway.

It's a pretty weaselly argument, and I think he knew it, because he kept saying how he was going to get flamed.  Well, Mr. Spolsky, here at least that prediction has come true ;-).

It's weaselly because any rule on the Joel Test could be subjected to this sort of false equivalence.  For example, let's take one of his arguments against "100%" unit testing and apply it to something that is already on the Joel Test, version control:

But the real problem with version control as I've discovered is that the type of changes that you tend to make as code evolves tend to sometimes cause conflicts. Sometimes you will make a change to your code that causes a conflict with someone else's changes. Intentionally. Because you've changed the design of something... you've moved a menu, and now any other developer's changes that relied on that menu being there... the menu is now elsewhere. And so all those files now conflict. And you have to be able to go in and resolve all those conflicts to reflect the new reality of the code.

This sounds really silly to anyone who has really used version control for any length of time.  Sure, sometimes you can get conflicts.  The whole point of a version control system is that you have tools to resolve those conflicts, to record your changes, and so on.

The same applies to unit tests.  You get failures, but you have tools to deal with the failures.  Sure, sometimes you get test failures that you knew about in advance.  Great!  Now, instead of having a vague intuition about what code you've broken intentionally, you actually have some empirical evidence that you've only broken a certain portion of your test suite.  And sure, now you have to delete some old tests and write some new tests.  But, uh... aren't you deleting your old code, and writing some new code?  If you're so concerned about throwing away tests, why aren't you concerned about throwing away the code that the tests are testing?

The reason you don't want to shoot for 90% test coverage is the same reason you don't want to shoot for putting 90% of your code into version control, or automating 90% of your build process into one step, and so on: you don't know where the bugs are going to crop up in your code.  After all, if we knew where the bugs were, why would we write any tests at all?  We'd just go to where the bugs are and get rid of them!

If you test 90% of your code, inevitably, the bugs will be in the 10% that you didn't test.  If you automate 90% of your build, inevitably the remaining non-automated 10% will cause the most problems.  Let's say getting the optimization options right on one particular C file is really hard.  Wouldn't it be easier to just copy the .o file over from Bob's machine every time you need to link the whole system, rather than encoding those options in some kind of big fancy build process that you'd just have to maintain, and maybe change, later?

Joel goes on to make the argument that, if he were writing some software that "really needed" to be bulletproof, he'd write lots of integration tests that exercised the entire system at once to prove that it produced valid output.  That is a valid testing strategy, but it sort of misses the point of "unit" tests.

The point of unit tests — although I'll have to write more on this later, since it's a large and subtle topic — is to verify that your components work as expected before you integrate them.  This is because bugs are cheaper to find and fix the earlier you catch them: the same argument Joel makes for writing specs.  And in fact if you read Mr. Spolsky's argument for writing specs, it can very easily be converted into an argument for unit testing:

Why won't people write unit tests? People like Joel Spolsky claim that it's because they're saving time by skipping the test-writing phase. They act as if test-writing was a luxury reserved for NASA space shuttle engineers, or people who work for giant, established insurance companies. Balderdash. ... They write bad code and produce shoddy software, and they threaten their projects by taking giant risks which are completely uncalled for.

You think your little function that just splits a URL into four parts is so simple that it doesn't need tests, because it's never going to have bugs that mysteriously interact with other parts of the system and cause you a week of debugging headaches?  WRONG.  Do you think it was a coincidence that I could find a link to the exact code that Joel mentions?  It wasn't: any component common enough to make someone think it's too simple to possibly have bugs in it is also common enough that there are a zillion implementations of it, with a zillion bugs to match.
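
To make that concrete, here is roughly what such a test might look like, using Python's unittest.  This is just an illustrative sketch; the split_url function below is a made-up stand-in for whatever "obviously correct" little helper you're tempted to leave untested, not the code Joel was talking about.

    import unittest

    def split_url(url):
        # Hypothetical "too simple to test" helper: scheme://host/path?query
        scheme, rest = url.split("://", 1)
        host, _, path_and_query = rest.partition("/")
        path, _, query = path_and_query.partition("?")
        return scheme, host, "/" + path, query

    class SplitURLTests(unittest.TestCase):
        def test_typical(self):
            self.assertEqual(
                split_url("http://example.com/index.html?x=1"),
                ("http", "example.com", "/index.html", "x=1"))

        def test_no_path(self):
            # The sort of edge case that "trivial" helpers routinely get wrong.
            self.assertEqual(
                split_url("http://example.com"),
                ("http", "example.com", "/", ""))

    if __name__ == "__main__":
        unittest.main()

Five minutes of typing, and the next person who "simplifies" that function finds out immediately whether they've broken it.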

Unlike specs, which just let you find bugs earlier, tests also make finding (and fixing) a bug later cheaper.

Watching a test-driven developer work can be pretty boring.  We write a test.  We watch it fail.  We make it pass.  We check it in.  Then we write another test.  After a while of watching this, a manager will get itchy and say, Jeez!  Why can't you just go faster!  Stop writing all these darn tests already!  Just write the code!  We have a deadline!

The thing that the manager hasn't noticed here is that every ten cycles or so, something different happens.  We write a test.  It succeeds.  Wait, what?  Oops!  Looks like the system didn't behave like we expected!  Or, the test is failing in a weird way, before it gets to the point where we expect it to fail.  At this point, we have just taken five minutes to write a test which has saved us four hours of debugging time.  If you accept my estimate, that's 10 tests × 5 minutes, which is almost an hour, to save 4 hours.  Of course it's not always four hours; sometimes it's a minute, sometimes it's a week.

If you're not paying attention, this was just a little blip.  The test failed twice, rather than once.  So what?  It's not like you wouldn't have caught that error eventually anyway!

Of course, nobody's perfect, so sometimes we make a mistake anyway and it slips through to production, and we need to diagnose and fix it later.  The big difference is that, if we have 100% test coverage, we already have a very good idea of where the bug isn't.  And, when we start to track it down, we have a huge library of test utilities that we can use to produce different system configurations.  A test harness gives us a way to iterate extremely rapidly to create a test that fails, rather than spinning up the whole giant system and entering a bunch of user input for every attempt at a fix.
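
To give a flavor of what those test utilities buy you, here is a deliberately tiny, entirely hypothetical sketch: one factory function that builds a configured system in memory, so a bug report can be turned into a fast failing test instead of a manual session with the whole application.  Every name in it (Store, make_store, the discount scenario) is invented for illustration; it is not code from any real project.

    import unittest

    # Hypothetical application code, just enough for the factory to configure.
    class Store(object):
        def __init__(self, discount=0):
            self.discount = discount
            self.orders = []

        def add_order(self, amount):
            self.orders.append(amount * (1 - self.discount))

        def total(self):
            return sum(self.orders)

    # The "test utility": one call produces a fully configured system.
    def make_store(discount=0, orders=()):
        store = Store(discount=discount)
        for amount in orders:
            store.add_order(amount)
        return store

    class DiscountRegressionTest(unittest.TestCase):
        def test_discounted_total(self):
            # Reproduce a hypothetical field report about wrong totals with a
            # 10% discount: milliseconds to run, no GUI, no user input.
            store = make_store(discount=0.10, orders=[100, 50])
            self.assertAlmostEqual(store.total(), 135.0)

    if __name__ == "__main__":
        unittest.main()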

This is the reason you don't just write giant integration tests first.  If all you've got is a test that tells you "COMPILE FAILED", you don't know anything useful yet: you don't know which component is broken, and you don't know why.  Individual unit tests with individual failures mean that you know what has gone wrong.  Individual tests also mean that you know each component works on its own before you insert it into your giant, complex, integrated compiler, so that when the whole thing dies you at least have objects which you know perform some operations correctly, which you can inspect, and which you will almost always find in a sane internal state, even if that state isn't what the rest of the system expects.

Giant integration test suites can be hugely helpful on some projects, but they can be unnecessary gold plating unless you have a clear specification for the entire system.  Unit tests are the bedrock of any automated testing strategy; you need to start there.

Unit tests seem like they take time, because you look at the time spent on a project and you see the time you spent writing the tests, and you think, "why don't I just take that part out?".  Then your schedule magically gets shorter on paper and everything looks rosy.

You can do that to anything.  Take your build automation out of your schedule!  Take your version-control server out of your budget!  Don't write a spec, just start coding!  The fact is, we pay for these tools in money and time because they all pay off very quickly.

For the most part, if you don't apply them consistently and completely, their benefits can quickly evaporate while leaving their costs in place.  Again, you can try this incomplete application with anything.  Automate the build, but only the compile, not the installer.  Use version control, but make uncommitted hand-crafted changes to your releases after exporting them.  Ignore your spec, and don't update it.

So put "100% test coverage" on your personal copy of the Joel Test.  You'll be glad you did.

One postscript I feel obliged to add here: like any tool, unit tests can be used well and used poorly.  Just like you can write bad, hard-to-maintain code, you can write bad, hard-to-maintain tests.  Doing it well and getting the maximum benefit for the minimum cost is a subtle art.  Of course, getting the most out of your version control system or written spec is also a balancing act, but unit tests are a bit trickier than most of these areas, and it requires skill to get good at them.  It's definitely worth acquiring that skill, but the learning is not free.  The one place that unit tests can take up more time than they save is when you need to learn some new subtlety of how to write them properly.  If your developers are even halfway decent, though, this learning period will be shorter than you think.  Training and pair-programming with advanced test driven developers can help accelerate the process, too.  So, I stand by what I said above, but there is no silver bullet.

You Got Your WindowMaker In My Peanut Butter

Electric Duncan mentioned Window Maker and Ubuntu yesterday, and it reminded me of my own callow youth.

Nowadays I'm a serious Compiz junkie, so I don't think I'll be switching back any time soon.  Personally, I wouldn't want to live without maximumize or the scale window title filter.  However, I can definitely see why one would want to: WindowMaker is lightning fast, as well as being very simple and streamlined.  When I do pair-programming that needs tools that won't run in Screen, I spin up a WindowMaker session in a VNC server.  Sharing my whole gigantic screen with all the whizzy effects is impractical over anything slower than a local 100 megabit connection.

One of the problems with switching to a different window manager for your main session these days, however, is that things unrelated to window management stop working.  Your keyboard settings no longer apply, your media no longer auto-mounts, GTK ignores your theme, your media keys stop working, and your panel disappears, along with ever-so-useful applets like Deskbar and the NetworkManager applet.

But, this need not be so.  GNOME will happily accommodate an alternate window manager.  All you need to do is make sure that WindowMaker and Nautilus don't fight over the desktop, and then tell GNOME to start WindowMaker.

Of course, your desktop won't be quite as lean as if you'd eschewed GNOME completely.  It's up to you to decide whether these features are worth a few extra megabytes of RAM.

First, run gconf-editor and turn off "/apps/nautilus/preferences/show_desktop".  This should make your desktop go blank.
http://www.twistedmatrix.com/users/glyph/images/content/blogowebs/gconf-editor-set-show-desktop.png
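If you would rather script this step than click through gconf-editor, the GNOME 2-era Python bindings should be able to flip the same key.  A rough sketch, assuming the python-gconf package is installed:

    # Rough sketch: turn off Nautilus's desktop drawing via GConf, the same
    # setting that gconf-editor exposes as show_desktop.
    import gconf

    client = gconf.client_get_default()
    client.set_bool('/apps/nautilus/preferences/show_desktop', False)
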
Next, you need to go to "System → Preferences → Sessions", and hit "add" on the "Startup Programs" tab.  Add an entry for WindowMaker:
http://www.twistedmatrix.com/users/glyph/images/content/blogowebs/add-wmaker-as-startup-program.png
Now, all you need to do is log out!  You will, of course, want to tweak your panels a bit when you log back in, but that part's easy: right-click and season to taste.
http://www.twistedmatrix.com/users/glyph/images/content/blogowebs/party-like-its-1999.png

A Meandering Review of the Logitech Illuminated Keyboard

I haven't done a keyboard review in quite some time.  Partially this is because I've started getting only higher-quality keyboards, and so I've been getting them less frequently.  I can reliably destroy a cheap-o dome-switch keyboard in about 6 months, so now I only buy keyboards with mechanical or scissor switches.  (My rule of thumb is that if it doesn't tell me how many keystrokes the switches are rated for, I won't get it myself or put it on my wish list.  Typically the lowest number you'll get is "five million", which is a good deal higher than the two million that most dome switches can do.)

This Christmas, my grandmother kindly bought me a Logitech Illuminated Keyboard, which I've been using for the past few weeks.  I have to say I'm very happy with it.
http://www.logitech.com/repository/1170/jpg/9726.1.0.jpg

Tactile Response

First and foremost, of course, is the keyboard's feel.

I generally prefer aggressively clicky keyboards like the Das Keyboard or the venerable Unicomp EnduraPro (known in a previous life as the "Model M").  However, at home, these are not an option, as I have both limited vertical space underneath my monitor and a limited acoustic tolerance.  Some amount of "click" is a requirement, though, or the lack of feedback causes my hands to tense up and hurt.  Just a few days ago, Cyril Kowalski of techreport.com described my experience almost exactly, in his review of the Das Keyboard.  I feel this is really worth repeating:
So, because dome-switch keyboards don't let you hear or feel exactly how much force you need to depress a key, you might find yourself pushing too hard or too softly. That can mean either more fatigue or more typos. Some users try to alleviate those shortcomings with split ergonomic keyboards, which place your hands in a more natural position, but those don't really solve the feedback problem — although they can feel comfy enough to type on.

I don't have any statistics handy, but I can throw some anecdotal evidence at you. (Take that however you please.) I've been typing 2,000 words a day five days a week for around three years on a 1989 Model M, and my fingers, hands, or wrists never get tired. When I was using a Microsoft Natural Keyboard Pro and typing less each day, I suffered from finger pain and annoying wrist tingling on a regular basis. I actually type faster on the Model M, as well, even though my touch-typing technique hasn't changed.

While I've varied the keyboards I've used considerably more than Mr. Kowalski apparently has, my experience typing lots of hours on a mushy Microsoft Natural Keyboard, at Origin, was exactly the same.

So, does the Illuminated Keyboard stack up?  In a word, "yes".  But you all know that I wouldn't use one word where 500 will do.

With "illuminated" right in the name, one might think that this keyboard is a gimmicky one-trick pony.  I've defnitely seen a few other keyboards where some marketing genius duct-taped a couple of 2¢ LEDs to the back of a crummy keyboard, spray-painted the word "GAMING" on the box, and marked it up by $50.  Even Logitech's own prior entry into the "illuminated" arena, the G15, suffered from this overfocus on bling.  Here, I'll have to amend my own review: while I was impressed at first that the G15 had reasonable tactile feedback, especially for a dome-switch keyboard, it degraded over time, as any dome-switch keyboard will.  The marketing copy talks about illumination and LCDs, but doesn't mention what type of key switch is used.

(While I'm trashing on the old model, I should also note that the "G15" that Logitech is selling today has been visibly upgraded in a number of ways, and may use a new key system as well.  Given that the new G19 costs $200, contains a USB 2.0 hub, a 320x240 color LCD, and a computer that runs Linux (I am not kidding), I am hoping that it doesn't ship with keys that will wear out after a few months.)

The Illuminated Keyboard, by contrast, dedicates half of its marketing copy to talking about the key switches.  Like the diNovo Edge (but at less than half the price) it uses the "PerfectStroke Key System".  Indeed, the keyboard's tactile response feels like an updated version of the Edge.

I also have an Edge, and I am quite happy with it too.  If anything, the keys on the Illuminated Keyboard are even better calibrated.  While scissor-switch keyboards are all fairly similar, I have managed to beat many of my own speed records with this keyboard, and a few brief experiments side-by-side with the Edge suggest that I can type as fast or slightly faster on the Illuminated Keyboard.  The keyboard I was most recently using on this computer was the Moshi Celesta (warning!  link contains obnoxiously huge animation, and plays music).  I can type at about the same raw speed on all the scissor keyboards I've tried (the Celesta, the IceKey, the Edge and this one).  However, I have a marginally, but consistently, lower average error rate with the "PerfectStroke"-based keyboards.

This tactile similarity gives me high hopes for the durability of the Illuminated Keyboard as well.  When I first got my Edge, I hammered on it as my primary keyboard for a good eight months.  This is more than enough time to kill lesser keyboards.  Then, we moved it to the media center, where Ying and I would still both use it daily.  As far as I know, it would have lasted another five years, but some part of the battery or the charger gave up the ghost and it would not recharge.  (No complaints there, though.  Logitech replaced the whole unit, free of charge, despite the fact that it was out of warranty.)

Illumination

So, it feels pretty good.  Now, on to the headline feature.  Is it illuminated?

Yes.  The illumination is fairly subtle even on its brightest setting.  It's white, not some neon fluorescent color.  It's not nearly as bright as many "gaming" keyboards.  However, it's also very even.  I'm not sure if they use the same trick that Déck does, backlighting every key individually, but there are no dim spots.

Actually, a better answer would be "only if you want it to be".  Regardless of whether you like backlighting — in fact, even if you find backlighting obnoxious — this is a pretty good keyboard.  It has a button which allows you to select a light level.  You can turn off the light as soon as your computer starts up, and leave it off.

Design

The form-factor and design of the keyboard are also satisfactory.  As you can see on Logitech's site, it's very thin, flat, and it has an integrated wrist-rest.  The texture of both the keys and the wrist-rest is slightly rubberized, which keeps my wrists comfortably in place and prevents my fingers from slipping onto adjacent keys when typing quickly.

The {caps,num,scroll}lock keys are vanishingly unobtrusive, but unlike the Edge's ill-considered "boop-BEEP" audio replacement for the LEDs, they are present and visible.

Of course, any keyboard review would be incomplete without a consideration of "special features".  Normally I find "multimedia keys" and unusual layout options a grating misfeature.  For example, on my Moshi Celesta, there is an "Eject" button immediately underneath "Page Down", which I would accidentally hit at least once a day.  To my surprise, the Illuminated Keyboard is the first one where I've really used the "multimedia" functions.  They are unobtrusive.  The only dedicated "special" keys are far to the right, where there are volume controls and the button used to adjust the keyboard's backlight.

Most of the multimedia keys are alternative meanings for F1-F12 and PrintScreen/Pause.  Much like on the Edge, an "FN" key replaces the right-windows key.  Holding FN while pressing a function key invokes its alternate meaning.  For example, there is a "previous track" icon above F10, so if I press FN-F10, my media player skips back a track.  Despite a similar setup on the Edge, I never really used the multimedia keys there, because it's awkward to move my right hand so I can hit "FN" with my thumb, then reach over with my left hand to hit the appropriate function.  On the Illuminated Keyboard, the FN key is considerably wider, and the functions that I actually want to use (Previous Track, Play/Pause Music, Next Track) are located on the right hand side of the keyboard, which allows me to easily hit them by moving only one hand.

Of course it didn't hurt that I discovered the "multimedia keys" plugin for my music player at about the same time.

The layout is a tiny bit nonstandard, but in a very useful way.  The seldom-used "insert" key has migrated north to a less prominent position on the "function" row.  In its place, the "del" key has expanded to take up two spots.  Again, I don't like layout tweaks, as they often do more harm than good, but this prevents a common and irritating accident, hitting the "insert" key when I intended to hit "delete".  (I don't know why this never happens with "Home" and "End" or "Page Up" and "Page Down", but it is a real problem.)  Aside from that, this is a bog-standard PC 105-key layout.

Annoyances

Obviously I'm pretty happy with this keyboard, but I always find the most useful part of any review the "why not" section.  So, what's wrong with this keyboard?  With this one it's a pretty short list, but it's not empty.
  1. Very occasionally, the space bar squeaks slightly.  I've had this problem on a number of different keyboards, since the wider spacebar necessarily needs a different switching mechanism, usually propped up by a small metal bar.  This isn't a huge bother.
  2. The plastic of the keyboard is bowed slightly, such that the rubber foot in the middle of the keyboard doesn't quite touch my desk when it's laid flat.  This means that the keyboard warps a little bit if you rest any weight on it.  This might even be intentional (some kind of ergonomic consideration?) but the slight warp seems like a flaw in otherwise quality construction.
  3. What I think are the "Instant Messaging", "Switch Window", and "Run" function buttons don't seem to register on Ubuntu.  I don't know if this is a problem with the keyboard, GNOME, Linux, or what, but I wish I didn't have to know.  (I was impressed to note that all the other keys seem to do something useful out of the box.)
  4. The Alt keys are a tiny bit too narrow for my taste.  Of course, being an Emacs user I have a strong bias towards having an overlarge meta key, so YMMV.  Most people who use the "control" key in the wrong position would probably disagree, as the slightly narrower Alt keys are that way because the "Ctrl" keys are nice and wide.  That said, looking at the keyboard I thought this would bother me, but in practice it hasn't.

Conclusions

The most obvious conclusion we can draw here is that I think and write about keyboards way too much.  Beyond that, the Illuminated Keyboard would get very high marks, if I did numeric grading here.  I think it will replace the MacAlly IceKey as my default keyboard recommendation.  It's got all the same properties (quiet, small, low-profile) which made that keyboard a good recommendation.  However, the construction is apparently higher quality, and it has more function keys, which don't get in your way if you don't use them.  The illumination is a nice touch: even we touch-typists can't necessarily remember where the "eject" button is in the dark.

Thanks, Grandma!

Commercial Break

As long as I'm doing all this blogging, there's a post I've been forgetting to do for months.  I'll keep it short and sweet:

At Divmod, we do consulting, including performance analysis, custom development, and open source maintenance.  If you have problems that involve Python, Twisted, or any Divmod open source project, you're unlikely to find better.

I usually handle inquiries, but at the moment, I'm working on some secret projects of my own.  If you're in the market for one of those things I mentioned, you should get in touch with JP Calderone.

(I am periodically amazed that people close to me don't know that we do consulting.  I need to remember to get out there and toot the horn every so often!)

The Television Writer's Guide to Cryptography

On television shows, sometimes characters encounter encrypted data.  There are a number of popular tropes regarding this:
  1. A technically savvy villain has encrypted some data.  The hero needs to guess the password to decrypt it.  To do so, the hero delves into the villain's psychology.  Eventually we discover that the most important thing to the villain is actually their pet rabbit, named "fluffy bunny", not their secret terrorist organization as we initially guessed.  The hero enters "fluffy", just in the nick of time.  Hooray, the hero has cracked the encryption!
  2. A technically savvy villain has encrypted some data, and the hero has their hard drive.  It will take 10 hours to decrypt, but the first bomb goes off in 8 hours!  The hero manages to deal with the first blast, giving our diligent technicians time to decrypt the data.
  3. A technically savvy villain has encrypted the data.  Normally it would be easy to break, but there are multiple layers of encryption, each somehow more devious than the last!  However, our diligent technicians report hourly progress as they break through each "layer".
  4. A technically savvy villain has a computer system that the heroes wish to acquire remote access to.  In order to access this system, the hero hacker must "break the encryption".  This will take some time, but, when the "encryption" is "broken", they have access to the villain's computer, and can control it completely.
These are wrong.  They are so wrong that they set my teeth on edge.

I am not an expert on cryptography.  I have a passing interest in computer security, but I am by no means an expert.  So, I will not approach the topic as an expert.  I won't try to explain any of the math involved; I suspect that previous explanations may have failed to reach these writers' ears because they were too confusing.  Here are a few simple facts about the plot-lines above:
  1. Nobody who has even twenty minutes of experience with encryption software will choose a password like "fluffy".  Of course, many users have weak passwords for their Facebook accounts, but a child-prodigy criminal mastermind who expects federal agents to get his encrypted hard drive will have a password like "qua2IeshvePhu2QuAeShohd8".  They will train themselves to type this from memory, very quickly.  Better yet, if their data is encrypted, it is likely encrypted with a key.  This key will most likely be separate from their data, and the key will itself be encrypted with the password.  These are not crazy military-grade precautions; this is the default behavior of the free encryption software present in various operating systems.
  2. Here's a simple rule of thumb.  If you only take one thing away from this article, I hope it will be this:

    You cannot "break" encryption.  Ever.

    In an age when movie stars will spend months and millions of dollars intensely learning kung-fu so that they can accurately portray martial-arts moves, it is amazing to me that the average television writer who is incorporating cryptography as a plot device won't spend one hour learning this one, very basic piece of information.
    Brute-force attacks against current cryptographic methods would, using present-day computing technology, take — and this is not an exaggeration — a billion billion billion billion billion years.  (There's a back-of-the-envelope calculation just after this list.)  While there have been a few successful attacks against modern cryptographic methods, they are almost exclusively attacks which exploit a bug in a popular piece of software, not a flaw in the cryptographic math.  Those bugs are fixed quickly when they are discovered, and someone concerned about the integrity of their encrypted data could quickly and easily find out about them and get a fixed version of the software in question.  If one cryptographic algorithm were well and truly cracked, there are dozens of others which our villains could upgrade to.  Again, none of this is crazy military-grade security.  This is software that any teenager with a free hour to search the internet could find.  I was encrypting my hard drive with stuff like this when I was 12.
    That's not to say that you can't have encryption being cracked on a television show.  Please be aware, however, that generalized crypto-cracking as a routine task performed by technicians, even extremely skilled technicians, is science fiction.  It is inappropriate in a dramatic show that is trying to be realistic.
    Again, for emphasis: cracking crypto isn't "really hard".  It isn't "practically impossible".  You don't need an "elite hacker" who is "really good" to do it.  Breaking crypto is really, totally, theoretically impossible, and there is a worldwide, very public community of mathematicians and researchers trying to make sure it stays that way.  If your heroes work for some kind of secret spy agency that can somehow do it anyway, they should at least remark upon the ethical considerations of having special access to technology that the general public and the scientific community do not have and are not aware of.
    The one exception to this rule is if the villain chooses a weak password, which can be guessed by a random password guesser.  Our heroes may get lucky and discover that they chose a password which a brute-force decryptor guesses in the first quintillion or so tries.  However, in this case, there is no way to know how long the cracking will take, before it is done.  Each new guess for the password is totally blind; either it decrypts the data or it doesn't.  There's no way to tell how many more guesses you have to go, or in fact whether any of the guesses will work before your guesser runs out of things it could reasonably try.
  3. Since one "layer" of encryption is effectively impossible to break, it would be very strange for our villain to use "layers" of encryption; there's rarely a need.  There are some obscure possible exceptions: the villains might do this if they wanted to ensure co-operation within their group, encrypting the data in such a way that multiple keys were required to decrypt it.  Or they might be using onion routing.  However, each "layer" of encryption is equally impossible to break, so it still wouldn't make sense to talk about breaking them one at a time.
  4. All "encryption" is, is converting a block of sensible data ("plaintext") into a block of what appears to be unreadable nonsense ("ciphertext") unless you have the secret decoder ring.  If the hero "breaks the encryption" (which, as I've said above, is probably impossible) they still can't access the villain's computer over the internet, unless the thing that was encrypted was the villain's remote access password.
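
Here is the back-of-the-envelope arithmetic promised in point 2 above.  The trillion-guesses-per-second rate is a deliberately generous assumption, not a measured figure, and the conclusion doesn't change even if you make the attacker a million times faster:

    # Brute-forcing a 256-bit key at an absurdly generous rate of a
    # trillion (10**12) guesses per second.
    keyspace = 2 ** 256
    guesses_per_second = 10 ** 12
    seconds_per_year = 60 * 60 * 24 * 365

    years_to_try_every_key = keyspace // (guesses_per_second * seconds_per_year)
    print("%.1e years" % years_to_try_every_key)
    # Prints roughly 3.7e+57 years: vastly longer than the current age of
    # the universe, for a single key.
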
In summary, the worst recurring theme here - although I recognize its dramatic value - is the "progress bar" approach to computer security problems.  If someone is going to attempt to decrypt some data or break into a remote computer system, either it will work nearly instantly (we know the password for the encrypted data, we know an exploit for the remote system) or it will not work at all.  Your progress indicator will sit at 0% complete forever.

The underlying misconception, I think, is to believe that cryptography is like a locked box that the villain has put their data into.  If the cops found a locked box with some evidence in it, they could ask you for the key (which you would have to hide in one of a limited number of places) or they could simply drill a hole in the box.  Stressed technicians in these TV shows frequently declare that they are "going as fast as they can" with the decryption, as if they were drilling through some very hard metal.

Cryptography is not a metal box.  It's more like a parallel dimension.  There isn't really a good analogy, because no physical security system is quite like cryptography.  But since you're a TV writer if you're reading this (right?) think of it like a Stargate.  Imagine that portable stargates are cheap to manufacture.  Everybody has one; when you buy stuff over the internet, you put your credit card into a stargate and it comes out near the payment processor securely.  (This is how the little lock on your browser works.)

The Cryptogate is not exactly like a Stargate, either.  There isn't a small, limited number of places it can go.  These little devices can go to any point in the multiverse.  Rather than a rotating wheel with a number of characters, they have a little slot, where you insert a piece of glass.  It etches a random pattern on the glass (this is your "private key") that describes the point where your object will be sent: you don't know where it is, except that it will be a spot where it's safe to stick your hand to retrieve it.  It could be anywhere in an infinite number of worlds, in a cave, in the sky: nobody knows, not even you.  You put your "private key" in the key slot, the gate opens up, you drop your valuables in, and then you take your key out.  Those valuables are gone forever.  The gate is a useless hoop of metal without your "key"; there's no way to guess what mysterious pattern of scratches it put on that glass, the destination was random.

You may notice there's no password in that extended metaphor, and indeed, one can use cryptography entirely without passwords; the private key is the important bit.  However, since many people leave the private key on their hard drive rather than keeping it somewhere separate, the key is itself encrypted with a password.  We can extend the metaphor even further to include this: let's say that your little piece of glass only describes which galaxy will be selected, and a magic phrase that you choose picks the location within that galaxy.  So, you insert the key, but the gate is still useless until you say the word.  Then it opens up to reveal your stuff.

If you need a physical analogy in your mind, this is what you should imagine breaking cryptography is like.  A bunch of very frustrated technical people sitting around, staring at a useless loop of metal, knowing that it contains what they need, but totally unable to make it do anything useful without a tiny piece of glass that they don't have, and a magic word that they don't know.  They can sit around guessing words and scratching random patterns on glass all day, but they will never know if they're "20% done".

Now that I've destroyed any possible dramatic tension that can come from the race to "break the code", here are some suggestions you can replace these tired old fallacies with:
  1. It's not just bad guys who use cryptography.  In any secure super-secret anti-terrorism anti-supervillain government organization, encrypting all communication is likely to be routine.  What if one of the villains got hold of one of the heroes' private keys, via some kind of deception?  The heroes would be confident that their communications were secure and authentic, because the code is "unbreakable" — but humans are always the weakest link.
  2. A bad guy is planning something bad, and encrypting their plans.  The good guys know that if they barge in, the bad guy is going to instantly destroy the key, making the data they need permanently irretrievable.  Cryptography may be secure, but there are some real-life things that aren't.  Like monitors and keyboards.  (Wouldn't it be spooky to show your spy characters determining what someone was typing by listening to them with a stethoscope against a wall?  Or looking at their screen through a solid object?  That's something you can really do!)
  3. A bad guy is using SSL encryption to communicate with a web site.  Luckily our baddy doesn't really know how security works, so the good guys execute a man in the middle attack with the complicity of the baddy's ISP and a valid certificate authority such as VeriSign, for all intents and purposes becoming the "real" web site.  If you're one of those too-clever-by-half writer types that likes that highfalutin social commentary stuff, this might be an intriguing look at our society's blind trust of the flawed security model of the web.
  4. I took away four plot devices, so I'll give you four back: one of our heroes (either temporarily or permanently) loses their encryption key, and cannot access vital information.  Can they get the key back in time?  Or: can they remember enough of their data to work without access to their computerized information?
  5. As a bonus: Spooks ran an interesting episode about a game-over exploit for TLS.  There was still a lot of cringeworthy misunderstanding of what crypto really is, though.  (In a typical mistake, the guy who possesses the crypto crack can mysteriously control computers with it.  But I could suspend my disbelief, because if he could really break crypto that easily, he could observe any communication with the supposedly secure systems, including network sessions that included passwords.)
If anyone reading this knows someone who works as a writer for television shows or movies, please, please recommend that they read this post.  These days, a lot of people learn about technology from popular culture.  We need to have better understanding of basic, everyday technologies like cryptography and digital media, if we are ever going to get sane laws about those things.