Explaining Why Interfaces Are Great

Updated July 7, 2018:
  • modernized examples to be correct for python 3
  • added <code> annotations in many places where they were typographically necessary
  • made use of attrs in the example, since not only should you always use attr.s, but it also has direct support for Zope Interface
If you enjoy this post, and thereby, Zope Interface's capabilities, please consider contributing Zope Interface support to the Mypy type checker, which will amplify its usefulness considerably.

Why?

Why use interfaces? Especially with Python's new ABCs, is there really a use for them?

Some of us Zope Interface fans — names withheld to protect the guilty, although you may feel free to unmask yourselves in the comments if you like — have expressed frustration that ABCs beat out interfaces for inclusion in the standard library.  However, I recently explored various mailing lists, interfaces literature, and blogs, and haven't found a coherent description of why one would prefer interfaces over ABCs.  It's no surprise that Zope's interface package is poorly understood, given that nobody has even explained it!  In fact, PEP 3119 specifically says:
For now, I'll leave it to proponents of Interfaces to explain why Interfaces are better.

It seems that nobody has taken up the challenge.

I remember Jim Fulton trying to explain this to me many years ago, at a PyCon in Washington DC.  I definitely didn't understand it then.  I was reluctant to replace the crappy little interfaces system in Twisted at the time with something big and complicated-looking.   Luckily the other Twisted committers prevailed upon me, and Zope Interface has saved us from maintaining that misdesigned and semi-functional mess.

During that explanation, I remember that Jim kept saying that interfaces provided a model to "reason about intent".  At the time I didn't understand why you'd want to reason about intent in code.  Wouldn't the docstrings and the implementation specify the intent clearly enough?  Now, I can see exactly what he's talking about and I use the features he was referring to all the time.  I don't know how I'd write large Python programs without them.

Caveat

This isn't a rant against ABCs.  I think ABCs are mostly pretty good, certainly an improvement over what was (or rather, wasn't) there before.  ABCs provide things that Interfaces don't, like the new @abstractmethod and @abstractproperty decorators. Plus, one of the irritating things about using zope.interface is that the metadata about standard objects in zope.interface.common is not hooked up to anything: IMapping.providedBy({}) returns False.  ABCs will provide that metadata in the standard library, making zope.interface that much more useful once it has been upgraded to understand the declarations that the collections and numbers modules provide.

So, on to the main event: what do Zope Interfaces provide which makes them so great?

Clarity

Let's say we have an idea of something called a "vehicle".  We can represent it as one of three things: a real base class (Vehicle), an ABC (AVehicle), or an Interface (IVehicle).

There is a set of operations that interfaces and base classes share.  We can ask, "is this thing I have a vehicle?"  In the base-class case we spell that if isinstance(something, Vehicle).  In the interfaces case, we say if IVehicle.providedBy(something).  We can ask, "will instances of this type be a vehicle?"  For an interface, we say if IVehicle.implementedBy(Something), and for a base class we say issubclass(Something, Vehicle).  With the new hooks provided by the ABCs in 2.6 and 3.0, these are almost equivalent.  With zope.interface, you can subclass InterfaceClass and write your own providedBy method.  With the ABC system, you subclass type and implement __instancecheck__.
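
Here's a minimal sketch of that side-by-side; Vehicle, IVehicle, and Car are this article's running examples, not anything from a library:

from zope.interface import Interface, implementer

class Vehicle(object):
    "The plain base-class version."

class IVehicle(Interface):
    "The interface version."

@implementer(IVehicle)
class Car(Vehicle):
    pass

car = Car()
# "Is this thing I have a vehicle?"
isinstance(car, Vehicle)     # base class: True
IVehicle.providedBy(car)     # interface:  True
# "Will instances of this type be a vehicle?"
issubclass(Car, Vehicle)     # base class: True
IVehicle.implementedBy(Car)  # interface:  True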

However, there are some questions you can't quite cleanly ask of the ABC system.  For one thing, what does it really mean to be a Vehicle?  If you are looking at AVehicle, you can't tell the difference between implementation details and the specification of the interface.  You can use dir() and ignore a few of the usual suspects — __doc__, __module__, __name__, _abc_negative_cache_version — but what about the quirkier bits?  Metaclasses, inherited attributes, and so on?  There's probably some way to do it, but I certainly can't figure it out quickly enough to include in this article.  In other words, types have two jobs: they can be ABCs and they can be plain types, and when one class is both, it's impossible to separate those responsibilities.

With an Interface, this question is a lot easier to ask.  For a quick look, list(IVehicle) will give a complete list of all the attributes expected of a vehicle, as strings.  If you want more detail, IVehicle.namesAndDescriptions() and Method.getSignatureInfo() will oblige.
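
For instance, if we flesh out IVehicle with a couple of methods (the method names are invented for illustration; the introspection calls are the real zope.interface APIs):

from zope.interface import Interface

class IVehicle(Interface):
    def drive(destination):
        "Drive to the given destination."
    def park():
        "Stop driving."

list(IVehicle)                        # ['drive', 'park'] - the whole contract
IVehicle.namesAndDescriptions()       # [('drive', <Method>), ('park', <Method>)]
IVehicle['drive'].getSignatureInfo()  # {'positional': ['destination'], ...}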

Since the interface encapsulates only what an object is supposed to be, and no functionality of its own, it's possible for frameworks to inspect interfaces and provide much nicer error messages when objects don't match their expectations.  zope.interface.verify.verifyClass and zope.interface.verify.verifyObject can tell you, both for error-reporting and unit-testing purposes, whether an object looks like a proper vehicle or not, without actually trying to drive it around.
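
A quick sketch of verification in action, continuing with the IVehicle above (the exact exception and message vary a bit between versions):

from zope.interface import implementer
from zope.interface.verify import verifyObject

@implementer(IVehicle)
class Unicycle(object):
    def drive(self, destination):
        return "wobbling towards %s" % (destination,)
    # Note: no park() method.

verifyObject(IVehicle, Unicycle())
# Raises BrokenImplementation: the instance claims to provide IVehicle,
# but the 'park' attribute is missing - no driving required.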

Flexibility

At the most basic level, interfaces are more flexible because they are objects.  ABCs aren't objects, at least in the message-passing Smalltalk sense; they are a collection of top-level functions and some rules about how those functions apply to types.  If you want to change the answer to isinstance(), you need to register a type by using ABCMeta.register or overriding __instancecheck__ on a real subclass of type.  If you want to change the answer to providedBy, for example for a unit test, all you need is an object with a providedBy method.

Of course, you can do it "for real" with an InterfaceClass, but you don't need to.  In other words, its semantics are those of a normal method call.
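
For example, a unit test can pass in a duck-typed stand-in; this FakeInterface is purely illustrative:

class FakeInterface(object):
    """A test double: any object with a providedBy method can play the
    role of an interface here, no InterfaceClass machinery required."""
    def __init__(self, result):
        self.result = result
    def providedBy(self, obj):
        return self.result

fakeVehicle = FakeInterface(True)
fakeVehicle.providedBy(object())  # True - it's just a method call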

Interfaces aren't completely self-contained, of course: there are top-level functions that operate on interfaces, like verifyObject.  However, there's an interface to describe what is expected:

>>> from zope.interface import Interface
>>> from zope.interface.interfaces import IInterface
>>> IInterface.providedBy(Interface)
True

There's also the issue of who implements what.  For example, you might have a plug-in system which requires modules to implement some functionality.  Generally speaking, modules are instances of ModuleType, so specifying that all modules implement some type with an ABC is somewhat awkward.  With an interface, however, there is a specific facility for this: you put a moduleProvides(IVehicle) declaration at the top of your module.
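
A sketch of what that looks like; the module and the location of IVehicle are hypothetical, and drive/park match the example interface from earlier:

# wheelbarrow.py - a module which is, itself, a vehicle.
from zope.interface import moduleProvides
from vehicles import IVehicle  # hypothetical home for the interface

moduleProvides(IVehicle)

def drive(destination):
    return "trundling towards %s" % (destination,)

def park():
    return "trundling concluded"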

In zope.interface, there is a very clear conceptual break between implements and provides.  A module may provide an interface — i.e. be an object which satisfies that interface at runtime — without there being any object that implements that interface — i.e. a type whose instances automatically provide it.  This distinction comes in handy whenever something that isn't an instance of some class, like a module, needs to satisfy a contract.  The distinction exists with ABCs too; either you "are a subclass of" a type or you "are an instance of" a type, but the language around it is more awkward and vague, especially since you can now be a "virtual instance" or "virtual subclass" as well.

There's also the issue of dynamic proxies.  If you have a wrapper which provides security around another object, or transparent remote access to another object, or records method calls (and so on), the wrapper really wants to say that it provides the interfaces provided by the object it is wrapping, but the wrapper type does not implement those interfaces.  In other words, different instances of your wrapper class will actually provide different interfaces.  With zope.interface you can declare this via the directlyProvides declaration.  With ABCs, this is not generally possible, because ABCMeta.register only works on a type.
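
A minimal sketch of such a wrapper (RecordingProxy is invented for this example; directlyProvides and providedBy are the real declarations):

from zope.interface import directlyProvides, providedBy

class RecordingProxy(object):
    "Wrap an object, recording the names of attributes accessed on it."
    def __init__(self, wrapped):
        self._wrapped = wrapped
        self._accessed = []
        # Declare, per-instance, that this proxy provides whatever its
        # wrapped object provides; the RecordingProxy class itself
        # implements none of those interfaces.
        directlyProvides(self, providedBy(wrapped))

    def __getattr__(self, name):
        self._accessed.append(name)
        return getattr(self._wrapped, name)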

Adaptation

Let's say I have an object that provides IVehicle.  I want to display it somehow — and in today's web-centric world, that probably means "I want to generate some HTML".  How do I get from here to there?  ABCs don't provide an answer to that question.  Interfaces don't do that directly either, but they do provide a mechanism which allows you to provide an answer: you can adapt from one interface to another.

I'm not going to get into the intricacies of exactly how adaptation works in zope.interface, since it isn't important to understand most of the time.  Suffice it to say you can adapt based on specific hooks that are registered, based on the type an object is, or based on what interfaces it provides.

The gist of it is that you have some thing that you don't know what it is, and you want an object that provides IHTMLRenderer.  The way you express that intent is:

renderer = IHTMLRenderer(someObject)

If there are no rules for adapting an object like the one you have passed to an IHTMLRenderer, then you will get a TypeError, and normally that's all that will happen.  However, this point of separation between the contract that your code expects and the concrete type that your code ends up actually talking to can be very useful.
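
Concretely, it looks like this (the exact text of the exception may differ between zope.interface versions, and passing a default as a second argument suppresses it entirely):

>>> IHTMLRenderer(42)
Traceback (most recent call last):
  ...
TypeError: ('Could not adapt', 42, <InterfaceClass ...IHTMLRenderer>)
>>> IHTMLRenderer(42, None) is None
True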

The larger Zope application server has a rich and complex set of tools for defining which adapter is appropriate in which context, but Twisted has a very simple interface to adaptation.  You simply register an adapter, which is a 1-argument callable that takes an object that conforms to some interface or is an instance of some class, and returns an object that provides another interface.  Here's how you do it:

import attr
from zope.interface import implementer
from twisted.python.components import registerAdapter

# IVehicle and IHTMLRenderer are the example interfaces discussed above.

@implementer(IHTMLRenderer)
@attr.s
class VehicleRenderer(object):
    "Render a vehicle as HTML."
    vehicle = attr.ib(validator=attr.validators.provides(IVehicle))
    def renderHTML(self):
        return "<h1>A Great Vehicle %s (%s)</h1>" % (
                   self.vehicle.make.name,
                   self.vehicle.model.name)

registerAdapter(VehicleRenderer, IVehicle, IHTMLRenderer)


Now, whenever you do IHTMLRenderer(someVehicle), you'll get a VehicleRenderer(someVehicle).

Your code for rendering now doesn't need any special-case knowledge about particular types.  It is written to an interface, and it's very easy to figure out which one; it says "IHTMLRenderer" right there.  It's also easy to find implementors of that interface; just grep for "implementer.*IHTMLRenderer" or similar.  Or, use pydoctor and look at the "known implementations" section for the interface in question.
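
To make that concrete, rendering code written to the interface might look like this sketch (renderPage is invented here):

def renderPage(objects):
    # No isinstance() checks and no mention of VehicleRenderer: anything
    # with a registered IHTMLRenderer adapter will work.
    return "\n".join(IHTMLRenderer(o).renderHTML() for o in objects)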

Conclusion

In a super-dynamic language like Python, you don't need a system for explicit abstract interfaces.  No compiler is going to shoot you for calling a 'foo' method.  But formal interface definitions serve many purposes.  They can function as documentation, as a touchstone for code which wants to clearly report programming errors ("warning: MyWidget claims to implement IWidget, but doesn't implement a 'doWidgetStuff' method"), and as a mechanism for indirection when you know what contract your code wants but you don't know which implementation will satisfy it (adaptation).

Even with a standard library mechanism for doing some of these things, Zope Interface remains a highly useful library, and if you are working on a large Python system you should consider augmenting its organization and documentation with this excellent tool.

Help Us Help You

The Twisted project is getting ready for another round of fund-raising.

Like last year, we'll be centering this effort around PyCon.  This year, we have a year's track-record for our potential sponsors to evaluate us on.

During this year of sponsored development, we closed a record number of tickets.  Partially this is due to the work that JP has done himself, and partially it is due to the increased rate at which users' contributions have been reviewed.

Aside from raw numbers, funding has allowed us to dedicate the sustained effort required to deal with very old, very unpleasant, very difficult issues like properly handling child process termination and the development of a new, better HTTP client.

But we can't keep this up without help.  Unless you've been living in a very deep hole, you know that the world's economy has exploded and a lot of companies are feeling the pain.  It will be harder in this tough climate to convince companies that this is a good time to invest in software which "doesn't cost anything".

At the same time, we believe that we could get an even better outcome for the Twisted project if we can allocate more funds this year.  We could upgrade from part-time to full-time maintenance, do more new development, and possibly even fund a Twisted conference.

This is where you come in.  The people responsible for raising funds for Twisted are mostly the same people who write code for it.  The more help we can get from you — the developers who use Twisted — the more of our spare time we can spend writing code.

If you are interested in helping with this, especially if you have experience doing fund-raising, please let us know on the mailing list.  This is a great opportunity for those of you who would really like to give something back to Twisted but haven't had the opportunity to contribute code.

(Of course, if you don't have the time to help with fundraising either, you can always make a small personal contribution using the form on the front page of twistedmatrix.com.  Every little bit helps, and donations are tax-deductible.)

I would be remiss if I did not mention that the Software Freedom Conservancy has been extremely helpful in collecting donations, managing our accounts, and dealing with the legal paperwork of establishing a non-profit.  Without their help we would likely not have had the collective attention span to lay the groundwork that now allows us to collect tax-exempt donations.  If you are contributing to Twisted, please consider contributing something to the SFC as well.

Using SSH Keys on a USB Drive on MacOS X

I keep my SSH private key on a USB thumb drive.

The idea is that I don't want my private key to be on the hard disk of any of the computers that I use.  I use several computers, and since I'm not watching all of them constantly, I don't want to leave my key lying around for automated attackers to pick up.

I load the key directly from the USB drive into my SSH agent, which then mlock()s it so it doesn't get put into swap.

This works on Windows (with PuTTY) and Linux just fine.  Unfortunately MacOS X has a nasty habit of mounting FAT volumes with free-for-all permissions, so when I try to load the key:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0777 for '/Volumes/GRAVITON/....id_rsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.

I thought that this was an intractable problem.  The only solution I'd found previously was to make a copy of the key, make a sparse disk image, and manually mount the sparse disk image.  However, this workaround has two problems:
  1. It's inconvenient.  I have to manually locate the disk image every time, double click it, etc.
  2. It's insecure.  If I ever allow other users to log in to any of my OS X machines, they can read the version of the key I'm not using on the FAT filesystem, even if only I can read the one on the HFS+ disk image.
Today, almost by accident, I discovered the real answer.

The daemon that mounts disks on OS X is called "diskarbitrationd".  I discovered this by running across some OpenDarwin documentation which explains that you can configure this daemon by putting a line into fstab.

First you need a way to identify the device in question.  None of the suggested mechanisms for determining the device UUID worked for me, so I used the device label instead.  This is probably desirable anyway: at least you can tell when a label changes, whereas if you move your key to a similar device, the UUID will be different but you'll have no way to notice.

You can set the device label by mounting your USB drive, doing "get info" on it, editing the name in the "name and extension" section, and then hitting enter.  You should use an all-caps name, since when you re-mount the drive it will be all-caps again anyway.

You also need to know your user-ID.  The command 'id -u' will return it.

Then, you need to add a single line to /etc/fstab.  My drive's label is "GRAVITON", and my user-ID is 501, so it looks like this:

LABEL=GRAVITON none msdos -u=501,-m=700

Now, all you have to do is eject your drive and plug it in again.  Voila!
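
(In case the options are opaque: -u sets the owner of everything on the volume, and -m sets the permission mask.)  Once it has remounted, you can confirm that the volume is now private to you; the output below is illustrative:

$ ls -ld /Volumes/GRAVITON
drwx------  1 youruser  staff  16384 Jan  1 12:00 /Volumes/GRAVITON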

$ ssh-add /Volumes/GRAVITON/....keychain.id_rsa
Identity added: /Volumes/GRAVITON/....keychain.id_rsa (/Volumes/GRAVITON/...keychain.id_rsa)

Now you can securely carry your SSH key with you to Macs, without breaking ssh-agent's intended protection.

The Joel Un-test

Joel Spolsky seems to like controversy, and I can see why.  Being a contrarian ideologue is pretty sweet.

Some people have been suggesting that the Joel Test should mention "100% unit test coverage" as well.  Personally, I think that's a great idea.  The industry is evolving, and automated testing is getting into the suite of tools that every competent programmer should be familiar with.

Joel disagrees, since 100% coverage is "a bit too doctrinaire about something you may not need".

For what it's worth, I don't completely disagree with Joel.  Some of the software that I work on doesn't have 100% test coverage, and that's okay.  I wrote it before I learned about unit testing.  I'm not freaking out and spending all of my time just writing tests for old code which apparently works.

However, we do have policies in place to add test coverage whenever we change anything.  Those policies stipulate that 100% coverage is a requirement for any new or changed code, so I consider myself a fan of 100% coverage and I generally think it's a good idea.  I do think it belongs on the Joel Test, or at least something like it.

I feel like my opinions are representative of a pretty substantial number of "agile" practitioners out there, so I'd just like to respond to a few points:

Joel mentions "SOLID principles", as if they're somehow equivalent to unit testing.  As if the sentiment that leads one to consider 100% test coverage a great idea leads one into a slavish architectural death-spiral, where any number of "principles" become dogma as long as they have a sticker that says "agile" stuck to them.

Let me be clear.  I think that SOLID, at least as Joel's defined it, is pointlessly restrictive.  (I've never heard of it before.)  As a guy who spends a lot of time implementing complex state machines to parse protocols, I find "a class should have only one reason to change" a gallingly naive fantasy.  Most of the things that Joel says about SOLID are true, especially if you're using a programming language that forces you to declare types all over the place for everything.  (In Python, you get the "open" part of "OCP", and the "clients aren't forced" part of "ISP", for free.)  It does sound, in many ways, like the opposite of "agile".

So, since SOLID and unit testing are completely unrelated, I think we can abandon that part of the argument.  I can't think of anyone I know who likes unit testing who would demand slavish adherence to those principles.  I agree that it sounds like it came from "somebody that has not written a lot of code, frankly".

On the other hand, Joel's opinion about unit tests sounds like it comes from someone who has not written a lot of tests, frankly.

He goes on and on about how the real measure of quality is whether your code is providing value to customers, and sure you can use unit tests if that's working for you, but hey, your code probably works anyway.

It's a pretty weaselly argument, and I think he knew it, because he kept saying how he was going to get flamed.  Well, Mr. Spolsky, here at least that prediction has come true ;-).

It's weaselly because any rule on the Joel Test could be subjected to this sort of false equivalence.  For example, let's take one of his arguments against "100%" unit testing and apply it to something that is already on the Joel Test: version control:

But the real problem with version control as I've discovered is that the type of changes that you tend to make as code evolves tend to sometimes cause conflicts. Sometimes you will make a change to your code that causes a conflict with someone else's changes. Intentionally. Because you've changed the design of something... you've moved a menu, and now any other developer's changes that relied on that menu being there... the menu is now elsewhere. And so all those files now conflict. And you have to be able to go in and resolve all those conflicts to reflect the new reality of the code.

This sounds really silly to anyone who has really used version control for any length of time.  Sure, sometimes you can get conflicts.  The whole point of a version control system is that you have tools to resolve those conflicts, to record your changes, and so on.

The same applies to unit tests.  You get failures, but you have tools to deal with the failures.  Sure, sometimes you get test failures that you knew about in advance.  Great!  Now, instead of having a vague intuition about what code you've broken intentionally, you actually have some empirical evidence that you've only broken a certain portion of your test suite.  And sure, now you have to delete some old tests and write some new tests.  But, uh... aren't you deleting your old code, and writing some new code?  If you're so concerned about throwing away tests, why aren't you concerned about throwing away the code that the tests are testing?

The reason you don't want to shoot for 90% test coverage is the same reason you don't want to shoot for putting 90% of your code into version control, or automating 90% of your build process into one step: you don't know where the bugs are going to crop up in your code.  After all, if we knew where the bugs were, why would we write any tests at all?  We'd just go to where the bugs are and get rid of them!

If you test 90% of your code, inevitably, the bugs will be in the 10% that you didn't test.  If you automate 90% of your build, inevitably the remaining non-automated 10% will cause the most problems.  Let's say getting the optimization options right on one particular C file is really hard.  Wouldn't it be easier to just copy the .o file over from bob's machine every time you need to link the whole system, rather than encoding those options in some kind of big fancy build process, that you'd just have to maintain, and maybe change later?

Joel goes on to make the argument that, if he were writing some software that "really needed" to be bulletproof, he'd write lots of integration tests that exercised the entire system at once to prove that it produced valid output.  That is a valid testing strategy, but it sort of misses the point of "unit" tests.

The point of unit tests — although I'll have to write more on this later, since it's a large and subtle topic — is to verify that your components work as expected before you integrate them.  This is because it's easier to spot bugs the sooner you find them: the same argument Joel makes for writing specs.  And in fact if you read Mr. Spolsky's argument for writing specs, it can very easily be converted into an argument for unit testing:

Why won't people write unit tests? People like Joel Spolsky claim that it's because they're saving time by skipping the test-writing phase. They act as if test-writing was a luxury reserved for NASA space shuttle engineers, or people who work for giant, established insurance companies. Balderdash. ... They write bad code and produce shoddy software, and they threaten their projects by taking giant risks which are completely uncalled for.

You think your simple little function that just splits a URL into four parts is super simple and doesn't need tests, because it's never going to have bugs that mysteriously interact with other parts of the system and cause you a week of debugging headaches?  WRONG.  Do you think it was a coincidence that I could find a link to the exact code that Joel mentions?  No, it's not, because any component common enough to make someone think it's too simple to possibly have bugs is also common enough that there are a zillion implementations of it, with a zillion bugs to match.
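
To make that concrete, here's a sketch of that "simple little function" and a test that exposes its latent bug; splitUrl and its tests are invented for illustration:

import unittest

def splitUrl(url):
    "The 'too simple to test' version: scheme://host/path?query."
    scheme, rest = url.split("://", 1)
    host, _, pathAndQuery = rest.partition("/")
    path, _, query = pathAndQuery.partition("?")
    return scheme, host, "/" + path, query

class SplitUrlTests(unittest.TestCase):
    def test_ordinary(self):
        # This one passes.
        self.assertEqual(splitUrl("http://example.com/index?a=1"),
                         ("http", "example.com", "/index", "a=1"))

    def test_queryWithoutPath(self):
        # This one FAILS: with no "/" before the "?", the naive split
        # swallows the query string into the host - exactly the sort of
        # bug that "obviously correct" code hides.
        self.assertEqual(splitUrl("http://example.com?a=1"),
                         ("http", "example.com", "/", "a=1"))

if __name__ == "__main__":
    unittest.main()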

Unlike specs, which just let you find bugs earlier, tests also make finding (and fixing) a bug later cheaper.

Watching a test-driven developer work can be pretty boring.  We write a test.  We watch it fail.  We make it pass.  We check it in.  Then we write another test.  After a while of watching this, a manager will get itchy and say, Jeez!  Why can't you just go faster!  Stop writing all these darn tests already!  Just write the code!  We have a deadline!

The thing that the manager hasn't noticed here is that every ten cycles or so, something different happens.  We write a test.  It succeeds.  Wait, what?  Oops!  Looks like the system didn't behave like we expected!  Or, the test is failing in a weird way, before it gets to the point where we expect it to fail.  At this point, we have just taken five minutes to write a test which has saved us four hours of debugging time.  If you accept my estimate, that's 10 tests × 5 minutes, or almost an hour, to save 4 hours.  Of course it's not always four hours; sometimes it's a minute, sometimes it's a week.

If you're not paying attention, this was just a little blip.  The test failed twice, rather than once.  So what?  It's not like you wouldn't have caught that error eventually anyway!

Of course, nobody's perfect, so sometimes we make a mistake anyway and it slips through to production, and we need to diagnose and fix it later.  The big difference is that, if we have 100% test coverage, we already have a very good idea of where the bug isn't.  And, when we start to track it down, we have a huge library of test utilities that we can use to produce different system configurations.  A test harness gives us a way to iterate extremely rapidly to create a test that fails, rather than spinning up the whole giant system and entering a bunch of user input for every attempt at a fix.

This is the reason you don't just write giant integration tests first.  If you've got a test that just tells you "COMPILE FAILED", you don't know anything useful yet.  You don't know which component is broken, and you don't know why.  Individual unit tests with individual failures mean that you know what has gone wrong.  Individual tests also mean that you know that each component works individually before inserting it into your giant complex integrated compiler, so that if it dies you have a consistent object that you know at least performs some operations correctly, which you can inspect and almost always see in a sane internal state, even if it's not what the rest of the system expects.

Giant integration test suites can be hugely helpful on some projects, but they can be unnecessary gold plating unless you have a clear specification for the entire system.  Unit tests are the bedrock of any automated testing strategy; you need to start there.

Unit tests seem like they take time, because you look at the time spent on a project and you see the time you spent writing the tests, and you think, "why don't I just take that part out?".  Then your schedule magically gets shorter on paper and everything looks rosy.

You can do that to anything.  Take your build automation out of your schedule!  Take your version-control server out of your budget!  Don't write a spec, just start coding!  The fact is, we pay for these tools in money and time because they all pay off very quickly.

For the most part, if you don't apply them consistently and completely, their benefits can quickly evaporate while leaving their costs in place.  Again, you can try this incomplete application with anything.  Automate the build, but only the compile, not the installer.  Use version control, but make uncommitted hand-crafted changes to your releases after exporting them.  Ignore your spec, and don't update it.

So put "100% test coverage" on your personal copy of the Joel Test.  You'll be glad you did.

One postscript I feel obliged to add here: like any tool, unit tests can be used well and used poorly.  Just like you can write bad, hard-to-maintain code, you can write bad, hard-to-maintain tests.  Doing it well and getting the maximum benefit for the minimum cost is a subtle art.  Of course, getting the most out of your version control system or written spec is also a balancing act, but unit tests are a bit trickier than most of these areas, and getting good at them takes skill.  It's definitely worth acquiring that skill, but the learning is not free.  The one place that unit tests can take up more time than they save is when you need to learn some new subtlety of how to write them properly.  If your developers are even halfway decent, though, this learning period will be shorter than you think.  Training and pair-programming with advanced test-driven developers can help accelerate the process, too.  So, I stand by what I said above, but there is no silver bullet.

You Got Your WindowMaker In My Peanut Butter

Electric Duncan mentioned Window Maker and Ubuntu yesterday, and it reminded me of my own callow youth.

Nowadays I'm a serious Compiz junkie, so I don't think I'll be switching back any time soon.  Personally, I wouldn't want to live without maximumize or the scale window title filter.  However, I can definitely see why one would want to: WindowMaker is lightning fast, as well as being very simple and streamlined.  When I do pair-programming that needs tools that won't run in Screen, I spin up a WindowMaker session in a VNC server.  Sharing my whole gigantic screen with all the whizzy effects is impractical over anything slower than a local 100 megabit connection.

One of the problems with switching to a different window manager for your main session these days, however, is that things unrelated to window management stop working.  Your keyboard settings no longer apply, your media no longer auto-mounts, GTK ignores your theme, your media keys stop working, and your panel disappears, along with ever-so-useful applets like Deskbar and the NetworkManager applet.

But, this need not be so.  GNOME will happily accommodate an alternate window manager.  All you need to do is make sure that WindowMaker and Nautilus don't fight over the desktop, and then tell GNOME to start WindowMaker.

Of course, your desktop won't be quite as lean as if you'd eschewed GNOME completely.  It's up to you to decide whether these features are worth a few extra megabytes of RAM.

First, run gconf-editor and turn off "/apps/nautilus/preferences/show_desktop".  This should make your desktop go blank.
http://www.twistedmatrix.com/users/glyph/images/content/blogowebs/gconf-editor-set-show-desktop.png
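
(If you'd rather skip the GUI, the same preference can be flipped from a terminal with gconftool-2, using the same key:)

$ gconftool-2 --type bool --set /apps/nautilus/preferences/show_desktop false
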
Next, you need to go to "System → Preferences → Sessions", and hit "add" on the "Startup Programs" tab.  Add an entry for WindowMaker:
http://www.twistedmatrix.com/users/glyph/images/content/blogowebs/add-wmaker-as-startup-program.png
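
(The command for that startup entry is typically just the Window Maker binary, wmaker; the path below is illustrative and may differ on your distribution:)

$ which wmaker
/usr/bin/wmaker
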
Now, all you need to do is log out!  You will, of course, want to tweak your panels a bit when you log back in, but that part's easy: right-click and season to taste.
http://www.twistedmatrix.com/users/glyph/images/content/blogowebs/party-like-its-1999.png