A Blowhard At Large

Pre-chewing thoughts into a hundred bite-sized morsels for someone is just about as appetizing as doing the same thing with food.

Update: I've written a brief follow-up to this post to clarify my feelings about other uses of the tweetstorm, or twitter thread, publishing idiom. This post is intended to be specifically critical of its usage as a self-promotional or commercial tool.

I don’t like Tweetstorms™1, or, to turn to a neologism, “manthreading”. They actively annoy me. Stop it. People who do this are almost always blowhards.

Blogs are free. Put your ideas on your blog.

As Eevee rightfully points out, however, if you’re a massive blowhard in your Tweetstorms, you’re likely a massive blowhard on your blog, too. So why care about the usage of Twitter threads vs. Medium posts vs. anything else for expressions of mediocre ideas?

Here’s the difference, and here’s why my problem with them does have something to do with the medium: if you put your dull, obvious thoughts in a blog2, it’s a single entity. I can skim the introduction and then skip it if it’s tedious, plodding, derivative nonsense.3

As with all social-media innovations, however, Tweetstorms™ increase engagement. Affordances to read little bits of the storm abound. Ding. Ding. Ding. Little bits of an essay dribble in, interrupting me at suspiciously precisely calibrated 90-second intervals, reminding me that an Important Thought Leader has Something New To Say.


The conceit of a Tweetstorm™ is that they’re in this format because they’re spontaneous. The hottest of hot takes. The supposed reason that it’s valid to interrupt me at 30-second intervals to keep me up to date on tweet 84 of 216 of some irrelevant commentator’s opinion on the recent trend in chamfer widths on aluminum bezels is that they’re thinking those thoughts in real time! It’s an opportunity to engage with the conversation!

But of course, this is a pretense; transparently so, unless you imagine someone could divine the number after the slash without typing it out first.

The “storm” is scripted in advance, edited, and rehearsed like any other media release. It’s interrupting people repeatedly merely to increase their chances of clicking on it, or reading it. And no Tweetstorm author is meaningfully going to “engage” with their readers; they just want to maximize their view metrics.

Even if I cared a tremendous amount about the geopolitics of aluminum chamfer calibration, this is a terrible format to consume those thoughts in. Twitter’s UI is just atrocious for meaningful consideration of ideas. It’s great for pointers to things (like a link to this post!) but actively interferes with trying to follow a thought to its conclusion.

There’s a whole separate blog in there about just how gross pretty much all social-media UI is, and how much it goes out of its way to show you “what you might have missed”, or things that are “relevant to you” or “people you should follow”, instead of just giving you the actual content you requested from their platform. It’s dark patterns all the way down, betraying the user’s intent in favor of the advertisers’.


My tone here probably implies that I think everyone doing this is being cynically manipulative. That’s possibly the worst part - I don’t think they are. I think everyone involved is just being slightly thoughtless, trying to do the best that they can in their perceived role. Blowhards are blowing, social media is making you be more social and consume more media. All optimizing for our little niche in society. So unfortunately it’s up to us, as readers, to refuse to consume this disrespectful trash, and pipe up about the destructive aspects of communicating that way.

Personally I’m not much affected by this, because I follow hardly anyone4, I don’t have push enabled, and I would definitely unfollow (or possibly block) someone who managed to get retweeted at such great length into my feed. But a lot of people who are a lot worse than I am about managing the demands on their attention get sucked into the vortex that Tweetstorms™ (and related social-media communication habits) generate.

Attention is a precious resource; in many ways it is the only resource that matters for producing creative work.

But of course, there’s a delicate balance - we must use up that same resource to consume those same works. I don’t think anyone should stop talking. But they should mindfully speak in places and ways that are not abusive of their audience.

This post itself might be a waste of your time. Not everything I write is worth reading. Because I respect my readers, I want to give them the opportunity to ignore it.

And that’s why I don’t use Tweetstorms™5.


  1. ™ 

  2. Hi Ned. 

  3. Like, for example, you can do with this blog! 

  4. I subscribe to more RSS feeds than Twitter people by about an order of magnitude, and I heartily suggest you do the same. 

  5. ™ 

What are we afraid of?

People are good, I hope.

I’m crying as I write this, and I want you to understand why.

Politics is the mind-killer. I hate talking about it; I hate driving a wedge between myself and someone I might be able to participate in a coalition with, however narrow. But, when you ignore politics for long enough, it doesn't just kill the mind; it goes on to kill the rest of the body, as well as anyone standing nearby. So, sometimes one is really obligated to talk about it.

Today, I am in despair. Donald Trump is an unprecedented catastrophe for American politics, in many ways. I find it likely that I will get into some nasty political arguments with his supporters in the years to come. But hopefully, this post is not one of those arguments. This post is for you, hypothetical Trump supporter. I want you to understand why we1 are not just sad, not just defeated, but in more emotional distress than any election has ever provoked for us. I want you to understand that we are afraid for our safety, and for good reason.

I do not believe I can change your views; don’t @ me to argue, because you certainly can’t change mine. My hope is simply that you can read this and at least understand why a higher level of care and compassion in political discourse than you are used to may now be required. At least soften your tone, and blunt your rhetoric. You already won, and if you rub it in too much, you may be driving people to literally kill themselves.


First let me list the arguments that I’m not making, so you can’t write off my concerns as a repeat of some rhetoric you’ve heard before.

I won’t tell you about how Trump has the support of the American Nazi Party and the Ku Klux Klan; I know that you’ll tell me that he “can’t control who supports him”, and that he denounced2 their support. I won’t tell you about the very real campaign of violence that has been carried out by his supporters in the mere days since his victory; a campaign that has even affected the behavior of children. I know you don’t believe there’s a connection there.

I think these are very real points to be made. But even if I agreed with you completely, that none of this was his fault, that none of this could have been prevented by his campaign, and that in his heart he’s not a hateful racist, I would still be just as scared.


Bear Stearns estimates that there are approximately 20 million illegal immigrants in the United States. Donald Trump’s official position on how to handle this population is mass deportation. He has promised that this will be done “warmly and humanely”, which betrays his total ignorance of how mass resettlements have happened in the past.

By contrast, the total combined number of active and reserve personnel in the United States Armed Forces is a little over 2 million people.

What do you imagine happens when a person is deported? A person who, as an illegal immigrant, very likely gave up everything they have in their home country, and wants to be where they are so badly that they risk arrest every day, just by living where they live? What do you think happens when millions of them return to countries where they have no home, no food, and quite likely no money or access to the resources or support that they had while in the United States?

They die. They die of exposure because they are in poverty and all their possessions were just stripped away and they can no longer feed themselves, or because they were already refugees from political violence in their home country, or because their home country kills them at the border because it is a hostile action to suddenly burden an economy with the shock of millions of displaced (and therefore suddenly poor and unemployed, whether they were before or not) people.

A conflict between 20 million people on one side and 2 million (heavily armed) people on the other is not a “police action”. It cannot be done “warmly and humanely”. At best, such an action could be called a massacre. At worst (and more likely) it would be called a civil war. Individual deportees can be sent home without incident, and many have been, but the victims of a mass deportation will know what is waiting for them on the other side of that train ride. At least some of them won’t go quietly.

It doesn’t matter if this is technically enforcing “existing laws”. It doesn’t matter whether you think these people deserve to be in the country or not. This is just a reality of very, very large numbers.

Let’s say, just for the sake of argument, that the population of immigrants has assimilated so poorly that each one knows only one citizen who will stand up to defend them, once it’s obvious that they will be sent to their deaths. That’s a hypothetical resistance army of 40 million people. Let’s say they are so thoroughly overpowered by the military and police that there are zero casualties on the other side of this. Generously, let’s say that the police and military are incredibly restrained, and do not use unnecessary overwhelming force, and the casualty rate is just 20%; 4 out of 5 people are captured without lethal force, and miraculously nobody else dies among the remaining 16 million who are sent back to their home countries.

That’s 8 million casualties.

6 million Jews died in the Holocaust.


This is why we are afraid. Forget all the troubling things about Trump’s character. Forget the coded racist language, the support of hate groups, and every detail and gaffe that we could quibble over as the usual chum of left/right political struggle in the USA. Forget his deeply concerning relationship with African-Americans, even.

We are afraid because of things that others have said about him, yes. But mainly, we are afraid because, in his own campaign, Trump promised to be 33% worse than Hitler.

I know that there are mechanisms in our democracy to prevent such an atrocity from occurring. But there are also mechanisms to prevent the kind of madman who would propose such a policy from becoming the President, and thus far they’ve all failed.

I’m not all that afraid for myself. I’m not a Muslim. I am a Jew, but despite all the swastikas painted on walls next to Trump’s name and slogans, I don’t think he’s particularly anti-Semitic. Perhaps he will even make a show of punishing anti-Semites, since he has some Jews in his family3.

I don’t even think he’s trying to engineer a massacre; I just know that what he wants to do will cause one. Perhaps, when he sees what is happening as a result of his orders, he will stop. But his character has been so erratic, I honestly have no idea.

I’m not an immigrant, but many in my family are. One of those immigrants is intimately familiar with the use of the word “deportation” as a euphemism for extermination; there’s even a museum about it where she comes from.

Her mother’s name is written in a book there.


In closing, I’d like to share a quote.

The last thing that my great-grandmother said to my grandmother, before she was dragged off to be killed by the Nazis, was this:

Pleure pas, les gens sont bons.

or, in English:

Don’t cry, people are good.

As it turns out, she was right, in a sense; thanks in large part to the help of anonymous strangers, my grandmother managed to escape, and, here I am.


My greatest hope for this upcoming regime change is that I am dramatically catastrophizing; that none of these plans will come to fruition, that the strange story4 I have been told by Trump supporters is in fact true.

But if my fears, if our fears, should come to pass – and the violence already in the streets is showing that at least some of those fears will – you, my dear conservative, may find yourself at a crossroads. You may see something happening in your state, or your city, or even in your own home. Your children might use a racial slur, or even just tell a joke that you find troubling. You may see someone, even a policeman, beating a Muslim to death. In that moment, you will have a choice: to say something, or not. To be one of the good people, or not.

Please, be one of the good ones.

In the meanwhile, I’m going to try to take great-grandma’s advice.


  1. When I say “we”, I mean, the people that you would call “liberals”, although our politics are often much more complicated than that; the people from “blue states” even though most states are closer to purple than pure blue or pure red; people of color, and immigrants, and yes, Jews. 

  2. Eventually. 

  3. While tacitly allowing continued violence against Muslims, of course. 

  4. “His campaign is really about campaign finance”, “he just said that stuff to get votes, of course he won’t do it”, “they’ll be better off in their home countries”, and a million other justifications. 

What Am Container

Containers are a tool in the fight against evil.

Perhaps you are a software developer.

Perhaps, as a developer, you have recently become familiar with the term "containers".

Perhaps you have heard containers described as something like "LXC, but better", "an application-level interface to cgroups" or "like virtual machines, but lightweight", or perhaps (even less usefully), a function call. You've probably heard of "docker"; do you wonder whether a container is the same as, different from, or part of a Docker?

Are you bewildered by the blisteringly fast-paced world of "containers"? Maybe you have no trouble understanding what they are - in fact you might be familiar with half a dozen orchestration systems and container runtimes already - but are frustrated because this seems like a whole lot of work and you just don't see what the point of it all is?

If so, this article is for you.

I'd like to lay out what exactly the point of "containers" is, why people are so excited about them, and what makes the ecosystem around them so confusing. Unlike my previous writing on the topic, I'm not going to assume you know anything about the ecosystem in general; just that you have a basic understanding of how UNIX-like operating systems separate processes, files, and networks.1


At the dawn of time, a computer was a single-tasking machine. Somehow, you'd load your program into main memory, and then you'd turn it on; it would run the program, and (if you're lucky) spit out some output onto paper tape.

When a program running on such a computer looked around itself, it could "see" the core memory of the computer it was running on, and any attached devices, including consoles, printers, teletypes, or (later) networking equipment. This was of course very powerful - the program had full control of everything attached to the computer - but also somewhat limiting.

This mode of addressing hardware was limiting because it meant that programs would break the instant you moved them to a new computer. They had to be re-written to accommodate new amounts and types of memory, new sizes and brands of storage, and new types of networks. If the program had to contain within itself the full knowledge of every piece of hardware that it might ever interact with, it would be very expensive indeed.

Also, if all the resources of a computer were dedicated to one program, then you couldn't run a second program without stomping all over the first one - crashing it by mangling its structures in memory, or deleting its data by overwriting it on disk.

So, programmers cleverly devised a way of indirecting, or "virtualizing", access to hardware resources. Instead of a program simply addressing all the memory in the whole computer, it got its own little space where it could address its own memory - an address space, if you will. If a program wanted more memory, it would ask a supervising program - what we today call a "kernel" - to give it some more memory. This made programs much simpler: instead of memorizing the address offsets where a particular machine kept its memory, a program would simply begin by saying "hey operating system, give me some memory", and then it would access the memory in its own little virtual area.

In other words: memory allocation is just virtual RAM.

Virtualizing memory - i.e. ephemeral storage - wasn't enough; in order to save and transfer data, programs also had to virtualize disk - i.e. persistent storage. Whereas a whole-computer program would just seek to position 0 on the disk and start writing data to it however it pleased, a program writing to a virtualized disk - or, as we might call it today, a "file" - first needed to request a file from the operating system.

In other words: file systems are just virtual disks.

Networking was treated in a similar way. Rather than addressing the entire network connection at once, each program could allocate a little slice of the network - a "port". That way a program could, instead of consuming all network traffic destined for the entire machine, ask the operating system to just deliver it all the traffic for, say, port number seven.

In other words: listening ports are just virtual network cards.
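To make those three analogies concrete, here's roughly what they look like from a shell on a typical Linux machine; a sketch only, and the file name and port number below are arbitrary:

$ head -n 3 /proc/self/maps           # a process's own private address space
$ echo "some data" > /tmp/a-file.txt  # a "file": a named slice of the disk
$ python3 -m http.server 7777 &       # a listening "port": a named slice of the network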


Getting bored by all this obvious stuff yet? Good. One of the things that frustrates me the most about containers is that they are an incredibly obvious idea that is just a logical continuation of a trend that all programmers are intimately familiar with.


All of these different virtual resources exist for the same reason: as I said earlier, if two programs need the same resource to function properly, and they both try to use it without coordinating, they'll both break horribly.2

UNIX-like operating systems more or less virtualize RAM correctly. When one program grabs some RAM, nobody else - modulo super-powered administrative debugging tools - gets to use it without talking to that program. It's extremely clear which memory belongs to which process. If programs want to use shared memory, there is a very specific, opt-in protocol for doing so; it is basically impossible for it to happen by accident.

However, the abstractions we use for disks (filesystems) and network cards (listening ports and addresses) are significantly more limited. Every program on the computer sees the same filesystem. The program itself, and the data the program stores, both live on the same filesystem. Every program on the computer can see the same network information, can query everything about it, and can receive arbitrary connections. Permissions can remove certain parts of the filesystem from view (i.e. programs can opt out), but it is far less clear which program "owns" certain parts of the filesystem; access must be carefully controlled, and sometimes mediated by administrators.

In particular, the way that UNIX manages filesystems creates an environment where "installing" a program requires manipulating state in the same place (the filesystem) where other programs might require different state. Popular package managers on UNIX-like systems (APT, RPM, and so on) rarely have a way to separate program installation even by convention, let alone by strict enforcement. If you want to do that, you have to re-compile the software with ./configure --prefix to hard-code a new location. And, fundamentally, this is why the package managers don't support installing to a different place: if the program can tell the difference between different installation locations, then it will, because its developers thought it should go in one place on the file system, and why not hard code it? It works on their machine.
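Concretely, the only general-purpose workaround is the one just mentioned: rebuild the thing yourself with a different hard-coded prefix. For a typical autotools-style project, that looks something like this (the package name and prefix are just for illustration):

$ tar xzf someprogram-1.0.tar.gz
$ cd someprogram-1.0
$ ./configure --prefix=/opt/someprogram-1.0   # hard-code a private install location
$ make && make install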


In order to address this shortcoming of the UNIX process model, the concept of "virtualization" became popular. The idea of virtualization is simple: you write a program which emulates an entire computer, with its own storage media and network devices, and then you install an operating system on it. This completely resolves the over-sharing of resources: a process inside a virtual machine is in a very real sense running on a different computer than programs running on a different virtual machine on the same physical device.

However, virtualization is also an extremely heavy-weight blunt instrument. Since virtual machines are running operating systems designed for physical machines, they have tons of redundant hardware-management code, and enormous amounts of operating system data which could be shared with the host; but since it's in the form of a disk image totally managed by the virtual machine's operating system, the host can't really peek inside to optimize anything. It also makes other kinds of intentional resource sharing very hard: any software to manage the host needs to be installed on the host, since if it is installed on the guest it won't have full access to the host's hardware.

I hate using the term "heavy-weight" when I'm talking about software - it's often bandied about as a content-free criticism - but the difference in overhead between running a virtual machine and a process is the difference between gigabytes and kilobytes; somewhere between 4-6 orders of magnitude. That's a huge difference.

This means that you need to treat virtual machines as multi-purpose, since one VM is too big to run just a single small program. Which means you often have to manage them almost as if they were physical hardware.


When we run a program on a UNIX-like operating system, and by so running it, grant it its very own address space, we call the entity that we just created a "process".

This is how to understand a "container".

A "container" is what we get when we run a program and give it not just its own memory, but its own whole virtual filesystem and its own whole virtual network card.

The metaphor to processes isn't perfect, because a container can contain multiple processes with different memory spaces that share a single filesystem. But this is also where some of the "container ecosystem" fervor begins to creep in - this is why people interested in containers will religiously exhort you to treat a container as a single application, not to run multiple things inside it, not to SSH into it, and so on. This is because the whole point of containers is that they are lightweight - far closer in overhead to the size of a process than that of a virtual machine.

A process inside a container, if it queries the operating system, will see a computer where only it is running, where it owns the entire filesystem, and where any mounted disks were explicitly put there by the administrator who ran the container. In other words, if it wants to share data with another application, it has to be given the shared data; opt-in, not opt-out, the same way that memory-sharing is opt-in in a UNIX-like system.
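In Docker terms, for example, that opt-in sharing looks roughly like the following; the image name and paths are placeholders, and I'm assuming the image ships a shell with ls in it:

# Only the one directory we explicitly mount is visible inside the container;
# everything else the process sees comes from the image itself:
$ docker run --rm -v "$(pwd)"/shared-data:/data example/image ls /data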


So why is this so exciting?

In a sense, it really is just a lower-overhead way to run a virtual machine, as long as it shares the same kernel. That's not super exciting, by itself.

The reason that containers are more exciting than processes is the same reason that using a filesystem is more exciting than having to use a whole disk: sharing state always, inevitably, leads to brokenness. Opt-in is better than opt-out.

When you give a program a whole filesystem to itself, sharing any data explicitly, you eliminate even the possibility that some other program scribbling on a shared area of the filesystem might break it. You don't need package managers any more, only package installers; by removing the other functions of package managers (inventory, removal) they can be radically simplified, and less complexity means less brokenness.

When you give a program an entire network address to itself, exposing any ports explicitly, you eliminate even the possibility that some rogue program will expose a security hole by listening on a port you weren't expecting. You eliminate the possibility that it might clash with other programs on the same host, hard-coding the same port numbers or auto-discovering the same addresses.
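The network side of the same opt-in, again sketched in Docker terms with placeholder names and port numbers:

# The application inside the container listens on port 80, but nothing is
# reachable from outside unless we explicitly publish it on the host:
$ docker run --rm -p 8080:80 example/webapp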


In addition to the exciting things on the run-time side, containers - or rather, the things you run to get containers, "images"3 - present some compelling improvements to the build-time side.

On Linux and Windows, building a software artifact for distribution to end-users can be quite challenging. It's challenging because it's not clear how to specify that you depend on certain other software being installed; it's not clear what to do if you have conflicting versions of that software that may not be the same as the versions already available on the user's computer. It's not clear where to put things on the filesystem. On Linux, this often just means getting all of your software from your operating system distributor.

You'll notice I said "Linux and Windows"; not the usual (linux, windows, mac) big-3 desktop platforms, and I didn't say anything about mobile OSes. That's because on macOS, Android, iOS, and Windows Metro, applications already run in their own containers. The rules of macOS containers are a bit weird, and very different from Docker containers, but if you have a Mac you can check out ~/Library/Containers to see the view of the world that the applications you're running can see. iOS looks much the same.

This is something that doesn't get discussed a lot in the container ecosystem, partially because everyone is developing technology at such a breakneck pace, but in many ways Linux server-side containerization is just a continuation of a trend that started on mainframe operating systems in the 1970s and has already been picked up in full force by mobile operating systems.

When one builds an image, one is building a picture of the entire filesystem that the container will see, so an image is a complete artifact. By contrast, a package for a Linux package manager is just a fragment of a program, leaving out all of its dependencies, to be integrated later. If an image runs on your machine, it will (except in some extremely unusual circumstances) run on the target machine, because everything it needs to run is fully included.
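To give a sense of scale, building such a complete artifact can be this small; this is a sketch only, with a placeholder base image, application, and command rather than a recommendation of any particular layout:

$ cat > Dockerfile << EOF
> FROM python:3.5
> COPY . /app
> RUN pip install /app
> CMD ["exampleapp"]
> EOF
$ docker build -t exampleapp .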

Because you build all the software an image requires into the image itself, there are some implications for server management. You no longer need to apply security updates to a machine - they get applied to one application at a time, and they get applied as a normal process of deploying new code. Since there's only one update process, which is "delete the old container, run a new one with a new image", updates can roll out much faster, because you can build an image, run tests for the image with the security updates applied, and be confident that it won't break anything. No more scheduling maintenance windows, or managing reboots (at least for security updates to applications and libraries; kernel updates are a different kettle of fish).
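Sketched as shell, with placeholder image tags, container names, and a stand-in test command, that whole update-and-redeploy cycle looks something like:

$ docker build -t exampleapp:2016-12-01 .              # rebuild, picking up the updated base image
$ docker run --rm exampleapp:2016-12-01 run-the-tests  # whatever your test entry point happens to be
$ docker stop exampleapp-live && docker rm exampleapp-live
$ docker run -d --name exampleapp-live exampleapp:2016-12-01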


That's why it's exciting. So why's it all so confusing?5

Fundamentally the confusion is caused by there just being way too many tools. Why so many tools? Once you've accepted that your software should live in images, none of the old tools work any more. Almost every administrative, monitoring, or management tool for UNIX-like OSes depends intimately upon the ability to promiscuously share the entire filesystem with every other program running on it. Containers break these assumptions, and so new tools need to be built. Nobody really agrees on how those tools should work, and a wide variety of forces ranging from competitive pressure to personality conflicts make it difficult for the panoply of container vendors to collaborate perfectly4.

Many companies whose core business has nothing to do with infrastructure have gone through this reasoning process:

  1. Containers are so much better than processes, we need to start using them right away, even if there's some tooling pain in adopting them.
  2. The old tools don't work.
  3. The new tools from the tool vendors aren't ready.
  4. The new tools from the community don't work for our use-case.
  5. Time to write our own tool, just for our use-case and nobody else's! (Which causes problem #3 for somebody else, of course...)

A less fundamental reason is too much focus on scale. If you're running a small-scale web application which has a stable user-base that you don't expect a lot of growth in, there are many great reasons to adopt containers as opposed to automating your operations; and in fact, if you keep things simple, the very fact that your software runs in a container might obviate the need for a system-management solution like Chef, Ansible, Puppet, or Salt. You should totally adopt them and try to ignore the more complex and involved parts of running an orchestration system.

However, containers are even more useful at significant scale, which means that companies which have significant scaling problems invest in containers heavily and write about them prolifically. Many guides and tutorials on containers assume that you expect to be running a multi-million-node cluster with fully automated continuous deployment, blue-green zero-downtime deploys, and a 1000-person operations team. It's great if you've got all that stuff, but building each of those components is a non-trivial investment.


So, where does that leave you, my dear reader?

You should absolutely be adopting "container technology", which is to say, you should probably at least be using Docker to build your software. But there are other, radically different container systems - like Sandstorm - which might make sense for you, depending on what kind of services you create. And of course there's a huge ecosystem of other tools you might want to use; too many to mention, although I will shout out to my own employer's docker-as-a-service Carina, which delivered this blog post, among other things, to you.

You shouldn't feel as though you need to do containers absolutely "the right way", or that the value of containerization is derived from adopting every single tool that you can all at once. The value of containers comes from four very simple things:

  1. It reduces the overhead and increases the performance of co-locating multiple applications on the same hardware,
  2. It forces you to explicitly call out any shared state or required resources,
  3. It creates a complete build pipeline that results in a software artifact that can be run without special installation or set-up instructions (at least, on the "software installation" side; you still might require configuration, of course), and
  4. It gives you a way to test exactly what you're deploying.

These benefits can combine and interact in surprising and interesting ways, and can be enhanced with a wide and growing variety of tools. But underneath all the hype and the buzz, the very real benefit of containerization is basically just that it is fixing a very old design flaw in UNIX.

Containers let you share less state, and shared mutable state is the root of all evil.


  1. If you have a more sophisticated understanding of memory, disks, and networks, you'll notice that everything I'm saying here is patently false, and betrays an overly simplistic understanding of the development of UNIX and the complexities of physical hardware and driver software. Please believe that I know this; this is an alternate history of the version of UNIX that was developed on platonically ideal hardware. The messy co-evolution of UNIX, preemptive multitasking, hardware offload for networks, magnetic secondary storage, and so on, is far too large to fit into the margins of this post. 

  2. When programs break horribly like this, it's called "multithreading". I have written some software to help you avoid it. 

  3. One runs an "executable" to get a process; one runs an "image" to get a container. 

  4. Although the container ecosystem is famously acrimonious, companies in it do actually collaborate better than the tech press sometimes give them credit for; the Open Container Project is a significant extraction of common technology from multiple vendors, many of whom are also competitors, to facilitate a technical substrate that is best for the community. 

  5. If it doesn't seem confusing to you, consider this absolute gem from the hilarious folks over at CircleCI. 

docker run glyph/rproxy

Want to TLS-protect your co-located stack of vanity websites with Twisted and Let's Encrypt using HawkOwl's rproxy, but can't tolerate the bone-grinding tedium of a pip install? I've built a docker image for you, so it's now as simple as:

$ mkdir -p conf/certificates;
$ cat > conf/rproxy.ini << EOF;
> [rproxy]
> certificates=certificates
> http_ports=80
> https_ports=443
> [hosts]
> mysite.com_host=<other container host>
> mysite.com_port=8080
> EOF
$ docker run --restart=always -v "$(pwd)"/conf:/conf \
    -p 80:80 -p 443:443 \
    glyph/rproxy;

There are no docs to speak of, so if you're interested in the details, see the tree on github I built it from.

Modulo some handwaving about docker networking to get that <other container host> IP, that's pretty much it. Go forth and do likewise!
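Incidentally, one way to dispense with that handwaving - assuming the other container is on the default bridge network, and substituting your own container's name - is to ask docker for its IP directly:

$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' my-other-container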

As some of you may have guessed from the unintentional recent flurry of activity on my Twitter account, Twitterfeed - the service I used to use to post blog links automatically - is getting end-of-lifed. I've switched to dlvr.it for the time being, unless they send another unsolicited tweetstorm out on my behalf...

Sorry about the noise! In the interests of putting some actual content here, maybe you would be interested to know that I was recently interviewed for PyDev of the Week?