Not Funny

Today’s “joke” from the PSF was not funny.

What?

Today’s “joke” from the PSF about PyCon Havana was not funny, and, speaking as a PSF Fellow, I do not endorse it.

What’s Not Funny?

Honestly I’m not sure where I could find a punch-line in this. I just don’t see much there.

But if I look for something that’s supposed to be “funny”, here’s what I see:

  1. Cuba is a backward country without sufficient technology to host a technical conference, and it is absurd and therefore “funny” that we could hold PyCon there.
  2. We are talking about PyCon US; despite the recent thaw in relations, decades of hostility that have torn families apart make it “funny” that US citizens would go to Cuba for a conference.

These things aren’t funny.

Some Non-Reasons I’m Writing This

A common objection when someone speaks up about a subject like this is that it’s “just a joke”. That anyone speaking up and saying that offensive things aren’t funny somehow dislikes the very concept of humor. I don’t know why people think that, but I guess I need to make it clear: I am not an enemy of joy. That is not why I’m saying something.

I’m also not Cuban, I have no Cuban relatives, and until this incident I didn’t even know I had friends of Cuban extraction, so I am not personally insulted by this. That means another common objection will crop up: some will ask if I’m just looking for an excuse to get offended, to write about taking offense and get attention for it.

So let me assure you that, personally, this is not the kind of attention that I want. I really didn’t want to write this post. It’s awkward. I really don’t want to be having these types of conversations. I want to get attention for the software I write, not for my opinions about tacky blog posts.

Why, Then?

I might not know many Cuban python programmers personally, but I’d love to meet some. I’d love to meet anyone who cares about programming. Meeting diverse people from all over the world and working with them on code has been one of the great joys of my life. I love the fact that the Python community facilitates that and tries hard to reach out to people and to make them feel welcome.

I am writing this because I know that, somewhere out there, there’s a Cuban programmer, or a kid who will grow up to be one, who might see that blog post, and think that the Python community, or the software industry, thinks that they’re a throw-away punch line. I want them to know that I don’t think they’re a punch line. I want them to know that the Python community doesn’t think they’re a punch line. I want them to know that they are not a punch line, and I want them to pursue their interest in programming exactly as far as it takes them, not to be pushed away.

These people are real, they are listening, and if you tell me to just “lighten up” you are saying that your enjoyment of a joke is more important than their membership in our community.

It’s Not Just Me

The PSF is paying attention. The chairman of the PSF has acknowledged the problematic nature of the “joke”. Several of my friends in the Python community spoke up before I did (here, here, here, here, here, here, here), and I am very grateful for their taking the community to task and keeping us true to ideals of inclusiveness and empathy.

That doesn’t excuse the public statement, made using official channels, which was in very poor taste. I am also very disappointed in certain people within the PSF [1] who seem intent on doubling down on this mistake rather than trying to do something to correct it.


  1. names withheld to avoid a pile-on, but you know who you are and you should be ashamed. 

Deploying Python Applications with Docker - A Suggestion

A template for deploying Python applications into Docker containers.

Deploying Python applications is much trickier than it should be.

Docker can simplify this, but even with Docker, there are a lot of nuances around how you package your python application, how you build it, how you pull in your python and non-python dependencies, and how you structure your images.

I would like to share with you a strategy that I have developed for deploying Python apps that deals with a number of these issues. I don’t want to claim that this is the only way to deploy Python apps, or even a particularly right way; in the rapidly evolving containerization ecosystem, new techniques pop up every day, and everyone’s application is different. However, I humbly submit that this process is a good default.

Rather than equivocate further about its abstract goodness, here are some properties of the following container construction idiom:

  1. It reduces build times compared to a naive “sudo python setup.py install” by using Python wheels to cache repeatably built binary artifacts.
  2. It reduces container size by separating build containers from run containers.
  3. It is independent of other tooling, and should work fine with whatever configuration management or container orchestration system you want to use.
  4. It uses existing Python tooling of pip and virtualenv, and therefore doesn’t depend heavily on Docker. A lot of the same concepts apply if you have to build or deploy the same Python code into a non-containerized environment. You can also incrementally migrate towards containerization: if your deploy environment is not containerized, you can still build and test your wheels within a container and get the advantages of containerization there, as long as your base image matches the non-containerized environment you’re deploying to. This means you can quickly upgrade your build and test environments without having to upgrade the host environment on finicky continuous integration hosts, such as Jenkins or Buildbot.

To test these instructions, I used Docker 1.5.0 (via boot2docker, but hopefully that is an irrelevant detail). I also used an Ubuntu 14.04 base image (as you can see in the docker files) but hopefully the concepts should translate to other base images as well.

In order to show how to deploy a sample application, we’ll need a sample application to deploy; to keep it simple, here’s some “hello world” sample code using Klein:

# deployme/__init__.py
from klein import run, route

@route('/')
def home(request):
    request.setHeader("content-type", "text/plain")
    return 'Hello, world!'

def main():
    run("", 8081)

And an accompanying setup.py:

from setuptools import setup, find_packages

setup (
    name             = "DeployMe",
    version          = "0.1",
    description      = "Example application to be deployed.",
    packages         = find_packages(),
    install_requires = ["twisted>=15.0.0",
                        "klein>=15.0.0",
                        "treq>=15.0.0",
                        "service_identity>=14.0.0"],
    entry_points     = {'console_scripts':
                        ['run-the-app = deployme:main']}
)

Generating certificates is a bit tedious for a simple example like this one, but in a real-life application we are likely to face the deployment issue of native dependencies, so to demonstrate how to deal with that issue, this setup.py depends on the service_identity module, which pulls in cryptography (which depends on OpenSSL) and its dependency cffi (which depends on libffi).

To get started telling Docker what to do, we’ll need a base image that we can use for both build and run images, to ensure that certain things match up; particularly the native libraries that are used to build against. This also speeds up subsequent builds, by giving a nice common point for caching.

In this base image, we’ll set up:

  1. a Python runtime (PyPy)
  2. the C libraries we need (the libffi6 and openssl ubuntu packages)
  3. a virtual environment in which to do our building and packaging
# base.docker
FROM ubuntu:trusty

RUN echo "deb http://ppa.launchpad.net/pypy/ppa/ubuntu trusty main" > \
    /etc/apt/sources.list.d/pypy-ppa.list

RUN apt-key adv --keyserver keyserver.ubuntu.com \
                --recv-keys 2862D0785AFACD8C65B23DB0251104D968854915
RUN apt-get update

RUN apt-get install -qyy \
    -o APT::Install-Recommends=false -o APT::Install-Suggests=false \
    python-virtualenv pypy libffi6 openssl

RUN virtualenv -p /usr/bin/pypy /appenv
RUN . /appenv/bin/activate; pip install pip==6.0.8

The apt options APT::Install-Recommends and APT::Install-Suggests are just there to prevent python-virtualenv from pulling in a whole C development toolchain with it; we’ll get to that stuff in the build container. In the run container, which is also based on this base container, we will just use virtualenv and pip to put the already-built artifacts into the right place. Ubuntu expects virtualenv and pip to be purely development tools, which is why installing python-virtualenv would otherwise recommend the Python development toolchain as well.

You might wonder “why bother with a virtualenv if I’m already in a container”? This is belt-and-suspenders isolation, but you can never have too much isolation.

It’s true that in many cases, perhaps even most, simply installing stuff into the system Python with Pip works fine; however, for more elaborate applications, you may end up wanting to invoke a tool provided by your base container that is implemented in Python, but which requires dependencies managed by the host. By putting things into a virtualenv regardless, we keep the things set up by the base image’s package system tidily separated from the things our application is building, which means that there should be no unforeseen interactions, regardless of how complex the application’s usage of Python might be.

Next we need to build the base image, which is accomplished easily enough with a docker command like:

$ docker build -t deployme-base -f base.docker .;

Next, we need a container for building our application and its Python dependencies. The dockerfile for that is as follows:

# build.docker
FROM deployme-base

RUN apt-get install -qy libffi-dev libssl-dev pypy-dev
RUN . /appenv/bin/activate; \
    pip install wheel

ENV WHEELHOUSE=/wheelhouse
ENV PIP_WHEEL_DIR=/wheelhouse
ENV PIP_FIND_LINKS=/wheelhouse

VOLUME /wheelhouse
VOLUME /application

ENTRYPOINT . /appenv/bin/activate; \
           cd /application; \
           pip wheel .

Breaking this down, we first have it pulling from the base image we just built. Then, we install the development libraries and headers for each of the C-level dependencies we have to work with, as well as PyPy’s development toolchain itself. Then, to get ready to build some wheels, we install the wheel package into the virtualenv we set up in the base image. Note that the wheel package is only necessary for building wheels; the functionality to install them is built in to pip.

Note that we then have two volumes: /wheelhouse, where the wheel output should go, and /application, where the application’s distribution (i.e. the directory containing setup.py) should go.

The entrypoint for this image is simply running “pip wheel” with the appropriate virtualenv activated. It runs against whatever is in the /application volume, so we could potentially build wheels for multiple different applications. In this example, I’m using pip wheel . which builds the current directory, but you may have a requirements.txt which pins all your dependencies, in which case you might want to use pip wheel -r requirements.txt instead.
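For the requirements.txt case, the builder’s entrypoint might instead look something like this (a sketch only; it assumes you keep a requirements.txt at the top level of the directory mounted at /application):

# build.docker (hypothetical variant entrypoint for a pinned requirements.txt)
ENTRYPOINT . /appenv/bin/activate; \
           cd /application; \
           pip wheel -r requirements.txt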

At this point, we need to build the builder image, which can be accomplished with:

$ docker build -t deployme-builder -f build.docker .;

This builds a deployme-builder that we can use to build the wheels for the application. Since this is a prerequisite step for building the application container itself, you can go ahead and do that now. In order to do so, we must tell the builder to use the current directory as the application being built (the volume at /application) and to put the wheels into a wheelhouse directory (one called wheelhouse will do):

$ mkdir -p wheelhouse;
$ docker run --rm \
         -v "$(pwd)":/application \
         -v "$(pwd)"/wheelhouse:/wheelhouse \
         deployme-builder;

After running this, if you look in the wheelhouse directory, you should see a bunch of wheels built there, including one for the application being built:

$ ls wheelhouse
DeployMe-0.1-py2-none-any.whl
Twisted-15.0.0-pp27-none-linux_x86_64.whl
Werkzeug-0.10.1-py2-none-any.whl
cffi-0.9.0-py2-none-any.whl
# ...

At last, time to build the application container itself. The setup for that is very short, since most of the work has already been done for us in the production of the wheels:

# run.docker
FROM deployme-base

ADD wheelhouse /wheelhouse
RUN . /appenv/bin/activate; \
    pip install --no-index -f wheelhouse DeployMe

EXPOSE 8081

ENTRYPOINT . /appenv/bin/activate; \
           run-the-app

During build, this dockerfile pulls from our shared base image, then adds the wheelhouse we just produced as a directory at /wheelhouse. The only shell command that needs to run in order to get the wheels installed is pip install TheApplicationYouJustBuilt, with two options: --no-index to tell pip “don’t bother downloading anything from PyPI, everything you need should be right here”, and -f wheelhouse, which tells it where “here” is.

The entrypoint for this one activates the virtualenv and invokes run-the-app, the setuptools entrypoint defined above in setup.py, which should be on the $PATH once that virtualenv is activated.

The application build is very simple, just

$ docker build -t deployme-run -f run.docker .;

to build the docker file.

Similarly, running the application is just like any other docker container:

$ docker run --rm -it -p 8081:8081 deployme-run

You can then hit port 8081 on your docker host to load the application.
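Since treq is already among the sample application’s dependencies, a quick way to check the running container from the host might look like this (a sketch, assuming you have Twisted and treq installed locally and port 8081 published on localhost as above):

# smoke_test.py - a minimal check that the container is serving requests;
# "smoke_test" is just an illustrative name, not part of the sample project.
from __future__ import print_function
from twisted.internet.task import react
import treq

def main(reactor):
    d = treq.get("http://localhost:8081/")
    d.addCallback(treq.content)                # read the whole response body
    d.addCallback(lambda body: print(body))    # expect "Hello, world!"
    return d

react(main)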

The command-line for docker run here is just an example; I’m passing --rm just so that if you run this example it won’t clutter up your container list. Your environment will have its own way to call docker run, and its own way to get your VOLUMEs and EXPOSEd ports mapped; discussing how to orchestrate your containers is out of scope for this post. You can pretty much run it however you like. Everything the image needs is built in at this point.

To review:

  1. have a common base container that contains all your non-Python (C libraries and utilities) dependencies. Avoid installing development tools here.
  2. use a virtualenv even though you’re in a container to avoid any surprises from the host Python environment
  3. have a “build” container that adds the development toolchain and the wheel package to that virtualenv, and runs pip wheel
  4. run the build container with your application code in a volume as input and a wheelhouse volume as output
  5. create an application container by starting from the same base image and, once again not installing any dev tools, pip install all the wheels that you just built, turning off access to PyPI for that installation so it goes quickly and deterministically based on the wheels you’ve built.

While this sample application uses Twisted, it’s quite possible to apply this same process to just about any Python application you want to run inside Docker.

I’ve put a sample project up on GitHub which contains all the files referenced here, as well as “build” and “run” shell scripts that combine the necessary docker command lines to go through the full process to build and run this sample app. While it defaults to the PyPy runtime (as most networked Python apps generally should these days, since performance is so much better than CPython), if you have an application with a hard CPython dependency, I’ve also made a branch and pull request on that project for CPython, and you can look at the relatively minor patch required to get it working for CPython as well.

Now that you have a container with an application in it that you might want to deploy, my previous write-up on a quick way to securely push stuff to a production service might be of interest.

(Once again, thanks to my employer, Rackspace, for sponsoring the time for me to write this post. Thanks also to Shawn Ashlee and Jesse Spears for helping me refine these ideas and listening to me rant about them. However, that expression of gratitude should not be taken as any kind of endorsement from any of these parties as to my technical suggestions or opinions here, as they are entirely my own.)

The Report Of Our Death

The report of Twisted’s death was an exaggeration.

tulips

Lots of folks are very excited about the Tulip project, recently released as the asyncio standard library module in Python 3.4.

This module is potentially exciting for two reasons:

  1. It provides an abstract, language-blessed, standard interface for event-driven network I/O. This means that instead of every event-loop library out there having to implement everything in terms of every other event-loop library’s ideas of how things work, each library can simply implement adapters from and to those standard interfaces. These interfaces substantially resemble the abstract interfaces that Twisted has provided for a long time, so it will be relatively easy for us to adapt to them.
  2. It provides a new, high-level coroutine scheduler, providing a slightly cleaned-up syntax (return works!) and more efficient implementation (no need for a manual trampoline, it’s built straight into the language runtime via yield from) of something like inlineCallbacks.
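To make that second point concrete, here’s the same trivial coroutine in both styles, as a minimal sketch; fetch_value is a hypothetical asynchronous operation, returning a Deferred in the first version and a Future in the second:

# Twisted's inlineCallbacks: on Python 2 a generator can't "return" a
# value, so a library helper (returnValue) and a manual trampoline are used.
from twisted.internet.defer import inlineCallbacks, returnValue

@inlineCallbacks
def doubled_deferred(fetch_value):
    value = yield fetch_value()        # wait on a Deferred
    returnValue(value * 2)

# asyncio's coroutines: "yield from" is wired into the language runtime,
# and a plain "return" works.
import asyncio

@asyncio.coroutine
def doubled_future(fetch_value):
    value = yield from fetch_value()   # wait on a Future/coroutine
    return value * 2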

However, in their understandable enthusiasm, some observers of Tulip’s progress – links withheld to protect the guilty – have been forecasting Twisted’s inevitable death, or at least its inevitable consignment to the dustbin of “legacy code”.

At first I thought that this was just sour grapes from people who disliked Twisted for some reason or other, but then I started hearing this as a concern from users who had enjoyed using Twisted.

So let me reassure you that the idea that Twisted is going away is totally wrong. I’ll explain how.

The logic that leads to this belief seems to go like this:

Twisted is an async I/O thing, asyncio is an async I/O thing. Therefore they are the same kind of thing. I only need one kind of thing in each category of thing. Therefore I only need one of them, and the “standard” one is probably the better one to depend on. So I guess nobody will need Twisted any more!

The problem with this reasoning is that “an async I/O thing” is about as specific as “software that runs on a computer”. After all, Firefox is also “an async I/O thing” but I don’t think that anyone is forecasting the death of web browsers with the release of Python 3.4.

Which Is Better: OpenOffice or Linux?

Let’s begin with the most enduring reason that Twisted is not going anywhere any time soon. asyncio is an implementation of a transport layer and an event-loop API; it can move bytes into and out of your application, it can schedule timed calls to happen at some point in the future, and it can start and stop. It’s also an implementation of a coroutine scheduler; it can interleave apparently sequential logic with explicit yield points. There are also some experimental third-party extension modules available, including an event-driven HTTP server and client, and the community keeps building more stuff.

In other words, asyncio is a kernel for event-driven programming, with some applications starting to be developed.

Twisted is also an implementation of a transport layer and an event-loop API. It’s also got a coroutine scheduler, in the form of inlineCallbacks.

Twisted is also a production-quality event-driven HTTP server and client including its own event-driven templating engine, with third-party HTTP addons including server microframeworks, high-level client tools, API construction kits, robust, two-way browser communication, and automation for usually complex advanced security features.

Twisted is also an SSH client, both an API and a command-line replacement for OpenSSH. It’s also an SSH server which I have heard some people think is OK to use in production. Again, the SSH server is both an API and again as a daemon replacement for OpenSSH. (I'd say "drop-in replacement" except that neither the client nor server can parse OpenSSH configuration files. Yet.)

Twisted also has a native, symmetric event-driven message-passing protocol designed to be easy to implement in other languages and environments, making it incredibly easy to develop a custom protocol to propagate real-time events through multiple components; clients, servers, embedded devices, even web browsers.

Twisted is also a chat server you can deploy with one shell command. Twisted is also a construction kit for IRC bots. Twisted is also an XMPP client and server library. Twisted is also a DNS server and event-driven DNS client. Twisted is also a multi-protocol integrated authentication API. Twisted is also a pluggable system for creating transports to allow for third-party transport components to do things like allow you to run as a TOR hidden service with a single command-line switch and no modifications to your code.

Twisted is also a system for processing streams of geolocation data including from real hardware devices, via serial port support.

Twisted also natively implements GUI integration support for the Mac, Windows, and Linux.

I could go on.

If I were to include what third-party modules are available as well, I could go on at some considerable length.

The point is, while Twisted also has an existing kernel – largely compatible, at a conceptual level at least, with the way asyncio was designed – it also has a huge suite of functionality, both libraries and applications. Twisted is OpenOffice to asyncio’s Linux.

Of course this metaphor isn’t perfect. Of course the nascent asyncio community will come to supplant some of these things with other third-party tools. Of course there will be some duplication and some competing projects. That’s normal, and even healthy. But this is not a winner-take-all existential Malthusian competition. Being able to leverage the existing functionality within the Twisted and Tornado ecosystems – and vice versa, allowing those ecosystems to leverage new things written with asyncio – was not an accident, it was an explicit, documented design goal of asyncio.

Now And Later

Python 3 is the future of Python.

While Twisted still has a ways to go to finish porting to Python 3, you will be able to pip install the portions that already work as of the upcoming 14.0 release (which should be out any day now). So contrary to some incorrect impressions I’ve heard, the Twisted team is working to support that future.

(One of the odder things that I’ve heard people say about asyncio is that now that Python 3 has asyncio, porting Twisted is now unnecessary. I'm not sure that Twisted is necessary per se – our planet has been orbiting that cursed orb for over 4 billion years without Twisted’s help – but if you want to create an XMPP to IMAP gateway in Python it's not clear to me how having just asyncio is going to help.)

However, while Python 3 may be the future of Python, right now it is sadly just the future, and not the present.

If you want to use asyncio today, that means forgoing the significant performance benefits of pypy. (Even the beta releases of pypy3, which still routinely segfault for me, only support language version 3.2, so no “yield from” syntax.) It means cutting yourself off from a significant (albeit gradually diminishing) subset of available Python libraries.

You could use the Python 2 backport of Tulip, Trollius, but since idiomatic Tulip code relies heavily on the new yield from syntax, it’s possible to write code that works on both, just not idiomatically. Trollius actually works on Python 3 as well, but then you miss out on one of the real marquee features of Tulip.
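Concretely, the two dialects of the same coroutine look something like this (get_data is a hypothetical asynchronous operation):

import asyncio

# Tulip/asyncio on Python 3.3+: the idiomatic spelling.
@asyncio.coroutine
def fetch():
    data = yield from get_data()
    return data

import trollius
from trollius import From, Return

# Trollius on Python 2: no "yield from", and no "return" with a value
# inside a generator, so both become library calls.
@trollius.coroutine
def fetch_compat():
    data = yield From(get_data())
    raise Return(data)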

Also, while Twisted has a strict compatibility policy, asyncio is still marked as having provisional status, meaning that unlike the rest of the standard library, its API may change incompatibly in the next version of Python. While it’s unlikely that this will mean major changes for asyncio, since Python 3.4 was just released, it will be in this status for at least the next 18 months, until Python 3.5 arrives.

As opposed to the huge laundry list of functionality above, all of these reasons will eventually be invalidated, hopefully sooner rather than later; if these were the only reasons that Twisted were going to stick around, I would definitely be worried. However, they’re still reasons why today, even if you only need the pieces of an asynchronous I/O system that the new asyncio module offers, you still might want to choose Twisted’s core event loop APIs. Keep in mind that using Twisted today doesn’t cut you off from using asyncio in the future: far from it, it makes it likely that you will be able to easily integrate whatever new asyncio code you write once you adopt it. Twisted’s goal, as Laurens van Houtven eloquently explained it this year in a PyCon talk, is to work with absolutely everything, and that very definitely includes asyncio and Python 3.

My Own Feelings

I feel like asyncio is a step forward for Python, and, despite the dire consequences some people seemed to expect, a tremendous potential benefit to Twisted.

For years, while we – the Twisted team – were trying to build the “engine of your Internet”, we were also having to make a constant, tedious sales pitch for the event-driven model of programming. Even today, we’re still stuck writing rambling, digressive rants explaining why you might not want three threads for every socket, giving conference talks where we try to trick the audience into writing a callback, and trying to explain basic stuff about networking protocols to an unreceptive, frustrated audience.

This audience was unreceptive because the broader python community has been less than excited about event-driven networking and cooperative task coordination in general. It’s a vicious cycle: programmers think events look “unpythonic”, so they write their code to block. Other programmers then just want to make use of the libraries suitable to their task, and they find ones which couple (blocking) I/O together with basic data-processing tasks like parsing.

Oddly enough, I noticed a drop in the frequency that I needed to have this sort of argument once node.js started to gain some traction. Once server-side Python programmers started to hear all the time about how writing callbacks wasn't a totally crazy thing to be doing on the server, there was a whole other community to answer their questions about why that was.

With the advent of asyncio, there is functionality available in the standard library to make event-driven implementations of things immediately useful. Perhaps even more important than this functionality, there is guidance on how to make your own event-driven stuff. Every module that is written using asyncio rather than io is a module that at least can be made to work natively within Twisted without rewriting or monkeypatching it.

In other words, this has shifted the burden of arguing that event-driven programming is a worthwhile thing to do at all from Twisted to a module in the language’s core.

While it’ll be quite a while before most Python programmers are able to use asyncio on a day-to-day basis, its mere existence justifies the conceptual basis of Twisted to our core constituency of Python programmers who want to put an object on a network. Which, in turn, means that we can dedicate more of our energy to doing cool stuff with Twisted, and we can dedicate more of the time we spend educating people to explaining how to do cool things with all the crazy features Twisted provides, rather than explaining why you would even want to write all these weird callbacks in the first place.

So Tulip is a good thing for Python, a good thing for Twisted, I’m glad it exists, and it doesn't make me worried at all about Twisted’s future.

Quite the opposite, in fact.

Unyielding

Be as the reed, not the oak tree. Green threads are just threads.

The Oak and the Reed by Achille Michallon

… that which is hard and stiff
is the follower of death
that which is soft and yielding
is the follower of life …

the Tao Te Ching, chapter 76

Problem: Threads Are Bad

As we know, threads are a bad idea, (for most purposes). Threads make local reasoning difficult, and local reasoning is perhaps the most important thing in software development.

With the word “threads”, I am referring to shared-state multithreading, despite the fact that there are languages, like Erlang and Haskell, which refer to concurrent processes – those which do not implicitly share state, and require explicit coordination – as “threads”.

My experience is mainly (although not exclusively) with Python but the ideas presented here should generalize to most languages which have global shared mutable state by default, which is to say, quite a lot of them: C (including Original Recipe, Sharp, Extra Crispy, Objective, and Plus Plus), JavaScript, Java, Scheme, Ruby, and PHP, just to name a few.

With the phrase “local reasoning”, I’m referring to the ability to understand the behavior (and thereby, the correctness) of a routine by examining the routine itself rather than examining the entire system.

When you’re looking at a routine that manipulates some state, in a single-tasking, nonconcurrent system, you only have to imagine the state at the beginning of the routine, and the state at the end of the routine. To imagine the different states, you need only to read the routine and imagine executing its instructions in order from top to bottom. This means that the number of instructions you must consider is n, where n is the number of instructions in the routine. By contrast, in a system with arbitrary concurrent execution – one where multiple threads might concurrently execute this routine with the same state – you have to read the method in every possible order, making the complexity nⁿ.

Therefore it is – literally – exponentially more difficult to reason about a routine that may be executed from an arbitrary number of threads concurrently. Instead, you need to consider every possible caller across your program, understanding what threads they might be invoked from, or what state they might share. If you’re writing a library designed to be thread-safe, then you must place some of the burden of this understanding on your caller.
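To see how quickly this bites in practice, here is a tiny in vitro demonstration (a sketch; exact results vary by interpreter and timing):

import threading

counter = 0

def increment():
    global counter
    for _ in range(100000):
        counter += 1   # read-modify-write: three steps, not one atomic step

threads = [threading.Thread(target=increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Two threads each adding 100000 "should" produce 200000, but a context
# switch between the read and the write loses updates, so the printed
# total is often smaller.
print(counter)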

The importance of local reasoning really cannot be overstated. Computer programs are, at least for the time being, constructed by human beings who are thinking thoughts. Correct computer programs are constructed by human beings who can simultaneously think thoughts about all the interactions that the portion of the system they’re developing will have with other portions.

A human being can only think about seven things at once, plus or minus two. Therefore, although we may develop software systems that contain thousands, millions, or billions of components over time, we must be able to make changes to that system while only holding in mind an average of seven things. Really bad systems will make us concentrate on nine things and we will only be able to correctly change them when we’re at our absolute best. Really good systems will require us to concentrate on only five, and we might be able to write correct code for them even when we’re tired.

Aside: “Oh Come On They’re Not That Bad”

Those of you who actually use threads to write real software are probably objecting at this point. “Nobody would actually try to write free-threading code like this,” I can hear you complain, “Of course we’d use a lock or a queue to introduce some critical sections if we’re manipulating state.”

Mutexes can help mitigate this combinatorial explosion, but they can’t eliminate it, and they come with their own cost; you need to develop strategies to ensure consistent ordering of their acquisition. Mutexes should really be used to build queues, and to avoid deadlocks those queues should be non-blocking; but eventually a system which communicates exclusively through non-blocking queues effectively becomes a set of communicating event loops, and its problems revert to those of an event-driven system; it doesn’t look like regular programming with threads any more.
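To make “consistent ordering of their acquisition” concrete, here is the classic deadlock that such ordering disciplines exist to prevent, as a minimal sketch:

import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def one():
    with lock_a:
        with lock_b:   # may block forever waiting on two()
            pass

def two():
    with lock_b:
        with lock_a:   # may block forever waiting on one()
            pass

# If one() and two() run concurrently, each can acquire its first lock
# and then wait forever for the other's: a deadlock.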

But even if you build such a system, if you’re using a language like Python (or the ones detailed above) where modules, classes, and methods are all globally shared, mutable state, it’s always possible to make an error that will affect the behavior of your whole program without even realizing that you’re interacting with state at all. You have to have a level of vigilance bordering on paranoia just to make sure that your conventions around where state can be manipulated and by whom are honored, because when such an interaction causes a bug it’s nearly impossible to tell where it came from.

Of course, threads are just one source of inscrutable, brain-bending bugs, and quite often you can make workable assumptions that preclude you from actually having to square the complexity of every single routine that you touch; for one thing, many computations don’t require manipulating state at all, and you can (and must) ignore lots of things that can happen on every line of code anyway. (If you think not, when was the last time you audited your code base for correct behavior in the face of memory allocation failures?) So, in a sense, it’s possible to write real systems with threads that perform more or less correctly for the same reasons it’s possible to write any software approximating correctness at all; we all need a little strength of will and faith in our holy cause sometimes.

Nevertheless I still think it’s a bad idea to make things harder for ourselves if we can avoid it.

Solution: Don’t Use Threads

So now I’ve convinced you that if you’re programming in Python (or one of its moral equivalents with respect to concurrency and state) you shouldn’t use threads. Great. What are you going to do instead?

There’s a lot of debate over the best way to do “asynchronous” programming – that is to say, “not threads”. Four options are often presented.

  1. Straight callbacks: Twisted’s IProtocol, JavaScript’s on<foo> idiom, where you give a callback to something which will call it later and then return control to something (usually a main loop) which will execute those callbacks,
  2. “Managed” callbacks, or Futures: Twisted’s Deferred, JavaScript’s Promises/A[+], E’s Promises, where you create a dedicated result-that-will-be-available-in-the-future object and return it for the caller to add callbacks to,
  3. Explicit coroutines: Twisted’s @inlineCallbacks, Tulip’s yield from coroutines, C#’s async/await, where you have a syntactic feature that explicitly suspends the current routine,
  4. and finally, implicit coroutines: Java’s “green threads”, Twisted’s Corotwine, eventlet, gevent, where any function may switch the entire stack of the current thread of control by calling a function which suspends it.
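For flavor, here are minimal sketches of the first three options; start_fetch, fetch_data, and on_data are hypothetical names, not any particular library’s API:

import asyncio

# 1. Straight callbacks: hand a function to an API which will call it
#    later, from the main loop.
def on_data(data):
    print("got", data)

start_fetch(on_data)

# 2. Managed callbacks / Futures: fetch_data returns a Deferred (or
#    Promise) immediately, and callbacks are attached to it.
d = fetch_data()
d.addCallback(lambda data: print("got", data))

# 3. Explicit coroutines: a syntactic marker (yield from) suspends the
#    routine until the result is available.
@asyncio.coroutine
def show():
    data = yield from fetch_data()
    print("got", data)

(Option 4 is deliberately absent from that sketch: its defining feature is that there is no visible marker to show.)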

One of these things is not like the others; one of these things just doesn’t belong.

Don’t Use Those Threads Either

Options 1-3 are all ways of representing the cooperative transfer of control within a stateful system. They are a semantic improvement over threads. Callbacks, Futures, and Yield-based coroutines all allow for local reasoning about concurrent operations.

So why does option 4 even show up in this list?

Unfortunately, “asynchronous” systems have often been evangelized by emphasizing a somewhat dubious optimization which allows for a higher level of I/O-bound concurrency than with preemptive threads, rather than the problems with threading as a programming model that I’ve explained above. By characterizing “asynchronousness” in this way, it makes sense to lump all 4 choices together.

I’ve been guilty of this myself, especially in years past: saying that a system using Twisted is more efficient than one using an alternative approach using threads. In many cases that’s been true, but:

  1. the situation is almost always more complicated than that, when it comes to performance,
  2. “context switching” is rarely a bottleneck in real-world programs, and
  3. it’s a bit of a distraction from the much bigger advantage of event-driven programming, which is simply that it’s easier to write programs at scale, in both senses (that is, programs containing lots of code as well as programs which have many concurrent users).

A system that presents “implicit coroutines” – those which may transfer control to another concurrent task at any layer of the stack without any syntactic indication that this may happen – is simply the dubious optimization by itself.

Despite the fact that implicit coroutines masquerade under many different names, many of which don’t include the word “thread” – for example, “greenlets”, “coroutines”, “fibers”, “tasks” – green or lightweight threads are indeed threads, in that they present these same problems. In the long run, when you build a system that relies upon them, you eventually have all the pitfalls and dangers of full-blown preemptive threads. Which, as shown above, are bad.

When you look at the implementation of a potentially concurrent routine written using callbacks or yielding coroutines, you can visually see exactly where it might yield control, either to other routines, or perhaps even re-enter the same routine concurrently. If you are using callbacks – managed or otherwise – you will see a return statement, or the termination of a routine, which allows execution of the main loop to potentially continue. If you’re using explicit coroutines, you’ll see a yield (or await) statement which suspends the coroutine. Because you can see these indications of potential concurrency, they’re outside of your mind, in your text editor, and you don’t need to actively remember them as you’re working on them.

You can think of these explicit yield-points as places where your program may gracefully bend to the needs of concurrent inputs. Crumple zones, or relief valves, for your logic, if you will: a single point where you have to consider the implications of a transfer of control to other parts of your program, rather than a rigid routine which might transfer (break) at any point beyond your control.

Like crumple zones, you shouldn’t have too many of them, or they lose their effectiveness. A long routine which has an explicit yield point before every single instruction requires just as much out-of-order reasoning, and is therefore just as error-prone as one which has none, but might context switch before any instruction anyway. The advantage of having to actually insert the yield point explicitly is that at least you can see when a routine has this problem, and start to clean up and consolidate the management of its concurrency.

But this is all pretty abstract; let me give you a specific practical example, and a small theoretical demonstration.

The Buggiest Bug

Brass Cockroach - Image Credit GlamourGirlBeads http://www.etsy.com/listing/62042780/large-antiqued-brass-cockroach1-ants3074

When we wrote the very first version of Twisted Reality in Python, the version we had previously written in Java was already using green threads; at the time, the JVM didn’t have any other kind of threads. The advantage to the new networking layer that we developed was not some massive leap forward in performance (the software in question was a multiplayer text adventure, which at the absolute height of its popularity might have been played by 30 people simultaneously) but rather the dramatic reduction in the number and severity of horrible, un-traceable concurrency bugs. One, in particular, involved a brass, mechanical cockroach which would crawl around on a timer, leaping out of a player’s hands if it was in their inventory, moving between rooms if not. In the multithreaded version, the cockroach would leap out of your hands but then also still stay in your hands. As the cockroach moved between rooms it would create shadow copies of itself, slowly but inexorably creating a cockroach apocalypse as tens of thousands of pointers to the cockroach, each somehow acquiring their own timer, scuttled their way into every player’s inventory dozens of times.

Given that the feeling that this particular narrative feature was supposed to inspire was eccentric whimsy and not existential terror, the non-determinism introduced by threads was a serious problem. Our hope for the event-driven re-write was simply that we’d be able to diagnose the bug by single-stepping through a debugger; instead, the bug simply disappeared. (Echoes of this persist, in that you may rarely hear a particularly grizzled Twisted old-timer refer to a particularly intractable bug as a “brass cockroach”.)

The original source of the bug was so completely intractable that the only workable solution was to re-write the entire system from scratch. Months of debugging and testing and experimenting could still reproduce it only intermittently, and several “fixes” (read: random, desperate changes to the code) never resulted in anything.

I’d rather not do that ever again.

Ca(sh|che Coherent) Money

Despite the (I hope) entertaining nature of that anecdote, it still might be somewhat hard to visualize how concurrency results in a bug like that, and the code for that example is far too sprawling to be useful as an explanation. So here's a smaller in vitro example. Take my word for it that the source of the above bug was the result of many, many intersecting examples of the problem described below.

As it happens, this is the same variety of example Guido van Rossum gives when he describes why he chose to use explicit coroutines instead of green threads for the upcoming standard library asyncio module, born out of the “tulip” project, so it’s happened to more than one person in real life.

Photo Credit: Ennor https://www.flickr.com/photos/ennor/441394582/sizes/l/

Let’s say we have this program:

def transfer(amount, payer, payee, server):
    if not payer.sufficient_funds_for_withdrawal(amount):
        raise InsufficientFunds()
    log("{payer} has sufficient funds.", payer=payer)
    payee.deposit(amount)
    log("{payee} received payment", payee=payee)
    payer.withdraw(amount)
    log("{payer} made payment", payer=payer)
    server.update_balances([payer, payee])

(I realize that the ordering of operations is a bit odd in this example, but it makes the point easier to demonstrate, so please bear with me.)

In a world without concurrency, this is of course correct. If you run transfer twice in a row, the balance of both accounts is always correct. But if we were to run transfer with the same two accounts in an arbitrary number of threads simultaneously, it is (obviously, I hope) wrong. One thread could update a payer’s balance below the funds-sufficient threshold after the check to see if they’re sufficient, but before issuing the withdrawal.

So, let’s make it concurrent, in the PEP 3156 style. That update_balances routine looks like it probably has to do some network communication and block, so let’s consider that it is as follows:

@coroutine
def transfer(amount, payer, payee, server):
    if not payer.sufficient_funds_for_withdrawal(amount):
        raise InsufficientFunds()
    log("{payer} has sufficient funds.", payer=payer)
    payee.deposit(amount)
    log("{payee} received payment", payee=payee)
    payer.withdraw(amount)
    log("{payer} made payment", payer=payer)
    yield from server.update_balances([payer, payee])

So now we have a trivially concurrent, correct version of this routine, although we did have to update it a little. Regardless of what sufficient_funds_for_withdrawal, deposit, and withdraw do - even if they do network I/O - we know that we aren’t waiting for any of them to complete, so they can’t cause transfer to interfere with itself. For the sake of a brief example here, we’ll have to assume update_balances is a bit magical; for this to work our reads of the payer and payee’s balance must be consistent.

But if we were to use green threads as our “asynchronous” mechanism rather than coroutines and yields, we wouldn’t need to modify the program at all! Isn’t that better? And only update_balances blocks anyway, so isn’t it just as correct?

Sure: for now.

But now let’s make another, subtler code change: our hypothetical operations team has requested that we put all of our log messages into a networked log-gathering system for analysis. A reasonable request, so we alter the implementation of log to write to the network.

Now, what will we have to do to modify the green-threaded version of this code? Nothing! This is usually the point where fans of various green-threading systems will point and jeer, since once the logging system is modified to do its network I/O, you don’t even have to touch the code for the payments system. Separation of concerns! Less pointless busy-work! Looks like the green-threaded system is winning.

Oh well. Since I’m still a fan of explicit concurrency management, let’s do the clearly unnecessary busy-work of updating the ledger code.

@coroutine
def transfer(amount, payer, payee, server):
    if not payer.sufficient_funds_for_withdrawal(amount):
        raise InsufficientFunds()
    yield from log("{payer} has sufficient funds.", payer=payer)
    payee.deposit(amount)
    yield from log("{payee} received payment", payee=payee)
    payer.withdraw(amount)
    yield from log("{payer} made payment", payer=payer)
    yield from server.update_balances([payer, payee])

Well okay, at least that wasn’t too hard, if somewhat tedious. Sigh. I guess we can go update all of the ledger’s callers now and update them too…

…wait a second.

In order to update this routine for a non-blocking version of log, we had to type a yield from between the sufficient_funds_for_withdrawal check and the deposit call, between the deposit and the withdraw call, and between the withdraw and the update_balances call. If we know a little about concurrency and a little about what this program is doing, we know that every one of those yield froms is a potential problem. If those log calls start to back up and block, a payer may have their account checked for sufficient funds, then funds could be deducted while a log message is going on, leaving them with a negative balance.

If we were in the middle of updating lots of code, we might have blindly added these yield keywords without noticing that mistake. I've certainly done that in the past, too. But just the mechanical act of typing these out is an opportunity to notice that something’s wrong, both now and later. Even if we get all the way through making the changes without realizing the problem, when we notice that balances are off, we can look only (reasoning locally!) at the transfer routine and realize, when we look at it, based on the presence of the yield from keywords, that there is something wrong with the transfer routine itself, regardless of the behavior of any of the things it’s calling.

In the process of making all these obviously broken modifications, another thought might occur to us: do we really need to wait for log messages to be transmitted to the logging system before moving on with our application logic? The answer would almost always be “no”. A smart implementation of log could simply queue some outbound messages to the logging system, then discard messages if too many are buffered, removing any need for its caller to honor backpressure or slow down if the logging system can’t keep up. Consider the way syslog says “and N more” instead of logging certain messages repeatedly. That feature allows it to avoid filling up logs with repeated messages, and decreases the amount of stuff that needs to be buffered if writing the logs to disk is slow.
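A sketch of what such a log implementation might look like (names hypothetical; the point is only that callers return immediately):

from collections import deque

MAX_BUFFERED = 1000

# A bounded buffer: when it fills up, the oldest messages are silently
# dropped instead of making callers wait on the logging system.
_pending_logs = deque(maxlen=MAX_BUFFERED)

def log(format, **fields):
    # Appends and returns immediately; a separate consumer task drains
    # _pending_logs to the network, and could summarize what it dropped
    # in the style of syslog's "and N more".
    _pending_logs.append((format, fields))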

All the extra work you need to do to update all the callers of log when you make it asynchronous is therefore a feature. Tedious as it may be, the asynchronousness of an individual function is, in fact, something that all of its callers must be aware of, just as they must be aware of its arguments and its return type.

In fact you are changing its return type: in Twisted, that return type would be Deferred, and in Tulip, that return type is a new flavor of generator. This new return type represents the new semantics that happen when you make a function start having concurrency implications.

Haskell does this as well, by embedding the IO monad in the return type of any function which needs to have side-effects. This is what certain people mean when they say Deferreds are a Monad.

The main difference between lightweight and heavyweight threads is that, with rigorous application of strict principles like “never share any state unnecessarily” and “always write tests for every routine at every point where it might suspend”, lightweight threads make it at least possible to write a program that will behave deterministically and correctly, assuming you understand it in its entirety. When you find a surprising bug in production, because a routine is now suspending in a place it wasn’t before, it’s possible with a lightweight threading system to write a deterministic test that will exercise that code path. With heavyweight threads, any line could be the position of a context switch at any time, so it’s just not tractable to write tests for every possible order of execution.

However, with lightweight threads, you still can’t write a test to discover when a new yield point might be causing problems, so you're still always playing catch-up.

Although it’s possible to write such a program, it remains very challenging. As I described above, in languages like Python, Ruby, JavaScript, and PHP, even the code itself is shared, mutable state. Classes, types, functions, and namespaces are all shared, and all mutable. Libraries like object relational mappers commonly store state on classes.

No Shortcuts

Despite the great deal of badmouthing of threads above, my main purpose in writing this was not to convince you that threads are, in fact, bad. (Hopefully, you were convinced before you started reading this.) What I hope I’ve demonstrated is that if you agree with me that threading has problematic semantics, and is difficult to reason about, then there’s no particular advantage to using microthreads, beyond potentially optimizing your multithreaded code for a very specific I/O bound workload.

There are no shortcuts to making single-tasking code concurrent. It's just a hard problem, and some of that hard problem is reflected in the difficulty of typing a bunch of new concurrency-specific code.

So don’t be fooled: a thread is a thread regardless of its color. If you want your program to be supple and resilient in the face of concurrency, when the storm of concurrency blows, allow it to change. Tell it to yield, just like the reed. Otherwise, just like the steadfast and unchanging oak tree in the storm, your steadfast and unchanging algorithms will break right in half.