Toward a “Kernel Python”

The life changing magic of a minimal standard library.

Prompted by Amber Brown’s presentation at the Python Language Summit last month, Christian Heimes has followed up on his own earlier work on slimming down the Python standard library and created a proper Python Enhancement Proposal, PEP 594, for removing obviously obsolete and unmaintained detritus from the standard library.

PEP 594 is great news for Python, and in particular for the maintainers of its standard library, who can now address a reduced surface area. A brief trip through the PEP’s rogues’ gallery of modules to deprecate or remove1 is illuminating. The Python standard library contains plenty of useful modules, but it also hides a veritable necropolis of code, a towering monument to obsolescence, threatening to topple over on its maintainers at any point.

However, I believe the PEP may be approaching the problem from the wrong direction. Currently, the standard library is maintained in tandem with, and by the maintainers of, the CPython python runtime. Large portions of it are simply included in the hope that it might be useful to somebody. In the aforementioned PEP, you can see this logic at work in defense of the colorsys module: why not remove it? “The module is useful to convert CSS colors between coordinate systems. [It] does not impose maintenance overhead on core development.”

There was a time when Internet access was scarce, and maybe it was helpful to pre-load Python with lots of stuff so it could be pre-packaged with the Python binaries on the CD-ROM when you first started learning.

Today, however, the modules you need to convert colors between coordinate systems are only a pip install away. The bigger core interpreter is just more to download before you can get started.

Why Didn’t You Review My PR?

So let’s examine that claim: does a tiny module like colorsys “impose maintenance overhead on core development”?

The core maintainers have enough going on just trying to maintain the huge and ancient C codebase that is CPython itself. As Mariatta put it in her North Bay Python keynote, the most common question that core developers get is “Why haven’t you looked at my PR?” And the answer? It’s easier to not look at PRs when you don’t care about them. This from a talk about what it means to be a core developer!

One might ask whether Twisted has the same problem. Twisted is a big collection of loosely-connected modules too; a sort of standard library for networking. Are clients and servers for SSH, IMAP, HTTP, TLS, et al. all a bit much to try to cram into one package?

I’m compelled to reply: yes. Twisted is monolithic because it dates back to a similar historical period as CPython, where installing stuff was really complicated. So I am both sympathetic and empathetic towards CPython’s plight.

At some point, each sub-project within Twisted should ideally become a separate project with its own repository, CI, website, and of course its own more focused maintainers. We’ve been slowly splitting out projects already, where we can find a natural boundary. Some things that started in Twisted like constantly and incremental have been split out; deferred and filepath are in the process of getting that treatment as well. Other projects absorbed into the org continue to live separately, like klein and treq. As we figure out how to reduce the overhead of setting up and maintaining the CI and release infrastructure for each of them, we’ll do more of this.


But is our monolithic nature the most pressing problem, or even a serious problem, for the project? Let’s quantify it.

As of this writing, Twisted has 5 outstanding un-reviewed pull requests in our review queue. The median time a ticket spends in review is roughly four and a half days.2 The oldest ticket in our queue dates from April 22, which means it’s been less than 2 months since our oldest un-reviewed PR was submitted.

It’s always a struggle to find enough maintainers and enough time to respond to pull requests. Subjectively, it does sometimes feel like “Why won’t you review my pull request?” is a question we do still get all too often. We aren’t always doing this well, but all in all, we’re managing; the queue hovers between 0 at its lowest and 25 or so during a bad month.

By comparison to those numbers, how is core CPython doing?

Looking at CPython’s keyword-based review queue, we can see that there are 429 tickets currently awaiting review. The oldest PR awaiting review hasn’t been touched since February 2, 2018, which makes it almost 500 days old.

How many are interpreter issues and how many are stdlib issues? Clearly review latency is a problem, but would removing the stdlib even help?

For a quick and highly unscientific estimate, I scanned the first (oldest) page of PRs in the query above. By my subjective assessment, on this page of 25 PRs, 14 were about the standard library, 10 were about the core language or interpreter code; one was a minor documentation issue that didn’t really apply to either. If I can hazard a very rough estimate based on this proportion, somewhere around half of the unreviewed PRs might be in standard library code.


So the first reason the CPython core team needs to stop maintaining the standard library is that they literally don’t have the capacity to maintain it. Or, to put it differently: they aren’t maintaining it, and what remains is to admit that and start splitting it out.

It’s true that none of the open PRs on CPython are in colorsys3. It does not, in fact, impose maintenance overhead on core development. Core development imposes maintenance overhead on it. If I wanted to update the colorsys module to be more modern - perhaps to have a Color object rather than a collection of free functions, perhaps to support integer color models - I’d likely have to wait 500 days, or more, for a review.
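To make that concrete, here’s a quick sketch of the sort of modernization I mean. The Color class and its methods are hypothetical; the colorsys functions it wraps are the real stdlib ones.

import colorsys
from dataclasses import dataclass

@dataclass(frozen=True)
class Color:
    # Hypothetical wrapper: channels are floats in the 0.0-1.0 range,
    # which is what colorsys expects.
    red: float
    green: float
    blue: float

    def as_hls(self):
        return colorsys.rgb_to_hls(self.red, self.green, self.blue)

    @classmethod
    def from_hls(cls, hue, lightness, saturation):
        return cls(*colorsys.hls_to_rgb(hue, lightness, saturation))

print(Color(1.0, 0.5, 0.0).as_hls())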

As a result, code in the standard library is harder to change, which means its users are less motivated to contribute to it. CPython’s unusually infrequent releases also slow down the development of library code and decrease the usefulness of feedback from users. It’s no accident that almost all of the modules in the standard library have actively maintained alternatives outside of it: it’s not a failure on the part of the stdlib’s maintainers. The whole process is set up to produce stagnation in all but the most frequently used parts of the stdlib, and that’s exactly what it does.

New Environments, New Requirements

Perhaps even more important is the fact that bundling CPython together with the definition of the standard library privileges CPython itself, and the use-cases that it supports, above every other implementation of the language.

Podcast after podcast after podcast after keynote tells us that in order to keep succeeding and expanding, Python needs to grow into new areas: particularly web frontends, but also mobile clients, embedded systems, and console games.

These environments require one or both of:

  • a completely different runtime, such as Brython or MicroPython, or
  • a stripped-down version of the standard library that elides most of its modules.

In all of these cases, determining which modules have been removed from the standard library is a sticking point. They have to be discovered by a process of trial and error; notably, a process completely different from the standard process for determining dependencies within a Python application. There’s no install_requires declaration you can put in your setup.py that indicates that your library uses a stdlib module that your target Python runtime might leave out due to space constraints.
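To illustrate the gap, here’s a minimal, hypothetical setup.py sketch: third-party dependencies get a declaration that installers can act on, while the stdlib modules the code imports are invisible to the packaging toolchain. (The project and module names here are made up.)

# Hypothetical project showing the stdlib declaration gap.
from setuptools import setup

setup(
    name="example-color-tool",
    version="0.1.0",
    py_modules=["color_tool"],
    # pip can discover, resolve, and install these:
    install_requires=["attrs >= 19.3"],
    # ...but if color_tool.py does `import colorsys`, there is no field here
    # to record that, so a runtime that omits colorsys fails only at import time.
)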

You can have this problem even if all you ever use is the standard Python on your Linux installation. Even server- and desktop-class Linux distributions have the same need for a more minimal core Python package, and so they already chop up the standard library somewhat arbitrarily. This can break the expectations of many Python codebases, and result in bugs where even pip install won’t work.

Take It All Out

How about the suggestion that we should do only a little a day? Although it sounds convincing, don’t be fooled. The reason you never seem to finish is precisely because you tidy a little at a time. [...] The ultimate secret of success is this: If you tidy up in one shot, rather than little by little, you can dramatically change your mind-set.

Kondō, Marie.
“The Life-Changing Magic of Tidying Up”
(pp. 15-16)

While incremental slimming of the standard library is a step in the right direction, incremental change can only get us so far. As Marie Kondō says, when you really want to tidy up, the first step is to take everything out so that you can really see everything, and put back only what you need.

It’s time to thank those modules which do not spark joy and send them on their way.

We need a “kernel” version of Python that contains only the most absolutely minimal library, so that all implementations can agree on a core baseline that gives you a “python”, and applications, even those that want to run on web browsers or microcontrollers, can simply state their additional requirements in terms of requirements.txt.

Now, there are some business environments where adding things to your requirements.txt is a fraught, bureaucratic process, and in those places, a large standard library might seem appealing. But “standard library” is a purely arbitrary boundary that the procurement processes in such places have drawn, and an equally arbitrary line may be easily drawn around a binary distribution.

So it may indeed be useful for some CPython binary distributions — perhaps even the official ones — to still ship with a broader selection of modules from PyPI. Even for the average user, in order to use it for development, you’d at the very least need enough of the stdlib for pip to bootstrap itself, so that it can install the other modules you need!

It’s already the case, today, that pip is distributed with Python, but isn’t maintained in the CPython repository. What the default Python binary installer ships with is already a separate question from what is developed in the CPython repo, or what ships in the individual source tarball for the interpreter.

In order to use Linux, you need bootable media with a huge array of additional programs. That doesn’t mean the Linux kernel itself is in one giant repository, where the hundreds of applications you need for a functioning Linux server are all maintained by one team. The Linux kernel project is immensely valuable, but functioning operating systems which use it are built from the combination of the Linux kernel and a wide variety of separately maintained libraries and programs.

Conclusion

The “batteries included” philosophy was a great fit for the time when it was created: a booster rocket to sneak Python into the imagination of the programming public. As the open source and Python packaging ecosystems have matured, however, this strategy has not aged well, and like any booster, we must let it fall back to earth, lest it drag us back down with it.

New Python runtimes, new deployment targets, and new developer audiences all present tremendous opportunities for the Python community to soar ever higher.

But to do it, we need a newer, leaner, unburdened “kernel” Python. We need to dump the whole standard library out on the floor, adding back only the smallest bits that we need, so that we can tell what is truly necessary and what’s just nice to have.

I hope I’ve convinced at least a few of you that we need a kernel Python.

Now: who wants to write the PEP?

🚀

Acknowledgments

Thanks to Jean-Paul Calderone, Donald Stufft, Alex Gaynor, Amber Brown, Ian Cordasco, Jonathan Lange, Augie Fackler, Hynek Schlawack, Pete Fein, Mark Williams, Tom Most, Jeremy Thurgood, and Aaron Gallagher for feedback and corrections on earlier drafts of this post. Any errors of course remain my own.


  1. sunau, xdrlib, and chunk are my personal favorites. 

  2. Yeah, yeah, you got me, the mean is 102 days. 

  3. Well, as it turns out, one is on colorsys, but it’s a documentation fix that Alex Gaynor filed after reviewing a draft of this post so I don’t think it really counts. 

Tips And Tricks for Shipping a PyGame App on the Mac

A quick and dirty guide to getting that little PyGame hack you did up and running on someone else’s Mac.

I have written a tool you can actually use rather than copying and pasting shell-script snippets, which you can read about in a new post here. I've done my best to update the accuracy of the information below as well, particularly with respect to which Python you want and why, but it is a much older post and I could easily have missed something.

I’ve written and spoken at some length about shipping software in the abstract. Sometimes I’ve even had the occasional concrete tidbit, but that advice wasn’t really complete.

In honor of Eevee’s delightful Games Made Quick???, I’d like to help you package your games even quicker than you made them.

Who is this for?

About ten years ago I made a prototype of a little PyGame thing which I wanted to share with a few friends. Building said prototype was quick and fun, and very different from the usual sort of work I do. But then, the project got just big enough that I started to wonder if it would be possible to share the result, and thus began the long winter of my discontent with packaging tools.

I might be the only one, but... I don’t think so. The history of PyWeek, for example, looks to be a history of games distributed as GitHub repositories, or, at best, apps which don’t launch. It seems like people who participate in game jams with Unity push a button and publish their games to Steam; people who participate in game jams with Python wander away once the build toolchain defeats them.

So: perhaps you’re also a Python programmer, and you’ve built something with PyGame, and you want to put it on your website so your friends can download it. Perhaps many or most of your friends and family are Mac users. Perhaps you tried to make a thing with py2app once, and got nothing but inscrutable tracebacks or corrupt app bundles for your trouble.

If so, read on and enjoy.

What changed?

If things didn’t work for me when I first tried to do this, what’s different now?

  • the packaging ecosystem in general is far less buggy, and py2app’s dependencies, like setuptools, have become far more reliable as well. Many thanks to Donald Stufft and the whole PyPA for that.
  • Binary wheels exist, and the community has been getting better and better at building self-contained wheels which include any necessary C libraries, relieving the burden on application authors to figure out gnarly C toolchain issues.
  • The PyGame project now ships just such wheels for a variety of Python versions on Mac, Windows, and Linux, which removes a whole huge pile of complexity both in generally understanding the C toolchain and specifically understanding the SDL build process.
  • py2app has been actively maintained and many bugs have been fixed - many thanks to Ronald Oussoren et al. for that.
  • I finally broke down and gave Apple a hundred dollars so I can produce an app that normal humans might actually be able to run.

There are still weird little corner cases you have to work around — hence this post – but mostly this is the story of how years of effort by the Python packaging community have resulted in tools that are pretty close to working out of the box now.

Step 0: Development Setup

You will want to use a virtual environment for development.

Finally: pip install all your requirements into your virtualenv, including PyGame itself.

Step 1: Make an icon

All good apps need an icon, right?

When I was young, one would open up ResEdit (then Resorcerer, then MPW, then CodeWarrior, then Project Builder, then Icon Composer, and now Xcode) and create a new ICON resource (then a cicn resource, then a .tiff file, and now an .icns file). Nowadays there’s some weird opaque stuff with xcassets files and Contents.json and “Copy Bundle Resources” in the default Swift and Objective C project templates and honestly I can’t be bothered to keep track of what’s going on with this nonsense any more.

Luckily the OS ships with the macOS-specific “scriptable image processing system”, sips, which can helpfully convert an icon for you. Make yourself a 512x512 PNG file in your favorite image editor (with an alpha channel!) that you want to use as your icon, then run it through something like this:

$ sips -s format icns Icon.png --out Icon.icns

somewhere in your build process, to produce an icon in the appropriate format.

There’s also one additional wrinkle with PyGame: once you’ve launched the game, PyGame helpfully assigns the cute, but ugly, default PyGame icon to your running process. To avoid this, you’ll need these two lines somewhere in your initialization code, before pygame.display.init (or, for that matter, pygame.display.<anything>):

from pygame.sdlmain_osx import InstallNSApplication
InstallNSApplication()

Obviously this is pretty Mac-specific, so you probably want to put it behind some kind of platform-detection conditional.
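A minimal sketch of what I mean, using the standard sys.platform check for macOS:

import sys

if sys.platform == "darwin":
    # Mac-only: replace PyGame's default dock icon before the display starts.
    from pygame.sdlmain_osx import InstallNSApplication
    InstallNSApplication()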

Step 2: Include All The Dang Files, I Don’t Care About Performance

Unfortunately py2app still tries really hard to jam all your code into a .zip file, which breaks the world in various hilarious ways. Your app will probably have some resources you want to load, as will PyGame itself.

Supposedly, packages=["your_package"] in your setup.py should address this, and it comes with a “pygame” recipe, but neither of these things worked for me. Instead, I convinced py2app to splat out all the files by using the not-quite-public “recipe” plugin API:

import py2app.recipes
import py2app.build_app

from setuptools import find_packages, setup

pkgs = find_packages(".")

class recipe_plugin(object):
    # A minimal py2app "recipe": whatever appears in the returned "packages"
    # list gets copied into the bundle as plain files rather than zipped up.
    @staticmethod
    def check(py2app_cmd, modulegraph):
        local_packages = pkgs[:]
        local_packages += ['pygame']
        return {
            "packages": local_packages,
        }

# Register the recipe so py2app consults it during the build.
py2app.recipes.my_recipe = recipe_plugin

APP = ['my_main_file.py']
DATA_FILES = []
OPTIONS = {}
OPTIONS.update(
    iconfile="Icon.icns",
    plist=dict(CFBundleIdentifier='com.example.yourdomain.notmine')
)

setup(
    name="Your Game",
    app=APP,
    data_files=DATA_FILES,
    include_package_data=True,
    options={'py2app': OPTIONS},
    setup_requires=['py2app'],
    packages=pkgs,
    package_data={
        "": ["*.gal" , "*.gif" , "*.html" , "*.jar" , "*.js" , "*.mid" ,
             "*.png" , "*.py" , "*.pyc" , "*.sh" , "*.tmx" , "*.ttf" ,
             # "*.xcf"
        ]
    },
)

This is definitely somewhat less efficient than py2app’s default of stuffing the code into a single zip file, but, as a counterpoint to that: it actually works.

Step 3: Build it

Hopefully, at this point you can do python setup.py py2app and get a shiny new app bundle in dist/$NAME.app. We haven’t had to go through the hell of quarantine yet, so it should launch at this point. If it doesn’t, sorry :-(.

You can often debug more obvious fail-to-launch issues by running the executable in the command line, by running ./dist/$NAME.app/Contents/MacOS/$NAME. Although this will run in a slightly different environment than double clicking (it will have all your shell’s env vars, for example, so if your app needs an env var to work it might mysteriously work there) it will also print out any tracebacks to your terminal, where they’ll be slightly easier to find than in Console.app.

Once your app at least runs locally, it’s time to...

Step 4: Code sign it

All the tutorials that I’ve found on how to do this involve doing Xcode project goop where it’s not clear what’s happening underneath. But despite the fact that the introductory docs aren’t quite there, the underlying model for codesigning stuff is totally common across GUI and command-line cases. However, actually getting your cert requires Xcode, an apple ID, and a credit card.

After paying your hundred dollars, go into Xcode, go to Accounts, hit “+”, “Apple ID”, then log in. Then, in your shiny new account, go to “Manage Certificates”, hit the little “+”, and (assuming, like me, you want to put something up on your own website, and not submit to the Mac App Store) choose Developer ID Application. You probably think you want “mac app distribution” because you are wanting to distribute a mac app! But you don’t.

Next, before you do anything else, make sure you have backups of your certificate and private key. You really don’t want to lose the private key associated with that cert.

Now quit Xcode; you’re done with the GUI.

You will need to know the identifier of your signing key though, which should be output from the command:

$ security find-identity -v -p codesigning | grep 'Developer ID' | sed -e 's/.*"\(.*\)"/\1/'

You probably want to put that in your build script, since you want to sign with the same identity every time. Further commands here will assume you’ve copied one of the lines of results from that command and done export IDENTITY="..." with it.
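If your build script happens to be Python rather than shell, a sketch along these lines could capture the same identity; the helper name is mine, and it just wraps the security invocation shown above:

import re
import subprocess

def signing_identity():
    # Ask the keychain for valid code-signing identities and pick the
    # "Developer ID Application" one, mirroring the grep/sed pipeline above.
    output = subprocess.run(
        ["security", "find-identity", "-v", "-p", "codesigning"],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in output.splitlines():
        match = re.search(r'"(Developer ID Application[^"]*)"', line)
        if match:
            return match.group(1)
    raise RuntimeError("no Developer ID identity found; is your cert installed?")

IDENTITY = signing_identity()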

Step 4a: Become Aware Of New Annoying Requirements

Update for macOS Catalina: In Catalina, Apple has added a new code-signing requirement: even apps distributed outside of the App Store have to be submitted to Apple and approved, a process called “notarization”.

In order to be notarized, you will need to not only codesign your app itself, but also:

  1. add the hardened-runtime exception entitlements that allow Python to work, and
  2. directly sign every shared library that is part of your app bundle.

So the actual code-signing step is now a little more complicated.

Step 4b: Write An Entitlements Plist That Allows Python To Work

One of the features that notarization is intended to strongly encourage1 is the “hardened runtime”, a feature of macOS which opts in to stricter run-time behavior designed to stop malware. One thing that the hardened runtime does is to disable writable, executable memory, which is used by JITs, FFIs ... and malware.

Unfortunately, both Python’s built-in ctypes module and various popular bits of 3rd-party stuff that uses cffi, including pyOpenSSL, require writable, executable memory to work. Furthermore, py2app actually imports ctypes during its bootstrapping phase, so you can’t even get your own code to start running to perform any workarounds unless this is enabled. So you need this entitlement just to use Python at all, not only if your project uses ctypes directly.

To make this long, sad story significantly shorter and happier, you can create an entitlements property list that enables the magical property which allows this to work. It looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
    <true/>
</dict>
</plist>

Subsequent steps assume that you’ve put this into a file called entitleme.plist in your project root.

Step 4c: SIGN ALL THE THINGS

Notarization also requires that all the executable files in your bundle, not just the main executable, are properly code-signed before submitting. So you’ll need to first run the codesign command across all your shared libraries, something like this:

$ cd dist
$ find "${NAME}.app" -iname '*.so' -or -iname '*.dylib' |
    while read libfile; do
        codesign --sign "${IDENTITY}" \
                 --entitlements ../entitleme.plist \
                 --deep "${libfile}" \
                 --force \
                 --options runtime;
    done;

Then finally, sign the bundle itself.

$ codesign --sign "${IDENTITY}" \
         --entitlements ../entitleme.plist \
         --deep "${NAME}.app" \
         --force \
         --options runtime;

Now, your app is code-signed.

Step 5: Archive it

The right way to do this is probably to use dmgbuild or something like it, but what I promised here was quick and dirty, not beautiful and best practices.

You have to make a Zip archive that preserves symbolic links. There are a couple of options for this:

  • open dist/, then in the Finder window that comes up, right click on the app and “compress” it
  • cd dist; zip -yr $NAME.app.zip $NAME.app

Most importantly, if you use the zip command line tool, you must use the -y option. Without it, your downloadable app bundle will be somewhat mysteriously broken even though the one before you zipped it will be fine.

Step 6: Actually The Rest Of Step 4: Request Notarization

Notarization is a 2-step process, which is somewhat resistant to full automation. You submit to Apple, then they email you the results of the notarization; if that email indicates that your notarization succeeded, you can “staple” the successful result to your bundle.

The thing you notarize is an archive, which is why you need to do step 5 first. Then, you need to do this:

$ xcrun altool --notarize-app \
      --file "${NAME}.app.zip" \
      --type osx \
      --username "${YOUR_DEVELOPER_ID_EMAIL}" \
      --primary-bundle-id="${YOUR_BUNDLE_ID}";

Be sure that YOUR_BUNDLE_ID matches the CFBundleIdentifier you told py2app about before, so that the tool can find your app bundle inside the archive.

You’ll also need to type in the iCloud password for your Developer ID account here.2

Step 6a: Wait A Minute

Anxiously check your email for an hour or so. Hope you don’t get any errors.

Step 6b: Finish Notarizing It, Finally!

Once Apple has a record of the app’s notarization, their tooling will recognize it, so you don’t need any information from the confirmation email or the previous command; just make sure that you are running this on the exact same .app directory you just built and archived and not a version that differs in any way.

$ xcrun stapler staple "./${NAME}.app";

Finally, you will want to archive it again:

$ zip -qyr "${NAME}.notarized.app.zip" "${NAME}.app";

Step 7: Download it

Ideally, at this point, everything should be working. But to make sure that code-signing and archiving and notarizing and re-archiving went correctly, you should have either a pristine virtual machine with no dev tools and no Python installed, or a non-programmer friend’s machine that can serve the same purpose. They probably need a relatively recent macOS - my own experience has shown that apps made using the above technique will definitely work on High Sierra (and later) and will definitely break on Yosemite (and earlier); they probably start working at some OS version between those.

There’s no tooling that I know of that can clearly tell you whether your mac app depends on some detail of your local machine. Even for your dependencies, there’s no auditwheel for macOS.

Updated 2019-06-27: It turns out there is an auditwheel-like thing for macOS: delocate! In fact, it predated and inspired auditwheel!

Thanks to Nathaniel Smith for the update (which he provided in, uh, January of 2018 and I’ve only just now gotten around to updating...).

Nevertheless, it’s always a good idea to check your final app build on a fresh computer before you announce it.

Coda

If you were expecting to get to the end and download my cool game, sorry to disappoint! It really is a half-broken prototype that is in no way ready for public consumption, and given my current load of personal and professional responsibilities, you definitely shouldn’t expect anything from me in this area any time soon, or, you know, ever.

But, from years of experience, I know that it’s nearly impossible to summon any motivation to work on small projects like this without the knowledge that the end result will be usable in some way, so I hope that this helps someone else set up their Python game-dev pipeline.

I’d really like to turn this into a 3-part series, with a part for Linux (perhaps using flatpak? is that a good thing?) and a part for Windows. However, given my aforementioned time constraints, I don’t think I’m going to have the time or energy to do that research, so if you’ve got the appropriate knowledge, I’d love to host a guest post on this blog, or even just a link to yours.

If this post helped you, if you have questions or corrections, or if you’d like to write the Linux or Windows version of this post, let me know.


  1. The hardened runtime was originally required when notarization was introduced. Apparently this broke too much software and now the requirement is relaxed until January 2020. But it’s probably best to treat it as if it is required, since the requirement is almost certainly coming back, and may in fact be back by the time you’re reading this. 

  2. You can pass it via the --password option but there are all kinds of security issues with that so I wouldn’t recommend it. 

Careful With That PyPI

PyPI credentials are important. Here are some tips for securing them a little better.

Too Many Secrets

A wise man once said, “you shouldn’t use ENV variables for secret data”. In large part, he was right, for all the reasons he gives (and you should read them). Filesystem locations are usually a better operating system interface to communicate secrets than environment variables; fewer things can intercept an open() than can read your process’s command-line or calling environment.

One might say that files are “more secure” than environment variables. To his credit, Diogo (the wise man in question) doesn’t, for good reason: one shouldn’t refer to the superiority of such a mechanism as being “more secure” in general, but rather, as better for a specific reason in some specific circumstance.

Supplying your PyPI password to tools you run on your personal machine is a very different case than providing a cryptographic key to a containerized application in a remote datacenter. In this case, based on the constraints of the software presently available, I believe an environment variable provides better security, if you use it correctly.

Popping A Shell By Any Other Name

If you upload packages to the Python Package Index, and people use those packages, your PyPI password is an extremely high-privilege credential: effectively, it grants a time-delayed arbitrary code execution privilege on all of the systems where anyone might pip install your packages.

Unfortunately, the suggested mechanism to manage this crucial, potentially world-destroying credential is to just stick it in an unencrypted file.

The authors of this documentation know this is a problem; the authors of the tooling know too (and, given that these tools are all open source and we all could have fixed them to be better about this, we should all feel bad).

Leaving the secret lying around on the filesystem is a form of ambient authority; a permission you always have, but only sometimes want. One of the worst things about this is that you can easily forget it’s there if you don’t use these credentials very often.

The keyring is a much better place, but even it can be a slightly scary place to put such a thing, because it’s still easy to put it into a state where some random command could upload a PyPI release without prompting you. PyPI is forever, so we want to measure twice and cut once.
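For example, with the keyring module, stashing a credential in the OS keychain and pulling it back out looks roughly like this; the “pypi” service name and username here are just placeholders, not a convention any tool requires:

import getpass
import keyring

# Store the credential in the operating system's keychain, not on disk.
keyring.set_password("pypi", "your-username", getpass.getpass("PyPI password: "))

# Later, retrieve it without it ever sitting in a plaintext file.
password = keyring.get_password("pypi", "your-username")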

Luckily, even more secure places exist: password managers. If you use https://1password.com or https://www.lastpass.com, both offer command-line interfaces that integrate nicely with PyPI. If you use 1password, you’ll really want https://stedolan.github.io/jq/ (apt-get install jq, brew install jq) to slice & dice its command-line.

The way that I manage my PyPI credentials is that I never put them on my filesystem, or even into my keyring; instead, I leave them in my password manager, and very briefly toss them into the tools that need them via an environment variable.

First, I have the following shell function, to prevent any mistakes:

function twine () {
    echo "Use dev.twine or prod.twine depending on where you want to upload.";
    return 1;
}

For dev.twine, I configure twine to always only talk to my local DevPI instance:

function dev.twine () {
    env TWINE_USERNAME=root \
        TWINE_PASSWORD= \
        TWINE_REPOSITORY_URL=http://127.0.0.1:3141/root/plus/ \
        twine "$@";
}

This way I can debug Twine, my setup.py, and various test-upload things without ever needing real credentials at all.

But, OK. Eventually, I need to actually get the credentials and do the thing. How does that work?

1Password

1password’s command line is a little tricky to log in to (you have to eval its output, it’s not just a command), so here’s a handy shell function that will do it.

function opme () {
    # Log this shell in to 1password.
    if ! env | grep -q OP_SESSION; then
        eval "$(op signin "$(jq -r '.latest_signin' ~/.op/config)")";
    fi;
}

Then, I have this little helper for slicing out a particular field from the OP JSON structure:

function _op_field () {
    jq -r '.details.fields[] | select(.name == "'"${1}"'") | .value';
}

And finally, I use this to grab the item I want (named, memorably enough, “PyPI”) and invoke Twine:

function prod.twine () {
    opme;
    local pypi_item="$(op get item PyPI)";
    env TWINE_USERNAME="$(echo "${pypi_item}" | _op_field username)" \
        TWINE_PASSWORD="$(echo "${pypi_item}" | _op_field password)" \
        twine "$@";
}

LastPass

For lastpass, you can just log in (for all shells; it’s a little less secure) via lpass login; if you’ve logged in before you often don’t even have to do that, and it will just prompt you when running commands that require you to be logged in; so we don’t need the preamble that 1password’s command line did.

Its version of prod.twine looks quite similar, but its plaintext output obviates the need for jq:

function prod.twine () {
    env TWINE_USERNAME="$(lpass show PyPI --username)" \
        TWINE_PASSWORD="$(lpass show PyPI --password)" \
        twine "$@";
}

In Conclusion

“Keep secrets out of your environment” is generally a good idea, and you should always do it when you can. But, better a moment in your process environment than an eternity on your filesystem. Environment-based configuration can be a very useful stopgap for limiting the lifetimes of credentials when your tools don’t support more sophisticated approaches to secret storage.1

Post Script

If you are interested in secure secret storage, my micro-project secretly might be of interest. Right now it doesn’t do a whole lot; it’s just a small wrapper around the excellent keyring module and the pinentry / pinentry-mac password prompt tools. secretly presents an interface both for prompting users for their credentials without requiring the command-line or env vars, and for saving them away in keychain, for tools that need to pull in an API key and don’t want to make the user manually edit a config file first.


  1. Really, PyPI should have API keys that last for some short amount of time, that automatically expire so you don’t have to freak out if you gave somebody a 5-year-old laptop and forgot to wipe it first. But again, if I wanted that so bad, I should have implemented it myself... 

The Sororicide Antipattern

Don’t murder your parents or your siblings to get their attributes.

“Composition is better than inheritance.” This is a true statement. “Inheritance is bad.” Also true. I’m a well-known compositional extremist. There’s a great talk you can watch if I haven’t talked your ear off about it already.

Which is why I was extremely surprised in a recent conversation when my interlocutor said that while inheritance might be bad, composition is worse. Once I understood what they meant by “composition”, I was even more surprised to find that I agreed with this assertion.

Although inheritance is bad, it’s very important to understand why. In a high-level language like Python, with first-class runtime datatypes (i.e.: user defined classes that are objects), the computational difference between what we call “composition” and what we call “inheritance” is a matter of where we put a pointer: is it on a type or on an instance? The important distinction has to do with human factors.

First, a brief parable about real-life inheritance.


You find yourself in conversation with an indolent heiress-in-waiting. She complains of her boredom whiling away the time until the dowager countess finally leaves her her fortune.

“Inheritance is bad”, you opine. “It’s better to make your own way in life”.

“By George, you’re right!” she exclaims. You weren’t expecting such an enthusiastic reversal.

“Well,” you sputter, “glad to see you are turning over a new leaf”.

She crosses the room to open a sturdy mahogany armoire, and draws forth a belt holstering a pistol and a menacing-looking sabre.

“Auntie has only the dwindling remnants of a legacy fortune. The real money has always been with my sister’s manufacturing concern. Why passively wait for Auntie to die, when I can murder my dear sister now, and take what is rightfully mine!”

Cinching the belt around her waist, she strides from the room animated and full of purpose, no longer indolent or in-waiting, but you feel less than satisfied with your advice.

It is, after all, important to understand what the problem with inheritance is.


The primary reason inheritance is bad is confusion between namespaces.

The most important role of code organization (division of code into files, modules, packages, subroutines, data structures, etc) is division of responsibility. In other words, Conway’s Law isn’t just an unfortunate accident of budgeting, but a fundamental property of software design.

For example, if we have a function called multiply(a, b) - its presence in our codebase suggests that if someone were to want to multiply two numbers together, it is multiply’s responsibility to know how to do so. If there’s a problem with multiplication, it’s the maintainers of multiply who need to go fix it.

And, with this responsibility comes authority over a specific scope within the code. So if we were to look at an implementation of multiply:

def multiply(a, b):
    product = a * b
    return product

The maintainers of multiply get to decide what product means in the context of their function. It’s possible, in Python, for some other function to reach into multiply with frame objects and mangle the meaning of product between its assignment and return, but it’s generally understood that it’s none of your business what product is, and if you touch it, all bets are off about the correctness of multiply. More importantly, if the maintainers of multiply wanted to bind other names, or change around existing names, like so, in a subsequent version:

def multiply(a, b):
    factor1 = a
    factor2 = b
    result = factor1 * factor2
    return result

It is the maintainer of multiply’s job, not the caller of multiply, to make those decisions.

The same programmer may, at different times, be both a caller and a maintainer of multiply. However, they have to know which hat they’re wearing at any given time, so that they can know which stuff they’re still responsible for when they hand over multiply to be maintained by a different team.

It’s important to be able to forget about the internals of the local variables in the functions you call. Otherwise, abstractions give us no power: if you have to know the internals of everything you’re using, you can never build much beyond what’s already there, because you’ll be spending all your time trying to understand all the layers below it.

Classes complicate this process of forgetting somewhat. Properties of class instances “stick out”, and are visible to the callers. This can be powerful — and can be a great way to represent shared data structures — but this is exactly why we have the ._ convention in Python: if something starts with an underscore, and it’s not in a namespace you own, you shouldn’t mess with it. So: other._foo is not for you to touch, unless you’re maintaining type(other). self._foo is where you should put your own private state.

So if we have a class like this:

class A(object):
    def __init__(self):
        self._note = "a note"

we all know that A()._note is off-limits.

But then what happens here?

class B(A):
    def __init__(self):
        super().__init__()
        self._note = "private state for B()"

B()._note is also off limits for everyone but B, except... as it turns out, B doesn’t really own the namespace of self here, so it’s clashing with what A wants _note to mean. Even if, right now, we were to change it to _note2, the maintainer of A could, in any future release of A, add a new _note2 variable which conflicts with something B is using. A’s maintainers (rightfully) think they own self, B’s maintainers (reasonably) think that they do. This could continue all the way until we get to _note7, at which point it would explode violently.


So that’s why Inheritance is bad. It’s a bad way for two layers of a system to communicate because it leaves each layer nowhere to put its internal state that the other doesn’t need to know about. So what could be worse?

Let’s say we’ve convinced our junior programmer who wrote A that inheritance is a bad interface, and they should instead use the panacea that cures all inherited ills, composition. Great! Let’s just write a B that composes in an A in a nice clean way, instead of doing any gross inheritance:

class Bprime(object):
    def __init__(self, a):
        for var in dir(a):
            setattr(self, var, getattr(a, var))

Uh oh. Looks like composition is worse than inheritance.


Let’s enumerate some of the issues with this “solution” to the problem of inheritance:

  • How do we know what attributes Bprime has?
  • How do we even know what type a is?
  • How is anyone ever going to grep for relevant methods in this code and have them come up in the right place?

We briefly reclaimed self for Bprime by removing the inheritance from A, but what Bprime does in __init__ to replace it is much worse. At least with normal, “vertical” inheritance, IDEs and code inspection tools can have some idea where your parents are and what methods they declare. We have to look aside to know what’s there, but at least it’s clear from the code’s structure where exactly we have to look aside to.

When faced with a class like Bprime though, what does one do? It’s just shredding apart some apparently totally unrelated object, and there’s nearly no way for tooling to inspect this code to the point that it knows where self.<something> comes from in a method defined on Bprime.

The goal of replacing inheritance with composition is to make it clear and easy to understand what code owns each attribute on self. Sometimes that clarity comes at the expense of a few extra keystrokes; an __init__ that copies over a few specific attributes, or a method that does nothing but forward a message, like def something(self): return self.other.something().
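Applied to the earlier example, a composition-based sketch might look like this; the something() method on A is hypothetical, and exists only to show what explicit forwarding looks like:

class Bcomposed(object):
    # Hold an A instead of becoming one: self is entirely Bcomposed's namespace.
    def __init__(self):
        self._a = A()                  # A's private state lives inside self._a
        self._note = "private state"   # no possible clash with A._note

    def something(self):
        # A deliberate, grep-able forward of one specific message
        # (assuming A grew a hypothetical something() method).
        return self._a.something()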

Automatic composition is just lateral inheritance. Magically auto-proxying all methods1, or auto-copying all attributes, saves a few keystrokes at the time some new code is created at the expense of hours of debugging when it is being maintained. If readability counts, we should never privilege the writer over the reader.


  1. It is left as an exercise for the reader why proxyForInterface is still a reasonably okay idea even in the face of this criticism.2 

  2. Although ironically it probably shouldn’t use inheritance as its interface. 

What Am Container

Containers are a tool in the fight against evil.

Perhaps you are a software developer.

Perhaps, as a developer, you have recently become familiar with the term "containers".

Perhaps you have heard containers described as something like "LXC, but better", "an application-level interface to cgroups" or "like virtual machines, but lightweight", or perhaps (even less usefully), a function call. You've probably heard of "docker"; do you wonder whether a container is the same as, different from, or part of Docker?

Are you bewildered by the blisteringly fast-paced world of "containers"? Maybe you have no trouble understanding what they are - in fact you might be familiar with half a dozen orchestration systems and container runtimes already - but are frustrated because this seems like a whole lot of work and you just don't see what the point of it all is?

If so, this article is for you.

I'd like to lay out what exactly the point of "containers" is, why people are so excited about them, and what makes the ecosystem around them so confusing. Unlike my previous writing on the topic, I'm not going to assume you know anything about the ecosystem in general; just that you have a basic understanding of how UNIX-like operating systems separate processes, files, and networks.1


At the dawn of time, a computer was a single-tasking machine. Somehow, you'd load your program into main memory, and then you'd turn it on; it would run the program, and (if you're lucky) spit out some output onto paper tape.

When a program running on such a computer looked around itself, it could "see" the core memory of the computer it was running on, any attached devices, including consoles, printers, teletypes, or (later) networking equipment. This was of course very powerful - the program had full control of everything attached to the computer - but also somewhat limiting.

This mode of addressing hardware is limiting because it meant that programs would break the instant you moved them to a new computer. They had to be re-written to accommodate new amounts and types of memory, new sizes and brands of storage, new types of networks. If the program had to contain within itself the full knowledge of every piece of hardware that it might ever interact with, it would be very expensive indeed.

Also, if all the resources of a computer were dedicated to one program, then you couldn't run a second program without stomping all over the first one - crashing it by mangling its structures in memory, deleting its data by overwriting its data on disk.

So, programmers cleverly devised a way of indirecting, or "virtualizing", access to hardware resources. Instead of a program simply addressing all the memory in the whole computer, it got its own little space where it could address its own memory - an address space, if you will. If a program wanted more memory, it would ask a supervising program - what we today call a "kernel" - to give it some more memory. This made programs much simpler: instead of memorizing the address offsets where a particular machine kept its memory, a program would simply begin by saying "hey operating system, give me some memory", and then it would access the memory in its own little virtual area.

In other words: memory allocation is just virtual RAM.

Virtualizing memory - i.e. ephemeral storage - wasn't enough; in order to save and transfer data, programs also had to virtualize disk - i.e. persistent storage. Whereas a whole-computer program would just seek to position 0 on the disk and start writing data to it however it pleased, a program writing to a virtualized disk - or, as we might call it today, a "file" - first needed to request a file from the operating system.

In other words: file systems are just virtual disks.

Networking was treated in a similar way. Rather than addressing the entire network connection at once, each program could allocate a little slice of the network - a "port". That way a program could, instead of consuming all network traffic destined for the entire machine, ask the operating system to just deliver it all the traffic for, say, port number seven.

In other words: listening ports are just virtual network cards.
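In Python terms, that little slice looks something like the following sketch; low-numbered ports like 7 usually need elevated privileges, so this is purely illustrative:

# Ask the kernel for "just port seven, please" instead of seeing
# every packet destined for the machine.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 7))   # claim one small virtual slice of the network
server.listen()
connection, address = server.accept()   # only port-7 traffic ever arrives here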


Getting bored by all this obvious stuff yet? Good. One of the things that frustrates me the most about containers is that they are an incredibly obvious idea that is just a logical continuation of a trend that all programmers are intimately familiar with.


All of these different virtual resources exist for the same reason: as I said earlier, if two programs need the same resource to function properly, and they both try to use it without coordinating, they'll both break horribly.2

UNIX-like operating systems more or less virtualize RAM correctly. When one program grabs some RAM, nobody else - modulo super-powered administrative debugging tools - gets to use it without talking to that program. It's extremely clear which memory belongs to which process. If programs want to use shared memory, there is a very specific, opt-in protocol for doing so; it is basically impossible for it to happen by accident.
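To make the opt-in point concrete, here's a sketch using Python 3.8's multiprocessing.shared_memory as one such specific, opt-in protocol; the segment name is arbitrary, and both sides have to name it deliberately:

from multiprocessing import shared_memory

# One process deliberately creates a named segment...
block = shared_memory.SharedMemory(name="demo_segment", create=True, size=1024)
block.buf[:5] = b"hello"

# ...and another must deliberately attach to it by the same name.
other = shared_memory.SharedMemory(name="demo_segment")
print(bytes(other.buf[:5]))   # b'hello'

other.close()
block.close()
block.unlink()   # the creator tears the segment down explicitly, too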

However, the abstractions we use for disks (filesystems) and network cards (listening ports and addresses) are significantly more limited. Every program on the computer sees the same file-system. The program itself, and the data the program stores, both live on the same file-system. Every program on the computer can see the same network information, can query everything about it, and can receive arbitrary connections. Permissions can remove certain parts of the filesystem from view (i.e. programs can opt-out) but it is far less clear which program "owns" certain parts of the filesystem; access must be carefully controlled, and sometimes mediated by administrators.

In particular, the way that UNIX manages filesystems creates an environment where "installing" a program requires manipulating state in the same place (the filesystem) where other programs might require different state. Popular package managers on UNIX-like systems (APT, RPM, and so on) rarely have a way to separate program installation even by convention, let alone by strict enforcement. If you want to do that, you have to re-compile the software with ./configure --prefix to hard-code a new location. And, fundamentally, this is why the package managers don't support installing to a different place: if the program can tell the difference between different installation locations, then it will, because its developers thought it should go in one place on the file system, and why not hard code it? It works on their machine.


In order to address this shortcoming of the UNIX process model, the concept of "virtualization" became popular. The idea of virtualization is simple: you write a program which emulates an entire computer, with its own storage media and network devices, and then you install an operating system on it. This completely resolves the over-sharing of resources: a process inside a virtual machine is in a very real sense running on a different computer than programs running on a different virtual machine on the same physical device.

However, virtualization is also an extremely heavy-weight blunt instrument. Since virtual machines are running operating systems designed for physical machines, they have tons of redundant hardware-management code; enormous amounts of operating system data which could be shared with the host, but since it's in the form of a disk image totally managed by the virtual machine's operating system, the host can't really peek inside to optimize anything. It also makes other kinds of intentional resource sharing very hard: any software to manage the host needs to be installed on the host, since if it is installed on the guest it won't have full access to the host's hardware.

I hate using the term "heavy-weight" when I'm talking about software - it's often bandied about as a content-free criticism - but the difference in overhead between running a virtual machine and a process is the difference between gigabytes and kilobytes; somewhere between 4-6 orders of magnitude. That's a huge difference.

This means that you need to treat virtual machines as multi-purpose, since one VM is too big to run just a single small program. Which means you often have to manage them almost as if they were physical hardware.


When we run a program on a UNIX-like operating system, and by so running it, grant it its very own address space, we call the entity that we just created a "process".

This is how to understand a "container".

A "container" is what we get when we run a program and give it not just its own memory, but its own whole virtual filesystem and its own whole virtual network card.

The metaphor to processes isn't perfect, because a container can contain multiple processes with different memory spaces that share a single filesystem. But this is also where some of the "container ecosystem" fervor begins to creep in - this is why people interested in containers will religiously exhort you to treat a container as a single application, not to run multiple things inside it, not to SSH into it, and so on. This is because the whole point of containers is that they are lightweight - far closer in overhead to the size of a process than that of a virtual machine.

A process inside a container, if it queries the operating system, will see a computer where only it is running, where it owns the entire filesystem, and where any mounted disks were explicitly put there by the administrator who ran the container. In other words, if it wants to share data with another application, it has to be given the shared data; opt-in, not opt-out, the same way that memory-sharing is opt-in in a UNIX-like system.


So why is this so exciting?

In a sense, it really is just a lower-overhead way to run a virtual machine, as long as it shares the same kernel. That's not super exciting, by itself.

The reason that containers are more exciting than processes is the same reason that using a filesystem is more exciting than having to use a whole disk: sharing state always, inevitably, leads to brokenness. Opt-in is better than opt-out.

When you give a program a whole filesystem to itself, sharing any data explicitly, you eliminate even the possibility that some other program scribbling on a shared area of the filesystem might break it. You don't need package managers any more, only package installers; by removing the other functions of package managers (inventory, removal) they can be radically simplified, and less complexity means less brokenness.

When you give a program an entire network address to itself, exposing any ports explicitly, you eliminate even the possibility that some rogue program will expose a security hole by listening on a port you weren't expecting. You eliminate the possibility that it might clash with other programs on the same host, hard-coding the same port numbers or auto-discovering the same addresses.


In addition to the exciting things on the run-time side, containers - or rather, the things you run to get containers, "images"3 - present some compelling improvements to the build-time side.

On Linux and Windows, building a software artifact for distribution to end-users can be quite challenging. It's challenging because it's not clear how to specify that you depend on certain other software being installed; it's not clear what to do if you have conflicting versions of that software that may not be the same as the versions already available on the user's computer. It's not clear where to put things on the filesystem. On Linux, this often just means getting all of your software from your operating system distributor.

You'll notice I said "Linux and Windows"; not the usual (linux, windows, mac) big-3 desktop platforms, and I didn't say anything about mobile OSes. That's because on macOS, Android, iOS, and Windows Metro, applications already run in their own containers. The rules of macOS containers are a bit weird, and very different from Docker containers, but if you have a Mac you can check out ~/Library/Containers to see the view of the world that the applications you're running can see. iOS looks much the same.

This is something that doesn't get discussed a lot in the container ecosystem, partially because everyone is developing technology at such a breakneck pace, but in many ways Linux server-side containerization is just a continuation of a trend that started on mainframe operating systems in the 1970s and has already been picked up in full force by mobile operating systems.

When one builds an image, one is building a picture of the entire filesystem that the container will see, so an image is a complete artifact. By contrast, a package for a Linux package manager is just a fragment of a program, leaving out all of its dependencies, to be integrated later. If an image runs on your machine, it will (except in some extremely unusual circumstances) run on the target machine, because everything it needs to run is fully included.

Because you build all the software an image requires into the image itself, there are some implications for server management. You no longer need to apply security updates to a machine - they get applied to one application at a time, and they get applied as a normal process of deploying new code. Since there's only one update process, which is "delete the old container, run a new one with a new image", updates can roll out much faster, because you can build an image, run tests for the image with the security updates applied, and be confident that it won't break anything. No more scheduling maintenance windows, or managing reboots (at least for security updates to applications and libraries; kernel updates are a different kettle of fish).


That's why it's exciting. So why's it all so confusing?5

Fundamentally the confusion is caused by there just being way too many tools. Why so many tools? Once you've accepted that your software should live in images, none of the old tools work any more. Almost every administrative, monitoring, or management tool for UNIX-like OSes depends intimately upon the ability to promiscuously share the entire filesystem with every other program running on it. Containers break these assumptions, and so new tools need to be built. Nobody really agrees on how those tools should work, and a wide variety of forces ranging from competitive pressure to personality conflicts make it difficult for the panoply of container vendors to collaborate perfectly4.

Many companies whose core business has nothing to do with infrastructure have gone through this reasoning process:

  1. Containers are so much better than processes, we need to start using them right away, even if there's some tooling pain in adopting them.
  2. The old tools don't work.
  3. The new tools from the tool vendors aren't ready.
  4. The new tools from the community don't work for our use-case.
  5. Time to write our own tool, just for our use-case and nobody else's! (Which causes problem #4 for somebody else, of course...)

A less fundamental reason is too much focus on scale. If you're running a small-scale web application which has a stable user-base that you don't expect a lot of growth in, there are many great reasons to adopt containers as opposed to automating your operations; and in fact, if you keep things simple, the very fact that your software runs in a container might obviate the need for a system-management solution like Chef, Ansible, Puppet, or Salt. You should totally adopt them and try to ignore the more complex and involved parts of running an orchestration system.

However, containers are even more useful at significant scale, which means that companies which have significant scaling problems invest in containers heavily and write about them prolifically. Many guides and tutorials on containers assume that you expect to be running a multi-million-node cluster with fully automated continuous deployment, blue-green zero-downtime deploys, and a 1000-person operations team. It's great if you've got all that stuff, but building each of those components is a non-trivial investment.


So, where does that leave you, my dear reader?

You should absolutely be adopting "container technology", which is to say, you should probably at least be using Docker to build your software. But there are other, radically different container systems - like Sandstorm - which might make sense for you, depending on what kind of services you create. And of course there's a huge ecosystem of other tools you might want to use; too many to mention, although I will shout out to my own employer's docker-as-a-service Carina, which delivered this blog post, among other things, to you.

You shouldn't feel as though you need to do containers absolutely "the right way", or that the value of containerization is derived from adopting every single tool that you can all at once. The value of containers comes from four very simple things:

  1. It reduces the overhead and increases the performance of co-locating multiple applications on the same hardware,
  2. It forces you to explicitly call out any shared state or required resources,
  3. It creates a complete build pipeline that results in a software artifact that can be run without special installation or set-up instructions (at least, on the "software installation" side; you still might require configuration, of course), and
  4. It gives you a way to test exactly what you're deploying.

These benefits can combine and interact in surprising and interesting ways, and can be enhanced with a wide and growing variety of tools. But underneath all the hype and the buzz, the very real benefit of containerization is basically just that it is fixing a very old design flaw in UNIX.

Containers let you share less state, and shared mutable state is the root of all evil.


  1. If you have a more sophisticated understanding of memory, disks, and networks, you'll notice that everything I'm saying here is patently false, and betrays an overly simplistic understanding of the development of UNIX and the complexities of physical hardware and driver software. Please believe that I know this; this is an alternate history of the version of UNIX that was developed on platonically ideal hardware. The messy co-evolution of UNIX, preemptive multitasking, hardware offload for networks, magnetic secondary storage, and so on, is far too large to fit into the margins of this post. 

  2. When programs break horribly like this, it's called "multithreading". I have written some software to help you avoid it. 

  3. One runs an "executable" to get a process; one runs an "image" to get a container. 

  4. Although the container ecosystem is famously acrimonious, companies in it do actually collaborate better than the tech press sometimes give them credit for; the Open Container Project is a significant extraction of common technology from multiple vendors, many of whom are also competitors, to facilitate a technical substrate that is best for the community. 

  5. If it doesn't seem confusing to you, consider this absolute gem from the hilarious folks over at CircleCI.