Get Your Mac Python From Python.org

There are many ways to get Python installed on macOS, but for most people the version that you download from Python.org is best.

One of the most unfortunate things about learning Python is that there are so many different ways to get it installed, and you need to choose one before you even begin. The differences can also be subtle and require technical depth to truly understand, which you don’t have yet.1 Even experts can be missing information about which one to use and why.

There are perhaps more of these on macOS than on any other platform, and that’s the platform I primarily use these days. If you’re using macOS, I’d like to make it simple for you.

The One You Probably Want: Python.org

My recommendation is to use an official build from python.org.

I recommend the official installer for most uses, and if you were just looking for a choice about which one to use, you can stop reading now. Thanks for your time, and have fun with Python.

If you want to get into the nerdy nuances, read on.

For starters, the official builds are compiled in such a way that they will run on a wide range of macs, both new and old. They are universal2 binaries, unlike some other builds, which means you can distribute them as part of a mac application.
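If you want to check this property on your own machine, the lipo tool that ships with Apple’s developer tools can report which architectures a binary contains. Here’s a quick sketch (macOS only; a python.org build should report both x86_64 and arm64):

# report the CPU architectures supported by the running interpreter's binary
import subprocess
import sys

result = subprocess.run(
    ["lipo", "-archs", sys.executable],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "x86_64 arm64" for a universal2 build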

The main advantage that the Python.org build has, though, is very subtle, and not any concrete technical detail. It’s a social, structural issue: the Python.org builds are produced by the people who make CPython, who are more likely to know about the nuances of what options it can be built with, and who are more likely to adopt their own improvements as they are released. Third party builders who are focused on a more niche use-case may not realize that there are build options or environment requirements that could make their Pythons better.

I’m being a bit vague deliberately here, because at any particular moment in time, this may not be an advantage at all. Third party integrators generally catch up to changes, and eventually achieve parity. But for a specific upcoming example, PEP 703 will have extensive build-time implications, and I would trust the python.org team to be keeping pace with all those subtle details immediately as releases happen.

(And Auto-Update It)

The one downside of the official build is that you have to return to the website to check for security updates. Unlike other options described below, there’s no built-in auto-updater for security patches. If you follow the normal process, you still have to click around in a GUI installer to update it once you’ve clicked around on the website to get the file.

I have written a micro-tool to address this: pip install mopup, then periodically run mopup, and it will install any security updates for your current version of Python, with no interaction besides entering your admin password.

(And Always Use Virtual Environments)

Once you have installed Python from python.org, never pip install anything globally into that Python, even using the --user flag. Always, always use a virtual environment of some kind. In fact, I recommend configuring it so that it is not even possible to do so, by putting this in your ~/.pip/pip.conf:

[global]
require-virtualenv = true

This will avoid damaging your Python installation by polluting it with libraries that you install and then forget about. Any time you need to do something new, you should make a fresh virtual environment, and then you don’t have to worry about library conflicts between different projects that you may work on.
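As a concrete illustration, the standard library’s venv module is all you need for this; the following sketch is equivalent to running python -m venv .venv inside a project directory (the path is hypothetical):

# create a fresh, isolated virtual environment for one project
from venv import EnvBuilder

EnvBuilder(with_pip=True).create("my-project/.venv")  # hypothetical path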

If you need to install tools written in Python, don’t manage those environments directly; install the tools with pipx. By using pipx, you allow each tool to maintain its own set of dependencies, which means you don’t need to worry about whether two tools you use have conflicting version requirements, or whether the tools conflict with your own code.2

The Others

There are, of course, several other ways to install Python, which you probably don’t want to use.

The One For Running Other People’s Code, Not Yours: Homebrew

In general, Homebrew Python is not for you.

The purpose of Homebrew’s python is to support applications packaged within Homebrew, which have all been tested against the versions of python libraries also packaged within Homebrew. It may upgrade without warning on just about any brew operation, and you can’t downgrade it without breaking other parts of your install.

Specifically for creating redistributable binaries, Homebrew python is typically compiled only for your specific architecture, and thus will not create binaries that can be used on Intel macs if you have an Apple Silicon machine, or will run slower on Apple Silicon machines if you have an Intel mac. Also, if there are prebuilt wheels which don’t yet exist for Apple Silicon, you cannot easily arch -x86_64 python ... and just install them; you have to install a whole second copy of Homebrew in a different location, which is a headache.

In other words, Homebrew is an alternative to pipx, not to Python. For that purpose, it’s fine.

The One For When You Need 20 Different Pythons For Debugging: pyenv

Like Homebrew, pyenv will default to building a single-architecture binary. Even worse, it will not build a Framework build of Python, which means several things related to being a mac app just won’t work properly. Remember those build-time esoterica that the core team is on top of but third parties may not be? “Should I use a Framework build” is an enduring piece of said esoterica.
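If you’re not sure whether your current interpreter is a Framework build, there’s a quick way to check from Python itself; on a python.org install this prints “Python”, and on a default pyenv build it prints an empty string:

# an empty value here means this interpreter is not a Framework build
import sysconfig

print(sysconfig.get_config_var("PYTHONFRAMEWORK"))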

The purpose of pyenv is to provide a large matrix of different, precise legacy versions of Python so that library authors can test compatibility against those older Pythons. If you need to do that, particularly if you work on different projects where you may need to install some random super-old version of Python that you would not normally use, in order to test something, then pyenv is great. But if you only need one version of Python, it’s not a great way to get it.

The Other One That’s Exactly Like pyenv: asdf-python

The issues are exactly the same as with pyenv, as the tool is a straightforward alternative for the exact same purpose. It’s a bit less focused on Python than pyenv, which has pros and cons; it has broader community support, but it’s less specifically tuned for Python. But a comparative exploration of their differences is beyond the scope of this post.

The Built-In One That Isn’t Really Built-In: /usr/bin/python3

There is a binary in /usr/bin/python3 which might seem like an appealing option — it comes from Apple, after all! — but it is provided as a developer tool, for running things like build scripts. It isn’t for building applications with.

That binary is not a “system python”; the thing in the operating system itself is only a shim, which will determine whether you have development tools, and shell out to a tool that will download the development tools for you if you don’t. There is unfortunately a lot of folk wisdom among older Python programmers who remember a time when Apple did actually package an antediluvian version of the interpreter that seemed to be supported forever, and might suggest it for things intended to be self-contained or have minimal bundled dependencies, but this is exactly the reason that Apple stopped shipping that.

If you use this option, it means that your Python might come from the Xcode Command Line Tools, or the Xcode application, depending on the state of xcode-select in your current environment and the order in which you installed them.

Upgrading Xcode via the app store or a developer.apple.com manual download — or its command-line tools, which are installed separately, and updated via the “settings” application in a completely different workflow — therefore also upgrades your version of Python without an easy way to downgrade, unless you manage multiple Xcode installs. Which, at 12G per install, is probably not an appealing option.3
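If you do end up relying on /usr/bin/python3 and want to know which of those installations the shim is currently dispatching to, xcode-select can tell you; a small sketch:

# print the active developer directory that the /usr/bin shims dispatch to
import subprocess

result = subprocess.run(
    ["xcode-select", "-p"], capture_output=True, text=True, check=True
)
print(result.stdout.strip())  # e.g. /Library/Developer/CommandLineTools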

The One With The Data And The Science: Conda

As someone with a limited understanding of data science and scientific computing, I’m not really qualified to go into the detailed pros and cons here, but luckily, Itamar Turner-Trauring is, and he did.

My one coda to his detailed exploration here is that while there are good reasons to want to use Anaconda — particularly if you are managing a data-science workload across multiple platforms and you want a consistent, holistic development experience across a large team supporting heterogeneous platforms — some people will tell you that you need Conda to get you your libraries if you want to do data science or numerical work with Python at all, because Conda is how you install those libraries, and otherwise things just won’t work.

This is a historical artifact that is no longer true. Over the last decade, Python Wheels have been comprehensively adopted across the Python community, and almost every popular library with an extension module ships pre-built binaries for multiple platforms. There may be some libraries that only have prebuilt binaries for conda, but they are sufficiently specialized that I don’t know what they are.

The One for Being Consistent With Your Cloud Hosting

Another way to run Python on macOS is to not run it on macOS, but to get another computer inside your computer that isn’t running macOS, and instead run Python inside that, usually using Docker.4

There are good reasons to want to use a containerized configuration for development, but they start to drift away from the point of this post and into more complicated stuff about how to get your Python into the cloud.

So rather than saying “use Python.org native Python instead of Docker”, I am specifically not covering Docker as a replacement for a native mac Python here because in a lot of cases, it can’t be one. Many tools require native mac facilities like displaying GUIs or scripting applications, or want to be able to take a path name to a file without elaborate pre-work to allow the program to access it.

Summary

If you didn’t want to read all of that, here’s the summary.

If you use a mac:

  1. Get your Python interpreter from python.org.
  2. Update it with mopup so you don’t fall behind on security updates.
  3. Always use venvs for specific projects, never pip install anything directly.
  4. Use pipx to manage your Python applications so you don’t have to worry about dependency conflicts.
  5. Don’t worry if Homebrew also installs a python executable, but don’t use it for your own stuff.
  6. You might need a different Python interpreter if you have any specialized requirements, but you’ll probably know if you do.

Acknowledgements

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support me on Patreon as well! I am also available for consulting work if you think your organization could benefit from expertise on topics like “which Python is the really good one”.


  1. If somebody sent you this article because you’re trying to get into Python and you got stuck on this point, let me first reassure you that all the information about this really is highly complex and confusing; if you’re feeling overwhelmed, that’s normal. But the good news is that you can really ignore most of it. Just read the next little bit. 

  2. Some tools need to be installed in the same environment as the code they’re operating on, so you may want to have multiple installs of, for example, Mypy, PuDB, or sphinx. But for things that just do something useful but don’t need to load your code — such as this small selection of examples from my own collection: certbot, pgcli, asciinema, gister, speedtest-cli — pipx means you won’t have to debug wonky dependency interactions. 

  3. The command-line tools are a lot smaller, but cannot have multiple versions installed at once, and are updated through a different mechanism. There are odd little details like the fact that the default bundle identifier for the framework differs, being either org.python.python or com.apple.python3. They’re generally different in a bunch of small subtle ways that don’t really matter in 95% of cases until they suddenly matter a lot in that last 5%. 

  4. Or minikube, or podman, or colima or whatever I guess, there’s way too many of these containerization Pokémon running around for me to keep track of them all these days. 

Bilithification

Not sure how to do microservices? Split your monolith in half.

Several years ago at O’Reilly’s Software Architecture conference, within a comprehensive talk on refactoring, “Technical Debt: A Masterclass”, r0ml1 presented a concept that I think should be highlighted.

If you have access to O’Reilly Safari, I think the video is available there, or you can get the slides here. It’s well worth watching in its own right. The talk contains a lot of hard-won wisdom from a decades-long career, but in slides 75-87, he articulates a concept that I believe resolves the perennial pendulum-swing between microservices and monoliths that we see in the Software as a Service world.

I will refer to this concept as “the bilithification strategy”.

Background

Personally, I have long been a microservice skeptic. I would generally articulate this skepticism in terms of “YAGNI”.

Here’s the way I would advise people asking about microservices before encountering this concept:

Microservices are often adopted by small teams due to their advertised benefits. Advocates from very large organizations—ones that have been very successful with microservices—frequently give talks claiming that microservices are more modular, more scalable, and more fault-tolerant than their monolithic progenitors. But these teams rarely appreciate the costs, particularly the costs for smaller orgs. Specifically, there is a fixed operational marginal cost to each new service, and a fairly large fixed operational overhead to the infrastructure for an organization deploying microservices at all.

With a large enough team, the operational cost is easy to absorb. As the overhead is fixed, it trends towards zero as your total team size and system complexity trend towards infinity. Also, in very large teams, the enforced isolation of components in separate services reduces complexity. It does so by intentionally causing the software architecture to mirror the organizational structure of the team that deploys it. This — at the cost of increased operational overhead and decreased efficiency — allows independent parts of the organization to make progress independently, without blocking on each other. Therefore, in smaller teams, as you’re building, you should bias towards building a monolith until the complexity costs of the monolith become apparent. Then you should build the infrastructure to switch to microservices.

I still stand by all of this. However, it’s incomplete.

What does it mean to “switch to microservices”?

The biggest thing that this advice leaves out is a clear understanding of the “micro” in “microservice”. In this framing, I’m implicitly understanding “micro” services to be services that are too small — or at least, too small for your team. But if they do work for large organizations, then at some point, you need to have them. This leaves open several questions:

  • What size is the right size for a service?
  • When should you split your monolith up into smaller services?
  • Wait, how do you even measure “size” of a service? Lines of code? Gigabytes of memory? Number of team members?

Presented with a specific situation, I could probably look at these questions and make suggestions as to the appropriate course of action, but that’s based largely on vibes. There’s just a lot of drawing on complex experiences, not a repeatable pattern that a team could apply on their own.

We can be clear that you should always start with a monolith. But what should you do when that’s no longer working? How do you even tell when it’s no longer working?

Bilithification

Every codebase begins as a monolith. That is a single (mono) rock (lith). Here’s what it looks like.

[figure: a circle with the word “monolith” on it]

Let’s say that the monolith, and the attendant team, is getting big enough that we’re beginning to consider microservices. We might now ask, “what is the appropriate number of services to split the monolith into?” and that could provoke endless debate even among a team with total consensus that it might need to be split into some number of services.

Rather than beginning with the premise that there is a correct number, we may observe instead that splitting the service into N services, where N is more than one, may be accomplished by splitting the service in half N-1 times, since each split increases the number of services by exactly one.

So let’s bi (two) lithify (rock) this monolith, and take it from 1 to 2 rocks.

The task of splitting the service into two parts ought to be a manageable amount of work — two is a definitively finite number, as opposed to the infinite point-cloud of “microservices”. Thus, we should search, first, for a single logical seam along which we might cleave the monolith.

[figure: a circle with the word “monolith” on it and a line through it]

In many cases—as in the specific case that r0ml gave—the easiest way to articulate a boundary between two parts of a piece of software is to conceptualize a “frontend” and a “backend”. In the absence of any other clear boundary, the question “does this functionality belong in the front end or the back end” can serve as a simple razor for separating the service.

Remember: the reason we’re splitting this software up is because we are also splitting the team up. You need to think about this in terms of people as well as in terms of functionality. What division between halves would most reduce the number of lines of communication, to reduce the quadratic increase in required communication relationships that comes along with the linear increase in team size? Can you identify two groups who need to talk amongst themselves, but do not need to talk with all of each other?2
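To make that quadratic growth concrete, here is some illustrative arithmetic (the team sizes are made up):

# potential lines of communication among n people: n choose 2
def pairs(n: int) -> int:
    return n * (n - 1) // 2

print(pairs(12))         # 66 pairs within one 12-person team
print(2 * pairs(6) + 1)  # 31: two 6-person teams plus one cross-team channel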

[figure: two circles with the word “hemilith” on them and a double-headed arrow between them]

Once you’ve achieved this separation, we no longer have a single rock; we have two half-rocks: “hemiliths”, to borrow from the same Greek root that gave us “monolith”.

But we are not finished, of course. Two may not be the correct number of services to end up with. Now, we ask: can we split the frontend into a frontend and backend? Can we split the backend? If so, then we now have four rocks in place of our original one:

[figure: four circles with the word “tetartolith” on them and double-headed arrows connecting them all]

You might think that this would be a “tetralith” for “four”, but as they are of a set, they are more properly a tetartolith.

Repeat As Necessary

At some point, you’ll hit a point where you’re looking at a service and asking “what are the two pieces I could split this service into?”, and the answer will be “none, it makes sense as a single piece”. At that point, you will know that you’ve achieved services of the correct size.

One thing about this insight that may disappoint some engineers is the realization that service-oriented architecture is much more an engineering management tool than it is an engineering tool. It’s fun to think that “microservices” will let you play around with weird technologies and niche programming languages consequence-free at your day job because those can all be “separate services”, but that was always a fantasy. Integrating multiple technologies is expensive, and introducing more moving parts always introduces more failure points.

Advanced Techniques: A Multi-Stack Microservice Environment

You’ll note that splitting a service heavily implies that the resulting services will still all be in the same programming language and the same tech stack as before. If you’re interested in deploying multiple stacks (languages, frameworks, libraries), you can proceed to that outcome via bilithification, but it is a multi-step process.

First, you have to complete the strategy that I’ve already outlined above. You need to get to a service that is sufficiently granular that it is atomic; you don’t want to split it up any further.

Let’s call that service “X”.

Second, you identify the additional complexity that would be introduced by using a different tech stack. It’s important to be realistic here! New technology always seems fun, and if you’re investigating this, you’re probably predisposed to think it would be an improvement. So identify your costs first and make sure you have them enumerated before you move on to the benefits.

Third, identify the concrete benefits to X’s problem domain that the new tech stack would provide.

Finally, do a cost-benefit analysis where you make sure that the costs from step 2 are clearly exceeded by the benefits from step 3. If you can’t readily identify that in advance — sometimes experimentation is required — then you need to treat this as an experiment, rather than as a strategic direction, until you’ve had a chance to answer whatever questions you have about the new technology’s benefits.

Note, also, that this cost-benefit analysis requires not only doing the technical analysis but getting buy-in from the entire team tasked with maintaining that component.

Conclusion

To summarize:

  1. Always start with a monolith.
  2. When the monolith is too big, both in terms of team and of codebase, split the monolith in half until it doesn’t make sense to split it in half any more.
  3. (Optional) Carefully evaluate services that want to adopt new technologies, and keep the costs of doing that in mind.

There is, of course, a world of complexity beyond this associated with managing the cost of a service-oriented architecture and solving specific technical problems that arise from that architecture.

If you remember the tetartolith, though, you should at least be able to get to the right number and size of services for your system.


Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support me on Patreon as well! I am also available for consulting work if you think your organization could benefit from more specificity on the sort of insight you've seen here.


  1. AKA “my father” 

  2. Denouncing “silos” within organizations is so common that it’s a tired trope at this point. There is no shortage of vaguely inspirational articles across the business trade-rag web and on LinkedIn exhorting us to “break down silos”, but silos are the point of having an organization. If everybody needs to talk to everybody else in your entire organization, if no silos exist to prevent the necessity of that communication, then you are definitionally operating that organization at minimal efficiency. What people actually want when they talk about “breaking down silos” is a re-org into a functional hierarchy rather than a role-oriented hierarchy (i.e., “this is the team that makes and markets the Foo product, this is the team that makes and markets the Bar product” as opposed to “this is the sales team”, “this is the engineering team”). But that’s a separate post, probably. 

What Would You Say You Do Here?

A brief description of the various projects that I am hoping to do independently, with your support. In other words, this is an ad, for me.

What have I been up to?

Late last year, I launched a Patreon. Although not quite a “soft” launch — I did toot about it, after all — I didn’t promote it very much.

I started this way because I realized that if I didn’t just put something up I’d be dithering forever. I’d previously been writing a sprawling monster of an announcement post that went into way too much detail, and kept expanding to encompass more and more ideas until I came to understand that salvaging it was going to be an editing process just as brutal and interminable as the writing itself.

However, that post also included a section where I just wrote about what I was actually doing.

So, for lots of reasons1, there is a diverse array of loosely related (or unrelated) projects below, which may not get finished any time soon. Or, indeed, may go unfinished entirely. Some are “done enough” now, and just won’t receive much in the way of future polish.

That is an intentional choice.

The rationale, as briefly as I can manage, is: I want to lean into my strength2 of creative, divergent thinking, and see how these ideas pan out without committing to them particularly intensely. My habitual impulse, for many years, has been to lean extremely hard on strategies that compensate for my weaknesses in organization, planning, and continued focus, and attempt to commit to finishing every project to prove that I’ll never flake on anything.

While the reward tiers for the Patreon remain deliberately ambiguous3, I think it would be fair to say that patrons will have some level of influence in directing my focus by providing feedback on these projects, and requesting that I work more on some and less on others.

So, with no further ado: what have I been working on, and what work would you be supporting if you signed up? For each project, I’ll be answering 3 questions:

  1. What is it?
  2. What have I been doing with it recently?
  3. What are my plans for it?

This. i.e. blog.glyph.im

What is it?

For starters, I write stuff here. I guess you’re reading this post for some reason, so you might like the stuff I write? I feel like this doesn’t require much explanation.

What have I done with it recently?

You might appreciate the explicitly patron-requested Potato Programming post, a screed about dataclass, or a deep dive on the difficulties of codesigning and notarization on macOS along with an announcement of a tool to remediate them.

What are my plans for it?

You can probably expect more of the same; just all the latest thoughts & ideas from Glyph.

Twisted

What is it?

If you know of me you probably know of me as “the Twisted guy” and yeah, I am still that. If, somehow, you’ve ended up here and you don’t know what it is, wow, that’s cool, thanks for coming, super interested to know what you do know me for.

Twisted is an event-driven networking engine written in Python, the precursor and inspiration for the asyncio module, and a suite of event-driven programming abstractions, network protocol implementations, and general utility code.

What have I done with it recently?

I’ve gotten a few things merged, including type annotations for getPrimes and making the bundled CLI OpenSSH server replacement work at all with public key authentication again, as well as some test cleanups that reduce the overall surface area of old-style Deferred-returning tests that can be flaky and slow.

I’ve also landed a posix_spawnp-based spawnProcess implementation which speeds up process spawning significantly; this can be as much as 3x faster if you do a lot of spawning of short-running processes.

I have a bunch of PRs in flight, too, including better annotations for FilePath, Deferred, and IReactorProcess, as well as a fix for the aforementioned posix_spawnp implementation.

What are my plans for it?

A lot of the projects below use Twisted in some way, and I continue to maintain it for my own uses. My particular focus is on quality-of-life improvements: issues that someone starting out with a Twisted project will bump into and find confusing or difficult. I want it to be really easy to write applications with Twisted, and I want to use my own experience with it to guide those improvements.

I also do code reviews of other folks’ contributions; we do still have over 100 open PRs right now.

DateType

What is it?

DateType is a workaround for a very specific bug in the way that the datetime standard library module deals with type composition: to wit, that datetime is a subclass of date but is not Liskov-substitutable for it. There are even #type:ignore comments in the standard library type stubs to work around this problem, because if you did this in your own code, it simply wouldn’t type-check.
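To see the bug in miniature, consider this snippet, which type-checks cleanly but behaves surprisingly at runtime:

# datetime claims to be a date, but date-handling code can misbehave when
# handed a datetime: equality between the two is always False
from datetime import date, datetime

d: date = datetime.now()   # accepted, since datetime subclasses date
print(d == date.today())   # False, even though "now" is "today"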

What have I done with it recently?

I updated it a few months ago to expose DateTime and Time directly (as opposed to AwareDateTime and NaiveDateTime), so that users could specialize their own functions that took either naive or aware times without ugly and slightly-incorrect unions.

What are my plans for it?

This library is mostly done for the time being, but if I had to polish it a bit I’d probably do two things:

  1. a readthedocs page for nice documentation
  2. write a PEP to get this integrated into the standard library

Although the compatibility problems are obviously very tricky and a PEP would probably be controversial, this is ultimately a bug in the stdlib, and should be fixed upstream there.

Automat

What is it?

It’s a library to make deterministic finite-state automata easier to create and work with.

What have I done with it recently?

Back in the middle of last year, I opened a PR to create a new, completely different front-end API for state machine definition. Instead of something like this:

class MachineExample:
    machine = MethodicalMachine()

    @machine.state()
    def a_state(self): ...

    @machine.state()
    def other_state(self): ...

    @machine.input()
    def flip(self): ...

    @machine.output()
    def _do_flip(self): return ...

    a_state.upon(flip, enter=other_state, outputs=[_do_flip], collector=list)
    other_state.upon(flip, enter=a_state, outputs=[_do_flip], collector=list)

this branch lets you instead do something like this:

class MachineProtocol(Protocol):
    def flip(self) -> None: ...

class MachineCore: ...

def buildCore() -> MachineCore: ...

machine = TypicalBuilder(MachineProtocol, buildCore)

@machine.state()
class _OffState:
    @machine.handle(MachineProtocol.flip, enter=lambda: _OnState)
    def flip(self) -> None: ...

@machine.state()
class _OnState:
    @machine.handle(MachineProtocol.flip, enter=lambda: _OffState)
    def flip(self) -> None: ...

MachineImplementation = machine.buildClass()

In other words, it creates a class for every state, and that type safety much more cleanly expresses what methods can be called and by whom; no need to make everything private with tons of underscore-prefixed methods and attributes, since all the caller can see is “an implementation of MachineProtocol”; your state classes can otherwise just be normal classes, which do not require special logic to be instantiated if you want to use them directly.

Also, by making a class for every state, it’s a lot cleaner to express that certain methods require certain attributes, by simply making them available as attributes on that state and then requiring an argument of that state type; you don’t need to plot your way through the outputs generated in your state graph.

What are my plans for it?

I want to finish up dealing with some issues with that branch - particularly the ugly patterns for communicating portions of the state core to the caller and also the documentation; there are a lot of magic signatures which make sense in heavy usage but are a bit mysterious to understand while you’re getting started.

I’d also like the visualizer to work on it, which it doesn’t yet, because the visualizer cribs a bunch of state from MethodicalMachine when it should be working purely on core objects.

Secretly

What is it?

This is an attempt at a holistic, end-to-end secret management wrapper around Keyring. Whereas Keyring handles password storage, this handles the whole lifecycle of looking up the secret to see if it’s there, displaying UI to prompt the user (leveraging a pinentry program from GPG if available), and saving what the user enters so it’s available next time.

What have I done with it recently?

It’s been a long time since I touched it.

What are my plans for it?

  • Documentation. It’s totally undocumented.
  • It could be written to be a bit more abstract. It dates from a time before asyncio, so its current Twisted requirement for Deferred could be made into a generic Awaitable one.
  • Better platform support for Linux & Windows when GPG’s pinentry is not available.
  • Support for multiple accounts so that when the user is prompted for the relevant credential, they can store it.
  • Integration with 1Password via some of their many potentially relevant APIs.

Fritter

What is it?

Fritter is a frame-rate independent timer tree.

In the course of developing Twisted, I learned a lot about time and timers. LoopingCall encodes some of this knowledge, but it’s very tightly coupled to the somewhat limited IReactorTime API.

Also, LoopingCall was originally designed with the needs of media playback (particularly network streaming audio playback) in mind, but I have used it more for background maintenance tasks and for animations. Both of these things have requirements that LoopingCall makes awkward but FRITTer is designed to meet:

  1. At higher loads, surprising interactions can occur with the underlying priority queue implementation, and different algorithms may make a significant difference to performance. Fritter has a pluggable implementation of a priority queue and is carefully minimally coupled to it.

  2. Driver selection is a first-class part of the API, with an included, public “Memory” driver for testing, rather than LoopingCall’s “testing is at least possible” .reactor attribute. This means that out of the box it supports both Twisted and asyncio, and can easily have other things added.

  3. The API is actually generic on what constitutes time itself, which means that you can use it for both short-term (i.e.: monotonic clock values as float-seconds) and long-term (civil times as timezone-aware datetime objects) recurring tasks. Recurrence rules can also be arbitrary functions.

  4. There is a recursive driver (this is the “tree” part) which both allows for:

    a. groups of timers which can be suspended and resumed together, and

    b. scaling of time, so that you can e.g. speed up or slow down the ticks for AIs, groups of animations, and so on, also in groups.

  5. The API is also generic on what constitutes work. This means that, for example, on a certain scheduler you can say “all work units scheduled on this scheduler, in addition to being callable, must also have an asJSON method” (see the sketch just after this list). And in fact that’s exactly what the longterm module in Fritter does.
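Here is a hedged sketch of what that kind of constraint can look like; this is not Fritter’s actual API, just the general shape, expressed with typing.Protocol:

# a work unit that is callable and can also serialize itself to JSON;
# a scheduler generic on this protocol can persist its pending queue
from typing import Protocol

class JSONableWork(Protocol):
    def __call__(self) -> None: ...
    def asJSON(self) -> dict: ...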

I can neither confirm nor deny that this project was factored out of a game engine for a secret game project which does not appear on this list.

What have I done with it recently?

Besides realizing, in the course of writing this blog post, that its CI was failing its code quality static checks (oops), the last big change was the preliminary support for recursive timers and serialization.

What are my plans for it?

  • These haven’t been tested in anger yet and I want to actually use them in a larger project to make sure that they don’t have any necessary missing pieces.

  • Documentation.

Encrust

What is it?

I have written about Encrust quite recently so if you want to know about it, you should probably read that post. In brief, it is a code-shipping tool for py2app. It takes care of architecture-independence, code-signing, and notarization.

What have I done with it recently?

Wrote it. It’s brand new as of this month.

What are my plans for it?

I really want this project to go away as a tool with an independent existence. Either I want its lessons to be fully absorbed into Briefcase or perhaps py2app itself, or for it to become a library that those things call into to do its thing.

Various Small Mac Utilities

What is it?

  • QuickMacApp is a very small library for creating status-item “menu bar apps” in Python which don’t have much of a UI but want to run some Python code in the background and occasionally pop up a notification or ask the user a question or something. The idea is that if you have a utility that needs a minimal UI to just ask the user one or two things, you should be able to give it a GUI immediately, without thinking about it too much.
  • QuickMacHotkey is a very minimal API to register hotkeys on macOS. This example is what comes up if you search the web for such a thing, but it hasn’t worked on a current Python for about 11 years. This isn’t the “right” way to do such a thing, since it provides no UI to set the shortcut; you’d have to hard-code it. But MASShortcut is now archived and I haven’t had the opportunity to investigate HotKey, so for the time being, it’s a handy thing, and totally adequate for the sort of quick-and-dirty applications you might make with QuickMacApp.
  • VEnvDotApp is a way of giving a virtualenv its own Info.plist and bundle ID, so that command-line python tools that just need to pop up a little mac GUI, like an alert or a notification, can do so with cross-platform tools without looking like it’s an app called “Python”, or in some cases breaking entirely.
  • MOPUp is a command-line updater for upstream Python.org macOS Python. For distributing third-party apps, Python.org’s version is really the one you want to use (it’s universal2, and it’s generally built with compiler options that make it a distributable thing itself) but updating it by downloading a .pkg file from a web browser is kind of annoying.

What have I done with it recently?

I’ve been releasing all these tools as they emerge and are factored out of other work, and they’re all fairly recent.

What are my plans for it?

I will continue to factor out any general-purpose tools from my platform-specific Python explorations — hopefully more Linux and Windows too, once I’ve got writing code for my own computer down, but most of the tools above are kind of “done” on their own, at the moment.

The two things that come to mind though are that QuickMacApp should have a way of owning the menubar sometimes (if you don’t have something like Bartender, menu-bar-status-item-only apps can look like they don’t do anything when you launch them), and that MOPUp should probably be upstreamed to python.org.

Pomodouroboros

What is it?

Pomodouroboros is a pomodoro timer with a highly opinionated take. It’s based on my own experience of ADHD time blindness, and is more like a therapeutic intervention for that specific condition than a typical “productivity” timer app.

In short, it has two important features that I have found lacking in other tools:

  1. A gigantic, absolutely impossible to ignore visual timer that presents a HUD overlay over your entire desktop. It remains low-opacity and static most of the time but pulses every 30 seconds to remind you that time is passing.
  2. Rather than requiring you to remember to set a timer before anything happens, it has an idea of “work hours” when you want to be time-sensitive and presents constant prompting to get started.

What have I done with it recently?

I’ve been working on it fairly consistently lately. The big things I’ve been doing have been:

  1. factoring things out of the Pomodouroboros-specific code and into QuickMacApp and Encrust.
  2. Porting the UI to the redesigned core of the application, which has been implemented and tested in platform-agnostic Python but does not have any UI yet.
  3. fully productionizing the build process and ensuring that Encrust is producing binary app bundles that people can use.

What are my plans for it?

In brief, “finish the app”. I want this to have its own website and find a life beyond the Python community, with people who just want a timer app and don’t care how it’s written. The top priority is to replace the current data model, which is to say the parts of the UI that set and evaluate timers and edit the list of upcoming timers (the timer countdown HUD UI itself is fine).

I also want to port it to other platforms, particularly desktop Linux, where I know there are many users interested in such a thing. I also want to do a CLI version for folks who live on the command line.

Finally: Pomodouroboros serves as a test-bed for a larger goal, which is that I want to make it easier for Python programmers, particularly beginners who are just getting into coding at all, to write code that not only interacts with their own computer, but that they can share with other users in a real way. As you can see with Encrust and other projects above, as much as I can I want my bumpy ride to production code to serve as trailblazing so that future travelers of this path find it as easy as possible.

And Here Is Where The CTA Goes

If this stuff sounds compelling, you can obviously sign up, that would be great. But also, if you’re just curious, go ahead and give some of these projects some stars on GitHub or just share this post. I’d also love to hear from you about any of this!

If a lot of people find this compelling, then pursuing these ideas will become a full-time job, but I’m pretty far from that threshold right now. In the meanwhile, I will also be doing a bit of consulting work.

I believe much of my upcoming month will be spoken for with contracting, although quite a bit of that work will also be open source maintenance, for which I am very grateful to my generous clients. Please do get in touch if you have something more specific you’d like me to work on, and you’d like to become one of those clients as well.


  1. Reasons which will have to remain mysterious until I can edit about 10,000 words of abstract, discursive philosophical rambling into something vaguely readable. 

  2. A strength which is common to many, indeed possibly most, people with ADHD. 

  3. While I want to give myself some leeway to try out ideas without necessarily finishing them, I do not want to start making commitments that I can’t keep. Particularly commitments that are tied to money! 

Building And Distributing A macOS Application Written in Python

Even with all the great tools we have, getting a macOS application written in Python all the way to a production-ready build suitable for end users can involve a lot of esoteric trivia.

Why Bother With All This?

In other words: if you want to run on an Apple platform, why not just write everything in an Apple programming language, like Swift? If you need to ship to multiple platforms, you might have to rewrite it all anyway, so why not give up?

Despite the significant investment that platform vendors make in their tools, I fundamentally believe that the core logic in any software application ought to be where its most important value lies. For small, independent developers, having portable logic that can be faithfully replicated on every platform without massive rework might be tricky to get started with, but if you can’t do it, it may not be cost effective to support multiple platforms at all.

So, it makes sense for me to write my applications in Python to achieve this sort of portability, even though on each platform it’s going to be a little bit more of a hassle to get it all built and shipped since the default tools don’t account for the use of Python.

But how much more is “a little bit” more of a hassle? I’ve been slowly learning about the pipeline to ship independently-distributed1 macOS applications for the last few years, and I’ve encountered a ton of annoying roadblocks.

Didn’t You Do This Already? What’s New?

So nice of you to remember. Thanks for asking. While I’ve gotten this to mostly work in the past, some things have changed since then:

  • the notarization toolchain has been updated (altool is now notarytool),
  • I’ve had to ship libraries other than just PyGame,
  • Apple Silicon launched, necessitating another dimension of build complexity to account for multiple architectures,
  • Perhaps most significantly, I have written a tool that attempts to encode as much of this knowledge as possible, Encrust, which I have put on PyPI and GitHub. If this is of interest to you, I would encourage you to file bugs on it, and hopefully add in more corner cases which I have missed.

I’ve also recently shipped my first build of an end-user application that successfully launches on both Apple Silicon and Intel macs, so here is a brief summary of the hoops I needed to jump through, from the beginning, in order to make everything work.

Wait did you say you wrote a tool? Is this fixed, then?

Encrust is, I hope, a temporary stopgap on the way to a much better comprehensive solution.

Specifically, I believe that Briefcase is a much more holistic solution to the general problem being described here, but it doesn’t suit my very specific needs right now4, and it doesn’t address a couple of minor points that I was running into here.

Encrust is mostly glue, shelling out to other tools that already solve portions of the problem, even when better APIs exist. It addresses three very specific layers of complexity:

  1. It enforces architecture independence, so that your app built on an M1 machine will still actually run on about half of the macs remaining out there2.
  2. It remembers tricky nuances of the notarization submission process, such as the highly specific way I need to generate my zip files to avoid mysterious notarization rejections3.
  3. It provides a common and central way to store the configuration for these things across repositories, so I don’t need to repeat this process and copy/paste a shell script every time I make a tiny new application.

It only works on Apple Silicon macs, because I didn’t bother to figure out how pip actually determines which architecture to download wheels for.

As such, unfortunately, Encrust is mostly a place for other people who have already solved this problem to collaborate to centralize this sort of knowledge and share ideas about where this code should ultimately go, rather than a tool for users trying to get started with shipping an app.

Open Offer

That said:

  1. I want to help Python developers ship their Python apps to users who are not also Python developers.
  2. macOS is an even trickier platform to do that on than most.
  3. It’s now easy for me to sign, notarize, and release new applications reliably

Therefore:

If you have an open source Python application that runs on macOS5 but can’t ship to macOS — either because:

  1. you’ve gotten stuck on one of the roadblocks that this post describes,
  2. you don’t have $100 to give to Apple, or because
  3. the app is using a cross-platform toolkit that should work just fine and you don’t have access to a mac at all, then

Send me an email and I’ll sign and post your releases.

What’s this post about, then?

People still frequently complain that “Python packaging” is really bad. And I’m on record that packaging Python (in the sense of “code”) for Python (in the sense of “deployment platform”) is actually kind of fine right now; if what you’re trying to get to is a package that can be pip installed, you can have a reasonably good experience modulo a few small onboarding hiccups that are well-understood in the community and fairly easy to overcome.

However, it’s still unfortunately hard to get Python code into the hands of users who are not also Python programmers with their own development environments.

My goal here is to document the difficulties themselves to try to provide a snapshot of what happens if you try to get started from scratch today. I think it is useful to record all the snags and inscrutable error messages that you will hit in a row, so we can see what the experience really feels like.

I hope that everyone will find it entertaining:

  • Other Mac python programmers might find pieces of trivia useful, and
  • Linux users will have fun making fun of the hoops we have to jump through on Apple platforms,

but the main audience is the maintainers of tools like Briefcase and py2app to evaluate the new-user experience holistically, and to see how much the use of their tools feels like this. This necessarily includes the parts of the process that are not actually packaging.

This is why I’m starting from the beginning again, and going through all the stuff that I’ve discussed in previous posts again, to present the whole experience.

Here Goes

So, with no further ado, here is a non-exhaustive list of frustrations that I have encountered in this process:

  • Okay. Time to get started. How do I display a GUI at all? Nothing happens when I call some nominally GUI API. Oops: I need my app to exist in an app bundle, which means I need to have a framework build. Time to throw those partially-broken pyenv pythons in the trash, and carefully sidestep around Homebrew; best to use the official Python.org from here on out.
  • Bonus Frustration since I’m using AppKit directly: why is my app segfaulting all the time? Oh, target is a weak reference in objective C, so if I make a window and put a button in it that points at a Python object, the Python interpreter deallocates it immediately because only the window (which is “nothing” as it’s a weakref) is referring to it. I need to start stuffing every Python object that talks to a UI element like a window or a button into a global list, or manually calling .retain() on all of them and hoping I don’t leak memory.
  • Everything seems to be using the default Python Launcher icon, and the app menu says “Python”. That wouldn’t look too good to end users. I should probably have my own app.
  • I’ll skip the part here where the author of a new application might have to investigate py2app, briefcase, pyoxidizer, and pyinstaller separately and try to figure out which one works the best right now. As I said above, I started with py2app and I’m stubborn to a fault, so that is the one I’m going to make work.
  • Now I need to set up py2app. Oops, I can’t use pyproject.toml any more, time to go back to setup.py.
  • Now I’ve built it and the app is crashing on startup when I click on it. I can’t see a traceback anywhere, so I guess I need to do something in the console.
    • Wow; the console is an unusable flood of useless garbage. Forget that.
    • I guess I need to run it in the terminal somehow. After some googling I figure out it’s ./dist/MyApp.app/Contents/MacOS/MyApp. Aha, okay, I can see the traceback now, and it’s … an import error?
    • Ugh, py2app isn’t actually including all of my code, it’s using some magic to figure out which modules are actually used, but it’s doing it by traversing import statements, which means I need to put a bunch of fake static import statements for everything that is used indirectly at the top of my app’s main script so that it gets found by the build. I experimentally discover a half a dozen things that are dynamically imported inside libraries that I use and jam them all in there.
  • Okay. Now at least it starts up. The blank app icon is uninspiring, though, time to actually get my own icon in there. Cool, I’ll make an icon in my favorite image editor, and save it as... icons must be PNGs, right? Uhh... no, looks like they have to be .icns files. But luckily I can convert the PNG I saved with a simple 12-line shell script that invokes sips and iconutil6.
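That 12-line shell script could just as well live in Python; here is a hedged sketch of the same sips-and-iconutil dance (the file names are hypothetical, and it assumes a large, square source PNG):

import pathlib
import subprocess

SRC = "icon.png"  # hypothetical square, high-resolution source image
iconset = pathlib.Path("MyApp.iconset")
iconset.mkdir(exist_ok=True)

# emit each size/scale combination that an .iconset is expected to contain
for size in (16, 32, 128, 256, 512):
    for scale in (1, 2):
        px = str(size * scale)
        suffix = "" if scale == 1 else "@2x"
        out = iconset / f"icon_{size}x{size}{suffix}.png"
        subprocess.run(["sips", "-z", px, px, SRC, "--out", str(out)], check=True)

# compile the .iconset directory into MyApp.icns
subprocess.run(["iconutil", "-c", "icns", str(iconset)], check=True)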

At this point I have an app bundle which kinda works. But in order to run on anyone else’s computer, I have to code-sign it.

  • In order to code-sign anything, I have to have an account with Apple that costs $99 per year, on developer.apple.com.
  • The easiest way to get these certificates is to log in to Xcode itself. There’s a web portal too but using it appears to involve a lot more manual management of key material, so, no thanks. This requires the full-fat Xcode.app though, not just the command-line tools that come down when I run xcode-select --install, so, time to wait for an 11GB download.
  • Oops, I made the wrong certificate type. Apparently the only right answer here is a “Developer ID Application” certificate.
  • Now that I’ve logged in to Xcode to get the certificate, I need to figure out how to tell my command-line tools about it (for starters, “codesign”). Looks like I need to run security find-identity -v -p codesigning.
  • Time to sign the application’s code.
    • The codesign tool has a --deep option which can sign the whole bundle. Great!
    • Except, that doesn’t work, because Python ships shared libraries in locations that macOS doesn’t automatically expect, so I have to manually locate those files and sign them, invoking codesign once for each.
    • Also, --deep is deprecated. There’s no replacement.
    • Logically, it seems like I still need --deep, because it does some poorly-explained stuff with non-code resource files that maybe doesn’t happen properly if I don’t? Oh well. Let's drop the option and hope for the best.8
    • With a few heuristics I think we can find all the relevant files with a little script7.
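Those heuristics might look something like this hedged sketch: walk the bundle and yield everything whose first four bytes are a Mach-O header, since those are the files codesign needs to visit individually:

import pathlib

# magic numbers for 64-bit Mach-O and "fat" (multi-architecture) binaries
MACHO_MAGICS = {b"\xcf\xfa\xed\xfe", b"\xca\xfe\xba\xbe"}

def signable_files(bundle: str):
    """Yield every Mach-O file inside the given .app bundle."""
    for path in sorted(pathlib.Path(bundle).rglob("*")):
        if path.is_file() and not path.is_symlink():
            with path.open("rb") as f:
                if f.read(4) in MACHO_MAGICS:
                    yield path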

Now my app bundle is signed! Hooray. 12 years ago, I’d be all set. But today I need some additional steps.

  • After I sign my app, Apple needs to sign my app (to indicate they’ve checked it for malware), which is called “notarization”.
    • In order to be eligible for notarization, I can’t just code-sign my app. I have to code-sign it with entitlements.
    • Also I can’t just code sign it with entitlements, I have to sign it with the hardened runtime, or it fails notarization.
    • Oops, out of the box, the hardened runtime is incompatible with a bunch of stuff in Python, including cffi and ctypes, because nobody has implemented support for MAP_JIT yet, so it crashes at startup. After some thrashing around I discover that I need a legacy “allow unsigned executable memory” entitlement (see the sketch just after this list). I can’t avoid this because a bunch of things in py2app’s bootstrapping code import things that use ctypes, and probably a bunch of packages which I’m definitely going to need, like cryptography, require cffi directly anyway.
    • In order to set up notarization external to Xcode, I need to create an App Password which is set up at appleid.apple.com, not the developer portal.
    • Bonus Frustration since I’ve been doing this for a few years: Originally this used to be even more annoying as I needed to wait for an email (with altool), and so I couldn’t script it directly. Now, at least, the new notarytool (which will shortly be mandatory) has a --wait flag.
    • Although the tool is documented under man notarytool, I actually have to run it as xcrun notarytool, even though codesign can be run either directly or via xcrun codesign.
    • Great, we’re ready to zip up our app and submit to Apple. Wait, they’re rejecting it? Why???
    • Aah, I need to manually copy and paste the UUID in the console output of xcrun notarytool submit into xcrun notarytool log to get some JSON that has some error messages embedded in it.
    • Oh. The bundle contains internal symlinks, so when I zipped it without the -y option, I got a corrupt archive.
    • Great, resubmitted with zip -y.
    • Oops, just kidding, that only works sometimes. Later, a different submission with a different hash will fail, and I’ll learn that the correct command line is actually ditto -c -k --sequesterRsrc --keepParent MyApp.app MyApp.app.zip.
      • Note that, for extra entertainment value, the position of the archive itself and directory are reversed on the command line from zip (and tar, and every other archive tool).
    • notarytool doesn’t record anything in my app though; it puts the “notarization ticket” on Apple's servers. Apparently, I still need to run stapler for users to be able to launch it while those servers are inaccessible, like, for example, if they’re offline.
    • Oops, not stapler. xcrun stapler. Whatever.
    • Except notarytool operates on a zip archive, but stapler operates on an app bundle. So we have to save the original app bundle, run stapler on it, then re-archive the whole thing into a new archive.
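As promised above, the entitlements file itself is tiny; a minimal sketch that generates it with the standard library (the output file name is arbitrary, and codesign accepts it via its --entitlements option):

# write the legacy "allow unsigned executable memory" entitlement as a plist
import plistlib

entitlements = {"com.apple.security.cs.allow-unsigned-executable-memory": True}
with open("entitlements.plist", "wb") as f:
    plistlib.dump(entitlements, f)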

Hooray! Time to release my great app!

  • Whoops, just got a bug report that it crashes immediately on every Intel mac. What’s going on?
  • Turns out I’m using a library whose authors distribute both aarch64 and x86_64 wheels; pip will prefer single-architecture wheels even if universal2 wheels are also available, so I’ve got to somehow get fat binaries put together. Am I going to have to build a huge pile of C code by myself? I thought all these Python hassles would at least let me avoid the C hassles!
  • Whew, okay, no need for that: there’s an amazing Swiss-army knife for macOS binary wheels, called delocate that includes a delocate-fuse tool that can fuse two wheels together. So I just need to figure out which binaries are the wrong architecture and somehow install my fixed/fused wheels before building my app with py2app.

    • Except, oops, this tool just rewrites the file in-place without even changing its name, so I have to write some janky shell scripts to do the reinstallation. Ugh. (A Python version of that glue appears after this list.)
  • OK now that all that is in place, I just need to re-do all the steps:

    • universal2-ize my virtualenv!
    • build!
    • sign!
    • archive!
    • notarize!
    • wait!!!
    • staple!
    • re-archive!
    • upload!
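
(And, as promised, the wheel-fusing glue: a rough Python version of what my janky shell scripts did. delocate-fuse is the real tool; its in-place behavior is as described above, but the platform-tag strings here are illustrative and will vary with your wheels and your version of delocate.)

import subprocess
from pathlib import Path

def fuse_to_universal2(arm64_wheel: Path, x86_64_wheel: Path) -> Path:
    # delocate-fuse rewrites its first argument in place, keeping the
    # old single-architecture filename...
    subprocess.run(
        ["delocate-fuse", str(arm64_wheel), str(x86_64_wheel)],
        check=True,
    )
    # ...so rename the result to advertise what it now contains.  These
    # tag strings are examples; match them to your actual wheel names.
    fused = arm64_wheel.name.replace(
        "macosx_11_0_arm64", "macosx_10_9_universal2"
    )
    return arm64_wheel.rename(arm64_wheel.with_name(fused))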

And we have an application bundle we can ship to users.

It’s just that easy.

As long as I don’t need sandboxing or Mac App Store distribution, of course. That’s a challenge for another day.


So, that was terrible. But what should be happening here?

Some of this is impossible to simplify beyond a certain point — many of the things above are not really about Python, but are about distribution requirements for macOS specifically, and we in the Python community can’t affect operating system vendors’ tooling.

What we can do is build tools that produce clear guidance on what step is required next, handle edge cases on their own, and generally guide users through these complex processes without requiring them to hit weird binary-format or cryptographic-signing errors on their own with no explanation of what to do next.

I do not think that documentation is the answer here. The necessary steps should be discoverable. If you need to go to a website, the tool should use the webbrowser module to open a website. If you need to launch an app, the tool should launch that app.
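
As a trivial, hypothetical illustration using the App Password step from earlier: instead of documenting that App Passwords live at appleid.apple.com rather than the developer portal, a tool can simply take you there and ask for the result:

import webbrowser

def obtain_app_password() -> str:
    # Open the right site rather than making the user hunt for it.
    print("Notarization needs an App Password, created at appleid.apple.com.")
    webbrowser.open("https://appleid.apple.com/")
    return input("Paste the App Password you just generated: ")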

With Encrust, I am hoping to generalize the solutions that I found while working through this for one specific slice of the app distribution pipeline — i.e. a macOS desktop application, distributed independently rather than through the Mac App Store — but other platforms will need the same treatment.

However, even without really changing py2app or any of the existing tooling, we could imagine a tool that would interactively prompt the user for each manual step, automate as much of it as possible, verify that it was performed correctly, and give comprehensible error messages if it was not.

For a lot of users, this full code-signing journey may not be necessary; if you just want to run your code on one or two friends’ computers, telling them to right click, go to ‘open’ and enter their password is not too bad. But it may not even be clear to them what the trade-off is, exactly; it looks like the app is just broken when you download it. The app build pipeline should tell you what the limitations are.

Other parts of this just need bug-fixes to address. py2app specifically, for example, could have a better self-test for its module-collecting behavior, launching an app to make sure it didn’t leave anything out.
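
That self-test could be as simple as something like this sketch (hypothetical; py2app has no such feature today, and the bundle path and timeout are assumptions):

import subprocess
import time

# Launch the freshly-built bundle's actual executable and make sure it
# survives its own bootstrap, i.e. no modules went missing at import time.
proc = subprocess.Popen(["dist/MyApp.app/Contents/MacOS/MyApp"])
time.sleep(10)  # long enough for the startup imports to run
if proc.poll() is not None:
    raise SystemExit(f"app died during startup: exit code {proc.returncode}")
proc.terminate()
print("smoke test passed")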

Interactive prompts to set up a Homebrew tap, or a Flatpak build, or a Microsoft Store Metro app, might be similarly useful. These all require manual steps outside of Python, and all of them are amenable to at least partial automation.


Thanks to my patrons on Patreon for supporting this sort of work, including development of Encrust, of Pomodouroboros, of posts like this one and of that offer to sign other people’s apps. If you think this sort of stuff is worthwhile, you might want to consider supporting me over there as well.


  1. I am not even going to try to describe building a sandboxed, app-store ready application yet. 

  2. At least according to the Steam Hardware Survey, which as of this writing in March of 2023 pegs the current user-base at 54% Apple Silicon and 46% Intel. The last version I can convince the Internet Archive to give me, from December of 2022, has it closer to 51%/49%, which suggests a transition rate of 1% per month. I suspect that this is pretty generous to Apple Silicon, as Steam users would tend to be earlier adopters and more sensitive to performance, but mostly I just don’t have any other source of data. 

  3. It is truly remarkable how bad the error reporting from the notarization service is. There are dozens of articles and forum posts around the web like this one where someone independently discovers this failure mode after successfully notarizing a dozen or so binaries and then suddenly being unable to do so any more because one of the bytes in the signature is suddenly not valid UTF-8 or something. 

  4. A lot of this is probably historical baggage; I started with py2app in 2008 or so, and I have been working on these apps in fits and starts for… ugh… 15 years. At some point, when things are humming along and there are actual users, a more comprehensive retrofit of the build process might make sense, but right now I just want to stop thinking about this.

  5. If your application isn’t open source, or if it requires some porting work, I’m also available for light contract work, but it might take a while to get on my schedule. Feel free to reach out as well, but I am not looking to spend a lot of time doing porting work. 

  6. I find this particular detail interesting; it speaks to the complexity and depth of this problem space that this has been a known issue for several years in Briefcase, but there’s just so much other stuff to handle in the release pipeline that it remains open. 

  7. I forgot both .a files and the py2app-included python executable itself here, and had to discover that gap when I signed a different app where that made a difference. 

  8. Thus far, it seems to be working. 

Data Classification

Does Python still have a need for class without @dataclass?

Is there a place for non-@dataclass classes in Python any more?

I have previously — and somewhat famously — written favorably about @dataclass’s venerable progenitor, attrs, and how you should use it for pretty much everything.

At the time, attrs was a piece of technology that you could bolt on to your Python stack to make your particular code better. While I advocated for it strongly, there were all the usual implicit reasons against using a new thing: it was an additional dependency, it might not interoperate with other convenience mechanisms for type declarations that you were already using (e.g. NamedTuple), it might look weird to other Python programmers familiar with existing tools, and so on. I don’t think that any of these were good counterpoints, but there was nevertheless a robust discussion to be had in addressing them all.

But for many years now, dataclasses have been — and currently are — built into the language. They are increasingly integrated into the toolchain at a deep level that is difficult for application code — or even other specialized tools — to replicate. Everybody knows what they are. Few or none of those reasons apply any longer.

For example, classes defined with @dataclass are now optimized the way a C structure might be when you compile them with mypyc — a trick that is extremely useful in some circumstances, and one that even attrs itself now has trouble keeping up with.

This all raises the question for me: beyond backwards compatibility, is there any point to having non-@dataclass classes any more? Is there any remaining justification for writing them in new code?

Consider my original example, translated from attrs to dataclasses. First, the non-dataclass version:

class Point3D:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

And now the dataclass one:

from dataclasses import dataclass

@dataclass
class Point3D:
    x: int
    y: int
    z: int

Many of my original points still stand. It’s still less repetitive. In fewer characters, we’ve expressed considerably more information, and we get more functionality: a legible repr, field-by-field equality, and (with flags like order=True and frozen=True) sorting and hashing. There doesn’t seem to be much of a downside besides the strictness of the types, and if typing.Any were a builtin, x: any would be fine for those who don’t want to unduly constrain their code.
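
To make “more functionality” concrete, here is what the generated methods buy you, using the two definitions above:

p = Point3D(1, 2, 3)
print(p)                      # Point3D(x=1, y=2, z=3), the generated __repr__
print(p == Point3D(1, 2, 3))  # True, thanks to the generated __eq__
# The hand-written version prints an opaque <Point3D object at 0x...>
# and compares by identity, so its version of that second line is False.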

The one real downside of the latter over the former right now is the need for an import. Which, at this point, just seems… confusing? Wouldn’t it be nicer to be able to just write this:

class Point3D:
    x: int
    y: int
    z: int

and not need to faff around with decorator semantics, fudging the difference between Mypy (or Pyright, or Pyre) type-check time and Mypyc or Cython compile time? Or, even better, to not need to explain the complexity of all these weird little distinctions to new learners of Python, and to not have to cover import before class?

These tools all already treat the @dataclass decorator as a totally special language construct, not really like a decorator at all, so to really explore it you have to explain a special case, and then a special case of a special case, the extension hook for that special case of the special case notwithstanding.

If we didn’t want any new syntax, we would need a from __future__ import dataclassification or some such for a while, but this doesn’t seem like an impossible bar to clear.


There are still some folks who don’t like type annotations at all, and there’s still the possibility of awkward implicit changes in meaning when transplanting code from a place with dataclassification enabled to one without, so perhaps an entirely new unambiguous syntax could be provided. One that more closely mirrors the meaning of parentheses in def, moving inheritance (a feature which, whether you like it or not, is clearly far less central to class definitions than ‘what fields do I have’) off to its own part of the syntax:

data Point3D(x: int, y: int, z: int) from Vector:
    def method(self):
        ...

which, for the “I don’t like types” contingent, could reduce to this in the minimal case:

data Point3D(x, y, z):
    pass

Just thinking pedagogically, I find it super compelling to imagine moving from teaching def foo(x, y, z):... to data Foo(x, y, z):... as opposed to @dataclass class Foo: x: int....

I don’t have any desire for semantic changes to accompany this, just to make it possible for newcomers to ignore the circuitous historical route of the @dataclass syntax and get straight into defining their own types with legible reprs from the very beginning of their Python journey.

(And make it possible for me to skip a couple of lines of boilerplate in short examples, as a bonus.)


I’m curious to know what y’all think, though. Shoot me an email or a toot and let me know.

In particular:

  1. Do you think there’s some reason I’m missing why Python’s current method for defining classes via a bunch of dunder methods is still better than dataclasses, or should stick around into the future for reasons beyond “compatibility”?
  2. Do you think “compatibility” is sufficient reason to keep the syntax the way it is forever, and I’m underestimating the cost of adding a keyword like this?
  3. If you do think that a change should be made, would you prefer:
    1. changing the meaning of class itself via a __future__ import,
    2. a new data keyword like the one I’ve proposed,
    3. a new keyword that functions exactly like the one I have proposed, but where you really want to bikeshed the word data a bunch,
    4. something more incremental like just putting dataclass and field in builtins,
    5. or an option I haven’t even contemplated here?

If I find I’m not alone in this perhaps I will wander over to the Python discussion boards to have a more substantive conversation...


Thank you to my patrons who are helping me while I try to turn… whatever this is… along with open source maintenance and application development, into a real job. Do you want to see me pursue ideas like this one further? If so, you can support me on Patreon as well!