A Response to Jacob Kaplan-Moss’s “Incompetent But Nice”

What can managers do about employees who are easy to work with, and are trying their best, but can’t seem to get the job done?

Jacob Kaplan-Moss has written a post about one of the most stressful things that can happen to you as a manager: when someone on your team is getting along well with the team, apparently trying their best, but unable to do the work. Incompetent, but nice.

I have some relevant experience with this issue. In the course of my career:

  1. I have been this person, more than once. I have both resigned, and been fired, as a result.
  2. I’ve been this person’s manager, and had to fire them after being unable to come up with a training plan that would allow them to improve.
  3. I’ve been an individual contributor on a team with this person, trying to help them improve.

So I can speak to this issue from all angles, and I can confirm: it is gut-wrenchingly awful no matter where you are in relation to it. It is social stress in its purest form. Everyone I’ve been on some side of this dynamic with is someone that I’d love to work with again. Including the managers who fired me!

Perhaps most of all, since I am not on either side of an employer/employee relationship right now1, I have some emotional distance from this kind of stress which allows me to write about it in a more detached way than others with more recent and proximate experience.

As such, I’d like to share some advice, in the unfortunate event that you find yourself in one of these positions and are trying to figure out what to do.

I’m going to speak from the perspective of the manager here, because that’s where most of the responsibility for decision-making lies. If you’re a teammate, you probably need to wait for a manager to ask you to do something; if you’re the underperformer, you’re probably already trying as hard as you can to improve. But hopefully this reasoning process can help you understand what the manager is trying to do, and find the parts of the process where you can take some initiative to help.

Step 0: Preliminaries

First let’s lay some ground rules.

  1. Breathe. Maintaining an explicit focus on regulating your own mood is important, regardless of whether you’re the manager, a teammate, or the underperformer.
  2. Accept that this may be intractable. You’re going to do your best in this situation but you are probably choosing between bad options. Nevertheless you will need to make decisions as confidently and quickly as possible. Letting this situation drag on can be a recipe for misery.
  3. You will need to do a retrospective.2 Get ready to collect information as you go through the process, so that you can identify in detail what happened and analyze it later. If you are the hiring manager, that means that after you’ve got your self-compassion together and your equanimous professional mood locked in, you will also need to reflect on the fact that you probably fucked up here, and get ready to improve your own skills and processes so that you don’t end up in this situation again.

I’m going to try to pick up where Jacob left off, and skip over the “easy” parts of this process. As he puts it:

The proximate answer is fairly easy: you try to help them level up: pay for classes, conferences, and/or books; connect them with mentors or coaches; figure out if something’s in the way and remove the blocker. Sometimes folks in this category just need to learn, and because they’re nice it’s easy to give them a lot of support and runway to level up. Sometimes these are folks with things going on in their lives outside work and they need some time (or some leave) to focus on stuff that’s more important than work. Sometimes the job has requirements that can be shifted or eased or dropped – you can match the work to what the person’s good at. These situations aren’t always easy but they are simple: figure out the problem and make a change.

Step 1: Figuring Out What’s Going On

There are different reasons why someone might underperform.

Possibility: The person is over-leveled.

This is rare. For the most part, pervasive under-leveling is more of a problem in the industry. But, it does happen, and when it happens, what it looks like is someone who is capable of doing some of the work that they’re assigned, but getting stuck and panicking with more challenging or ambiguous assignments.

Moreover, the person in this particular organizational antipattern, “nice but incompetent”, is more likely to be over-leveled: if they’re friendly and getting along with the team, they probably made a good first impression, which made the hiring committee want to go to bat for them as well, and may have led it to do them the “favor” of granting a higher level. This is something to consider in the retrospective.

Now, Jacob’s construction of this situation explicitly allows for “leveling up”, but the sort of over-leveling that can be resolved with a couple of conference talks, books, and mentoring sessions is just a challenge, not the problem we are discussing. We’re talking about underperformance that persists despite that kind of help: over-leveling which means they have skipped an entire rung on the professional development ladder, and do not yet have the professional experience to bridge the gap between where they are and where the role requires them to be.

If this is the case, consider a demotion. Identify the aspects of the work that the underperformer can handle, and try to match them to a role. If you are yourself the underperformer, proactively identifying what you’re actually doing well at and acknowledging the work you can’t handle, and identifying any other open headcount for roles at that level can really make this process easier for your manager.

However, be sure to be realistic. Are they capable enough to work at that reduced level? Does your team really have a role for someone at that level? Don’t invent makework in the hopes that you can give them a bootleg undergraduate degree’s worth of training on the job; they need to be able to contribute.

Possibility: Undiagnosed health issues

Jacob already addressed the “easy” version here: someone is struggling with an issue that they know about and they can ask for help, or at least you can infer the help they need from the things they’ve explicitly said.

But the underperformer might have something going on which they don’t realize is an issue. Or they might know there’s an issue but not think it’s serious, or not be able to find a diagnosis or treatment. Most frequently, this is a mental health problem, but it can also be things like unexplained fatigue.

This possibility is the worst. Not only do you feel like a monster for adding insult to injury, there’s also a lot of risk in discussing it.

Sometimes, you feel like you can just tell3 that somebody is suffering from a particular malady. It may seem obvious to you. If you have any empathy, you probably want to help them. However, you have to be careful.

First, some illnesses qualify as disabilities. Just because the employee has not disclosed a disability to you does not mean they are unaware of it. It is up to the employee whether to tell you, and you are not allowed to require them to disclose anything. They may have good reasons for not talking about it.

Beyond highlighting some relevant government policy, I am not equipped to advise you on how to handle this. You probably want to loop in someone from HR and/or someone from Legal, and have a meeting to discuss the particulars of what’s happening and how you’d like to try to help.

Second, there’s a big power differential here. You have to think hard about how to broach the subject; you’re their boss, telling them that you think they’re too sick to work. In this situation they may well not agree, and that can quite reasonably be perceived as an attack and an insult, even if you’re correct. Hopefully the folks in Legal or HR can help you with some strategies here; again, I’m not really qualified to do anything but point at the risks involved and say “oh no”.

The “good” news here is that if this really is the cause, then there’s not a whole lot to change in your retrospective. People get sick, their families get sick, it can’t always be predicted or prevented.

Possibility: Burnout

While it is certainly a mental health issue in its own right, burnout is specifically occupational and you can thus be a bit more confident as a manager recognizing it in an employment context.

This is better to prevent than to address, but if you’ve got someone burning out badly enough to be a serious performance issue, you need to give them more leave than they think they need, and you need to do it quickly. A week or two off is not going to cut it.

In my experience, this is the most common cause of an earnest but agreeable employee underperforming, and it is also the one we are most reluctant to see. Each step on the road to burnout seems locally reasonable.

Just push yourself a little harder. Just ask for a little overtime. Just until this deadline; it’s really important. Just until we can hire someone; we’ve already got a req open for that role. Just for this one client.

It feels like we should be able to white-knuckle our way through “just” a little inconvenience. It feels that way both individually and collectively. But the impacts are serious, and they are cumulative.

There are two ways this can manifest.

Usually, it’s a gradual decline that you can see over time, and you’ll see this in an employee that was previously doing okay, but now can’t hack it.

However, another manifestation is someone who was burned out at their previous role, did not take any break between jobs, and has stepped into a moderately stressful role which could be a healthy level of challenge for someone refreshed and taking frequent enough breaks, but is too demanding for someone who needs to recover.

If that’s the case, and you feel like you accurately identified a promising candidate, it is really worthwhile to get that person the break that they need. Just based on vague back-of-the-envelope averages, it would typically be about as expensive to find a way to wrangle 8 weeks of extra leave as to go through the whole hiring process for a mid-career software engineer from scratch. And that math assumes that the morale cost of firing someone is zero, and that the morale benefit of being seen to actually care about your people and proactively treat them well is also zero.

If you can positively identify this as the issue, then you have a lot of work to do in the retrospective. People do not spontaneously burn out by themselves. This is a management problem, and this person is likely to be the first domino to fall. You may need to make some pretty big changes across your team.

Possibility: Persistent Personality Conflict

It may be that someone else on the team is actually the problem. If the underperformer is inconsistent, observe the timing of the inconsistencies: does it get much worse when they’re assigned to work closely with someone else? Note that “personality conflict” does not mean that the other person is necessarily an asshole; it is possible for communication styles to simply fail to mesh due to personality differences which cannot be easily addressed.

You will be tempted to try to reshuffle responsibilities to keep these team members further apart from each other.

Don’t.

People with a conflict that is persistently interfering in day-to-day work need to be on different teams entirely. If you attempt to separate them but have them working closely, then inevitably one is going to get the higher-status projects and the other is going to be passed over for advancement. Or they’re going to end up drifting back into similar areas again.

Find a way to transfer them internally far enough away that they can have breathing room away from this conflict. If you can’t do that, then a firing may be your best option.

In the retrospective, it’s worth examining the entire team dynamic at this point, to see if the conflict is really just between two people, or if it’s more pervasive than that and other members of the team are just handling it better.

Step 2: Help By Being Kind, Not By Being Nice

Again, we’re already past the basics here. You’ve already got training and leave and such out of the way. You’re probably going to need to make a big change.

Responding to Jacob, specifically:

Firing them feels wrong; keeping them on feels wrong.

I think which one is right is heavily context-dependent. But, realistically, firing is the more likely option. You’ve got someone here who isn’t performing adequately, and you’ve already deployed all the normal tools to try to dig yourself out of that situation.

So let’s talk about that first.

Firing

Briefly: if you need to fire them, just fire them, and do it quickly.

Firing people sucks, even obnoxious people. And this situation is about a person that you like! You’ll want to be nice to this person, who is also almost certainly trying their best; it’s pretty hard to stay agreeable with a team you’re disappointing if you’re not even trying to improve.

If you find yourself thinking “I probably need to fire this person but it’s going to be hard on them”, the thought “hard on them” indicates you are focused on trying to help them personally, and not what is best for your company, your team, or even the employee themselves professionally. The way to show kindness in that situation is not to keep them in a role that’s bad for them and for you.

It would be much better for the underperformer to find a role where they are not an underperformer, and at this point, that role is probably not on your team. Every minute that you keep them on in the bad role is a minute that they can’t spend finding a good one.

As such, you need to immediately shift gears towards finding them a soft landing that does not involve delaying whatever action you need to take.

Being kind is fine. It is not even a conflict to try to spend some company resources to show that kindness. It is in the best interest of all concerned that anyone you have to fire or let go is inclined to sing your praises wherever they end up. The best brand marketing in the world for your jobs page is a diaspora of employees who wish they could still be on your team.

But being nice here, being conflict-avoidant and agreeable, only drags out an unpleasant situation. The way to spend company resources on kindness is to negotiate with your management for as large a severance package as you can manage, and give as much runway as possible for their job search, and clarity about what else you can do.

For example, are you usable as a positive reference? I.e., did they ever have a period where their performance was good, which you could communicate to future employers? Be clear.

Not-Firing

But while I think it’s the more likely option, it’s certainly not the only option. There are certainly cases where underperformers really can be re-situated into better roles, and this person could still find a good fit within the team, or at least, the company. You think you’ve solved the mystery of the cause of the problem here, and you need to make a change. What then?

In that case, the next step is to have a serious conversation about performance management. Set expectations clearly. Ironically, if you’re dealing with a jerk, you’ve probably already crisply communicated your issues. But if you’re dealing with a nice person, you’re more likely to have a slow, painful drift into this awkward situation, where they probably vaguely know that they’re doing poorly but may not realize how poorly.

If that’s what’s happening, you need to immediately correct it, even if firing isn’t on the table. If you’ve gotten to this point, some significant action is probably necessary. Make sure they understand the urgency of the situation, and if you have multiple options for them to consider, give them a clear timeline for how long they have to make a decision.

As I detailed above, things like a down-leveling or extended leave might be on the table. You probably do not want to drop anything like that into a normal 1:1; make sure you have set aside enough time to deal with it.

Remember: Run That Retrospective!

In most cases, the development of this sort of situation is a clear management failure. If you’re the manager, you need to own it.

Try to determine where the failure was. Was it hiring? Leveling? A team culture that encourages burnout? A poorly-vetted internal transfer? Accepting low performance for too long, not communicating expectations?

If you can identify an actionable, systemic cause, then you need to make sure there is time and space to make the necessary changes, and soon. It’s stressful to go through this process with one person, but having to do it repeatedly is far worse: any dynamic that persistently damages multiple people’s productivity is almost definitionally a team-destroyer.


  1. REMEMBER TO LIKE AND SUBSCRIBE 

  2. I know it’s a habit we all have from industry jargon — heck, I used the term in my own initial Mastodon posts about this — but “postmortem” is a fraught term in the best of circumstances, and super not great when you’re talking about an actual person who has recently been fired. Try to stick to “retrospective” or “review” when you’re talking about this process with your team. 

  3. Now, I don’t know if this is just me, but for reasons that are outside the scope of this post, when I was in my mid teens I got a copy of the DSM-IV, read the whole thing back to back, and studied it for a while. I’ve never had time to catch up to the DSM-5 but I’m vaguely aware of some of the changes and I’ve read a ton of nonfiction related to mental health. As a result of this self-education, I have an extremely good track record of telling people “you should see a psychiatrist about X”. I am cautious about this sort of thing and really only tell close friends, but as far as I know my hit-rate is 100%. 

Telemetry Is Not Your Enemy

Not all data collection is the same, and not all of it is bad.

Part 1: A Tale of Two Metaphors

In software development “telemetry” is data collected from users of the software, almost always delivered to the authors of the software via the Internet.

In recent years, there has been a great deal of angry public discourse about telemetry. In particular, there is a lot of concern that every software vendor and network service operator collecting any data at all is spying on its users, surveilling every aspect of our lives. The media narrative has been that any tech company collecting data for any purpose is acting creepy as hell.

I am quite sympathetic to this view. In general, some concern about privacy is warranted whenever a new data-collection scheme is proposed. However, it seems to me that the default response is no longer “concern and skepticism” but rather “panic and fury”. All telemetry is seen as snooping, and all snooping is seen as evil.

There’s a sense in which software telemetry is like surveillance. However, it is only like surveillance. Surveillance is a metaphor, not a description. It is far from a perfect metaphor.

In the discourse around user privacy, I feel like we have lost a lot of nuance about the specific details of telemetry when some people dismiss all telemetry as snooping, spying, or surveillance.

Here are some ways in which software telemetry is not like “snooping”:

  1. The data may be aggregated. The people consuming the results of telemetry are rarely looking at individual records, and individual records may not even exist in some cases. There are tools, like Prio, designed to make this aggregation as privacy-preserving as possible. (A rough sketch of the difference between raw and aggregated data follows this list.)
  2. The data is rarely looked at by human beings. In the cases (such as ad-targeting) where the data is highly individuated, both the input (your activity) and the output (your recommendations) are mainly consumed by you, in your experience of a product, by way of algorithms acting upon the data, not by an employee of the company you’re interacting with.1
  3. The data is highly specific. “Here’s a record with your account ID and the number of times you clicked the Add To Cart button without checking out” is not remotely the same class of information as “Here’s several hours of video and audio, attached to your full name, recorded without your knowledge or consent”. Emotional appeals calling any data “surveillance” tend to suggest that all collected data is the latter, where in reality most of it is much closer to the former.
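To make that distinction concrete, here is a tiny, purely illustrative sketch of my own (not any vendor’s actual pipeline) of the difference between an individuated event log and the aggregate report that an analytics pipeline might actually ship:

from collections import Counter

# Hypothetical raw events; an aggregating pipeline may never even materialize these.
raw_events = [
    {"user_id": "u123", "event": "add_to_cart_without_checkout"},
    {"user_id": "u456", "event": "add_to_cart_without_checkout"},
    {"user_id": "u123", "event": "checkout_completed"},
]

# The aggregate that gets reported contains counts only, with no user identifiers.
aggregate = Counter(event["event"] for event in raw_events)
print(aggregate)
# Counter({'add_to_cart_without_checkout': 2, 'checkout_completed': 1})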

There are other metaphors which can be used to understand software telemetry. For example, there is also a sense in which it is like voting.

I emphasize that voting is also a metaphor here, not a description. I will also freely admit that it is in many ways a worse metaphor for telemetry than “surveillance”. But it can illuminate other aspects of telemetry, the ones that the surveillance metaphor leaves out.

Data-collection is like voting because the data can represent your interests to a party that has some power over you. Your software vendor has the power to change your software, and you probably don’t: you may not have access to the source code, and even if it’s open source, you almost certainly don’t have the resources to take over its maintenance.

For example, let’s consider this paragraph from some Microsoft documentation about telemetry:

We also use the insights to drive improvements and intelligence into some of our management and monitoring solutions. This improvement helps customers diagnose quality issues and save money by making fewer support calls to Microsoft.

“Examples of how Microsoft uses the telemetry data” from the Azure SDK documentation

What Microsoft is saying here is that they’re collecting the data for your own benefit. They’re not attempting to justify it on the basis that defenders of law-enforcement wiretap schemes might. Those who want literal mass surveillance tend to justify it by conceding that it might hurt individuals a little bit to be spied upon, but if we spy on everyone surely we can find the bad people and stop them from doing bad things. That’s best for society.

But Microsoft isn’t saying that.2 What Microsoft is saying here is that if you’re experiencing a problem, they want to know about it so they can fix it and make the experience better for you.

I think that is at least partially true.

Part 2: I Qualify My Claims Extensively So You Jackals Don’t Lose Your Damn Minds On The Orange Website

I was inspired to write this post due to the recent discussions in the Go community about how to collect telemetry which provoked a lot of vitriol from people viscerally reacting to any telemetry as invasive surveillance. I will therefore heavily qualify what I’ve said above to try to address some of that emotional reaction in advance.

I am not suggesting that we must take Microsoft (or indeed, the Golang team) fully at their word here. Trillion dollar corporations will always deserve skepticism. I will concede in advance that it’s possible the data is put to other uses as well, possibly to maximize profits at the expense of users. But it seems reasonable to assume that this is at least partially true; it’s not like Microsoft wants Azure to be bad.

I can speak from personal experience. I’ve been in professional conversations around telemetry. When I have, my and my teams’ motivations were overwhelmingly focused on straightforwardly making the user experience good. We wanted it to be good so that they would like our products and buy more of them.

It’s hard enough to do that without nefarious ulterior motives. Most of the people who develop your software just don’t have the resources it takes to be evil about this.

Part 3: They Can’t Help You If They Can’t See You

With those qualifications out of the way, I will proceed with these axioms:

  1. The developers of software will make changes to it.
  2. These changes will benefit some users.
  3. Which changes the developers select will be derived, at least in part, from the information that they have.
  4. At least part of the information that the developers have is derived from the telemetry they collect.

If we can agree that those axioms are reasonable, then let us imagine two user populations:

  • Population A is privacy-sensitive and therefore sees telemetry as bad, and opts out of everything they possibly can.
  • Population B doesn’t care about privacy, and therefore ignores any telemetry and blithely clicks through any opt-in.

When the developer goes to make changes, they will have more information about Population B. Even if they’re vaguely aware that some users are opting out (or refusing to opt in), the developer will know far less about Population A. This means that any changes the developer makes will not serve the needs of their privacy-conscious users, which means fewer features that respect privacy as time goes on.

Part 4: Free as in Fact-Free Guesses

In the world of open source software, this problem is even worse. We often have fewer resources with which to collect and analyze telemetry in the first place, and when we do attempt to collect it, a vocal minority among those users are openly hostile, with feedback that borders on harassment. So we often have no telemetry at all, and are making changes based on guesses.

Meanwhile, in proprietary software, the user population is far larger and less engaged. Developers are not exposed directly to users and therefore cannot be harassed or intimidated into dropping their telemetry. Which means that proprietary software gains a huge advantage: they can know what most of their users want, make changes to accommodate it, and can therefore make a product better than the one based on uninformed guesses from the open source competition.

Proprietary software generally starts out with a panoply of advantages already — most of which boil down to “money” — but our collective knee-jerk reaction to any attempt to collect telemetry is a massive and continuing own-goal on the part of the FLOSS community. There’s no inherent reason why free software’s design cannot be based on good data, but our community’s history and self-selection biases make us less willing to consider it.

That does not mean we need to accept invasive data collection that is more like surveillance. We do not need to allow for stockpiled personally-identifiable information about individual users that lives forever. The abuses of indiscriminate tech data collection are real, and I am not suggesting that we forget about them.

The process for collecting telemetry must be open and transparent, and the data collected needs to be continuously vetted for safety. Clear data-retention policies should always be in place, to avoid future unanticipated misuses of data that is thought to be safe today but may be de-anonymized or otherwise abused in the future.

I want the collaborative feedback process of open source development to result in this kind of telemetry: thoughtful, respectful of user privacy, and designed with the principle of least privilege in mind. If we have this kind of process, then we could hold it up as an example for proprietary developers to follow, and possibly improve the industry at large.

But in order to be able to produce that example, we must produce criticism of telemetry efforts that is specific, grounded in actual risks and harms to users, rather than a series of emotional appeals to slippery-slope arguments that do not correspond to the actual data being collected. We must arrive at a consensus that there are benefits to users in allowing software engineers to have enough information to do their jobs, and telemetry is not uniformly bad. We cannot allow a few users who are complaining to stop these efforts for everyone.

After all, when those proprietary developers look at the hard data that they have about what their users want and need, it’s clear that those who are complaining don’t even exist.


  1. Please note that I’m not saying that this automatically makes such collection ethical. Attempting to modify user behavior or conduct un-reviewed psychological experiments on your customers is also wrong. But it’s wrong in a way that is somewhat different than simply spying on them. 

  2. I am not suggesting that data collected for the purposes of improving the users’ experience could not be used against their interest, whether by law enforcement or by cybercriminals or by Microsoft itself. Only that that’s not what the goal is here. 

What Would You Say You Do Here?

A brief description of the various projects that I am hoping to do independently, with your support. In other words, this is an ad, for me.

What have I been up to?

Late last year, I launched a Patreon. Although not quite a “soft” launch — I did toot about it, after all — I didn’t promote it very much.

I started this way because I realized that if I didn’t just put something up I’d be dithering forever. I’d previously been writing a sprawling monster of an announcement post that went into way too much detail, and kept expanding to encompass more and more ideas until I came to understand that salvaging it was going to be an editing process just as brutal and interminable as the writing itself.

However, that post also included a section where I just wrote about what I was actually doing.

So, for lots of reasons1, there is a diverse array of loosely related (or unrelated) projects below, which may not get finished any time soon. Or, indeed, may go unfinished entirely. Some are “done enough” now, and just won’t receive much in the way of future polish.

That is an intentional choice.

The rationale, as briefly as I can manage, is: I want to lean into my strength2 of creative, divergent thinking, and see how these ideas pan out without committing to them particularly intensely. My habitual impulse, for many years, has been to lean extremely hard on strategies that compensate for my weaknesses in organization, planning, and continued focus, and to attempt to commit to finishing every project to prove that I’ll never flake on anything.

While the reward tiers for the Patreon remain deliberately ambiguous3, I think it would be fair to say that patrons will have some level of influence in directing my focus by providing feedback on these projects, and requesting that I work more on some and less on others.

So, with no further ado: what have I been working on, and what work would you be supporting if you signed up? For each project, I’ll be answering 3 questions:

  1. What is it?
  2. What have I been doing with it recently?
  3. What are my plans for it?

This. i.e. blog.glyph.im

What is it?

For starters, I write stuff here. I guess you’re reading this post for some reason, so you might like the stuff I write? I feel like this doesn’t require much explanation.

What have I done with it recently?

You might appreciate the explicitly patron-requested Potato Programming post, a screed about dataclass, or a deep dive on the difficulties of codesigning and notarization on macOS along with an announcement of a tool to remediate them.

What are my plans for it?

You can probably expect more of the same; just all the latest thoughts & ideas from Glyph.

Twisted

What is it?

If you know of me you probably know of me as “the Twisted guy” and yeah, I am still that. If, somehow, you’ve ended up here and you don’t know what it is, wow, that’s cool, thanks for coming, super interested to know what you do know me for.

Twisted is an event-driven networking engine written in Python, the precursor and inspiration for the asyncio module, and a suite of event-driven programming abstractions, network protocol implementations, and general utility code.

What have I done with it recently?

I’ve gotten a few things merged, including type annotations for getPrimes and making the bundled CLI OpenSSH server replacement work at all with public key authentication again, as well as some test cleanups that reduce the overall surface area of old-style Deferred-returning tests that can be flaky and slow.

I’ve also landed a posix_spawnp-based spawnProcess implementation which speeds up process spawning significantly; this can be as much as 3x faster if you do a lot of spawning of short-running processes.

I have a bunch of PRs in flight, too, including better annotations for FilePath, Deferred, and IReactorProcess, as well as a fix for the aforementioned posix_spawnp implementation.

What are my plans for it?

A lot of the projects below use Twisted in some way, and I continue to maintain it for my own uses. My particular focus is on quality-of-life improvements: issues that someone starting out with a Twisted project will bump into and find confusing or difficult. I want it to be really easy to write applications with Twisted, and I want to use my own experiences with it to find and smooth out those rough edges.

I also do code reviews of other folks’ contributions; we do still have over 100 open PRs right now.

DateType

What is it?

DateType is a workaround for a very specific bug in the way that the datetime standard library module deals with type composition: to wit, that datetime is a subclass of date but is not Liskov-substitutable for it. There are even #type:ignore comments in the standard library type stubs to work around this problem, because if you did this in your own code, it simply wouldn’t type-check.
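To make the substitutability problem concrete, here is a minimal sketch of my own (not taken from DateType’s documentation) of code that a type checker accepts but which fails at runtime:

from datetime import date, datetime

def days_until(deadline: date) -> int:
    # Subtracting two dates is fine; subtracting a date from a datetime is not.
    return (deadline - date.today()).days

days_until(date(2030, 1, 1))      # works as intended
days_until(datetime(2030, 1, 1))  # accepted by the type checker, raises TypeError

As I understand it, DateType’s types exist precisely so that a mistake like the second call can be rejected statically rather than discovered at runtime.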

What have I done with it recently?

I updated it a few months ago to expose DateTime and Time directly (as opposed to AwareDateTime and NaiveDateTime), so that users could specialize their own functions that took either naive or aware times without ugly and slightly-incorrect unions.

What are my plans for it?

This library is mostly done for the time being, but if I had to polish it a bit I’d probably do two things:

  1. a readthedocs page for nice documentation
  2. write a PEP to get this integrated into the standard library

Although the compatibility problems are obviously very tricky and a PEP would probably be controversial, this is ultimately a bug in the stdlib, and should be fixed upstream there.

Automat

What is it?

It’s a library to make deterministic finite-state automata easier to create and work with.

What have I done with it recently?

Back in the middle of last year, I opened a PR to create a new, completely different front-end API for state machine definition. Instead of something like this:

from automat import MethodicalMachine

class MachineExample:
    machine = MethodicalMachine()

    @machine.state(initial=True)
    def on(self): ...

    @machine.state()
    def off(self): ...

    @machine.input()
    def flip(self): ...

    @machine.output()
    def _do_flip(self): return ...

    on.upon(flip, enter=off, outputs=[_do_flip], collector=list)
    off.upon(flip, enter=on, outputs=[_do_flip], collector=list)

this branch lets you instead do something like this:

from typing import Protocol

# TypicalBuilder is the new front-end API proposed in that in-progress branch.
from automat import TypicalBuilder

class MachineProtocol(Protocol):
    def flip(self) -> None: ...

class MachineCore: ...

def buildCore() -> MachineCore: ...

machine = TypicalBuilder(MachineProtocol, buildCore)

@machine.state()
class _OffState:
    @machine.handle(MachineProtocol.flip, enter=lambda: _OnState)
    def flip(self) -> None: ...

@machine.state()
class _OnState:
    @machine.handle(MachineProtocol.flip, enter=lambda: _OffState)
    def flip(self) -> None: ...

MachineImplementation = machine.buildClass()

In other words, it creates a state for every type, and type safety that much more cleanly expresses what methods can be called and by whom. There is no need to make everything private with tons of underscore-prefixed methods and attributes, since all the caller can see is “an implementation of MachineProtocol”; your state classes can otherwise just be normal classes, which do not require special logic to be instantiated if you want to use them directly.

Also, by making a state for every type, it’s a lot cleaner to express that certain methods require certain attributes: simply make them available as attributes on that state class and require an argument of that state type, rather than plotting your way through the outputs generated in your state graph.

What are my plans for it?

I want to finish up dealing with some issues with that branch - particularly the ugly patterns for communicating portions of the state core to the caller and also the documentation; there are a lot of magic signatures which make sense in heavy usage but are a bit mysterious to understand while you’re getting started.

I’d also like the visualizer to work on it, which it doesn’t yet, because the visualizer cribs a bunch of state from MethodicalMachine when it should be working purely on core objects.

Secretly

What is it?

This is an attempt at a holistic, end-to-end secret management wrapper around Keyring. Whereas Keyring handles password storage, this handles the whole lifecycle: looking up the secret to see if it’s there, displaying UI to prompt the user if it isn’t (leveraging a pinentry program from GPG if available), and storing the result so the user doesn’t have to be prompted again.
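As a rough illustration of what that lifecycle means, here is a sketch of my own using the plain Keyring API, with getpass standing in for the pinentry UI; this is not Secretly’s actual interface:

import keyring
from getpass import getpass

def obtain_secret(service: str, account: str) -> str:
    # 1. Look up the secret to see if it's already stored.
    secret = keyring.get_password(service, account)
    if secret is None:
        # 2. It isn't; prompt the user (Secretly prefers a pinentry program here).
        secret = getpass(f"Secret for {account} on {service}: ")
        # 3. Store it so the user isn't prompted again next time.
        keyring.set_password(service, account, secret)
    return secret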

What have I done with it recently?

It’s been a long time since I touched it.

What are my plans for it?

  • Documentation. It’s totally undocumented.
  • It could be written to be a bit more abstract. It dates from a time before asyncio, so its current Twisted requirement for Deferred could be made into a generic Awaitable one.
  • Better platform support for Linux & Windows when GPG’s pinentry is not available.
  • Support for multiple accounts so that when the user is prompted for the relevant credential, they can store it.
  • Integration with 1Password via some of their many potentially relevant APIs.

Fritter

What is it?

Fritter is a frame-rate independent timer tree.

In the course of developing Twisted, I learned a lot about time and timers. LoopingCall encodes some of this knowledge, but it’s very tightly coupled to the somewhat limited IReactorTime API.

Also, LoopingCall was originally designed with the needs of media playback (particularly network streaming audio playback) in mind, but I have used it more for background maintenance tasks and for animations. Both of these things have requirements that LoopingCall makes awkward but FRITTer is designed to meet:

  1. At higher loads, surprising interactions can occur with the underlying priority queue implementation, and different algorithms may make a significant difference to performance. Fritter has a pluggable implementation of a priority queue and is carefully minimally coupled to it.

  2. Driver selection is a first-class part of the API, with an included, public “Memory” driver for testing, rather than LoopingCall’s “testing is at least possible” .reactor attribute. This means that out of the box it supports both Twisted and asyncio, and can easily have other drivers added.

  3. The API is actually generic on what constitutes time itself, which means that you can use it for both short-term (i.e.: monotonic clock values as float-seconds) and long-term (civil times as timezone-aware datetime objects) recurring tasks. Recurrence rules can also be arbitrary functions.

  4. There is a recursive driver (this is the “tree” part) which both allows for:

    a. groups of timers which can be suspended and resumed together, and

    b. scaling of time, so that you can e.g. speed up or slow down the ticks for AIs, groups of animations, and so on, also in groups.

  5. The API is also generic on what constitutes work. This means that, for example, in a certain timer you can say “all work units scheduled on this scheduler, in addition to being callable, must also have an asJSON method”. And in fact that’s exactly what the longterm module in Fritter does.

I can neither confirm nor deny that this project was factored out of a game engine for a secret game project which does not appear on this list.
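To illustrate what points 3 and 5 above mean in practice, here is a hypothetical sketch of my own (not Fritter’s actual API) of a scheduler that is generic over its work type, so that one scheduler can require JSON-serializable work units while another accepts any plain callable:

from typing import Callable, Generic, Protocol, TypeVar

class JSONableWork(Protocol):
    # Work units for a "long term" scheduler must be callable and serializable.
    def __call__(self) -> None: ...
    def asJSON(self) -> dict: ...

WorkT = TypeVar("WorkT", bound=Callable[[], None])

class Scheduler(Generic[WorkT]):
    def __init__(self) -> None:
        self._queue: list[tuple[float, WorkT]] = []

    def callAt(self, when: float, work: WorkT) -> None:
        # The type checker enforces whatever bound this instance's WorkT carries.
        self._queue.append((when, work))

# A persistent scheduler may only be given serializable work...
persistent: Scheduler[JSONableWork] = Scheduler()
# ...while an in-memory one accepts any zero-argument callable.
ephemeral: Scheduler[Callable[[], None]] = Scheduler()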

What have I done with it recently?

Besides realizing, in the course of writing this blog post, that its CI was failing its code quality static checks (oops), the last big change was the preliminary support for recursive timers and serialization.

What are my plans for it?

  • These haven’t been tested in anger yet and I want to actually use them in a larger project to make sure that they don’t have any necessary missing pieces.

  • Documentation.

Encrust

What is it?

I have written about Encrust quite recently so if you want to know about it, you should probably read that post. In brief, it is a code-shipping tool for py2app. It takes care of architecture-independence, code-signing, and notarization.

What have I done with it recently?

Wrote it. It’s brand new as of this month.

What are my plans for it?

I really want this project to go away as a tool with an independent existence. Either I want its lessons to be fully absorbed into Briefcase or perhaps py2app itself, or for it to become a library that those tools call into to do this work.

Various Small Mac Utilities

What is it?

  • QuickMacApp is a very small library for creating status-item “menu bar apps” in Python which don’t have much of a UI but want to run some Python code in the background and occasionally pop up a notification or ask the user a question or something. The idea is that if you have a utility that needs a minimal UI to just ask the user one or two things, you should be able to give it a GUI immediately, without thinking about it too much.
  • QuickMacHotkey is a very minimal API to register hotkeys on macOS. This example is what comes up if you search the web for such a thing, but it hasn’t worked on a current Python for about 11 years. This isn’t the “right” way to do such a thing, since it provides no UI to set the shortcut; you’d have to hard-code it. But MASShortcut is now archived and I haven’t had the opportunity to investigate HotKey, so for the time being it’s a handy thing, and totally adequate for the sort of quick-and-dirty applications you might make with QuickMacApp.
  • VEnvDotApp is a way of giving a virtualenv its own Info.plist and bundle ID, so that command-line python tools that just need to pop up a little mac GUI, like an alert or a notification, can do so with cross-platform tools without looking like it’s an app called “Python”, or in some cases breaking entirely.
  • MOPUp is a command-line updater for upstream Python.org macOS Python. For distributing third-party apps, Python.org’s version is really the one you want to use (it’s universal2, and it’s generally built with compiler options that make it a distributable thing itself) but updating it by downloading a .pkg file from a web browser is kind of annoying.

What have I done with it recently?

I’ve been releasing all these tools as they emerge and are factored out of other work, and they’re all fairly recent.

What are my plans for it?

I will continue to factor out any general-purpose tools from my platform-specific Python explorations — hopefully more Linux and Windows too, once I’ve got writing code for my own computer down, but most of the tools above are kind of “done” on their own, at the moment.

The two things that come to mind though are that QuickMacApp should have a way of owning the menubar sometimes (if you don’t have something like Bartender, menu-bar-status-item-only apps can look like they don’t do anything when you launch them), and that MOPUp should probably be upstreamed to python.org.

Pomodouroboros

What is it?

Pomodouroboros is a pomodoro timer with a highly opinionated take. It’s based on my own experience of ADHD time blindness, and is more like a therapeutic intervention for that specific condition than a typical “productivity” timer app.

In short, it has two important features that I have found lacking in other tools:

  1. A gigantic, absolutely impossible to ignore visual timer that presents a HUD overlay over your entire desktop. It remains low-opacity and static most of the time but pulses every 30 seconds to remind you that time is passing.
  2. Rather than requiring you to remember to set a timer before anything happens, it has an idea of “work hours” when you want to be time-sensitive and presents constant prompting to get started.

What have I done with it recently?

I’ve been working on it fairly consistently lately. The big things I’ve been doing have been:

  1. Factoring things out of the Pomodouroboros-specific code and into QuickMacApp and Encrust.
  2. Porting the UI to the redesigned core of the application, which has been implemented and tested in platform-agnostic Python but does not have any UI yet.
  3. Fully productionizing the build process and ensuring that Encrust is producing binary app bundles that people can use.

What are my plans for it?

In brief, “finish the app”. I want this to have its own website and find a life beyond the Python community, with people who just want a timer app and don’t care how it’s written. The top priority is to replace the current data model, which is to say the parts of the UI that set and evaluate timers and edit the list of upcoming timers (the timer countdown HUD UI itself is fine).

I also want to port it to other platforms, particularly desktop Linux, where I know there are many users interested in such a thing. I also want to do a CLI version for folks who live on the command line.

Finally: Pomodouroboros serves as a test-bed for a larger goal, which is that I want to make it easier for Python programmers, particularly beginners who are just getting into coding at all, to write code that not only interacts with their own computer, but that they can share with other users in a real way. As you can see with Encrust and other projects above, as much as I can I want my bumpy ride to production code to serve as trailblazing so that future travelers of this path find it as easy as possible.

And Here Is Where The CTA Goes

If this stuff sounds compelling, you can obviously sign up, that would be great. But also, if you’re just curious, go ahead and give some of these projects some stars on GitHub or just share this post. I’d also love to hear from you about any of this!

If a lot of people find this compelling, then pursuing these ideas will become a full-time job, but I’m pretty far from that threshold right now. In the meanwhile, I will also be doing a bit of consulting work.

I believe much of my upcoming month will be spoken for with contracting, although quite a bit of that work will also be open source maintenance, for which I am very grateful to my generous clients. Please do get in touch if you have something more specific you’d like me to work on, and you’d like to become one of those clients as well.


  1. Reasons which will have to remain mysterious until I can edit about 10,000 words of abstract, discursive philosophical rambling into something vaguely readable. 

  2. A strength which is common to many, indeed possibly most, people with ADHD. 

  3. While I want to give myself some leeway to try out ideas without necessarily finishing them, I do not want to start making commitments that I can’t keep. Particularly commitments that are tied to money! 

Building And Distributing A macOS Application Written in Python

Even with all the great tools we have, getting a macOS application written in Python all the way to a production-ready build suitable for end users can involve a lot of esoteric trivia.

Why Bother With All This?

In other words: if you want to run on an Apple platform, why not just write everything in an Apple programming language, like Swift? If you need to ship to multiple platforms, you might have to rewrite it all anyway, so why not give up?

Despite the significant investment that platform vendors make in their tools, I fundamentally believe that the core logic in any software application ought to be where its most important value lies. For small, independent developers, having portable logic that can be faithfully replicated on every platform without massive rework might be tricky to get started with, but if you can’t do it, it may not be cost effective to support multiple platforms at all.

So, it makes sense for me to write my applications in Python to achieve this sort of portability, even though on each platform it’s going to be a little bit more of a hassle to get it all built and shipped since the default tools don’t account for the use of Python.

But how much more is “a little bit” more of a hassle? I’ve been slowly learning about the pipeline to ship independently-distributed1 macOS applications for the last few years, and I’ve encountered a ton of annoying roadblocks.

Didn’t You Do This Already? What’s New?

So nice of you to remember. Thanks for asking. While I’ve gotten this to mostly work in the past, some things have changed since then:

  • the notarization toolchain has been updated (altool is now notarytool),
  • I’ve had to ship libraries other than just PyGame,
  • Apple Silicon launched, necessitating another dimension of build complexity to account for multiple architectures,
  • Perhaps most significantly, I have written a tool that attempts to encode as much of this knowledge as possible, Encrust, which I have put on PyPI and GitHub. If this is of interest to you, I would encourage you to file bugs on it, and hopefully add in more corner cases which I have missed.

I’ve also recently shipped my first build of an end-user application that successfully launches on both Apple Silicon and Intel macs, so here is a brief summary of the hoops I needed to jump through, from the beginning, in order to make everything work.

Wait did you say you wrote a tool? Is this fixed, then?

Encrust is, I hope, a temporary stopgap on the way to a much better comprehensive solution.

Specifically, I believe that Briefcase is a much more holistic solution to the general problem being described here, but it doesn’t suit my very specific needs right now4, and it doesn’t address a couple of minor points that I was running into here.

Encrust itself is mostly glue, shelling out to other tools that already solve portions of the problem, even when better APIs exist. It addresses three very specific layers of complexity:

  1. It enforces architecture independence, so that your app built on an M1 machine will still actually run on about half of the macs remaining out there2.
  2. It remembers tricky nuances of the notarization submission process, such as the highly specific way I need to generate my zip files to avoid mysterious notarization rejections3.
  3. It provides a common and central way to store the configuration for these things across repositories, so I don’t need to repeat this process and copy/paste a shell script every time I make a tiny new application.

It only works on Apple Silicon macs, because I didn’t bother to figure out how pip actually determines which architecture to download wheels for.

As such, unfortunately, Encrust is mostly a place for other people who have already solved this problem to collaborate to centralize this sort of knowledge and share ideas about where this code should ultimately go, rather than a tool for users trying to get started with shipping an app.

Open Offer

That said:

  1. I want to help Python developers ship their Python apps to users who are not also Python developers.
  2. macOS is an even trickier platform to do that on than most.
  3. It’s now easy for me to sign, notarize, and release new applications reliably.

Therefore:

If you have an open source Python application that runs on macOS5 but can’t ship to macOS — either because:

  1. you’ve gotten stuck on one of the roadblocks that this post describes,
  2. you don’t have $100 to give to Apple, or because
  3. the app is using a cross-platform toolkit that should work just fine and you don’t have access to a mac at all, then

Send me an email and I’ll sign and post your releases.

What’s this post about, then?

People still frequently complain that “Python packaging” is really bad. And I’m on record that packaging Python (in the sense of “code”) for Python (in the sense of “deployment platform”) is actually kind of fine right now; if what you’re trying to get to is a package that can be pip installed, you can have a reasonably good experience modulo a few small onboarding hiccups that are well-understood in the community and fairly easy to overcome.

However, it’s still unfortunately hard to get Python code into the hands of users who are not also Python programmers with their own development environments.

My goal here is to document the difficulties themselves to try to provide a snapshot of what happens if you try to get started from scratch today. I think it is useful to record all the snags and inscrutable error messages that you will hit in a row, so we can see what the experience really feels like.

I hope that everyone will find it entertaining:

  • Other Mac Python programmers might find pieces of trivia useful, and
  • Linux users will have fun making fun of the hoops we have to jump through on Apple platforms,

but the main audience is the maintainers of tools like Briefcase and py2app to evaluate the new-user experience holistically, and to see how much the use of their tools feels like this. This necessarily includes the parts of the process that are not actually packaging.

This is why I’m starting from the beginning again, and going through all the stuff that I’ve discussed in previous posts again, to present the whole experience.

Here Goes

So, with no further ado, here is a non-exhaustive list of frustrations that I have encountered in this process:

  • Okay. Time to get started. How do I display a GUI at all? Nothing happens when I call some nominally GUI API. Oops: I need my app to exist in an app bundle, which means I need to have a framework build. Time to throw those partially-broken pyenv pythons in the trash, and carefully sidestep around Homebrew; best to use the official Python.org from here on out.
  • Bonus Frustration since I’m using AppKit directly: why is my app segfaulting all the time? Oh, target is a weak reference in objective C, so if I make a window and put a button in it that points at a Python object, the Python interpreter deallocates it immediately because only the window (which is “nothing” as it’s a weakref) is referring to it. I need to start stuffing every Python object that talks to a UI element like a window or a button into a global list, or manually calling .retain() on all of them and hoping I don’t leak memory.
  • Everything seems to be using the default Python Launcher icon, and the app menu says “Python”. That wouldn’t look too good to end users. I should probably have my own app.
  • I’ll skip the part here where the author of a new application might have to investigate py2app, briefcase, pyoxidizer, and pyinstaller separately and try to figure out which one works the best right now. As I said above, I started with py2app and I’m stubborn to a fault, so that is the one I’m going to make work.
  • Now I need to set up py2app. Oops, I can’t use pyproject.toml any more, time to go back to setup.py.
  • Now I built it and the app is crashing on startup when I click on it. I can’t see a traceback anywhere, so I guess I need to do something in the console.
    • Wow; the console is an unusable flood of useless garbage. Forget that.
    • I guess I need to run it in the terminal somehow. After some googling I figure out it’s ./dist/MyApp.app/Contents/MacOS/MyApp. Aha, okay, I can see the traceback now, and it’s … an import error?
    • Ugh, py2app isn’t actually including all of my code, it’s using some magic to figure out which modules are actually used, but it’s doing it by traversing import statements, which means I need to put a bunch of fake static import statements for everything that is used indirectly at the top of my app’s main script so that it gets found by the build. I experimentally discover a half a dozen things that are dynamically imported inside libraries that I use and jam them all in there.
  • Okay. Now at least it starts up. The blank app icon is uninspiring, though, time to actually get my own icon in there. Cool, I’ll make an icon in my favorite image editor, and save it as... icons must be PNGs, right? Uhh... no, looks like they have to be .icns files. But luckily I can convert the PNG I saved with a simple 12-line shell script that invokes sips and iconutil6.
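For reference, here is a rough sketch of what such a conversion can look like; this is my own Python rendering of that kind of script (not the exact one the footnote alludes to), shelling out to sips and iconutil with the standard .iconset file naming conventions:

import subprocess
from pathlib import Path

# Build MyIcon.icns from icon.png via the standard .iconset layout.
iconset = Path("MyIcon.iconset")
iconset.mkdir(exist_ok=True)

for size in [16, 32, 128, 256, 512]:
    for scale in [1, 2]:
        pixels = size * scale
        suffix = "" if scale == 1 else "@2x"
        out = iconset / f"icon_{size}x{size}{suffix}.png"
        # Resample the source PNG to each required size.
        subprocess.run(
            ["sips", "-z", str(pixels), str(pixels), "icon.png", "--out", str(out)],
            check=True,
        )

# Convert the populated .iconset directory into an .icns file.
subprocess.run(["iconutil", "--convert", "icns", str(iconset)], check=True)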

At this point I have an app bundle which kinda works. But in order to run on anyone else’s computer, I have to code-sign it.

  • In order to code-sign anything, I have to have an account with Apple that costs $99 per year, on developer.apple.com.
  • The easiest way to get these certificates is to log in to Xcode itself. There’s a web portal too but using it appears to involve a lot more manual management of key material, so, no thanks. This requires the full-fat Xcode.app though, not just the command-line tools that come down when I run xcode-select --install, so, time to wait for an 11GB download.
  • Oops, I made the wrong certificate type. Apparently the only right answer here is a “Developer ID Application” certificate.
  • Now that I’ve logged in to Xcode to get the certificate, I need to figure out how to tell my command-line tools about it (for starters, “codesign”). Looks like I need to run security find-identity -v -p codesigning.
  • Time to sign the application’s code.
    • The codesign tool has a --deep option which can sign the whole bundle. Great!
    • Except, that doesn’t work, because Python ships shared libraries in locations that macOS doesn’t automatically expect, so I have to manually locate those files and sign them, invoking codesign once for each.
    • Also, --deep is deprecated. There’s no replacement.
    • Logically, it seems like I still need --deep, because it does some poorly-explained stuff with non-code resource files that maybe doesn’t happen properly if I don’t? Oh well. Let's drop the option and hope for the best.8
    • With a few heuristics I think we can find all the relevant files with a little script7.
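
Roughly, the shape of that script is something like the following sketch (the identity string and entitlements file name are placeholders; per footnote 7, .a files and the bundled python executable may need the same treatment):

# Illustrative sketch: sign every shared library inside the bundle, then the
# bundle itself. The identity and entitlements file here are placeholders.
import subprocess
from pathlib import Path

IDENTITY = "Developer ID Application: Your Name (TEAMID1234)"
BUNDLE = Path("dist/MyApp.app")

def codesign(path: Path) -> None:
    subprocess.run(
        [
            "codesign", "--sign", IDENTITY, "--force", "--timestamp",
            "--options", "runtime",                   # hardened runtime, needed later for notarization
            "--entitlements", "entitlements.plist",   # e.g. the allow-unsigned-executable-memory entitlement
            str(path),
        ],
        check=True,
    )

# Sign the embedded shared libraries first, then the bundle as a whole.
for pattern in ("*.so", "*.dylib"):
    for lib in BUNDLE.rglob(pattern):
        codesign(lib)

codesign(BUNDLE)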

Now my app bundle is signed! Hooray. 12 years ago, I’d be all set. But today I need some additional steps.

  • After I sign my app, Apple needs to sign my app (to indicate they’ve checked it for malware), which is called “notarization”.
    • In order to be eligible for notarization, I can’t just code-sign my app. I have to code-sign it with entitlements.
    • Also I can’t just code sign it with entitlements, I have to sign it with the hardened runtime, or it fails notarization.
    • Oops, out of the box, the hardened runtime is incompatible with a bunch of stuff in Python, including cffi and ctypes, because nobody has implemented support for MAP_JIT yet, so it crashes at startup. After some thrashing around I discover that I need a legacy “allow unsigned executable memory” entitlement. I can’t avoid these imports, because a bunch of things in py2app’s bootstrapping code import modules that use ctypes, and packages which I’m definitely going to need, like cryptography, require cffi directly anyway.
    • In order to set up notarization external to Xcode, I need to create an App Password which is set up at appleid.apple.com, not the developer portal.
    • Bonus Frustration since I’ve been doing this for a few years: Originally this used to be even more annoying as I needed to wait for an email (with altool), and so I couldn’t script it directly. Now, at least, the new notarytool (which will shortly be mandatory) has a --wait flag.
    • Although the tool is documented under man notarytool, I actually have to run it as xcrun notarytool, even though codesign can be run either directly or via xcrun codesign.
    • Great, we’re ready to zip up our app and submit to Apple. Wait, they’re rejecting it? Why???
    • Aah, I need to manually copy and paste the UUID in the console output of xcrun notarytool submit into xcrun notarytool log to get some JSON that has some error messages embedded in it.
    • Oh. The bundle contains internal symlinks, so when I zipped it without the -y option, I got a corrupt archive.
    • Great, resubmitted with zip -y.
    • Oops, just kidding, that only works sometimes. Later, a different submission with a different hash will fail, and I’ll learn that the correct command line is actually ditto -c -k --sequesterRsrc --keepParent MyApp.app MyApp.app.zip.
      • Note that, for extra entertainment value, the positions of the archive and the source directory are reversed on the command line relative to zip (and tar, and every other archive tool).
    • notarytool doesn’t record anything in my app though; it puts the “notarization ticket” on Apple's servers. Apparently, I still need to run stapler for users to be able to launch it while those servers are inaccessible, like, for example, if they’re offline.
    • Oops, not stapler. xcrun stapler. Whatever.
    • Except notarytool operates on a zip archive, but stapler operates on an app bundle. So we have to save the original app bundle, run stapler on it, then re-archive the whole thing into a new archive.
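
Strung together, the whole ritual ends up looking roughly like this sketch (the credentials are placeholders; notarytool can also read them from a keychain profile created with xcrun notarytool store-credentials):

# Illustrative sketch of the archive -> notarize -> staple -> re-archive dance.
import subprocess

APP = "dist/MyApp.app"
ZIP = "dist/MyApp.app.zip"

def run(*argv: str) -> None:
    subprocess.run(argv, check=True)

def archive() -> None:
    # ditto, not zip: note the source comes first and the archive last.
    run("ditto", "-c", "-k", "--sequesterRsrc", "--keepParent", APP, ZIP)

archive()
run(
    "xcrun", "notarytool", "submit", ZIP,
    "--apple-id", "you@example.com",
    "--team-id", "TEAMID1234",
    "--password", "app-specific-password",
    "--wait",
)
# The notarization ticket lives on Apple's servers; staple it to the original
# bundle so the app still launches offline, then archive again to distribute.
run("xcrun", "stapler", "staple", APP)
archive()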

Hooray! Time to release my great app!

  • Whoops, just got a bug report that it crashes immediately on every Intel mac. What’s going on?
  • Turns out I’m using a library whose authors distribute both aarch64 and x86_64 wheels; pip will prefer single-architecture wheels even if universal2 wheels are also available, so I’ve got to somehow get fat binaries put together. Am I going to have to build a huge pile of C code by myself? I thought all these Python hassles would at least let me avoid the C hassles!
  • Whew, okay, no need for that: there’s an amazing Swiss-army knife for macOS binary wheels, called delocate, which includes a delocate-fuse tool that can fuse two wheels together. So I just need to figure out which binaries are the wrong architecture and somehow install my fixed/fused wheels before building my app with py2app.

    • Except, oops, this tool just rewrites the file in-place without even changing its name, so I have to write some janky shell scripts to do the renaming and reinstallation myself (a rough sketch follows this list). Ugh.
  • OK now that all that is in place, I just need to re-do all the steps:

    • universal2-ize my virtualenv!
    • build!
    • sign!
    • archive!
    • notarize!
    • wait!!!
    • staple!
    • re-archive!
    • upload!
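
Those janky scripts do something like the following sketch; it leans on the in-place-rewrite behavior described above, and the wheel file names are purely illustrative:

# Illustrative sketch: fuse a pair of single-architecture wheels into a
# universal2 one. Assumes (per the behavior described above) that delocate-fuse
# rewrites its first argument in place without renaming it.
import subprocess
from pathlib import Path

def fuse_to_universal2(arm64_wheel: str, x86_64_wheel: str) -> Path:
    subprocess.run(["delocate-fuse", arm64_wheel, x86_64_wheel], check=True)
    fused = Path(arm64_wheel)
    # The file now contains both architectures, but its name still claims
    # arm64; rename it so pip will treat it as universal2.
    renamed = fused.with_name(fused.name.replace("arm64", "universal2"))
    fused.rename(renamed)
    return renamed

universal = fuse_to_universal2(
    "wheels/somelib-1.0-cp311-cp311-macosx_11_0_arm64.whl",
    "wheels/somelib-1.0-cp311-cp311-macosx_11_0_x86_64.whl",
)
# Then force the fused wheel into the virtualenv before running py2app.
subprocess.run(["pip", "install", "--force-reinstall", str(universal)], check=True)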

And we have an application bundle we can ship to users.

It’s just that easy.

As long as I don’t need sandboxing or Mac App Store distribution, of course. That’s a challenge for another day.


So, that was terrible. But what should be happening here?

Some of this is impossible to simplify beyond a certain point - many of the things above are not really about Python, but are about distribution requirements for macOS specifically, and we in the Python community can’t affect operating system vendors’ tooling.

What we can do is build tools that produce clear guidance on what step is required next, handle edge cases on their own, and generally guide users through these complex processes without making them hit weird binary-format or cryptographic-signing errors with no explanation of what to do next.

I do not think that documentation is the answer here. The necessary steps should be discoverable. If you need to go to a website, the tool should use the webbrowser module to open a website. If you need to launch an app, the tool should launch that app.
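
To make the shape of this concrete, a guided step in such a tool might look something like this sketch, built only out of things already mentioned above (the function name and messages are made up for illustration):

# Hypothetical sketch of a "guided" step: check whether a Developer ID signing
# identity exists, and if not, send the user to the place where they can get
# one, rather than failing later with a cryptic signing error.
import subprocess
import webbrowser

def find_signing_identity() -> str | None:
    result = subprocess.run(
        ["security", "find-identity", "-v", "-p", "codesigning"],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        if "Developer ID Application" in line:
            return line.strip()
    return None

identity = find_signing_identity()
if identity is None:
    print("No 'Developer ID Application' certificate found.")
    print("Opening the Apple Developer site so you can create one...")
    webbrowser.open("https://developer.apple.com/account/")
else:
    print(f"Will sign with: {identity}")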

With Encrust, I am hoping to generalize the solutions that I found while working on this for this one specific slice of the app distribution pipeline — i.e. a macOS desktop application, distributed independently and not through the Mac App Store — but other platforms will need the same treatment.

However, even without really changing py2app or any of the existing tooling, we could imagine a tool that would interactively prompt the user for each manual step, automate as much of it as possible, verify that it was performed correctly, and give comprehensible error messages if it was not.

For a lot of users, this full code-signing journey may not be necessary; if you just want to run your code on one or two friends’ computers, telling them to right click, go to ‘open’ and enter their password is not too bad. But it may not even be clear to them what the trade-off is, exactly; it looks like the app is just broken when you download it. The app build pipeline should tell you what the limitations are.

Other parts of this just need bug-fixes to address. py2app specifically, for example, could have a better self-test for its module-collecting behavior, launching an app to make sure it didn’t leave anything out.

Interactive prompts to set up a Homebrew tap, or a Flatpak build, or a Microsoft Store Metro app, might be similarly useful. These all have outside-of-Python required manual steps, and all of them are also amenable to at least partial automation.


Thanks to my patrons for supporting this sort of work, including development of Encrust, of Pomodouroboros, of posts like this one and of that offer to sign other people’s apps. If you think this sort of stuff is worthwhile, you might want to consider supporting my work as well.


  1. I am not even going to try to describe building a sandboxed, app-store ready application yet. 

  2. At least according to the Steam Hardware Survey, which as of this writing in March of 2023 pegs the current user-base at 54% apple silicon and 46% Intel. The last version I can convince the Internet Archive to give me, from December of 2022, has it closer to 51%/49%, which suggests a transition rate of 1% per month. I suspect that this is pretty generous to Apple Silicon as Steam users would tend to be earlier adopters and more sensitive to performance, but mostly I just don’t have any other source of data. 

  3. It is truly remarkable how bad the error reporting from the notarization service is. There are dozens of articles and forum posts around the web like this one where someone independently discovers this failure mode after successfully notarizing a dozen or so binaries and then suddenly being unable to do so any more because one of the bytes in the signature is suddenly not valid UTF-8 or something. 

  4. A lot of this is probably historical baggage; I started with py2app in 2008 or so, and I have been working on these apps in fits and starts for… ugh… 15 years. At some point when things are humming along and there are actual users, a more comprehensive retrofit of the build process might make sense, but right now I just want to stop thinking about this. 

  5. If your application isn’t open source, or if it requires some porting work, I’m also available for light contract work, but it might take a while to get on my schedule. Feel free to reach out as well, but I am not looking to spend a lot of time doing porting work. 

  6. I find this particular detail interesting; it speaks to the complexity and depth of this problem space that this has been a known issue for several years in Briefcase, but there’s just so much other stuff to handle in the release pipeline that it remains open. 

  7. I forgot both .a files and the py2app-included python executable itself here, and had to discover that gap when I signed a different app where that made a difference. 

  8. Thus far, it seems to be working. 

Data Classification

Does Python still have a need for class without @dataclass?

Is there a place for non-@dataclass classes in Python any more?

I have previously — and somewhat famously — written favorably about @dataclass’s venerable progenitor, attrs, and how you should use it for pretty much everything.

At the time, attrs was an additional dependency, a piece of technology that you could bolt on to your Python stack to make your particular code better. While I advocated for it strongly, there are all the usual implicit reasons against using a new thing. It was an additional dependency, it might not interoperate with other convenience mechanisms for type declarations that you were already using (i.e. NamedTuple), it might look weird to other Python programmers familiar with existing tools, and so on. I don’t think that any of these were good counterpoints, but there was nevertheless a robust discussion to be had in addressing them all.

But for many years now, dataclasses have been — and currently are — built in to the language. They are increasingly integrated to the toolchain at a deep level that is difficult for application code — or even other specialized tools — to replicate. Everybody knows what they are. Few or none of those reasons apply any longer.

For example, classes defined with @dataclass are now optimized as a C structure might be when you compile them with mypyc, a trick that is extremely useful in some circumstances, which even attrs itself now has trouble keeping up with.

This all raises the question for me: beyond backwards compatibility, is there any point to having non-@dataclass classes any more? Is there any remaining justification for writing them in new code?

Consider my original example, translated from attrs to dataclasses. First, the non-dataclass version:

class Point3D:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

And now the dataclass one:

from dataclasses import dataclass

@dataclass
class Point3D:
    x: int
    y: int
    z: int

Many of my original points still stand. It’s still less repetitive. In fewer characters, we’ve expressed considerably more information, and we get more functionality: a useful repr and equality comparison by default, plus sorting and hashing if you opt in with flags like order=True and frozen=True. There doesn’t seem to be much of a downside besides the strictness of the types, and if typing.Any were a builtin, x: any would be fine for those who don’t want to unduly constrain their code.
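
To make that concrete, here’s roughly what that buys you (an illustrative interactive session; FrozenPoint3D is just a name I’m using for the opted-in variant):

>>> Point3D(1, 2, 3)
Point3D(x=1, y=2, z=3)
>>> Point3D(1, 2, 3) == Point3D(1, 2, 3)
True
>>> @dataclass(frozen=True, order=True)
... class FrozenPoint3D:
...     x: int
...     y: int
...     z: int
...
>>> sorted([FrozenPoint3D(3, 2, 1), FrozenPoint3D(1, 2, 3)])[0]
FrozenPoint3D(x=1, y=2, z=3)
>>> {FrozenPoint3D(1, 2, 3)}  # hashable, so usable in sets and as dict keys
{FrozenPoint3D(x=1, y=2, z=3)}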

The one real downside of the latter over the former right now is the need for an import. Which, at this point, just seems… confusing? Wouldn’t it be nicer to be able to just write this:

class Point3D:
    x: int
    y: int
    z: int

and not need to faff around with decorator semantics and fudging the difference between Mypy (or Pyright or Pyre) type-check-time and Mypyc or Cython compile time? Or even better, to not need to explain the complexity of all these weird little distinctions to new learners of Python, and not to have to cover import before class?

These tools all already treat the @dataclass decorator as a totally special language construct, not really like a decorator at all, so to really explore it you have to explain a special case and then a special case of a special case. The extension hook for this special case of the special case notwithstanding.

If we didn’t want any new syntax, we would need a from __future__ import dataclassification or some such for a while, but this doesn’t seem like an impossible bar to clear.


There are still some folks who don’t like type annotations at all, and there’s still the possibility of awkward implicit changes in meaning when transplanting code from a place with dataclassification enabled to one without, so perhaps an entirely new unambiguous syntax could be provided. One that more closely mirrors the meaning of parentheses in def, moving inheritance (a feature which, whether you like it or not, is clearly far less central to class definitions than ‘what fields do I have’) off to its own part of the syntax:

data Point3D(x: int, y: int, z: int) from Vector:
    def method(self):
        ...

which, for the “I don’t like types” contingent, could reduce to this in the minimal case:

data Point3D(x, y, z):
    pass

Just thinking pedagogically, I find it super compelling to imagine moving from teaching def foo(x, y, z):... to data Foo(x, y, z):... as opposed to @dataclass class Foo: x: int....

I don’t have any desire for semantic changes to accompany this, just to make it possible for newcomers to ignore the circuitous historical route of the @dataclass syntax and get straight into defining their own types with legible reprs from the very beginning of their Python journey.

(And make it possible for me to skip a couple of lines of boilerplate in short examples, as a bonus.)


I’m curious to know what y’all think, though. Shoot me an email or a toot and let me know.

In particular:

  1. Do you think there’s some reason I’m missing why Python’s current method for defining classes via a bunch of dunder methods is still better than dataclasses, or should stick around into the future for reasons beyond “compatibility”?
  2. Do you think “compatibility” is sufficient reason to keep the syntax the way it is forever, and I’m underestimating the cost of adding a keyword like this?
  3. If you do think that a change should be made, would you prefer:
    1. changing the meaning of class itself via a __future__ import,
    2. a new data keyword like the one I’ve proposed,
    3. a new keyword that functions exactly like the one I have proposed, except you’d really like to bikeshed the word data a bunch first,
    4. something more incremental like just putting dataclass and field in builtins,
    5. or an option I haven’t even contemplated here?

If I find I’m not alone in this perhaps I will wander over to the Python discussion boards to have a more substantive conversation...


Thank you to my patrons who are helping me while I try to turn… whatever this is… along with open source maintenance and application development, into a real job. Do you want to see me pursue ideas like this one further? If so, you can support my work as a sponsor!