Bilithification

Not sure how to do microservices? Split your monolith in half.

Several years ago at O’Reilly’s Software Architecture conference, in a comprehensive talk on refactoring, “Technical Debt: A Masterclass”, r0ml1 presented a concept that I think deserves to be highlighted.

If you have access to O’Reilly Safari, I think the video is available there, or you can get the slides here. It’s well worth watching in its own right. The talk contains a lot of hard-won wisdom from a decades-long career, but in slides 75-87, he articulates a concept that I believe resolves the perennial pendulum-swing between microservices and monoliths that we see in the Software as a Service world.

I will refer to this concept as “the bilithification strategy”.

Background

Personally, I have long been a microservice skeptic. I would generally articulate this skepticism in terms of “YAGNI”.

Here’s the way I would advise people asking about microservices before encountering this concept:

Microservices are often adopted by small teams due to their advertised benefits. Advocates from very large organizations—ones that have been very successful with microservices—frequently give talks claiming that microservices are more modular, more scalable, and more fault-tolerant than their monolithic progenitors. But these teams rarely appreciate the costs, particularly the costs for smaller orgs. Specifically, there is a marginal operational cost to each new service, and a fairly large fixed operational overhead to the infrastructure for an organization deploying microservices at all.

With a large enough team, the operational cost is easy to absorb. As the overhead is fixed, it trends towards zero as your total team size and system complexity trend towards infinity. Also, in very large teams, the enforced isolation of components in separate services reduces complexity. It does so by intentionally causing the software architecture to mirror the organizational structure of the team that deploys it. This — at the cost of increased operational overhead and decreased efficiency — allows independent parts of the organization to make progress independently, without blocking on each other. Therefore, in smaller teams, as you’re building, you should bias towards building a monolith until the complexity costs of the monolith become apparent. Then you should build the infrastructure to switch to microservices.

I still stand by all of this. However, it’s incomplete.

What does it mean to “switch to microservices”?

The biggest thing that this advice leaves out is a clear understanding of the “micro” in “microservice”. In this framing, I’m implicitly understanding “micro” services to be services that are too small — or at least, too small for your team. But if they do work for large organizations, then at some point, you need to have them. This leaves open several questions:

  • What size is the right size for a service?
  • When should you split your monolith up into smaller services?
  • Wait, how do you even measure “size” of a service? Lines of code? Gigabytes of memory? Number of team members?

Faced with a specific situation, I could probably look at these questions and make suggestions as to the appropriate course of action, but that would be based largely on vibes: a lot of drawing on complex experiences, not a repeatable pattern that a team could apply on their own.

We can be clear that you should always start with a monolith. But what should you do when that’s no longer working? How do you even tell when it’s no longer working?

Bilithification

Every codebase begins as a monolith. That is, a single (mono) rock (lith). Here’s what it looks like.

a circle with the word “monolith” on it

Let’s say that the monolith, and the attendant team, is getting big enough that we’re beginning to consider microservices. We might now ask, “what is the appropriate number of services to split the monolith into?” and that could provoke endless debate even among a team with total consensus that it might need to be split into some number of services.

Rather than beginning with the premise that there is a correct number, we may observe instead that splitting the service into N services, where N is more than one, may be accomplished by splitting the service in half N-1 times: each split increases the number of services by exactly one.

So let’s bi (two) lithify (rock) this monolith, and take it from 1 to 2 rocks.

The task of splitting the service into two parts ought to be a manageable amount of work — two is a definitively finite number, as opposed to the infinite point-cloud of “microservices”. Thus, we should search, first, for a single logical seam along which we might cleave the monolith.

a circle with the word “monolith” on it and a line through it

In many cases—as in the specific case that r0ml gave—the easiest way to articulate a boundary between two parts of a piece of software is to conceptualize a “frontend” and a “backend”. In the absence of any other clear boundary, the question “does this functionality belong in the front end or the back end” can serve as a simple razor for separating the service.

Remember: the reason we’re splitting this software up is because we are also splitting the team up. You need to think about this in terms of people as well as in terms of functionality. What division between halves would most reduce the number of lines of communication, to reduce the quadratic increase in required communication relationships that comes along with the linear increase in team size? Can you identify two groups who need to talk amongst themselves, but do not need to talk with all of each other?2
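
For a concrete back-of-the-envelope illustration: a 16-person team has 16 × 15 ÷ 2 = 120 potential pairwise communication links, while two 8-person teams that coordinate through a single cross-team link have only 2 × (8 × 7 ÷ 2) + 1 = 57.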

two circles with the word “hemilith” on them and a double-headed arrow
between them

Once we’ve achieved this separation, we no longer have a single rock; we have two half-rocks: hemiliths, to borrow from the same Greek root that gave us “monolith”.

But we are not finished, of course. Two may not be the correct number of services to end up with. Now, we ask: can we split the frontend into a frontend and backend? Can we split the backend? If so, then we now have four rocks in place of our original one:

four circles with the word “tetartolith” on them and double-headed arrows
connecting them all

You might think that this would be a “tetralith” for “four”, but as they are of a set, they are more properly a tetartolith.

Repeat As Necessary

Eventually, you’ll hit a point where you’re looking at a service and asking “what are the two pieces I could split this service into?”, and the answer will be “none, it makes sense as a single piece”. At that point, you will know that you’ve achieved services of the correct size.
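
To make the recursion concrete, here is a deliberately tiny toy sketch (my own illustration, not anything from r0ml’s talk), in which a “service” is just a set of responsibilities, and the seam-finding heuristic is a hypothetical stand-in for the hard human judgment described above:

def find_seam(service):
    # Pretend "frontend vs. backend" is the only seam we know how to find.
    front = frozenset(r for r in service if r.startswith("ui_"))
    back = service - front
    return (front, back) if front and back else None

def bilithify(service):
    seam = find_seam(service)
    if seam is None:        # no sensible split remains: this piece is the right size
        return [service]
    front, back = seam
    return bilithify(front) + bilithify(back)

print(bilithify(frozenset({"ui_login", "ui_cart", "billing", "inventory"})))
# e.g. [frozenset({'ui_login', 'ui_cart'}), frozenset({'billing', 'inventory'})]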

One thing about this insight that may disappoint some engineers is the realization that service-oriented architecture is much more an engineering management tool than it is an engineering tool. It’s fun to think that “microservices” will let you play around with weird technologies and niche programming languages consequence-free at your day job because those can all be “separate services”, but that was always a fantasy. Integrating multiple technologies is expensive, and introducing more moving parts always introduces more failure points.

Advanced Techniques: A Multi-Stack Microservice Environment

You’ll note that splitting a service heavily implies that the resulting services will still all be in the same programming language and the same tech stack as before. If you’re interested in deploying multiple stacks (languages, frameworks, libraries), you can proceed to that outcome via bilithification, but it is a multi-step process.

First, you have to complete the strategy that I’ve already outlined above. You need to get to a service that is sufficiently granular that it is atomic; you don’t want to split it up any further.

Let’s call that service “X”.

Second, you identify the additional complexity that would be introduced by using a different tech stack. It’s important to be realistic here! New technology always seems fun, and if you’re investigating this, you’re probably predisposed to think it would be an improvement. So identify your costs first and make sure you have them enumerated before you move on to the benefits.

Third, identify the concrete benefits to X’s problem domain that the new tech stack would provide.

Finally, do a cost-benefit analysis where you make sure that the costs from step 2 are clearly exceeded by the benefits from step 3. If you can’t readily identify that in advance (sometimes experimentation is required), then you need to treat this as an experiment, rather than as a strategic direction, until you’ve had a chance to answer whatever questions you have about the new technology’s benefits.

Note, also, that this cost-benefit analysis requires not only doing the technical analysis but getting buy-in from the entire team tasked with maintaining that component.

Conclusion

To summarize:

  1. Always start with a monolith.
  2. When the monolith is too big, both in terms of team and of codebase, split the monolith in half until it doesn’t make sense to split it in half any more.
  3. (Optional) Carefully evaluate services that want to adopt new technologies, and keep the costs of doing that in mind.

There is, of course, a world of complexity beyond this associated with managing the cost of a service-oriented architecture and solving specific technical problems that arise from that architecture.

If you remember the tetartolith, though, you should at least be able to get to the right number and size of services for your system.


Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from more specificity on the sort of insight you’ve seen here.


  1. AKA “my father” 

  2. Denouncing “silos” within organizations is so common that it’s a tired trope at this point. There is no shortage of vaguely inspirational articles across the business trade-rag web and on LinkedIn exhorting us to “break down silos”, but silos are the point of having an organization. If everybody needs to talk to everybody else in your entire organization, if no silos exist to prevent the necessity of that communication, then you are definitionally operating that organization at minimal efficiency. What people actually want when they talk about “breaking down silos” is a re-org into a functional hierarchy rather than a role-oriented hierarchy (i.e., “this is the team that makes and markets the Foo product, this is the team that makes and markets the Bar product” as opposed to “this is the sales team”, “this is the engineering team”). But that’s a separate post, probably. 

Post-PyCon-US 2023 Notes

Some stream of consciousness post-conference notes.

PyCon 2023 was last week, and I wanted to write some notes on it while the memory is fresh. Much of this was jotted down on the plane ride home and edited a few days later.

Health & Safety

Even after my smaller practice run at PyBay, it was a bit weird for me to be back around so many people, particularly since it was all indoors.

However, it was very nice that everyone took masking seriously. I personally witnessed very few violations of the masking rules, and they all seemed to be momentary, unintentional slip-ups after eating or drinking something.

As a result, I’ve now been home for 4 full days, I am COVID-negative, and I did not pick up any of the more generic con crud either. It’s really nice to be feeling healthy after a conference!

Overall Vibe

I was a bit surprised to find the conference much more overwhelming than I remembered it being. It’s been 4 years since my last PyCon; I was out of practice! It was also odd since last year was in person and at the same venue, so most folks had a sense of Salt Lake, and I really didn’t.

I think this was good, since I’ll remember this experience and have a fresher sense of what it feels like (at least a little bit) to be a new attendee next year.

The Schedule

I only managed to attend a few talks, but every one was excellent. In case you were not aware, un-edited livestream VODs of the talks are available with your online ticket, in advance of the release of the final videos on the YouTube channel, so if you missed these but attended the conference, you can still watch them.1

My Talk

My talk, “How To Keep A Secret”, seemed to be very well received.2

I got to talk to a lot of people who said they learned things from it. I had the idea to respond to audience feedback by asking “will you be doing anything differently as a result of seeing the talk?” and so I got to hear about which specific information was actually useful to help improve the audience’s security posture. I highly recommend this follow-up question to other speakers in the future.

As part of the talk, I released and announced 2 projects related to its topic of better security posture around secrets-management:

  • PINPal, a little spaced-repetition tool to help you safely rotate your “core” passwords, the ones you actually need to memorize.

  • TokenRing, a backend for the keyring module which uses a hardware token to require user presence for any secret access, by encrypting your vault and passwords as Fernet tokens.
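
The appeal of shipping this as a keyring backend is that calling code doesn’t change at all. Here’s a minimal sketch of the standard keyring API that a backend like TokenRing transparently slots into (the service and user names are hypothetical):

import keyring

# Stored and retrieved via whichever backend is active; with TokenRing
# configured, retrieval would require user presence on the hardware token.
keyring.set_password("example.com", "alice", "hunter2")
assert keyring.get_password("example.com", "alice") == "hunter2"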

I also called for donations to a local LGBT+ charity in Salt Lake City and made a small matching donation, to try to help the conference have a bit of a positive impact on the area’s trans population, given the extremely bigoted law passed by the state legislature in the run-up to the conference.

We raised $330 in total3, and I think other speakers were making similar calls. Nobody wanted any credit; everyone who got in touch and donated just wanted to help out.

Open Spaces

I went to a couple of open spaces that were really engaging and thought-provoking.

  • Hynek hosted one based on his talk (which is based on this blog post) where we explored some really interesting case-studies in replacing subclassing with composition.

  • There was a “web framework maintainers” open-space hosted by David Lord, which turned into a bit of a group therapy session amongst framework maintainers from Flask, Django, Klein (i.e. Twisted), and Sanic. I had a few key takeaways from this one:

    • We should try to keep our users in the loop with what is going on in the project. Every project should have a project blog so that users have a single point of contact.

      • It turns out Twisted does actually have one of these. But we should actually post updates to that blog so that users can see new developments; we have forgotten to even do that.

      • We should repeatedly drive users to those posts, from every communications channel: social media (Mastodon, Twitter), chat (Discord, IRC, Matrix, Gitter), or mailing lists. We should not be afraid to repeat ourselves a bit. We’re often afraid to spam our users, but there’s a lot of room between where we are now — i.e. “users never hear from us” — and spamming them.

    • We should regularly remind ourselves, and each other, that any work doing things like ticket triage, code review, writing for the project blog, and writing the project website are valuable work. We all kinda know this already, but psychologically it just feels like ancillary “stuff” that isn’t as real as the coding itself.

    • We should find ways to recognize contributions, especially the aforementioned less-visible stuff, like people who hang out in chat and patiently direct users to the appropriate documentation or support channels.

The Sprints

The sprints were not what I expected. I sat down thinking I’d be slogging through some Twisted org GitHub Actions breakage on Klein and Treq, but what I actually did was:

  • Requested an org on the recently-released PyPI “Organizations” feature, got it approved, and started adding a few core contributors.

  • Had some lovely conversations with PyCon and PSF staff about several potential projects that I think could really help the ecosystem. I don’t want to imply anyone has committed to anything here, so I’ll leave a description of exactly what those were for later.

  • Filed a series of issues against BeeWare™ Briefcase™ detailing exactly what I needed from Encrust that wasn’t already provided by Briefcase’s existing Mac support.

  • I also did much more than I expected on Pomodouroboros, including:

    • I talked to my first in-the-wild Pomodouroboros user, someone who started using the app early enough to get bitten by a Pickle data-migration bug and couldn’t upgrade! I’d forgotten that I’d released a version that modeled time as a float rather than a datetime.

    • Started working on a design with Moshe Zadka for integrations for external time-tracking services and task-management services.

    • I had the opportunity to review datetype with Paul Ganssle and explore options for integrating it with or recommending it from the standard library, to hopefully start to address both the datetime-shouldn’t-subclass-date problem and the how-do-you-know-if-a-datetime-is-timezone-aware problem.

  • Speaking of Twisted infrastructure maintenance, special thanks to Paul Kehrer, who noticed that pyasn1 was breaking Twisted’s CI, and submitted a PR to fix it. I finally managed to do a review a few days after the conference and that’s landed now.

Everything Else

I’m sure I’m forgetting at least half a dozen other meaningful interactions that I had; the week was packed, and I talked to lots of interesting people, as always.

See you next year in Pittsburgh!


  1. Go to your dashboard and click the “Join PyCon US 2023 Online Now!” button at the top of the page, then look for the talk on the “agenda” tab or the speaker in the search box on the right. 

  2. Talks like these and software like PINPal and TokenRing are the sorts of things that I hope to get support for from my Patreon, so please go there if you’d like to support my continuing to do this sort of work. 

  3. If you’d like to make that number bigger, I’ll do another $100 match on this blog post, and update that paragraph if I receive anything; just send the receipt to encircle@glyph.im. A reader sent in another matching donation and I made a contribution, so the total raised is now $530. 

No Executable Is An Island

Single-file executables are neither necessary nor sufficient to provide a good end-user software installation experience. They don’t work at all on macOS today, and they don’t really work well anywhere else either. The focus of Python packaging tool development ought to be elsewhere.

One of the perennial talking points in the Python packaging discourse is that it’s unnecessarily difficult to create a simple, single-file binary that you can hand to users.

This complaint is understandable. In most other programming languages, the first thing you sit down to do is to invoke the compiler, get an executable, and run it. Other, more recently created programming languages — particularly Go and Rust — have really excellent toolchains for doing this which eliminate a lot of classes of error during that build-and-run process. A single, dependency-free, binary executable file that you can run is an eminently comprehensible artifact, an obvious endpoint for “packaging” as an endeavor.

While Rust and Go are producing these artifacts as effectively their only output, Python has such a dizzying array of tooling complexity that even attempting to describe “the problem” takes many thousands of words, and even then one may never get around to fully describing the complexity of the issues involved. All the more understandable, then, that we should urgently add this functionality to Python.

A programmer might see Python produce wheels and virtualenvs which can break when anything in their environment changes, and see the complexity of that situation. Then, they see Rust produce a statically-linked executable which “just works”, and they see its toolchain simplicity. I agree that this shouldn’t be so hard, and some of the architectural decisions that make this difficult in Python are indeed unfortunate.

But then, I think, our hypothetical programmer gets confused. They think that Rust is simple because it produces an executable, and they think Python’s complexity comes from all the packaging standards and tools. But this is not entirely true.

Python’s packaging complexity, and indeed some of those packaging standards, arises from the fact that it is often used as a glue language. Packaging pure Python is really not all that hard. And although the tools aren’t included with the language and the DX isn’t as smooth, even packaging pure Python into a single-file executable is pretty trivial.
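
For example, here’s a minimal sketch using the standard library’s zipapp module; the directory name app/ and the cli:main entry point are hypothetical stand-ins for your own code, and the destination machine does still need a Python interpreter to run the result:

import zipapp

zipapp.create_archive(
    "app",                               # a directory containing cli.py with a main()
    target="app.pyz",                    # the single-file zip application to produce
    interpreter="/usr/bin/env python3",  # shebang line, so you can run ./app.pyz
    main="cli:main",                     # entry point within the archive
)

The same thing is available from the command line as python3 -m zipapp.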

But almost nobody wants to package a single pure-python script. The whole reason you’re writing in Python is because you want to rely on an enormous ecosystem of libraries, and some low but critical percentage of those libraries include things like their own statically-linked copies of OpenSSL or a few megabytes of FORTRAN code with its own extremely finicky build system you don’t want to have to interact with.

When you look aside to other ecosystems, while Python still definitely has some unique challenges, shipping Rust with a ton of FFI, or Go with a bunch of Cgo is substantially more complex than the default out-of-the-box single-file “it just works” executable you get at the start.1

Still, all things being equal, I think single-file executable builds would be nice for Python to have as a building block. It’s certainly easier to produce a package for a platform if your starting point is that you have a known-good, functioning single-file executable, and all you need to do is embed it in some kind of environment-specific envelope. Certainly if what you want is to copy a simple microservice executable into a container image, you might really want to have this rather than setting up what is functionally a full Python development environment in your Dockerfile. After team-wide philosophical debates over which virtual environment manager to use, those Golang Dockerfiles that seem to be nothing but the following 4 lines are really appealing:

FROM golang
COPY *.go ./
RUN go build -o /app
ENTRYPOINT ["/app"]

All things are rarely equal, however.

The issue that we, as a community, ought to be trying to address with build tools is to get the software into users’ hands, not to produce a specific file format. In my opinion, single-file binary builds are not a great tool for this. They’re fundamentally not how people, even quite tech-savvy programming people, find and manage their tools.

A brief summary of the problems with single-file distributions:

  1. They’re not discoverable. A single file linked on your website will not be found via something like brew search, apt search, choco search or searching in a platform’s GUI app store’s search bar.
  2. They’re not updatable. People expect their system package manager to update stuff for them. Standalone binaries might add their own updaters, but now you’re shipping a whole software-update system inside your binary. More likely, it’ll go stale forever while better-packaged software will be updated and managed properly.
  3. They have trouble packaging resources. Once you’ve got your code stuffed into a binary, how do you distribute images, configuration files, or other data resources along with it? This isn’t impossible to solve, but in other programming languages which do have a great single-file binary story, this problem is typically solved by third party tooling which, while it might work fine, will still generally exist in multiple alternative forms which adds its own complexity.

So while it might be a useful building-block that simplifies those annoying container builds a lot, it hardly solves the problem comprehensively.

If we were to build a big new tool, the tool we need is something that standardizes the input format to produce a variety of different complex, multi-file output formats, including things like:

  • deb packages (including uploading to PPA archives so people can add an apt line; a manual dpkg -i has many of the same issues as a single file)
  • container images (including the upload to a registry so that people can "$(shuf -n 1 -e nerdctl docker podman)" pull or FROM it)
  • Flatpak apps
  • Snaps
  • macOS apps
  • Microsoft store apps
  • MSI installers
  • Chocolatey / NuGet packages
  • Homebrew formulae

In other words, ask yourself, as a user of an application, how do you want to consume it? It depends what kind of tool it is, and there is no one-size-fits-all answer.


In any software ecosystem, if a feature is a building block which doesn’t fully solve the problem, that is an issue with the feature, but in many cases, that’s fine. We need lots of building blocks to get to full solutions. This is the story of open source.

However, if I had to take a crack at summing up the infinite-headed hydra of the Problem With Python Packaging, I’d put it like this:

Python has a wide array of tools which can be used to package your Python code for almost any platform, in almost any form, if you are sufficiently determined. The problem is that the end-to-end experience of shipping an application to end users who are not Python programmers2 for any particular platform has a terrible user experience. What we need are more holistic solutions, not more building blocks.3

This makes me want to push back against this tendency whenever I see it, and to try to point towards more efficient ways of achieving a good user experience, with the relatively scarce community resources at our collective disposal4. Efficiency isn’t exclusively about ideal outcomes, though; it’s the optimization of a cost/benefit ratio. In terms of benefits, single-file binaries offer relatively little, as I hope I’ve shown above.

Building a tool that makes arbitrary Python code into a fully self-contained executable is also very high-cost, in terms of community effort, for a bunch of reasons. For starters, in any moderately-large collection of popular dependencies from PyPI, at least a few of them are going to want to find their own resources via __file__, and you need to hack in a way to find those, which is itself error prone. Python also expects dynamic linking in a lot of places, and messing around with C linkers to change that around is a complex process with its own huge pile of failure modes. You need to do this on pre-existing binaries built with options you can’t necessarily control, because making everyone rebuild all the binary wheels they find on PyPI is a huge step backwards in terms of exposing app developers to confusing infrastructure complexity.

Now, none of this is impossible. There are even existing tools to do some of the scarier low-level parts of these problems. But one of the reasons that all the existing tools for doing similar things have folk-wisdom reputations and even official documentation expecting constant pain is that part of the project here is conducting a full audit of every usage of __file__ on PyPI and replacing it with some resource-locating API which we haven’t even got a mature version of yet5.
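
To illustrate the shape of that audit, here is a hedged before-and-after sketch (the package and file names are hypothetical); the second form can keep working in a packed environment, provided the packer implements the relevant importlib loader APIs:

# Before: assumes the package is a real directory of files on disk; this
# breaks as soon as the code is packed into a zip or a single-file binary.
import os
here = os.path.dirname(os.path.abspath(__file__))
template = open(os.path.join(here, "data", "template.html")).read()

# After (Python 3.9+): ask the import system for the resource instead.
from importlib.resources import files
template = (files("mypkg") / "data" / "template.html").read_text()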

Whereas copying all the files into the right spots in an archive file that can be directly deployed to an existing platform is tedious, but only moderately challenging. It usually doesn’t involve fundamentally changing how the code being packaged works, only where it is placed.

To the extent that we have a choice between “make it easy to produce a single-file binary without needing to understand the complexities of binaries” and “make it easy to produce a Homebrew formula / Flatpak build / etc. without the user needing to understand Homebrew / Flatpak / etc.”, we should always choose the latter.


If this is giving you déjà vu, I’ve gestured at this general concept more vaguely in a few places, including tweeting6 about it in 2019, saying vaguely similar stuff.

Everything I’ve written here so far is debatable.

You can find that debate both in replies to that original tweet and in various other comments and posts elsewhere where I’ve grumbled about this. I still don’t agree with that criticism, but there are very clever people working on complex tools which are slowly gaining popularity and might be making the overall packaging situation better.

So while I think we should in general direct efforts more towards integrating with full-featured packaging standards, I don’t want to yuck anybody’s yum when it comes to producing clean single-file executables in general. If you want to build that tool and it turns out to be a more useful building block than I’m giving it credit for, knock yourself out.

However, in addition to having a comprehensive write-up of my previously-stated opinions here, I want to impart a more concrete, less debatable issue. To wit: single-file executables as a distribution mechanism, specifically on macOS, are not merely sub-optimal, but a complete waste of time.


Late last year, Hynek wrote a great post about his desire for, and experience of, packaging a single-file binary for multiple platforms. This should serve as an illustrative example of my point here. I don’t want to pick on Hynek; prominent Python programmers wish for this all the time. In fact, Hynek also did the thing I said is a good idea here, and created a Homebrew tap, and that’s the one the README recommends.

So since he kindly supplied a perfect case-study of the contrasting options, let’s contrast them!

The first thing I notice is that the Homebrew version is Apple Silicon native, whereas the single-file binary is still x86_64. The brew build and test infrastructure apparently deals with architectural differences (probably pretty easily, given that it can use Homebrew’s existing Python build), but the more hand-rolled PyOxidizer setup builds only for the host platform, which in this case is still an Intel Mac, thanks to GitHub dragging their feet.

The second is that the Homebrew version runs as I expect it to. I run doc2dash in my terminal and I see Usage: doc2dash [OPTIONS] SOURCE, as I should.

So, A+ on the Homebrew tap. No notes. I did not have to know anything about Python being in the loop at all here; it “just works” like every Ruby, Clojure, Rust, or Go tool I’ve installed with the same toolchain.

Over to the single-file brew-less version.

Beyond the architecture being emulated, and having to download Rosetta 2 first,8 I have to note that this “single file” binary already comes in a zip file, since it needs to include the license in a separate file.7 Now that it’s unarchived, I have some choices to make about where to put it on my $PATH. But let’s ignore that for now and focus on the experience of running it. I fire up a terminal, and run cd Downloads/doc2dash.x86_64-apple-darwin/ and then ./doc2dash.

Now we hit the more intractable problem:

a gatekeeper launch-refusal dialog box

The executable does not launch because it is neither code-signed nor notarized. I’m not going to go through the actual demonstration here, because you already know how annoying this is, and also, you can’t actually do it.

Code-signing is more or less fine. The codesign tool will do its thing, and that will change the wording in the angry dialog box from something about an “unidentified developer” to being “unable to check for malware”, which is not much of a help. You still need to notarize it, and notarization can’t work.

macOS really wants your executable code to be in a bundle (i.e., an App) so that it can know various things about its provenance and structure. CLI tools are expected to be in the operating system, or managed by a tool like brew that acts like a sort of bootleg secondary operating-system-ish thing and knows how to manage binaries.

If it isn’t in a bundle, then it needs to be in a platform-specific .pkg file, which is installed with the built-in Installer app. This is because Apple cannot notarize a stand-alone binary executable file.

Part of the notarization process involves stapling an external “notarization ticket” to your code, and if you’ve only got a single file, it has nowhere to put that ticket. You can’t even submit a stand-alone binary; you have to package it in a format that is legible to Apple’s notarization service, which for a pure-CLI tool, means a .pkg.
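
For reference, the submission step looks something like the following (the paths and keychain profile name are hypothetical, and assume credentials previously saved with xcrun notarytool store-credentials); note that submission accepts a .pkg, .zip, or .dmg, never a bare executable:

xcrun notarytool submit dist/doc2dash.pkg --keychain-profile "notary" --wait
xcrun stapler staple dist/doc2dash.pkg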

What about corporate distributions of proprietary command-line tools, like the 1Password CLI? Oh look, their official instructions tell you to use their Homebrew formula too. Homebrew really is the standard developer-CLI platform for macOS at this point. When 1Password distributes stuff outside of Homebrew, as with their beta builds, it lives in a .pkg as well.


It is possible to work around all of this.

I could open the unzipped file, right-click on the CLI tool, go to “Open”, get a subtly differently worded error dialog, like this…

a gatekeeper open-right-click dialog box

…watch it open Terminal for me and then exit, and then wait multiple seconds for it to run each time I want to re-run it at the command line. Did I mention that part? The single-file option takes 2-3 seconds doing who-knows-what (maybe some kind of security check, maybe PyOxidizer overhead, I don’t know), but the Homebrew version starts imperceptibly fast.

Also, I then need to manually re-do this process in the GUI every time I want to update it.

If you know about the magic of how this all actually works, you can also do xattr -d com.apple.quarantine doc2dash by hand, but I feel like xattr -d is a step lower down in the user-friendliness hierarchy than python3 -m pip install9, and not only because making a habit of clearing quarantine attributes manually is a little like cutting the brake lines on Gatekeeper’s ability to keep out malware.

But the point of distributing a single-file binary is to make it “easy” for end users, and is explaining Gatekeeper’s integrity verification to them accomplishing that goal?


Apple’s effectively mandatory code-signing verification on macOS is far out ahead of other desktop platforms right now, both in terms of its security and in terms of its obnoxiousness. But every mobile platform is like this, and I think that as everyone gets more and more panicked about malicious interference with software delivery, we’re going to see more and more official requirements that software must come packaged in one of these containers.

Microsoft will probably fix their absolute trash-fire of a codesigning system one day too. I predict that something vaguely like this will eventually come to most Linux distributions as well: not necessarily a prohibition on individual binaries like this, or a GUI launch-prevention tool, but some sort of requirement imposed by the OS that every binary file be traceable to some sort of package, maybe enforced with some sort of annoying AppArmor profile if you don’t comply.

The practical, immediate message today is: “don’t bother producing a single-file binary for macOS users, we don’t want it and we will have a hard time using it”. But the longer term message is that focusing on creating single-file binaries is, in general, skating to where the puck used to be.

If we want Python to have a good platform-specific distribution mechanism for every platform, so it’s easy for developers to get their tools to users without having to teach all their users a bunch of nonsense about setuptools and virtualenvs first, we need to build that, and not get hung up on making a single-file executable packer a core part of the developer experience.


Thanks very much to my patrons for their support of writing like this, and software like these.

Oh, right. This is where I put the marketing “call to action”. Still getting the hang of these.

Did you enjoy this post and want me to write more like it, and/or did you hate it and want the psychological leverage and moral authority to tell me to stop and do something else? You can sign up here!


  1. I remember one escapade in particular where someone had to ship a bunch of PKCS#11 providers along with a Go executable in their application and it was, to put it lightly, not a barrel of laughs. 

  2. Shipping to Python programmers in a Python environment is kind of fine now, and has been for a while. 

  3. Yet, even given my advance awareness of this criticism, despite my best efforts, I can’t seem to stop building little tools that poorly solve only one problem in isolation. 

  4. And it does have to be at our collective disposal. Even the minuscule corner of this problem I’ve worked on, the aforementioned Mac code-signing and notarization stuff, is tedious and exhausting; nobody can take on the whole problem space, which is why I find writing about it such an important part of tackling the problem. Thanks Pradyun, and everyone else who has written about this at length! 

  5. One of the sources of my anti-single-file stance here is that I tried very hard, for many years, to ensure that everything in Twisted was carefully zipimport-agnostic, even before pkg_resources existed, by using the from twisted.python.modules import getModule, getModule(__name__).filePath.sibling(...) idiom, where that .filePath attribute might be anything FilePath-like, specifically including ZipPath. It didn’t work; fundamentally, nobody cared, other contributors wouldn’t bother to enforce this, or even remember that it might be desirable, because they’d never worked in an environment where it was. Today, a quick git grep __file__ in Twisted turns up tons of usages that will make at least the tests fail to run in a single-file or zipimport environment. Despite the availability of zipimport itself since 2001, I have never seen tox or a tool like it support running with a zipimport-style deployment to ensure that this sort of configuration is easily, properly testable across entire libraries or applications. If someone really did want to care about single-file deployments, fixing this problem comprehensively across the ecosystem is probably one of the main things to start with, beginning with an international evangelism tour for importlib.resources. 

  6. This is where the historical document is, given that I was using it at the time, but if you want to follow me now, please follow me on Mastodon. 

  7. Oops, I guess we might as well not have bothered to make a single-file executable anyway! Once you need two files, you can put whatever you want in the zip file... 

  8. Just kidding. Of course it’s installed already. It’s almost cute how Apple shows you the install progress to remind you that one day you might not need to download it whenever you get a new mac. 

  9. There’s still technically a Python included in Xcode and the Xcode CLT, so functionally Macs do have a /usr/bin/python3 that is sort of a python3.9. You shouldn’t really use it; instead, download the Python installer from python.org. But you should probably use it before you start disabling code integrity verification everywhere. 

A Response to Jacob Kaplan-Moss’s “Incompetent But Nice”

What can managers do about employees who are easy to work with, and are trying their best, but can’t seem to get the job done?

Jacob Kaplan-Moss has written a post about one of the most stressful things that can happen to you as a manager: when someone on your team is getting along well with the team, apparently trying their best, but unable to do the work. Incompetent, but nice.

I have some relevant experience with this issue. On more than one occasion in my career:

  1. I have been this person, more than once. I have both resigned, and been fired, as a result.
  2. I’ve been this person’s manager, and had to fire them after being unable to come up with a training plan that would allow them to improve.
  3. I’ve been an individual contributor on a team with this person, trying to help them improve.

So I can speak to this issue from all angles, and I can confirm: it is gut-wrenchingly awful no matter where you are in relation to it. It is social stress in its purest form. Everyone I’ve been on some side of this dynamic with is someone that I’d love to work with again. Including the managers who fired me!

Perhaps most of all, since I am not on either side of an employer/employee relationship right now1, I have some emotional distance from this kind of stress which allows me to write about it in a more detached way than others with more recent and proximate experience.

As such, I’d like to share some advice, in the unfortunate event that you find yourself in one of these positions and are trying to figure out what to do.

I’m going to speak from the perspective of the manager here, because that’s where most of the responsibility for decision-making lies. If you’re a teammate, you probably need to wait for a manager to ask you to do something; if you’re the underperformer here, you’re probably already trying as hard as you can to improve. But hopefully this reasoning process can help you understand what the manager is trying to do, and find the bits of the process you can take some initiative to help with.

Step 0: Preliminaries

First let’s lay some ground rules.

  1. Breathe. Maintaining an explicit focus on regulating your own mood is important, regardless of whether you’re the manager, the teammate, or the underperformer.
  2. Accept that this may be intractable. You’re going to do your best in this situation but you are probably choosing between bad options. Nevertheless you will need to make decisions as confidently and quickly as possible. Letting this situation drag on can be a recipe for misery.
  3. You will need to do a retrospective.2 Get ready to collect information as you go through the process, to try to identify what happened in detail so you can analyze it later. If you are the hiring manager, that means that after you’ve got your self-compassion together and your equanimous professional mood locked in, you will also need to reflect on the fact that you probably fucked up here, and get ready to try to improve your own skills and processes so that you don’t end up in this situation again.

I’m going to try to pick up where Jacob left off, and skip over the “easy” parts of this process. As he puts it:

The proximate answer is fairly easy: you try to help them level up: pay for classes, conferences, and/or books; connect them with mentors or coaches; figure out if something’s in the way and remove the blocker. Sometimes folks in this category just need to learn, and because they’re nice it’s easy to give them a lot of support and runway to level up. Sometimes these are folks with things going on in their lives outside work and they need some time (or some leave) to focus on stuff that’s more important than work. Sometimes the job has requirements that can be shifted or eased or dropped – you can match the work to what the person’s good at. These situations aren’t always easy but they are simple: figure out the problem and make a change.

Step 1: Figuring Out What’s Going On

There are different reasons why someone might underperform.

Possibility: The person is over-leveled.

This is rare. For the most part, pervasive under-leveling is more of a problem in the industry. But, it does happen, and when it happens, what it looks like is someone who is capable of doing some of the work that they’re assigned, but getting stuck and panicking with more challenging or ambiguous assignments.

Moreover, this particular organizational antipattern, the “nice but incompetent” person, is more likely to be over-leveled, because if they’re friendly and getting along with the team, then it’s likely they made a good first impression and made the hiring committee want to go to bat for them as well, and may have done them the “favor” of giving them a higher level. This is something to consider in the retrospective.

Now, Jacob’s construction of this situation explicitly allows for “leveling up”, but the sort of over-leveling that can be resolved with a couple of conference talks, books, and mentoring sessions is just a challenge. We’re talking about persistent resistance to change here, and that sort of over-leveling means that they have skipped an entire rung on the professional development ladder, and do not yet have the professional experience to bridge the gap between where they are and the level they’ve been assigned.

If this is the case, consider a demotion. Identify the aspects of the work that the underperformer can handle, and try to match them to a role. If you are yourself the underperformer, proactively identifying what you’re actually doing well at and acknowledging the work you can’t handle, and identifying any other open headcount for roles at that level can really make this process easier for your manager.

However, be sure to be realistic. Are they capable enough to work at that reduced level? Does your team really have a role for someone at that level? Don’t invent makework in the hopes that you can give them a bootleg undergraduate degree’s worth of training on the job; they need to be able to contribute.

Possibility: Undiagnosed health issues

Jacob already addressed the “easy” version here: someone is struggling with an issue that they know about and they can ask for help, or at least you can infer the help they need from the things they’ve explicitly said.

But the underperformer might have something going on which they don’t realize is an issue. Or they might know there’s an issue but not think it’s serious, or not be able to find a diagnosis or treatment. Most frequently, this is a mental health problem, but it can also be things like unexplained fatigue.

This possibility is the worst. Not only do you feel like a monster for adding insult to injury, there’s also a lot of risk in discussing it.

Sometimes, you feel like you can just tell3 that somebody is suffering from a particular malady. It may seem obvious to you. If you have any empathy, you probably want to help them. However, you have to be careful.

First, some illnesses qualify as disabilities. Just because the employee has not disclosed their disability to you, does not mean they are unaware. It is up to the employee whether to tell you, and you are not allowed to require them to disclose anything to you. They may have good reasons for not talking about it.

Beyond highlighting some relevant government policy, I am not equipped to advise you on how to handle this. You probably want to loop in someone from HR and/or someone from Legal, and have a meeting to discuss the particulars of what’s happening and how you’d like to try to help.

Second, there’s a big power differential here. You have to think hard about how to broach the subject; you’re their boss, telling them that you think they’re too sick to work. They don’t necessarily agree, and that suggestion can quite reasonably be perceived as an attack and an insult, even if you’re correct. Hopefully the folks in Legal or HR can help you with some strategies here; again, I’m not really qualified to do anything but point at the risks involved and say “oh no”.

The “good” news here is that if this really is the cause, then there’s not a whole lot to change in your retrospective. People get sick, their families get sick, it can’t always be predicted or prevented.

Possibility: Burnout

While it is certainly a mental health issue in its own right, burnout is specifically occupational and you can thus be a bit more confident as a manager recognizing it in an employment context.

This is better to prevent than to address, but if you’ve got someone burning out badly enough to be a serious performance issue, you need to give them more leave than they think they need, and you need to do it quickly. A week or two off is not going to cut it.

In my experience, this is the most common cause of an earnest but agreeable employee underperforming, and it is also the one we are most reluctant to see. Each step on the road to burnout seems locally reasonable.

Just push yourself a little harder. Just ask for a little overtime. Just until this deadline; it’s really important. Just until we can hire someone; we’ve already got a req open for that role. Just for this one client.

It feels like we should be able to white-knuckle our way through “just” a little inconvenience. It feels that way both individually and collectively. But the impacts are serious, and they are cumulative.

There are two ways this can manifest.

Usually, it’s a gradual decline that you can see over time, and you’ll see this in an employee that was previously doing okay, but now can’t hack it.

However, another manifestation is someone who was burned out at their previous role, did not take any break between jobs, and has stepped into a moderately stressful role which could be a healthy level of challenge for someone refreshed and taking frequent enough breaks, but is too demanding for someone who needs to recover.

If that’s the case, and you feel like you accurately identified a promising candidate, it is really worthwhile to get that person the break that they need. Just based on vague back-of-the-envelope averages, it would typically be about as expensive to find a way to wrangle 8 weeks of extra leave as to go through the whole hiring process for a mid-career software engineer from scratch. And that math assumes that the morale cost of firing someone is zero, and that the morale benefit of being seen to actually care about your people and proactively treat them well is also zero.

If you can positively identify this as the issue, then you have a lot of work to do in the retrospective. People do not spontaneously burn out by themselves. This is a management problem, and this person is likely to be the first domino to fall. You may need to make some pretty big changes across your team.

Possibility: Persistent Personality Conflict

It may be that someone else on the team is actually the problem. If the underperformer is inconsistent, observe the timing of the inconsistencies: does it get much worse when they’re assigned to work closely with someone else? Note that “personality conflict” does not mean that the other person is necessarily an asshole; it is possible for communication styles to simply fail to mesh due to personality differences which cannot be easily addressed.

You will be tempted to try to reshuffle responsibilities to keep these team members further apart from each other.

Don’t.

People with a conflict that is persistently interfering in day-to-day work need to be on different teams entirely. If you attempt to separate them while keeping them on the same team, then inevitably one is going to get the higher-status projects and the other is going to be passed over for advancement. Or they’re going to end up drifting back into similar areas again.

Find a way to transfer them internally far enough away that they can have breathing room away from this conflict. If you can’t do that, then a firing may be your best option.

In the retrospective, it’s worth examining the entire team dynamic at this point, to see if the conflict is really just between two people, or if it’s more pervasive than that and other members of the team are just handling it better.

Step 2: Help By Being Kind, Not By Being Nice

Again, we’re already past the basics here. You’ve already got training and leave and such out of the way. You’re probably going to need to make a big change.

Responding to Jacob, specifically:

Firing them feels wrong; keeping them on feels wrong.

I think the right choice is heavily context-dependent. But, realistically, firing is the more likely option. You’ve got someone here who isn’t performing adequately, and you’ve already deployed all the normal tools to dig yourself out of that situation.

So let’s talk about that first.

Firing

Briefly: if you need to fire them, just fire them, and do it quickly.

Firing people sucks, even obnoxious people. Not to mention that this situation we’re in is about a person that you like! You’ll want to be nice to this person. The person is also almost certainly trying their best. It’s pretty hard to be agreeable to your team if you’re disappointing that team and not even trying to improve.

If you find yourself thinking “I probably need to fire this person but it’s going to be hard on them”, the thought “hard on them” indicates you are focused on trying to help them personally, and not what is best for your company, your team, or even the employee themselves professionally. The way to show kindness in that situation is not to keep them in a role that’s bad for them and for you.

It would be much better for the underperformer to find a role where they are not an underperformer, and at this point, that role is probably not on your team. Every minute that you keep them on in the bad role is a minute that they can’t spend finding a good one.

As such, you need to immediately shift gears towards finding them a soft landing that does not involve delaying whatever action you need to take.

Being kind is fine. It is not even a conflict to try to spend some company resources to show that kindness. It is in the best interest of all concerned that anyone you have to fire or let go is inclined to sing your praises wherever they end up. The best brand marketing in the world for your jobs page is a diaspora of employees who wish they could still be on your team.

But being nice here, being conflict-avoidant and agreeable, only drags out an unpleasant situation. The way to spend company resources on kindness is to negotiate with your management for as large a severance package as you can manage, to give as much runway as possible for their job search, and to be clear about what else you can do.

For example, are you usable as a positive reference? I.e., did they ever have a period where their performance was good, which you could communicate to future employers? Be clear.

Not-Firing

But while I think it’s the more likely option, it’s certainly not the only option. There are certainly cases where underperformers really can be re-situated into better roles, and this person could still find a good fit within the team, or at least, the company. You think you’ve solved the mystery of the cause of the problem here, and you need to make a change. What then?

In that case, the next step is to have a serious conversation about performance management. Set expectations clearly. Ironically, if you’re dealing with a jerk, you’ve probably already crisply communicated your issues. But if you’re dealing with a nice person, you’re more likely to have a slow, painful drift into this awkward situation, where they probably vaguely know that they’re doing poorly but may not realize how poorly.

If that’s what’s happening, you need to immediately correct it, even if firing isn’t on the table. If you’ve gotten to this point, some significant action is probably necessary. Make sure they understand the urgency of the situation, and if you have multiple options for them to consider, give them a clear timeline for how long they have to make a decision.

As I detailed above, things like a down-leveling or extended leave might be on the table. You probably do not want to drop anything like that in a normal 1:1; make sure you have scheduled enough time to deal with it.

Remember: Run That Retrospective!

In most cases, the development of this sort of situation is a clear management failure. If you’re the manager, you need to own it.

Try to determine where the failure was. Was it hiring? Leveling? A team culture that encourages burnout? A poorly-vetted internal transfer? Accepting low performance for too long, not communicating expectations?

If you can identify a systemic cause as actionable, then you need to make sure there is time and space to make the necessary changes, and soon. It’s stressful to have to go through this process with one person, but if you have to do it repeatedly, something is badly broken: any dynamic that persistently damages multiple people’s productivity is almost definitionally a team-destroyer.


  1. REMEMBER TO LIKE AND SUBSCRIBE 

  2. I know it’s a habit we all have from industry jargon — heck, I used the term in my own initial Mastodon posts about this — but “postmortem” is a fraught term in the best of circumstances, and super not great when you’re talking about an actual person who has recently been fired. Try to stick to “retrospective” or “review” when you’re talking about this process with your team. 

  3. Now, I don’t know if this is just me, but for reasons that are outside the scope of this post, when I was in my mid teens I got a copy of the DSM-IV, read the whole thing back to back, and studied it for a while. I’ve never had time to catch up to the DSM-5 but I’m vaguely aware of some of the changes and I’ve read a ton of nonfiction related to mental health. As a result of this self-education, I have an extremely good track record of telling people “you should see a psychiatrist about X”. I am cautious about this sort of thing and really only tell close friends, but as far as I know my hit-rate is 100%. 

Telemetry Is Not Your Enemy

Not all data collection is the same, and not all of it is bad.

Part 1: A Tale of Two Metaphors

In software development, “telemetry” is data collected from users of the software, almost always delivered to the authors of the software via the Internet.

In recent years, there has been a great deal of angry public discourse about telemetry. In particular, there is a lot of concern that every software vendor and network service operator collecting any data at all is spying on its users, surveilling every aspect of our lives. The media narrative has been that any tech company collecting data for any purpose is acting creepy as hell.

I am quite sympathetic to this view. In general, some concern about privacy is warranted whenever a new data-collection scheme is proposed. However, it seems to me that the default response is no longer “concern and skepticism” but rather “panic and fury”. All telemetry is seen as snooping, and all snooping is seen as evil.

There’s a sense in which software telemetry is like surveillance. However, it is only like surveillance. Surveillance is a metaphor, not a description. It is far from a perfect metaphor.

In the discourse around user privacy, I feel like we have lost a lot of nuance about the specific details of telemetry when some people dismiss all telemetry as snooping, spying, or surveillance.

Here are some ways in which software telemetry is not like “snooping”:

  1. The data may be aggregated. The people consuming the results of telemetry are rarely looking at individual records, and individual records may not even exist in some cases. There are tools, like Prio, designed to perform this aggregation in as privacy-sensitive a way as possible (a sketch of one such technique follows this list).
  2. The data is rarely looked at by human beings. In the cases (such as ad-targeting) where the data is highly individuated, both the input (your activity) and the output (your recommendations) are mainly consumed by you, in your experience of a product, by way of algorithms acting upon the data, not by an employee of the company you’re interacting with.1
  3. The data is highly specific. “Here’s a record with your account ID and the number of times you clicked the Add To Cart button without checking out” is not remotely the same class of information as “Here’s several hours of video and audio, attached to your full name, recorded without your knowledge or consent”. Emotional appeals calling any data “surveillance” tend to suggest that all collected data is the latter, where in reality most of it is much closer to the former.
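
To make the first point concrete, here is a minimal sketch of “randomized response”, one of the oldest aggregation-friendly collection techniques. To be clear, this is an illustrative toy in Python, not Prio’s actual protocol (Prio is considerably more sophisticated), and every name in it is invented for the example:

```python
import random

def randomized_response(used_feature: bool) -> bool:
    """Client-side: with probability 1/2, report the truth; otherwise,
    report a fair coin flip. No single report can be read as a fact
    about this particular user."""
    if random.random() < 0.5:
        return used_feature
    return random.random() < 0.5

def estimate_usage_rate(reports: list[bool]) -> float:
    """Server-side: P(report is True) = 0.5 * p + 0.25, where p is the
    true usage rate, so inverting that recovers p in aggregate."""
    observed = sum(reports) / len(reports)
    return min(1.0, max(0.0, (observed - 0.25) * 2))
```

Across millions of reports, the vendor can estimate roughly what fraction of users touched a feature, while any individual record remains deniable noise.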

There are other metaphors which can be used to understand software telemetry. For example, there is also a sense in which it is like voting.

I emphasize that voting is also a metaphor here, not a description. I will also freely admit that it is in many ways a worse metaphor for telemetry than “surveillance”. But it can illuminate other aspects of telemetry, the ones that the surveillance metaphor leaves out.

Data-collection is like voting because the data can represent your interests to a party that has some power over you. Your software vendor has the power to change your software, and you probably don’t: either you don’t have access to the source code at all, or, even if it’s open source, you almost certainly don’t have the resources to take over its maintenance.

For example, let’s consider this paragraph from some Microsoft documentation about telemetry:

We also use the insights to drive improvements and intelligence into some of our management and monitoring solutions. This improvement helps customers diagnose quality issues and save money by making fewer support calls to Microsoft.

“Examples of how Microsoft uses the telemetry data” from the Azure SDK documentation

What Microsoft is saying here is that they’re collecting the data for your own benefit. They’re not attempting to justify it the way defenders of law-enforcement wiretap schemes might. Those who want literal mass surveillance tend to concede that it might hurt individuals a little bit to be spied upon, but argue that if we spy on everyone, surely we can find the bad people and stop them from doing bad things, and that this is best for society.

But Microsoft isn’t saying that.2 What Microsoft is saying here is that if you’re experiencing a problem, they want to know about it so they can fix it and make the experience better for you.

I think that is at least partially true.

Part 2: I Qualify My Claims Extensively So You Jackals Don’t Lose Your Damn Minds On The Orange Website

I was inspired to write this post by the recent discussions in the Go community about how to collect telemetry, which provoked a lot of vitriol from people viscerally reacting to any telemetry as invasive surveillance. I will therefore heavily qualify what I’ve said above, to try to address some of that emotional reaction in advance.

I am not suggesting that we must take Microsoft (or indeed, the Golang team) fully at their word here. Trillion-dollar corporations will always deserve skepticism. I will concede in advance that it’s possible the data is put to other uses as well, possibly to maximize profits at the expense of users. But it seems reasonable to assume that the stated motive is at least partially genuine; it’s not like Microsoft wants Azure to be bad.

I can speak from personal experience here: when I’ve been in professional conversations around telemetry, my and my teams’ motivations were overwhelmingly focused on straightforwardly making the user experience good, so that users would like our products and buy more of them.

It’s hard enough to do that without nefarious ulterior motives. Most of the people who develop your software just don’t have the resources it takes to be evil about this.

Part 3: They Can’t Help You If They Can’t See You

With those qualifications out of the way, I will proceed with these axioms:

  1. The developers of software will make changes to it.
  2. These changes will benefit some users.
  3. Which changes the developers select will be derived, at least in part, from the information that they have.
  4. At least part of the information that the developers have is derived from the telemetry they collect.

If we can agree that those axioms are reasonable, then let us imagine two user populations:

  • Population A is privacy-sensitive and therefore sees telemetry as bad, and opts out of everything they possibly can.
  • Population B doesn’t care about privacy, and therefore ignores any telemetry and blithely clicks through any opt-in.

When the developer goes to make changes, they will have more information about Population B. Even if they’re vaguely aware that some users are opting out (or refusing to opt in), the developer will know far less about Population A. This means that any changes the developer makes will not serve the needs of their privacy-conscious users, which means fewer features that respect privacy as time goes on.
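
As a toy illustration of this visibility gap, consider the following sketch. The populations, numbers, and preferences here are all invented for the example:

```python
# Population A opts out of telemetry; Population B opts in.
population_a = [{"opted_in": False, "wants": "privacy controls"}] * 400
population_b = [{"opted_in": True, "wants": "more emoji"}] * 600

# The developer only ever sees opted-in records.
collected = [user["wants"] for user in population_a + population_b
             if user["opted_in"]]

share = collected.count("more emoji") / len(collected)
print(f"{share:.0%} of observed users want more emoji")  # prints "100% ..."
```

In the collected data, Population A isn’t underrepresented; it is simply absent, so a “data-driven” roadmap will optimize for Population B every time.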

Part 4: Free as in Fact-Free Guesses

In the world of open source software, this problem is even worse. We often have fewer resources with which to collect and analyze telemetry in the first place, and when we do attempt to collect it, a vocal minority among those users is openly hostile, with feedback that borders on harassment. So we often have no telemetry at all, and are making changes based on guesses.

Meanwhile, in proprietary software, the user population is far larger and less engaged. Developers are not exposed directly to users, and therefore cannot be harassed or intimidated into dropping their telemetry. This means that proprietary software gains a huge advantage: its developers can know what most of their users want, make changes to accommodate it, and therefore build a better product than one based on the uninformed guesses of the open source competition.

Proprietary software generally starts out with a panoply of advantages already — most of which boil down to “money” — but our collective knee-jerk reaction to any attempt to collect telemetry is a massive and continuing own-goal on the part of the FLOSS community. There’s no inherent reason why free software’s design cannot be based on good data, but our community’s history and self-selection biases make us less willing to consider it.

That does not mean we need to accept invasive data collection that is more like surveillance. We do not need to allow for stockpiled personally-identifiable information about individual users that lives forever. The abuses of indiscriminate tech data collection are real, and I am not suggesting that we forget about them.

The process for collecting telemetry must be open and transparent, and the data collected needs to be continuously vetted for safety. Clear data-retention policies should always be in place, to avoid future unanticipated misuses of data that is thought to be safe today but may be de-anonymized or otherwise abused in the future.
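
What might that look like in practice? One hypothetical shape for it is a machine-readable manifest, checked into a project’s repository, declaring every metric collected, its purpose, whether it is aggregated, and a hard retention deadline, so users can vet the policy before any data is gathered. Every name and number below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricPolicy:
    name: str            # what is being measured
    purpose: str         # why the project wants it
    aggregated: bool     # True if only population totals are reported
    retention_days: int  # hard deadline after which raw records are deleted

TELEMETRY_MANIFEST = [
    MetricPolicy(
        name="build.duration_seconds",
        purpose="detect performance regressions across releases",
        aggregated=True,
        retention_days=90,
    ),
    MetricPolicy(
        name="cli.subcommand_used",
        purpose="prioritize documentation for the most-used subcommands",
        aggregated=True,
        retention_days=30,
    ),
]
```

Because the manifest lives in the repository, changes to what is collected go through the same review process as any other change.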

I want the collaborative feedback process of open source development to result in this kind of telemetry: thoughtful, respectful of user privacy, and designed with the principle of least privilege in mind. If we have this kind of process, then we could hold it up as an example for proprietary developers to follow, and possibly improve the industry at large.

But in order to be able to produce that example, we must produce criticism of telemetry efforts that is specific, grounded in actual risks and harms to users, rather than a series of emotional appeals to slippery-slope arguments that do not correspond to the actual data being collected. We must arrive at a consensus that there are benefits to users in allowing software engineers to have enough information to do their jobs, and that telemetry is not uniformly bad. We cannot allow a few complaining users to stop these efforts for everyone.

After all, when those proprietary developers look at the hard data that they have about what their users want and need, it’s clear that those who are complaining don’t even exist.


  1. Please note that I’m not saying that this automatically makes such collection ethical. Attempting to modify user behavior or conduct un-reviewed psychological experiments on your customers is also wrong. But it’s wrong in a way that is somewhat different than simply spying on them. 

  2. I am not suggesting that data collected for the purposes of improving the users’ experience could not be used against their interest, whether by law enforcement or by cybercriminals or by Microsoft itself. Only that that’s not what the goal is here.