No More Stories

Journalists need to stop writing “stories” and start monitoring empirical consensus.

This is a bit of a rant about a topic I’m not an expert on, but one I feel strongly about. So, despite the forceful language, please read this knowing that there’s still a fair amount of epistemic humility behind what I’m saying, and that I’m definitely open to updating my opinion if an expert on journalism or public policy can offer a compelling defense of the Chestertonian fence that is the current structure of journalistic institutions. Comments sections are the devil’s playground, so I don’t have one, but feel free to reach out, and if we have a fruitful discussion I’m happy to publish it here.

One of the things that COVID has taught me is that the concept of a “story” in the news media is a relic that needs to be completely re-thought. It is not suited to the challenges of media communication today.

Specifically, there are challenging and complex public-policy questions which require robust engagement from an informed electorate1. These questions are open-ended and their answers are unclear. What’s an appropriate strategy for public safety, for example? Should policing be part of it? I have my preferred snappy slogans in these areas but if we want to step away from propaganda for a moment and focus on governance, this is actually a really difficult question that hinges on a ton of difficult-to-source data.

For most of history, facts were scarce. It was the journalist’s job to find facts, to write them down, and to circulate them to as many people as possible, so that the public discourse could at least be fact-based, with some basis in objective reality.

In the era of the Internet, though, we are drowning in facts. We don't just have facts, we have data. We don't just have data, we have metadata; we have databases and data warehouses and data lakes and all manner of data containers in between. These data do not coalesce into information on their own, however. They need to be collected, collated, synthesized, and interpreted.

Thus was born the concept of Data Journalism. No longer is it the function of the journalist simply to report the facts; in order for the discussion to be usefully grounded, they must also aggregate the facts, and present their aggregation in a way that can be comprehended.

Data journalism is definitely a step up, and there are many excellent data-journalism projects that have been done. But the problem with these projects is that they are often individual data-journalism stories that give a temporal snapshot of one journalist's interpretation of an issue. Just a tidy little pile of motivated reasoning with a few cherry-picked citations, and then we move on to the next story.

And that's when we even get data journalism. Most journalism is still just isolated stories, presented as prose. But this sort of story-after-story presentation in most publications provides a misleading picture of the world. Beyond even the sample bias of what kinds of stories get clicks and can move ad inventory, this sequential chain of disconnected facts is extremely prone to cherry-picking by bad-faith propagandists, and even much less malicious problems like recency bias and the availability heuristic.

Trying to develop a robust understanding of complex public policy issues by looking at individual news stories is like trying to map a continent's coastline by examining individual grains of sand one at a time.

What we need from journalism for the 21st century is a curated set of ongoing collections of consensus. What the best strategy is to combat COVID might change over time. Do mask mandates work? You can’t possibly answer that question by scrounging around on PubMed by yourself, or worse yet by reading a jumbled stream of op-ed thinkpieces in the New York Times and the Washington Post.

During COVID, some major press institutions started caving to the fairly desperate need for this sort of structure by setting up "trackers" for COVID vaccinations, case counts, and so on. But these trackers still fit awkwardly within the "story" narrative. This one from the Washington Post is a “story” from 2020, but has data from December 16th, 2021.

These trackers monitor only a few stats though, and don’t provide much in the way of meta-commentary on pressing questions: do masks work? Do lockdowns work? How much do we know about the efficacy of various ventilation improvements?

Each journalistic institution should maintain a “tracker” for every issue of public concern, and ideally they’d be in conversation with each other, constantly curating their list of sources in real time, updating conclusions as new data arrives, and recording an ongoing tally of what we can really be certain about and what is still a legitimate controversy.2


Declaratively

Insecure states should be unrepresentable.

This weekend a catastrophic bug in log4j2 was disclosed, leading to the potential for remote code execution on a huge number of unpatched endpoints.

In this specific case, it turns out there was not really any safe way to use the API. Initially it might appear that the issue was the treatment of an apparently fixed format string as a place to put variable user-specified data, but as it turns out it just recursively expands the log data forever, looking for code to execute. So perhaps the lesson here is nothing technical, just that we should remain ready to patch, or that we should pay the maintainers.

Still, it’s worth considering that injection vulnerabilities of this type exist pretty much everywhere, usually in places where the supposed defense against getting catastrophically RCE’d is to carefully remember that the string that you pass in isn’t that kind of string.

While not containing anything nearly so pernicious as a place to put a URL that lets you execute arbitrary attacker-controlled code, Python’s logging module does contain a fair amount of confusing indirection around its log message. Sometimes — if you’re passing a non-zero number of *args — parts of the logging module will interpret msg as a format string; other times they will treat it as a static string. This is in some sense a reasonable compromise: you can have format strings and defer formatting if you want, but log.warning(f"hi, {attacker_controlled_data}") is also fairly safe by default. It’s still a somewhat muddled and difficult-to-document situation.
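
To make the distinction concrete, here’s a minimal sketch of the stdlib behavior; attacker_controlled is a hypothetical stand-in for any untrusted input:

import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger(__name__)

attacker_controlled = "untrusted %s data"  # hypothetical untrusted input

# With *args, msg is a %-style format string, and formatting is
# deferred until the record is actually emitted:
log.warning("user said: %s", attacker_controlled)  # data stays data

# With no args, msg is emitted verbatim, so pre-formatting with an
# f-string is fairly safe by default:
log.warning(f"user said: {attacker_controlled}")

# The muddled case: untrusted data *as* msg, alongside args. The stray
# %s in the data is now interpreted, formatting fails, and logging
# reports an error to stderr instead of your message:
log.warning(attacker_controlled + " at %s", "10:00")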

Similarly, Twisted’s logging system does always treat its string argument as a format string, which is more consistent. However, it does let attackers put garbage into the log wherever the developer might not have understood the documentation.1
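
Here’s roughly what that looks like with twisted.logger; untrusted_input is again a hypothetical stand-in:

import sys
from twisted.logger import Logger, textFileLogObserver

log = Logger(observer=textFileLogObserver(sys.stdout))
untrusted_input = "hello {oops"  # hypothetical attacker-controlled data

# The first argument is always a PEP-3101-style format string; fields
# are filled in only from explicitly passed keyword arguments:
log.info("user said: {what}", what=untrusted_input)  # data stays data

# Passing untrusted data *as* the format string can't execute anything,
# but the dangling brace breaks formatting, so you get an "Unable to
# format event" message in the log instead:
log.info(untrusted_input)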

This is to say nothing of the elephant in the room here: SQL. Almost every SQL API takes a bunch of strings, and even the ones that make you declare an object in advance (e.g. Java’s PreparedStatement) don’t mind at all if you create one at runtime.
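
As a refresher on why that’s a footgun, here’s a minimal sqlite3 sketch; user_bar stands in for attacker-controlled input:

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table foo (bar int, baz text)")

user_bar = "0 or 1=1"  # hypothetical attacker-controlled input

# The right way: the statement is static, and the data travels
# separately as a parameter:
cur.execute("select * from foo where bar = ?", (user_bar,))

# The footgun: the API accepts any string, so this "works" too, and
# the attacker's expression becomes part of the query itself:
cur.execute(f"select * from foo where bar = {user_bar}")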

In the interest of advancing the state of the art just a little here, I’d like to propose a pattern to encourage the idiomatic separation of user-entered data (i.e. attacker-controlled payloads) from pre-registration of static, sensitive data, whether it’s SQL queries, format strings, static HTML or something else. One where copying and pasting examples won’t instantly subvert the intended protection. What I suggest would look like this:

# module scope
# (sql_statements is the prototype module linked below; sqlite3 is stdlib)
import sqlite3
import sql_statements

with sql_statements.declarations() as d:
    create_table = d.declare("create table foo (bar int, baz str)")
    save_foo = d.declare("insert into foo values (?, ?)")
    load_by_bar = d.declare("select * from foo where bar = :bar")

# later, inside a function
con = sqlite3.connect(":memory:")
cur = con.cursor()
create_table.run(cur)
save_foo.run(cur, 3, "hello")
save_foo.run(cur, 4, "goodbye")
print(list(load_by_bar.run(cur, bar=3)))

The idea here is that sql_statements.declarations() detects which module it’s in, and only lets you write those declarations once. Attempting to stick that inside your function and create some ad-hoc formatted string should immediately fail with a loud exception; copying this into the wrong part of your code just won’t work, so you won’t have a chance to create an injection vulnerability.
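
To spell out the intended failure mode, here’s a hypothetical misuse, assuming the prototype’s sql_statements module behaves as described:

def lookup(cur, user_bar):
    # This module already ran its declarations at import time, so
    # re-entering the context here should raise immediately...
    with sql_statements.declarations() as d:
        # ...which means this injection-shaped declaration never even
        # gets constructed:
        q = d.declare(f"select * from foo where bar = {user_bar}")
    return list(q.run(cur))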

If this idea appeals to you, I’ve written an extremely basic prototype here on GitHub and uploaded it to PyPI here.


  1. I’m not dropping a 0day on you; there’s not a clear vulnerability here, since it only lets you draw data from explicitly-specified parameters into the log. If you use it wrong, you just might get an "Unable to format event" type error, which we’ll go out of our way not to raise back to you as an exception. It just makes some ugly log messages. 

Unproblematize

I’ve got 999 problems.

The essence of software engineering is solving problems.

The first impression of this insight will almost certainly be that it seems like a good thing. If you have a problem, then solving it is great!

But software engineers are more likely to have mental health problems1 than those who perform mechanical labor, and I think our problem-oriented world-view has something to do with that.

So, how could solving problems be a problem?


As an example, let’s consider the idea of a bug tracker.

For many years, in the field of software, any system used to track work has been commonly referred to as a “bug tracker”. In recent years, the labels have become more euphemistic and general, and we might now call them “issue trackers”. We have Sapir-Whorfed2 our way into the default assumption that any work that might need performing is a degenerate case of a problem.

We can contrast this with other fields. Any industry will need to track work that must be done. For example, in doing some light research for this post, I discovered that the relevant term of art in construction3 is typically “Project Management” or “Task Management” software. “Projects” and “Tasks” are no less hard work, but the terms do have a different valence than “Bugs” and “Issues”.

I don’t think we can start to fix this ... problem ... by attempting to change the terminology. Firstly, the domain inherently lends itself to this sort of language, which is why it emerged in the first place.

Secondly, Atlassian has for years been desperately trying to get everybody to call their bug tracker a “software development tool” where you write “stories”, and nobody does. It’s an issue tracker where you file bugs; that’s what everyone calls it, and that’s how everyone describes what they do with it. Even they have to protest, perhaps a bit too much, that it’s “way more than a bug and issue tracker”4.


This pervasive orientation towards “problems” as the atom of work does extend to any knowledge work, and thereby to any “productivity system”. Any to-do list is, at its core, a list of problems. You wouldn’t put an item on the list if you were happy with the way the world was. Therefore every unfinished item in any to-do list is a little pebble of worry.

As of this writing, I have almost 1000 unfinished tasks on my personal to-do list.

This is to say nothing of any tasks I have to perform at work, not to mention the implicit ℵ₀ of additional unfinished tasks once one considers open source issue trackers for projects I work on.

It’s not really reasonable to opt out of this habit of problematizing everything. This monument to human folly that I’ve meticulously constructed out of the records of aspirations which exceed my capacity is, in fact, also an excellent prioritization tool. If you’re a good engineer, or even just good at making to-do lists, you’ll inevitably make huge lists of problems. On some level, this is what it means to set an intention to make the world — or at least your world — better.

On a different level though, this is how you set out to systematically give yourself anxiety, depression, or both. It’s clear from a wealth of neurological research that repeated experiences and thoughts change neural structures5. Thinking the same thought over and over literally re-wires your brain. Thinking the thought “here is another problem” over and over again forever is bound to cause some problems of its own.

The structure of to-do apps, bug trackers and the like is such that when an item is completed — when a problem is solved — it is subsequently removed from both physical view and our mind’s eye. What would be the point of simply lingering on a completed task? All the useful work is, after all, problems that haven’t been solved yet. Therefore the vast majority of our time is spent contemplating nothing but problems, prompting the continuous potentiation6 of neural pathways which lead to despair.


I don’t want to pretend that I have a cure for this self-inflicted ailment. I do, however, have a humble suggestion for one way to push back just a little bit against the relentless, unending tide of problems slowly eroding the shores of our souls: a positivity journal.

By “journal”, I do mean a private journal. Public expressions of positivity7 can help; indeed, some social and cultural support for expressing positivity is an important tool for maintaining a positive mind-set. However, it may not be the best starting point.

Unfortunately, any public expression becomes a discourse, and any discourse inevitably becomes a dialectic. Any expression of a view in public is seen by some as an invitation to express its opposite8. Therefore one either becomes invested in defending the boundaries of a positive community space — a psychically exhausting task in its own right — or one must constantly entertain the possibility that things are, in fact, bad, when one is trying to condition one’s brain to maintain the ability to recognize when things are actually good.

Thus my suggestion to write something for yourself, and only for yourself.

Personally, I use a template that I fill out every day, with four sections:

  • “Summary”. Summarize the day in one sentence that encapsulates its positive vibes. Honestly I put this in there because the Notes app (which is what I’m using to maintain this) shows a little summary of the contents of the note, and I was getting annoyed by just seeing “Proud:” as the sole content of that summary. But once I did so, I found that it helps to try to synthesize a positive narrative, as your brain may be constantly trying to assemble a negative one. It can help to write this last, even if it’s up at the top of your note, once you’ve already filled out some of the following sections.

  • “I’m proud of:”. First, focus on what you personally have achieved through your skill and hard work. This can be very difficult if you are someone who has a habit of putting yourself down. Force yourself to acknowledge that you did something useful: even if you didn’t finish anything, you almost certainly made progress, and that progress deserves celebration.

  • “I’m grateful to:”. Who are you grateful to? Why? What did they do for you? Once you’ve made a habit of allowing yourself to acknowledge your own accomplishments, those become easy to see; next, pay attention to the ways in which others support and help you. Thank them by name.

  • “I’m lucky because:”. Particularly in post-2020 hell-world it’s easy to feel like every random happenstance is an aggravating tragedy. But good things happen randomly all the time, and it’s easy to fail to notice them. Take a moment to notice things that went well for no good reason, because you’re definitely going to feel attacked by the universe when bad things happen for no good reason; and they will.

Although such a journal is private, it’s helpful to actually write out the answers, to focus on them, to force yourself to get really specific.

I hope this tool is useful to someone out there. It’s not going to solve any problems, but perhaps it will make the world seem just a little brighter.


  1. “Maintaining Mental Health on Software Development Teams”, Lena Kozar and Vova Vovk, in InfoQ 

  2. Wikipedia page for “Linguistic Relativity” 

  3. “Construction Task and Project Tracking”, from Raptor Project Management Software 

  4. Jira Features List, Atlassian Software 

  5. “Culture Wires the Brain: A Cognitive Neuroscience Perspective”, Denise C. Park and Chih-Mao Huang, Perspect Psychol Sci. 2010 Jul 1; 5(4): 391–400. 

  6. “Long-term potentiation and learning”, J. L. Martinez Jr. and B. E. Derrick 

  7. The #PositivePython hashtag on Twitter was a lovely experiment and despite my cautions here about public solutions to this problem, it’s generally pleasant to participate in. 

  8. As we well know. 

Announcing Pomodouroboros

I wrote my own pomodoro timer which is also a meditation on mortality.

As I mentioned previously, I’ve recently been medicated for ADHD.

Everyone’s experience with medication, even the same medication, is different, but my particular experience — while hugely positive — has involved not so much a decrease in symptoms as a shifting of my symptom profile. Some of my executive functions (particularly task initiation) have significantly improved, but other symptoms, such as time blindness, have gotten significantly worse. This means, for example, that I can now easily decide to perform a task, and actually maintain focus on that task for hours1, but it’s harder to notice that it’s time to stop, and still somewhat difficult to tear myself away from it.

I’ve tried pomodoro timers before and I’ve had mixed success with them. While I tend to get more done if I set a pomodoro, it’s hard to remember to set the timers in the first place, and it’s hard to do the requisite time-keeping to remember how many pomodoros I’ve already set, how many more I’ll have the opportunity to set, etc. Physical timers have no associated automation and data recording, and apps can be so unobtrusive that I can easily forget about them entirely. I’ve long had an aspiration to eventually write my own custom-tailored app that addresses some of these issues.

As part of a renewed interest in ADHD management techniques, I watched this video about ADHD treatments from Dr. Russell Barkley, wherein he said (I’m paraphrasing) “if I don’t put an intervention into your visual field it might as well not exist”.

I imagined a timer that:

  1. was always clearly present in my visual field;
  2. recorded the passage of intervals of time regardless of any active engagement from the user; the idea is to record the progress of the day, not give you a button you need to remember to push;
  3. rewarded me for setting active intentions about what to do with those chunks of time, and allowed me to mark them as successful or failed.

So, last weekend, leveraging my newly enhanced task-initiation and concentration-maintenance abilities, I wrote it, and I’ve been using it all week. Introducing Pomodouroboros, the pomodoro timer that reminds you that the uncaring void marches on regardless of your plans or intentions.

Preliminary results are extremely positive.

This thing is in an extremely rough state. It has no tests, no docs, and an extremely inscrutable UI that you need to memorize in order to use effectively. I need plenty of help with it. I contemplated keeping it private and just shipping a binary, but a realistic assessment of my limited time resources forced me to admit that it already kind of does what I need, and if I want to enhance it to the point where it can help other people, I’ll need plenty of help.

If this idea resonates with you, and you’re on macOS, check out the repo, make a virtualenv somehow, install its dependencies, I don’t know how you make virtualenvs or install dependencies, I’m not your dad2, and run ./runme. If you’re on another platform, check out the code, ask me some questions, and maybe try to write a port to one of them.


  1. I cannot express how alien a sensation it is to have conscious control over initiating this process; I’ve certainly experienced hyperfocus before, but it’s always been something that happens to me, not something that I do. 

  2. If I am your dad, come talk to me, based on your family history it’s quite likely that you do have ADHD and I’m happy to talk about how to get this installed for you offline. 

Diagnosis

I got diagnosed with ADHD, and you won’t believe what happened next. At least, I didn’t.

On August 4, I received a clinical neuropsychiatric diagnosis of ADHD.

[Image: “Squirrel squirrel on gold”, Brunetto Latini, Li Livres dou Trésor, Rouen ca. 1450-1480. Bibliothèque de Genève, Ms. fr. 160, fol. 82r]

I expected this to be a complete non-event. I’ve known I have ADHD for the last 16 years or so, so in principle this should not have been news to me.

The formal diagnosis was also unlikely to affect my treatment. Prior to testing, I’d had an initial consultation with a psychiatry provider, and based on that I was prescribed Bupropion. While this medication is more commonly used for depression, it’s increasingly used off-label for ADHD, and good evidence of its efficacy there has emerged in the last few years. It has fewer side-effects than stimulant medications, and I’ve been tolerating it well — almost no experience of side-effects. More importantly, it’s helping to manage my symptoms, and doctors are unlikely to switch treatments when the one with fewer side-effects is working well. Furthermore, my extremely offensively named, specific subtype of ADHD1 is correlated with somewhat poor response to methylphenidate specifically, and sometimes to stimulant medication more generally, so I have low expectations of improved performance if I take something stronger. And I certainly wouldn’t look forward to the much more annoying process of managing the prescription for those medications.

And yet.

One of the quirks of the particular way that I went about getting a diagnosis was that I had a battery of neuropsychiatric psychometric tests to go along with the traditional interview-based evaluation process for ADHD. At the time, this was just a huge annoyance. I was subjected to a lot of psychometric testing in my early childhood, and given the circumstances of that testing2, I have very negative associations with the experience. Moreover, since these tests were all administered remotely due to COVID, they were on a website, and unfortunately, as you probably already know, computers. JavaScript almost stopped me from getting critical mental health care.

I already knew, more or less, what the interview portion of the testing would say. I’d been roughly aware of the diagnostic criteria for many years; I knew what my childhood was like, and I knew how the symptoms still affect me today, so there wasn’t a whole lot of variation to expect there. However, I’d never self-administered any neuropsychiatric evaluations, and when I’d been subjected to psychometric testing as a child, I never got to review the results in detail; I was just given a high-level summary.

So, given this quirk, included with my diagnostic results was clear evidence of additional ADHD symptoms, such as a gap between general intelligence and cognitive performance explained by a deficit in working memory.

I already knew many of my issues were caused by ADHD. I knew that I have a neurodevelopmental disorder that affects roughly 3% of the adult population; i.e. fewer than 1 in 20 people. I knew that despite public perception of this disorder as something frivolously over-diagnosed and “not real”, it’s been possibly the best-researched condition in clinical psychiatry for decades.

And yet.


Reading through my diagnosis, after the fact, I was surprised to discover that despite having known this for years, despite having written extensively about how this specific paradigm about ADHD was both incorrect and unhelpful, there was still somehow a part of me which subconsciously believed that it was just a collection of character defects. That neurotypical people must feel like this all the time as well, and that they just try harder than me somehow.

One can easily believe that any behavior out “in the world” is simply a result of character. Failing to complete assignments in school, blowing through estimate after estimate at work, needing 3 different “upcoming meeting” reminders on every device to ensure that one doesn’t miss appointments, having a slavish dedication to to-do lists so intense that it literally borders on an obsessive compulsion... one can believe that these are all just quirks, responses to things that everyone must struggle with to some degree, and that one’s behavior in these areas might be colored a little bit by a disorder but ultimately it’s down to choices.

But what influence could “character” have on the performance on totally synthetic psychometric tasks? “Repeat this string of numbers backwards.” “Sort the numbers and repeat them in descending order.” “Describe the relationship between these two words.” “Describe some property of this baffling arrangement of shapes and colors, then do it again faster and faster.”

These are completely value-neutral activities. They take a few minutes each. They couldn’t possibly require sticktoitiveness or will-power, ambition or drive. They’re just arbitrary test results. And from the aforementioned childhood experiences of psychometrics, I know that I am hilariously, almost pathologically competitive about these sorts of things, so there’s no way I’m not going to give these brief tests my full attention and effort.

And yet, the raw data that these tests produced are consistent with my diagnosis. I really can’t help it. It’s not a choice.

I knew I might feel externally validated by receiving an official-sounding clinical diagnosis. I know that I crave validation, so I expected this to feel a little good. What I didn’t expect was the extent to which this would subtly allow me to align my subconscious, emotional self-concept with the one that I’d rationally accepted a long time ago.


The medication that came along with the same process has been life-changing, but I’ll cover that in a separate post. The diagnosis itself (along with the medication changing my symptom profile somewhat) has also led me to re-investigate coping strategies for ADHD, and to discover that quite a bit of useful research has been done since I last checked in on this disorder. There are new strategies, techniques, and medications to use since the last time I read a book on it. As annoying and tedious as the whole process was — the first step to getting treatment for ADHD is to prove you don’t have ADHD — it has absolutely been worth it.

So fine, I had a non-intuitive but ultimately positive experience with a psychiatric diagnosis, but why’d I write this? There are a few reasons.

In part, I just needed to work through my own complex feelings. I wanted to have something long-form written out that I can refer to which explains the journey that I’ve been on for the last couple of months, instead of haltingly re-deriving the explanation in every personal conversation I have.

I also wanted to “normalize”, as the kids say, talking about struggles with mental health issues. I’m too self-conscious and private to lay out the full extent of my struggles in public, but I can at least gesture towards the fact that I have struggles, and thereby give people some comfort.

As a consequence of my particular … idiom … I guess, it seems to have taken the form of an essay. Every good essay has a call to action, so here’s one: consider that getting help might be worth it. If you believe you’ve got a mental health condition—whether it be ADHD, anxiety, depression, or anything else—and you believe that you’ve been managing it on your own, I think it’s worth considering: maybe not. Particularly after this hellish 18 months. I really was managing my disorder on my own reasonably well, until one day, I wasn’t. Maybe you could really use a little help, too.3 If you can afford to, seek therapy. Seek treatment.

Good luck, and be well.


  1. The psychiatrist apologized when they delivered the results, prefacing it with “I know the name is offensive, and it’s not very accurate, but please forgive me since this is the clinical term”. 

  2. But that’s a story for a different day. 

  3. I haven’t had the opportunity to check it out yet, but given the likely audience for my blog generally, and for this particular post, I would be remiss if I did not mention that Open Sourcing Mental Illness might be a good place to start for that particular audience.