selfaware soup

Esther Weidauer

Systematic Cowardice

How the tech industry has uniquely positioned itself to be terrible at social problems through its own "best practices"

2023-03-21

text "--dry-run" in a monospace font and the look of an old CRT monitor terminal

It’s often said that you can’t solve social problems with technology alone, and there are many reasons why this is true. Technology often reproduces the social systems in which it was created, it presents a kind of solutionism that fails to look at the root causes of the problems, and it has politics and economics embedded in it that might be part of the problem, just to name a few.

I want to elaborate on another reason why the tech industry in particular is not positioned to take on social problems. This one is not due to the inherent problems with technology that I mentioned above but due to a mindset the industry has cultivated over at least the last two decades, one that I’ll call “systematic cowardice”.

To clarify, by “tech industry” I mean the type of organizations that have grown out of Silicon Valley and similar “hubs” around the world: the Amazons, Googles, and Metas, and all the smaller companies in their orbit, most of them funded by venture capital and mostly producing software, a lot of software. I don’t mean the computer shop down the road, the freelance sysadmin, or the media agency that builds websites for small businesses.


Software has a property that very few other things have, at least not nearly to the same extent: it’s fully testable. You can run a program without it ever making any real change in the world. You can use fake data, isolated test environments, or special testing modes that run the program as if it were really being used but prevent any actual actions (e.g. the --dry-run option in many command line programs). This is an awesome property. It means you can try out things that are unproven or risky without worrying about breaking anything. You can’t do this with, for example, a physical bridge. That bridge had better work right on the first try. You can make scale models and run simulations, of course, but you don’t get a chance to build a complete bridge just to see if it holds weight.
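To make that concrete, here is a minimal sketch in Python of what such a dry-run mode typically looks like. The cleanup task and the account data are invented for illustration; the point is that the same code path runs, but the irreversible step is reported instead of performed.

```python
import argparse

def delete_stale_accounts(accounts, dry_run=True):
    """Delete accounts inactive for over a year, or only report what would happen."""
    stale = [a for a in accounts if a["inactive_days"] > 365]
    for account in stale:
        if dry_run:
            print(f"[dry-run] would delete {account['name']}")
        else:
            print(f"deleting {account['name']}")
            # the real, irreversible deletion would happen here
    return stale

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--dry-run", action="store_true",
                        help="show what would be deleted without deleting anything")
    args = parser.parse_args()
    demo_accounts = [
        {"name": "alice", "inactive_days": 12},
        {"name": "bob", "inactive_days": 400},
    ]
    delete_stale_accounts(demo_accounts, dry_run=args.dry_run)
```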

Hardware always has a material effect on the world, at least in the material used to make it, and often in its interactions with the physical world, the waste it produces, the noise it makes, the injury it causes in case of an accident, and so on. Technically, software has an unavoidable material effect too, in the form of the energy it requires to run, but that effect is so easy to abstract away and often so small in each individual instance that software developers ignore it completely. Nobody really cares how much carbon one run of their unit test suite emits.

This makes building software “safely” relatively easy. As long as your test suite is good enough, you can be pretty confident that whatever change you’re deploying won’t break things in an unexpected way. In the early 2000s, relying on automated testing became more and more popular, to the point that it’s standard practice today and has been for quite some time. A code base without decent test coverage is almost universally seen as bad, and many projects have automated checks in place that prevent any change that hasn’t passed the test suite from reaching the production environment and the “real” world.
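As a small illustration, here is a sketch of the kind of automated check those gates rely on, using Python’s built-in unittest module. The apply_discount function is a made-up stand-in for “real” production logic; in a typical setup, a CI pipeline runs a suite like this and blocks deployment if any test fails.

```python
import unittest

def apply_discount(price_cents, percent):
    """Return the discounted price in cents; a deliberately tiny piece of 'real' logic."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100 percent")
    return price_cents - (price_cents * percent) // 100

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(1000, 25), 750)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(1000, 0), 1000)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(1000, 150)

if __name__ == "__main__":
    unittest.main()
```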

In addition to that, the industry commonly uses review processes in which a certain number of other developers, or perhaps a developer of a certain seniority, need to sign off on every change proposed for the code base. This is facilitated by version control systems like Git and an ecosystem of surrounding applications (GitHub, GitLab, Jira, and many more). The process is in fact often followed so rigorously that it slows the production of working software to a near halt, introducing waits for test servers to run complex pipelines and for humans to approve changes. In many companies it’s commonplace that even a minor change, e.g. the layout of a UI element, takes hours from when its author is “done” to when it is deployed, even when it’s perfectly well made and correct. For more complex changes, this waiting time quickly stretches to days, even weeks.
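The merge gate those tools enforce boils down to a rule like the following sketch. The function and thresholds here are invented for illustration and don’t reflect any particular platform’s API, but they capture why even a trivial change sits in a queue until reviewers and test servers have had their say.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    approvals: int          # number of reviewers who signed off
    tests_passed: bool      # did the automated test pipeline succeed?
    senior_approved: bool   # did someone of the required seniority approve?

def can_merge(change: ChangeRequest, required_approvals: int = 2) -> bool:
    """A typical branch-protection style rule: enough approvals,
    at least one senior reviewer, and a green test pipeline."""
    return (
        change.approvals >= required_approvals
        and change.senior_approved
        and change.tests_passed
    )

# Even a one-line UI tweak waits in the queue until every condition is met.
print(can_merge(ChangeRequest(approvals=1, tests_passed=True, senior_approved=False)))  # False
print(can_merge(ChangeRequest(approvals=2, tests_passed=True, senior_approved=True)))   # True
```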

While immensely frustrating to developers, product managers, and end users alike, this is considered normal in many organizations, and challenging this status quo is a steep uphill battle.


Here’s where I want to give this phenomenon a name: systematic cowardice.

During my previous career in tech I repeatedly encountered an outright refusal from software developers to let even a minor change skip the intense review and testing process, including changes so small that checking them for correctness was trivial. The reliance on these processes and machines had cultivated a mindset of fear, in which nobody felt empowered to make a quick change that would immediately deliver a benefit without risk, and a rejection of responsibility, in which every decision was deferred to a committee of reviewers so that if something unexpected happened, the blame would fall not on anyone in particular but on the process.

This whole ritual doesn’t even lower the risk of unexpected effects. Because nobody is really responsible, a certain lack of care and attention sneaks into the practice of software development at every level, and the resulting group of actors is no less likely to collectively make mistakes than an individual who is aware of the risks they need to account for, possibly even more so.


Social problems are even worse than hardware when it comes to how they respond to changes. A faulty machine might cause real physical harm, but there can be failsafes and safety procedures in place that minimize the chance and extent of it. Testing things in a controlled environment is more difficult and less complete with hardware than with software, but it is still possible. Complexity can be minimized by using previously proven techniques that interact in predictable ways. And in case things go wrong, turning the machine off or disabling it is usually an option.

A social system, however, has none of these safeguards. And to make things worse, changes to it are irreversible. Humans don’t simply forget harmful experiences once they have been identified as such. Every interaction with a social system can and will have unpredictable effects. The space of variables to consider in order to foresee the outcome of something like a policy change is practically infinite. And the information that goes into any review process of a proposed change is always going to be incomplete and biased.

It is impossible to test the outcome of an election or a change to a school curriculum without enacting that change on the real world. We can’t spin up a complete simulation of reality to try things out, or clone the entire planet to A/B test our politics – and if we could, the ethical implications of those abilities would keep philosophers busy for a very long time.

The clash between this persistence of social systems and the inability of tech to respect it shows itself in two ways: in technologies deployed haphazardly, without care or proper awareness of risk, and in a kind of frozen-in-place attitude that can’t handle the discomfort of making a decision with potentially harmful outcomes and then living with those outcomes. One place to observe this directly is when people get promoted from a pure technology role to one with management responsibility. Many will either apply their ideas as if they still had the multi-layered safety net that protected them from doing damage through carelessness, or refuse to take any action in their new role without the approval of the next manager one level up the org chart.


As with many things, the software development technologies I described aren’t inherently bad. They have their legitimate uses, but they also have politics embedded in them like any other technology. And the way they are often deployed today, those are politics that absolve individuals of both agency and responsibility and enforce trust in a process that exists as its own eternal object, with nobody to blame for it and nobody to fix it. It is the realism of tech and of the capitalist politics that created it, applied by tech companies to the social problems of the world and made visible in their own microcosm.

The changes that our society needs will require a kind of deliberate bravery, a willingness to try even when the chance of failure is high, not a systematic cowardice that refuses to upset the system that led to it.

I don’t want to argue for a “move fast and break things” mindset. That attitude is already reflected in the carelessness of tech, even though, ironically, the industry has set itself up to move at a glacial pace when it comes to its actual productive output. The deliberate bravery I mean requires care and the willingness to engage with the people who will be affected by the decisions we make, which is difficult and often uncomfortable. That is why I call it bravery: not for a reckless default to action, but for the willingness to sit with and endure the discomfort that comes from confronting and engaging with our fellow human beings, while still attempting change and accepting responsibility for the outcome.