Substack, the platform you’re reading this newsletter on right now1, has been in the news! And not for fun reasons. For Nazi reasons. The Atlantic published a broadside against the site this week titled “Substack Has a Nazi Problem.” The central thesis, which nobody really disputes, is that Substack hosts a lot of Nazi blogs. As they say:
At least 16 of the newsletters that I reviewed have overt Nazi symbols, including the swastika and the sonnenrad, in their logos or in prominent graphics. Andkon’s Reich Press, for example, calls itself “a National Socialist newsletter”; its logo shows Nazi banners on Berlin’s Brandenburg Gate, and one recent post features a racist caricature of a Chinese person. A Substack called White-Papers, bearing the tagline “Your pro-White policy destination,” is one of several that openly promote the “Great Replacement” conspiracy theory that inspired deadly mass shootings at a Pittsburgh, Pennsylvania, synagogue; two Christchurch, New Zealand, mosques; an El Paso, Texas, Walmart; and a Buffalo, New York, supermarket. Other newsletters make prominent references to the “Jewish Question.”
Substack is pretty open about this. Its position is that, outside of very narrow guidelines, it simply does not censor content.
“Substack is a platform that is built on freedom of expression, and helping writers publish what they want to write,” McKenzie and the company’s other co-founders, Chris Best and Jairaj Sethi, said in a statement when asked for comment on this article. “Some of that writing is going to be objectionable or offensive. Substack has a content moderation policy that protects against extremes—like incitements to violence—but we do not subjectively censor writers outside of those policies.”
I’ve interviewed Chris Best, the CEO of Substack, for the New Liberal Podcast. He’s sincere in this belief. I asked him pretty bluntly how he feels about people who literally advocate for a return to slavery making six figures from his platform, and he basically shrugged and said “That person sucks but we’re committed to being a neutral platform that doesn’t censor for content reasons. We think that’s the best path.”
Perhaps this strikes you as a principled stand for free speech. Perhaps it seems like a weak cop-out when they should be aggressively removing Nazis from their site. Truthfully, I’m not sure what I would do in Substack’s position. But while I don’t know whether I would make the same decision, I do sympathize with the choice to basically just opt out of moderating content (beyond legally required categories like CSAM).2 Because you really, truly cannot ever win with content moderation online. Getting content moderation right is impossible.
Everything I’m going to say here has been said more eloquently by Mike Masnick over at TechDirt. Mike coined the Masnick Impossibility Theorem - content moderation will always end up frustrating very large segments of the population and will always fail to accurately represent the “proper” level of moderation of anyone. It’s impossible to do well at scale. You should really just go read a bunch of Mike’s posts, but I know from the neat analytics tools on Substack that very few of you gremlins actually click links. So we’re gonna talk about it here too. There have been a number of news stories that illustrate that no matter what you do, you can’t win. There will always be media stories, there will always be scandals, and people will always be mad.
We start with the Substack story above. If you take a principled free speech stand and refuse to censor anything but outright illegal material, people will get really mad. They will write in prestigious magazines that your site is now the Nazi site. They will attempt to harm your business and cancel you.
So perhaps you move away from that! You still believe in free speech, but you make a policy against hate speech, bigotry, and so on. You just don’t really enforce it very well and there’s still a lot of Nazi and anti-Semitic stuff on your site. This is basically where Twitter’s at these days, and organizations are having a field day showing that ads are running next to Nazi content. You still look terrible and everyone is angry.
So maybe you forget about unlimited free speech. You commit to being firmly anti-Nazi, you hire a bunch of content moderators, and you dedicate a lot of resources to killing Nazi stuff on your site. Instagram, for instance, isn’t perfect but does a pretty good job there. It’s just too bad they didn’t notice the creepy content involving young children that their ads were running next to. You solved the Nazi problem, but now you’re the ‘creepy guys looking at minors’ site. Congrats!
So you say “Ok, we are firmly anti-Nazi, and we are also going to hire another massive batch of content moderators to nuke CSAM and anything that even hints in the direction of CSAM”. You really go all-out - nobody’s going to be writing stories about you! Congrats, you just suspended an entire family’s digital profiles, impacting their work and potentially ruining their lives, because their seven-year-old thought it would be funny to moon the camera. In this instance, Google only reversed course once - you guessed it - a piece in the New York Times called them out for wildly overreacting to an innocent mistake from a seven-year-old. Everyone is still mad at you.
So now you say “I will hire a third giant batch of moderators to review the decisions of the other moderators, and we will really truly try to get all this stuff right. No Nazis, no hate speech, no CSAM, no overreactions”. Did you remember that content in other languages exists? Because if not, then you’re Facebook, which has been accused of complicity in genocide in Myanmar and, less than a month ago, of contributing to ethnic violence in Ethiopia. You’re not just the Nazi site now, you’re the genocide site, as prestigious non-profits yell at you.
Weary, you now commit to a fourth round of hiring moderators. You will work them overtime if you have to. You will hire in local languages and figure it out. Uh oh! Turns out that your third-party contractor hired underage teens to do the work. And the press is running stories about how your moderators are facing a mental health crisis because their working conditions are so bad. Conditions might even be so bad they sue you! People are still furious, and you are the bad guy.
Dejected, you decide you have to lessen the load somehow. You can’t moderate all this. You decide to have a policy just banning kids’ accounts. An age limit should lessen the workload. Haha, the joke’s on you! Detecting child accounts is actually a difficult problem, and you need to hire people to do it. If you don’t do it well enough, The New York Times will write a story about how you are not responding fast enough to reports of underage accounts. Thirty-three state attorneys general will sue you for violating those children’s privacy. You’re still the bad guy and everyone hates you.
At this point you have a team of content moderators stretching into the thousands or even tens of thousands, but Nazi stuff and CSAM and underage kids are still all over the place. You can barely keep up and any slip comes with a giant shaming in the national news. You decide it’s time to just automate it all. Algorithms and AI will save you - they don’t complain about working conditions and they can work at scale! Congrats, now the media is shaming you for algorithmic bias, with some outlets screaming that you promote racism through an algorithmic bias against Palestinians and some leaders yelling that your algorithm promotes anti-Semitism through pro-Palestinian bias. Both sides hate you now.
You decide to quit social media. You still love tech, but you’re done with moderating other people’s posts. You go work for an online retailer, blessedly free of the culture war. Oops! Turns out that people are selling items with a certain pro-Palestinian phrase, and when you ban those products people get furious at you for being anti-Palestine. If you had left the phrase up, a different group would be furious about the phrase’s anti-Semitism. You decide to move to the Alaskan wilderness, hoping you’ll be eaten by bears.
(And we haven’t even talked about the expense of all this content moderation, whether to comply with demands from authoritarian governments, how to deal with spam and copyright infringement, and a million other issues.)
This isn’t cherry-picking the worst stories from years and years of reporting. Almost every single link above about some company’s terrible moderation decisions is from the last month. This cycle never stops.
I don’t think that extreme free speech is the right approach for most sites, but the urge to just throw your hands up and surrender? I get that. No matter what you do, people are going to hate you. There are going to simultaneously be people mad that you’re doing too much and people mad that you’re doing too little. In any conflict, people will accuse you of bias for and against any given side. Content moderation is a whack-a-mole game where every mole is Nazis or child abuse content, and there are ten moles popping up every second, and the moles literally never stop. If you miss even one mole, the mainstream media labels you the Nazi and/or child abuse site.
Content moderation at scale is utterly impossible to get ‘right’, where the online crowd and journalists and politicians all agree you’ve done the correct things. This doesn’t absolve sites of having to try, to do the best job they can under the circumstances. But it should make us a bit more sympathetic when we see stories about how terrible some site’s efforts at moderation are. I’ve linked this before, but Masnick’s game Trust & Safety Tycoon is a fun way to experience this in real time. You fundamentally can’t win in this arena.
Another area I wish we thought more about is which types of moderation belong at which layers of our technology stack. Most people seem to want the worst Nazis kicked off major social media sites. But social sites are just one layer of the internet. Should Nazis be kicked off not-really-social sites like GoodReads? Should they be allowed to sell things on Etsy? Should they be allowed to have UberEats accounts? What about financial services - should Zelle and Venmo deactivate Nazi accounts? If you kick Nazis off of enough sites, they’ll make their own. Should we de-rank Nazi sites in search engines? Block even mentioning their URLs from social media? Should domain hosts deactivate those Nazi sites? Should we go even further and have all email providers cancel the email accounts of Nazis? Should we leave the realm of software and pressure Apple to not sell smartphones to Nazis?
I don’t know the answers to any of this! The point I want to emphasize is that all of this is impossibly nuanced, impossibly complex, and impossible to do in such a way that satisfies everyone. You still have to draw a line somewhere, but be prepared to get yelled at anyways. Substack’s approach, I suspect, is partially motivated by an honest belief in the value of free expression and partially motivated by a desire to minimize this unwinnable cycle. And that’s something I sympathize with.
1. Or that emailed you the post
2. Child Sexual Abuse Material
The problem is in the approach, really: every content moderation decision is treated as an all-or-nothing, final, and often completely opaque choice, isolated from all other aspects of being a social media site or communications business or whatever. Where a bar owner can say, "Dude, fuck off" to the Klansman who started getting pissy about having to deal with opinions from a mere football player - and it's understood to not necessarily be a permanent decision - the scale problem of social media creates a situation where people feel that moderation decisions have to be pretty permanent and pretty absolute, because otherwise repeat offenders can skate or otherwise just make the mods play whack-a-mole. I think if there were some mechanism for scaling and time-limiting things like downranking, demonetization, and the like, there would be less of a tendency for every incident to turn into an agonizing Hobson's choice over whether it's really time to take out the banhammer.
Well, third time writing this now since I’ve accidentally swiped and lost several paragraphs where I included more nuance.
Substack’s ignoring Nazis seems a lil sus when an official account is signal-boosting a self-described “reactionary radical”.
That isn’t to say that I disagree with your perspective; I do appreciate the way you’ve broken things down and am excited to check out the author over at TechDirt!