GARMful Content Standards
How advertisers are trying to define harm online, and why this could be bad for free speech
Last month, activist Ashley Lake wrote a Twitter thread calling attention to recent Reddit bans of sex worker accounts—something that’ll feel pretty familiar to people who pay attention to issues of free expression online. Rolling Stone later covered the story, and spoke to sex workers who were unnerved by the unexplained bans or restrictions placed on their Reddit accounts. But one topic in Lake’s thread, which went unaddressed by the Rolling Stone article, could have far-reaching implications for the internet.
I’m talking about GARM, or the Global Alliance for Responsible Media—a project of the World Federation of Advertisers. Though it sounds like a Star Trek villain, GARM is a group of corporations and social media platforms that have teamed up to define standards for ad-supported content online. The basic idea is to prevent ads from respectable businesses—like Mastercard, T-Mobile, Disney, Coca-Cola, and other GARM members—from being shown alongside harmful or disreputable social media posts.
GARM’s standards (posted as a PDF here) have two main components—the “brand safety floor” and the “brand suitability framework.” The brand safety floor describes content that should never be shown alongside ads, while the suitability framework defines a set of standardized risk levels for different types of problematic content—from high risk to low risk.
While some types of prohibited content are clearly defined—no ad support for depictions of murder or sale of illegal drugs—others are much more ambiguous and slippery. For instance, the entry on profanity and gory content describes “excessive use of profane language or gestures and other repulsive actions that shock, offend, or insult.” While this would include some despicable stuff—though, to be fair, hate speech gets its own GARM entry—it could easily apply to valid protest rhetoric. Or Rick and Morty clips, for that matter.
But for adult content creators like Ashley Lake, one of the most significant things here is the entry on “explicit or gratuitous depiction of sexual acts, and/or display of genitals”—which looks like it would prohibit GARM-affiliated companies from running ads alongside pornography.
And GARM’s definition of “high risk” but permissible sexual content—stuff that advertisers can choose to run ads next to—is pretty narrow. It allows for nudity and “suggestive sexual situations” but notably doesn’t use words like “explicit.”
To be clear, there’s no evidence that these rules affect content moderation itself. In Twitter’s monetization program, for example, monetized tweets are held to a set of GARM-like content standards that are more stringent than the normal terms of service—meaning adult content can be tweeted out but not monetized. YouTube works similarly: the site doesn’t allow explicit sex at all, and videos that are offensive but permissible under its terms of service can still get demonetized.
But the existence of these corporate content standards, combined with the opaque nature of newsfeed algorithms, could create a situation where social media sites are incentivized to demote or shadowban content that isn’t “brand safe.” And though GARM only launched in 2019, social media companies like Twitter, TikTok, Meta, and YouTube are already among its members—and have presumably agreed to follow GARM standards when it comes to ad-supported content.
This brings us back to Ashley Lake. In her August Twitter thread she linked to a comment by an official Reddit admin, who seemingly confirmed that Reddit uses GARM standards when labeling sexually explicit content. The admin was replying to a user question about artistic nudity and Reddit’s new automatic flagging system.
“Prior to releasing this update,” the admin wrote, “we established very clear rules for when and when not to classify content as sexually explicit (in accordance with GARM brand safety standards).”
Though Reddit’s automatic flagging system isn’t meant to remove posts—only to label them as NSFW—this is still significant. It means that GARM’s standards have become a widespread method for defining and categorizing content, even on a site like Reddit, which isn’t listed as a GARM member.
And though the recent bans of sex workers may not be directly related to GARM, Lake points out that Reddit has made changes over the past couple of years that decrease the visibility of NSFW content on the site.
The question here isn’t necessarily about what content should be allowed on the internet; it’s about who gets to make that choice. And in other areas of content moderation, big businesses are having an outsized influence on what kinds of speech are permitted. This has happened before: policies from payment processors have led to crackdowns on legally permissible sexual content.
And when sex is censored, the effects aren’t limited to porn. We’ve already seen the impacts of FOSTA-SESTA, a 2018 law meant to fight sex trafficking that instead led to widespread censorship of free expression. According to the ACLU, FOSTA-SESTA has caused online platforms to “shut down conversations about sex education and sex work, particularly by and for LGBTQ+ people.”
This whole GARM thing also ties in with one of the internet’s other major problems: the business model itself. As some privacy advocates have argued, social media’s reliance on ad revenue has led to the types of algorithms that promote inflammatory content. Activist Evan Greer summed this up pretty well in a tweet last year:
“Instagram and Facebook use surveillance-driven algorithms that show you whatever content they think will keep you on the platform the longest, to sell ads. The way to stop that is to finally pass a Federal privacy law that makes it illegal to collect the data they need to do that”
A strong federal privacy law could shut down the targeted-advertising business, but it would still leave us with an internet powered by ads. If social media companies had to move away from advertising altogether, though, they might focus more on other sources of revenue, like user subscriptions or premium features. A paywalled version of Twitter might not be popular, but the current system definitely isn’t working.

