I have had a lot of content banned from Facebook, Instagram, and Etsy. This not only negatively impacts my business but also limits what I can say and where I can say it. I am tired of it.
My content relates to female sexual pleasure and is not porn-related. It is demure, educational, factual and doesn't fetishise anyone. I sell life-size, anatomically correct clitoris models that are used by health professionals, teachers and therapists to promote sexual well-being. Apparently, trying to talk about or sell these models goes against community standards.
Censorship on platforms like Instagram, Facebook, and TikTok is controlled by automated systems designed to identify and moderate content against a set of rules, commonly referred to as 'community guidelines'. These algorithms are programmed to flag, restrict, or remove content deemed inappropriate or in violation of those guidelines. While they are created to regulate explicit, harmful, or unsafe content, they often censor educational or sex-positive material as well, particularly content related to female sexual pleasure.
Here’s how these algorithms work:
1. Keyword Detection
One of the primary methods algorithms use to censor content is keyword detection. Platforms scan posts, captions, and comments for specific words or phrases associated with explicit or adult content. Terms like "vagina," "clitoris," "orgasm," or "masturbation" often trigger flags, even when used in a medical or educational context. The problem is that these algorithms often lack the ability to understand the context in which the words are used. For example, a post explaining clitoral anatomy for sexual health education is treated the same way as explicit pornography, leading to disproportionate censorship of female-focused content.
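To see how blunt this kind of filter is, here is a minimal sketch in Python. The word list, the tokenising, and the flagging rule are all my assumptions for illustration; no platform publishes its actual blocklist or matching logic.

```python
# Minimal sketch of context-blind keyword flagging (illustrative only).
# FLAGGED_TERMS and the matching rule are assumptions, not any platform's real rules.

FLAGGED_TERMS = {"vagina", "clitoris", "orgasm", "masturbation"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocklisted term.

    Note what is missing: there is no check for medical, educational,
    or anatomical context, so a health lesson and explicit content
    are treated identically.
    """
    words = {w.strip(".,!?;:'\"()").lower() for w in text.split()}
    return bool(words & FLAGGED_TERMS)

print(flag_post("A diagram of the clitoris for sexual health education."))  # True: flagged anyway
print(flag_post("Our new range of scarves."))                               # False
```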
2. Image and Video Recognition
In addition to text, algorithms use image and video recognition technology to detect nudity, explicit content, or suggestive poses. Machine learning models analyse visual data by identifying patterns in pixels, shapes, and colours that might indicate sexual content. However, these systems are still not sophisticated enough to distinguish between an educational diagram of female anatomy and explicit imagery, resulting in over-censorship of sexual health content, such as posts about menstruation, breastfeeding, or female pleasure. Additionally, the visibility of female bodies, especially nudity, tends to be more strictly regulated than that of male bodies. Female nipples, for example, are censored more frequently than male nipples, highlighting the algorithms' bias against female-centric content, even in non-sexual contexts.
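The same context-blindness can be sketched for images. In the toy example below, `nudity_score` is a stand-in for a trained vision model, and the scores and threshold are invented for illustration; the point is only that a thresholded score cannot tell a diagram from pornography.

```python
# Illustrative sketch of threshold-based image moderation (not real platform code).
# `nudity_score` stands in for a vision model; all numbers are assumptions.

def nudity_score(image_name: str) -> float:
    # Stand-in for a model that returns P(nudity) from pixels alone.
    # Real models score skin-tone regions, shapes, and poses; they do not
    # know whether the input is a medical diagram or pornography.
    scores = {
        "anatomy_diagram.png": 0.81,  # educational diagram, large skin-like area
        "explicit_photo.jpg": 0.93,   # genuinely explicit image
    }
    return scores[image_name]

THRESHOLD = 0.80  # assumed cutoff: anything above is removed

for image in ("anatomy_diagram.png", "explicit_photo.jpg"):
    action = "remove" if nudity_score(image) >= THRESHOLD else "allow"
    print(image, "->", action)  # both removed: the model sees pixels, not purpose
```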
3. Natural Language Processing
Algorithms often use Natural Language Processing (NLP) to assess the tone and intent of a post. However, despite advances in NLP, many algorithms struggle with the nuances of human language. They may flag posts that discuss female sexual health even when the content is respectful and informative. If a post mentions both pleasure and anatomy in a way that's educational, it can still be categorised as violating guidelines, because the system generalises across all sexual content rather than assessing intent.
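A rough sketch of why intent gets lost: many text classifiers start from token counts and per-token weights, so framing words like "educational" contribute nothing unless the model was specifically trained to weight them. The weights below are invented for illustration, not taken from any real moderation model.

```python
# Sketch of bag-of-words risk scoring (all weights are invented for illustration).

from collections import Counter

RISK_WEIGHTS = {"pleasure": 0.6, "orgasm": 0.9, "anatomy": 0.3, "clitoral": 0.7}

def risk_score(text: str) -> float:
    tokens = Counter(text.lower().replace(".", "").replace(",", "").split())
    # Context words ("educational", "respectful") carry zero weight here,
    # so the intent of the post has no effect on its score.
    return sum(RISK_WEIGHTS.get(tok, 0.0) * n for tok, n in tokens.items())

post = "An educational, respectful guide to clitoral anatomy and pleasure."
print(round(risk_score(post), 2))  # 1.6: same score with or without the framing words
```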
4. Content Moderation by Association
Another issue stems from content moderation by association. Algorithms can flag content if it is similar to or associated with posts that have been previously flagged or removed. If an account regularly posts about female sexuality, reproductive health, or sex education, it is more heavily scrutinised by algorithms, even if the content itself doesn’t break explicit rules. This results in the “shadowbanning” or demotion of accounts, meaning that their content is less likely to appear in users’ feeds or searches, limiting reach and engagement.
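One plausible way to sketch moderation by association: an account's history of prior flags lowers the threshold its new posts must clear. Every number below is an assumption; the mechanism, not the values, is the point.

```python
# Sketch of account-level "moderation by association" (all numbers assumed).
# Prior flags on an account lower the bar for suppressing its next post.

BASE_THRESHOLD = 0.80

def effective_threshold(prior_flags: int) -> float:
    # Each earlier flag makes the account more suspect, so new posts are
    # judged against a stricter cutoff, regardless of their own content.
    return max(0.50, BASE_THRESHOLD - 0.05 * prior_flags)

post_score = 0.62  # model's risk score for a new, rule-abiding post

for prior_flags in (0, 2, 6):
    suppressed = post_score >= effective_threshold(prior_flags)
    print(prior_flags, "prior flags ->", "demoted" if suppressed else "shown")
# 0 -> shown, 2 -> shown, 6 -> demoted: same post, different history
```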
5. User Reporting and Feedback Loops
User reports also play a role in shaping how the algorithm works. If enough users report a particular post or account for inappropriate content, the algorithm may automatically suppress or remove that content, even if it does not violate any guidelines. Because discussions of female sexuality can still provoke discomfort or controversy, posts about female pleasure are more likely to be flagged by users and, as a result, censored by the platform’s algorithms.
These reports feed into the platform’s machine learning models, which then reinforce what content the algorithm should consider inappropriate. Over time, this leads to increased sensitivity around sex-positive discussions about women, as the algorithm adapts to user-driven norms, which may be biased against open conversations on female sexual pleasure. And around and around we go…
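A toy version of that loop might look like this. The report threshold and the labelling step are assumptions, but they capture the circularity: user reports suppress a post, and the suppression itself becomes training data for the next model.

```python
# Sketch of a report-driven feedback loop (threshold and flow are assumptions).

REPORT_THRESHOLD = 10
training_labels = []  # (text, label) pairs later used to retrain the model

def handle_reports(post_text: str, report_count: int) -> str:
    if report_count >= REPORT_THRESHOLD:
        # Suppressed on volume of reports alone, with no guideline check.
        # The suppression is then recorded as a "violation" label, teaching
        # the next model that similar posts are inappropriate.
        training_labels.append((post_text, "violation"))
        return "suppressed"
    return "visible"

print(handle_reports("A factual post about female pleasure.", 14))  # suppressed
print(training_labels)  # the user-driven label now feeds the next training run
```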
6. Cultural and Gender Biases
The algorithms that moderate content are not neutral. They are built on datasets that reflect broader cultural norms and biases, often replicating the societal discomfort surrounding female sexual pleasure. The technology behind these algorithms is developed by teams of (often male) engineers and decision-makers who bring their own cultural understandings of what is deemed “appropriate” or “inappropriate” content. As a result, these systems tend to disproportionately censor content related to female sexuality, while discussions around male sexuality may face fewer restrictions.
7. Machine Learning and Continuous Training
These algorithms are powered by machine learning models that continually “learn” from new data. As more content is flagged, removed, or reported, the algorithm updates its understanding of what types of content should be censored. This creates a feedback loop where flagged content further trains the algorithm to be more conservative or aggressive in flagging similar content in the future. Unfortunately, since women’s sexual pleasure remains a culturally sensitive topic, this cycle reinforces the censorship of such discussions, further diminishing sex-positive content.
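That ratchet can be simulated in a few lines. Everything here (the scores, the threshold, the learning step) is invented, but it shows the dynamic: each round of retraining on freshly flagged content makes the next round stricter.

```python
# Toy simulation of the retraining loop described above (all dynamics assumed).
# Each round, content flagged under the current threshold becomes training
# data that nudges the threshold lower, so borderline posts fall next round.

threshold = 0.80
post_scores = [0.55, 0.62, 0.70, 0.78, 0.85]  # assumed risk scores for similar posts

for round_num in range(1, 5):
    flagged = [s for s in post_scores if s >= threshold]
    # Retraining on the newly flagged examples makes the model stricter,
    # modelled here as the threshold drifting toward the flagged content.
    if flagged:
        threshold -= 0.05 * len(flagged)
    print(f"round {round_num}: threshold={threshold:.2f}, flagged={len(flagged)}")
# The threshold ratchets downward; posts that passed earlier are caught later.
```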
Consequences and Criticism
There is a lack of context sensitivity: algorithms are generally unable to understand the nuance or educational context in which terms related to female pleasure are used. This means my educational posts about the clitoris are flagged the same way as explicit adult content.
This disproportionately impacts on female sexuality as the algorithms tend to censor female sexual content more rigorously than male content, reinforcing cultural taboos and perpetuating gender inequality in discussions about pleasure and sexuality.
The shadowbanning and reduced visibility of sex educators and content creators focusing on female pleasure mean that our posts are hidden from the broader audience, reducing their reach and stifling important conversations around sexual health and empowerment.
The algorithms are not our friends. They control censorship on social media platforms and are built on systems that lack nuance, perpetuate cultural biases, and disproportionately censor female-focused discussions of sexual pleasure. Without greater transparency, contextual understanding, and an overhaul of these systems, sex-positive content aimed at normalising female pleasure will continue to face undue restrictions, limiting the spread of critical sexual health information and reinforcing societal taboos.
Recommended Readings
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018.
– This book explores how algorithms and digital platforms reinforce societal biases, including gender and race. It provides a foundational understanding of how tech infrastructures, including censorship algorithms, can perpetuate inequality.
Gillespie, Tarleton. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press, 2018.
– Gillespie delves into the complexities of content moderation on social media platforms, including issues around censorship, algorithms, and the politics of content regulation.
Duguay, Stefanie. "'There's No Cure for This': Sexuality and Online Censorship in LGBTQ+ Women's Communities." New Media & Society, vol. 21, no. 11-12, 2019, pp. 2609-2626.
– This research paper examines how online platforms censor discussions related to sexuality, especially within LGBTQ+ communities, and the impact that has on marginalised groups, including women.
Gerrard, Yasmine. "The Algorithmic Censorship of Sex Education on Social Media Platforms." Digital Culture & Society, vol. 6, no. 1, 2020, pp. 183-200.
– This article focuses on how social media algorithms specifically target sex education content, highlighting how this disproportionately affects women and discussions surrounding female sexual pleasure.
West, Sarah Myers. “Censored, Suspended, Shadowbanned: User Interpretations of Content Moderation on Social Media Platforms.” New Media & Society, vol. 20, no. 11, 2018, pp. 4366-4383.
– West’s study investigates users’ experiences with content censorship, including shadowbanning, and highlights how women and marginalised groups are particularly affected by algorithmic moderation.