[THIS POST VIOLATES COMMUNITY GUIDELINES]
Dhrithi Mijar
B.Sc. Economics (2025-2029)
Estimated Reading Time: 7 minutes

You don’t need to be a doomscroller to notice the potent effect social media is having on our language: people now talk around things instead of about them. I’m not referring to brainrot or slang, but to the censorship of numerous topics deemed “no-no words” by the platforms that host our interactions. You might have come across the word “rape” rendered as “grape” or “rake” or some similarly censored version, and perhaps felt uneasy at these infantilising substitutions appearing in serious contexts. Words associated with sensitive topics are softened and obscured – fragmented into abbreviations, asterisks, and other special characters – until you’re left scratching your head, wondering what these phrases that resemble auto-generated passwords even stand for. Language is one of the most human things we own, but lately it feels as though we’ve handed over the keys to a mass of code.
This linguistic dancing-around is called algospeak. And people have been quite creative with it. Evasive terms for “gun” range from “pew-pew” to the somewhat poetic “rooty-tooty-point-and-shooty”. While this shift might appear harmless – even natural and considerate – the consequences extend far beyond the spread of silly internet jargon.
Social media users adopt algospeak to circumvent content moderation and platform community guidelines. These systems automatically flag anything deemed inappropriate. Posts and accounts are often restricted for inexplicable reasons because moderation policies are heavy-handed yet largely invisible. Users aren’t sure where the limits lie, or at what point a ban kicks in. Because almost any word might conceivably be flagged, the inconsistency and opacity of these guidelines breed uncertainty about whether a given post will come under fire. So people invent workarounds to slip their content past the algorithmic filters. This seems smart, until they begin to hypercorrect and overcensor words everywhere, second-guessing the platforms before any restriction is actually enforced, and stripping the weight from some very serious topics.
Timothy Snyder warns against something similar in his book On Tyranny, which opens with the lesson “Do not obey in advance”. He describes what he calls “anticipatory obedience” – the instinctive tendency to adapt to a new situation, where “individuals think ahead about what a more repressive government will want, and then offer themselves without being asked”. It is important to note that Snyder’s work concerns twentieth-century authoritarian regimes, particularly Nazi Germany, where punishment was severe and certain. Social media moderation is by no means equivalent to his examples, but it shares one feature: voluntary surrender. Snyder argues that tyranny thrives when people make themselves predictable in advance, internalising compliance without being asked.
In our case, we obey after we’ve experienced restrictions and bans, or watched others experience them. I would call this obedience in uncertainty rather than anticipatory obedience.
One platform notorious for manipulating which content gets pushed and which doesn’t is TikTok. In August this year, thousands of social media users in the USA took to TikTok to post about a music festival that didn’t exist. It was a massive inside reference to protests against Immigration and Customs Enforcement (ICE) raids across the country. Those who spoke about it did so in coded terms – concerts, stage sets, light shows – because they believed social media companies would take down or suppress their content about the protests.
But here’s the thing: there is no evidence that any social media company actually suppressed news of the mass demonstrations. Protest footage in short-form video is not systematically taken down. The whole music-festival charade began because of this very phenomenon – obedience in uncertainty.
This dynamic resembles what the French philosopher Michel Foucault described in Discipline and Punish. Modern power is not a blunt instrument but a system of training, in which people test boundaries and are corrected. Over time they adjust, and the rule no longer needs to be stated, because it already lives inside them; newcomers fall in line by observing what has happened to their predecessors. Foucault devotes a chapter to the Panopticon, an eighteenth-century prison design in which a single guard could watch every prisoner, but the prisoners could never tell whether they were being watched. Because they might be observed at any moment, they began to behave as if they always were. They became “docile bodies”, as he put it, disciplining themselves.
Social media censorship produces a similar Panopticism. The algorithm is an invisible power constantly sweeping through uploaded content with the help of artificial intelligence, automatically taking down anything that violates the standards. The fear of being banned and the desire to retain engagement drive us to water down our language, sanding away the serrated edges of unpleasant realities until they are fit for a Disney script. It’s like the Eye of Sauron roving from atop its tower, or some sort of Big Brother watching over language.
Speaking of Big Brother, George Orwell had much to say about the evolution of language. In his essay Politics and the English Language, he expresses a deep suspicion of euphemism because it allows people to describe violence without confronting it. Political language across all ideologies, he wrote, “is designed to make lies sound truthful and murder respectable”. Euphemism dulls moral perception by abstracting visceral harm. In his words, “The greatest enemy of clear language is insincerity”. When people reach for convoluted euphemism and algospeak to bypass algorithms, they disrespect those who have actually lived through such experiences and strip away the gravity those words carry. It is one thing to use “took their life” in place of “killed themself” or “died by suicide”, but when you see words like “sewerslide”, that should, at the very least, make you question whether this fulfils the basic moral responsibility we all carry as human beings. Let’s be honest: most of us couldn’t bring ourselves to refer to the suicide of a real, living, breathing person – a fellow human being – as “the s-word” or a “sewerslide”. People die and are killed; they are not “unalived”. By using such language, we actively participate in doublespeak – a new Newspeak, the language of 1984 designed to shrink the range of human thought. Algospeak requires little mental effort because it feels safe(r), quietly thinning language into something that no longer expresses meaning but merely avoids risk.

As with many things, censorship and moderation began with the good intention of keeping online community spaces safe. You have probably seen people use “trigger warnings” before mentioning a topic that could cause discomfort or elicit a trauma response. Users add trigger warnings so that others know the content is something they might rather avoid, and can scroll away or block it from their feeds. But if someone has blocked the “eating disorder” tag because they don’t want to come across content that mentions it, it is unreasonable to expect them to have blocked every other variation of “tw: e@t1ng d!s0rd3r” as well. Using distorted versions of these words exposes people to the very content they are trying to avoid precisely because it harms them.
By no means is this an alarmist argument against moderation itself. When platforms operate at such enormous scale, it is necessary to limit harm and hostility, however imperfect the attempt. The issue arises when moderation prioritises the advertiser-friendliness of content over moral responsibility and the real safety of users. Collectively desensitising language by removing the “scary” words strips away the moral and social friction that makes those actions unthinkable. The internet is a place where people connect with others, often finding support that saves them or reassures them that they are not alone. Those who go through tragedy now must not only carry immeasurable grief; they must also worry about making their story sound palatable enough to post if they want to spread awareness or seek help. A victim of sexual assault or rape would probably have to call it “seggsual assault” or use a 🍇 emoji, because this juvenile substitution is what the internet now expects. Ironically, they would probably be reported for using the direct words while speaking up about what they’ve been through. Beating around the bush risks trivialising real experiences and further stigmatising discussions of topics like mental health by reinforcing the taboo around them. If everything is sanitised for our protection, but people facing real issues cannot speak directly about what is hurting them, who does this really protect?
Preserving our humanity requires following another piece of advice from Snyder: “Be kind to our language”. “When we repeat the same words and phrases that appear in the daily media, we accept the absence of a larger framework,” he notes.
The most effective forms of control do not announce themselves; they fall like silent snow, covering the unpleasant and giving us the false impression that we do not have to confront reality if we give it lighter names. Calling a spade a spade is an act of accountability, not recklessness. Words are sensitive, which is what makes language powerful. Don’t let the algorithm take your words, no matter how heavily they may fall. In the end, losing the words to describe what disturbs us also means losing the ability to heal ourselves.