Although you are free to say what you want on social platforms, caution is expected when expressing opinions that offend, threaten, or insult groups based on race, color, religion, national origin, or disability. Such content is often categorized as hate speech, and most social platforms have developed algorithms to prevent it from being posted.
Over the years, however, users of these platforms have found various ways of bypassing those algorithms to get their views and opinions posted. In the early 1980s there was Leetspeak, in which letters were replaced by numbers or special characters to create coded words that would otherwise have been banned on bulletin board systems. An example of Leetspeak for the word Hacker would be H4X0R. Fast-forward to 2022, and we now have Algospeak, which is being used to bypass the artificial-intelligence-driven content moderation on social platforms such as Facebook, Instagram, TikTok, and Twitch.
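At its core, Leetspeak is just a character substitution. A minimal sketch in Python, using an illustrative (not canonical) substitution table, shows the idea; real Leetspeak is looser, as the H4X0R example above shows (it also swaps "ck" for "X").

```python
# Illustrative leetspeak-style character map (there is no single standard table).
LEET_MAP = {"a": "4", "e": "3", "o": "0", "t": "7", "s": "5", "k": "x"}

def to_leet(word: str) -> str:
    """Replace each letter with its leetspeak equivalent, if one exists."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in word)

print(to_leet("Hacker"))  # -> H4cx3r
```

The point is that the coded word stays readable to humans while no longer matching a filter's literal blocklist entry.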
Algospeak is an ingenious way of using code words to stand in for words or phrases that would normally be removed or down-ranked by content moderation systems.
Some examples of algospeak include:
- Seggs (Sex)
- Unalive (Kill / Dead)
- Ouid (Weed)
- Pizza (Pfizer)
- Moana (Moderna)
- Swimmers (Vaccinated People)
- Backstreet Boys reunion tour (COVID-19)
- Spicy Eggplant (Vibrator)
- Nip Nops (Nipples)
- Leg Booty (LGBTQ)
- See You Next Tuesday (I’ll leave you to figure out this one)
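Why do these substitutions work? Much basic moderation amounts to matching posts against a blocklist, and a coded word simply isn't on the list. A hypothetical sketch (the blocklist and matching logic are illustrative, not any platform's actual system):

```python
# Hypothetical keyword-based moderator; real systems are far more sophisticated.
BLOCKED_TERMS = {"sex", "weed"}  # illustrative blocklist

def is_flagged(post: str) -> bool:
    """Flag a post if any blocked term appears as a standalone word."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not BLOCKED_TERMS.isdisjoint(words)

print(is_flagged("Let's talk about sex education."))    # True
print(is_flagged("Let's talk about seggs education."))  # False
```

The second post sails through because "seggs" never appears in the blocklist, which is exactly the gap algospeak exploits and why moderation systems must constantly chase new code words.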
While the above examples aren’t shocking enough to cause much of a stir, Siobhan Hanna of Telus International states that the area everyone should be concerned about is child and human exploitation, as it is one of the fastest-evolving areas of algospeak. The term cheese pizza, for example, has been known to be used when referring to trading explicit imagery of children. Corn and the corn emoji have taken over from the earlier algospeak pron or pr0n used to reference pornographic material.
Content moderation does have its place, but there is a fine line separating it from censorship. Artificial intelligence can only go so far in eliminating the ‘bad words’, and depending on the context, those ‘bad words’ can actually be used to educate, inform, and sensitize the public on various topics. Casey Fiesler, writing for the MIT Technology Review, says that content can get flagged simply because it shows someone from a marginalized group talking about their experiences with racism: hate speech and talking about hate speech can look very similar to an algorithm. Meanwhile, pro-eating-disorder and pro-anorexia communities, which encourage members to adopt unhealthy eating habits, have turned to algospeak to continue operating undetected on social media platforms, since discussing those behaviors requires mentioning the words and phrases that refer to them.
These are interesting times for social platforms, which now face a challenging balancing act in moderating content. If the algorithms are too strict, users will say that freedom of speech is being hindered. On the other hand, if the algorithms are too lax, an otherwise safe space can turn into all-out debauchery (see Twitter.com).
The virtual world is not that different from the real world. Both need rules and guidelines, some written and some simply understood. Both will also have users who try to bypass the law. The question is, how much monitoring should be done so that these places don’t end up as police states?