Borderline Speech: Caught In A Free Speech Limbo?

In late September 2020, Facebook's algorithm removed a picture of onions because of the ‘overtly sexual manner’ in which they were positioned. While this anecdote probably makes you smile because of its absurdity, the following findings from PhD student Amélie Heldt won’t.


Social media platforms will remove or algorithmically downgrade content, or suspend or shadow-ban accounts, because of borderline speech. How open can communication be if it almost exceeds normative limits? And who gets to define these limits? This text aims to give a few insights into borderline speech and why the concept behind it is highly problematic.


The debate about ‘borderline’ content on social media platforms concerns a category of speech that is not covered by the legal limits on freedom of expression but is considered inappropriate in the public debate. Generally, borderline means ‘being in an intermediate position or state: not fully classifiable as one thing or its opposite’.1 It suggests that something is very close to one thing while still being part of another, and probably not describable without both. While this combination might seem familiar in many social situations, the general principle that an action cannot be ‘fully categorised’ is quite unusual in jurisprudence. Under certain legal provisions, an action might be forbidden or even punishable. The law defines the limits of legality, for speech too.

If an expression of opinion is not illegal but categorised as ‘borderline’, it means it is very close to being illegal but it is not.

Perhaps an accessible comparison is the presumption of innocence: defendants are innocent until proven guilty, but they might already be considered guilty by society because they are suspects.


When it comes to freedom of expression, laws are not the only rules with a speech-restricting effect: social norms and other private rules also define what should not be said (Noelle-Neumann, 1991, p. 91). The problem of harmful online content, and the struggle to contain it, might stem from the fact that social norms, which take effect in the analogue world even when unexpressed, might not unfold equally in the digital sphere. Regarding borderline speech, we are confronted with two problems: (1) Where should this type of expression be positioned in a legal framework? (2) How restrictive may norms other than laws be for freedom of expression? According to the European Court of Human Rights, freedom of expression:

applies not only to “information” or “ideas” that are favorably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb. Such are the demands of pluralism, tolerance and broadmindedness without which there is no “democratic society”. (ECtHR, 2015)

The definition of borderline speech shows how thin the line is between protecting a deliberative public sphere, even when the public debate is uncomfortable, on the one hand, and protecting a “civilised” public debate on the other.


If content is considered borderline but still legal, what level of protection does it deserve? Freedom of expression is a human right, indispensable to democracies and protected by national constitutions and international treaties. Generally, whatever falls within the scope of application is protected as long as it has not been declared illegal, and what counts as ‘legal’ speech depends on the scope of protection. In Europe, most constitutions allow the legislator to draw limits on speech, for example through criminal law; notwithstanding high standards and strict requirements under constitutional proviso2, the legislator can intervene. In the US, the scope of protection of freedom of speech is very broad and the legislator is largely barred from regulating speech, since the First Amendment protects citizens from the coercive power of the state.

Any law regulating speech will therefore be subject to strict scrutiny.

Social norms can play a role in the creation of legislation, including criminal law provisions that restrict freedom of expression, because they naturally go beyond codified legal norms. Ultimately, legal norms only reflect the legislator’s perceived need for action and the social context. Other rules also affect public discourse, namely social norms: behavioural norms that are not standardised but that will provoke a reaction from other members of society, because they agree on a common notion of unwanted or even harmful speech, even if it is not forbidden by law.

Often, those norms are implicit and culturally presupposed.

Consequently, they also manifest themselves in private legal relationships, because there social norms can be concretised, i.e. they leave the realm of the implicit. Private parties can contribute to standardising social norms by defining them as binding rules in contractual relationships. In doing so, they can agree on stricter rules than certain laws require, like confidentiality clauses in employment agreements. If such private rules imposed by one party are too strict, the other party to the agreement might bring the speech-restricting clause to court, and courts might find it disproportionate or otherwise void.


In sum, communication can be subject to all three types of limitation, but when a restriction is cemented in law, there is a democratic consensus legitimising it. When it is a social norm, the societal consensus is probably debatable, but at least no sanction arises from infringement. The point is that when social media platforms define what can or cannot be said on their platforms, there is neither a democratic process nor a common social consensus (probably impossible in a global user network). Infringing community standards, however, will have repercussions for users.


Content moderation is mostly based on community standards, that is, private rules drafted and enforced by social media platforms (referred to as private ordering). Private ordering comes with private sanctions: unwanted content will be banned from the platform by removing it or blocking it for specific regions. “Recidivist” users might see their accounts suspended or blocked, or their content algorithmically downgraded or shadow-banned. But what are the boundaries of these private sanctions? Content moderation is an opaque process (Insights, 2020; Laub, 2019). Even though transparency efforts have increased over the past two to three years due to public pressure, little is known about how and what content is removed.

There is a clear lack of data with regard to the influence of national regulations on community standards and their interpretation.

Hate speech, for example, is not a legal term in many jurisdictions, yet it is used extensively as a ground for removing content (Kaye, 2018, p. 10). The interconnection between speech-restricting provisions in criminal law (e.g., libel, incitement to violence) and the broad scope of application of categories like hate speech is hard to discern. It becomes even blurrier when content is deemed borderline because it merely comes close to such an undefined category. Nevertheless, platforms hold on to the practice of removing content when it is classifiable as borderline, i.e. somewhat close to categories of unwanted content (Zuckerberg, 2018; Alexander, 2019; Constine, 2018a, 2019b; Sands, 2019).


Borderline speech is in legal limbo: it is not illegal, yet it is not even clearly unwanted (per community standards) – it is just too close to unwanted content to be fully permissible. Hence, ‘borderline’ is a category used by social media platforms to remove content although it does not manifestly violate laws or community standards. YouTube, for instance, defined it as “Content that comes close to — but doesn’t quite cross the line of — violating our Community Guidelines”, and removing that sort of speech is a priority (YouTube, 2019). For many reasons, this practice is highly objectionable.

It relies on a term that is per se very vague and dependent on other definitions (generally not provided either).

It is inherently indefinable and does not allow the formation of potentially helpful categories to classify this type of speech. The expression itself is not unlawful, but the fact that it might come close to a certain category favours its removal. This approach might lead to over-removal of content, followed by chilling effects when users can simply no longer foresee whether their expression will be treated as inappropriate. From a user’s perspective, it is already challenging to understand community standards when they are too vague and, moreover, rely on social norms from another cultural and legal context.


Admittedly, the challenge posed by harmful content is a serious issue. We are witnessing that harmful content can be a real threat to communities in- and outside social media platforms and, in a broader sense, to democracy (Sunstein, 2017). These damaging effects can be even more serious if harmful content is algorithmically distributed and amplified. This leaves us with the classical dilemma of freedom of expression: how open can communication be if it is at other people’s expense? If the influence of harmful content online is in some way linked to real-life dangers such as the genocide in Myanmar (Mozur, 2018), the spread of the Covid-19 pandemic (Meade, 2020) or other tragic events of racist or antisemitic violence (Newton, 2020), we need to take this into account when balancing freedom of expression with other fundamental rights.


It makes more sense to be over-cautious for some policy categories than for others. For instance, most platforms ban nudity, especially Facebook and its related platform Instagram. Banning any type of nudity without taking into account the type of picture and its context is a disproportionate limitation for the many users who intend to express themselves. To learn more about how this policy is put into practice, I conducted a small series of semi-structured interviews with Instagram users who had all experienced content removal by the platform. Although not part of a representative study, their views might help shed light here.

Nine out of eleven had their content removed due to a ‘violation of the “nudity ban”’ but only one found the removal ‘understandable’.

All the pictures removed by Instagram showed a female body, and most of them showed either bare skin or breasts. In four pictures viewers could see a nipple. The interviewees later confirmed the chilling effect of content removal: they were more prudent after their content had been removed by Instagram, which included blurring potentially “problematic” body parts in pictures, always editing pictures so that nipples were not visible, and posting fewer nude pictures. Moreover, they felt wronged by the platform. One said ‘The pictures don’t just show nudity, they have an artistic approach, mostly don’t even look like showing nudity’.

The interviewees expressed their concern about women being treated unequally and discriminated against by platforms like Instagram on the grounds of the ban on nudity.

And all this without violating any law.



Enforcing the rules of public communication in the digital sphere raises many questions about how the rules of permissible speech are set. Some relate to the nature of countermeasures against harmful content. At the level of individual users, the European approach is to balance freedom of expression with other fundamental rights. In the case of borderline content, this could translate into platforms tempering their sanctions against borderline speech.

It should only be flagged if the probability that it might cause damage is relatively high.

Regarding private sanctions: may the platform go after the user, or must it limit enforcement to the content itself? As mentioned, the targeted content is neither unlawful nor does it manifestly infringe community standards. Platforms should therefore comply with due-process standards for content that is not unlawful and provide mandatory explanatory statements when removing or algorithmically downgrading borderline content.

This article was first published in the Internet Policy Review on 15 October 2020.

Head over to the original blogpost to find the complete list of references.

Amélie Heldt is a junior legal researcher and doctoral candidate at the Leibniz-Institute for Media Research. 

About #uncensoredme

We’d like to invite our community to share cases of biased and unfair censorship on social media. The existing guidelines are opaque and oftentimes discriminate against certain groups of people. The vague and arbitrary enforcement of these rules pushes much content into spaces that are anything but safe.
By participating in this campaign, you become part of a movement towards a destigmatized way of dealing with nudity & sexuality.

Use #uncensoredme to participate or to learn more about the campaign. 

