Borderline Speech: Caught In A Free Speech Limbo?

Late in September 2020, Facebook's algorithm removed a picture of onions for being positioned in an “overtly sexual manner”. While the absurdity of this incident might elicit a smile, the following findings from PhD student Amélie Heldt certainly won’t.


Social media platforms respond to borderline speech with measures ranging from removal and algorithmic downgrading to account suspension and shadow-banning. How open can communication be if it approaches or exceeds social norms? Who gets to define the limits of these norms? This text aims to give a few insights into borderline speech and why the concept behind it is highly problematic.

Definition & Use

The debate about “borderline” content comprises speech that fits within the borders of freedom of expression but is considered inappropriate in the public debate. Generally, “borderline” translates to “being in an intermediate position or state: not fully classifiable as one thing or its opposite” [1]. It suggests that it is very close to one thing while still being part of another and probably not describable without both things. While this combination might seem familiar in many social situations, the general principle that an action cannot be “fully categorised” is quite unusual in jurisprudence. Under certain legal provisions, an action might be forbidden or even punishable. The law defines the limits of legality even for speech.

Expressions of opinion categorised as “borderline” are on the verge of illegality without being considered illegal. A useful analogy is the presumption of innocence in legal proceedings: the defendant is presumed innocent until proven guilty, yet societal judgment may deem them guilty merely on the basis of suspicion.

Law & Norms

Regarding freedom of expression, it’s not only laws that have a speech-restricting effect. Social norms and other private rules define what should not be said (Noelle-Neumann, 1991, p. 91). The challenge in managing harmful online content lies in the fact that social norms, affecting offline life even when unspoken, might not unfold equally in the digital sphere. Regarding borderline speech, we are confronted with two problems:

  1. Where does this type of expression fit in a legal framework?
  2. To what extent can non-legal norms restrict freedom of expression?

Human Rights Perspective

According to the European Court of Human Rights, freedom of expression “applies not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference but also to those that offend, shock or disturb. Such are the demands of pluralism, tolerance and broadmindedness without which there is no ‘democratic society’” (ECtHR, 2015).

The definition of borderline speech shows how thin the line is between protecting a deliberative public sphere, even when the public debate is uncomfortable, on the one hand, and protecting a “civilised” public debate on the other.

Borderline Content in a Legal Framework

If content is considered borderline but still legal, what level of protection does it deserve? Freedom of expression, considered a fundamental human right crucial to the essence of democracies, is safeguarded by national constitutions and international treaties.

Generally, what falls within the scope of application is protected, as long as it is not declared illegal. Conversely, whether speech is “legal” depends on that scope of protection. In Europe, constitutional provisions often grant legislators the authority to delineate limits on speech through means such as criminal law; intervention is permissible, albeit subject to the stringent requirements set by those constitutional provisions [2]. In the US, the scope of protection of freedom of speech is very broad: the First Amendment shields citizens from the coercive power of the state, leaving the legislature little room to regulate speech, and any attempt to do so undergoes rigorous scrutiny.

Social Norms & Legislation

Social norms can play a role in the creation of legislation, including criminal law provisions that restrict freedom of expression. Ultimately, social norms, which extend beyond codified legal standards, underscore the dynamic interplay between the legal and the social context. While legal norms may reflect a recognised need for legislative action, social norms, as behavioural standards that are never formally standardized, also contribute to the regulation of public discourse. Even if speech is not explicitly prohibited by law, a societal consensus on what constitutes unwarranted or harmful speech prompts reactions, showcasing the impact of informal norms on public expression.

Implicit and culturally presupposed, social norms weave through the fabric of societies, extending their influence even into private legal relationships. Within these private realms, social norms undergo a process of concretization, shedding their implicit nature to become binding rules. Parties involved in contractual relationships contribute to the standardization of social norms by elevating them to the status of contractual obligations. This allows for the establishment of rules that may surpass legal mandates, exemplified by confidentiality clauses in employment agreements.

However, the imposition of stringent private rules can invite legal scrutiny. Should one party introduce excessively restrictive clauses, the opposing party has the recourse to challenge the speech-limiting stipulations in court. Courts may, in turn, evaluate the clauses for proportionality and, if deemed excessive, declare them void.

In the realm of communication, three distinct types of limitations can manifest. Legal restrictions, cemented in law, reflect a democratic consensus and legitimize the constraints imposed. Social norms, while subject to societal consensus (albeit debatable), lack the punitive sanctions that accompany legal infringements. A unique challenge arises when social media platforms dictate speech boundaries, as this lacks a democratic process and the elusive common social consensus, especially in the context of a global user network. Violating community standards, however, will have repercussions for users.

Social Media Platforms' Private Ordering

Content moderation on social media platforms predominantly relies on the establishment and enforcement of community standards, that is, private rules drafted and enforced by the platforms themselves, referred to as “private ordering”. Private ordering comes with private sanctions: unwanted content is banned from the platform by being removed outright or blocked in specific regions. Users labelled as “recidivists” may encounter penalties ranging from account suspension to algorithmic downgrading or shadowbanning. However, the boundaries of these private sanctions remain elusive, shrouded in the opaque process of content moderation (Insights, 2020; Laub, 2019).
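To make the idea of graduated private sanctions more concrete, the following minimal sketch models such an enforcement scheme in Python. It is purely illustrative: the class names, the strike threshold and the sanction labels are assumptions made for the sake of the example, not any platform's actual moderation logic.

```python
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    region: str
    violates_standards: bool  # flagged as a clear community-standards violation
    borderline: bool          # close to, but not over, the line


@dataclass
class UserRecord:
    user_id: str
    strikes: int = 0          # prior enforcement actions ("recidivism")


def apply_private_sanction(post: Post, user: UserRecord) -> str:
    """Return the sanction a platform might apply under its own private rules."""
    if post.violates_standards:
        user.strikes += 1
        # Repeat offenders face account-level penalties.
        if user.strikes >= 3:
            return "suspend_account"
        return "remove_or_geo_block_content"
    if post.borderline:
        # Borderline content is not removed outright here; its reach is
        # reduced instead, e.g. by algorithmic downgrading or shadowbanning.
        return "downgrade_distribution"
    return "no_action"
```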

Despite increased transparency efforts spurred by public demand, little is known about the intricacies of content removal and the criteria guiding such decisions. There is a clear lack of data concerning the influence of national regulations on community standards and their interpretation. This lack of clarity extends to terms like “hate speech”, often absent in legal definitions but wielded comprehensively for content removal (Kaye, 2018, p. 10). The interplay between speech-restricting criminal law provisions (e.g., libel and incitement to violence) and the broad scope of application of categories like hate speech is indiscernible. This blurriness worsens when content is deemed borderline, hovering close to vaguely defined categories of undesired content. Despite these challenges, platforms maintain the practice of removing content classified as borderline, aligning it with their evolving community standards (Zuckerberg, 2018; Alexander, 2019; Constine, 2018a, 2019b; Sands, 2019).

Why call it a Free Speech Limbo?

The term “borderline speech” resides in legal limbo, straddling the nebulous space between legality and illegality. It is not even clearly unwanted (per community standards); it is just too close to unwanted content to be fully permissible. Social media platforms employ the term to justify the removal of content that, while not overtly violating laws or community standards, hovers on the edge. For example, YouTube defines it as “[c]ontent that comes close to—but doesn’t quite cross the line of—violating our Community Guidelines”, making its removal a priority (YouTube, 2019). This practice raises numerous objections.

The term “borderline” is inherently fraught with ambiguity, heavily reliant on definitions often left unarticulated. Its elusive nature obstructs the development of potentially beneficial categories for classifying such speech. While the expression of borderline speech itself is not unlawful, its proximity to certain categories makes it susceptible to removal. This approach risks over-removal of content, leading to chilling effects as users struggle to anticipate whether their expression will be deemed inappropriate. From a user’s standpoint, understanding community standards is already challenging when they are vague and grounded in social norms from different cultural and legal contexts.

Admittedly, the challenge posed by harmful content is a serious issue. We are witnessing its potential as a real threat to communities within and beyond social media platforms, and more broadly, to democracy (Sunstein, 2017). These damaging effects can be even more serious if harmful content is algorithmically distributed and amplified. This leaves us with the classic dilemma of freedom of expression: How open can communication be if it comes at the expense of others? The influence of harmful online content intertwines with real-life dangers, such as the genocide in Myanmar (Mozur, 2018), the spread of the Covid-19 pandemic (Meade, 2020) and other tragic events marked by racism or antisemitism (Newton, 2020). A nuanced balance must be struck, considering freedom of expression alongside other fundamental rights.

The Ban on Nudity as an Example of Borderline Content

It makes more sense to be overly cautious for some policy categories than for others. For instance, most platforms ban nudity, and Facebook and its sister platform Instagram maintain particularly strict bans, a policy that, when applied indiscriminately, can disproportionately limit users’ self-expression. To delve into the practical implications of this policy, I conducted a series of semi-structured interviews with Instagram users who had experienced content removal.

Among the eleven interviewees, nine had their content removed for a “violation of the ‘nudity ban’”, yet only one found this removal “understandable”. All the pictures removed by Instagram showed a female body, and most of them showed either bare skin or breasts; in four pictures a nipple was visible. The interviewees later confirmed the chilling effect of content removal: they became more cautious after their content had been removed by Instagram, blurring potentially “problematic” body parts, editing pictures so that nipples are never visible, and posting fewer nude pictures. Moreover, they felt wronged by the platform. One interviewee emphasized, “The pictures don’t just show nudity; they have an artistic approach, mostly don’t even look like showing nudity”.

The interviewees voiced concerns about gender inequality and discrimination, arguing that platforms like Instagram unjustly target women under the guise of the nudity ban, even though none of the removed content violated any existing laws.

At the Intersection of Law and Communication

Enforcing rules of public communication in the digital realm poses numerous questions about defining permissible speech. Many of these questions revolve around the nature of countermeasures against harmful content. At the individual user level, the European approach seeks to strike a balance between freedom of expression and other fundamental rights. In the context of borderline content, this approach could translate into platforms exercising more restraint in their sanctions against borderline speech.

The decision to flag content should be based on a relatively high probability that it might cause harm.

Concerning private sanctions, platforms face the dilemma of whether to target the user directly or limit enforcement to the content alone. Given that borderline content is neither illegal nor an explicit violation of community standards, platforms should adhere to due process standards when dealing with content that is not unlawful. They must provide mandatory explanatory statements when removing or algorithmically downgrading borderline content.
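As a thought experiment, the suggestion above could be sketched as follows: act only above a relatively high estimated probability of harm, and always attach an explanatory statement when borderline content is removed or downgraded. The threshold, field names and wording below are illustrative assumptions, not a description of any platform's actual process.

```python
from dataclasses import dataclass

HARM_THRESHOLD = 0.8  # assumed stand-in for a "relatively high probability" of harm


@dataclass
class Decision:
    action: str       # "no_action", "downgrade" or "remove"
    explanation: str  # mandatory explanatory statement sent to the user


def decide(harm_probability: float, is_borderline: bool) -> Decision:
    """Sketch of a due-process-minded moderation decision."""
    if harm_probability < HARM_THRESHOLD:
        return Decision("no_action", "No action: estimated risk of harm is below the threshold.")
    # Borderline content is downgraded rather than removed, and either way
    # the user receives a statement explaining the decision.
    action = "downgrade" if is_borderline else "remove"
    explanation = (
        f"Your post was subject to '{action}' because its estimated probability of "
        f"causing harm ({harm_probability:.2f}) exceeded {HARM_THRESHOLD}. "
        "You may request a human review of this decision."
    )
    return Decision(action, explanation)
```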

Amélie Heldt is a junior legal researcher and doctoral candidate at the Leibniz-Institute for Media Research. This article was first published in the Internet Policy Review on 15 October 2020; head over to the original blog post to find the complete list of references.

About #uncensoredme

We invite our community to share cases of biased and unfair censorship on social media. The current guidelines lack transparency and often discriminate against certain groups. The ambiguous and arbitrary enforcement of these rules pushes content into spaces that are anything but safe.

By participating in this campaign, you become part of a movement towards a destigmatized approach to nudity and sexuality.

Use #uncensoredme to participate or to learn more about the campaign.
