Social media in Latin America: Caught between a rock and a hard place

Latin America has a long history of State censorship

Protesters at Plaza Baquedano, in the Chilean capital Santiago, on November 8, 2019. Credit: Wikimedia user B1mbo (CC BY-SA 4.0)

This article was written by Agustina Del Campo, Director at the Center for Studies on Freedom of Expression and Access to Information (CELE) at Universidad de Palermo, Buenos Aires, and a professor of internet and human rights and of international human rights law.

In January 2020, following informal reports of problems with protest-related content on social media, three Chilean organizations (Fundación Datos Protegidos, the University of Chile, and the Observatorio del Derecho a la Comunicación) carried out a study of social media censorship between October 18 and November 22, 2019. The study documented 283 incidents in which protest-related content was deleted or blocked on social media. In some cases, users active in the country’s protest movement had their accounts closed or suspended, with no timely recourse available. According to the authors, as well as other civil society organizations tracking the phenomenon, automation, lack of context, and lack of clarity about the platforms’ rules were among the main stated causes.

Understanding which content rules apply in Latin America and how they are enforced is a constant challenge.

Although companies claim to have adopted global policies, their approaches differ from country to country and region to region. The last couple of months have been very telling in this regard, particularly in the differing treatment that misleading content has received across major platforms. In March, for example, Twitter deleted misleading tweets about COVID-19 cures posted by Brazilian President Jair Bolsonaro, yet it appeared more tolerant of similar tweets from US President Donald Trump. Likewise, when Trump tweeted misleading claims about election fraud, Twitter left the tweets up but attached a warning about the false claims.

In 2017, the Center for Studies on Freedom of Expression and Access to Information (CELE) conducted its own research into the measures taken by Facebook, YouTube and Twitter to combat fake news and disinformation. Our intention was to track disinformation-related announcements that the companies had made globally, particularly in the aftermath of well-known events like the Brexit vote, the Cambridge Analytica scandal and the Colombian referendum, and to contrast the measures announced and implemented in light of these events with what had actually been implemented in Latin America.

We found that policies were being announced, in some cases on a daily basis, and that new tools, policies and programs often overlapped or contradicted one another, making it difficult to assess what was actually being done and where. Disaggregated information about local implementation was hard to find, and procedures and policies were not always translated into local languages, making it difficult for users to understand how their content was being assessed and what remedies might be available to them. Initiatives were sometimes deployed to different countries with varying levels of resources behind them, leading to disparities in enforcement. As researchers, it is very difficult for us to know how, or even whether, high-profile global announcements actually affect users in Latin America. CELE is finalizing a new study that updates its 2017 research on platform responses to disinformation: of the 61 most relevant actions identified and analyzed in the document, researchers could not verify implementation in Latin America for at least 28.

This all speaks to a broader challenge around transparency, accountability, and access to information regarding the operations of the biggest internet platforms. Although we recognize that efforts have been made to improve transparency reporting over the past couple of years, it is still difficult to find disaggregated data for our region. This includes even basic information about which policies are being applied where, as well as data about the regional and local impacts of content moderation and how the local context differs from the global picture. Facebook and Twitter, for example, have recently started providing more information about their content moderation practices, but it is still not geographically disaggregated. This severely undermines the ability of state and non-state actors to assess the social implications of private content moderation locally.

Together, the lack of understanding of how content moderation is carried out in Latin America and the failure to account for local context have led to increasing calls, from both governments and civil society, for regulation of major platforms. While intentions may be well placed, governments in this region are less concerned with the narrowing space for freedom of expression than with developing new restrictions on speech, owing to their perception that the online space remains under-regulated. The propagation of laws and legal proposals from Europe and the United States that are similarly hostile to freedom of expression certainly does not help, nor do campaigns by many of the world’s oldest democracies, including in Europe and the UK, to pressure platforms into using their terms of service more aggressively against harmful but legal content. Although well intended, these initiatives promote vague and overbroad restrictions on freedom of expression on online platforms. Indeed, in a region with a long history of State censorship, these proposals can provide political cover for governments to take a similarly aggressive approach to cracking down on online speech.

Latin American internet users, particularly those involved in advocacy or social movements, are caught between a rock and a hard place. In this context, it is more necessary than ever for Latin American civil society and activists to raise their voices in global debates about content moderation practices and free speech.


This article was developed as part of a series of papers by the Wikimedia/Yale Law School Initiative on Intermediaries and Information to capture perspectives on the global impacts of online platforms’ content moderation decisions. You can read all of the articles in the series on their blog, or on their Twitter feed @YaleISP_WIII.