Joan Barata is a Senior Fellow at Justitia’s Future Free Speech project and a Fellow of the Program on Platform Regulation at the Stanford Cyber Policy Center.

Introduction

Brazil’s President, Luiz Inácio Lula da Silva, has referred a proposed law to Congress. Colloquially known as the “Fake News Bill,” the draft legislation originates in a proposal made by Senator Vieira in 2020 and aims to regulate online platforms and instant messaging services in the country. The proposal has been under discussion for several years, but has taken on new political urgency under the new presidency.

Unfortunately, the Bill threatens to undo many of the rights-protective innovations of Brazil’s most important internet law, the 2014 Marco Civil da Internet. In its stead, the new law would severely limit the scope of the principle of intermediary liability exemption, enable the application of very strict yet vaguely defined crisis protocols, impose risk assessment and mitigation obligations without sufficient safeguards against arbitrariness and excessive impact on human rights, and broadly criminalize the dissemination of “untrue facts” in violation of existing human rights standards, among other issues.

Debates around the Bill are taking place in a context of political polarization and against the backdrop of an expected Supreme Court decision in a series of cases in which the Court has agreed to assess the constitutionality of article 19 of the Marco Civil da Internet. At stake in these cases is the most important provision in Brazilian legislation granting conditional immunity to internet intermediaries. The Bill has also been accompanied by the adoption of specific regulations that already appear to introduce certain carve-outs to the general regime incorporated into the Marco Civil, such as the Decision of the Ministry of Justice and Public Security on the “prevention of the dissemination of flagrantly illicit or harmful content by social media platforms” (Decision 351/2023, of 12 April 2023).

Several key provisions deserve particular consideration:

1. Criminalization of a broadly defined class of “untrue facts,” which violates international freedom of expression standards and places in the State’s hands the power to persecute political speech.

2. Crisis (“security”) protocols, under which platforms must obey an unidentified administrative authority regarding content moderation decisions in one or several areas (due diligence), and risk losing their immunities if they do not comply.

3. A due diligence regime in the form of risk assessment and mitigation obligations for platforms, superficially similar to those in the European legislation on online platforms, but lacking some of its constraints and raising similar concerns regarding legal certainty and impact on human rights.

4. A notice and action framework in which platforms must treat as accurate all allegations by users that online content is illegal.

5. A remarkably broad must-carry obligation for an as-yet-undefined class of “journalistic” content and content posted by “public interest” accounts, meaning essentially government accounts.

Article 13 of the American Convention contains broad protection of the right to freedom of expression and a few clear indications of the State’s obligations in this area, including not only negative requirements not to interfere with individuals’ rights, but also possible avenues for positive action by authorities to effectively protect the exercise of such rights. Among these protections, it is important to note here the responsibility of States to prevent restrictions imposed “by indirect methods or means,” including limitations enforced or applied by private intermediaries on the basis of obligations introduced via statutory regulation.

In 2013, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression published a Report on “Freedom of Expression and the Internet,” establishing a series of highly relevant and specific standards in this area. As for the responsibility of intermediaries, the Report affirms, above all, that it is conceptually and practically impossible “to assert that intermediaries have the legal duty to review all of the content that flows through their conduits or to reasonably assume, in all cases, that it is within their control to prevent the potential harm a third party could cause by using their services” (par. 96).

Regarding the role and capacity of intermediaries to assess the legality of a piece of content, it is important to note how the Special Rapporteur warns States that:

“…intermediaries do not have—and are not required to have—the operational/technical capacity to review content for which they are not responsible. Nor do they have—and nor are they required to have—the legal knowledge necessary to identify the cases in which specific content could effectively produce an unlawful harm that must be prevented. Even if they had the requisite number of operators and attorneys to perform such an undertaking, as private actors, intermediaries are not necessarily going to consider the value of freedom of expression when making decisions about third-party produced content for which they may be held liable” (par. 99).

For these reasons, and along the same lines, the UN Special Rapporteur clearly emphasized in a Report presented in 2011 that “[h]olding intermediaries liable for the content disseminated or created by their users severely undermines the enjoyment of the right to freedom of opinion and expression, because it leads to self-protective and over-broad private censorship, often without transparency and the due process of the law.” In other words, international human rights standards enshrine a general principle of intermediary liability exemption to avoid the imposition of private legal adjudication obligations and the creation of severe risks for freedom of expression.

The OAS Special Rapporteur has also had the opportunity to address so-called “fault-based liability regimes, in which liability is based on compliance with extra-judicial mechanisms such as notice and takedown.” The Rapporteur particularly warns that “in general (…) this type of mechanism puts private intermediaries in the position of having to make decisions about the lawfulness or unlawfulness of the content, and for the reasons explained above, create incentives for private censorship” (par. 105). More specifically:

“the requirement that intermediaries remove content, as a condition of exemption from liability for an unlawful expression, could be imposed only when ordered by a court or similar authority that operates with sufficient safeguards for independence, autonomy, and impartiality, and that has the capacity to evaluate the rights at stake and offer the necessary assurances to the user” (par. 106).

The Rapporteur further insists that “leaving the removal decisions to the discretion of private actors who lack the ability to weigh rights and to interpret the law in accordance with freedom of speech and other human rights standards can seriously endanger the right to freedom of expression guaranteed by the Convention” (par. 105). A different and fairer model would, of course, be one where legal adjudication remains in the hands of the judiciary and independent authorities, and intermediaries become responsible and liable only when required to act by the former (which is, incidentally, the model of conditional liability embodied so far in the Marco Civil).

Last but not least, it is important to note that comparative models have had some influence on the drafters of the proposal. The language of certain sections of the recently adopted Digital Services Act (DSA) is recognizable across the text, although this does not necessarily mean that the proposal in question is based on the same principles and safeguards that inspired the legislation in the European Union.
