Democracies across the world are grappling with the twin problems of hate speech and disinformation. The internet, as a common platform for communication, has been weaponised to spread a concoction of untruths, abuse and biases with far-ranging consequences. The European Union is of the view that these phenomena, if left unchecked, have the potential to undermine faith in electoral processes and subvert democratic institutions. In response, the EU has warned social media companies of greater liability for permitting or wilfully ignoring such speech on their platforms. United Kingdom lawmakers published a report in 2018 calling for stricter government regulation of social media companies and a clear definition of ‘disinformation’. In late 2019, reports emerged of disinformation campaigns allegedly coordinated by Russian actors in several African countries including Cameroon, the Central African Republic, Libya, Mozambique and Sudan. The campaigns focused on spreading a blend of pro-Russian propaganda and local content. In a first, these talking points were disseminated using not only fake profiles, but also local content providers. This threat of interference in African countries by foreign actors has been described as a form of ‘disinformation colonialism’ that requires careful consideration to resist effectively. Increasingly, African countries are turning to legislation to prohibit such forms of speech, triggering accusations of censorship. Kenya, Burkina Faso, South Africa, and most recently Ethiopia, have all gone down the legislative path in dealing with digital misinformation and its ‘real world’ consequences.
Analysing the law
On February 13, 2020, the Ethiopian Parliament enacted the Hate Speech and Disinformation Prevention and Suppression Proclamation (“Hate Speech and Disinformation Law” or “Hate Speech Law”). The Hate Speech Law prohibits, and provides civil and criminal penalties for, the dissemination of both hate speech and disinformation. There is some nuance built into these penalties. The dissemination of hate speech attracts simple imprisonment for a period not exceeding two years or a maximum monetary fine of Br. 100,000 (approximately USD 3,000). If, however, an attack takes place as a result of the hate speech in question, the punishment is one to five years of simple imprisonment. Conversely, if there is no violence associated with the hate speech or disinformation, the court has discretion to order mandatory community service in place of a prison sentence or fine. There is a similar gradation in penalty for the dissemination of disinformation. In the case of disinformation, a higher penalty is attracted if the dissemination takes place through a social media account with more than 5,000 followers or through the print or broadcast media. Some of the typical exceptions to prohibitions on hate speech are also built into the Hate Speech Law. Academic studies, news reports, political critiques and artistic expression are all permitted forms of speech. Curiously, the Hate Speech Law also exempts ‘religious teaching’ from the scope of the prohibitions. Moreover, a reasonable effort to establish the veracity of speech will also be considered an exemption from the prohibition on disinformation.
Rights organisations and international observers have flagged concerns about the signalling effects of the Hate Speech Law on free speech in Ethiopia. Notwithstanding Prime Minister Abiy Ahmed’s steps towards protecting freedom of the press, the country’s track record of muzzling dissent and jailing journalists exacerbates these concerns. The definitions of ‘hate speech’ and ‘disinformation’ in the draft bill were criticised as overbroad and vague in scope, potentially leading to a chilling effect on speech. Concern also persists that the law will be misused against political dissidents, journalists and minorities – partly because of low levels of public trust in institutions, rooted in Ethiopia’s track record noted above. The law has been accused of being unilaterally drafted, without adequate engagement and consultation with civil society. Moreover, the breadth of the definitions led the UN Special Rapporteur on the Right to Freedom of Opinion and Expression to characterise the law as one that would “threaten freedom of expression” and “could reinforce rather than ease ethnic and political tensions”. Hate speech is defined in the law as “speech that promotes hatred, discrimination or attack” against protected categories of persons. This effects-based approach is broadly consistent with the international standard for defining problematic speech, save for a few procedural safeguards explored later in this piece. What is considered ‘hateful’ is highly dependent on context and may employ a clever combination of innuendo and intrigue as opposed to outright bigotry. A definition rooted in the effects of that speech therefore ensures that the victims of hate speech are kept front and centre.
The nature of disinformation differs from that of hate speech. Its definition therefore has a dual quality: it is defined as “speech that is false” (an objective standard) that is disseminated and “is highly likely to cause a public disturbance, riot, violence or conflict” (a subjective, effects-based standard). However, disinformation rarely has the same kind of tangible consequences as hate speech; its harms are often more insidious. For instance, recent disinformation campaigns linking the spread of COVID-19 to 5G technology are highly unlikely to lead to a public disturbance or conflict. They do, however, delegitimise scientific information and undermine government efforts at containing the virus. Content platforms like Facebook appear to be striking much the same balance as Ethiopia has in the Hate Speech Law, choosing to remove only that misinformation linking COVID-19 to 5G which could cause physical harm. At the other extreme, the question of who arbitrates what constitutes ‘false’ information carries the risk of the law being used to muzzle contrarian voices whose ‘truth’ does not serve government narratives. A recent report by Oxford University shows that Ethiopia’s government is looking to weaponise social media platforms to counter negative coverage, even sending staff members of the Information Network Agency (the country’s intelligence and cyber security agency) to China for formal training. The definitions of hate speech and disinformation are therefore at once vague and narrow, risking self-censorship.
One of the most striking aspects of the Hate Speech Law is the liability it imposes on social media service providers to moderate speech. Through a form of intermediary liability, the law states that social media enterprises should ‘endeavour’ to suppress and prevent the dissemination of hate speech and disinformation. Moreover, such companies are required to “act within twenty four hours to remove or take out of circulation disinformation or hate speech upon receiving notifications” of such speech. These obligations could have deleterious consequences for online speech. The obligation to take pre-emptive action is likely to result in automated content filtering, a tool that has been criticised as amounting to pre-censorship and chilling speech. Moreover, companies are likely to simply cast a wider net and take down all forms of arguably problematic speech in order to avoid liability. Empirical research on notice and takedown requests under the Digital Millennium Copyright Act framework of the United States, a report published by the French Ministry of Culture, actions taken by Indian intermediaries, and a Brennan Center study all show that companies tend to err on the side of caution and block excessively. The Hate Speech Law’s failure to give companies a reasonable amount of time to evaluate a notification, or an option to contest it, accentuates these concerns.
A running thread through the criticisms surrounding the Hate Speech Law is concern regarding its disproportionate impact on online speech. Drawing from other regulatory approaches to these problems, a more robust system of checks and balances built into the Hate Speech Law would help build public trust. For instance, France implemented a law in November 2018 to combat misinformation in elections. This law requires a judicial body to issue notices to take down speech that is deemed to amount to misinformation. The judge is required to act “proportionally”, with options for appeal. In Canada, government efforts to deal with misinformation take the form of a task force led by non-political entities empowered to monitor and notify incidents of misinformation. The risk of misuse by political offices is therefore mitigated by the establishment of a non-partisan body. The Hate Speech Law can benefit from the following:
- Robust checks and balances via the establishment/use of a judicial body to temper the excesses of the executive and provide an opportunity to appeal penalties or notices;
- A reasonable amount of time for social media companies to either act on or challenge take down notices from the government;
- Conditions of necessity and proportionality in the law to govern the conduct of the government and the judiciary when regulating speech;
- Greater clarity on the nature of responsibility that social media companies carry for speech on their platforms, so that such companies have a predictable regulatory environment to operate in.
While Ethiopia has taken some important steps towards countering these threats to democracy, bridging these gaps will ensure that the cure isn’t worse than the disease.
This piece has been authored by Varun Baliga, consultant, with inputs from Vrinda Bhandari, external consultant, and Tanya Sadana, principal associate, Ikigai Law.
For more on the topic, please feel free to reach out to us at [email protected]
 “Section 4: Any person disseminating hate speech by means of broadcasting, print or social media using text, image, audio or video is prohibited.”
“Section 5: Disseminating of any disinformation on public meeting by means of broadcasting, print or social media using text, image, audio or video is a prohibited act.”
 The definitions of hate speech and disinformation in the Hate Speech and Disinformation Law are as follows:
“Section 2(2): “Hate speech” means speech that deliberately promotes hatred, discrimination or attack against a person or an discernable group of identity, based on ethnicity, religion, race, gender or disability”
“Section 2(3): “Disinformation” means speech that is false, is disseminated by a person who knew or should reasonably have known the falsity of the information and is highly likely to cause a public disturbance, riot, violence or conflict”
 See Sections 7(1) and 7(3) for penalties in relation to the dissemination of hate speech and disinformation, respectively.
 Section 7(2), Hate Speech and Disinformation Law.
 “Section 7(6): If no violence or public disturbance has resulted due to the commission of the offense of hate speech or disinformation and if a court of law is convinced that the correction of the convict will be better served through alternatives other than fine or imprisonment, the court could sentence the convict to render mandatory community service.”
 Section 7(3), 7(5), Hate Speech and Disinformation Law.
 “Section 7(4): If the offence of hate speech or disinformation offence has been committed through a social media account having more than 5,000 followers or through a broadcast service or print media, the person responsible for the act shall be punished with simple imprisonment not exceeding three years or a fine not exceeding 100,000 birr.”
 Section 6, Hate Speech and Disinformation Law.
 Section 6(1), Hate Speech and Disinformation Law.
 Section 6(2), Hate Speech and Disinformation Law.
 Section 8(1), Hate Speech and Disinformation Law.
 Section 8(2), Hate Speech and Disinformation Law.