Amnesty International is demanding that Meta provide reparations to victims of the northern Ethiopia conflict, accusing the tech giant of using algorithms that fueled human rights abuses and incited violence against the Tigrayan community. A new report documents how the platform amplified hate speech and hostility, effectively turning Facebook into a catalyst for real-world harm.
Algorithmic Virality and Systemic Negligence
The report argues that Meta’s focus on engagement-driven algorithms created a dangerous environment in conflict-prone regions. By prioritizing content that triggered high interaction, the platform allowed disinformation and calls for violence to proliferate. Amnesty International maintains that Meta disregarded repeated warnings regarding the potential for mass violence in Ethiopia, choosing to ignore the risks inherent in its platform’s design.
Before the war erupted, Meta failed to take heed of warnings from researchers, its own Oversight Board, and various civil society groups. As early as June 2020, digital rights organizations formally warned the company that harmful content on Facebook could incite physical violence against minority groups, recommending temporary changes to sharing functionalities and a formal human rights impact assessment.
A Pattern of Failure: From Myanmar to Ethiopia
Amnesty International draws direct parallels between the situation in Ethiopia and the company’s previous failures in Myanmar. Years prior, Meta’s automated content removal systems proved unable to process local languages, allowing harmful content to remain online. Despite designating Ethiopia as an “at-risk” country, the company’s moderation efforts remained insufficient.
“Meta was not able to adequately moderate content in the main languages spoken in Ethiopia and was slow to respond to feedback from content moderators regarding terms which should be considered harmful,” the report states. Even when content was flagged, it often remained active because it did not technically violate the company’s rigid community standards.
Independent Findings and Meta’s Defense
Independent investigations, including a recent United Nations Human Rights Council report, confirm that Meta was slow to act on requests for content removal, citing inadequate staffing and poor language capabilities. A Global Witness investigation further corroborated that Facebook struggled significantly to detect hate speech in Ethiopia’s primary languages.
Meta, however, rejects these conclusions. A company spokesperson stated: “We fundamentally disagree with the conclusions Amnesty International has reached in the report, and the allegations of wrongdoing ignore important context and facts. Ethiopia has, and continues to be, one of our highest priorities.” The company claims it employs staff with local expertise in Amharic, Oromo, Somali, and Tigrinya to curb violating content.
The Call for Structural Reform
Amnesty International contends that Meta’s late-stage interventions were insufficient because they failed to address the root cause: the company’s data-hungry business model. The report calls for a complete overhaul of Meta’s “Trusted Partner” program and demands that the company allow users to opt in to content-shaping algorithms rather than imposing them by default.
The human rights organization also emphasizes that voluntary corporate responsibility is insufficient. It is calling on governments to enact and enforce strict regulations to hold tech giants accountable. “It is more crucial than ever that states honor their obligation to protect human rights by introducing and enforcing meaningful legislation that will rein in the surveillance-based business model,” the report concludes.
