By meowdini

Revealed: Meta-Approved Political Ads in India Inciting Violence

Meta, the parent company of Facebook and Instagram, has come under fire for approving a series of politically charged advertisements in India that contained incitements to violence and disinformation during the country’s recent elections. According to a report shared exclusively with the Guardian, the approved ads included AI-manipulated content that promoted hate speech and called for violence against Muslims, raising serious questions about Meta’s content moderation policies.



AI-Manipulated Ads and Hate Speech


The controversial ads were created by India Civil Watch International (ICWI) and Ekō, a corporate accountability organization, to test Meta’s mechanisms for detecting and blocking inflammatory political content. Shockingly, the ads, which contained extreme hate speech and disinformation, were approved and added to Meta’s ad library. Some examples included:


- Calls to "burn this vermin" and "these invaders must be burned" directed at Muslims.

- False claims that an opposition leader aimed to "erase Hindus from India," accompanied by an image of a Pakistani flag.


Of the 22 ads submitted in various Indian languages, 14 were approved outright, and a further three were approved after minor tweaks. The researchers removed all of the approved ads before they could be published.


Meta’s Failure in Content Moderation


Meta’s ad approval system failed to recognize that these ads featured AI-manipulated images, despite the company’s public commitment to preventing the spread of AI-generated or manipulated content during the Indian election. Additionally, many of the approved ads violated Meta’s policies on hate speech, bullying and harassment, misinformation, and incitement to violence.



A Meta spokesperson responded by stating that advertisers must go through an authorization process and comply with all applicable laws. The company also emphasized its efforts to reduce the distribution of altered content and the role of independent fact-checkers in reviewing such content.


Political and Election-Related Content


Meta’s approval system also failed to flag the ads as political or election-related, so they bypassed the additional scrutiny and authorization that such ads require. As a result, the ads could have breached India’s election rules, which ban political advertising during specified periods.


Broader Implications and Historical Context


This incident is not an isolated one. Meta has previously faced criticism for failing to curb the spread of Islamophobic hate speech and calls to violence on its platforms in India, which have, in some cases, led to real-world violence. The report underscores Meta’s struggle to manage the deluge of hate speech and disinformation, particularly during critical election periods.


Nick Clegg, Meta’s president of global affairs, described India’s election as a significant test for the company, stating that extensive preparations were made. Despite these efforts, the report’s findings highlight significant gaps in Meta’s content moderation strategies.


The report by ICWI and Ekō reveals serious flaws in Meta’s ability to manage and prevent harmful political content on its platforms. The findings suggest that Meta’s systems are not adequately equipped to handle the complex landscape of political advertising, particularly in a diverse and populous country like India. This raises concerns about the platform's readiness to manage content in future elections worldwide.




Disclaimer


The information presented in this article is based on reports and findings from ICWI and Ekō, as shared with the Guardian. The content aims to provide a summary and analysis of the reported issues surrounding Meta’s approval of politically charged advertisements in India. Readers are encouraged to refer to the sources and statements from Meta for further details. The authors and publishers of this article do not accept any liability for any loss or damage caused, directly or indirectly, by the use or reliance on this information.


Source: The Guardian




