Meta Intensifies Content Moderation Efforts, Removes Millions of Violative Posts


New Delhi: Stepping up its content moderation efforts, Meta Platforms Inc. has reported removing over 13.8 million pieces of content that violated its community standards on Facebook and an additional 4.8 million on Instagram in India during February. The action reflects the company’s stated commitment to a safe online environment and its obligations under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

The company’s handling of user reports through its Indian grievance mechanism offers further detail. Facebook alone received 18,512 reports, and in 9,300 of those cases Meta provided users with tools to resolve the issue themselves. These tools are designed to address problems such as content violations, data retrieval requests, and compromised accounts.

Meta’s monthly compliance report states that, of the remaining grievances requiring specialized review, 2,970 were actioned after analysis. The other 6,242 reports were reviewed but did not result in action under the platform’s policies.

Instagram’s figures follow the same pattern: 12,709 reports were received, 5,344 of which were resolved using the provided tools. Specialized review led to action on 2,470 reports, while 4,895 did not meet the criteria for action.

The IT Rules, 2021 require large digital and social media platforms, defined as those with more than 5 million users, to publish monthly compliance reports. These reports serve as a transparency measure, detailing the actions taken against content that breaches the platforms’ standards, which may include removal or the application of warning labels on sensitive content.

The previous month saw a higher volume of takedowns: in January, over 17.8 million pieces of content were removed from Facebook, while Instagram’s figure held steady at 4.8 million, underscoring the fluctuating nature of online content violations and Meta’s ongoing efforts to combat them.