After several record-setting quarters of banning hateful comments and posts, Meta removed or flagged 15.1 million pieces of content containing hate speech on Facebook between January and March 2022, the lowest figure since the second quarter of 2020 and a year-over-year decrease of 40 percent. The prevalence rate at which users encounter hateful posts and comments has reportedly also fallen to a record low of approximately 0.02 percent, meaning that roughly two out of every 10,000 pieces of viewed content contained hate speech that slipped past Meta's flagging and deletion processes.
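As a quick sanity check on that figure, the reported 0.02 percent prevalence converts to about two violating views per 10,000. A minimal sketch (the variable names are illustrative, not Meta's terminology):

```python
# Convert a prevalence rate given in percent into violating views
# per 10,000 content views.
prevalence_pct = 0.02      # reported prevalence rate, in percent
sample_views = 10_000      # reference sample of content views

violating_views = sample_views * prevalence_pct / 100
print(violating_views)     # 2.0 violating views per 10,000
```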
This decline can partly be attributed to improvements in the platform's AI detection algorithms. Still, the share of content violating Meta's hate speech policy that was reported by users stood at about four percent, almost two percentage points higher than between April and June 2021. Heavily relying on algorithms is not without its downsides, though: in the first quarter of 2022, 267,000 pieces of content removed for hate speech were later restored, 218,000 of them through automatic processes that did not require a manual appeal.
Meta has published its Community Standards Enforcement Report every quarter to create more transparency around its moderation measures. Since then, both the total number of flagged or removed pieces of content containing hate speech and the proactive action rate reached a historic high in the second quarter of 2021 and have been dropping steadily ever since. The company defines hate speech as “violent or dehumanizing speech, statements of inferiority, calls for exclusion or segregation based on protected characteristics, or slurs. These characteristics include race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disability or disease.”