Facebook has published figures showing how much controversial content it took action on in the third quarter of 2019. Amid the spread of fake news and rising levels of inflammatory content circulating online, the social network has come under immense pressure to better regulate what happens on its watch. The content Facebook actively tries to keep off its site falls into nine categories: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, bullying and harassment, child nudity and sexual exploitation, suicide and self-injury, regulated goods (drugs and firearms) and, last but definitely not least, spam.
Between July and September of this year, 1.9 billion posts categorized as spam were removed from Facebook, accounting for 95 percent of all content acted on (excluding fake accounts). Another 29 million posts containing violent and graphic content were taken down or covered with a warning, 99 percent of which were found and flagged by Facebook's technology before they were reported. Likewise, 98 percent of all posts taken down or flagged for containing adult nudity or sexual activity were identified automatically before being reported; in total, 30 million such posts were given warning labels or deleted.
Unfortunately, Facebook's technology has been significantly less successful at identifying posts containing hate speech. Of the 7 million pieces of hate speech content the company took action on, only 80 percent were flagged by Facebook before users reported a violation of the platform's Community Standards. When it comes to spam, the most frequently deleted type of content, disabling fake accounts is critical: during the third quarter, 1.7 billion fake accounts were disabled, most of them within minutes of registration.