Had Facebook not disabled millions of fake accounts, its monthly user base would have swelled beyond its current 2.2 billion.
Facebook has released its Community Standards Enforcement Report, which details the actions the firm has taken against content that's not allowed on its platform, such as graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.
But hate speech remains a problem for Facebook today, as the company's struggle to stem the flow of fake news and content meant to encourage violence against Muslims in Myanmar has shown.
The report also explains some of the reasons for large swings in the number of violations found between Q4 and Q1, which are usually external events or advances in the technology used to detect objectionable content.
After admitting that it let Cambridge Analytica use its network to grab unwitting users' data, Facebook has been on thin ice with both consumers and government officials.
Zuckerberg noted that there is still room for improvement in Facebook's AI tools, notably in flagging hate speech. Hate speech is hard to flag using AI because it "often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards", according to the report.
Commenting on the figures released, Richard Allan, Facebook's vice president of public policy for Europe, the Middle East and Africa, said, "This is the start of the journey and not the end of the journey and we're trying to be as open as we can". The company did not disclose how long it takes to remove material violating its standards.
Over the past year, the company has repeatedly touted its plans to expand its team of reviewers from 10,000 to 20,000. "Often when there's real bad stuff in the world, lots of that stuff makes it on to Facebook".
The report also measures how much content Facebook detected proactively using its technology, before people who use Facebook reported it. By that measure, users are still reporting the majority of hate-speech posts, about 62 percent of them, before Facebook takes them down; the remainder was flagged by the company's detection technology first.
The company says more than 96 percent of the posts removed by Facebook for featuring sex, nudity or terrorism-related content were flagged by monitoring software before any users reported them. Facebook removed 2.5 million pieces of hate speech in Q1 2018, of which just 38% was flagged by automated systems. He said removing fake accounts is the key to combating that type of content.
The social networking giant also said that it disabled 583 million fake accounts in the first quarter of the year and now estimates that between 3 percent and 4 percent of all active accounts during the period were fake.
In terms of graphic violent content, Facebook said more than 3.4 million posts were either taken down or given warning labels, 86 percent of which were spotted by its detection tools.