Facebook faces a monumental challenge: how can a force of 30,000 workers police billions of posts and comments every day to sift out abusive and dangerous content?
Just 18 months ago, Mark Zuckerberg, Facebook’s founder, was confident that rapid advances in artificial intelligence would solve the problem. Computers would spot and stop bullying, hate speech and other violations of Facebook’s policies before they could spread.
But while the company has made significant advances, the promise of AI still seems distant. In recent months, Facebook has suffered high-profile failures to block illegal content, such as live footage of terrorist shootings, and Mr Zuckerberg has conceded that the company still needs to spend heavily on human reviewers to spot problems.