Can Facebook really rely on artificial intelligence to spot abuse?

Facebook faces a monumental challenge: how can a force of 30,000 workers police billions of posts and comments every day to sift out abusive and dangerous content? 

Just 18 months ago, Mark Zuckerberg, Facebook’s founder, was confident that rapid advances in artificial intelligence would solve the problem. Computers would spot and stop bullying, hate speech and other violations of Facebook’s policies before they could spread. 

But while the company has made significant advances, the promise of AI still seems distant. In recent months, Facebook has suffered high-profile failures to prevent illegal content, such as live footage from terrorist shootings, and Mr Zuckerberg has conceded that the company still needs to spend heavily on humans to spot problems. 
