How should tech companies handle hateful or dangerous content? Ask most web users that question and many would answer with one word: “delete”.
We constantly use the delete button on our own screens – and so do the internet giants. Take, for example, the way Facebook, Twitter and YouTube scrambled last weekend to remove video footage of the terrorist attack on Muslims in New Zealand mosques. Or how the same companies have hired armies of so-called “content moderators” to take down offensive material every day (arguably one of the 21st century’s most horrible new jobs).
But as this race to press delete intensifies, there is a rub: it is usually doomed to fail. Even as the tech giants rushed to remove the horrific Christchurch footage from the web amid a public outcry, the material kept resurfacing because users were constantly republishing it. Deleting content is like chasing a bar of soap in the bath; it keeps slithering away.