



Much has been written about the vast cesspools of hate consuming Facebook and WhatsApp in my home country of Sri Lanka, or in Myanmar, or in India. Future anthropologists, scouring a snapshot of today's social network sites, might conclude that homo sapiens was a species that worshipped cats and hated each other.

Companies like Facebook and Twitter say they're doing their best to fight this: They have fairly comprehensive rules about the kind of content they want gone from their platforms. For example, Facebook's Community Standards say that "in an effort to prevent and disrupt real-world harm, we do not allow any organizations or individuals that proclaim a violent mission or are engaged in violence, from having a presence on Facebook. … We also remove content that expresses support or praise for groups, leaders, or individuals involved in these activities."

And yet they don't seem to be doing such a great job of enforcing these rules. The assumption is always that platforms like Facebook are capable of tackling these problems but just haven't tried hard enough. Every year, these organizations boast they are using the latest and greatest buzzwords to combat hate speech. The impression is that all Facebook CEO Mark Zuckerberg has to do is wake up on the right side of the bed, make a few phone calls, and the web becomes a utopia.

The problem of hate speech is a problem of speech itself, or rather of language: the near-infinite variety of human language, compared to the narrow Anglocentrism on which global tech is built.

In March 2018, the government of Sri Lanka blocked social media, citing the hate speech running rife at the time, and it has just repeated that block in the aftermath of the Easter Sunday attacks. Back in 2018, as part of a delegation of civil society and activists, I met with Facebook policy teams to try to understand why they had let the situation spiral so far.

Facebook moderates content in two ways. The most explainable way is through armies of content moderators: humans clicking through content that has been reported by users as posing a problem, usually working in countries with a mixture of cheap labor and English-language skills, such as the Philippines. The second way is through what the Facebook representatives I met unhelpfully referred to simply as "artificial intelligence": AI, which, to anyone who works in computing, is an eye-watering range of technologies that journalists and marketers seem to lump together without discretion.

The first isn't really scalable on its own. Technically everybody is capable of hate speech, and we're talking about a network with over 4.75 billion pieces of content shared daily. Facebook would have to hire half its users to moderate the other half, or set up some sort of decentralized peer review system for content. Content moderators, meanwhile, are already suffering PTSD-like symptoms from the range of human misery and hate they're exposed to on a daily basis.
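To put the scale in numbers, a back-of-the-envelope calculation helps. In the sketch below, only the 4.75 billion daily items figure comes from the text; the assumption that every item gets reviewed, the ten-second review time, and the eight-hour shift are illustrative guesses, not reported figures.

```python
# Rough estimate of the human moderation workload.
# Only ITEMS_PER_DAY comes from the text; the other
# figures are illustrative assumptions.
ITEMS_PER_DAY = 4.75e9        # pieces of content shared daily (from the text)
SECONDS_PER_ITEM = 10         # assumed average time to review one item
SHIFT_SECONDS = 8 * 60 * 60   # assumed eight-hour shift per moderator

moderators_needed = ITEMS_PER_DAY * SECONDS_PER_ITEM / SHIFT_SECONDS
print(f"{moderators_needed:,.0f} full-time moderators")  # ~1,649,306
```

Even under these generous assumptions, reviewing everything would take over 1.6 million full-time reviewers, before breaks, appeals, or covering more than one language.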

For the second, what AI actually refers to, in practice, is various subfields of automated content detection. The most critical is natural language processing, which concerns itself with extracting information from large volumes of human text.
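This piece doesn't detail the systems Facebook actually runs, but a toy sketch shows the general shape of NLP-based content detection, and why it is so language-bound. Everything below is hypothetical: the labeled posts are invented, and a simple scikit-learn bag-of-words classifier stands in for whatever models a platform really uses.

```python
# Toy text classifier illustrating the kind of NLP that automated
# content detection builds on. This is a sketch, not Facebook's system:
# the labeled examples are invented and far too few for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "what a cute cat video",                        # benign
    "welcome to the neighbourhood, everyone",       # benign
    "those people should be driven out of town",    # hateful
    "they deserve whatever violence comes to them", # hateful
]
labels = [0, 0, 1, 1]  # 1 = flag for review

# TF-IDF turns each post into word-frequency features; the classifier
# learns which words correlate with the flagged class.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["what a cute dog video"]))  # [0] on this toy data
# The catch: the features are literal (English) word forms, so the
# model learns nothing transferable to other languages.
```

Because the features are literal word forms, a model trained on English posts transfers nothing to Sinhala, Tamil, or Burmese: the Anglocentrism problem in miniature.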
