In the digital age, people use virtual spaces like social media platforms to communicate, connect, and participate in public conversations. On platforms like Facebook, users address the issues the world faces today. Because of the platform’s reach, it is an ideal place to raise awareness. It also allows people to exercise their freedom of speech and expression and lets the voices of minorities be heard. In that sense, social media is empowering.
But, as the saying goes, where the good exists, so does the bad. Social media platforms are not without imperfections. While people can use them to fight for what is right, they also enable the spread of harmful ideas and beliefs. Racists, sexists, and the like can post derogatory statements, threatening the well-being of the people they target.
Thankfully, Facebook pays attention. The platform’s goal is to promote positivity, so it implements rules and Community Guidelines to fight these activities.
Facebook’s Way Of Combating Hate Speech
To achieve world peace, people must learn to accept one another. No one should feel disfavored because of their gender, color, race, or ethnicity. That is why hate speech should not be tolerated, and the people who say these rude, discriminatory things should be stopped and corrected. That is what social media platforms like Facebook are trying to do. At least, that is what they claim to do.
Facebook understands that hateful behavior online can trigger violence offline, posing an imminent threat to the safety of minorities and even of the people who discriminate. Thus, it exerts effort to combat the behavior.
Facebook removes hateful speech from the platform, identifying it with the help of its users. Users can report, or flag, a post or comment they deem to be hate speech. Facebook’s automated systems and human moderators then review the flagged content, and if the violation is confirmed, the content is removed. The system also notifies the reporting user of what happened to the post they flagged.
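The flag→review→remove→notify workflow described above can be sketched in a few lines of Python. This is an illustrative sketch only: the class and function names, and the simplified review logic, are assumptions for explanation, not Facebook's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """A hypothetical piece of user content under moderation."""
    post_id: int
    text: str
    flags: list = field(default_factory=list)  # users who reported it
    removed: bool = False

def flag_post(post: Post, reporter: str) -> None:
    """A user reports a post as hateful; it enters the review queue."""
    post.flags.append(reporter)

def review_post(post: Post, is_violation: bool) -> dict:
    """Automated systems and human moderators review the flagged post.

    Returns the outcome notice sent back to each user who flagged it,
    mirroring the report Facebook sends after a review.
    """
    if is_violation:
        post.removed = True
        outcome = "removed for violating Community Standards"
    else:
        outcome = "found not to violate Community Standards"
    return {reporter: outcome for reporter in post.flags}
```

In this sketch, every reporter receives the same outcome notice once the review finishes, whether or not the content was taken down.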
On top of flagging, Facebook also proactively identifies hate speech, bullying, and harassment using artificial intelligence. Mike Schroepfer, Facebook’s Chief Technology Officer, reported that its AI keeps improving: last year, it detected 97% of the hateful content that was removed from Facebook, before any human flagged it. That is three percentage points higher than in 2019. While that may seem like a small improvement, it is not; in the fight for equality, every little bit counts.
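The proactive-detection figure is a simple ratio: of all the hateful content removed in a period, what share was caught by automation before any user flagged it. The counts below are hypothetical numbers chosen to illustrate the arithmetic; only the 97% rate comes from Facebook's report.

```python
# Hypothetical counts for illustration; not Facebook's actual data.
removed_total = 1000          # hateful posts removed in the period (assumed)
removed_before_any_flag = 970 # of those, removed before any user report (assumed)

proactive_rate = removed_before_any_flag / removed_total
print(f"{proactive_rate:.0%}")  # prints "97%"

# The reported year-over-year gain: 97% in 2020 vs. 94% in 2019,
# i.e. three percentage points.
improvement = 0.97 - 0.94
```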
Facebook’s Failure to Pay Attention to Non-English Languages
Undeniably, Facebook has gotten better at taking down hateful speech. For instance, its AI can now check the comments on a post, which used to be a major challenge.
Still, that does not mean the platform has no shortcomings. In particular, it has failed to pay attention to minority languages, so when hateful speech is posted in these languages, Facebook struggles to detect it. Most of the time, reporters receive an automated response saying that the flagged post or comment does not breach the platform’s community standards.
A team of Australian scientists investigated how Facebook responds to hate speech on LGBTQIA+ community pages in Asia and Australia. The research included the Asian countries India, Indonesia, Myanmar, and the Philippines.
Facebook itself funded the investigation, through its content policy research awards.
In the research, the scientists focused on three aspects of hate speech regulation in the Asia-Pacific region. First, they examined hate speech law in the case study countries. Then, they looked at Facebook’s definition of “hate speech” and assessed whether it recognizes all the known forms and contexts of this troubling behavior. Finally, they investigated how Facebook’s policies and procedures work to identify emerging forms of hate.
Unfortunately, due to privacy reasons, the company refused to give them access to a dataset of the hate speech it removed. Therefore, they were unable to directly see how effective Facebook’s in-house moderators are at classifying hate.
Because of that, the team had to resort to another method. They captured posts and comments from the top three LGBTQIA+ public Facebook pages in each case study country. The scientists went looking for hateful content missed by Facebook’s AI and human moderators.
Facebook has hate speech detection algorithms in more than 40 languages worldwide. For the languages not included, it relies on users and human moderators to police hate speech.
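The two-track setup described above, automated classifiers for supported languages and user reports plus human review for everything else, amounts to a simple routing decision. The sketch below uses assumed names and an assumed subset of supported language codes; it is not Facebook's actual system.

```python
# Assumed subset of the 40+ languages with automated hate speech classifiers.
SUPPORTED_LANGUAGES = {"en", "es", "zh"}

def route_for_moderation(language_code: str) -> str:
    """Decide which moderation track content in a given language takes.

    Content in a supported language goes to an automated classifier;
    anything else falls back to user flags and human review.
    """
    if language_code in SUPPORTED_LANGUAGES:
        return "automated_classifier"
    return "human_review_queue"
```

The routing makes the article's point concrete: content in, say, Filipino or Burmese never reaches an automated classifier at all, so it is only caught if users flag it and a moderator acts.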
The problem is that these human moderators are not constantly searching for hate speech, so they miss some of it. Furthermore, the researchers found that Facebook often rejected their reports of hate speech, no matter how clear the violation was. Page administrators also pointed out that comments or posts that had been removed were sometimes restored on appeal.
Why is that the case? Because Facebook does not understand the language. Facebook may contain the spread of hatred online in English-, Spanish-, and Mandarin-speaking communities, but it fails to do the same in parts of the developing world where those are not first or second languages. As a result, racial slurs, incitements to violence, and targeted abuse continue to spread on the platform.
The scientists’ findings confirm this. They found that in the Philippines and India, members of the LGBTQIA+ community are exposed to an unacceptable level of discrimination and intimidation, sometimes as severe as death threats. But because these are written in languages Facebook does not understand, they stay on the platform.
Another noteworthy finding is Facebook’s inability to understand context. For instance, in India, people have commented with vomiting emojis in response to gay wedding photos. Clearly, that was a sign of disapproval. Still, Facebook rejected the reports.
If Facebook really wants to become a hate-speech-free zone, this has to change. The Australian scientists have offered some recommendations. For instance, Facebook should convene more regularly with persecuted groups in the region to learn more about hate in their local contexts and languages. On top of that, the effort would expand the company’s pool of specialists in minority languages. With more of them working in or with the company, Facebook could take its hate speech regulation to the next level, no longer serving the few while turning a blind eye to the many.