In today's digital age, social media platforms play a central role in communication, self-expression, and community building. However, the widespread use of these platforms also brings challenges — most notably the prevalence of hate speech.
To address the rapid spread of illegal hate speech on social media, the EU Code of Conduct on Countering Illegal Hate Speech Online was launched in 2016. It was initially signed by Facebook, Twitter (now X), YouTube, and Microsoft; other companies joined later. Major platforms have since implemented policies and tools to help users report and combat abusive or discriminatory content. Understanding how to navigate these reporting systems is essential for maintaining a respectful and safe online environment.
How to report hate speech on Instagram?
Meta owns several prominent platforms, including Facebook and Instagram. Meta's policy defines hate speech as content that directly attacks people based on protected characteristics such as race, ethnicity, gender, sexual orientation, national origin, or religious affiliation. For enforcement, Meta combines user reports with automated technology that detects hate speech proactively.
Meta's platforms keep reports anonymous, except in cases where the reporter's identity is needed, such as claims that require establishing ownership of content.
Instagram provides official instructions on how to file a report:
How to report hate speech on Facebook?
As part of Meta, Facebook shares the same Community Standards as Instagram (e.g., prohibitions on nudity, harassment, and hate speech). Facebook also provides settings for filtering content and controlling the user's experience.
To help users deal with bullying or personal attacks, Facebook offers a set of measures:
For more information and tips, visit Facebook's Bullying Prevention Hub (https://www.facebook.com/safety/bullying).
Learn more about the reporting tools and hate speech policies of TikTok, X, and YouTube in the second part of this article.