No one should be made to feel unsafe, harassed, targeted, abused, or harmed, online or off. When it happens on social media apps, it's up to the platform itself to tackle it effectively. As on most social platforms, hate speech and abusive comments plague Instagram users. Meta, which owns Instagram, claims "we don't allow hate speech" and "we do not tolerate" bullying and harassment on its platforms, but if you've nonetheless been the recipient of abuse on these very apps, those might seem like hollow words.

So what has Instagram been doing to tackle hate speech and abuse on the platform? Between January and March 2023, the company says it took action on 5.1 million pieces of hate speech content, 95.3 percent of which it says it identified before it was reported. Within the app, Instagram says it removes "photos or videos of hate speech or symbols," "posts with captions that encourage violence or attack anyone based on who they are," and "specific threats of physical harm, theft or vandalism." But sadly, people find ways around these rules.

If you see a comment or post on Instagram that is abusive or bullying, that constitutes hate speech or misinformation, or that appears to incite violence or physical harm, you can report it. Whether Instagram actually does anything with your report, or, as happened to one of Mashable's reporters who reported racist comments, simply responds that the comments "didn't go against their community guidelines," well…

How to report abusive comments on Instagram