If someone calls you a super flagger, don’t be offended. It’s possible that YouTube just gave you permission to flag 20 videos a day that violate their community guidelines.
Google is granting 200 organizations the ability to flag up to 20 videos at once. They already have a staff that monitors content 24/7, but with 100 hours of content uploaded each minute, a few videos are bound to slip through the cracks.
All users have the ability to flag videos, but flags from these 200 organizations will carry more weight. In fact, 90 percent of videos flagged by the super flaggers are removed, because these organizations have a proven track record of judging what is appropriate and what isn’t. I could flag a Justin Bieber music video because I find him annoying, but that doesn’t mean the video actually violates the community standards.
These super flaggers will also play an important part in national security, flagging jihadist propaganda that incites violence toward the West. Their job is to flag content that contains hate speech or encourages violent acts against anyone.
Like YouTube, Facebook is massive, which means the site can’t see every single offensive post, picture, and page out there. That would be understandable, except the issues Facebook has had in the past aren’t about content slipping through the cracks; they’re about innocent content getting flagged while offensive content stays up.
For example, pictures of mothers breastfeeding their children regularly get flagged as pornography under the community standards, even when nothing explicit is shown. While a woman doing something perfectly natural to sustain human life is deemed offensive, several anti-Semitic, racist, and misogynistic pages and photos are allowed to live on.
Protest groups targeted advertisers with tweets and screenshots highlighting their ads on violent pages, and a few major names pulled their ads. When in doubt, hit Facebook where it hurts: in the ad revenue. Facebook’s solution was to create a gray area for pages that are offensive, but might not be offensive enough to take down, which was a lukewarm PR move at best.
Earlier this year, a British think tank found that racist or derogatory messages are tweeted about 10,000 times per day, or roughly seven tweets every minute. The data suggested that 70 percent of those tweets weren’t meant to be insulting, but that still leaves about 3,000 tweets per day with an abusive context, or roughly two every minute.
Despite the rate of racist tweets, to say nothing of violent and sexist ones, Twitter only recently added a “Report Abuse” button to both tweets and accounts. Until last August, Twitter users who felt threatened had to fill out a form in the help section and were advised to talk to the police to settle the problem offline.
Twitter added the “Report Abuse” button right next to the “Report Spam” button, which has been a part of the social network since its inception. At the end of the day, reporting abuse is only somewhat effective. Yes, Twitter can deactivate an account and block an IP address, but the user can simply create a new account from a different one. It’s whack-a-mole with trolls.
Reddit has quality moderators because it doesn’t filter content on a macro level. Each page, or subreddit, is moderated by a few volunteers. These moderators set the guidelines for the page and determine what can and can’t be posted. They only have to focus on a handful of posts, and don’t have to worry about what’s going on outside their own pages. Larger subreddits have more moderators than smaller ones to accommodate the sheer number of posts and to make sure nothing slips through.
Oftentimes, moderators will message a poster who has been rude or violated the guidelines to ask about the content or explain why something was deleted. This means that if something is wrongfully flagged, the poster has a chance to explain himself or herself – unlike on Facebook.
Reddit does have site-wide guidelines, called Reddiquette, which offer advice on posting links, commenting, responding, and being a good Reddit citizen. Most subreddits point users to Reddiquette to remind them what’s appropriate. Unlike on most networks, the community guidelines are front and center for all to see.
For many, OkCupid is more than a shot at love; it’s a place to meet new people in the area and socialize. For others, the free dating site is a place to send unwanted messages that continue after the recipient has asked the sender to stop. OkCupid has developed a bad reputation as a home for people who message their most twisted fantasies to strangers and keep persisting even when they’re ignored. As a result, the “block” button gets clicked a lot.
When content is flagged or a user is blocked, the report goes to OkCupid moderators – long-time users of the site who are randomly tapped to filter content. As on YouTube, some flags are irrelevant or harmless – people get flagged for being too old or for answering questions in ways the flagger disagrees with – but sometimes the moderators do need to get involved.
When a moderator gets involved, he or she has the ability to ban a user for scamming, being overly abusive, or sending inappropriate messages. The mod will also message the flagger to explain what action was taken and why. This ranges from “I’m sorry you had to go through that; he/she was banned” to “Yes, he/she is a jerk, but that doesn’t rise to the level requiring action.”
OkCupid and its moderators are successful at reporting abuse because their complaints are taken on a personal, case-by-case basis by members of the community. There’s no algorithm or corporate team that does the work.
What This Means for Users
Trolls are Internet 101: you’re bound to face hateful comments on any network, whether you’re posting as a brand or as a person. You have four choices: develop a thick skin, find another social network, rejoin under an alias, or give up on social media. For now, it all depends on where you feel comfortable and where you feel safe.