There was enough outrage these past few weeks about Facebook’s inadequate Community Standards that the social network stepped up and released a statement with plans to purge the site of crude pages promoting rape culture and violence against women. While the movement was led by women’s rights groups and specifically targeted certain pages, Facebook has promised to completely review its guidelines against hate speech and work to fight against offensive pages on the network.
As Facebook put it in its statement: "Many different groups which have historically faced discrimination in society, including representatives from the Jewish, Muslim, and LGBT communities, have reached out to us in the past to help us understand the threatening nature of content, and we are grateful for the thoughtful and constructive feedback we have received."
In the campaign to make Facebook take action against the offensive content, The Everyday Sexism Project targeted two audiences. First, they spread the message to users via social channels including Facebook, Twitter, and Tumblr. Their calls to action asked users to boycott or reduce their use of Facebook and to actively tweet, post, or comment about the movement directly to Facebook to voice their support.
The second audience was companies that advertise on Facebook, reached with tweets like "Hey @brandname, did you know that your company's ads are showing up on [offensive page name here]?" Because Facebook ads are targeted by demographics and interests, brands don't pick and choose which pages their ads appear on and have little control over placement. However, companies that write Facebook a check every month have more pull than the average user, and this pressure may have been what convinced the social network to speak up.
Facebook Safety posted a response to the campaign that acknowledged the problem, gave an overview of previous steps taken for other groups, and laid out specific steps the site would take to fix the situation. Not to editorialize, but I was impressed. It had everything an apology or crisis management blog article needs to defuse a situation like this.
There was one controversial statement Facebook made which could be used in the future to allow offensive pages to stay on the site:
We work hard to remove hate speech quickly, however there are instances of offensive content, including distasteful humor, that are not hate speech according to our definition. In these cases, we work to apply fair, thoughtful, and scalable policies.
It was this "definition" that got Facebook into trouble in the first place. One person's offensive humor was another person's hate speech. Facebook was leaving up pages that users considered violent, harassing, hateful, and/or graphic in the name of dark humor. Where does the protection of users end and censorship begin? Where is the border between freedom of speech and content that harms users?
Historically, Facebook hasn't been in the good graces of women when it comes to deciding what content is appropriate – and it wasn't just activists who were up in arms. Women who posted photos of themselves breastfeeding their children were getting flagged for nudity and pornography. They couldn't understand how feeding their infants, something natural that women around the world do multiple times a day, could be considered lewd. Facebook had to backtrack again and add an addendum to its guidelines:
We aspire to respect people’s right to share content of personal importance, whether those are photos of a sculpture like Michelangelo’s David or family photos of a child breastfeeding.
Clearly, it's not a perfect system, and Facebook needs to keep working with representatives of different groups to make sure its site remains a place where everyone feels comfortable without being threatened or censored. At the end of the day, the best guideline and algorithm it can use is common sense.