With 2018 coming to an end, everyone expected a truce in the privacy scandals involving Facebook, but it looks like the social network's name will be dragged through the mud until the last minute of the year.
Once again, it was the New York Times that uncovered the latest scandal: reporter Max Fisher gained access to Facebook's moderation rulebook, the document that defines when a post on the social network should or should not be removed. The document was leaked to the newspaper by a Facebook employee (who preferred to remain anonymous) who said he was tired of the power the company wields, and of the fact that it can make so many mistakes and keep operating without oversight.
According to the report, the document is a mess of PowerPoint slides and spreadsheets, organized in a way that ends up confusing moderators more than helping them. And, according to the newspaper's analysis, it contains numerous gaps and misinterpretations, and makes no effort to be impartial.
For example, one of the documents lays out a set of rules for determining when a word such as "martyr" or "jihad" is being used to incite terrorism, while another lists topics that should be banned from the platform outright, strictly prohibiting words like "brother" and "companion" as well as more than a dozen emojis.
Facebook's document of moderation rules
The document that defines the rules for identifying hate speech (one of the social network's biggest problems today), meanwhile, is a 200-page bundle full of jargon and hard-to-understand expressions. It sets out rules such as a separation of hate speech into "tiers" of severity and lists of "dehumanizing comparisons," like comparing Jews to rats. In theory, all of these rules should be taken into account by the network's moderators, who have about a second to decide whether a post violates Facebook's rules, in a job that involves reviewing roughly 1,000 posts every day.
According to the newspaper, although the people who wrote the rules could have consulted specialists in each area so that the guidelines followed by the social network would be comprehensive and fair, doing so was never required, and the company gave the engineers and lawyers responsible for the sector total freedom to create the rules as they pleased.
But the biggest problem with these rules shows up not so much in the memes shared on the platform as in political posts, which often contain content that should get them removed by moderation, yet is seemingly ignored by the network.
One example of this type of post cited by the Times was an extremely racist advertisement by US President Donald Trump's team, which stoked public fears about a migrant caravan crossing Central America. The ad claimed these people were a group of criminals and terrorists who would storm the United States, when in fact the caravan is made up of refugees fleeing Central American conflict zones, who travel in groups to protect themselves from pursuers. The group had also made a formal request to enter the United States as war refugees.
And even outside the United States, the social network has been used by political actors to spread ideas of segregation. In Myanmar, for example, the government used Facebook for years to fuel violence against Muslims, and Rodrigo Duterte, president of the Philippines, has likewise used the social network as a political weapon in his country.
Although the Times story offers no solutions, it makes clear that Facebook has a giant moderation problem, and that the tools it has been using to curb it are completely inadequate. That gives us a sense that, like data privacy, content moderation is not a problem the company will solve any time soon.