On Friday (25), YouTube announced another measure to improve the quality of the videos featured on its platform. The company will promote fewer videos containing conspiracy theories and other disinformation, with the goal of also reducing the platform's potential to surface extremist content to its users.
Tests with the new algorithms are already being conducted in the United States, and the idea is to limit the exposure of what YouTube calls "borderline content": videos that come close to violating its community guidelines.
The change, according to the company, will affect less than 1% of the videos available on the platform, but the effect could still be significant. In a statement, YouTube said it will begin "to reduce recommendations of content that could misinform users in harmful ways, such as videos promoting a phony miracle cure for a serious illness, claiming the Earth is flat, or making blatantly false claims about historic events like September 11."
By default, YouTube enables autoplay (which users can disable), starting a new video in the stream as soon as the current one finishes. Which video plays next is decided algorithmically, and it is precisely this algorithm that is being retuned to recommend fewer videos containing misinformation and conspiracy theories considered dangerous.
However, YouTube's initial statements about the change are still somewhat vague, and it is unclear exactly which types of videos count as borderline. Applying the new policy will require a mix of machine-learning systems and human moderators, who train the systems to recognize harmful videos. It is worth noting that YouTube will not remove videos considered borderline, only reduce their prominence on the platform. "We think this change strikes a balance between maintaining a platform for free speech and fulfilling our responsibility to users," the company said.
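The demote-rather-than-delete approach described above can be illustrated with a toy re-ranking step. Everything here — the field names, the penalty factor, the scores — is a hypothetical sketch for illustration; YouTube's actual recommendation system is not public.

```python
# Toy sketch of demoting (not removing) "borderline" videos in a
# recommendation feed. All names and values are illustrative assumptions.

BORDERLINE_PENALTY = 0.1  # assumed demotion factor, not a real YouTube value


def rank_recommendations(videos):
    """Sort candidate videos by score, demoting borderline ones.

    Borderline videos stay in the feed (nothing is removed); their
    effective score is simply multiplied down so they surface less often.
    """
    def effective_score(video):
        score = video["score"]
        # "borderline" stands in for a label produced by classifiers
        # trained with input from human reviewers.
        if video.get("borderline"):
            score *= BORDERLINE_PENALTY
        return score

    return sorted(videos, key=effective_score, reverse=True)


feed = [
    {"id": "a", "score": 0.9, "borderline": True},
    {"id": "b", "score": 0.6, "borderline": False},
    {"id": "c", "score": 0.4, "borderline": False},
]

ranked = rank_recommendations(feed)
# The high-scoring borderline video drops to the bottom of the feed,
# but it is still present and still watchable.
```

The key design point this sketch captures is that moderation happens at the ranking layer, not the hosting layer: the video remains available to anyone who searches for it or follows a direct link, but the recommendation algorithm stops amplifying it.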