YouTube Strengthens Its Rules Around Harmful Conspiracy Theories, Especially QAnon Content

YouTube has announced an update to its rules around hate speech and harassment. The new rules focus on reducing the spread of misinformation and conspiracy theories, especially QAnon, which have been linked to real-world violent incidents.

According to YouTube:

“Today we’re further expanding both our hate and harassment policies to prohibit content that targets an individual or group with conspiracy theories that have been used to justify real-world violence. One example would be content that threatens or harasses someone by suggesting they are complicit in one of these harmful conspiracies, such as QAnon or Pizzagate.”

This update comes after Facebook toughened its stance against QAnon content, citing the growing danger of the movement and its activity. Facebook's move went a step further, in that it will see the removal of all Facebook Pages and groups, and all Instagram accounts, that represent QAnon.

YouTube has left some room for exemptions in its updated policy:

“As always, context matters, so news coverage on these issues or content discussing them without targeting individuals or protected groups may stay up. We will begin enforcing this updated policy today, and will ramp up in the weeks to come.”

This is the latest acknowledgment from the major social media platforms that facilitating such movements can lead to real-world danger. While YouTube has stopped short of a complete ban on all QAnon-related content, the new measures will place further restrictions on the movement and limit its impact.

YouTube also noted that it has already reduced much of the discussion around QAnon. A couple of years ago, it limited the reach of harmful misinformation in its 'Up Next' recommendations, which it says has resulted in a 70% decline in views coming from its search and discovery systems.

“In fact, when we looked at QAnon content, we saw the number of views that come from non-subscribed recommendations to prominent Q-related channels dropped by over 80% since January 2019. Additionally, we’ve removed tens of thousands of QAnon videos and terminated hundreds of channels under our existing policies, particularly those that explicitly threaten violence or deny the existence of major violent events.”

Even so, some channels known to be 'Q-related' will not be removed under this new update.

That seems a strange approach. We understand YouTube's stance on the matter: it plans to remove only content that targets a group or individual. However, part of the problem with QAnon, and other movements like it, is that they have been permitted to begin as seemingly harmless chatter, and have grown from there into serious, concerning movements.

You could argue that, earlier on, no one knew QAnon would grow the way it has. But we know now. So why let any of it stay?

The QAnon case also highlights the need for social media platforms to take expert warnings seriously, in order to stop the activity of such groups before they gain real traction. Specialists have been warning the major platforms about the dangers of QAnon for several years, yet only now are the platforms looking to seriously limit the discussion.

Why did this take so long? And now that we are acknowledging the danger posed by these groups, will that lead to more proactive action against this kind of misinformation, before it can grow more dangerous and pose a real threat?

The social platforms are planning to tackle COVID-19 conspiracy theories, and anti-vax groups are now facing stronger restrictions. But what about climate change conspiracy theories and other anti-science movements? Aren't they also a serious risk? Could 'flat earth' movements eventually expand into more threatening territory? Is there any danger in allowing anti-science content of any kind to grow?

For the most part, it looks like the companies are still acting in hindsight, waiting for these movements to become a problem before taking any action.

In some situations, they may need to wait and treat such movements as 'innocent until proven guilty'. But again, analysts flagged concerns around QAnon back in 2016, after a follower, armed with a semi-automatic rifle, walked into a Washington pizza restaurant to investigate for himself what was happening inside.

Defining danger in this sense is hard, yet it seems obvious that more could be done, and more proactive steps taken. Would that restrict free speech? Would it limit what users can share? Should such movements be banned outright? Perhaps, with the evolving push to limit these movements, we will see a shift in how such warnings are handled.