JAKARTA - YouTube is eyeing new steps to tackle misinformation on its platform, though the company faces various challenges in weighing its options for managing the problem.
The Google-owned platform, along with Facebook, is often identified as a major source of misleading and potentially harmful content, with users sometimes encountering misinformation through YouTube's recommendations.
Now, YouTube is working to address this along three fronts. The first change under consideration, according to Chief Product Officer Neal Mohan, is an update that would effectively break the sharing feature for videos with restricted content types.
Mohan explained that YouTube can implement all the changes it wants within its own app, but if people reshare videos on other platforms, or embed YouTube content on other websites, it becomes harder for YouTube to limit their spread.
"One possible way to work around this is to disable the share button or break the link on videos that we've restricted in recommendations. That effectively means you can't embed or link to borderline videos on other sites," Mohan said.
"But we're grappling with whether preventing sharing might go too far in limiting viewer freedom. Our systems limit borderline content in recommendations, but sharing a link is an active choice a person makes, as opposed to more passive actions like watching a recommended video."
According to YouTube's official website on Friday, February 18, content that spreads through sharing buttons still poses a significant danger if YouTube cannot limit it.
"Another approach is to have an interstitial appear before viewers can watch an embedded or linked video, letting them know that the content may contain misinformation. Interstitials are like speed bumps: that extra step makes viewers pause before they watch or share content," said Mohan.
In fact, YouTube already uses interstitials for age-restricted content and violent or vulgar videos, and they are an important tool for giving viewers choices about what to watch.
As for the second element, Mohan says the goal is to catch misinformation before it gains traction. This is particularly challenging with newer conspiracy theories and misinformation pushes, because YouTube cannot update its automatic detection algorithms without a large amount of content on which to train the system.
The automatic detection process is built on examples, and for older conspiracy theories it works well, because YouTube has enough data to train classifiers on what to detect and limit. More recent narratives, however, emerge faster than the system can learn them, presenting a different challenge.
Mohan said YouTube is considering various ways to update this process and limit the continued spread of malicious content, especially around developing news events.
Lastly, Mohan also plans to expand these misinformation efforts globally, accounting for varying attitudes toward, and approaches to, information sources across regions.
The only way around this is to hire more staff in each region and build more localized content moderation centers and processes that take regional nuances into account.
However, there are open questions about how restrictions might apply across borders, such as whether a warning displayed on content in one region should also appear in another.