Facebook And Google Are Considered Responsible For The Spread Of Fake Content, Here Are The Steps To Prevent It!
JAKARTA - Google and Facebook have made bold claims of late about their battle against misinformation and the steps they have taken to curb the threat.
But a new investigation highlights how the two giants are also responsible for funding coordinated misinformation campaigns around the world. Facebook has drawn criticism for allowing content that allegedly helped prompt the storming of the US Capitol earlier this year.
After that horrific incident, Facebook quickly rolled out several checks and measures to contain the spread of fake news and inflammatory content.
Starting with Groups, Facebook introduced the ability to designate certain participants as 'Experts' to verify and stop the spread of problematic content. Similar to rival social media platforms, Facebook has also joined hands with health institutions and digital literacy groups to tackle COVID-19 conspiracies and hoaxes.
Google isn't far behind. More than a year ago, Google extended its ban on political advertising and also imposed stricter restrictions on COVID-related videos that might spread harmful information. However, it appears the two companies also played a fundamental role in creating the problem.
According to an in-depth investigation by the MIT Technology Review, the companies have actually paid out millions of dollars through their respective content initiatives, exacerbating the global misinformation pandemic.
Starting with Facebook's Instant Articles initiative, engagement was monopolized by clickbait websites and fake news sources that initially shared plagiarized content, then moved on to sensational political content that contributed to a tragic human rights catastrophe for a minority group in Myanmar.
One of Facebook's leaked internal research documents also revealed that the company was aware of rampant plagiarism on its platform, but that it didn't fix the problem for fear of legal tussles and decreased engagement.
Tutorials available online fueled the growth of these content farms, and gaming Facebook's security checks was so easy that a single person is said to have managed 11,000 Facebook accounts. Clickbait content farms in Kosovo and Macedonia also reached half a million Americans ahead of the 2020 election, according to the report.
The Battle Against Themselves
The company, now called Meta, reportedly paid millions of dollars to these bad actors. Worse still, the MIT Technology Review explains that, at one point, 60 percent of all domains registered with Facebook's Instant Articles program engaged in spam activity.
Cheap automation tools allow malicious parties to distribute problematic articles, push live videos, and manage Instagram accounts to multiply their reach, all while extracting a steady revenue stream from Facebook.
The researchers also found more than 2,000 pages run by "account farms" in Vietnam and Cambodia, many of them with more than one million followers. Political figures have also reportedly paid these scammers to publish content that shapes election conversations and favors one side.
Another big problem is the spread of videos with false, sensitive content that scammers present as live broadcasts to heighten their sensational appeal.
Facebook is not the only enabler. The report explains how clickbait farms and bad actors are also exploiting Google's AdSense system to make money while spreading misinformation.
Overseas clickbait farms that reached American audiences ahead of the 2016 election were propped up by AdSense dollars. Continuous recycling of content is commonplace, and because algorithms like the one behind YouTube push potentially viral content, these spammers have avoided punitive action.
Google Drive folders shared within the clickbait community also provide targeting details, such as the most popular groups in over 20 countries, to help expand their reach.