
Facebook bug led to harmful content escaping downranking

Facebook tries to keep its platform safe by watching out for harmful content. However, when your platform has billions of users, it gets complicated. As the mammoth social network just discovered, a bug led to users viewing more harmful content over a span of six months.

Engineers discovered a massive ranking failure that allowed content with potential integrity risks to appear in up to half of all News Feed views. The failure was detailed in an internal report viewed by The Verge.

The issue first came to light in October, when a large volume of misinformation began slipping past the safeguards put in place. The News Feed kept distributing content from repeat offenders even after it had been flagged by Facebook's external fact-checkers, leading to a global spike of as much as 30 percent in views of misinformation.

However, the engineers could not immediately pinpoint what was wrong and helplessly watched as the misinformation level fell and rose repeatedly.

The bug also broke Facebook’s systems for lowering the ranking of content likely to contain nudity, violence, and other restricted material. The team designated the bug a level-one SEV, a severity reserved for high-priority technical issues. Another example of a level-one SEV is Russia’s block of Facebook and Instagram. Only severe events like a global outage rank higher.

Meta’s spokesperson, Joe Osborne, confirmed the bug to The Verge, explaining the social network “detected inconsistencies in downranking on five separate occasions, which correlated with small, temporary increases to internal metrics.”

However, the internal document obtained by The Verge shows the bug was first introduced in 2019, though it had no noticeable impact until October 2021.

Facebook has been using the downranking feature to enhance the quality of the News Feed. It expanded the type of content its system acted on. For instance, downranking has tackled thorny issues like wars, political stories, etc. This has led to fears about shadow-banning, with some calling for regulation through legislation.

Even with the growing interest in how Facebook’s downranking works, the network has yet to talk about the real impact on what users see in their feed. However, incidents like this show how bad things could get when something goes wrong.

Mark Zuckerberg, CEO of Meta, explained some years ago that downranking helps prevent people from acting on the natural impulse to engage and spread provocative content. He wrote on Facebook, “Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content.”

Facebook also uses downranking to address content that does not quite violate its rules but comes close.

The system also handles content detected by its AI as potentially in violation and needing human review.

While Facebook claims its AI systems have been improving, bugs like this occasionally remind everyone that there are limits to how far they can go to keep the platform safe. Sahar Massachi, who was on Facebook’s Civic Integrity team, commented, “In a large complex system like this, bugs are inevitable and understandable. But what happens when a powerful social platform has one of these accidental faults? How would we even know? We need real transparency to build a sustainable system of accountability, so we can help them catch these problems quickly.”

Written by HackerVibes
