Assessing Current Platforms’ Attempts to Curb Misinformation


In 2012 Facebook acquired the photo-sharing app Instagram for $1 billion. According to an article on Medium, Facebook co-founder and CEO Mark Zuckerberg offered Instagram’s founder Kevin Systrom double what Twitter CEO Ev Williams was offering. Zuckerberg also assured Systrom that the app would not be integrated into Facebook and would be run as a separate company.

In 2021 Facebook changed its name to Meta, although the social networking app itself continues to be called Facebook. Meta owns both Facebook and Instagram, which are run separately. This blog post looks closely at what these two platforms are individually doing to reduce or eliminate the spread of misinformation.

Facebook


The efforts Meta is taking on Facebook to combat the spread of misinformation, as stated on its website, include removing content that violates its community standards and monitoring for false content. Posts that contain misleading information are marked by fact-checkers; if they don’t explicitly violate Facebook’s community standards, the distribution of those stories is reduced.

Examples of posts containing misinformation that would be removed for violating community standards include posts that are likely to directly contribute to the risk of imminent physical harm or interfere with the functioning of political processes.

According to this article, author Nick Couldry says that it is Facebook’s business model to circulate posts that optimize both engagement and profit. Couldry also states that falsehoods generate more engagement, which means the goals of disinformation operators closely align with Facebook’s business model.

The community standards are publicly available on Facebook’s website. Facebook works with third-party fact-checkers to mark posts as false or misleading; if a post does not explicitly violate the community standards, its distribution is reduced rather than removed. Fact-checkers can rate posts by selecting from options including false, partly false, missing content, or altered. Notifications are then sent to anyone who tries to share a post that has been rated by a fact-checker.
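To make the two-tier policy concrete, here is a minimal sketch of the decision flow described above. The rating names mirror the options listed, but the function, field names, and structure are hypothetical illustrations, not Facebook’s actual systems.

```python
from enum import Enum

class Rating(Enum):
    """The rating options fact-checkers can apply to a reviewed post."""
    FALSE = "false"
    PARTLY_FALSE = "partly false"
    MISSING_CONTENT = "missing content"
    ALTERED = "altered"

def moderate(post):
    """Decide what happens to a post after review.

    Returns "remove", "demote", or "allow". Violating the community
    standards means removal; a fact-checker rating alone only means
    the post is shown lower in feeds.
    """
    if post.get("violates_standards"):
        return "remove"
    if post.get("rating") is not None:
        return "demote"
    return "allow"

# A rated post that doesn't break the community standards is only demoted:
print(moderate({"violates_standards": False, "rating": Rating.PARTLY_FALSE}))
# A post likely to contribute to imminent physical harm is removed outright:
print(moderate({"violates_standards": True, "rating": None}))
```

The key design point this illustrates is that marking and removing are separate mechanisms: a rating changes ranking, while only a standards violation takes content down.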

Fact-checking is Facebook’s central way of combating the spread of misinformation on the platform. According to this article on The BMJ, there is some concern about whether the third-party fact-checkers have enough oversight or accountability to be effective, and of course there is room for human error. Another article on the possible downsides of fact-checking on Facebook brings a further concern to the forefront: false content can be created and spread in less time than it takes to fact-check it. And even when a post has been publicly marked as containing false or missing information, if it doesn’t violate the community standards it is only shown lower in the feed.

Given the propensity for human error in identifying fake information on the internet, marking a post neither prevents it from being shared nor guarantees that it actually contains false information. These efforts could be improved by tightening the community standards, or by removing all posts containing false or missing information even at the cost of profit and engagement.

Instagram


Instagram only started using fact-checkers in May of 2019. This may be because there are over 2 billion daily active users on Facebook and only 500 million on Instagram, which, according to an article on Medium, has grown from 25 million users since it was acquired in 2012.

In addition to fact-checking, Instagram, as a photo-sharing app, uses image-matching technology to mark copies of content that has already been rated, in an effort to combat the spread of misinformation. Not only is marked content ranked lower, but Instagram also takes the additional step of removing it from the Explore and hashtag pages.
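Instagram does not publicly detail which image-matching technique it uses. One common family of approaches is perceptual hashing, where near-duplicate images produce near-identical fingerprints, so a re-uploaded copy of a rated image can be caught automatically. A toy sketch of the simplest variant, average hashing, assuming a tiny grayscale image represented as a 2D list — purely illustrative, as production matchers are far more robust:

```python
def average_hash(pixels):
    """Compute a simple average-hash fingerprint of a grayscale image.

    `pixels` is a 2D list of brightness values (0-255). Each bit of the
    fingerprint records whether a pixel is brighter than the image's
    average brightness, so small re-encoding noise rarely flips bits.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return tuple(p > avg for p in flat)

def hamming(h1, h2):
    """Count differing bits; a small distance suggests a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [220, 30]]
reupload = [[12, 198], [221, 29]]   # lightly re-compressed copy
unrelated = [[200, 10], [30, 220]]

# The re-uploaded copy matches; the unrelated image does not.
print(hamming(average_hash(original), average_hash(reupload)))
print(hamming(average_hash(original), average_hash(unrelated)))
```

In a matching pipeline like the one the post describes, a hash within some small distance of an already-rated image’s hash would inherit that image’s mark without a fresh fact-check.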

Once fact-checking was implemented, content across both platforms (Facebook and Instagram) could be marked automatically to reduce its distribution. For example, if disinformation operators use both platforms to distribute content, all posts across both platforms would be marked and either removed or shown lower across all feeds.

Instagram could be more transparent about the image-matching technology disclosed on its website. Potential ethical issues with image matching and facial recognition include racial bias, lack of informed consent, and mass surveillance. The platform has also implemented a false-information feedback option, which lets users report posts from their own feeds for review by fact-checkers. While this may help locate such posts faster, it does not mean that every reported post actually contains false information. It is possible, then, that posts that do contain false or misleading information are being overlooked by fact-checkers who are instead rating posts reported by users.
