How To Nip Fake News In The Bud By Choking It To Death
FORBES - Alberto Alemanno - 4/09/17
The days when social networks were dismissed as entertaining outlets for teenagers are long gone. Today they have become the primary medium over which we consume news, form our political identities and spend a considerable portion of our time.
Consequently, it shouldn’t come as a surprise that social media are now used to influence and manipulate public opinion and political behavior around the world. A growing number of governments, political parties and even some misguided individuals, such as the Macedonian teenagers who fabricated fake stories to draw traffic to their websites during the U.S. elections, are turning to internet platforms to exert influence over the flow of information.
Governments, for example, typically set up teams made up of public officials, volunteers, fake accounts, and bots — software applications that run automated tasks over the Internet to interact with and mimic human users — or a mix of these to manage and influence public opinion online. These actors comment on shared posts or create content such as blog posts, YouTube videos, fake news stories or manipulated images.
Granted, fabricated information has always existed. What is new is this contemporary version’s tendency to spread globally at an extraordinary pace.
Okay, so how do we put the genie back in its bottle? Here are three approaches:
Solution 1 – State intervention
Under the most prescriptive model, public authorities are expected to police the media environment themselves. This approach has been criticized insofar as it entails the creation of ‘Ministries of Truth’. One example is the recently created Global Engagement Center, which helps the U.S. government ensure that streams of data are not contaminated by state-sponsored misinformation or falsehoods. The European Union has created a similar office called the Disinformation Review: a network of 400-plus experts, journalists, officials, NGOs and think tanks in over 30 countries that reports disinformation articles to EU officials, and then to the public. It is devoted to debunking fake news and Russian propaganda.
Solution 2 – Imposing penalties
An alternative, equally prescriptive approach involves sanctioning entities that engage not just in the creation of ‘illegal content’ but even in its mere circulation. A good example is the German Network Enforcement Act, due for implementation in October 2017. Under this controversial law, social media companies have 24 hours to remove “obviously illegal” content, such as hate speech and defamation. Failing that, they face harsh fines. In parallel, social media networks, such as Facebook, Twitter, and YouTube, are required to submit public reports detailing how many posts were flagged and how many were removed. Again, failing to do so may lead to an initial fine of €5 million, which could rise to €50 million.
If the first model leaves it to the legislator and ultimately the courts to decide what constitutes fake news, the second outsources that responsibility to social media. Unsurprisingly, social media companies are uneasy about playing the role of arbiters of truth, all the more so given their advertising-driven business model. As a result, companies like Facebook Germany have hired more human curators and partnered with fact-checking organizations in an attempt to keep misinformation out of people’s feeds. However, the track record of this system appears poor. When fake news stories do get flagged as potentially false, the flag often comes after the story has already gone viral. Sometimes, when users warn against sharing a story, traffic to it actually skyrockets.
Solution 3 – Swamping fake news with the truth
But there is a third, largely counterintuitive approach that is overlooked in today’s public debate. Instead of killing the story, you surround it with related articles so as to provide more context and alternative views to the reader. In other words, the social platform hosting the disputed news alters the environment in which that story is presented and consumed.
That’s exactly what Facebook is doing with its newly released feature offering ‘Related Articles’ directly beneath the disputed story. This invites “easier access to additional perspectives and information, including articles by third-party fact checkers.”
Facebook is testing this “swamping” approach on a voluntary basis, but it could be mandated by law across virtually all social networks. Although the method leaves open the deeper problem of algorithmic accountability (who chooses which related articles represent alternative views?), it appears a sensible approach worth experimenting with. Whether exposure to alternative viewpoints combats misperceptions, or instead reinforces them, is ultimately an empirical question. Academic research suggests that this design-centered approach could make a real difference in readers’ perceptions. In addition, unlike the prescriptive approach embraced by Germany, a feature such as Related Articles doesn’t necessarily imply any editorial judgment about a story’s truthfulness. Rather, it opens the door to a welcome architecture of serendipity that reminds readers of the beauty of the chance encounters that characterize real life.
The emergence of this feature underlines the ability of a platform such as Facebook to engage seriously with a problem as tricky as fake news. It also suggests a readiness to set aside — at least for a while — an obsessive business model based on maximizing user engagement and monetizing user data.
Its implementation across social networks would therefore set an important precedent that might contribute to closing the gap between the dominant business model of platform capitalism and what is best for people.
Or, in other words, a cure for fake news that might actually work!