27 May 2019 | 07:00 Asia/Singapore

Fake news and social media

Social media platforms, by design, encourage users to share information effortlessly through the click of a button, and yet, information often gets distorted along the way

By Dr Christian von der Weth and Professor Mohan Kankanhalli

The rise of social media has changed how we perceive and handle information. For many, social media has become their main source of news. Never has it been so easy to access, publish and share information. Anyone can create, within minutes and sometimes anonymously, one or more Facebook or Twitter accounts. Social media platforms, by design, encourage users to share information effortlessly with the click of a button, and yet information often gets distorted along the way. In some cases, humans may not even be controlling the spread of information: ‘bots’, programmes that automatically post and share content, can also operate social media accounts.

Targeted information, or more maliciously misinformation, has long been used to shape people's thoughts and decisions and to influence their behaviour. Newspapers, both traditional and online, generally adhere to journalistic norms of objectivity and balance, and thus enjoy high levels of trust and credibility. On social media, however, these norms are often forgotten, ignored or purposefully dismissed. Users are likely to share information without fact-checking, especially when it contains controversial or emotionally charged content. These emotional reactions contribute to the speed at which information spreads, and the sheer volume of information that people are now subjected to makes it very difficult to assess truthfulness.

The consequences of misinformation on social media came to the forefront during the 2016 United States presidential election and the 2016 United Kingdom European Union membership referendum ("Brexit"). Biased reporting, the under- and over-reporting of certain topics, and the outright lies published and shared on social media prevented many voters from making well-informed decisions. The aftermath of both events, as well as their ongoing impact on current affairs, has made the notion of "fake news" one of the most urgent global issues (and not incidentally the Word of the Year in 2017).

Addressing these problems can be very challenging. There is no consensus regarding a formal definition of fake news. Most people agree that fake news constitutes intentionally published falsehoods, although it is often difficult to prove malicious intent. While wider definitions of fake news also include misleading information, the problem is that the notion of "misleading" is highly subjective and dependent on the context of the information. 

So how can we tackle the issue of fake news on social media?

Firstly, malicious bots should be identified and blocked. This is far from trivial: not all bots are created to publish and spread misinformation, and deciding whether an account is operated by a bot or a genuine user is extremely difficult. Some bots created with malicious intent behave in a very humanlike manner, for example by posting at irregular intervals or with minor typos. Bot detection is also a cat-and-mouse game: any improvement in detection spurs the development of better bots. As such, estimates of the number of bots are often vague and have to be treated with caution. Existing studies claim that 9–15 per cent of Twitter accounts, and around 60 million Facebook accounts, are bots. These numbers, however, likely miss accounts that are bots (false negatives) and include accounts that are not bots (false positives). False positives are a big problem for social media platforms, since inadvertently blocking genuine users on a large scale can result in a public relations disaster.
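To make the detection problem concrete, the toy sketch below scores an account on a few simple behavioural signals. All features, thresholds and weights here are invented for illustration; real detectors use far richer signals and machine-learned classifiers, and a high score would prompt review rather than automatic blocking.

```python
# Illustrative only: a toy rule-based bot score over hypothetical account
# features. Thresholds and weights are invented for this example.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float      # average posting rate
    retweet_ratio: float      # share of posts that are retweets/shares
    account_age_days: int
    followers: int
    following: int

def bot_score(a: Account) -> float:
    """Return a crude score in [0, 1]; higher means more bot-like."""
    score = 0.0
    if a.posts_per_day > 50:                     # humans rarely sustain this rate
        score += 0.4
    if a.retweet_ratio > 0.9:                    # almost never posts original content
        score += 0.3
    if a.account_age_days < 30:                  # very young account
        score += 0.2
    if a.following > 10 * max(a.followers, 1):   # follows far more accounts than follow it
        score += 0.1
    return min(score, 1.0)

suspicious = Account(posts_per_day=200, retweet_ratio=0.95,
                     account_age_days=7, followers=12, following=900)
print(bot_score(suspicious))  # 1.0 -- flag for human review, not automatic blocking
```

The humanlike behaviour described above (irregular posting, deliberate typos) is exactly what defeats such simple rules, which is why detection remains a cat-and-mouse game.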

Secondly, social media users should be educated and empowered to check facts. Many users share information without evaluating its truthfulness, let alone diligently checking facts (for example, by consulting and comparing multiple news sources). Doing so requires more time than most users are willing to spend, whereas sharing information is exceedingly simple. In addition, people usually do not question information that confirms their own beliefs. As the unintentional sharing of fake news on social media generally has no immediate negative consequences for the user, it is difficult to change online behaviour. However, if this trend continues, it risks reducing the perceived credibility of established and generally trusted news outlets. People therefore need to be more critical and not blindly accept everything they read online. With greater education in critical thinking from an early age, future generations of users will hopefully consider these consequences before sharing or spreading information online.

Finally, more responsibility also needs to be placed on social media platforms like Facebook or Twitter. Apart from bot detection, platforms should make it more difficult for users to create multiple accounts or blindly share information. For example, WhatsApp now only allows users to forward a message to five chats at a time. This slows down the spread of fake news and allows time for interventions such as information campaigns. Social media platforms also rely on algorithms that decide which information to present to users. These algorithms are designed to select information that is most likely to match users' interests and maximise their engagement with the platform. This recommendation process creates what are now known as "filter bubbles": users only see information that confirms their existing opinions and rarely encounter alternative views. Redesigning the algorithms so that users are exposed to a wider variety of news would help foster more balanced opinions. However, all these efforts conflict with the business models of social media platforms. Any restrictions or subpar recommendations might drive users away, which could negatively affect the platforms' revenues. Serious efforts by a social media platform to combat fake news would therefore require a strong voice from its user base and/or regulatory pressure.
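As a rough illustration of what such a redesign might involve, the sketch below re-ranks a hypothetical list of recommended articles so that repeatedly showing the same viewpoint is penalised. The items, relevance scores, viewpoint labels and penalty weight are all invented; production recommenders are substantially more complex.

```python
# Illustrative only: a toy re-ranking step that trades a little predicted
# relevance for viewpoint diversity. All data and parameters are invented.
from typing import List, Tuple

# (title, predicted_relevance, viewpoint_label)
candidates: List[Tuple[str, float, str]] = [
    ("Article A", 0.95, "viewpoint_1"),
    ("Article B", 0.93, "viewpoint_1"),
    ("Article C", 0.90, "viewpoint_1"),
    ("Article D", 0.80, "viewpoint_2"),
    ("Article E", 0.75, "viewpoint_3"),
]

def diversified_ranking(items, k=3, penalty=0.2):
    """Greedily select k items, discounting viewpoints already shown."""
    selected, shown = [], {}
    remaining = list(items)
    for _ in range(min(k, len(remaining))):
        best = max(remaining,
                   key=lambda it: it[1] - penalty * shown.get(it[2], 0))
        selected.append(best)
        shown[best[2]] = shown.get(best[2], 0) + 1
        remaining.remove(best)
    return selected

for title, _, view in diversified_ranking(candidates):
    print(title, view)
# Ranked purely by relevance, the top three would all come from viewpoint_1;
# with the penalty, articles from other viewpoints also surface.
```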

Universities also have a role to play, in terms of both education and research. At NUS Computing, faculty members conduct research on the detection, as well as the dissemination, of fake news on social media platforms. For example, the Risk Pulse Monitor, a project under the NUS Centre for Research in Privacy Technologies, analyses social media and provides a dashboard visualising the number of times that local news articles within broad risk categories (as defined by the World Economic Forum) are shared on social media. The project therefore has to be mindful of, and deal with, fake news and bots.
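The Risk Pulse Monitor is described here only at a high level, so the following is a hypothetical sketch of the kind of aggregation such a dashboard might perform: totalling the social media shares of news articles within each broad risk category. The article data and category names are invented for the example.

```python
# Illustrative only: aggregate (invented) share counts of news articles
# by broad risk category, as a dashboard backend might do.
from collections import defaultdict

articles = [
    {"title": "Haze levels rise",        "category": "environmental", "shares": 1200},
    {"title": "Data breach at retailer", "category": "technological", "shares": 3400},
    {"title": "Flu outbreak update",     "category": "societal",      "shares": 560},
    {"title": "Coastal flooding risk",   "category": "environmental", "shares": 800},
]

shares_by_category = defaultdict(int)
for article in articles:
    shares_by_category[article["category"]] += article["shares"]

for category, total in sorted(shares_by_category.items(),
                              key=lambda kv: kv[1], reverse=True):
    print(f"{category}: {total} shares")
```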

 

About the authors


Dr Christian von der Weth is a Senior Research Fellow at the NUS Centre for Research in Privacy Technologies (N-CRiPT) in the School of Computing. He received his PhD from the Karlsruhe Institute of Technology. Before joining NUS, he worked as a Research Fellow at Nanyang Technological University in Singapore and the National University of Ireland, Galway. His main research interests include social media and social network analysis, natural language processing, privacy, data mining and machine learning, and information systems.

 

 


Professor Mohan Kankanhalli is the Dean of the NUS School of Computing, where he is the Provost’s Chair Professor of Computer Science. He is also the Director of the NUS Centre for Research in Privacy Technologies (N-CRiPT). Before becoming Dean in 2016, he was the NUS Vice Provost (Graduate Education) from 2014 to 2016 and Associate Provost (Graduate Education) from 2011 to 2013. Prof Kankanhalli’s research interests are in multimedia computing, information security and privacy, image/video processing, and social media analysis.