Understanding how information spreads on social media

The following text is an extract from: Mechanisms that Shape Social Media and their Impact on Society – Report on the State-of-the-Art in Research | Shaping Europe’s digital future (europa.eu)

The emergence of the network society has created a whole field of research devoted to understanding how information spreads on social media. Why do some things go viral? […]

Algorithms and microtargeting have radically changed the information we are exposed to. Humans tend to search for information consistent with their existing opinions and beliefs, a mechanism known as confirmation bias. This tendency is exploited by online platforms for information search, social networking and media, which employ recommendation algorithms to capture and hold users’ attention. As a side effect, these platforms amplify and reinforce individual bias, resulting in extreme polarization of opinions and filter bubbles at the societal level, with dramatically negative consequences for the pluralistic public debate needed to nurture democracy. In addition, access to information is often maliciously biased by commercially or politically motivated influence agents.
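To make this feedback loop more concrete, the following toy simulation (an illustrative sketch added here, not taken from the report, with all parameters chosen arbitrarily) compares a random feed with a personalised feed that always shows users the content closest to their current opinion. Under these assumptions, random exposure pulls opinions toward a shared middle ground, while personalised exposure locks users into their initial leanings, a minimal picture of how recommendation can reinforce individual bias.

```python
# Illustrative sketch only: a toy model of a similarity-based recommender reinforcing
# confirmation bias. Users hold opinions in [-1, 1]; content items carry a stance in the
# same range. The personalised feed shows each user the items closest to their current
# opinion, the baseline shows random items; opinions then drift toward what was shown.
import random
import statistics

random.seed(0)
N_USERS, N_ITEMS, STEPS, SHOWN, DRIFT = 200, 500, 300, 5, 0.05

# content items, each with a stance in [-1, 1]
items = [random.uniform(-1, 1) for _ in range(N_ITEMS)]

def simulate(personalised: bool) -> float:
    """Run the toy dynamics and return the spread (population std. dev.) of final opinions."""
    opinions = [random.uniform(-0.5, 0.5) for _ in range(N_USERS)]  # mildly mixed start
    for _ in range(STEPS):
        for u in range(N_USERS):
            pool = random.sample(items, 50)
            if personalised:
                # recommender: show the items whose stance best matches the user's opinion
                feed = sorted(pool, key=lambda s: abs(s - opinions[u]))[:SHOWN]
            else:
                feed = pool[:SHOWN]  # random feed: no personalisation
            # the user's opinion drifts slightly toward the average stance of what was shown
            shown_mean = sum(feed) / len(feed)
            opinions[u] += DRIFT * (shown_mean - opinions[u])
            opinions[u] = max(-1.0, min(1.0, opinions[u]))
    return statistics.pstdev(opinions)

print("opinion spread with a random feed:      ", round(simulate(False), 3))
print("opinion spread with a personalised feed:", round(simulate(True), 3))
```

The printed spreads illustrate how, in this simplified setting, personalised exposure preserves and entrenches initial differences that random exposure would wash out.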

Research today demonstrates that the effects of social media on the development of our society cannot be described with simple features: the complexity of accounting for the actions of millions of individuals far exceeds our capacity. Social media are today an incredibly powerful instrument of news creation and distribution. The emergence and ubiquitous nature of issues such as “fake news”, “microtargeting”, “computational propaganda” and echo chambers demonstrates the power that a diverse range of actors ascribes to social media. It is therefore of the utmost importance to understand the forces and causes that generate these phenomena, which are seriously changing present-day society. The good news is that social media also provide the data to study how communication patterns develop and spread. Every year brings new insights. We know more about the emotions (especially morally driven outrage) that shape social media [Brady et al. 2017]; we understand better how the new information ecosystems distort the notions of objectivity and credibility (especially for younger generations) [Marchi 2012]; and, finally, we are more aware of how this informational architecture, originally designed to hijack our attention for marketing purposes, may easily be abused by hostile agents to spread fear and misinformation and to interfere with democratic processes.

[…] Communication patterns on online platforms are changed by the incentive model and advertising-based rationale in which attention is monetized, and the motive is not to “inform” users but to capture their attention for as long as possible. The impact of this incentive model in spreading discord, undermining social cohesion and increasing levels of cyberbullying and hate speech can be found across social media platforms. Recent research has also shown that algorithms slow down patterns of consensus formation in society, leaving confusion, fragmentation and online fights to linger for longer and deeply undermining social cohesion and trust.
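The claim that algorithmically biased exposure can delay consensus can be illustrated with a deliberately simple model. The sketch below is added here for illustration and is not part of the report: it uses a classic voter model on a fully mixed population and assumes that, with some probability, the “feed” pairs an agent with a like-minded peer; because such encounters never change anyone’s mind, consensus takes correspondingly longer to emerge.

```python
# Illustrative sketch only: a toy voter model showing how an assumed feed bias toward
# like-minded peers slows consensus. Agents hold a binary opinion; at each update one
# agent copies the opinion of an interaction partner.
import random

random.seed(1)
N, RUNS = 100, 20

def updates_until_consensus(feed_bias: float) -> int:
    """Count opinion updates until a 50/50 population reaches unanimity."""
    opinions = [0] * (N // 2) + [1] * (N // 2)
    ones, steps = N // 2, 0
    while 0 < ones < N:
        steps += 1
        i = random.randrange(N)
        # with probability `feed_bias` the feed pairs i with a like-minded peer, so the
        # copied opinion is the one i already holds and nothing changes
        # (a simplification near unanimity)
        if random.random() < feed_bias:
            continue
        j = random.randrange(N)
        while j == i:
            j = random.randrange(N)
        if opinions[j] != opinions[i]:
            ones += opinions[j] - opinions[i]
            opinions[i] = opinions[j]
    return steps

for label, bias in [("unbiased mixing ", 0.0), ("feed bias of 0.7", 0.7)]:
    avg = sum(updates_until_consensus(bias) for _ in range(RUNS)) / RUNS
    print(f"{label}: ~{avg:,.0f} updates to consensus on average")
```

In this toy setting, a bias of 0.7 means that only about three in ten encounters can still change an opinion, so the average number of updates needed to reach consensus grows by roughly a factor of three compared with unbiased mixing.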

Human-centric AI, a body of research with a strong social dimension, may help us design novel platforms and mechanisms for public access to news and information, focused on counterbalancing our built-in confirmation bias and transparently striving to expose people to a diversity of opinions in an intelligent way. It is possible to imagine mechanisms that help individuals and communities become informed on controversial issues by offering multiple perspectives, connecting opposing views and conflicting arguments, and fostering critical thought. For example, a bot in a group conversation could highlight information that was unavailable to the group or point to omitted sources of important information. Advances in human-machine interaction models based on explainable AI have the potential to reach novel cognitive trade-offs between our confirmation bias and our curiosity for novelty and diversity, making it possible for more sustainable and humanized information ecosystems to emerge.
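One way to make the “multiple perspectives” idea tangible is a re-ranking step that trades a little predicted engagement for stance diversity. The sketch below is purely illustrative and not drawn from the report; the article titles, the scores and the `diversity_weight` parameter are hypothetical, and the greedy penalty is only one of many possible designs.

```python
# Illustrative sketch only: a greedy re-ranker that mixes opposing viewpoints instead of
# ranking purely by predicted engagement. Candidates whose stance resembles what has
# already been selected are penalised, so the final feed covers more than one side.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    engagement: float  # predicted engagement / relevance, higher is better
    stance: float      # -1 = strongly against, +1 = strongly in favour

def diverse_feed(candidates: list[Article], k: int, diversity_weight: float = 0.5) -> list[Article]:
    """Greedily pick k articles, trading engagement against stance diversity."""
    selected: list[Article] = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(a: Article) -> float:
            if not selected:
                return a.engagement
            # penalty grows when the candidate's stance resembles stances already shown
            redundancy = max(1.0 - abs(a.stance - s.stance) / 2.0 for s in selected)
            return a.engagement - diversity_weight * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

candidates = [
    Article("Policy X is working", 0.9, 0.8),
    Article("Policy X: early wins", 0.85, 0.7),
    Article("Policy X is failing", 0.6, -0.8),
    Article("A balanced look at Policy X", 0.5, 0.0),
]
for a in diverse_feed(candidates, k=3):
    print(f"{a.title:30s} stance={a.stance:+.1f}")
```

In this toy example the selected feed mixes supportive and critical pieces instead of filling every slot with the most engaging, like-minded content; raising the weight pushes the feed further toward balance at the cost of predicted engagement.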

This is, however, just the beginning of a very long scientific journey; Big Data analytics, AI and other tools can significantly support our understanding of this new information ecosystem, and Europe should increase its efforts in this direction.