In this thesis, we examine how social media, with a focus on Facebook, contributed to the
spread of disinformation and hate speech in the case of the Rohingya in Myanmar and what
this means for the relationship between freedom of expression and platform responsibility. Our
aim is to show how platform design, from recommendation and ranking systems to moderation
algorithms, amplifies harmful content and how these digital dynamics translate into material
effects in the physical world, including the legitimation of exclusion and violence. In the
theoretical section we define the concepts of hate speech, disinformation and social media
algorithms, and explain the mechanisms of filter bubbles and echo chambers. In the empirical
section we conduct a qualitative case study based on a review of primary and secondary
sources, through which we analyze patterns of anti-Rohingya discourse on Facebook before
and after the crisis. We find that the combination of algorithmic personalization, a weak
institutional and media environment, and low digital literacy, intertwined with longstanding
political and historical disputes between the Rohingya and the Burmese majority, created a
digital environment in which disinformation and dehumanizing narratives quickly became
normalized, escalating into systematic persecution and mass atrocities. Our analysis shows
that the interplay of political history, media fragility, and coordinated campaigns, with the
online sphere acting as a catalyst, drove the escalation of violence. We conclude by arguing
that debates on freedom of expression online must also include responsible management of reach
and context sensitive moderation, since only the combination of these measures can curb the
harmful effects of platforms.