Thesis topic
Platform safety through algorithmic content moderation
Online platforms must protect users not only from harmful content but also from the risks arising from the use of algorithmic content moderation (ACM). Two decades ago, internet intermediaries enjoyed broad exemptions from liability for third-party content; today, however, their active control over users’ online experience warrants greater accountability. With over 2.7 billion users, Facebook relies heavily on AI for content moderation, and its large language model is used by smaller companies and start-ups to build a range of products and services. The present research therefore focuses on Facebook’s algorithmic content regulation system for tackling hate speech and misinformation. Although the issues are transnational in nature, the research is EU-focused. Facebook’s content regulation policy, the DSA package and the proposed AI Act will be studied to examine the challenges arising from the use of ACM and how the EU’s proposed ex-ante and ex-post mechanisms address present and future concerns of online safety.