The automatic identification of harmful content online is of major concern for social media platforms, policymakers, and society. Researchers have studied textual, visual, and audio content, but typically in isolation. Yet, harmful content often combines multiple modalities, as in the case of memes. With this in mind, here we offer a comprehensive survey with a focus on harmful memes. Based on a systematic analysis of recent literature, we first propose a new typology of harmful memes, and then we highlight and summarize the relevant state of the art. One interesting finding is that many types of harmful memes are largely understudied, e.g., those featuring self-harm and extremism, partly due to the lack of suitable datasets. We further find that existing datasets mostly capture multi-class scenarios, which do not cover the affective spectrum that memes can represent. Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual, blending different cultures. We conclude by highlighting several challenges related to multimodal semiotics, technological constraints, and non-trivial social engagement, and we present several open-ended aspects, such as delineating online harm and empirically examining related frameworks and assistive interventions, which we believe will motivate and drive future research.
2022, Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, Pages 5597-5606
Detecting and Understanding Harmful Memes: A Survey (04b Conference paper in proceedings volume)
Sharma Shivam, Alam Firoj, Akhtar Md. Shad, Dimitrov Dimitar, Da San Martino Giovanni, Firooz Hamed, Halevy Alon, Silvestri Fabrizio, Nakov Preslav, Chakraborty Tanmoy