
Detecting Offensive Memes with Social Biases in Singapore Context Using Multimodal Large Language Models

Abstract

Traditional online content moderation systems struggle to classify modern multimodal means of communication, such as memes, a highly nuanced and information-dense medium. This task is especially hard in a culturally diverse society like Singapore, where low-resource languages are used and extensive knowledge of the local context is needed to interpret online content. We curate a large collection of 112K memes labeled by GPT-4V for fine-tuning a VLM to classify offensive memes in the Singapore context. We demonstrate the effectiveness of VLMs fine-tuned on our dataset and propose a pipeline comprising OCR, translation, and a 7-billion-parameter-class VLM. Our solutions reach 80.62% accuracy and 0.8192 AUROC on a held-out test set, and can greatly aid humans in moderating online content. The dataset, code, and model weights have been open-sourced at this https URL.
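
As a rough illustration of the pipeline described above (OCR, then translation, then a 7-billion-parameter-class VLM), the Python sketch below wires together off-the-shelf components. The specific choices of pytesseract, the NLLB translation checkpoint, the LLaVA-1.5-7B model, and the prompt wording are assumptions made for illustration only; they are not the authors' released implementation, which is available at the link above.

```python
# Minimal sketch of an OCR -> translation -> VLM classification pipeline.
# Library and checkpoint choices here are illustrative assumptions.
import pytesseract
from PIL import Image
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",  # placeholder translation model
    src_lang="zsm_Latn",                       # e.g. Malay; adjust per detected language
    tgt_lang="eng_Latn",
)
vlm = pipeline("image-to-text", model="llava-hf/llava-1.5-7b-hf")  # stand-in 7B-class VLM


def classify_meme(image_path: str) -> str:
    """Return the VLM's verdict on whether a meme is offensive in the Singapore context."""
    image = Image.open(image_path)

    # 1. OCR: extract any text overlaid on the meme image.
    ocr_text = pytesseract.image_to_string(image).strip()

    # 2. Translation: normalise low-resource / code-mixed text into English.
    english_text = translator(ocr_text)[0]["translation_text"] if ocr_text else ""

    # 3. VLM classification: prompt the model with the image plus the extracted text.
    prompt = (
        "USER: <image>\n"
        f"Meme text (translated): {english_text}\n"
        "Is this meme offensive in the Singapore context? "
        "Answer 'offensive' or 'not offensive'.\nASSISTANT:"
    )
    out = vlm(image, prompt=prompt, generate_kwargs={"max_new_tokens": 10})
    return out[0]["generated_text"]
```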

@article{yuxuan2025_2502.18101,
  title={Detecting Offensive Memes with Social Biases in Singapore Context Using Multimodal Large Language Models},
  author={Cao Yuxuan and Wu Jiayang and Alistair Cheong Liang Chuen and Bryan Shan Guanrong and Theodore Lee Chong Jen and Sherman Chann Zhi Shen},
  journal={arXiv preprint arXiv:2502.18101},
  year={2025}
}