Machine learning and artificial intelligence play an increasingly important role in mitigating societal problems such as the prevalence of hate speech. We describe the Hateful Memes Challenge, a competition held at NeurIPS 2020 focusing on multimodal hate speech. The aim of the challenge is to facilitate further research into multimodal reasoning and understanding.
We apply model-agnostic meta-learning (MAML) to the task of cross-lingual dependency parsing and find that, in a few-shot learning setup, meta-learning with pre-training can significantly improve performance on a variety of unseen, typologically diverse, low-resource languages.
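The MAML training loop referenced above can be illustrated on a toy problem. The sketch below is a minimal first-order MAML approximation on synthetic one-parameter linear regression tasks, not the paper's parser setup: the inner loop adapts to each task's support set with one gradient step, and the outer loop updates the meta-parameter using the query-set gradient evaluated at the adapted parameters. All task distributions, learning rates, and function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    # MSE loss 0.5*mean((w*x - y)^2) and its gradient w.r.t. the scalar w
    err = w * x - y
    return 0.5 * np.mean(err ** 2), np.mean(err * x)

def maml_train(meta_steps=200, tasks_per_step=4, inner_lr=0.1, outer_lr=0.05):
    w = 0.0  # meta-parameter (initialization shared across tasks)
    for _ in range(meta_steps):
        meta_grad = 0.0
        for _ in range(tasks_per_step):
            # each task: y = slope * x, with a task-specific slope in [1, 3]
            slope = rng.uniform(1.0, 3.0)
            xs = rng.uniform(-1.0, 1.0, size=10)
            ys = slope * xs
            # inner loop: one adaptation step on the support set (first 5 points)
            _, g = loss_grad(w, xs[:5], ys[:5])
            w_adapted = w - inner_lr * g
            # outer loop: first-order MAML uses the query-set gradient
            # evaluated at the adapted parameters (second-order terms dropped)
            _, g_query = loss_grad(w_adapted, xs[5:], ys[5:])
            meta_grad += g_query
        w -= outer_lr * meta_grad / tasks_per_step
    return w

w_meta = maml_train()
# the meta-parameter drifts toward an initialization from which one
# gradient step adapts well to any task slope in [1, 3]
```

The first-order approximation avoids differentiating through the inner update, which is why plain NumPy gradients suffice here; full MAML would backpropagate through `w_adapted` as well.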
We compare various multimodal transformer architectures for the task of detecting hateful memes, achieving 4th place in the Hateful Memes Challenge organized by Facebook AI at NeurIPS 2020.
We propose to combine the merits of template-based and corpus-based dialogue response generation by introducing a prototype-based paraphrasing neural network, called P2-Net, which aims to enhance the quality of responses in terms of both precision and diversity.