Natural Language Processing

Meta-learning for fast cross-lingual adaptation in dependency parsing

We apply model-agnostic meta-learning (MAML) to cross-lingual dependency parsing and find that meta-learning combined with pre-training significantly improves few-shot performance on a variety of unseen, typologically diverse, low-resource languages.
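
To make the meta-learning objective concrete, the sketch below shows one first-order MAML meta-update over a batch of per-language tasks. It is a minimal illustration in PyTorch, not the paper's implementation; the parser object and its loss() method are hypothetical stand-ins for a dependency parser and its training loss.

# Minimal first-order MAML sketch (illustrative only; parser.loss() is a
# hypothetical API, not the paper's code).
import copy
import torch

def maml_outer_step(parser, meta_opt, tasks, inner_lr=1e-3, inner_steps=5):
    """One meta-update over a batch of (support, query) tasks, one per language."""
    meta_opt.zero_grad()
    for support_batch, query_batch in tasks:
        # Inner loop: adapt a copy of the parser on the support set of one language.
        fast = copy.deepcopy(parser)
        inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            inner_opt.zero_grad()
            fast.loss(support_batch).backward()  # hypothetical loss() API
            inner_opt.step()
        # Outer loop: the gradient of the adapted copy's query loss is accumulated
        # into the original parameters (first-order approximation).
        query_loss = fast.loss(query_batch)
        grads = torch.autograd.grad(query_loss, fast.parameters())
        for p, g in zip(parser.parameters(), grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()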

The Hateful Memes Challenge: Competition Report

Machine learning and artificial intelligence play an ever more crucial role in mitigating important societal problems, such as the prevalence of hate speech. We describe the Hateful Memes Challenge, a competition held at NeurIPS 2020 that focuses on multimodal hate speech. The aim of the challenge is to facilitate further research into multimodal reasoning and understanding.

A Multimodal Framework for the Detection of Hateful Memes

We compare various multimodal transformer architectures for the task of detecting hateful memes, and achieve 4th place in the Hateful Memes Challenge organized by Facebook AI at NeurIPS 2020.

Simultaneously Improving Utility and User Experience in Task-oriented Dialogue Systems

We propose to combine the merits of template-based and corpus-based dialogue response generation by introducing a prototype-based paraphrasing neural network, P2-Net, which aims to enhance the quality of responses in terms of both precision and diversity.
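
The general idea behind such prototype-then-paraphrase systems can be illustrated with a toy pipeline: a template renders a precise prototype response, and a paraphrasing step rewrites it for diversity. The sketch below is illustrative only; the dialogue acts, templates, and the rule-based stand-in for the neural paraphraser are assumptions, not the P2-Net architecture.

# Toy prototype-then-paraphrase pipeline (illustrative only).
import random

TEMPLATES = {
    "inform_price": "The {name} costs {price} per night.",
    "confirm_booking": "Your booking at {name} for {date} is confirmed.",
}

def render_prototype(act: str, slots: dict) -> str:
    """Template-based step: precise and reliable, but rigid and repetitive."""
    return TEMPLATES[act].format(**slots)

def paraphrase(prototype: str) -> str:
    """Stand-in for the neural paraphrasing step that adds surface diversity.
    A real system would use a trained seq2seq paraphraser here."""
    rewrites = {
        "costs": random.choice(["costs", "is priced at"]),
        "is confirmed": random.choice(["is confirmed", "has been booked"]),
    }
    out = prototype
    for src, tgt in rewrites.items():
        out = out.replace(src, tgt)
    return out

if __name__ == "__main__":
    proto = render_prototype("inform_price", {"name": "Hotel Aurora", "price": "$120"})
    print(paraphrase(proto))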