
Bio: Xiaoan Ding is a Ph.D. candidate in the Department of Computer Science at the University of Chicago, advised by Prof. Kevin Gimpel. Her interests lie in developing machine learning methods for natural language processing and applying deep learning to language applications. Her research seeks to build data-efficient, resilient, fair, and trustworthy models for text classification and text generation, with her Ph.D. work focusing on developing models and algorithms spanning these directions. In the past, she has interned with the Microsoft Research NLP group, working on hallucination detection; Amazon Alexa AI, on neural information retrieval; and the Google dialogue group, on task-oriented dialogue systems.

Talk Title: Data-Efficient Text Classifier for Robust NLP

Talk Abstract: With the unprecedented progress in deep learning architectures, large-scale training, and learning algorithms, pre-trained models have become pivotal in AI. Concurrently, the definition of model robustness has broadened to encompass data efficiency, model resilience, fairness, and faithfulness. In this talk, I will focus on data efficiency and model resilience and present my efforts to build robust text classifiers, in which we introduce discrete latent variables into the generative story. We parameterize the distributions using standard neural architectures from conditional language modeling. Our training objective combines generative pretraining and discriminative finetuning. The results show that our generative classifiers outperform discriminative baselines, including BERT-style models, across several challenging experimental settings.
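The generative-classifier idea underlying the talk can be illustrated with a toy sketch. This is not the talk's actual model: where the talk parameterizes class-conditional distributions with neural conditional language models, the sketch below substitutes add-one-smoothed unigram language models, keeping only the shared generative story: fit p(y) and p(x | y) per class, then classify by Bayes' rule as argmax_y p(y) p(x | y). All class and method names here are hypothetical.

```python
# Toy generative text classifier (illustrative sketch only): each class y
# gets its own language model p(x | y); prediction is argmax_y p(y) p(x | y).
# The per-class LM is a unigram model with add-one smoothing, standing in
# for the neural conditional language models described in the talk.
import math
from collections import Counter

class GenerativeClassifier:
    def __init__(self):
        self.class_counts = Counter()  # counts for estimating p(y)
        self.word_counts = {}          # per-class unigram counts for p(x | y)
        self.vocab = set()

    def fit(self, texts, labels):
        for text, y in zip(texts, labels):
            self.class_counts[y] += 1
            counts = self.word_counts.setdefault(y, Counter())
            for w in text.split():
                counts[w] += 1
                self.vocab.add(w)

    def log_joint(self, text, y):
        # log p(y) + sum over tokens of log p(w | y), add-one smoothed
        total = sum(self.class_counts.values())
        score = math.log(self.class_counts[y] / total)
        counts = self.word_counts[y]
        denom = sum(counts.values()) + len(self.vocab) + 1  # +1 for unseen words
        for w in text.split():
            score += math.log((counts[w] + 1) / denom)
        return score

    def predict(self, text):
        # Bayes-rule classification: pick the class with the highest joint score
        return max(self.class_counts, key=lambda y: self.log_joint(text, y))

clf = GenerativeClassifier()
clf.fit(["great movie", "terrible plot"], ["pos", "neg"])
print(clf.predict("great acting"))  # unigram evidence favors "pos"
```

Because the classifier models p(x | y) rather than p(y | x) directly, its parameters can be pretrained generatively on the same objective as a language model and then finetuned discriminatively, which is the training recipe the abstract describes.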