In the first part of the talk, I describe how adversarial text generation algorithms can be used to improve model robustness. I then introduce a pragmatic formalism for reasoning about harmful implications conveyed by social media text, and show how this pragmatic approach can be combined with generative neural language models to uncover the implications of news headlines. I also address the bottleneck to progress in text generation posed by gaps in the evaluation of factuality. I conclude with an interdisciplinary study showing how content moderation informed by pragmatics can help ensure safe interactions with conversational agents, and with my future vision for the development of context-aware systems.
Bio: I’m a final-year PhD candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. I am very fortunate to be advised by Prof. Yejin Choi and Prof. Franziska Roesner. My work focuses on measuring the factuality and intent of human-written language. Specifically, I am interested in designing generalizable end-to-end modeling frameworks based on objectives that are directly aligned with the underlying motivations of a task. Two key dimensions of machine reasoning that excite me are social commonsense reasoning and fairness in NLP. Previously, I interned at SRI, in the Mosaic group at AI2, and at MSR.