Bio: Abhilasha is a Ph.D. student at Carnegie Mellon University, working in the Language Technologies Institute. Her research focuses on understanding neural model performance in order to develop robust and trustworthy NLP technologies. She has published papers in premier NLP conferences and has received outstanding reviewer awards at ACL and EMNLP. Her work also received the "Area Chair Favorite Paper" award at COLING 2018. In the past, she interned at the Allen Institute for AI and Microsoft Research, where she worked on understanding how deep learning models process challenging semantic phenomena in natural language.
Talk Title: Developing User-Centric Models for Question Answering
Talk Abstract: Everyday users now benefit from powerful QA technologies in a range of consumer-facing applications. Voice assistants such as Amazon Alexa and Google Home have brought natural language technologies to several million homes globally. Yet, even with millions of users interacting with these technologies daily, surprisingly little research attention has been devoted to the issues that arise when people use QA systems. Traditional QA evaluations do not reflect the needs of many users who stand to benefit from QA technologies. For example, users with a range of visual and motor impairments may prefer voice interfaces for efficient text entry. With these needs in mind, we construct evaluations that account for the interfaces through which users interact with QA systems. We analyze and mitigate errors introduced by three interface types that could be connected to a QA engine: speech recognizers that convert spoken queries to text, keyboards used to type queries into the system, and translation systems that process queries in other languages. Our experiments and insights offer a useful starting point for both practitioners and researchers seeking to develop usable question-answering systems.