Algorithms make predictions about people constantly. The spread of such prediction systems has raised concerns that machine learning algorithms may exhibit problematic behavior, especially toward individuals from marginalized groups. This talk will provide an overview of my research building a theory of “responsible” machine learning. I will highlight a notion of fairness in prediction, called Multicalibration (ICML’18), which requires predictions to be well-calibrated, not simply overall, but on every group that can be meaningfully identified from data. This “multi-group” approach strengthens the guarantees of group fairness definitions, without incurring the costs (statistical and computational) associated with individual-level protections. Additionally, I will present a new paradigm for learning, Outcome Indistinguishability (STOC’21), which provides a broad framework for learning predictors satisfying formal guarantees of responsibility. Finally, I will discuss the threat of Undetectable Backdoors (FOCS’22), which represent a serious challenge for building trust in machine learning models.
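To make the calibration requirement concrete, here is a minimal, self-contained sketch (not from the talk; the data, function names, and binning scheme are illustrative assumptions). It shows a toy predictor whose predictions are well-calibrated over the whole population yet miscalibrated on each of two subgroups, which is exactly the failure mode multicalibration rules out.

```python
# Illustrative sketch only: toy data and a simple binned calibration check.
# A predictor is calibrated on a group if, among group members receiving
# prediction value v, the average outcome is close to v. Multicalibration
# asks for this on every identifiable group, not just overall.

def calibration_error(preds, outcomes, bins=10):
    """Average |mean prediction - mean outcome| gap over prediction bins."""
    buckets = {}
    for p, y in zip(preds, outcomes):
        b = min(int(p * bins), bins - 1)
        buckets.setdefault(b, []).append((p, y))
    gaps = []
    for pairs in buckets.values():
        avg_p = sum(p for p, _ in pairs) / len(pairs)
        avg_y = sum(y for _, y in pairs) / len(pairs)
        gaps.append(abs(avg_p - avg_y))
    return sum(gaps) / len(gaps)

# Toy data: the constant prediction 0.5 matches the overall base rate,
# but over-predicts for group B and under-predicts for group A.
preds    = [0.5] * 8
outcomes = [1, 1, 1, 0,  1, 0, 0, 0]      # first four are group A, rest B
groups   = ["A"] * 4 + ["B"] * 4

overall = calibration_error(preds, outcomes)
per_group = {
    g: calibration_error(
        [p for p, gg in zip(preds, groups) if gg == g],
        [y for y, gg in zip(outcomes, groups) if gg == g],
    )
    for g in ("A", "B")
}
print(overall)    # 0.0  -- calibrated on the population
print(per_group)  # {'A': 0.25, 'B': 0.25} -- miscalibrated on each group
```

The overall check passes while both group-wise checks fail, illustrating why "calibrated on average" is a weak guarantee on its own.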

Bio: I am a Miller Postdoctoral Fellow at UC Berkeley, hosted by Shafi Goldwasser. Prior to this, I completed my Ph.D. in the Stanford Theory Group under the sage guidance of Omer Reingold.

My research investigates foundational questions about responsible machine learning. Much of my work aims to identify ways in which machine-learned predictors can exhibit problematic behavior (e.g., unfair discrimination) and to develop algorithmic tools that provably mitigate such behaviors. More broadly, I am interested in how the computational lens (i.e., algorithms and complexity theory) can provide insight into emerging societal and scientific challenges.