
What are the optimal algorithms for learning from data? Have we found them already, or are better ones out there to be discovered? Making these questions precise, and answering them, requires taking on the mathematically deep interplay between statistical and computational constraints. It also requires reconciling our theoretical toolbox with surprising new phenomena arising from practice, which seem to violate conventional rules of thumb regarding algorithm and model design. I will discuss progress along these lines: designing new algorithms for basic learning problems, controlling generalization in large statistical models, and understanding key statistical questions in generative modeling.

Bio: I am currently at Stanford University as a Motwani Postdoctoral Fellow. Before that, I was a research fellow at UC Berkeley’s Simons Institute in the program on Computational Complexity of Statistical Inference. I received my PhD in Mathematics and Statistics from MIT, where I was co-advised by Ankur Moitra and Elchanan Mossel, and before that I received my undergraduate degree in Mathematics from Princeton University. My current research interests include computational learning theory and related topics: probability theory, high-dimensional statistics, optimization, related aspects of statistical physics, etc. In particular, I am very interested in learning and inference in graphical models.