
Bio: Lijun Ding is a post-doctoral scholar at the Institute for Foundations of Data Science (IFDS) at the University of Wisconsin and the University of Washington, supervised by Stephen J. Wright, Dmitry Drusvyatskiy, and Maryam Fazel. Before joining IFDS, he obtained his Ph.D. in Operations Research at Cornell University, advised by Yudong Chen and Madeleine Udell. He graduated with an M.S. in Statistics from the University of Chicago, advised by Lek-Heng Lim. He received a B.S. in Mathematics and Economics from the Hong Kong University of Science and Technology.

Talk Title: Flat minima generalize for low-rank matrix recovery

Abstract: Empirical evidence suggests that for a variety of overparameterized nonlinear models, most notably in neural network training, the growth of the loss around a minimizer strongly impacts its performance. Flat minima — those around which the loss grows slowly — appear to generalize well. This work takes a step towards understanding this phenomenon by focusing on the simplest class of overparameterized nonlinear models: those arising in low-rank matrix recovery. We analyze overparameterized matrix and bilinear sensing, robust PCA, covariance matrix estimation, and single hidden layer neural networks with quadratic activation functions. In all cases, we show that flat minima, measured by the trace of the Hessian, exactly recover the ground truth under standard statistical assumptions.
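The sketch below (not the speaker's code) illustrates the setting the abstract describes for one of the cases, overparameterized matrix sensing: a factorized model U U^T with rank k larger than the true rank, a squared-residual loss over random linear measurements, and flatness of a minimizer measured by the trace of the Hessian of that loss. All problem sizes, variable names, and the finite-difference trace estimate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, k, m = 8, 2, 4, 200   # dimension, true rank, overparameterized rank k > r, number of measurements

# Ground truth M* = X* X*^T and Gaussian sensing measurements y_i = <A_i, M*>
X_star = rng.standard_normal((d, r))
M_star = X_star @ X_star.T
A = rng.standard_normal((m, d, d))
y = np.einsum('mij,ij->m', A, M_star)

def loss(U):
    """Overparameterized matrix-sensing loss f(U) = (1/2m) sum_i (<A_i, U U^T> - y_i)^2."""
    resid = np.einsum('mij,ij->m', A, U @ U.T) - y
    return 0.5 * np.mean(resid ** 2)

def hessian_trace(U, h=1e-4):
    """Flatness measure from the abstract: the trace of the Hessian of f at U,
    approximated coordinate-wise by central finite differences."""
    f0, tr = loss(U), 0.0
    for idx in np.ndindex(U.shape):
        E = np.zeros_like(U)
        E[idx] = h
        tr += (loss(U + E) - 2 * f0 + loss(U - E)) / h ** 2
    return tr

# Evaluate the loss and the flatness measure at a "planted" global minimizer,
# i.e. the true factor padded with zero columns so that U U^T = M*.
U_planted = np.hstack([X_star, np.zeros((d, k - r))])
print("loss at planted minimizer:", loss(U_planted))
print("Hessian trace (flatness) at planted minimizer:", hessian_trace(U_planted))
```

The talk's result, in this language, is that among global minimizers of such losses, those with the smallest Hessian trace correspond to the ground-truth matrix under standard statistical assumptions.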
