
Bio: Oscar Leong is a von Karman Instructor in the Computing and Mathematical Sciences department at Caltech, hosted by Venkat Chandrasekaran. He also works with Katie Bouman and the Computational Cameras group. He received his PhD in Computational and Applied Mathematics from Rice University in 2021 under the supervision of Paul Hand, where he was an NSF Graduate Fellow. His research interests lie in the mathematics of data science, optimization, inverse problems, and machine learning. The core focus of his doctoral work was on proving recovery theorems for ill-posed inverse problems using generative models. He is broadly interested in using tools from convex geometry, high-dimensional statistics, and nonlinear optimization to better understand and improve data-driven decision-making algorithms.

Talk Title: The power and limitations of convex regularization

Talk Abstract: The incorporation of regularization functionals to promote structure in inverse problems has a long history in applied mathematics, with a recent surge of interest in both convex and nonconvex functionals. The choice of such functionals is largely driven by computational and modeling considerations, but there is no overarching understanding of the power and limitations of enforcing convexity versus nonconvexity, or of when one should be preferred over the other for a given data distribution. In this presentation, we tackle this question from a convex-geometric perspective and ask: for a given data source, what is the optimal regularizer induced by a convex body? To answer this, we pose a variational optimization problem over the space of convex bodies to search for the optimal regularizer. We analyze the structure of population risk minimizers by connecting minimization of the population objective to Minkowski's inequality for convex bodies. These results are also shown to be robust, as we establish convergence of empirical risk minimizers to the population risk minimizer in the limit of infinite data. Based on this characterization of optimal convex regularizers, we then consider whether convexity is the right structure for certain distributions and use our theory to characterize when convex or nonconvex structure should be preferred.
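As a rough sketch of the setup the abstract describes (the notation below is assumed for illustration, not taken from the talk itself): a convex body K induces a regularizer via its gauge (Minkowski functional), and one can then ask which body K gives the best regularizer for data drawn from a distribution p.

```latex
% Regularized inverse problem for a signal x with measurements y (notation assumed):
%   the regularizer induced by a convex body K is its gauge function.
\min_{x \in \mathbb{R}^d} \; \tfrac{1}{2}\,\lVert A x - y \rVert_2^2 \;+\; \lambda\, \lVert x \rVert_K,
\qquad
\lVert x \rVert_K \;:=\; \inf\{\, t > 0 \;:\; x \in t K \,\}.

% Variational search for an optimal regularizer over a class of convex bodies
% \mathcal{K}, for data x \sim p (a schematic population objective; the exact
% normalization on K is part of the talk's formulation):
\min_{K \in \mathcal{K}} \; \mathbb{E}_{x \sim p}\!\left[\, \lVert x \rVert_K \,\right]
\quad \text{subject to a normalization on the size of } K.
```

The normalization constraint is what makes the problem well posed: without it, inflating K shrinks the gauge of every point, so the objective alone would be driven to zero.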
