Part of the Autumn 2023 Distinguished Speaker Series.

Distributional assumptions are ubiquitous in machine learning and underlie the development of core algorithms for supervised learning. Testing even the most basic distributional assumptions, however, is often statistically intractable.  In these scenarios, it is unclear how to verify the performance of an efficient learning algorithm on a given training set, undercutting the entire notion of provable correctness.

This motivates the recently introduced model of Testable Learning, due to Rubinfeld and Vasilyan, where the goal is to replace hard-to-verify distributional assumptions with efficiently testable ones and to require that the learner succeed whenever the unknown distribution passes the corresponding test. In this talk, we discuss a series of works establishing a powerful framework for algorithm design in this model and show, surprisingly, that the sample complexity is tightly characterized by the Rademacher complexity of the underlying function class.
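To make the tester-learner contract concrete, here is a minimal sketch in Python. Everything below is illustrative rather than taken from the talk: the moment-based tester is a crude stand-in for the far more refined tests used in this line of work, and the function names are hypothetical.

    import numpy as np

    def moment_tester(sample: np.ndarray, tol: float = 0.1) -> bool:
        # Efficiently checkable surrogate for the assumption that the
        # marginal distribution is standard Gaussian: accept only if the
        # empirical means and variances are close to 0 and 1. (Actual
        # testers in this literature match many more low-degree moments.)
        mean_ok = np.all(np.abs(sample.mean(axis=0)) < tol)
        var_ok = np.all(np.abs(sample.var(axis=0) - 1.0) < tol)
        return bool(mean_ok and var_ok)

    def testable_learner(sample, labels, learner):
        # The tester-learner contract: run the test first, and learn only
        # on accepted samples. Soundness asks that the learner's error
        # guarantee hold whenever the test passes; completeness asks that
        # genuine samples from the assumed distribution pass with high
        # probability.
        if not moment_tester(sample):
            raise ValueError("distributional test rejected the sample")
        return learner(sample, labels)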
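As for the characterization itself, the statistic in question is the standard empirical Rademacher complexity of a function class $\mathcal{F}$ on a sample $S = (x_1, \dots, x_m)$, recalled here in textbook form (not specific to these works):

    \widehat{\mathfrak{R}}_S(\mathcal{F}) = \mathbb{E}_{\sigma}\left[\sup_{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^{m} \sigma_i f(x_i)\right], \qquad \sigma_1, \dots, \sigma_m \ \text{i.i.d. uniform on } \{-1, +1\}.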

Time permitting, we will discuss some core research directions at IFML, the National AI Institute for Foundations of Machine Learning (ifml.institute).

Bio: Adam Klivans is a recipient of the NSF CAREER Award. His research interests lie in machine learning and theoretical computer science, in particular, Learning Theory, Computational Complexity, Pseudorandomness, Limit Theorems, and Gaussian Space. He also serves on the editorial boards of Theory of Computing and the Machine Learning journal.

Agenda

Friday, October 13, 2023

12:00 pm–12:30 pm

Lunch

Lunch will be provided on a first-come, first-served basis.

12:30 pm–1:30 pm

Talk and Q&A
