Bio: Misha Khodak is a PhD student at Carnegie Mellon University advised by Nina Balcan and Ameet Talwalkar. He studies the foundations and applications of machine learning, with a particular focus on designing and meta-learning algorithms—from statistical estimators to numerical solvers to online policies—that can take advantage of multi-instance data. He has also led the push to develop automated machine learning methods for diverse tasks and has worked on model compression, neural architecture search, and natural language processing. Misha is a recipient of the Facebook PhD Fellowship and has interned at Microsoft Research, Google Research, the Lawrence Livermore National Lab, and the Princeton Plasma Physics Lab. Previously, he received an AB in Mathematics and an MSE in Computer Science from Princeton University.

Talk Title: ARUBA: Efficient and Adaptive Meta-learning with Provable Guarantees

Abstract: Meta-learning has recently emerged as an important direction for multi-task learning, dynamic environments, and federated settings. We present a theoretical framework for designing practical meta-learning methods that integrates natural formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms. Our approach, which works by learning surrogate losses bounding the within-task regret of base learning algorithms, enables the task-similarity to be learned adaptively and leads to straightforward derivations of average-case regret bounds that improve if the tasks are similar. We highlight how our theory can be extended to numerous settings, especially for deriving multi-task guarantees for bandit algorithms.
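The core idea in the abstract—initialize each task's base learner using information aggregated from earlier tasks, so that within-task regret shrinks when tasks are similar—can be illustrated with a toy sketch. This is not the ARUBA algorithm itself: the quadratic losses, the follow-the-leader-style meta-update (averaging previous tasks' optima), and all constants below are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rounds, num_tasks = 5, 50, 20

# Assumed task-similarity model: each task's optimum is a small
# perturbation of a shared (unknown) center.
center = rng.normal(size=d)
task_optima = [center + 0.1 * rng.normal(size=d) for _ in range(num_tasks)]

def within_task_regret(init, opt, lr=0.1):
    """Run online gradient descent on losses f_t(w) = 0.5*||w - opt||^2,
    starting from `init`, and return regret against the comparator `opt`
    (whose loss is zero each round)."""
    w, regret = init.copy(), 0.0
    for _ in range(rounds):
        regret += 0.5 * np.sum((w - opt) ** 2)
        w -= lr * (w - opt)  # gradient of f_t at w
    return regret

# Meta-learner: before each task, set the initialization to the mean of
# previous tasks' optima (a follow-the-leader-style update on a surrogate
# bound). Compare against always starting from a fixed point (zero).
meta_init = np.zeros(d)
meta_regrets, fixed_regrets = [], []
for i, opt in enumerate(task_optima):
    meta_regrets.append(within_task_regret(meta_init, opt))
    fixed_regrets.append(within_task_regret(np.zeros(d), opt))
    meta_init = np.mean(task_optima[: i + 1], axis=0)

# After a few tasks, the adapted initialization yields much lower
# average within-task regret than the fixed one.
print(np.mean(meta_regrets[5:]), np.mean(fixed_regrets[5:]))
```

Because the task optima cluster tightly, the meta-learned initialization lands close to every new task's optimum, so per-task regret drops sharply after the first few tasks—the qualitative behavior the average-case regret bounds capture.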
