Bio: Omar Montasser is a fifth-year PhD student at TTI-Chicago, advised by Nathan Srebro. His main research interest is the theory of machine learning. Recently, his research has focused on understanding and characterizing adversarially robust learning, and on designing algorithms with provable robustness guarantees in different settings. His work has been recognized with a best student paper award at COLT 2019.
Talk Title: What, How, and When Can We Learn Adversarially Robustly?
Talk Abstract: In this talk, we will discuss the problem of learning an adversarially robust predictor from clean training data: that is, learning a predictor that performs well not only on future test instances, but also when those instances are corrupted adversarially. There has been much empirical interest in this question. In this talk, we will take a theoretical perspective and see how it leads to practically relevant insights, including the need to depart from an empirical (robust) risk minimization approach, and a consideration of what kinds of access and reductions can enable learning.