Bio: Linyi Li is a fifth-year Ph.D. student in the Computer Science Department at the University of Illinois Urbana-Champaign, advised by Prof. Bo Li and Prof. Tao Xie. Linyi’s research lies at the intersection of machine learning and computer security. His recent work focuses on building certifiably trustworthy deep learning systems at scale, achieving state-of-the-art certifiable robustness against noise perturbations, semantic perturbations, poisoning attacks, and distributional shift, as well as state-of-the-art certifiable fairness. His research is published at top-tier deep learning and computer security conferences, including ICML, NeurIPS, ICLR, S&P, and CCS. Linyi is a recipient of the AdvML Rising Star Award and the Wing Kai Cheng Fellowship, and a finalist for the Two Sigma PhD Fellowship and the Qualcomm Innovation Fellowship. Previously, Linyi received his bachelor’s degree in computer science with distinction from Tsinghua University in 2018.

Talk Title: Enabling Large-Scale Certifiable Deep Learning towards Trustworthy Machine Learning

Talk Abstract: Given the rising societal safety and ethical concerns about modern deep learning systems in deployment, there is an urgent demand for certifiable large-scale deep learning systems that meet real-world requirements. This talk will introduce our series of work on building certifiable large-scale deep learning systems towards trustworthy machine learning, achieving robustness against noise perturbations, semantic transformations, poisoning attacks, and distributional shifts; fairness; and reliability against numerical defects. I will also present applications of these certifiable methods in modern deep reinforcement learning and computer vision systems. These works were recently published at machine learning and security conferences such as ICML, ICLR, NeurIPS, CCS, and S&P. I will then introduce the shared core backbone for designing certifiable deep learning systems, including threat-model-dependent smoothing, efficient and exact model state abstraction, statistical worst-case characterization, and diversity-enhanced model training. These backbone methodologies not only enable us to achieve state-of-the-art certified trustworthiness under multiple existing notions and outperform existing baselines (where they exist) by a large margin, but also point towards a promising direction for achieving certified and “meta-trustworthy” ML. At the end of the talk, I will summarize several challenges in certifiable ML, including scalability, tightness, deployment, the gap between theory and practice, and the societal implications of certifiably trustworthy ML. The talk will conclude with highlighted future directions for research and applications.
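For readers unfamiliar with the smoothing-based certification the abstract alludes to, below is a minimal Python sketch of randomized smoothing in the style of Cohen et al. (ICML 2019), a standard technique in this area. It is illustrative only, not the speaker's specific method; the function name certify, the placeholder base_classifier, and all parameter defaults are assumptions made for this sketch.

import numpy as np
from scipy.stats import norm, binomtest

def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001):
    """Certify an L2 robustness radius around input x via randomized smoothing.

    base_classifier: maps a batch of inputs to integer class labels (assumed).
    sigma: std of the Gaussian smoothing noise (a threat-model-dependent choice).
    n: number of Monte Carlo noise samples.
    alpha: failure probability of the statistical certificate.
    """
    # Sample predictions of the base classifier under Gaussian noise.
    noise = np.random.randn(n, *x.shape) * sigma
    preds = np.asarray(base_classifier(x[None, ...] + noise), dtype=int)
    top_class = np.bincount(preds).argmax()
    count = int((preds == top_class).sum())
    # One-sided lower confidence bound on P[f(x + noise) = top_class],
    # via the exact (Clopper-Pearson) two-sided interval at level 1 - 2*alpha.
    p_lower = binomtest(count, n).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact").low
    if p_lower > 0.5:
        # Certified L2 radius: sigma * Phi^{-1}(p_lower).
        return top_class, sigma * norm.ppf(p_lower)
    return None, 0.0  # abstain: cannot certify at this confidence level

Note that production implementations typically split the Monte Carlo samples into a small selection batch (to guess the top class) and a larger estimation batch (to bound its probability), which this simplified sketch omits.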
