Lingxiao Wang
Talk Title: How to Preserve Privacy in Data Analysis?
Talk Abstract: The past decade has witnessed the tremendous success of large-scale data science. However, recent studies show that many powerful machine learning tools used in large-scale data science pose severe threats to personal privacy. Therefore, one of the major challenges in data analysis is how to learn effectively from enormous amounts of sensitive data without giving up privacy. Differential Privacy (DP) has recently emerged as a new gold standard for private data analysis due to the rigorous statistical privacy guarantees it provides for sensitive information. Nevertheless, adapting DP to data analysis remains challenging because of the complex models we often encounter in practice. In this talk, I will focus on two commonly used models for differentially private data analysis: the centralized model and the distributed/federated model. For the centralized model, I will present my efforts to provide strong privacy and utility guarantees in high-dimensional data analysis. For the distributed/federated model, I will discuss new efficient and effective privacy-preserving learning algorithms.
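As a concrete (illustrative, not from the talk) picture of the kind of guarantee DP offers, below is a minimal Python sketch of the classical Gaussian mechanism, which releases a query answer with calibrated noise so that the output satisfies (epsilon, delta)-differential privacy; the function name, the income example, and all parameter values are assumptions chosen for illustration only.

    import numpy as np

    def gaussian_mechanism(true_value, sensitivity, epsilon, delta, rng=None):
        """Release a noisy version of true_value satisfying (epsilon, delta)-DP.

        sensitivity is the L2 sensitivity of the query: the maximum change in
        the query output when one individual's record is added or removed.
        (Illustrative sketch; not code from the speaker's work.)
        """
        rng = np.random.default_rng() if rng is None else rng
        # Standard noise calibration for the Gaussian mechanism.
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        return true_value + rng.normal(0.0, sigma, size=np.shape(true_value))

    # Hypothetical example: privately release the mean of 1000 incomes clipped
    # to [0, 100], so the L2 sensitivity of the mean is 100 / n.
    incomes = np.clip(np.random.default_rng(0).normal(50, 15, size=1000), 0, 100)
    n = len(incomes)
    private_mean = gaussian_mechanism(incomes.mean(), sensitivity=100.0 / n,
                                      epsilon=1.0, delta=1e-5)
    print(f"true mean = {incomes.mean():.2f}, private mean = {private_mean:.2f}")

The noise scale grows as epsilon shrinks, making the privacy-utility trade-off explicit; the centralized and distributed/federated settings discussed in the talk differ mainly in where such noise is added and by whom.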
Bio: Lingxiao Wang is a final-year Ph.D. student in the Department of Computer Science at the University of California, Los Angeles, advised by Dr. Quanquan Gu. Previously, he obtained his M.S. degree in Statistics from the University of Washington. Lingxiao’s research interests lie broadly in machine learning, including privacy-preserving machine learning, optimization, deep learning, low-rank matrix recovery, high-dimensional statistics, and data mining. Lingxiao aims to apply his research for social good, and he is one of the core members of the Combating COVID-19 project (https://covid19.uclaml.org/).