
To build a responsible data economy and protect data ownership, it is crucial to enable learning models from separate, heterogeneous data sources without centralizing the data. For example, federated learning aims to train models across massive networks of remote devices or isolated organizations while keeping user data local. However, federated networks introduce unique challenges, including extreme communication costs, privacy constraints, and heterogeneity in both data and systems.
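To make the setting concrete, here is a minimal sketch of federated averaging (FedAvg), a standard baseline in which each client trains locally on its own data and a server averages the resulting models, weighted by client dataset size. This is purely illustrative and not the speaker's method; the model, data, learning rate, and step counts are hypothetical placeholders.

```python
# Illustrative FedAvg sketch: local training + weighted server averaging.
# All data, model, and hyperparameter choices below are hypothetical.
import numpy as np

def local_update(w, X, y, lr=0.02, steps=10):
    """Run a few gradient-descent steps on one client's squared loss."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: local updates, then size-weighted average."""
    n_total = sum(len(y) for _, y in clients)
    updates = [local_update(w_global, X, y) for X, y in clients]
    return sum(len(y) / n_total * w for (_, y), w in zip(clients, updates))

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])

# Two heterogeneous clients: different feature scales and dataset sizes.
clients = []
for scale, n in [(1.0, 50), (3.0, 20)]:
    X = rng.normal(scale=scale, size=(n, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
```

Note that only model parameters travel between clients and the server; the raw data `X, y` never leaves each client, which is the core constraint federated learning is built around.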

Motivated by applications of federated learning, my work aims to develop simple, principled methods for scalable and trustworthy learning in heterogeneous networks. In this talk, I discuss how heterogeneity affects federated optimization and how it lies at the center of accuracy and trustworthiness constraints in federated learning. To address these concerns, I present scalable federated learning objectives and algorithms that rigorously account for and directly model these practical constraints. I will also explore trustworthy objectives and optimization methods for general ML problems beyond federated settings.

Bio: Tian Li is a fifth-year Ph.D. student in the Computer Science Department at Carnegie Mellon University working with Virginia Smith. Her research interests are in distributed optimization, federated learning, and trustworthy ML. Prior to CMU, she received her undergraduate degrees in Computer Science and Economics from Peking University. She received the Best Paper Award at the ICLR Workshop on Security and Safety in Machine Learning Systems, was invited to participate in the EECS Rising Stars Workshop, and was recognized as a Rising Star in Machine Learning/Data Science by multiple institutions.
