
Bio: I am currently a Postdoctoral Scholar at the AI for Science Lab at the California Institute of Technology. I received my PhD in 2023 from the School of Engineering and Applied Sciences at Harvard University, where I was also an affiliate of the Center for Brain Science. During my PhD, I also worked at Amazon AI and Microsoft as a Research Intern. I obtained my BASc with distinction in 2017 from the Department of Electrical and Computer Engineering at the University of Waterloo, where I received a President’s Scholarship and multiple First in Class Engineering Scholarships. My research has been recognized by multiple distinguished awards, including the Swartz Foundation Fellowship in Theoretical Neuroscience, the AWS Machine Learning Research Award, and the Harvard Quantitative Biology Student Fellowship. My research advances artificial intelligence for science and engineering, with a focus on computational and theoretical neuroscience.

Talk Title: Deep Interpretable Generative Learning for Science and Engineering

Abstract: Discriminative and generative AI represent two deep learning paradigms that have sparked a revolution in predicting and generating high-quality, realistic images from text prompts. Nonetheless, discriminative learning lacks the capacity to generate data, while deep generative models face challenges in decoding capabilities. A key challenge, which could bring the next breakthrough in AI, lies in the unification of these two paradigms. Moreover, both deep learning approaches are data-hungry and thus do not perform well in data-scarce applications. Furthermore, deep learning suffers from low interpretability: no comprehensive framework exists to describe how discriminative models construct non-trivial representations and predictions. These drawbacks have posed significant barriers to the adoption of deep learning in applications where a) acquiring supervised data is expensive or infeasible, and b) goals extend beyond data fitting to attaining scientific insights. In particular, deep learning applications remain largely unexplored in fields with rich mathematical and optimization frameworks, such as inverse problems, and in fields where interpretability matters. This talk discusses the theory and applications of deep learning in data-limited or unsupervised inverse problems, including applications in radar sensing, Poisson image denoising, and computational neuroscience.
