
Bio: Liyue Shen is a final-year Ph.D. candidate in Electrical Engineering at Stanford University, co-advised by Professor John Pauly and Professor Lei Xing. Her research focuses on Medical AI, spanning the interdisciplinary areas of AI/ML, computer vision, biomedical imaging, and data science. Her dissertation develops efficient AI/ML-driven computational algorithms and techniques for biomedical imaging and informatics to tackle real-world biomedicine and healthcare problems through engineering and data science. Her work has been published in both computer vision conferences (ICCV, CVPR) and medical journals (Nature Biomedical Engineering, IEEE TMI, MedIA). She is a recipient of the Stanford Bio-X Bowes Graduate Student Fellowship (2019-2021) and was selected as a Rising Star in EECS by MIT (2021). She co-organized the Women in Machine Learning (WiML) Workshop at ICML 2021 and the Machine Learning for Healthcare (ML4H) Workshop at NeurIPS 2021. Prior to her Ph.D., Liyue received her bachelor's degree in Electronic Engineering from Tsinghua University.

Talk Title: Exploiting Prior Knowledge in Physical World Incorporated with Machine Learning for Solving Medical Imaging Problems

Talk Abstract: Medical imaging is crucial for image-guided clinical patient care. In my interdisciplinary research in medical AI, I develop efficient machine learning algorithms for medical imaging that exploit prior knowledge from the physical world (exploit what you know) and incorporate it into machine learning models.

I present two main directions of my research. First, data-driven machine learning methods often suffer from limitations in generalizability, reliability, and interpretability. By exploiting geometry and physics priors from the imaging system, I proposed physics-aware and geometry-informed deep learning frameworks for radiation-reduced sparse-view CT and accelerated MR imaging. Incorporating these priors, the trained deep networks generalize more robustly across patients and are easier to interpret. Second, patients are often scanned serially over time during clinical treatment, so earlier images provide abundant prior knowledge of the patient's anatomy. Motivated by this unique characteristic of medical images, I proposed a prior embedding method that encodes the internal information of image priors through coordinate-based neural representation learning. Because this method requires no training data from external subjects, it eases the burden of data collection and generalizes readily across imaging modalities and anatomies. Building on this, I developed a novel temporal neural representation learning algorithm for longitudinal studies. Combining both physics priors and image priors, I showed that the proposed algorithm can capture subtle yet significant structural changes such as tumor progression in sparse-sampling image reconstruction, which can be applied to tackle real-world challenges in cancer patient treatment and radiation therapy.
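To make the prior-embedding idea concrete, below is a minimal sketch in PyTorch of coordinate-based neural representation learning with an image prior. It illustrates the spirit of the two-stage approach described above, not the author's implementation: the network architecture, Fourier-feature encoding, training schedule, toy images, and the random sampling mask used as a stand-in forward model are all illustrative assumptions, whereas a real application would use the actual CT or MRI physics operator and clinical scans.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Maps 2D coordinates to image intensities via random Fourier features + MLP (assumed design)."""
    def __init__(self, n_freqs=64, hidden=256, scale=10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(2, n_freqs) * scale)
        self.net = nn.Sequential(
            nn.Linear(2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):                        # coords: (N, 2) in [0, 1]
        proj = 2 * torch.pi * coords @ self.B
        feats = torch.cat([proj.sin(), proj.cos()], dim=-1)
        return self.net(feats).squeeze(-1)            # (N,) predicted intensities

def grid_coords(h, w):
    ys, xs = torch.meshgrid(torch.linspace(0, 1, h),
                            torch.linspace(0, 1, w), indexing="ij")
    return torch.stack([xs, ys], dim=-1).reshape(-1, 2)

# Toy stand-ins for real scans: a prior image (earlier scan) and a current image
# with a small local change (e.g., simulated tumor progression).
H = W = 64
prior_image = torch.rand(H, W)
current_image = prior_image.clone()
current_image[20:28, 20:28] += 0.5
coords = grid_coords(H, W)

model = CoordinateMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: prior embedding. Fit the network to the earlier scan so its weights
# encode patient-specific anatomy, with no training data from external subjects.
for _ in range(500):
    opt.zero_grad()
    loss = ((model(coords) - prior_image.reshape(-1)) ** 2).mean()
    loss.backward()
    opt.step()

# Stage 2: sparse-sampling reconstruction. Fine-tune against the few available
# measurements of the new scan through a forward model; a random sampling mask
# stands in here for the real CT/MRI physics operator.
mask = torch.rand(H * W) < 0.1                        # ~10% of locations measured
measurements = current_image.reshape(-1)[mask]
for _ in range(500):
    opt.zero_grad()
    loss = ((model(coords)[mask] - measurements) ** 2).mean()
    loss.backward()
    opt.step()

reconstruction = model(coords).detach().reshape(H, W)
```

In this sketch the prior is carried by the network weights rather than by an external training set, which is why the same two-stage procedure can, in principle, be reused across modalities and anatomies by swapping in the appropriate forward operator.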
