
Bio: Lu Gan is currently a Postdoctoral Scholar at the California Institute of Technology, working with Soon-Jo Chung and Yisong Yue. She completed her Ph.D. in robotics at the University of Michigan, where she was co-advised by Ryan Eustice, Jessy Grizzle, and Maani Ghaffari. She is broadly interested in robotics, computer vision, and machine learning. Her current research focuses on perception and navigation for robot autonomy in unstructured environments and adverse conditions. Her work has been published in premier robotics conferences and journals such as ICRA, IROS, BMVC, IEEE RA-L, and IEEE T-RO. More details can be found at https://ganlumomo.github.io/.

Talk Title: Semantic-Aware Robotic Mapping in Unknown, Loosely Structured Environments

Talk Abstract: Robotic mapping is the problem of inferring a representation of a robot’s surroundings from noisy measurements as it navigates through an environment. As robotic systems move toward more challenging behaviors in more complex scenarios, they require richer maps so that the robot understands the significance of the scene and the objects within it. This talk focuses on semantic-aware robotic mapping in unknown, loosely structured environments. The first part presents a Bayesian kernel inference semantic mapping framework that formulates a unified probabilistic model for occupancy and semantics and provides a closed-form solution for scalable dense semantic mapping. This framework significantly reduces the computational complexity of learning-based continuous semantic mapping while achieving high accuracy. Next, I will present a novel and flexible multi-task, multi-layer Bayesian mapping framework that provides even richer environmental information. A two-layer robotic map of semantics and traversability is built as a constructive example, and the framework is readily extendable to additional layers as needed. Both mapping algorithms were verified using publicly available datasets or through experiments on a Cassie-series bipedal robot. Finally, instead of modeling terrain traversability with metrics defined by domain knowledge, an energy-based deep inverse reinforcement learning method that learns robot-specific traversability from demonstrations will be presented. This method incorporates robot proprioception and can learn reward maps that lead to more energy-efficient future trajectories. Experiments are conducted on a dataset collected by a Mini-Cheetah robot in various scenes of a campus environment.
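To give a flavor of the closed-form update such a Bayesian kernel inference framework can admit, the sketch below assumes a Dirichlet-Categorical model per map cell and a spatial kernel k(·,·); the symbols and the specific form are illustrative assumptions, not necessarily the exact formulation used in the talk. Each cell keeps one concentration parameter per semantic class, and each labeled measurement contributes a kernel-weighted pseudo-count to nearby cells, so the semantic posterior is updated without iterative optimization:

\[
\alpha_k^{*} \;=\; \alpha_0 \;+\; \sum_{i=1}^{N} k(x_{*}, x_i)\,\mathbf{1}\{y_i = k\},
\qquad
\mathbb{E}\!\left[\theta_k \mid \mathcal{D}\right] \;=\; \frac{\alpha_k^{*}}{\sum_{j} \alpha_j^{*}},
\]

where \(x_{*}\) is the query cell, \((x_i, y_i)\) are measurement locations and semantic labels, \(\alpha_0\) is a prior pseudo-count, and \(\theta_k\) is the probability of class \(k\) at the cell. Because the update is a weighted counting operation with a compactly supported kernel, it scales to dense maps, which is the property the abstract highlights.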
