
Consider a map F: U → V. Given data pairs {u_j, F(u_j)}, the goal of supervised learning is to approximate F. Neural networks have shown considerable success in addressing this problem in settings where U is a finite-dimensional Euclidean space and where V is either a finite-dimensional Euclidean space (regression) or a set of finite cardinality (classification). Motivated by the need for surrogate modeling and for scientific discovery, we focus on the design and analysis of algorithms which address supervised learning in settings where U and V comprise spaces of functions; thus F is an operator. The talk describes emerging methodology in this area, emerging theory which underpins the methodology, and numerical experiments which elucidate the efficiency of different approaches. Various applications from continuum mechanics are described, including the Navier-Stokes equation, the Helmholtz equation, nonlinear elasticity, and the advection equation.
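The setup above — learning F: U → V from pairs {u_j, F(u_j)} where u_j are functions — can be illustrated with a minimal sketch. This is not the methodology of the talk, just an assumed toy example: the operator is differentiation d/dx, the functions are discretized on a uniform grid, and the surrogate is a linear map fit by least squares.

```python
import numpy as np

# Toy operator learning (illustrative only, not the talk's method):
# learn F = d/dx from data pairs {u_j, F(u_j)} of discretized functions.
rng = np.random.default_rng(0)
n = 64                                   # grid points on [0, 2*pi)
x = np.linspace(0, 2 * np.pi, n, endpoint=False)

def sample_function():
    """Random smooth function built from a few Fourier modes, plus its exact derivative."""
    a, b = rng.normal(size=3), rng.normal(size=3)
    k = np.arange(1, 4)
    u = sum(a[i] * np.sin(k[i] * x) + b[i] * np.cos(k[i] * x) for i in range(3))
    du = sum(a[i] * k[i] * np.cos(k[i] * x) - b[i] * k[i] * np.sin(k[i] * x)
             for i in range(3))
    return u, du

# Training data {u_j, F(u_j)}, one function per row.
U, V = map(np.array, zip(*(sample_function() for _ in range(200))))

# Fit a discretized surrogate operator W (an n-by-n matrix): U @ W ≈ V.
W, *_ = np.linalg.lstsq(U, V, rcond=None)

# Evaluate on a fresh function the surrogate never saw.
u_test, du_test = sample_function()
rel_err = np.linalg.norm(u_test @ W - du_test) / np.linalg.norm(du_test)
```

Because the training functions span a low-dimensional subspace, the least-squares surrogate recovers the operator essentially exactly on new samples from the same distribution; the methodology discussed in the talk targets the far harder regime of genuinely infinite-dimensional function spaces and nonlinear operators.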

Bio: Andrew Stuart has research interests in applied and computational mathematics, and is interested in particular in the question of how to optimally combine complex mechanistic models with data. He joined Caltech in 2016 as Bren Professor of Computing and Mathematical Sciences, after 17 years as Professor of Mathematics at the University of Warwick (1999–2016). Prior to that he was on the faculty in the Departments of Computer Science and Mechanical Engineering at Stanford University (1992–1999), and in the Mathematics Department at Bath University (1989–1992). He obtained his PhD from the Oxford University Computing Laboratory in 1986, and held postdoctoral positions in Mathematics at Oxford University and at MIT in the period 1986–1989.

This event is sponsored by the Committee on Applied Math (CAM), the DSI AI+Science Research Initiative, and the National Science Foundation (NSF).

Host: Daniel Sanz-Alonso, Department of Statistics