Talk Title: Asymptotically Optimal Exact Minibatch Metropolis-Hastings

Talk Abstract: Metropolis-Hastings (MH) is one of the most fundamental Bayesian inference algorithms, but it can be intractable on large datasets because each step requires computation over the entire dataset. In this talk, I will discuss minibatch MH methods, which use subsamples of the data to enable scaling. First, I will survey existing minibatch MH methods and demonstrate that inexact methods (i.e., methods that may change the target distribution) can cause arbitrarily large errors in inference. Then, I will introduce a new exact minibatch MH method, TunaMH, which exposes a tunable trade-off between its batch size and its theoretically guaranteed convergence rate. Finally, I will present a lower bound on the batch size that any minibatch MH method must use to retain exactness while guaranteeing fast convergence (the first such bound for minibatch MH), and show that TunaMH is asymptotically optimal in terms of batch size.
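For context, here is a minimal, illustrative sketch of the distinction the abstract draws: a standard MH step, whose acceptance test sums log-likelihoods over every data point, next to a naive minibatch variant that rescales a subsample's log-likelihood ratio. All function names and the model interface are hypothetical, and the sketch assumes i.i.d. data with a symmetric proposal; the naive variant is an example of the inexact methods the talk critiques, not the TunaMH algorithm itself.

```python
import numpy as np

def log_joint(theta, data, log_prior, log_lik):
    """Unnormalized log posterior: prior plus a sum over all data points."""
    return log_prior(theta) + sum(log_lik(theta, x) for x in data)

def mh_step(theta, data, propose, log_prior, log_lik, rng):
    """Exact MH step (symmetric proposal): the acceptance ratio
    requires a full pass over the dataset at every iteration."""
    theta_new = propose(theta, rng)
    log_alpha = (log_joint(theta_new, data, log_prior, log_lik)
                 - log_joint(theta, data, log_prior, log_lik))
    return theta_new if np.log(rng.random()) < log_alpha else theta

def naive_minibatch_mh_step(theta, data, propose, log_prior, log_lik,
                            batch_size, rng):
    """Inexact minibatch MH (hypothetical sketch, not TunaMH):
    estimates the log-likelihood ratio from a random subsample.
    The noisy acceptance test changes the stationary distribution,
    which is the kind of error the talk shows can be arbitrarily large."""
    theta_new = propose(theta, rng)
    idx = rng.choice(len(data), size=batch_size, replace=False)
    scale = len(data) / batch_size  # rescale subsample sum to full-data scale
    log_alpha = (log_prior(theta_new) - log_prior(theta)
                 + scale * sum(log_lik(theta_new, data[i])
                               - log_lik(theta, data[i]) for i in idx))
    return theta_new if np.log(rng.random()) < log_alpha else theta
```

As the abstract describes, an exact minibatch method must keep the target distribution unchanged while using only a subsample per step; TunaMH achieves this with a batch size that trades off against a guaranteed convergence rate.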

Bio: Ruqi Zhang is a fifth-year Ph.D. student in Statistics at Cornell University, advised by Professor Chris De Sa. Her research interests lie in probabilistic modeling for data science and machine learning. She currently focuses on developing fast and robust inference methods with theoretical guarantees, and on applying them to modern model architectures, such as deep neural networks, on real-world big data. Her work has been published in top machine learning venues such as NeurIPS, ICLR, and AISTATS, and has been recognized with an Oral Award at ICLR and two Spotlight Awards at NeurIPS.
