EECS Rising Stars 2023

Thuy-Duong Vuong

Towards optimal sampling algorithms

Research Abstract:

Many challenging computational problems can be solved using sampling, in particular via the Markov chain Monte Carlo (MCMC) method, which estimates desired quantities using samples from a suitable (often very complex) distribution. Examples include numerical integration, Bayesian inference, and computing equilibrium estimates of physical systems, as well as novel recent applications in generative AI. In MCMC, a Markov chain moves between elements of the state space according to a probabilistic rule chosen so that the chain converges to a steady-state distribution that is exactly the target distribution. Aside from being a powerful computational tool, Markov chains have also been studied as natural models for the evolution of a physical system; in particular, they have a deep connection to statistical physics concepts such as phase transitions and spatial mixing. Despite the long history of research on Markov chains, and much recent activity in the area, many challenges remain. I plan to push forward the boundaries of this field in the following directions.

1. Optimally bounding mixing time: The central question about a Markov chain is its mixing time, i.e., the number of steps needed to reach the steady-state distribution. Each Markov chain step contracts the entropy of an arbitrary test function by some factor, and the mixing time is roughly the inverse of this entropy contraction factor. Previous analyses have typically obtained suboptimal bounds on the mixing time because they could only bound the variance contraction factor, that is, how much each Markov chain step contracts the variance of a test function. The entropy and variance contraction factors are conjectured to be of the same order in many cases. Is there a systematic way to boost variance contraction to entropy contraction without loss? (These two notions of contraction are made precise in the sketch following this abstract.)

2. Continuous vs. discrete Markov chains: Markov chains with discrete and continuous state spaces have both been widely studied, but using largely disjoint techniques. Recently, exchanging and uniting these techniques has resulted in novel algorithms: for example, I recently developed a fast parallel algorithm to sample from a discrete distribution using Langevin dynamics (a continuous process). More generally, the Glauber dynamics (a widely used discrete chain) is believed to be a discrete analog of the Langevin dynamics; illustrative sketches of both dynamics appear below. I plan to investigate more deeply the possibility of using techniques from discrete Markov chain analysis to solve problems in continuous domains, and vice versa.

3. Accelerating slow-mixing Markov chains using judicious initialization: It is well known empirically that judiciously choosing the initial state of a Markov chain simulation can significantly reduce the number of steps needed to reach the steady-state distribution, but existing analysis techniques are insensitive to the initial state. I plan to build a theory for analyzing Markov chains beyond worst-case initialization.
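To make the two notions of contraction in direction 1 concrete, the following is a standard formulation (the notation here is mine, not taken from the abstract). For a chain with transition kernel $P$, stationary distribution $\mu$, and a test function $f \ge 0$, define

$$\mathrm{Var}_\mu(f) = \mathbb{E}_\mu[f^2] - (\mathbb{E}_\mu f)^2, \qquad \mathrm{Ent}_\mu(f) = \mathbb{E}_\mu[f \log f] - \mathbb{E}_\mu[f] \log \mathbb{E}_\mu[f].$$

Variance (respectively, entropy) contraction with factor $\lambda$ (respectively, $\rho$) means

$$\mathrm{Var}_\mu(Pf) \le (1-\lambda)\,\mathrm{Var}_\mu(f), \qquad \mathrm{Ent}_\mu(Pf) \le (1-\rho)\,\mathrm{Ent}_\mu(f).$$

The resulting mixing-time bounds differ sharply in their dependence on the minimum stationary probability $\mu_{\min}$, roughly

$$t_{\mathrm{mix}} \lesssim \frac{1}{\lambda}\log\frac{1}{\mu_{\min}} \quad \text{versus} \quad t_{\mathrm{mix}} \lesssim \frac{1}{\rho}\log\log\frac{1}{\mu_{\min}},$$

which is why entropy contraction yields the sharper, often optimal, bound.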
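As a concrete instance of the discrete side of direction 2, below is a minimal sketch of Glauber dynamics for the Ising model, assuming a finite graph given as an adjacency list; the graph, inverse temperature, and step count are illustrative choices, not taken from the abstract. Each step resamples one uniformly random spin from its conditional distribution given all the other spins.

```python
import random
import math

def glauber_step(sigma, neighbors, beta):
    """One step of Glauber dynamics for the Ising model: pick a uniformly
    random site and resample its spin from the conditional Gibbs distribution
    given all other spins."""
    i = random.randrange(len(sigma))
    # Local field at site i: sum of neighboring spins.
    h = sum(sigma[j] for j in neighbors[i])
    # Conditional probability that spin i is +1: e^{beta h} / (e^{beta h} + e^{-beta h}).
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
    sigma[i] = 1 if random.random() < p_plus else -1

# Example: a 4-cycle at inverse temperature beta = 0.3.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
sigma = [random.choice([-1, 1]) for _ in range(4)]
for _ in range(1000):  # run well past the (here, tiny) mixing time
    glauber_step(sigma, neighbors, beta=0.3)
print(sigma)  # approximate sample from the Ising Gibbs distribution
```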
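On the continuous side, here is a minimal sketch of the unadjusted Langevin algorithm, a standard discretization of the Langevin dynamics mentioned above. This is the generic textbook algorithm, not the parallel sampler from my work; the target distribution, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def ula_sample(grad_log_pi, x0, step_size, n_steps, rng):
    """Unadjusted Langevin algorithm: Euler discretization of the diffusion
    dX_t = grad log pi(X_t) dt + sqrt(2) dB_t, whose stationary law is pi."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step_size * grad_log_pi(x) + np.sqrt(2.0 * step_size) * noise
    return x

# Example: sample from a standard Gaussian, where grad log pi(x) = -x.
rng = np.random.default_rng(0)
sample = ula_sample(lambda x: -x, x0=[5.0, 5.0], step_size=0.01,
                    n_steps=5000, rng=rng)
print(sample)  # close in distribution to N(0, I) after mixing
```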

Bio:

Thuy-Duong “June” Vuong is a fourth-year PhD student at Stanford University advised by Nima Anari and Moses Charikar. She completed her undergraduate degree at MIT in 2019, double majoring in Mathematics and Computer Science. Her research is in designing and analyzing sampling, counting, and optimization algorithms using the geometry of polynomials and high-dimensional expanders. Her research has been supported by a Microsoft graduate fellowship.