
Quantum speedup of Monte Carlo methods and Markov Chains


Time: Thursday, May 11, 2023, 15:00-16:00

Venue: Tencent Meeting 394-2709-5975

Organizer: Applied and Computational Mathematics Team

Speaker: Jin-Peng Liu, Center for Theoretical Physics, MIT

Abstract

Sampling from a given distribution is a fundamental computational problem and has broad applications in statistics, machine learning, physics, etc. We systematically investigate the quantum speedup of Monte Carlo methods, quantum mean estimation, and fast-forwarding of reversible Markov chains. We develop quantum algorithms for sampling log-concave distributions (with density e^{-f(x)} for convex f(x)) and for estimating their normalizing constants, achieving polynomial speedups in query complexity over the best-known classical algorithms. This is a joint work with Andrew M. Childs, Tongyang Li, Chunhao Wang, and Ruizhe Zhang.
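For context, the classical baselines being sped up are Markov chain Monte Carlo samplers for log-concave densities proportional to e^{-f(x)} with convex f. Below is a minimal sketch of one standard classical method of this kind, the unadjusted Langevin algorithm; the quadratic potential, step size, and iteration count are illustrative assumptions, and this is not the quantum algorithm presented in the talk.

```python
import numpy as np

# Illustrative classical baseline: unadjusted Langevin dynamics for sampling
# a log-concave density pi(x) proportional to exp(-f(x)) with convex f.
# The potential f and all parameters below are hypothetical choices for this sketch.

def f(x):
    """Convex potential: a simple quadratic, so pi is a standard Gaussian."""
    return 0.5 * np.dot(x, x)

def grad_f(x):
    """Gradient of the quadratic potential above."""
    return x

def langevin_sampler(dim=2, n_steps=10_000, step=1e-2, rng=None):
    """Run unadjusted Langevin dynamics and return all visited points."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(dim)
    samples = np.empty((n_steps, dim))
    for t in range(n_steps):
        noise = rng.standard_normal(dim)
        # Gradient step toward high density plus Gaussian noise of matching scale.
        x = x - step * grad_f(x) + np.sqrt(2.0 * step) * noise
        samples[t] = x
    return samples

if __name__ == "__main__":
    samples = langevin_sampler()
    # For the quadratic potential, the empirical mean (after burn-in) should be near 0.
    print(samples[2000:].mean(axis=0))
```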

Reference:

[1] Quantum algorithms for sampling log-concave distributions and estimating normalizing constants.

https://arxiv.org/abs/2210.06539


Speaker

Jin-Peng Liu is a Simons Quantum Postdoctoral Fellow at the Simons Institute, UC Berkeley, in 2022-2023 (hosted by Umesh Vazirani and Lin Lin). He will be a Postdoctoral Associate at the Center for Theoretical Physics, MIT, in 2023-2024 (hosted by Aram Harrow). He received a Ph.D. in applied mathematics from the University of Maryland in spring 2022 (advised by Andrew Childs). He received a B.S. in mathematics from the Hua Loo Keng Class at Beihang University and the Chinese Academy of Sciences (supervised by Ya-xiang Yuan and Cong Sun).

Jin-Peng has served as an editor of the journal Quantum since 2023. He received the NSF QISE-NET Triplet Award in 2021. His research focuses on Quantum for Science: he aims to develop, analyze, and optimize provably efficient quantum algorithms for computational challenges in the natural and data sciences, including quantum simulation, quantum ODE/PDE solvers, q-sampling, and quantum machine learning.
