
Towards Provably Efficient Quantum Algorithms for Nonlinear Dynamics and Large-scale Machine Learning Models

Source: 04-07

Time: 2023-04-07 Fri 09:30-10:30

Venue: JCY-1; Tencent Meeting: 494 8360 9451 (Password: 2023)

Organizer: Zhengwei Liu

Speaker: Jin-Peng Liu (Simons Institute, UC Berkeley)

Abstract

Nonlinear dynamics play a prominent role in many domains and are notoriously difficult to solve. Whereas previous quantum algorithms for general nonlinear equations have been severely limited due to the linearity of quantum mechanics, we gave the first efficient quantum algorithm for nonlinear differential equations with sufficiently strong dissipation. This is an exponential improvement over the best previous quantum algorithms, whose complexity is exponential in the evolution time. We also established a lower bound showing that nonlinear differential equations with sufficiently weak dissipation have worst-case complexity exponential in time, giving an almost tight classification of the quantum complexity of simulating nonlinear dynamics. Furthermore, we designed end-to-end quantum machine learning algorithms, combining efficient quantum (stochastic) gradient descent with sparse state preparation and sparse state tomography. We benchmarked instances of training sparse ResNets with up to 103 million parameters, and identified that the dissipative and sparse regime in the early phase of fine-tuning could receive quantum enhancement. Our work showed that fault-tolerant quantum algorithms could potentially contribute to the scalability and sustainability of most state-of-the-art, large-scale machine learning models.

References:

[1] Liu et al. Efficient quantum algorithm for dissipative nonlinear differential equations, Proceedings of the National Academy of Sciences 118, 35 (2021), arXiv:2011.03185.

[2] Liu et al. Towards provably efficient quantum algorithms for large-scale machine learning models, arXiv:2303.03428.
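The key technique behind reference [1] is Carleman linearization: a nonlinear ODE is embedded into an (infinite) linear system over monomials of the state, which is then truncated; strong dissipation keeps the truncation error under control. The sketch below illustrates this classically for a hypothetical 1D dissipative equation du/dt = -a·u + b·u² (all parameter values are illustrative, not from the talk); it is a toy demonstration of the linearization, not the quantum algorithm itself.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 1D dissipative ODE: du/dt = -a*u + b*u^2, with b*u0/a < 1
a, b, u0, t = 2.0, 0.5, 0.5, 1.0

# Carleman linearization: the monomials y_k = u^k satisfy
#   dy_k/dt = -a*k*y_k + b*k*y_{k+1},
# an infinite upper-bidiagonal linear system; truncate at order N.
N = 8
A = np.zeros((N, N))
for k in range(1, N + 1):
    A[k - 1, k - 1] = -a * k          # diagonal: dissipation term
    if k < N:
        A[k - 1, k] = b * k           # superdiagonal: nonlinear coupling
y0 = np.array([u0 ** k for k in range(1, N + 1)])

# Evolve the truncated linear system; u(t) is the first component
u_carleman = (expm(A * t) @ y0)[0]

# Closed-form check: v = 1/u satisfies the linear ODE v' = a*v - b
v = (1.0 / u0 - b / a) * np.exp(a * t) + b / a
u_exact = 1.0 / v
print(abs(u_carleman - u_exact))
```

In the dissipative regime the truncation error decays roughly like (b·u0/a)^N, which is why the linearized system tracks the true solution closely here; the quantum algorithm of [1] solves the resulting linear system with a quantum linear-ODE solver.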


Speaker Intro

Jin-Peng is a Simons Quantum Postdoctoral Fellow at the Simons Institute, UC Berkeley, in 2022-2023 (hosted by Umesh Vazirani and Lin Lin). He will be a Postdoctoral Associate at the Center for Theoretical Physics, MIT, in 2023-2024 (hosted by Aram Harrow). He received a Ph.D. in applied mathematics from the University of Maryland in spring 2022 (advised by Andrew Childs). He received the NSF QISE-NET Triplet Award in 2021. He received a B.S. in mathematics from Beihang University and the Chinese Academy of Sciences Hua Loo Keng Class (supervised by Ya-xiang Yuan and Cong Sun). His research focuses on Quantum for Science. He aims to develop, analyze, and optimize provably efficient quantum algorithms for computational challenges in the natural and data sciences, including quantum simulation, quantum ODE/PDE solvers, q-sampling, and quantum gradient descent, toward end-to-end applications in areas such as quantum chemistry, biology and epidemiology, fluid dynamics, finance, statistics, optimization, and machine learning.
