
Towards Provably Efficient Quantum Algorithms for Nonlinear Dynamics and Large-scale Machine Learning Models

Time: 2023-04-07 Fri 09:30-10:30

Venue: JCY-1; Tencent Meeting: 494 8360 9451 (PW: 2023)

Organizer: Zhengwei Liu

Speaker: Jin-Peng Liu (Simons Institute, UC Berkeley)

Abstract

Nonlinear dynamics play a prominent role in many domains and are notoriously difficult to solve. Whereas previous quantum algorithms for general nonlinear equations have been severely limited by the linearity of quantum mechanics, we gave the first efficient quantum algorithm for nonlinear differential equations with sufficiently strong dissipation. This is an exponential improvement over the best previous quantum algorithms, whose complexity is exponential in the evolution time. We also established a lower bound showing that nonlinear differential equations with sufficiently weak dissipation have worst-case complexity exponential in time, giving an almost tight classification of the quantum complexity of simulating nonlinear dynamics. Furthermore, we designed end-to-end quantum machine learning algorithms that combine efficient quantum (stochastic) gradient descent with sparse state preparation and sparse state tomography. We benchmarked instances of training sparse ResNet models with up to 103 million parameters, and identified that the dissipative and sparse regime in the early phase of fine-tuning could receive quantum enhancement. Our work showed that fault-tolerant quantum algorithms could potentially contribute to the scalability and sustainability of most state-of-the-art, large-scale machine learning models.

References:
[1] Liu et al., Efficient quantum algorithm for dissipative nonlinear differential equations, Proceedings of the National Academy of Sciences 118, 35 (2021); arXiv:2011.03185.
[2] Liu et al., Towards provably efficient quantum algorithms for large-scale machine learning models; arXiv:2303.03428.
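For context, the setting of [1] can be summarized as follows (this is a paraphrase from that paper, not part of the abstract; consult [1] for the precise assumptions, e.g., on the matrix F_1). The algorithm targets n-dimensional quadratic ODEs

\[
\frac{\mathrm{d}u}{\mathrm{d}t} = F_2\, u^{\otimes 2} + F_1\, u + F_0(t), \qquad u(0) = u_{\mathrm{in}} \in \mathbb{R}^n,
\]

where the eigenvalues \(\lambda_j\) of \(F_1\) satisfy \(\mathrm{Re}(\lambda_1) < 0\), and the strength of dissipation relative to the nonlinearity and forcing is captured by the ratio

\[
R = \frac{1}{|\mathrm{Re}(\lambda_1)|}\left( \|u_{\mathrm{in}}\|\,\|F_2\| + \frac{\|F_0\|}{\|u_{\mathrm{in}}\|} \right).
\]

The quantum algorithm of [1] is efficient when \(R < 1\) (sufficiently strong dissipation), while the lower bound establishes worst-case hardness for \(R \ge \sqrt{2}\); this is the "almost tight classification" referred to in the abstract. Gradient-descent training dynamics can be cast as dissipative ODEs of a similar flavor, which is the bridge to the machine learning results of [2].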


Speaker Intro

Jin-Peng Liu is a Simons Quantum Postdoctoral Fellow at the Simons Institute, UC Berkeley, in 2022-2023 (hosted by Umesh Vazirani and Lin Lin). He will be a Postdoctoral Associate at the Center for Theoretical Physics, MIT, in 2023-2024 (hosted by Aram Harrow). He received a Ph.D. in applied mathematics from the University of Maryland in spring 2022 (advised by Andrew Childs). He received the NSF QISE-NET Triplet Award in 2021. He received a B.S. in mathematics from Beihang University and the Chinese Academy of Sciences Hua Loo Keng Class (supervised by Ya-xiang Yuan and Cong Sun). His research focuses on Quantum for Science: he aims to develop, analyze, and optimize provably efficient quantum algorithms for computational challenges in the natural and data sciences, including quantum simulation, quantum ODE/PDE solvers, q-sampling, and quantum gradient descent, toward end-to-end applications in areas such as quantum chemistry, biology and epidemiology, fluid dynamics, finance, statistics, optimization, and machine learning.
