
Quantum Spectral Method for Gradient and Hessian Estimation

Quantum Scientific Computation and Quantum Artificial Intelligence


Organizer:

Jin-Peng Liu 刘锦鹏

Speaker:

Yuxin Zhang (AMSS)

Time:

Tues., 13:30-14:30

Dec. 31, 2024

Venue:

B626, Shuangqing Complex Building A

Title:

Quantum Spectral Method for Gradient and Hessian Estimation

Abstract:

Gradient and Hessian estimation of multivariable functions plays a vital role across a wide range of fields, including optimization, machine learning, and beyond.

In this talk, we begin by revisiting the gradient estimation algorithms proposed by Jordan in 2005 and further improved by Gilyén, Arunachalam, and Wiebe in 2019, both based on the finite difference method. We then introduce our novel quantum algorithm for gradient estimation based on the spectral method, which achieves an exponential speedup over classical algorithms in terms of the dimension d. As an extension, we develop quantum algorithms for Hessian estimation using either the finite difference method or the spectral method. We show that our quantum spectral method for Hessian estimation is optimal in terms of the dimension d by proving a nontrivial lower bound.
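As a rough sketch of the idea behind Jordan-style gradient estimation (normalizations and scaling constants omitted; this schematic is not taken from the talk itself): if $f$ is approximately linear on a small grid around a point $x_0$, i.e. $f(x_0 + \ell x) \approx f(x_0) + \ell\, \nabla f(x_0)\cdot x$, then a single phase query applied to a uniform superposition over grid points factorizes, up to a global phase, into a product of single-register phase states,

\[
\frac{1}{\sqrt{N^d}} \sum_{x \in \{0,\dots,N-1\}^d} e^{2\pi i\, c\, f(x_0 + \ell x)}\,|x\rangle
\;\approx\;
\bigotimes_{j=1}^{d} \frac{1}{\sqrt{N}} \sum_{x_j=0}^{N-1} e^{2\pi i\, c\,\ell\, \partial_j f(x_0)\, x_j}\,|x_j\rangle ,
\]

where $c$ and $\ell$ are scaling parameters. An inverse quantum Fourier transform on each register then reveals all $d$ components of $\nabla f(x_0)$ at once; the finite-difference and spectral variants differ in how the local approximation of $f$, and hence the dependence on the target accuracy, is constructed.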

Furthermore, when the Hessian is sparse, we can obtain better results.
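To illustrate, purely classically and independently of the quantum algorithms above, why spectral differentiation can be far more accurate than finite differences at the same number of sample points, here is a minimal NumPy sketch; the test function exp(sin x) and the grid size are arbitrary illustrative choices.

import numpy as np

# Periodic test function on [0, 2*pi) and its exact derivative.
def f(x):
    return np.exp(np.sin(x))

def f_prime(x):
    return np.cos(x) * np.exp(np.sin(x))

n = 32                                    # number of sample points
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
fx = f(x)

# Second-order central finite differences (periodic wrap-around).
fd = (np.roll(fx, -1) - np.roll(fx, 1)) / (2.0 * h)

# Fourier spectral derivative: multiply by i*k in frequency space.
k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)  # integer wavenumbers for period 2*pi
sp = np.real(np.fft.ifft(1j * k * np.fft.fft(fx)))

print("max error, finite differences:", np.max(np.abs(fd - f_prime(x))))
print("max error, spectral method:   ", np.max(np.abs(sp - f_prime(x))))

On a smooth periodic function like this one, the finite-difference error decays only polynomially in the grid spacing, while the spectral error decays essentially to machine precision with a few dozen points.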

References:

[Jor05] Stephen Jordan. Fast quantum algorithm for numerical gradient estimation. Physical Review Letters, 2005.

[GAW19] András Gilyén, Srinivasan Arunachalam, and Nathan Wiebe. Optimizing quantum optimization algorithms via faster quantum gradient computation. SODA, 2019.

[ZS24] Yuxin Zhang and Changpeng Shao. Quantum spectral method for gradient and Hessian estimation. arXiv:2407.03833, 2024.

Speaker: Yuxin Zhang is a PhD student at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences. Her research interests include quantum algorithms, quantum complexity theory, and other intriguing fields yet to be explored.
