
AdaBB: A Parameter-Free Gradient Method for Convex Optimization

Time: Thursday, Nov. 14, 2024, 11:00 am - 12:00 pm

Venue: Tencent Meeting: 127-784-846

Organizer: Chenglong Bao 包承龙

Speaker: Shiqian Ma (Rice University)

Abstract:

We propose AdaBB, an adaptive gradient method based on the Barzilai-Borwein stepsize. The algorithm is line-search-free and parameter-free, and essentially provides a convergent variant of the Barzilai-Borwein method for general unconstrained convex optimization. We analyze the ergodic convergence of the objective function value and the convergence of the iterates. Compared with existing works along this line of research, our algorithm gives the best lower bounds on the stepsize and on the average of the stepsizes. Moreover, we present an extension of the proposed algorithm to composite optimization, where the objective function is the sum of a smooth function and a nonsmooth function. Our numerical results also demonstrate the very promising potential of the proposed algorithms on some representative examples.
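For readers unfamiliar with the Barzilai-Borwein stepsize that underlies the talk, the sketch below shows a plain BB-stepsize gradient method in Python for a smooth convex problem. It is only an illustration of the BB stepsize itself (here the BB1 rule with a crude fallback), not of the AdaBB stepsize safeguards or the convergence guarantees presented in the talk; the function names and parameters are hypothetical.

```python
import numpy as np

def bb_gradient_descent(grad, x0, n_iters=200, alpha0=1e-3):
    """Gradient descent with a Barzilai-Borwein (BB1) stepsize.

    grad   : callable returning the gradient of a smooth convex function
    x0     : starting point (NumPy array)
    alpha0 : stepsize for the very first step and fallback stepsize
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev           # plain gradient step to get started
    for _ in range(n_iters):
        g = grad(x)
        s = x - x_prev                     # iterate difference
        y = g - g_prev                     # gradient difference
        sy = s @ y
        # BB1 stepsize <s, s> / <s, y>; fall back to alpha0 if <s, y> <= 0
        # (AdaBB replaces this crude fallback with principled stepsize bounds)
        alpha = (s @ s) / sy if sy > 0 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g                  # gradient step with the BB stepsize
    return x

# Example: minimize the strongly convex quadratic 0.5 * x^T A x - b^T x
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
x_star = bb_gradient_descent(lambda x: A @ x - b, x0=np.zeros(3))
print(np.linalg.norm(A @ x_star - b))      # gradient norm at the returned point
```

The composite extension mentioned in the abstract would replace the last update with a proximal gradient step on the nonsmooth part, using the same BB-type stepsize.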


About the speaker:

Shiqian Ma is a professor in the Department of Computational Applied Mathematics and Operations Research and the Department of Electrical and Computer Engineering at Rice University. He received his PhD in Industrial Engineering and Operations Research from Columbia University. His main research areas are optimization and machine learning. His research is currently supported by ONR and by NSF grants from the DMS, CCF, and ECCS programs. Shiqian received the 2024 INFORMS Computing Society Prize and the 2024 SIAM Review SIGEST Award, among many other awards from both academia and industry. He is an Associate Editor of the Journal of Machine Learning Research, the Journal of Scientific Computing, the Journal of Optimization Theory and Applications, the Pacific Journal of Optimization, and IISE Transactions; a Senior Area Chair of NeurIPS; an Area Chair of ICML, ICLR, and AISTATS; and a Senior Program Committee member of AAAI. He was a plenary speaker at the Texas Colloquium on Distributed Learning in 2023 and a semi-plenary speaker at the International Conference on Stochastic Programming in 2023.
