Organizer:
Chenglong Bao 包承龙
Speaker:
Shiqian Ma (Rice University)
Time:
Thurs., 11:00 am – 12:00 pm, Nov. 14, 2024
Online:
Tencent Meeting: 127-784-846
Title:
AdaBB: A Parameter-Free Gradient Method for Convex Optimization
Abstract:
We propose AdaBB, an adaptive gradient method based on the Barzilai-Borwein stepsize. The algorithm is line-search-free and parameter-free, and essentially provides a convergent variant of the Barzilai-Borwein method for general unconstrained convex optimization. We analyze the ergodic convergence of the objective function value and the convergence of the iterates. Compared with existing works along this line of research, our algorithm gives the best lower bounds on the stepsize and on the average of the stepsizes. Moreover, we present an extension of the proposed algorithm for solving composite optimization, where the objective function is the sum of a smooth function and a nonsmooth function. Our numerical results on representative examples also demonstrate the promising potential of the proposed algorithms.
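For readers unfamiliar with the Barzilai-Borwein stepsize that AdaBB builds on, the following is a minimal illustrative sketch of a gradient method using the classical BB1 stepsize on a toy convex quadratic. This is not the AdaBB algorithm from the talk (whose stepsize rule and analysis are the talk's subject); the test function, initial stepsize, and stopping rule here are assumptions chosen for illustration only.

```python
# Sketch of gradient descent with the classical Barzilai-Borwein (BB1)
# stepsize on the convex quadratic f(x) = x0^2 + 0.5*x1^2 - x0 - x1,
# whose minimizer is (0.5, 1.0).  Illustrates the BB stepsize idea only,
# not the AdaBB method itself.

def grad(x):
    # gradient of f(x) = x0^2 + 0.5*x1^2 - x0 - x1
    return [2.0 * x[0] - 1.0, x[1] - 1.0]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bb_gradient_method(x0, alpha0=0.1, tol=1e-10, max_iter=200):
    x = list(x0)
    g = grad(x)
    alpha = alpha0  # initial stepsize: a tuning parameter that a
                    # parameter-free method such as AdaBB avoids
    for _ in range(max_iter):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]  # iterate difference
        y = [a - b for a, b in zip(g_new, g)]  # gradient difference
        if dot(s, y) > 0:
            # classical BB1 stepsize: <s, s> / <s, y>
            alpha = dot(s, s) / dot(s, y)
        x, g = x_new, g_new
        if dot(g, g) ** 0.5 < tol:
            break
    return x

x_star = bb_gradient_method([0.0, 0.0])  # approaches the minimizer (0.5, 1.0)
```

Note that the plain BB method need not converge for general convex functions; obtaining a convergent, line-search-free variant without tuning parameters is precisely the contribution the abstract describes.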
About the speaker:
Shiqian Ma is a professor in the Department of Computational Applied Mathematics and Operations Research and the Department of Electrical and Computer Engineering at Rice University. He received his PhD in Industrial Engineering and Operations Research from Columbia University. His main research areas are optimization and machine learning. His research is currently supported by ONR and by NSF grants from the DMS, CCF, and ECCS programs. Shiqian received the 2024 INFORMS Computing Society Prize and the 2024 SIAM Review SIGEST Award, among many other awards from both academia and industry. Shiqian is an Associate Editor of the Journal of Machine Learning Research, the Journal of Scientific Computing, the Journal of Optimization Theory and Applications, the Pacific Journal of Optimization, and IISE Transactions; a Senior Area Chair of NeurIPS; an Area Chair of ICML, ICLR, and AISTATS; and a Senior Program Committee member of AAAI. He was a plenary speaker at the 2023 Texas Colloquium on Distributed Learning and a semi-plenary speaker at the 2023 International Conference on Stochastic Programming.