
Beyond Second Order Methods for Nonconvex Optimization

Time: Thursday, 16:00-17:00, Aug 28, 2025

Venue: B627, Shuangqing Complex Building

Organizer: Jin-Peng Liu 刘锦鹏

Speaker: Kate Wenqi Zhu

Kate Wenqi Zhu

Ph.D. Student, University of Oxford

Kate Wenqi Zhu is a fourth-year Ph.D. student in Applied Mathematics at the University of Oxford, under the supervision of Professor Coralia Cartis, and is fully funded by the CIMDA–Oxford Studentship. Her research focuses on leveraging higher-order information for efficient nonconvex optimization, with interests spanning computational complexity analysis, tensor approximation, sum-of-squares techniques, implementable high-order subproblem solvers, and adaptive regularization methods. She completed both her undergraduate and first master's degrees in Mathematics at Oxford, followed by an M.Sc. in Mathematical Modelling and Scientific Computing, supervised by Professor Yuji Nakatsukasa.


Abstract

Traditionally, first-order gradient-based techniques, such as stochastic gradient descent (SGD), and second-order methods, such as the Newton method, have dominated the field of optimization. In recent years, high-order tensor methods with regularization for nonconvex optimization have garnered significant research interest. These methods offer superior local convergence rates, improved worst-case evaluation complexity, enhanced insights into data geometry through higher-order information, and better parallelization compared to SGD.

The most critical challenge in implementing a $p$th-order method ($p \geq 3$) lies in efficiently minimizing the $p$th-order subproblem, which typically consists of a $p$th-degree multivariate Taylor polynomial combined with a $(p+1)$th-order regularization term. In this talk, we address these research gaps by characterizing the local and global optimality conditions of the subproblem and investigating its potential NP-hardness. We then introduce a series of provably convergent and efficient algorithms for minimizing the regularized subproblem both locally and globally, including the Quadratic Quartic Regularization Method (QQR), the Cubic Quartic Regularization Method (CQR), and the Sums-of-Squares Convex Taylor Method (SoS-C). More interestingly, our research adopts an AI-integrated approach, using the mathematical reasoning capabilities of large language models (LLMs) to verify the nonnegativity of multivariate polynomials, a problem closely related to Hilbert's Seventeenth Problem and to the challenge of globally minimizing these subproblems.
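To make the subproblem concrete, the sketch below builds the $p = 3$ case described above: a cubic Taylor model plus a quartic regularization term, $m(s) = g^\top s + \tfrac12 s^\top H s + \tfrac16 T[s,s,s] + \tfrac{\sigma}{4}\|s\|^4$. The problem data (`g`, `H`, `T`, `sigma`) are random placeholders, and the generic BFGS call is only a baseline local solver, not the QQR/CQR/SoS-C methods from the talk.

```python
import numpy as np
from scipy.optimize import minimize

def ar3_model(s, g, H, T, sigma):
    """Regularized third-order Taylor model (the p = 3 subproblem):
    m(s) = g's + (1/2) s'Hs + (1/6) T[s,s,s] + (sigma/4) ||s||^4."""
    return (g @ s + 0.5 * s @ H @ s
            + np.einsum('ijk,i,j,k->', T, s, s, s) / 6.0
            + 0.25 * sigma * np.dot(s, s) ** 2)

def ar3_grad(s, g, H, T, sigma):
    """Gradient of m; assumes T is symmetric in all three indices."""
    return (g + H @ s
            + 0.5 * np.einsum('ijk,j,k->i', T, s, s)
            + sigma * np.dot(s, s) * s)

# Hypothetical random problem data (not from the talk): symmetric H and T.
rng = np.random.default_rng(0)
n = 3
g = rng.standard_normal(n)
A = rng.standard_normal((n, n))
H = A + A.T
T = rng.standard_normal((n, n, n))
T = sum(T.transpose(p) for p in
        [(0, 1, 2), (0, 2, 1), (1, 0, 2),
         (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6.0
sigma = 5.0  # quartic regularization weight; makes m coercive

# Generic local minimization from s = 0 as a baseline.
res = minimize(ar3_model, np.zeros(n), args=(g, H, T, sigma),
               jac=ar3_grad, method='BFGS')
print(res.x, res.fun)
```

The quartic term guarantees the model is bounded below, so a minimizer exists; the difficulty the talk addresses is that a generic solver like this only finds a local minimizer, with no complexity guarantee.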
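The sum-of-squares connection can be illustrated in its simplest nontrivial case: a polynomial is certified nonnegative by exhibiting it as a sum of squares via a positive semidefinite Gram matrix. The polynomial and Gram matrix below are a hypothetical illustration, not material from the talk.

```python
import numpy as np

# Certify p(x, y) = x^2 - 2xy + 2y^2 >= 0 by writing it as a sum of squares.
# In the monomial basis z = (x, y), p = z' Q z with the Gram matrix below.
Q = np.array([[1.0, -1.0],
              [-1.0, 2.0]])

# If Q is positive definite, a Cholesky factorization Q = L L' gives
# p = ||L' z||^2, i.e. an explicit SOS certificate.
L = np.linalg.cholesky(Q)        # raises LinAlgError if Q is not PD
# Rows of L.T are the linear forms being squared: here p = (x - y)^2 + y^2.

x, y = 0.7, -1.3                 # spot-check the identity at one point
z = np.array([x, y])
p_direct = x**2 - 2*x*y + 2*y**2
p_sos = np.sum((L.T @ z) ** 2)
print(p_direct, p_sos)
```

For higher-degree polynomials the Gram matrix is not unique, and finding a positive semidefinite one is a semidefinite program; deciding nonnegativity in general is where Hilbert's Seventeenth Problem and, per the abstract, LLM-assisted reasoning enter.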

Date: August 26, 2025