
Proximal linearization methods for Schatten p-quasi-norm minimization

Source: 04-10

Time: Mon., 10:00-11:00 am, April 10, 2023

Venue: Conference Room 1, Floor 1, Jin Chun Yuan West Building

Organizer: Applied and Computational Mathematics Team

Speaker: Chao Zeng (Nankai University)

Abstract

Schatten p-quasi-norm minimization has advantages over nuclear norm minimization in recovering low-rank matrices. However, Schatten p-quasi-norm minimization is much more difficult, especially for generic linear matrix equations. We first extend the lower bound theory of l_p minimization to Schatten p-quasi-norm minimization. Motivated by this property, we propose a proximal linearization method whose subproblems can be solved efficiently by the (linearized) alternating direction method of multipliers. The convergence analysis of the proposed method involves the nonsmooth analysis of singular value functions. We give a necessary and sufficient condition for a singular value function to be a Kurdyka–Łojasiewicz function. The subdifferentials of related singular value functions are computed. The global convergence of the proposed method is established under some assumptions. Experiments on matrix completion, the Sylvester equation, and image deblurring demonstrate the effectiveness of the algorithm.
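To make the objective in the abstract concrete: the p-th power of the Schatten p-quasi-norm of a matrix X is the sum of the p-th powers of its singular values, and a proximal linearization step linearizes the smooth data-fitting term at the current iterate and applies a singular-value-wise proximal operator to the result. The sketch below is a minimal illustration for matrix completion only, not the speaker's algorithm: it solves the scalar proximal subproblem by a naive grid search rather than the (linearized) ADMM subproblem solver described in the talk, and all function names and parameter values (schatten_p_quasi_norm, prox_schatten_p, proximal_linearization_mc, lam, p, L) are hypothetical choices made for illustration.

```python
import numpy as np

def schatten_p_quasi_norm(X, p):
    """p-th power of the Schatten p-quasi-norm: sum_i sigma_i(X)^p for 0 < p < 1."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(s ** p)

def scalar_lp_prox(sigma, tau, p, grid_size=2000):
    """Numerically solve min_{t >= 0} 0.5*(t - sigma)^2 + tau * t^p by a dense grid search.
    The minimizer lies in [0, sigma], so the grid is restricted to that interval."""
    ts = np.linspace(0.0, max(sigma, 1e-12), grid_size)
    vals = 0.5 * (ts - sigma) ** 2 + tau * ts ** p
    return ts[np.argmin(vals)]

def prox_schatten_p(Y, tau, p):
    """Apply the scalar l_p prox to each singular value of Y (singular-value-wise thresholding)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_new = np.array([scalar_lp_prox(si, tau, p) for si in s])
    return (U * s_new) @ Vt

def proximal_linearization_mc(M, mask, lam=0.05, p=0.5, L=1.0, iters=200):
    """Toy proximal-linearization loop for matrix completion:
    minimize 0.5*||mask * (X - M)||_F^2 + lam * ||X||_p^p,
    where the smooth term is linearized at X^k and the Schatten prox is applied."""
    X = np.zeros_like(M)
    for _ in range(iters):
        grad = mask * (X - M)                    # gradient of the smooth data-fit term
        X = prox_schatten_p(X - grad / L, lam / L, p)
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))  # rank-3 ground truth
    mask = (rng.random((30, 30)) < 0.5).astype(float)                # observe ~50% of entries
    X_hat = proximal_linearization_mc(A * mask, mask)
    print("relative error:", np.linalg.norm(X_hat - A) / np.linalg.norm(A))
```

The grid search is used only because the scalar l_p proximal problem has a closed form for just a few special exponents (e.g. p = 1/2 or 2/3); it keeps the sketch self-contained at the cost of accuracy and speed.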


About the speaker

Chao Zeng 

Nankai University

Chao Zeng is an Associate Professor in the School of Mathematical Sciences at Nankai University. He received his bachelor's degree in 2010 and his Ph.D. in 2016, both from the University of Science and Technology of China. He held postdoctoral positions at Nankai University, Hong Kong Baptist University, and the University of Hong Kong, and joined the School of Mathematical Sciences at Nankai University in 2022. His research interests include numerical linear algebra, numerical optimization, numerical approximation, and computational geometry. In recent years, he has published a number of papers in leading computational mathematics journals such as Numerische Mathematik, SIAM Journal on Numerical Analysis, SIAM Journal on Matrix Analysis and Applications, and SIAM Journal on Imaging Sciences.
