
Applied Mathematics Seminar: Ming Yan (Michigan State University)

Source: Yau Mathematical Sciences Center, Tsinghua University, 05-11

Time: May 13, 10:00-11:00

Venue: Tencent Meeting 635-963-958

Organizer: Prof. Ming Yan (Michigan State University)

Speaker: Prof. Ming Yan (Michigan State University)

Abstract:

Large-scale machine learning models are trained by parallel (stochastic) gradient descent algorithms on distributed systems. As the number of computing nodes and the model dimension scale up, the communication required for gradient aggregation and model synchronization becomes the major obstacle to efficient learning. In this talk, I will introduce several ways to compress the transferred data and reduce the overall communication so that this obstacle can be greatly mitigated. More specifically, I will introduce methods that reduce or eliminate the compression error without additional communication, for both deterministic and stochastic algorithms.
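The abstract does not specify which algorithms the talk will cover. As a rough illustration of the general idea, below is a minimal sketch of one standard technique in this area: top-k gradient sparsification with error feedback, where the residual left by compression is kept locally and added back before the next compression, so the error is corrected over time without extra communication. All names here (`topk_compress`, `ErrorFeedbackSGD`) are hypothetical, not from the talk.

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v; zero the rest.
    Only these k values (and their indices) would be transmitted."""
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out = np.zeros_like(v)
    out[idx] = v[idx]
    return out

class ErrorFeedbackSGD:
    """One worker's compressed SGD step with error feedback."""
    def __init__(self, dim, lr=0.1, k=10):
        self.memory = np.zeros(dim)  # local compression residual
        self.lr = lr
        self.k = k

    def step(self, params, grad):
        # add back the residual from previous compressions
        corrected = self.lr * grad + self.memory
        compressed = topk_compress(corrected, self.k)  # what gets sent
        self.memory = corrected - compressed           # kept locally, never sent
        return params - compressed

# toy usage: one step on a random gradient
opt = ErrorFeedbackSGD(dim=100, lr=0.05, k=5)
params = opt.step(np.zeros(100), np.random.randn(100))
```

In a distributed setting, only `compressed` is communicated to the aggregator; the residual `memory` stays on each worker, which is why the compression error can be corrected without any additional communication.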



Speaker Bio:

Ming Yan is an associate professor in the Department of Computational Mathematics, Science and Engineering (CMSE) and the Department of Mathematics at Michigan State University. His research interests lie in computational optimization and its applications in image processing, machine learning, and other data-science problems. He received his B.S. and M.S. in mathematics from the University of Science and Technology of China in 2005 and 2008, respectively, and his Ph.D. in mathematics from the University of California, Los Angeles in 2012. After completing his Ph.D., he was a Postdoctoral Fellow in the Department of Computational and Applied Mathematics at Rice University from July 2012 to June 2013, and then moved to the University of California, Los Angeles as a Postdoctoral Scholar and an Assistant Adjunct Professor from July 2013 to June 2015. He received a Facebook Faculty Award in 2020.


