
Optimization for Machine Learning

Lecturer: Peter Richtárik (Visiting Scholar)

Date: Nov. 18, 20, 21, 27 and 28, 2024

Time: 13:30-15:05

Venue: A7-101

Zoom: 442 374 5045

Password: BIMSA

Introduction

In this series of five 90-minute lectures, I will cover several selected topics related to modern optimization for training machine learning models. I will choose a small number of topics from the following areas: stochastic gradient descent, stochastic proximal point methods, minibatching, gradient compression, importance sampling, variance reduction, adaptive stepsizes, and distributed and federated learning.
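
To give a flavor of the first topics on this list, the following short Python sketch runs minibatch stochastic gradient descent on a synthetic least-squares problem. The problem setup, step size, and batch size are illustrative choices, not material from the course.

import numpy as np

# Minibatch SGD on a synthetic least-squares problem:
#   min_x (1/n) * sum_i (a_i^T x - b_i)^2
rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)            # ground-truth solution (illustrative)
b = A @ x_star + 0.01 * rng.standard_normal(n)

x = np.zeros(d)
stepsize = 0.1                              # fixed stepsize; adaptive rules are a course topic
batch_size = 32

for step in range(2000):
    # Sample a minibatch uniformly at random
    # (importance sampling would instead bias this choice).
    idx = rng.choice(n, size=batch_size, replace=False)
    # Stochastic gradient of the minibatch loss
    grad = (2.0 / batch_size) * A[idx].T @ (A[idx] @ x - b[idx])
    x -= stepsize * grad

print("distance to x_star:", np.linalg.norm(x - x_star))

With a fixed stepsize, plain SGD converges only to a neighborhood of the solution whose size scales with the gradient noise; variance-reduction techniques, another topic listed above, are designed to remove exactly this effect.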

Lecturer Intro

Peter Richtárik is a professor of Computer Science at King Abdullah University of Science and Technology (KAUST), Saudi Arabia, where he leads the Optimization and Machine Learning Lab. His research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, and high-performance computing. Through his work on randomized and distributed optimization algorithms, he has contributed to the foundations of machine learning, optimization, and randomized numerical linear algebra. He is one of the original developers of Federated Learning. Prof. Richtárik’s work has attracted international awards, including the Charles Broyden Prize, the SIAM SIGEST Best Paper Award, the Distinguished Speaker Award at the 2019 International Conference on Continuous Optimization, the IMA Leslie Fox Prize (three times), and a Best Paper Award at the NeurIPS 2020 Workshop on Scalability, Privacy, and Security in Federated Learning. Several of his works are among the most read papers published in the SIAM Journal on Optimization and the SIAM Journal on Matrix Analysis and Applications. Prof. Richtárik serves as an Area Chair for leading machine learning conferences, including NeurIPS, ICML, and ICLR, and is an Action Editor of JMLR and an Associate Editor of Numerische Mathematik and Optimization Methods and Software. In the past, he served as an Action Editor of TMLR and an Area Editor of JOTA.
