The Elusive Power of Local Training in Federated Learning: 10 Years of Research and a Resolution of the Mystery

Speaker: Peter Richtárik (King Abdullah University of Science and Technology)

Time: 14:00 - 15:00, Nov. 22, 2024

Venue: Shuangqing-C548

ZOOM: 388 528 9728

PW: BIMSA

Organizer: Yi-Shuai Niu

Abstract

I will outline the history of the theoretical development of the local training “trick” employed in virtually all successful federated learning algorithms. In particular, I will identify five distinct generations of methods and results: 1) heuristic, 2) homogeneous, 3) sublinear, 4) linear, and 5) accelerated. The 5th generation, initiated by the ProxSkip algorithm of Mishchenko et al. (ICML 2022), finally led to the proof that local training, if carefully executed, yields provable acceleration of communication complexity, without requiring any data homogeneity assumptions. Because these latest advances are very new, there are many opportunities to develop the 5th generation of local training methods further. I will give a brief overview of what we know now, and what problems still remain open.
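To make the local training “trick” concrete, the following minimal NumPy sketch runs a ProxSkip/Scaffnew-style loop on a toy heterogeneous quadratic problem: every iteration is a local gradient step corrected by a control variate, and communication (averaging across clients) happens only with probability p. The toy data, stepsize choice, skip probability, and variable names are illustrative assumptions, not the exact formulation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous problem: client i minimizes f_i(x) = 0.5 * ||A_i x - b_i||^2,
# with different (A_i, b_i) per client (no data homogeneity assumed).
n_clients, dim = 10, 5
A = [rng.standard_normal((20, dim)) for _ in range(n_clients)]
b = [rng.standard_normal(20) for _ in range(n_clients)]

def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Local stepsize from the worst-case smoothness constant across clients.
L = max(np.linalg.eigvalsh(Ai.T @ Ai).max() for Ai in A)
gamma = 1.0 / L    # local stepsize (illustrative choice)
p = 0.1            # probability of communicating at a given step (illustrative)

x = [np.zeros(dim) for _ in range(n_clients)]   # local models
h = [np.zeros(dim) for _ in range(n_clients)]   # control variates (initialized to sum to zero)

for t in range(5000):
    # Local gradient step corrected by the control variate (no communication).
    x_hat = [x[i] - gamma * (grad(i, x[i]) - h[i]) for i in range(n_clients)]
    if rng.random() < p:
        # Communication round: average across clients (prox of the consensus constraint)
        # and update the control variates.
        avg = np.mean([x_hat[i] - (gamma / p) * h[i] for i in range(n_clients)], axis=0)
        h = [h[i] + (p / gamma) * (avg - x_hat[i]) for i in range(n_clients)]
        x = [avg.copy() for _ in range(n_clients)]
    else:
        # Skip communication: keep purely local iterates.
        x = x_hat
```

In expectation only about p·T of the T iterations involve communication, which is the source of the improved communication complexity discussed in the talk; the sketch above is only meant to show the structure of the skipping step, not to reproduce the paper's analysis or constants.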

Speaker Intro

Peter Richtárik is a professor of Computer Science at the King Abdullah University of Science and Technology (KAUST), Saudi Arabia, where he leads the Optimization and Machine Learning Lab. His research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, and high-performance computing. Through his work on randomized and distributed optimization algorithms, he has contributed to the foundations of machine learning, optimization, and randomized numerical linear algebra. He is one of the original developers of Federated Learning. Prof. Richtárik’s work has attracted international awards, including the Charles Broyden Prize, the SIAM SIGEST Best Paper Award, the Distinguished Speaker Award at the 2019 International Conference on Continuous Optimization, the IMA Leslie Fox Prize (three times), and a Best Paper Award at the NeurIPS 2020 Workshop on Scalability, Privacy, and Security in Federated Learning. Several of his papers are among the most read papers published by the SIAM Journal on Optimization and the SIAM Journal on Matrix Analysis and Applications. Prof. Richtárik serves as an Area Chair for leading machine learning conferences, including NeurIPS, ICML, and ICLR, and is an Action Editor of JMLR and an Associate Editor of Numerische Mathematik and Optimization Methods and Software. In the past, he served as an Action Editor of TMLR and an Area Editor of JOTA.

Date: November 20, 2024