Academics

Elliptic PDE learning is provably data-efficient

Time: Wednesday, January 3rd, 2024, 16:00-17:00

Venue: Room 548, Shuangqing Complex Building

Speaker: Alex Townsend, Cornell University

Abstract

Can one learn the solution operator associated with a differential operator from pairs of solutions and right-hand sides? If so, how many pairs are required? These two questions have received significant research attention in operator learning. Given input-output pairs from an unknown elliptic PDE, we will derive a theoretically rigorous scheme for learning the associated Green's function. By exploiting the hierarchical low-rank structure of Green's functions together with randomized linear algebra, we obtain a provable learning rate. Along the way, we will develop a more general theory for the randomized singular value decomposition and show how these techniques extend to parabolic and hyperbolic PDEs. This talk partially explains the success of operator networks such as DeepONet in data-sparse settings.
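To give a flavor of the randomized-linear-algebra ingredient, here is a minimal NumPy sketch (an illustration, not the speaker's code) of recovering a discretized Green's function from a handful of input-output pairs via the randomized SVD: random right-hand sides probe the range of the solution operator, and a small SVD then yields a near-optimal low-rank approximation. The 1D Laplacian test problem, the function names, and the rank/oversampling choices are assumptions made for illustration; the theory in the talk exploits hierarchical (off-diagonal) low-rank structure rather than the global low rank used here.

```python
# Sketch: learn a low-rank approximation of a discretized Green's function
# from input-output pairs, using the randomized SVD (Halko-Martinsson-Tropp).
# The 1D test problem and all names below are illustrative assumptions.
import numpy as np

def randomized_svd(apply_A, apply_At, n, rank, oversample=10, seed=None):
    """Rank-`rank` SVD of an n x n operator, given only matvec access.

    apply_A(X)  : returns A @ X   (each column = one forward solve)
    apply_At(X) : returns A.T @ X (adjoint solves)
    """
    rng = np.random.default_rng(seed)
    k = rank + oversample
    Omega = rng.standard_normal((n, k))   # random "right-hand sides"
    Y = apply_A(Omega)                    # k input-output pairs
    Q, _ = np.linalg.qr(Y)                # orthonormal range basis
    B = apply_At(Q).T                     # small k x n matrix, B = Q.T @ A
    Uhat, S, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uhat)[:, :rank], S[:rank], Vt[:rank]

# Toy elliptic problem: Green's matrix of -u'' on (0,1) with zero
# Dirichlet data, i.e. the inverse of the finite-difference Laplacian.
n = 200
h = 1.0 / (n + 1)
L = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
G = np.linalg.inv(L)                      # dense Green's matrix (symmetric)

U, S, Vt = randomized_svd(lambda X: G @ X, lambda X: G.T @ X, n, rank=20)
err = np.linalg.norm(G - (U * S) @ Vt, 2) / np.linalg.norm(G, 2)
print(f"relative spectral-norm error from 30 forward/adjoint solves: {err:.2e}")
```

Because the singular values of this Green's matrix decay polynomially, about 30 solves already give a rank-20 approximation accurate to roughly three digits, which is the kind of sample-efficiency statement the talk makes rigorous.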


Speaker

Alex Townsend is an Associate Professor at Cornell University in the Mathematics Department. His research is in applied mathematics and focuses on spectral methods, low-rank techniques, fast transforms, and theoretical aspects of deep learning. Prior to Cornell, he was an Applied Mathematics Instructor at MIT (2014-2016) and a DPhil student at the University of Oxford (2010-2014). He was awarded a SIAM CSE Best Paper Prize in 2023, a Weiss Junior Fellowship in 2022, a Simons Fellowship in 2022, an NSF CAREER Award in 2021, a SIGEST paper award in 2019, the SIAG/LA Early Career Prize in applicable linear algebra in 2018, and the Leslie Fox Prize in 2015.

DATE: January 3, 2024