
Deep learning of multi-scale PDEs based on data generated from particle methods

Time: 10:00-11:00, Nov. 17 (Thu.), 2022

Venue: Online (Zoom ID: 276 366 7254; Password: YMSC)

Organizer: Applied and Computational Mathematics Team

Speaker: Zhongjian Wang, The University of Chicago

Abstract

Solving multiscale PDEs is difficult in high-dimensional and/or convection-dominated cases. Lagrangian computation via interacting particle methods (IPM) has been shown to outperform solving the PDEs directly in an Eulerian framework; examples include computing effective diffusivities, KPP front speeds, and asymptotic transport properties in topological insulators. However, particle simulations take a long time to converge and do not yield a continuous model. To address this, we introduce the DeepParticle method, which learns the pushforward map from an arbitrary distribution to the IPM-generated distribution by minimizing the Wasserstein distance. In particular, we formulate an iterative scheme to find the transport map and prove its convergence. On the application side, besides KPP invariant measures, our method can also be used to investigate blow-up behavior in chemotaxis models.
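A minimal sketch of the idea in PyTorch, not the speaker's implementation: the network architecture, batch size, and the simple nearest-neighbor coupling below are illustrative assumptions. A map f_theta pushes reference samples toward IPM-generated target samples by minimizing a discrete Wasserstein-type cost under an iteratively refreshed pairing.

import torch
import torch.nn as nn

dim, n = 2, 256                      # particle dimension and batch size (assumed)

# Placeholder target samples; in practice these would come from an interacting
# particle method (e.g., particles distributed per a KPP invariant measure).
target = 0.5 * torch.randn(n, dim) + 1.0

# f_theta: pushforward map from the reference distribution to the target.
net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.randn(n, dim)          # samples from an arbitrary reference (here Gaussian)
    y = net(x)                       # pushed-forward samples

    # Iteratively refreshed coupling between pushed-forward and target particles.
    # A nearest-neighbor assignment is used here as a crude stand-in for the
    # transport-map iteration described in the talk.
    with torch.no_grad():
        pairing = torch.cdist(y, target).argmin(dim=1)

    # Discrete 2-Wasserstein-type cost under the current pairing.
    loss = ((y - target[pairing]) ** 2).sum(dim=1).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()

Once trained, net(torch.randn(m, dim)) produces approximate samples from the IPM-generated distribution without rerunning the particle simulation, which is the practical payoff described in the abstract.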


Speaker

Zhongjian Wang is a William H. Kruskal Instructor in the Department of Statistics at the University of Chicago.

“Applied analysis and numerical computation for physics and engineering problems have always been a passion of mine. Many models have been developed to describe different systems arising from different problems, and my research interest is to numerically simulate these mathematical models and analyze the error in computing the phenomena they describe. During my Ph.D. study, I worked on computing effective diffusivity in chaotic and random flows. Recently I have been focusing on reduced-order models in the propagation of chaos and their probabilistic analysis. Related methods include, among others, POD, tensor-train, time-dependent PCE, and some machine learning algorithms.”

https://www.stat.uchicago.edu/~zhongjian/

