
Geometry Aware Operator Transformer as an Efficient and Accurate Neural Surrogate for PDEs on Arbitrary Domains

Math+ML+X Seminar Series

Organizer: Angelica Aviles-Rivero

Speaker: Shizheng Wen (ETH Zürich)

Time: Fri., 16:00, Mar. 20, 2026

Online: Voov (Tencent): 201-467-303

Abstract:

Neural operators have emerged as promising surrogates for PDE solvers, yet applying them to domains with complex geometries — as encountered in most engineering applications — remains challenging. Among existing approaches, we observe a fundamental accuracy-efficiency tradeoff: accurate models tend to be computationally expensive and poorly scalable, while efficient ones sacrifice accuracy. In this talk, I will present GAOT (Geometry Aware Operator Transformer), which overcomes this tradeoff by combining a novel multiscale attentional graph neural operator encoder/decoder with geometry embeddings and a vision transformer processor. This design enables GAOT to handle arbitrary point cloud inputs, produce outputs at any query point, and scale to very large meshes efficiently. Experiments on 28 benchmarks across diverse PDEs show that GAOT achieves top accuracy and robustness while being the most efficient model among all baselines. I will further demonstrate GAOT's scalability on three large-scale 3D industrial CFD datasets — including automobile and aerospace aerodynamics with meshes up to 9 million points — where it achieves state-of-the-art performance.

Date: March 19, 2026