Basic Science Lecture
The first International Congress of Basic Science (ICBS) will be held in Beijing on July 16–28, 2023, under the theme "Focusing on basic science, leading the future of humanity." More than three hundred top overseas scientists, together with leading domestic scholars, nearly a thousand participants in all, will gather at Huairou Science City to discuss frontier results in basic science and to look ahead to the future directions of basic research.
During the two-week congress, Nobel laureate David Gross, Fields Medalists David Mumford and Alessio Figalli, Turing Award laureate Adi Shamir, and Microsoft Research Senior Researcher Greg Yang will deliver five Basic Science Lectures: three on July 17, one on July 18, and one on July 23.
Venue: LH A (A2), BIMSA
Monday, July 17, 9:00–10:00 am
David Mumford
Fields Medalist (1974), Wolf Prize laureate (2008), Shaw Prize laureate (2006), member of the US National Academy of Sciences, foreign member of the Royal Society (UK), professor at Harvard University and Brown University
Title
Consciousness, robots and DNA
Abstract
Consciousness was not even considered a scientific term until recently. But with the prospect of fully intelligent humanoid robots, one must confront the question of whether they are conscious. This question has many sides but here I want to focus on what physics has to say and, in particular, whether DNA creates cat-states.
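As background (a terminology note, not part of the abstract): a "cat state," in the sense of Schrödinger's cat, is a quantum superposition of two macroscopically distinct configurations, schematically
$$|\mathrm{cat}\rangle = \tfrac{1}{\sqrt{2}}\big(|A\rangle + |B\rangle\big),$$
where $|A\rangle$ and $|B\rangle$ stand for macroscopically distinguishable states of the system; the question raised in the abstract is whether a molecule as large as DNA can create and sustain such superpositions.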
Monday, July 17, 10:00–11:00 am
Adi Shamir
Turing Award laureate (2002), member of the Israel Academy of Sciences and Humanities, member of the US National Academy of Sciences, member of the American Academy of Arts and Sciences, foreign member of the Royal Society (UK), member of the French Academy of Sciences, professor at the Weizmann Institute of Science, Israel
Title
Manifolds in Machine Learning
Abstract
The talk should be accessible and of interest to both computer scientists and mathematicians, and in fact, this research was inspired by a talk by Prof. Shing-Tung Yau about the geometry of machine learning that I attended a few years ago.
Monday, July 17, 11:20 am–12:20 pm
Alessio Figalli
Fields Medalist (2018), member of the European Academy of Sciences, professor at ETH Zürich, Switzerland
Title
Generic regularity of free boundaries for the obstacle problem
Abstract
The classical obstacle problem consists of finding the equilibrium position of an elastic membrane whose boundary is held fixed and which is constrained to lie above a given obstacle. By classical results of Caffarelli, the free boundary is $C^\infty$ outside a set of singular points. Explicit examples show that the singular set can in general be $(n-1)$-dimensional, that is, as large as the regular set. In a recent paper with Ros-Oton and Serra we show that, generically, the singular set has zero $H^{n-4}$ measure (in particular, it has codimension three inside the free boundary), solving a conjecture of Schaeffer in dimension $n \leq 4$. The aim of this talk is to give an overview of these results.
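For readers who want the precise setup behind this abstract, here is one standard variational formulation of the classical obstacle problem (the notation $\Omega$, $g$, $\varphi$ below is ours, not taken from the abstract): given a domain $\Omega \subset \mathbb{R}^n$, boundary data $g$, and an obstacle $\varphi$ with $\varphi \leq g$ on $\partial\Omega$, one minimizes the Dirichlet energy over functions lying above the obstacle,
$$\min\Big\{ \int_\Omega |\nabla u|^2 \, dx \;:\; u \in H^1(\Omega),\ u = g \text{ on } \partial\Omega,\ u \geq \varphi \text{ in } \Omega \Big\}.$$
The minimizer satisfies $\Delta u = 0$ in the region $\{u > \varphi\}$ where the membrane does not touch the obstacle, and the free boundary discussed in the abstract is $\partial\{u > \varphi\} \cap \Omega$, the edge of the contact set, whose regular and singular points are the subject of the results described above.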
Tuesday, July 18, 5:00–6:00 pm
David Gross
Nobel laureate in Physics (2004), member of the US National Academy of Sciences, member of the European Academy of Sciences, foreign member of the Chinese Academy of Sciences, professor at the University of California, Santa Barbara
Venue: Chaoyang Kexie Blue Hall
Sunday, July 23, 4:00–4:45 pm
Greg Yang
Senior Researcher, Microsoft Research
Title
The unreasonable effectiveness of mathematics in large scale deep learning
Abstract
Recently, the theory of infinite-width neural networks led to the first technology, muTransfer, for tuning enormous neural networks that are too expensive to train more than once. For example, this allowed us to tune the 6.7-billion-parameter version of GPT-3 using only 7% of its pretraining compute budget and, with some asterisks, obtain performance comparable to the original GPT-3 model with twice the parameter count. In this talk, I will explain the core insight behind this theory. In fact, this is an instance of what I call the Optimal Scaling Thesis, which connects infinite-size limits for general notions of "size" to the optimal design of large models in practice. I'll end with several concrete key mathematical research questions whose resolutions will have an incredible impact on the future of AI.
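The core insight is deferred to the talk, but the flavor of the width-scaling problem that muTransfer addresses can be illustrated with a toy example. The sketch below is our own minimal numpy illustration, not Yang's code and not the actual muP/muTransfer recipe: it shows that, with a fixed learning rate, one SGD step changes a wide network's output far more than a narrow one's, so hyperparameters tuned on a small proxy model do not transfer naively; rescaling the output-layer learning rate with width (one simplified muP-style rule) keeps the step's effect roughly width-independent, which is what makes it possible to tune on a small model and reuse the result on a large one.

```python
# Toy illustration (assumption-laden sketch, not the actual muTransfer recipe):
# the effect of one SGD step on the output of f(x) = v . tanh(W x) grows with
# width under a fixed learning rate, but stays roughly constant if the
# output-layer learning rate is scaled like 1/width.
import numpy as np

rng = np.random.default_rng(0)

def output_change_after_one_step(width, scale_lr_with_width):
    d = 32                                          # input dimension (arbitrary choice)
    x = rng.normal(size=d)                          # input with unit-variance entries
    W = rng.normal(size=(width, d)) / np.sqrt(d)    # hidden layer, 1/sqrt(fan_in) init
    v = rng.normal(size=width) / np.sqrt(width)     # output layer, 1/sqrt(fan_in) init

    h = np.tanh(W @ x)                              # hidden features, entries of size O(1)
    f0 = v @ h                                      # network output before the step

    base_lr = 0.1
    lr = base_lr / width if scale_lr_with_width else base_lr
    v_new = v - lr * h                              # one SGD step on the loss L = f(x), dL/dv = h
    f1 = v_new @ h                                  # output after the step

    return abs(f1 - f0)                             # equals lr * ||h||^2: grows with width
                                                    # unless the learning rate is scaled down

for width in (128, 512, 2048, 8192):
    fixed = output_change_after_one_step(width, scale_lr_with_width=False)
    scaled = output_change_after_one_step(width, scale_lr_with_width=True)
    print(f"width={width:5d}   fixed lr: |df|={fixed:9.2f}   width-scaled lr: |df|={scaled:7.4f}")
```

The actual muP rules cover every layer and the common optimizers, not just the output layer and SGD as in this sketch, but the underlying point is the same: parametrize the network so that training dynamics become width-invariant, and hyperparameters found on a narrow model carry over to the wide one.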