PKU HEP Seminar and Workshop (北京大学高能物理组)

Towards a Physics of Learning

by Ziyin Liu (MIT)

Asia/Shanghai
S408

Description

The learning process of neural networks is high-dimensional, irreversible, and noisy. This is exactly the regime where physics shines. If we are ever to build a theory that explains both modern AI and the brain, physics is required. This talk develops a concise "physics-of-learning" view built on a central idea of physics and modern mathematics: understanding a structured object requires understanding what it is invariant under. I will start from the perspective of symmetries and invariants, which, I will argue, crucially determine the learning behavior of modern neural networks through symmetry breaking and phase transitions. I will then discuss the possibility of characterizing the learning process of modern neural networks through thermodynamic irreversibility, which I will show leads to a few intriguing and universal phenomena in deep learning. Together, these results show that key concepts in theoretical physics are essential for analyzing AI systems. When possible, I will discuss broader links to statistics, information theory, and neuroscience.

Bio:
Liu Ziyin (刘子寅) is a postdoctoral researcher at MIT's Research Laboratory of Electronics and at NTT Research's Physics & Informatics Laboratories, working with Isaac Chuang and Tomaso Poggio. His research ranges broadly across physics, AI, and neuroscience, with a special focus on developing physics-inspired foundations for deep learning, drawing on symmetry and thermodynamics to explain how learning happens in modern neural networks. He received his PhD in theoretical physics from the University of Tokyo under Masahito Ueda, and his work has been published in major AI and physics venues.

Tencent Meeting: 591-215-117

Organised by

Prof. Yinan Wang