Harvard ML Theory Group

We are a research group working toward a theory of modern machine learning. We pursue both experimental and theoretical approaches that advance this understanding.

Key topics include generalization, over-parameterization, robustness, the dynamics of SGD, and connections to kernel methods.

We also run a research-level seminar series on recent advances in the field.

People

Researchers

Boaz Barak, Faculty

Preetum Nakkiran, PhD Student

Gal Kaplun, PhD Student

Yamini Bansal, PhD Student

Tristan Yang, Undergraduate

Ben Edelman, PhD Student

Fred Zhang, PhD Student

Sharon Qian, PhD Student


Recent Publications

By our group and its members.

SGD on Neural Networks Learns Functions of Increasing Complexity

Computational Limitations in Robust Classification and Win-Win Results

Minnorm training: an algorithm for training over-parameterized deep neural networks

Adversarial Robustness May Be at Odds With Simplicity

On the Information Bottleneck Theory of Deep Learning

Recent & Upcoming Talks