Harvard Machine Learning Theory

We are a research group working toward a theory of modern machine learning, using both experimental and theoretical approaches to advance our understanding.

Key topics include generalization, over-parameterization, robustness, the dynamics of SGD, and connections to kernel methods.

We also run a research-level seminar series on recent advances in the field. Join the seminar mailing list for talk announcements.




People

Boaz Barak
Preetum Nakkiran, PhD Student
Gal Kaplun, PhD Student
Yamini Bansal, PhD Student
Tristan Yang
Ben Edelman, PhD Student
Fred Zhang, PhD Student
Sharon Qian, PhD Student


Recent Publications

By our group and its members.

Deep Double Descent: Where Bigger Models and More Data Hurt

SGD on Neural Networks Learns Functions of Increasing Complexity

More Data Can Hurt for Linear Regression: Sample-wise Double Descent

Computational Limitations in Robust Classification and Win-Win Results

Minnorm training: an algorithm for training over-parameterized deep neural networks

Adversarial Robustness May Be at Odds With Simplicity

On the Information Bottleneck Theory of Deep Learning

Recent & Upcoming Talks

We examine gradient descent on unregularized logistic regression problems, with homogeneous linear predictors on linearly separable …

This talk will survey the role played by margins in optimization, generalization, and representation of neural networks. A specific …

Deep Learning has had phenomenal empirical successes in many domains including computer vision, natural language processing, and speech …

Classical theory that guides the design of nonparametric prediction methods like deep neural networks involves a tradeoff between the …

Much recent theoretical work has concentrated on “solving deep learning”. Yet, deep learning is not a thing in itself and …

Seminar Calendar
