Harvard Machine Learning Theory

We are a research group working toward a theory of modern machine learning, pursuing both experimental and theoretical approaches that advance our understanding.

Key topics include: generalization, over-parameterization, robustness, dynamics of SGD, and relations to kernel methods.

We also run a research-level seminar series on recent advances in the field. Join the seminar mailing list for talk announcements.

People

Researchers

Boaz Barak, Faculty
Preetum Nakkiran, PhD Student
Gal Kaplun, PhD Student
Yamini Bansal, PhD Student
Tristan Yang, Undergraduate
Ben Edelman, PhD Student
Fred Zhang, PhD Student
Sharon Qian, PhD Student

Affiliated

Recent Publications

By our group and its members.

Deep Double Descent: Where Bigger Models and More Data Hurt

SGD on Neural Networks Learns Functions of Increasing Complexity

More Data Can Hurt for Linear Regression: Sample-wise Double Descent

Computational Limitations in Robust Classification and Win-Win Results

Minnorm training: an algorithm for training over-parameterized deep neural networks

Adversarial Robustness May Be at Odds With Simplicity

On the Information Bottleneck Theory of Deep Learning

Recent & Upcoming Talks

How should we go about creating a science of deep learning? One might be tempted to focus on replicability, reproducibility, and …

The existence of adversarial examples, in which tiny changes in the input can fool well-trained neural networks, has many applications …

We examine gradient descent on unregularized logistic regression problems, with homogeneous linear predictors on linearly separable …
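To make that setting concrete, here is a minimal sketch under illustrative assumptions (synthetic data, a hand-picked step size and iteration count): plain gradient descent on the unregularized logistic loss with a homogeneous linear predictor on linearly separable data. It is only meant to illustrate the setup, not to reproduce the talk's results.

```python
# Illustrative sketch only: gradient descent on the unregularized logistic
# loss with a homogeneous linear predictor (no bias term) on synthetic
# linearly separable data. The data, step size, and iteration count are
# assumptions for illustration, not taken from the talk.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])              # labels set by the first coordinate,
                                  # so the data are linearly separable

w = np.zeros(d)                   # homogeneous predictor: x -> w . x
lr = 0.5
for step in range(1, 20001):
    margins = y * (X @ w)
    m = np.clip(margins, -500, 500)        # guard against exp overflow
    # gradient of the average logistic loss log(1 + exp(-y w.x))
    grad = -(X * (y / (1.0 + np.exp(m)))[:, None]).mean(axis=0)
    w -= lr * grad
    if step % 5000 == 0:
        # on separable data the loss keeps decreasing only because ||w||
        # keeps growing; the direction w / ||w|| is what stabilizes
        print(f"step {step}: ||w|| = {np.linalg.norm(w):.2f}, "
              f"min margin = {margins.min():.3f}")
```

The printout tracks the weight norm alongside the smallest margin: on separable data the logistic loss can only be driven toward zero by letting the norm grow, while the normalized direction of the predictor settles down.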

Seminar Calendar

Join the mailing list for talk announcements.