Harvard Machine Learning Theory

We are a research group working toward a theory of modern machine learning. We pursue both experimental and theoretical approaches that advance our understanding.

Key topics include: generalization, over-parameterization, robustness, dynamics of SGD, and relations to kernel methods.

We also run a research-level seminar series on recent advances in the field. Join the seminar mailing list for talk announcements.

People

Researchers

Boaz Barak, Faculty

Preetum Nakkiran, PhD Student

Gal Kaplun, PhD Student

Yamini Bansal, PhD Student

Tristan Yang, Undergraduate

Ben Edelman, PhD Student

Fred Zhang, PhD Student

Sharon Qian, PhD Student

Affiliated

Recent Publications

By our group and its members.

Deep Double Descent: Where Bigger Models and More Data Hurt

SGD on Neural Networks Learns Functions of Increasing Complexity

Computational Limitations in Robust Classification and Win-Win Results

Minnorm training: an algorithm for training over-parameterized deep neural networks

Adversarial Robustness May Be at Odds With Simplicity

On the Information Bottleneck Theory of Deep Learning

Recent & Upcoming Talks

Deep Learning has had phenomenal empirical successes in many domains including computer vision, natural language processing, and speech …

Classical theory that guides the design of nonparametric prediction methods like deep neural networks involves a tradeoff between the …

Much recent theoretical work has concentrated on “solving deep learning”. Yet, deep learning is not a thing in itself and …

Inductive biases from specific training algorithms like stochastic gradient descent play a crucial role in learning overparameterized …

Machine learning has made tremendous progress over the last decade. It’s thus tempting to believe that ML techniques are a …

Seminar Calendar

Join the mailing list for talk announcements.