Graph Neural Networks (GNNs) have become a popular tool for learning representations of graph-structured inputs, with applications in …

An exciting area of intellectual activity in this century may well revolve around a synthesis of machine learning, theoretical physics, …

Standard machine learning produces models that are accurate on average but degrade dramatically when the test distribution of …

As datasets continue to grow in size, in many settings the focus of data collection has shifted away from testing pre-specified …

Deep learning seeks to discover universal models that work across all modalities and tasks. While self-attention has enhanced the …

Many supervised learning methods are naturally cast as optimization problems. For prediction models which are linear in their …
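The optimization framing above can be illustrated with a toy sketch (data, model, and step size are all hypothetical, not from the abstract): linear least-squares regression posed as minimizing a mean squared error and solved by plain gradient descent.

```python
import numpy as np

# Hypothetical example: a linear prediction model fit by casting
# learning as an optimization problem (minimize mean squared error).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # synthetic features
true_w = np.array([1.0, -2.0, 0.5])  # synthetic ground-truth weights
y = X @ true_w                       # noiseless targets for clarity

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
    w -= lr * grad

print(np.round(w, 3))  # recovers a vector close to true_w
```

With a convex quadratic objective like this one, gradient descent converges to the unique minimizer; the same template (model, loss, gradient step) carries over to non-linear predictors.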

Gradient descent algorithms and their noisy variants, such as the Langevin dynamics or multi-pass SGD, are at the center of attention …

From music recommendations to high-stakes medical treatment selection, complex decision-making tasks are increasingly automated as …

Quantifying uncertainty in deep learning is a challenging and yet unsolved problem. Predictive uncertainty estimates are important to …

A major challenge in the theory of deep learning is to understand the computational complexity of learning basic families of neural …

GPT-3 has shown that large generative models are unexpectedly powerful and capable. In this talk, I will review some of these …

Why do large learning rates often produce better results? Why do “infinitely wide” networks trained using kernel methods …

“What is learnable?” is a fundamental question in learning theory. The talk will address this question for deep learning, …

Convolution is one of the most essential components of architectures used in computer vision. As machine learning moves towards …
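As a minimal sketch of the operation the abstract refers to, here is a naive 2-D "valid" convolution (strictly a cross-correlation, which is what deep-learning libraries compute under the name convolution); the image and filter are illustrative placeholders.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation of a 2-D image with a 2-D kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output entry is the filter applied at one spatial location.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])  # diagonal-difference filter
print(conv2d(image, kernel))  # every entry is image[i,j] - image[i+1,j+1]
```

The key architectural properties, weight sharing and locality, are visible directly: the same small `kernel` is reused at every spatial position.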

One desired capability for machines is the ability to transfer their understanding of one domain to another domain where data is …

The fundamental breakthroughs in machine learning and the rapid advancement of the underlying deep neural network models have enabled …

Understanding deep learning calls for addressing the questions of: (i) optimization — the effectiveness of simple gradient-based …

Modern deep generative models like GANs, VAEs and invertible flows are showing amazing results on modeling high-dimensional …

As neural networks become wider their accuracy improves, and their behavior becomes easier to analyze theoretically. I will give an …

Autonomous systems require efficient learning mechanisms that are fully integrated with the control loop. We need robust learning …

A common view of deep learning is that deep networks provide a hierarchical means of processing input data, where early layers extract …

When predictions support decisions they may influence the outcome they aim to predict. We call such predictions performative; the …

Machine Learning is invaluable for extracting insights from large volumes of data. A key assumption enabling many methods, …

How should we go about creating a science of deep learning? One might be tempted to focus on replicability, reproducibility, and …

The existence of adversarial examples in which tiny changes in the input can fool well trained neural networks has many applications …

We examine gradient descent on unregularized logistic regression problems, with homogeneous linear predictors on linearly separable …

This talk will survey the role played by margins in optimization, generalization, and representation of neural networks. A specific …

Deep Learning has had phenomenal empirical successes in many domains including computer vision, natural language processing, and speech …

Classical theory that guides the design of nonparametric prediction methods like deep neural networks involves a tradeoff between the …

Much recent theoretical work has concentrated on “solving deep learning”. Yet, deep learning is not a thing in itself and …

Inductive biases from specific training algorithms like stochastic gradient descent play a crucial role in learning overparameterized …

Machine learning has made tremendous progress over the last decade. It’s thus tempting to believe that ML techniques are a …

Algorithms in deep learning have a regularization effect: different optimizers with different hyper-parameters, on the same training …