Sho Yaida - Effective Theory of Deep Neural Networks

Abstract

Large neural networks perform extremely well in practice, providing the backbone of modern machine learning. The goal of this talk is to provide a blueprint for theoretically analyzing these large models from first principles. In particular, we’ll overview how the statistics and dynamics of deep neural networks drastically simplify at large width and become analytically tractable. In so doing, we’ll see that the idealized infinite-width limit is too simple to capture several important aspects of deep learning, such as representation learning. To address these shortcomings, we’ll step beyond the idealized limit and systematically incorporate finite-width corrections.
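As a hedged sketch of the kind of simplification meant here (a standard large-width result; the notation below is illustrative and not taken from the talk itself): at initialization, the joint distribution of the ℓ-th-layer preactivations of a width-n network, evaluated on a set of inputs indexed by α, becomes Gaussian as n → ∞, with non-Gaussian (connected) correlators suppressed by powers of 1/n:

p\left(z^{(\ell)}\right) \propto \exp\left(-\frac{1}{2}\sum_{i=1}^{n}\sum_{\alpha,\beta}\left(K^{(\ell)}\right)^{-1}_{\alpha\beta}\, z^{(\ell)}_{i;\alpha}\, z^{(\ell)}_{i;\beta}\right)\left[\,1 + O\!\left(\tfrac{1}{n}\right)\right]

Here K^{(\ell)}_{\alpha\beta} is a kernel computed recursively from one layer to the next, and the O(1/n) corrections are where finite-width effects such as representation learning enter.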
