Much recent theoretical work has concentrated on “solving deep learning”. Yet deep learning is not a thing in itself, and it cannot be “solved” unless the broad principles underlying all of machine learning are better understood. Indeed, it has recently become clear that our understanding of basic concepts, such as “overfitting”, is lacking or at least incomplete. I will start with the question of why we need to rethink the foundations and where the classical frameworks fall short. I will then discuss some simple and (hopefully) helpful models for thinking about modern ML. Simple methods, such as linear regression or nearest neighbor classifiers, illuminate many aspects of modern ML without engaging with the complexity of modern models, providing powerful intuition for both generalization and optimization. Finally, I will offer a few thoughts on the gaps in our understanding and on the strengths and weaknesses of some recent analyses of deep learning.