Deconstructing models and methods in deep learning
Pavel Izmailov (New York University)
Colloquium
Tuesday, April 4, 2023, 3:30 pm
Abstract
Machine learning models are ultimately used to make decisions in the real world, where mistakes can be incredibly costly. We still understand surprisingly little about neural networks and the procedures we use to train them, and, as a result, our models are brittle, often rely on spurious features, and generalize poorly under minor distribution shifts. Moreover, these models are often unable to faithfully represent uncertainty in their predictions, further limiting their applicability. In this talk, I will present work on neural network loss surfaces, probabilistic deep learning, uncertainty estimation, and robustness to distribution shifts. In each of these works, we aim to build a foundational understanding of models, training procedures, and their limitations, and then use this understanding to develop practically impactful, interpretable, robust, and broadly applicable methods and models.
Bio
Pavel Izmailov is a final-year PhD student in Computer Science at New York University, working with Andrew Gordon Wilson. Pavel is primarily interested in understanding and improving deep neural networks. In particular, his interests include out-of-distribution generalization, probabilistic deep learning, representation learning, and large models. He is also excited about generative models, uncertainty estimation, semi-supervised learning, language models, and other topics. Recently, his work on Bayesian model selection was recognized with an outstanding paper award at ICML 2022.