Jamie Morgenstern - Shifts in Distributions and Preferences in Response to Learning

Abstract

In this talk, I’ll describe some recent work outlining how distribution shifts are fundamental to working with human-centric data. Some of these shifts arise from attempting to “join” datasets gathered in different contexts; others result from people’s preferences affecting which data they provide to which systems; and still others arise when people’s preferences themselves are shaped by ML systems’ recommendations. Each of these types of shift requires different modeling and analysis to more accurately predict the behavior of ML pipelines deployed in settings where they interact repeatedly with people who care about their predictions.

Location: SEC 1.413