Frameworks for Efficient Algorithms for Learning: Robustness and Data Compression

Talk
Abhishek Shetty
Talk Series: 
Time: 04.02.2024 11:00 to 12:00

Though modern machine learning has been highly successful, as we move towards more critical applications, many challenges arise in building trustworthy systems, such as ensuring robustness, privacy, and fairness. Ad hoc and empirical approaches have often led to unintended consequences for these objectives, necessitating a principled approach. Traditional solutions often require redesigning entire pipelines or come with a significant loss in quality. In this talk, we will look at principles for incorporating important desiderata into existing pipelines without significant computational or statistical overhead. We will see two vignettes of this line of research. First, we introduce the smoothed adversary model for sequential decision making, which serves as a general model for learning under distribution shifts. In this setting, we will see statistically and computationally efficient algorithms for decision making under uncertainty. Second, we will see a nearly linear-time algorithm for distribution compression, leading to improved computational efficiency in diverse downstream statistical tasks.