Unsupervised Learning Revisited

Talk
Soheil Feizi
Stanford University
Time: 02.20.2018, 11:00 to 12:00
Location: AVW 4172

Modern datasets are massive, complex, and often unlabeled. These attributes make unsupervised learning important in several data-driven application domains. This ever-growing area, despite its excellent empirical performance, suffers from a major drawback: unlike classical learning methods, many modern approaches lack a fundamental theoretical understanding, which hinders the principled development of improvements. One way to resolve this issue is to draw appropriate connections between modern and classical learning methods; by leveraging the vast body of classical results, one can then develop new learning algorithms or improve existing ones in a principled way. In this talk, I will illustrate the success of this approach in two unsupervised learning problems, namely (1) learning a nonlinear dimensionality reduction of the data, and (2) learning probabilistic models from the data. In the first problem, by drawing connections between Maximal Correlation and PCA, our approach produces a new method called Maximally Correlated PCA, a nonlinear generalization of PCA with a data-dependent nonlinearity. In the second problem, by drawing connections to optimal transport, supervised learning, and rate-distortion theory, our approach leads to a principled design of Generative Adversarial Networks (GANs) in a baseline scenario.
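
As a toy illustration of the first topic (and not the speaker's algorithm), the sketch below applies one simple choice of data-dependent nonlinearity, a rank-based Gaussianization of each feature, before ordinary PCA; Maximally Correlated PCA instead learns its transformations via maximal correlation. The function names and the synthetic data are hypothetical.

    # Toy sketch: PCA preceded by a data-dependent per-feature nonlinearity.
    # This is only an illustration of the general idea, not the method
    # presented in the talk.
    import numpy as np
    from scipy.stats import norm

    def gaussianize(col):
        """Map one feature to approximate normality via its empirical ranks."""
        ranks = np.argsort(np.argsort(col))
        u = (ranks + 1) / (len(col) + 1)   # uniform scores in (0, 1)
        return norm.ppf(u)                 # inverse normal CDF

    def nonlinear_pca(X, k):
        """Run PCA on per-feature Gaussianized data; return top-k projections."""
        Z = np.column_stack([gaussianize(X[:, j]) for j in range(X.shape[1])])
        Z -= Z.mean(axis=0)
        cov = Z.T @ Z / (len(Z) - 1)
        eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
        top = eigvecs[:, np.argsort(eigvals)[::-1][:k]] # top-k eigenvectors
        return Z @ top

    X = np.random.rand(200, 5) ** 3   # skewed synthetic data
    Y = nonlinear_pca(X, k=2)
    print(Y.shape)                    # (200, 2)

Because the transformation here is fixed rather than learned, the sketch only hints at why a data-dependent nonlinearity can help on non-Gaussian features; the talk's contribution is choosing those transformations in a principled, correlation-maximizing way.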