PhD Defense: UNDERSTANDING FAILURE MODES OF DEEP LEARNING MODELS
IRB-5105
https://umd.zoom.us/j/5589957224?pwd=VTU1MW9HaklYRDBNV0szWTc3ZVh3QT09
Abstract:
In the past few years, deep learning models have advanced significantly, achieving remarkable results on challenging problems across multiple domains such as vision and language. Despite their proficiency, these models exhibit numerous failure modes and intriguing behaviors that cannot be mitigated through scaling alone. A comprehensive understanding of these failure modes is imperative for their safe application and utilization.

First, I will discuss our study of various frameworks for detecting content replication, and then show how we used these frameworks to identify memorization in the Stable Diffusion 1.4 model. In the second part, I will discuss the factors that contribute to memorization in diffusion models. While it is widely believed that duplicated images in the training set are responsible for content replication at inference time, I will present results showing that the model's text conditioning also plays an important role. Building on these findings, I will describe several techniques we proposed for reducing data replication at both training and inference time. Lastly, I will discuss my recent work on understanding style memorization in diffusion models, in which we propose a feature extractor for representing style and use it to detect style memorization in images generated by Stable Diffusion.