PhD Defense: Everything efficient all at once - Compressing data and deep networks

Talk
Sharath Girish
Time: 
10.10.2024 11:00 to 13:00
Location: 

IRB-4105

https://umd.zoom.us/j/6937178419?pwd=Wjh0cS9qWUVIU0I3WEE5N3d2Sy9xZz09&omn=99510265092ogBpcbslJ8ppLx

Abstract:

Over the past decade, there has been a surge in the use of bulky deep networks that demand significant memory and computation resources, limiting their deployment on edge devices with storage and power constraints. These networks also rely on ever-growing amounts of data, which are being created and transmitted at an exponential rate. My talk will introduce a unified framework for reducing memory and computation costs and explore its application to data compression via efficient representations.

In the first part, I will discuss the framework for compressing convolutional neural networks (CNNs). We use quantized latent representations to improve storage efficiency on disk, while simultaneously inducing sparsity in the network to improve computational efficiency.
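To make the two ingredients concrete, here is a minimal sketch of magnitude pruning combined with uniform integer quantization. This is an illustration of the general idea only, not the dissertation's actual latent-representation formulation; all function and parameter names are hypothetical.

```python
import numpy as np

def quantize_and_prune(w, n_bits=8, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (sparsity),
    then uniformly quantize the rest to n_bits integer codes (storage).
    Illustrative sketch only; names are hypothetical."""
    thresh = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= thresh                # sparsity mask: compute savings
    w_sparse = w * mask
    lo, hi = w_sparse.min(), w_sparse.max()
    scale = (hi - lo) / (2 ** n_bits - 1)     # uniform quantization step
    q = np.round((w_sparse - lo) / scale).astype(np.int32)  # codes to store
    return q, mask, scale, lo

def dequantize(q, mask, scale, lo):
    # Reconstruct weights; pruned entries stay exactly zero.
    return (q * scale + lo) * mask

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
q, mask, scale, lo = quantize_and_prune(w)
w_hat = dequantize(q, mask, scale, lo)
```

The integer codes `q` plus a scale and offset are what would be written to disk, while the zeroed entries can be skipped at inference time.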
The second part of my talk will focus on applying the framework to data compression via implicit neural representations (INRs). We develop a method to compress multi-scale hash-grid INRs for various forms of data, including images, videos, and even 3D scene representations. I will also discuss our work on video-specific compression, which exploits the spatio-temporal redundancies inherent in video.
Next, I will cover methods to improve the efficiency of 3D Gaussian Splatting (3D-GS) as an explicit 3D representation. I will begin by introducing a training framework for static scene 3D-GS, which enhances training and rendering speeds while reducing storage and runtime memory requirements. I will then extend this approach to dynamic scenes in a streamable setting using efficient per-frame deformable 3D-GS. Our joint quantization-sparsity framework, combined with an adaptive masking technique, significantly reduces training time and memory usage while maintaining real-time rendering speeds and high reconstruction quality.
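As a rough illustration of the masking idea in 3D-GS pruning, the sketch below gates each Gaussian with a learned per-Gaussian logit and drops those whose gated opacity falls below a threshold. This is a hypothetical simplification for intuition only; the actual adaptive masking technique is described in the dissertation, and all names here are placeholders.

```python
import numpy as np

def prune_gaussians(positions, opacities, mask_logits, tau=0.5):
    """Keep only Gaussians whose soft gate (sigmoid of a learned
    per-Gaussian logit) times opacity clears the threshold tau.
    Hypothetical sketch; not the dissertation's exact formulation."""
    gate = 1.0 / (1.0 + np.exp(-mask_logits))  # soft mask in (0, 1)
    keep = (gate * opacities) >= tau           # hard keep/prune decision
    return positions[keep], opacities[keep], keep

rng = np.random.default_rng(1)
n = 10_000
positions = rng.normal(size=(n, 3))    # Gaussian centers
opacities = rng.uniform(size=n)        # per-Gaussian opacity
mask_logits = rng.normal(size=n)       # learned jointly with the scene
pos_kept, opa_kept, keep = prune_gaussians(positions, opacities, mask_logits)
```

Dropping low-contribution Gaussians in this way is what reduces both runtime memory and rendering cost while leaving reconstruction quality largely intact.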