PhD Proposal: 3D Reconstruction in Challenging Environments with Differentiable Rendering
Robust 3D reconstruction in challenging conditions, such as fog, rain, dust, and snow, is essential for autonomous systems. Although researchers have developed new sensing modalities for these environments, most of these modalities have yet to be integrated with modern differentiable rendering techniques, which have demonstrated state-of-the-art performance for 3D reconstruction. This proposal bridges that gap by developing differentiable rendering methods tailored to these novel sensing modalities. We outline two contributions: (1) enhancing differentiable rendering-based 3D reconstruction from forward-looking sonar using generative priors, and (2) reconstructing 3D geometry from millimeter-wave (mmWave) sensor data using differentiable rendering.
First, we introduce Multiview Optical-Acoustic Diffusion (MOAD), a technique that applies diffusion models to reconstruct 3D structure from dense sonar images captured at sparse poses. Leveraging the visual similarity between sonar and natural grayscale imagery, we adapt pretrained diffusion models using limited paired data. Second, we present EOmm-3D, a method for 3D reconstruction from electro-optic mmWave data that incorporates the mmWave image formation model into modern 3D reconstruction pipelines, enabling reconstruction in visually degraded environments.
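To make the MOAD adaptation step concrete, the sketch below shows one plausible way to fine-tune a conditional denoising diffusion model on limited paired data, treating a grayscale image as the target and the corresponding sonar image as conditioning. Every name here (TinyDenoiser, ddpm_step) and the stand-in network are illustrative assumptions, not the proposal's actual architecture; the only substantive content is the standard DDPM noise-prediction objective applied to paired sonar/optical data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a pretrained diffusion backbone (hypothetical; MOAD would
# adapt a large pretrained model rather than train this from scratch).
class TinyDenoiser(nn.Module):
    def __init__(self, channels: int = 1):
        super().__init__()
        # Input: noisy target image, sonar conditioning image, timestep channel.
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels + 1, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x_noisy, cond, t_frac):
        # Broadcast the normalized timestep as an extra image channel.
        t_map = t_frac.view(-1, 1, 1, 1).expand(-1, 1, *x_noisy.shape[2:])
        return self.net(torch.cat([x_noisy, cond, t_map], dim=1))

def ddpm_step(model, x0, cond, alphas_cumprod):
    """One DDPM training step: predict the noise added to x0 at a random t."""
    T = alphas_cumprod.shape[0]
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_noisy = a.sqrt() * x0 + (1.0 - a).sqrt() * noise
    pred = model(x_noisy, cond, t.float() / T)
    return F.mse_loss(pred, noise)

# Toy fine-tuning loop on paired (sonar, grayscale) images.
model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
sonar = torch.rand(8, 1, 64, 64)      # placeholder for real paired data
grayscale = torch.rand(8, 1, 64, 64)
for _ in range(10):
    loss = ddpm_step(model, grayscale, sonar, alphas_cumprod)
    opt.zero_grad(); loss.backward(); opt.step()
```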
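Similarly, the core idea behind EOmm-3D, rendering scene parameters through a sensor forward model and backpropagating an image-space loss, can be illustrated with a toy differentiable range-azimuth splatter standing in for the real mmWave image formation model. Everything in this sketch (mmwave_forward, the Gaussian soft binning, the point-cloud scene representation) is an assumption for illustration; the proposal's pipeline would pair a physically based mmWave formation model with a modern differentiable scene representation.

```python
import math
import torch
import torch.nn.functional as F

def mmwave_forward(points, refl, n_range=64, n_az=64, max_range=10.0):
    """Toy differentiable sensor model: splat reflective 3D points into a
    range-azimuth intensity image via soft Gaussian binning."""
    r = points.norm(dim=-1)
    az = torch.atan2(points[:, 1], points[:, 0])       # radians in [-pi, pi]
    r_idx = r / max_range * n_range                    # fractional range bin
    az_idx = (az + math.pi) / (2 * math.pi) * n_az     # fractional azimuth bin
    rr = torch.arange(n_range, dtype=points.dtype)
    aa = torch.arange(n_az, dtype=points.dtype)
    wr = torch.exp(-0.5 * (rr[None, :] - r_idx[:, None]) ** 2)
    wa = torch.exp(-0.5 * (aa[None, :] - az_idx[:, None]) ** 2)
    # image[r, a] = sum_p refl[p] * wr[p, r] * wa[p, a]
    return torch.einsum('p,pr,pa->ra', refl, wr, wa)

# Recover scene parameters by gradient descent through the forward model.
torch.manual_seed(0)
gt_points = torch.rand(128, 3) * 6.0                   # hidden "true" scene
gt_refl = torch.rand(128)
observed = mmwave_forward(gt_points, gt_refl)          # simulated measurement

points = (torch.rand(128, 3) * 6.0).requires_grad_()
refl = torch.full((128,), 0.5, requires_grad=True)
opt = torch.optim.Adam([points, refl], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = F.mse_loss(mmwave_forward(points, refl), observed)
    loss.backward()
    opt.step()
```

Because every step of the forward model is differentiable, gradients of the measurement loss flow directly into point positions and reflectivities, which is the mechanism that lets differentiable rendering pipelines absorb a non-optical sensor.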