PhD Proposal: Fusing Multimedia Data Into Dynamic Virtual Environments

Talk
Ruofei Du
Time: 10.16.2017, 14:00 to 15:30
Location: AVW 3438

In spite of the dramatic growth of virtual and augmented reality (VR and AR)
technology, content creation for immersive and dynamic virtual environments
remains a significant challenge. In this proposal, we present our research in
automatically fusing multimedia data, including text, photos, panoramas,
point clouds, and multi-view videos, to create rich and compelling virtual
environments.

First, we present Social Street View, which renders geo-tagged social media
in its natural geo-spatial context provided by 360° panoramas, such as
Google Street View. Our system takes into account visual saliency and uses
maximal Poisson-disk placement with spatio-temporal filters to render social
multimedia in an immersive setting. We explore several potential use cases
including immersive social storytelling, experiencing culture, and
crowd-sourced tourism.
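
To make the placement step concrete, below is a minimal sketch of greedy dart-throwing Poisson-disk placement, which approximates a maximal placement when the candidate set is dense. The panorama size, radius, and saliency-ordered candidate list are illustrative assumptions, not the system's actual parameters.

    import math
    import random

    def poisson_disk_placement(candidates, radius, max_items=50):
        """Accept a candidate only if it lies at least `radius` away
        from every previously accepted item. `candidates` are (x, y)
        anchors in panorama pixel coordinates, assumed pre-sorted by
        visual saliency or social relevance (hypothetical ordering)."""
        placed = []
        for p in candidates:
            if all(math.dist(p, q) >= radius for q in placed):
                placed.append(p)
                if len(placed) == max_items:
                    break
        return placed

    # Hypothetical usage: 200 candidate anchors on a 4096x2048 panorama.
    random.seed(42)
    anchors = [(random.uniform(0, 4096), random.uniform(0, 2048))
               for _ in range(200)]
    print(len(poisson_disk_placement(anchors, radius=300.0)))

Because the greedy pass prefers earlier candidates, sorting by saliency lets the most salient media claim space first while the radius constraint prevents visual clutter.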

Second, we present Video Fields, a novel web-based interactive system to
create, calibrate, and render dynamic videos overlaid on 3D scenes.
Our system renders dynamic entities from multiple videos, using early and
deferred texture sampling. Video Fields can be used for immersive surveillance
in virtual environments.
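
The trade-off between the two strategies can be pictured with a toy scanline example: early sampling fetches the video texture at the endpoints and interpolates colors across the span, while deferred sampling interpolates texture coordinates per pixel and fetches afterwards, preserving sharp detail. This is a generic sketch of the distinction under stated assumptions, not the Video Fields renderer; the nearest-neighbor fetch and test frame are illustrative.

    import numpy as np

    def fetch(texture, uv):
        """Nearest-neighbor texture fetch; `uv` lies in [0, 1]^2."""
        h, w = texture.shape[:2]
        x = min(int(uv[0] * (w - 1) + 0.5), w - 1)
        y = min(int(uv[1] * (h - 1) + 0.5), h - 1)
        return texture[y, x]

    def early_sampling(texture, uv0, uv1, n_pixels):
        """Early: fetch colors at the endpoints, then interpolate colors."""
        c0, c1 = fetch(texture, uv0), fetch(texture, uv1)
        t = np.linspace(0.0, 1.0, n_pixels)[:, None]
        return (1 - t) * c0 + t * c1

    def deferred_sampling(texture, uv0, uv1, n_pixels):
        """Deferred: interpolate UVs per pixel, then fetch the texture."""
        t = np.linspace(0.0, 1.0, n_pixels)[:, None]
        uvs = (1 - t) * np.asarray(uv0) + t * np.asarray(uv1)
        return np.stack([fetch(texture, uv) for uv in uvs])

    # Hypothetical video frame containing a sharp vertical edge.
    frame = np.zeros((8, 8, 3))
    frame[:, 4:] = 1.0
    print(early_sampling(frame, (0, 0.5), (1, 0.5), 9)[:, 0])     # blurred ramp
    print(deferred_sampling(frame, (0, 0.5), (1, 0.5), 9)[:, 0])  # sharp edge

Early sampling reads the texture less often, but deferred sampling keeps moving entities crisp, which matters for surveillance footage.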

Third, we present our work on Montage4D, an interactive system for seamlessly
fusing multi-view video textures with dynamic meshes. We use geodesics on
meshes with view-dependent rendering to mitigate spatial occlusion seams
while maintaining temporal consistency. We believe that Montage4D will be
critical for several applications such as immersive telepresence, immersive
training, and live entertainment.
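
One way to picture the fusion step is as per-vertex blend weights: each camera's contribution is scaled by how well it aligns with the current viewpoint and faded out near occlusion seams using geodesic distance on the mesh. The weight form, exponent, and falloff radius below are illustrative assumptions, not the Montage4D formulation.

    import numpy as np

    def blend_weights(view_dir, cam_dirs, seam_dist,
                      alpha=4.0, seam_radius=0.05):
        """Per-vertex texture blend weights for multi-view fusion.

        view_dir:   (3,) unit vector toward the rendering viewpoint.
        cam_dirs:   (k, 3) unit vectors toward the k capture cameras.
        seam_dist:  (k,) geodesic distance on the mesh from this vertex
                    to the nearest occlusion seam in each camera's view.
        """
        # View-dependent term: favor cameras aligned with the viewpoint.
        align = np.clip(cam_dirs @ view_dir, 0.0, None) ** alpha
        # Seam term: fade a camera's contribution to zero at its seams,
        # ramping back up over `seam_radius` (hypothetical falloff).
        seam = np.clip(seam_dist / seam_radius, 0.0, 1.0)
        w = align * seam
        total = w.sum()
        return w / total if total > 0 else np.full(len(w), 1.0 / len(w))

    # Hypothetical vertex seen by three cameras; camera 1 is near a seam.
    view = np.array([0.0, 0.0, 1.0])
    cams = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [-0.6, 0.0, 0.8]])
    dist = np.array([0.20, 0.01, 0.20])
    print(blend_weights(view, cams, dist).round(3))  # ~[0.67, 0.055, 0.275]

Because the seam term varies smoothly along the surface and the alignment term varies smoothly with the viewpoint, the blended texture changes without popping as the camera moves, which suggests how seam mitigation can coexist with temporal consistency.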

Next, we plan to work on efficient processing and rendering of 360° videos,
geo-spatial registration of social media with immersive maps, and using
multi-view video data for reconstruction of dynamic 3D models.

Examining Committee:
Chair: Dr. Amitabh Varshney
Department Representative: Dr. Furong Huang
Members: Dr. Matthias Zwicker