My research focuses on fundamental methods to generate and manipulate images using computers. We exploit the capabilities of deep learning-based AI techniques, and we develop algorithms and systems for realistic and real-time rendering, as well as for animation and modeling of three-dimensional shapes. My research has applications in virtual reality, digital entertainment, multimedia, and data visualization. Visit my list of publications for a detailed overview of our published work, or follow the links below to individual projects.
We are developing deep learning-based AI techniques to address various problems in computer graphics, including realistic rendering, scene reconstruction, 3D shape analysis, and image processing and editing. For example, check out our SIGGRAPH Asia 2021 paper on Neural Radiosity or our SIGGRAPH 2018 paper on sketch-based image editing using conditional generative adversarial networks. We have also developed a deep learning-based image prior for image restoration, published at NIPS 2017, which leads to state-of-the-art results in applications such as image deblurring and super-resolution. This work establishes an interesting connection between mean shift filtering and denoising autoencoders. We are further investigating deep learning techniques for shape analysis, for example an unsupervised representation learning technique published at AAAI 2019, which allows us to classify shapes with high accuracy without requiring any labeled training data. Finally, we have developed an AI-based technique that learns to importance sample for realistic rendering. This approach treats the renderer as a black box, and it has the potential to accelerate image synthesis with minimal changes to the underlying rendering algorithm.
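The mean-shift connection can be illustrated with a toy sketch: the residual of a denoiser, denoise(x) − x, acts like the gradient of a smoothed log-prior, so iterating small steps of (data residual + denoiser residual) restores an image. This is only a simplified 1-D illustration of the idea, with a Gaussian smoother standing in for the trained denoising autoencoder; all function names and parameters here are hypothetical, not from the paper.

```python
import numpy as np

def gaussian_denoise(x, sigma=2.0):
    """Stand-in denoiser: circular Gaussian smoothing in the Fourier
    domain (a real system would use a trained denoising autoencoder)."""
    n = len(x)
    f = np.fft.rfftfreq(n)                       # frequencies in cycles/sample
    g = np.exp(-2.0 * (np.pi * sigma * f) ** 2)  # Gaussian frequency response
    return np.fft.irfft(np.fft.rfft(x) * g, n)

def restore(y, step=0.2, iters=50):
    """Toy restoration: ascend the sum of a Gaussian data likelihood and
    a smoothed prior, whose gradient is approximated by the denoiser
    residual (the 'mean-shift vector')."""
    x = y.copy()
    for _ in range(iters):
        mean_shift = gaussian_denoise(x) - x  # ~ gradient of smoothed log-prior
        data = y - x                          # gradient of Gaussian log-likelihood
        x = x + step * (data + mean_shift)
    return x
```

On a smooth signal with additive noise, the iteration suppresses high-frequency noise while keeping the low-frequency content that the stand-in prior favors.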
3D shape representations form the foundation for most higher-level applications, and we are investigating new 3D representations and 3D geometry processing techniques that are suitable for these tasks. In our early research we pioneered point-based techniques for reconstruction, rendering, and editing of 3D geometry, which have inspired a whole new research area, and we continue to contribute state-of-the-art techniques for processing point-sampled 3D data.
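A basic step in processing point-sampled 3D data is estimating a surface normal at each point from its neighbors, classically done with a local PCA. The sketch below shows that standard construction; it is a generic illustration of point-based geometry processing, not code from our projects, and the brute-force neighbor search is for clarity only.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate per-point normals of a point-sampled surface by PCA of
    each point's k nearest neighbors: the direction of least variance
    (smallest singular vector) is the local surface normal."""
    normals = np.empty_like(points)
    for i in range(len(points)):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[:k]]         # brute-force k-NN for clarity
        centered = nbrs - nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(centered)       # rows of vt: principal axes
        normals[i] = vt[-1]                      # least-variance direction
    return normals
```

In practice a spatial data structure (k-d tree) replaces the brute-force search, and normal orientations are made globally consistent in a second pass.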
We consider the development of 3D shape representations that support intuitive interactive modeling to be one of the key problems in geometry processing. We have introduced the idea of example-based mesh deformation, which we called mesh-based inverse kinematics. Recently, the proliferation of inexpensive, real-time RGB+depth video cameras has opened up a wealth of new research opportunities. Our vision is to exploit this data to automatically construct 3D models that mimic the properties of their real-world counterparts, such as the degrees of freedom of articulated objects.
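The flavor of example-based deformation can be conveyed with a heavily simplified sketch: given a few example poses, solve for blending weights so that constrained "handle" vertices reach user-specified targets, then apply the blend to the whole mesh. The real mesh-based inverse kinematics formulation blends deformation gradients nonlinearly; this linear vertex-offset version, with hypothetical function names, only illustrates the fit-weights-from-constraints idea.

```python
import numpy as np

def fit_blend_weights(rest, examples, handle_ids, handle_targets):
    """Least-squares fit of per-example blending weights so that the
    handle vertices match their targets.
    rest: (n,3) rest pose; examples: list of (n,3) example poses."""
    offsets = [ex - rest for ex in examples]
    # One column per example: stacked offsets of the handle vertices.
    A = np.stack([off[handle_ids].ravel() for off in offsets], axis=1)
    b = (handle_targets - rest[handle_ids]).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def apply_weights(rest, examples, w):
    """Blend the example offsets over the full mesh."""
    out = rest.copy()
    for wi, ex in zip(w, examples):
        out = out + wi * (ex - rest)
    return out
```

Dragging a handful of handle vertices thus poses the entire shape in the "style" spanned by the examples.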
We are developing efficient, physically-based rendering algorithms for off-line (movie production) and interactive (AR, VR, games) applications. Our research strives to finally make realistic Monte Carlo rendering practical in a broader range of real-world application scenarios. Some examples include a method for filtering noise textures, efficient multidimensional adaptive sampling techniques, novel approaches to photon mapping, rendering participating media, and flexible, meshless precomputed radiance transfer, all published in ACM Transactions on Graphics.
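At the heart of all of these methods is the Monte Carlo estimator: a rendering integral such as the irradiance E = ∫ L(ω) cos θ dω over the hemisphere is estimated by averaging random samples weighted by one over their probability density. A minimal sketch, using uniform hemisphere sampling (pdf = 1/(2π)) and an arbitrary radiance function (both illustrative choices, not any specific published method):

```python
import numpy as np

def irradiance_mc(radiance_fn, n_samples, rng):
    """Monte Carlo estimate of E = integral of L(w) * cos(theta) over the
    hemisphere, with uniform hemisphere sampling (pdf = 1/(2*pi)).
    radiance_fn maps unit directions of shape (n,3) to radiance values."""
    u1 = rng.random(n_samples)          # cos(theta), uniform in [0,1]
    u2 = rng.random(n_samples)
    sin_t = np.sqrt(1.0 - u1 ** 2)
    phi = 2.0 * np.pi * u2
    dirs = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), u1], axis=1)
    f = radiance_fn(dirs) * u1          # integrand L * cos(theta)
    return 2.0 * np.pi * f.mean()       # divide by the pdf 1/(2*pi)
```

For constant unit radiance the true irradiance is pi, and the estimator converges to it at the usual 1/sqrt(n) Monte Carlo rate; the variance of such estimators is exactly what adaptive sampling and denoising attack.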
Recently, our work on image space denoising and adaptive rendering has had a strong impact in the research community and in industry, because it effectively reduces noise while easily interfacing with conventional rendering systems. We are licensing our work to Innobright Technologies Inc., and other players in the industry have adopted techniques inspired by our research. We led the community by organizing an ACM SIGGRAPH 2015 course on denoising in Monte Carlo rendering.
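A core ingredient of image-space Monte Carlo denoising is filtering the noisy color buffer with weights derived from noise-free auxiliary buffers that the renderer produces for free (albedo, normals, depth). The cross-bilateral sketch below illustrates that principle in 1-D; it is a generic textbook construction with illustrative parameters, not our published algorithms.

```python
import numpy as np

def cross_bilateral_1d(noisy, feature, radius=5, sigma_s=2.0, sigma_f=0.1):
    """Cross-bilateral filter: weights combine spatial distance with
    differences in a noise-free auxiliary feature buffer, so edges
    present in the feature are preserved while Monte Carlo noise in
    the color channel is averaged away."""
    n = len(noisy)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        w = np.exp(-(idx - i) ** 2 / (2.0 * sigma_s ** 2)
                   - (feature[idx] - feature[i]) ** 2 / (2.0 * sigma_f ** 2))
        out[i] = np.sum(w * noisy[idx]) / np.sum(w)
    return out
```

Because the weights depend only on buffers any renderer can output, filters of this kind interface easily with conventional rendering systems, which is a large part of their practical appeal.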
We also developed an innovative new Monte Carlo rendering technique called gradient-domain path tracing, which we presented at ACM SIGGRAPH 2015. This approach can significantly reduce the computation time compared to conventional techniques, and it could become a key building block of the next generation of rendering algorithms. Check out our SIGGRAPH 2018 course for an overview. We are also excited to explore the implications of novel display technologies, such as AR and VR goggles or multiview 3D displays, on signal processing and realistic rendering algorithms.
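The reconstruction step of gradient-domain rendering can be sketched compactly: given a noisy primal image and separately estimated finite-difference gradients, solve a screened Poisson problem min_x ||x − primal||² + λ||Dx − grad||². The 1-D dense-solve version below is only an illustration of that least-squares formulation (real implementations work in 2-D with sparse or FFT-based solvers, and λ here is an arbitrary choice):

```python
import numpy as np

def screened_poisson_1d(primal, grad, lam=10.0):
    """Screened Poisson reconstruction: combine a noisy primal signal
    with estimated forward-difference gradients by solving
        (I + lam * D^T D) x = primal + lam * D^T grad,
    the normal equations of the least-squares blend of the two."""
    n = len(primal)
    D = np.zeros((n - 1, n))                 # forward-difference operator
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    A = np.eye(n) + lam * D.T @ D
    b = primal + lam * D.T @ grad
    return np.linalg.solve(A, b)
```

Because gradient estimates are often much less noisy than the primal image, the solve transfers their smoothness to the result while the primal term pins down the overall intensity.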
Automultiscopic displays show stereoscopic images that can be viewed from any viewpoint without special glasses. They hold great promise for the future of television and digital entertainment. We develop signal processing techniques to optimize image quality by reducing sampling artifacts and adapting the signal to the display properties. We are also interested in multi-view content creation and manipulation techniques.
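The sampling-artifact problem has a simple signal-processing core: content must be band-limited to what the display can represent before it is resampled, or high frequencies alias into visible artifacts. A minimal 1-D sketch of that prefiltering step (an ideal low-pass via the FFT, chosen here for brevity rather than taken from any specific display pipeline):

```python
import numpy as np

def downsample(signal, factor, prefilter=True):
    """Downsample a 1-D signal, optionally band-limiting it first by
    zeroing all frequencies above the new Nyquist rate. Without the
    prefilter, frequencies above that rate alias into the output."""
    if prefilter:
        spec = np.fft.rfft(signal)
        cutoff = len(signal) // (2 * factor)  # new Nyquist bin
        spec[cutoff + 1:] = 0.0
        signal = np.fft.irfft(spec, len(signal))
    return signal[::factor]
```

For a multi-view display the same reasoning applies jointly over space and view angle, with the display's inter-view spacing determining the band limit.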
I am also intrigued by applications of algorithms and numerical techniques from Computer Graphics to other scientific areas. For example, in a collaboration with Michel Milinkovitch, a biologist at the University of Geneva, we have been investigating the mechanisms behind structural and color patterns on animal skins.
Noise, or variance, is a fundamental problem in Monte Carlo rendering, and we have contributed several algorithms for effective denoising of Monte Carlo renderings, thereby significantly reducing the required rendering time. This has inspired us to also consider the general image denoising problem. With Claude Knaus, a former PhD student, we developed the dual-domain filter, a state-of-the-art denoising filter that can be implemented in only a few lines of MATLAB code. We are currently also investigating image restoration techniques using deep convolutional neural networks.
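One ingredient of dual-domain filtering, shrinkage of small frequency coefficients, can be shown in isolation with a deliberately simplified global sketch (the actual dual-domain filter applies shrinkage in local windows and combines it with a spatial bilateral kernel; the threshold rule below is an illustrative choice, not the paper's):

```python
import numpy as np

def fourier_shrink(noisy, threshold):
    """Denoise by hard-thresholding small Fourier coefficients: noise
    spreads its energy thinly across all frequencies, while structured
    signal concentrates in a few large coefficients that survive."""
    spec = np.fft.rfft(noisy)
    spec[np.abs(spec) < threshold] = 0.0
    return np.fft.irfft(spec, len(noisy))
```

With the threshold set a few noise standard deviations above the expected coefficient magnitude of pure noise, almost all noise bins are zeroed while the signal's dominant coefficients pass through.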