Graphics Lunch is a forum for informal and formal discussions over lunch for those interested in graphics and visualization issues at Maryland. It also serves as a forum for talks from visitors to our lab about their recent research in graphics and visualization. Students and faculty can use this venue to practice and prepare for their conference papers, discuss recent and upcoming papers and conferences, or inform others about graphics and visualization news. Meetings are held on Mondays from 12:00pm to 12:50pm in Room CSIC 2107.
Mailing List: To join the graphics seminar mailing list, go to http://www.cs.umd.edu/mailman/listinfo/graphics and follow the instructions there.
Sep 9 | A 2.5D Approach to Photo-Realistic Talking Heads |
Presented By | Amir Khella |
Abstract |
We describe a system for creating a photo-realistic model of the
human head that can be animated and lip-synched from phonetic
transcripts of text. Combined with a state-of-the-art text-to-speech
synthesizer (TTS), it generates video animations of talking heads that
closely resemble real people. To obtain a natural-looking head, we
choose a "data-driven" approach. We record a talking person and apply
image recognition to automatically extract bitmaps of facial parts.
These bitmaps are normalized and parameterized before being entered
into a database. For synthesis, the TTS provides the audio track, as
well as the phonetic transcript from which trajectories in the space of
parameterized bitmaps are computed for all facial parts. Sampling these
trajectories and retrieving the corresponding bitmaps from the database
produces animated facial parts. These facial parts are then projected
and blended onto an image of the whole head using its pose
information. This talking head model can produce new, never recorded
speech of the person who was originally recorded. Talking-head
animations of this type are useful as a front-end for agents and
avatars in multimedia applications such as virtual operators, virtual
announcers, help desks, educational, and expert systems.
Project page: http://www.research.att.com/projects/AnimatedHead/ |
Sep 23 | Wave Optics and Its Application in Computer Graphics |
Presented By | Xuejun Hao |
Abstract |
In this talk, I will present a short summary of wave optics and
its applications in CG, especially in computing the BRDF. The emphasis
will be on plane wave representation, the Kirchhoff diffraction integral,
tangent plane approximation, and evaluation of the Kirchhoff integral
using Fourier transforms.
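For reference, a standard scalar form of the Kirchhoff integral under the tangent-plane approximation, as found in rough-surface scattering texts (the speaker's exact notation may differ), is:

```latex
% Field scattered by a rough surface z = h(x,y) under the tangent-plane
% (Kirchhoff) approximation; k_1, k_2 are incident and scattered wave vectors.
\psi_s(\mathbf{k}_1,\mathbf{k}_2) \;\propto\;
  \int_A e^{\, i \left( v_x x + v_y y + v_z\, h(x,y) \right)} \, dx \, dy ,
\qquad \mathbf{v} = \mathbf{k}_1 - \mathbf{k}_2 .
```

The integrand is e^{i v_z h(x,y)} times a planar phase factor, so the integral is a 2D Fourier transform of e^{i v_z h} evaluated at (v_x, v_y), which is what makes Fourier-transform evaluation of the BRDF practical.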
|
Sep 30 | A Decomposition Approach to Non-manifold Geometric Modeling |
Presented By | Enrico Puppo
Dept of Computer Science, University of Genova, Genova (Italy) |
Abstract |
Non-manifold objects contain geometric singularities and may be made of
parts of different dimensionalities. Such objects are relevant in CAD/CAM
applications and may be useful in other contexts, such as virtual reality,
object recognition systems, and object retrieval. Non-manifold objects are
difficult to model and manipulate. Critical issues are the identification
and representation of geometric singularities; and the design of data
structures that achieve a good trade-off between efficiency of operations
and spatial complexity.
In this talk, we consider objects modeled through simplicial complexes. We first present a data structure for encoding 2D complexes, i.e., triangle-segment meshes, which is both efficient and compact, and scales very well to the manifold case. Next, we consider complexes in arbitrary dimensions, and we present an approach to modeling based on decomposition into parts. We show that there exists a natural decomposition that splits a non-manifold object into components by removing all singularities that can be removed without cutting the object through manifold parts. The resulting components are manifold in the 2D case, while they belong to a more general class in higher dimensions. This decomposition is the basis for designing a two-level data structure for both 2D and 3D (volumetric) meshes, composed of a collection of components and of a connectivity structure that glues them together at non-manifold joints. |
Oct 7 | Volumetric Shadows Using Splatting |
Presented By | Chang Ha Lee |
Abstract |
Authors: Caixia Zhang and Roger Crawfis
This paper describes an efficient algorithm for modeling the light attenuation due to a participating medium with low albedo. The attenuation is modeled using a splatting volume renderer for both the viewer and the light source. During rendering, a 2D shadow buffer attenuates the light for each pixel. When the contribution of a footprint is added to the image buffer, as seen from the eye, we add its contribution to the shadow buffer, as seen from the light source. We have generated shadows for point lights and parallel lights using this algorithm. The shadow algorithm has been extended to handle multiple light sources and projective textured lights. Images: http://www.cis.ohio-state.edu/~zhangc/research/research.html |
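The per-footprint bookkeeping can be sketched as follows. This is a minimal sketch assuming splats arrive pre-sorted front-to-back from the light and each maps to a single pixel in both buffers; function and parameter names are illustrative, not the authors' code:

```python
import numpy as np

def render_with_shadows(splats, image_res, shadow_res, light_intensity=1.0):
    """Splat footprints, attenuating each by the opacity already
    accumulated in a 2D shadow buffer as seen from the light.
    splats must be sorted front-to-back with respect to the light;
    each entry is (eye_x, eye_y, light_x, light_y, color, opacity)."""
    image = np.zeros(image_res)
    shadow = np.zeros(shadow_res)   # opacity accumulated from the light
    for ex, ey, lx, ly, color, alpha in splats:
        received = light_intensity * (1.0 - shadow[ly, lx])  # light reaching this splat
        image[ey, ex] += received * color * alpha            # contribution seen from the eye
        shadow[ly, lx] += (1.0 - shadow[ly, lx]) * alpha     # attenuate light behind it
    return image, shadow
```

The key point is that each footprint is added to both buffers in one pass: its image contribution is scaled by the shadow already cast on it, and it then darkens the shadow buffer for everything behind it along the light ray.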
Oct 14 | Hierarchical Face Clustering on Polygonal Surfaces |
Presented By | Thomas Baby |
Abstract |
Authors: Michael Garland, Andrew Willmott, and Paul S. Heckbert
Many graphics applications, and interactive systems in particular, rely on hierarchical surface representations to efficiently process very complex models. Considerable attention has been focused on hierarchies of surface approximations and their construction via automatic surface simplification. Such representations have proven effective for adapting the level of detail used in real-time display systems. However, other applications such as ray tracing, collision detection, and radiosity benefit from an alternative multiresolution framework: hierarchical partitions of the original surface geometry. We present a new method for representing a hierarchy of regions on a polygonal surface which partition that surface into a set of face clusters. These clusters, which are connected sets of faces, represent the aggregate properties of the original surface at different scales rather than providing geometric approximations of varying complexity. We also describe the combination of an effective error metric and a novel algorithm for constructing these hierarchies. |
Nov 4 | High Quality Compatible Triangulations |
Presented By | Craig Gotsman (Technion - Israel Institute of Technology) |
Abstract |
Compatible meshes are isomorphic meshings of the interiors of two
polygons having a correspondence between their vertices.
Compatible meshing may be used for constructing sweeps, suitable
for finite element analysis, between two base polygons. They
may also be used for meshing a given sequence of polygons forming
a sweep. We present a method to compute compatible triangulations
of planar polygons with a very small number of Steiner (interior)
vertices. While close to optimal in terms of the number
of Steiner vertices, these compatible triangulations are usually
not of high quality, i.e., they do not have well-shaped triangles. We
show how to increase the quality of these triangulations by adding
Steiner vertices in a compatible manner, using several novel
techniques for remeshing and mesh smoothing. The total scheme
results in high-quality compatible meshes with a small number
of triangles. These meshes may then be morphed using barycentric
coordinate representations to obtain the intermediate triangulated
sections of a sweep.
Joint work with Vitaly Surazhsky |
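Once two triangulations are compatible, an intermediate section of the sweep comes from blending corresponding vertices while keeping the shared connectivity. The sketch below shows only the naive linear version; the paper morphs via barycentric coordinate representations, which avoids the fold-overs that direct linear interpolation can produce:

```python
def morph_compatible(verts_a, verts_b, t):
    """Intermediate section at parameter t in [0, 1] by interpolating
    corresponding vertices of two compatible triangulations (same
    connectivity, matched vertex order); the connectivity is unchanged."""
    return [((1.0 - t) * xa + t * xb, (1.0 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(verts_a, verts_b)]
```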
Nov 4 | Immersive Virtual Reality for Scientific Visualization |
Presented By | Andries Van Dam (Brown University) |
Abstract |
Immersive virtual reality (IVR) has the potential to be a powerful tool
for the visualization of burgeoning scientific datasets and models.
While IVR has been available for well over a decade, its use in
scientific visualization is relatively new and many challenges remain
before IVR can become a standard tool for the working scientist. In
this presentation we provide a progress report and sketch a research
agenda for the technology underlying IVR for scientific visualization.
Among the interesting problem areas are how to do computational
steering for exploration, how to use art-inspired visualization
techniques for multi-valued data, and how to construct interaction
techniques and metaphors for pleasant and efficient control of the
environment. To illustrate our approaches to these issues, we will
present specific examples of work from our lab, including immersive
visualizations of arterial blood flow and of medical imaging.
|
Nov 11 | Interactive Procedural Shading for Realistic and Non-Realistic Rendering |
Presented By | Marc Olano (University of Maryland, Baltimore County) |
Abstract |
"Procedural shading" is the use of short procedures in a high
level language to describe aspects of the appearance of surfaces when
creating computer graphic images. It provides a powerful and flexible
means for describing an enormous range of surface appearances.
Procedural shading has been used since the mid-80's for non-interactive
rendering, where each individual frame may take seconds or hours to
render, only to be recorded and later displayed at many frames per
second. In contrast, interactive graphics requires frames to be
rendered as fast as they are displayed, usually between 10 and 60
frames per second, thus allowing a user to interact with the objects in
a rendered scene, see an immediate response, and experience the
illusion that those objects have some physical presence. In the late
1990's, PixelFlow at the University of North Carolina was the first
machine capable of using procedural shaders in interactive computer
graphics. Recent advances in graphics hardware have brought us almost
to the point of having the same capabilities in every new computer.
This talk will focus specifically on some of the uses and application
areas for interactive procedural shading, with examples from work done
primarily at UNC and at SGI.
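To make the idea concrete, here is a toy procedural shader in the spirit described above; the parameters and color choices are illustrative, not from any real shading language:

```python
import math

def stripe_shader(x, y, z, width=0.5,
                  color_a=(1.0, 0.0, 0.0), color_b=(1.0, 1.0, 1.0)):
    """Toy procedural shader: the surface color at point (x, y, z) is
    computed by a short procedure rather than looked up in a texture."""
    band = math.floor(x / width) % 2   # alternate stripes along x
    return color_a if band == 0 else color_b
```

Real shading languages such as the RenderMan shading language, and more recently hardware shading languages, express the same idea: a short per-point procedure evaluated during rendering.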
|
Nov 18 | Turning to the Masters: Motion Capturing Cartoons |
Presented By | Jennifer McDonald |
Abstract |
Authors: Christoph Bregler, Lorie Loeb, Erika Chuang, and Hrishi Deshpande
(Stanford University)
In this paper, we present a technique we call "cartoon capture and retargeting" which we use to track the motion from traditionally animated cartoons and retarget it onto 3-D models, 2-D drawings, and photographs. By using animation as the source, we can produce new animations that are expressive, exaggerated, or non-realistic. Cartoon capture transforms a digitized cartoon into a cartoon motion representation. Using a combination of affine transformation and key-shape interpolation, cartoon capture tracks non-rigid shape changes in cartoon layers. Cartoon retargeting translates this information into different output media. The result is an animation with a new look but with the movement of the original cartoon. |
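The "affine transformation plus key-shape interpolation" representation can be sketched as follows; names and the exact parameterization are illustrative, not the paper's API:

```python
import numpy as np

def cartoon_pose(key_shapes, weights, affine):
    """Reconstruct a cartoon contour as an affine transform applied to a
    weighted blend of key shapes. key_shapes: list of (n, 2) vertex arrays;
    weights: blend weights (summing to 1); affine: (A, b) with A a 2x2
    matrix and b a 2-vector translation."""
    blend = sum(w * k for w, k in zip(weights, key_shapes))  # key-shape interpolation
    A, b = affine
    return blend @ A.T + b                                   # affine deformation
```

Capture then amounts to estimating, per frame, the weights and affine parameters that best fit the tracked cartoon layer; retargeting reapplies the recovered trajectories to a different set of key shapes or drawings.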
Nov 22 | Virtual Medicine, Medical Imaging, and Large Data Visualization |
Presented By | Dirk Bartz (University of Tuebingen, Germany) |
Abstract |
Medical imaging is one of the most established practical fields of
visualization. While the methods in common use deal with individual images
from 3D scanners, treating volumes as stacks of images, 3D
visualization is slowly moving into the daily practice of research
hospitals.
Major challenges in this process are the difficult specification of how the features in volume datasets are visualized (transfer functions, etc.), the occlusion of interesting features by others, and the rapidly increasing size of datasets. While a few years ago 256^3 datasets were the standard size in radiology, the current standard has already increased to 512^2 x 1000 volumes. Soon, high-field MRI scanners will produce volumes of 2048^2 x 1000. In this talk, I will discuss several techniques for dealing with large medical data. In particular, I will present work in the context of virtual endoscopy, a medical-procedure-oriented visualization technique that provides an environment familiar to physicians. |
Nov 25 | Geometric Surface Processing via Normal Maps |
Presented By | Indrajit Bhattacharya |
Abstract |
Authors: Tolga Tasdizen, Ross Whitaker, Paul Burchard, and Stanley Osher.
The generalization of signal and image processing to surfaces entails filtering the normals of the surface, rather than filtering the positions of points on a mesh. Using a variational framework, smooth surfaces minimize the norm of the derivative of the surface normals, i.e., the total curvature. Penalty functions on the surface normals are computed using geometry-based shape metrics and minimized using gradient descent. This produces a set of partial differential equations (PDEs). In this paper, we introduce a novel framework for implementing geometric processing tools for surfaces using a two-step algorithm: (i) operating on the normal map of a surface, and (ii) manipulating the surface to fit the processed normals. The computational approach uses level set surface models; therefore the processing does not depend on any underlying parameterization. Iterating this two-step process, we can implement geometric fourth-order flows efficiently by solving a set of coupled second-order PDEs. This paper demonstrates that the framework provides for a wide range of surface processing operations, including edge-preserving smoothing and high-boost filtering. Furthermore, the generality of the implementation makes it appropriate for very complex surface models, e.g., those constructed directly from measured data. Relevant Web Page: http://www.cs.utah.edu/~whitaker/ross_aniso.html |
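The two-step structure can be illustrated on a 1D periodic heightfield, where the "normal" of y = h(x) is captured by the slope dh/dx. This is only an illustration of the filter-normals-then-refit idea under simplifying assumptions; the paper itself works on level-set surface models, not heightfields:

```python
import numpy as np

def dcirc(f):
    """Periodic central difference."""
    return 0.5 * (np.roll(f, -1) - np.roll(f, 1))

def smooth_normals_then_refit(h, normal_passes=10, fit_steps=200, lr=0.1):
    """Two-step sketch: (i) filter the field of normals, represented here
    by the slope dh/dx; (ii) gradient-descend the heights so that their
    slopes match the processed slope field."""
    slope = dcirc(h)
    for _ in range(normal_passes):             # step (i): smooth the normals
        slope = 0.5 * slope + 0.25 * (np.roll(slope, 1) + np.roll(slope, -1))
    h = np.asarray(h, dtype=float).copy()
    for _ in range(fit_steps):                 # step (ii): refit the surface
        residual = dcirc(h) - slope
        h = h + 2.0 * lr * dcirc(residual)     # descend sum(residual**2)
    return h
```

The fitting step is gradient descent on the squared mismatch between the surface's slopes and the processed slopes, mirroring step (ii) of the paper's algorithm in this much-reduced setting.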
Dec 2 | Interactive Geometry Remeshing |
Presented By | Aravind Kalaiah |
Abstract |
Authors: Pierre Alliez (USC/INRIA), Mark Meyer (Caltech), and Mathieu Desbrun (USC)
We present a novel technique, both flexible and efficient, for interactive remeshing of irregular geometry. First, the original (arbitrary genus) mesh is substituted by a series of 2D maps in parameter space. Using these maps, our algorithm is then able to take advantage of established signal processing and halftoning tools that offer real-time interaction and intricate control. The user can easily combine these maps to create a control map, a map which controls the sampling density over the surface patch. This map is then sampled at interactive rates, allowing the user to easily design a tailored resampling. Once this sampling is complete, a Delaunay triangulation and fast optimization are performed to perfect the final mesh. As a result, our remeshing technique is extremely versatile and general, being able to produce arbitrarily complex meshes with a variety of properties including: uniformity, regularity, semi-regularity, curvature-sensitive resampling, and feature preservation. We provide a high level of control over the sampling distribution, allowing the user to interactively custom-design the mesh based on their requirements, thereby increasing their productivity in creating a wide variety of meshes. |
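Sampling a control map means placing points in parameter space with density proportional to the map's values. A simple stand-in for the paper's halftoning-style sampling is rejection sampling; function and parameter names are illustrative:

```python
import random

def sample_control_map(density, n_samples, seed=0):
    """Draw positions in the unit parameter square with probability
    proportional to a 2D control map, via rejection sampling.
    density: list of rows of non-negative floats (must not be all zero)."""
    rng = random.Random(seed)
    h, w = len(density), len(density[0])
    peak = max(max(row) for row in density)
    samples = []
    while len(samples) < n_samples:
        u, v = rng.random(), rng.random()
        i, j = min(int(v * h), h - 1), min(int(u * w), w - 1)
        if rng.random() * peak < density[i][j]:  # accept proportionally to the map
            samples.append((u, v))
    return samples
```

Halftoning techniques achieve the same density control with better-distributed (blue-noise-like) samples, which is why the paper favors them for mesh vertex placement.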
Dec 4 | Inverse Vision |
Presented By | Pat Hanrahan |
Abstract |
The fields of computer graphics and computer vision are normally
considered duals of each other. In graphics, rendering is the process
of generating an image from a model of the world. In vision, a model of
the world is formed from an image. Rendering is normally considered the
forward problem; that is, a mathematical procedure for simulating the
physics of light and its interactions with the environment. And vision
is considered an inverse problem; that is, the more challenging problem
of undoing the effects of light moving through the environment. As is
well-known, inverse problems often do not have unique solutions or are
ill-conditioned. In this talk I will explore this duality. I will discuss
briefly the state of the art in rendering, illustrating the talk with
recent research on lighting simulation and material models. I will then
discuss the relationship between rendering and vision, and suggest that
it is often better to think of rendering as the inverse of vision.
|
Bio |
Pat Hanrahan is the CANON USA Professor of Computer Science and Electrical
Engineering at Stanford University where he teaches computer graphics.
His current research involves visualization, image synthesis, and graphics
systems and architectures. Before joining Stanford he was a faculty
member at Princeton. He has also worked at Pixar, where he developed
volume rendering software and was the chief architect of the
RenderMan(TM) Interface - a protocol that allows modeling programs to
describe scenes to high quality rendering programs. Previous to Pixar
he directed the 3D computer graphics group in the Computer Graphics
Laboratory at New York Institute of Technology. Professor Hanrahan has
received three university teaching awards. He has received an Academy
Award for Science and Technology, the Spirit of America Creativity
Award, the SIGGRAPH Computer Graphics Achievement Award, and was recently
elected to the National Academy of Engineering.
|
Dec 5 | The Fast Multipole Method for Global Illumination |
Presented By | Sharat Chandran (IIT, Bombay) |
Abstract |
Global illumination enables the production of pictures that look less
like those synthesized by computers: these methods simulate physical
processes such as light transport through an environment. In this work we
present what appears to be the first DIRECT application of the FMM to global
illumination (GI). The talk will be accessible to students, and of interest
to researchers in numerical methods and graphics.
|
Dec 9 | QuadTIN: Quadtree based Triangulated Irregular Networks |
Presented By | Betul Atalay |
Abstract |
Authors: Renato Pajarola, Marc Antonijuan, and Roberto Lario
Interactive visualization of large digital elevation models is of continuing interest in scientific visualization, GIS, and virtual reality applications. Taking advantage of the regular structure of grid digital elevation models, efficient hierarchical multiresolution triangulation and adaptive level-of-detail (LOD) rendering algorithms have been developed for interactive terrain visualization. Despite the higher triangle count, these approaches generally outperform mesh simplification methods that produce triangulated irregular network (TIN) based LOD representations. In this project we combine the advantage of a TIN-based mesh simplification preprocess with high-performance quadtree-based LOD triangulation and rendering at run-time. This approach, called QuadTIN, generates an efficient quadtree triangulation hierarchy over any irregular point set that may originate from irregular terrain sampling or from reducing oversampling in high-resolution grid digital elevation models.