Graphics Lunch is a forum for informal and formal discussions over lunch for those interested in graphics and visualization at Maryland. It also hosts talks by visitors to our lab about their recent research in graphics and visualization. Students and faculty can use this venue to practice and prepare conference presentations, discuss recent and upcoming papers and conferences, or share graphics and visualization news. Meetings are held on Mondays from 12:00pm to 1:30pm in the CFAR Seminar Room (AVW 4424).
August 19, 2002 | Irregular, Unknown Light Sources For Dynamic Global Illumination |
Presented By | Sharat Chandran, Visiting Professor, University of Maryland, College Park |
Comments | Graphics Seminar Series |
Abstract |
The goal in global illumination solutions for dynamic environments is to update a scene based on past scenes. Current state-of-the-art solutions are either not applicable, or unduly complex, when there are large changes in the illumination of an unbounded number of objects. Such changes may be caused by the appearance of unexpected (at modeling time), irregular light sources. We define a subset of dynamic environments in which new light sources may be user-introduced, and implement solutions that complement existing schemes. (Joint work with Mayur Srivastava) |
August 5, 2002 | Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments |
Presented By | Aravind Kalaiah, University of Maryland, College Park |
Comments | Graphics Seminar Series |
Abstract |
Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments By Peter-Pike Sloan, Jan Kautz, John Snyder We present a new, real-time method for rendering diffuse and glossy objects in low-frequency lighting environments that captures soft shadows, interreflections, and caustics. As a preprocess, a novel global transport simulator creates functions over the object's surface representing transfer of arbitrary, low-frequency incident lighting into transferred radiance which includes global effects like shadows and interreflections from the object onto itself. At run-time, these transfer functions are applied to actual incident lighting. Dynamic, local lighting is handled by sampling it close to the object every frame; the object can also be rigidly rotated with respect to the lighting and vice versa. Lighting and transfer functions are represented using low-order spherical harmonics. This avoids aliasing and evaluates efficiently on graphics hardware by reducing the shading integral to a dot product of 9 to 25 element vectors for diffuse receivers. Glossy objects are handled using matrices rather than vectors. We further introduce functions for radiance transfer from a dynamic lighting environment through a preprocessed object to neighboring points in space. These allow soft shadows and caustics from rigidly moving objects to be cast onto arbitrary, dynamic receivers. We demonstrate real-time global lighting effects with this approach. |
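The run-time step the abstract describes — reducing the shading integral to a dot product of 9-element vectors for diffuse receivers — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the coefficient values and the `shade_diffuse` helper are hypothetical.

```python
import numpy as np

N_COEFFS = 9  # order-2 spherical-harmonic basis: (l+1)^2 with l = 2

def shade_diffuse(light_coeffs, transfer_coeffs):
    """Exit radiance = <SH lighting, precomputed SH transfer> per vertex."""
    light_coeffs = np.asarray(light_coeffs)
    transfer_coeffs = np.asarray(transfer_coeffs)
    assert light_coeffs.shape == (N_COEFFS,)
    assert transfer_coeffs.shape[-1] == N_COEFFS
    # One dot product per vertex; rows of transfer_coeffs are vertices.
    return transfer_coeffs @ light_coeffs

# Example: two vertices under constant lighting (DC term only).
# Transfer values are illustrative, standing in for the simulator output.
light = np.zeros(N_COEFFS)
light[0] = 1.0
transfer = np.array([[0.5] + [0.0] * 8,     # mostly unshadowed vertex
                     [0.25] + [0.0] * 8])   # partially shadowed vertex
radiance = shade_diffuse(light, transfer)
```

The point of the precomputation is exactly this shape: all global effects (shadows, interreflections) are baked into the per-vertex transfer rows, so the per-frame cost is one small dot product.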
July 15, 2002 | Linear Combination of Transformations |
Presented By | Indrajit Bhattacharya, University of Maryland, College Park |
Comments | Graphics Seminar Series |
Abstract |
Linear Combination of Transformations By Marc Alexa Geometric transformations are most commonly represented as square matrices in computer graphics. Following simple geometric arguments, we derive a natural and geometrically meaningful definition of scalar multiples and a commutative addition of transformations based on the matrix representation, given that the matrices have no negative real eigenvalues. Together, these operations allow linear combination of transformations. This provides the ability to create weighted combinations of transformations, interpolate between transformations, and to construct or use arbitrary transformations in a structure similar to a basis of a vector space. These basic techniques are useful for synthesis and analysis of motions or animations. Animations through a set of key transformations are generated using standard techniques such as subdivision curves. For analysis and progressive compression a PCA can be applied to sequences of transformations. We describe an implementation of techniques that enables an easy-to-use and transparent way of dealing with geometric transformations in graphics software. We compare and relate our approach to other techniques such as matrix decomposition and quaternion interpolation. |
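The commutative addition and scalar multiplication the abstract derives operate in the matrix-logarithm domain, so a weighted combination becomes exp of a weighted sum of logs. A minimal sketch, assuming SciPy's `logm`/`expm` and matrices with no negative real eigenvalues (the paper's stated precondition):

```python
import numpy as np
from scipy.linalg import expm, logm

def combine(matrices, weights):
    """Weighted 'linear combination' of transformations:
    exp(sum_i w_i * log(M_i)). Valid when no matrix has a
    negative real eigenvalue, per the precondition above."""
    acc = sum(w * logm(m) for m, w in zip(matrices, weights))
    return expm(acc).real  # drop negligible imaginary round-off

# Example: interpolate halfway between the identity and a 90-degree
# rotation; the result should be a 45-degree rotation.
theta = np.pi / 2
rot90 = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
halfway = combine([np.eye(2), rot90], [0.5, 0.5])
```

For rotations this reduces to interpolating the rotation angle, which is why the paper can meaningfully compare the approach against quaternion interpolation.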
July 8, 2002 | Reconstruction and Representation of 3D Objects with Radial Basis Functions |
Presented By | Thomas Baby, University of Maryland, College Park |
Comments | Graphics Seminar Series |
Abstract |
Reconstruction and Representation of 3D Objects with Radial Basis Functions By J. C. Carr, R. K. Beatson, J. B. Cherrie, T. J. Mitchell, W. R. Fright, B. C. McCallum, and T. R. Evans We use polyharmonic Radial Basis Functions (RBFs) to reconstruct smooth, manifold surfaces from point-cloud data and to repair incomplete meshes. An object's surface is defined implicitly as the zero set of an RBF fitted to the given surface data. Fast methods for fitting and evaluating RBFs allow us to model large data sets, consisting of millions of surface points, by a single RBF, previously an impossible task. A greedy algorithm in the fitting process reduces the number of RBF centres required to represent a surface and results in significant compression and further computational advantages. The energy-minimisation characterisation of polyharmonic splines results in the smoothest interpolant. This scale-independent characterisation is well-suited to reconstructing surfaces from non-uniformly sampled data. Holes are smoothly filled and surfaces smoothly extrapolated. We use a non-interpolating approximation when the data is noisy. The functional representation is in effect a solid model, which means that gradients and surface normals can be determined analytically. This helps generate uniform meshes and we show that the RBF representation has advantages for mesh simplification and re-meshing applications. Results are presented for real-world rangefinder data. |
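The core idea — the surface as the zero set of an RBF interpolant, with off-surface constraint points keeping the fit from collapsing to the zero function — can be sketched with a direct dense solve. This is illustrative only (the paper's contribution is precisely the *fast* fitting and greedy centre reduction that replace this dense solve); the 2D toy data and helper names are assumptions.

```python
import numpy as np

def fit_rbf(centers, values):
    """Direct dense fit of s(x) = sum_i w_i * phi(|x - c_i|),
    with polyharmonic phi(r) = r. Solves Phi w = values."""
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    return np.linalg.solve(r, values)

def eval_rbf(centers, weights, x):
    """Evaluate the implicit function s at point x."""
    r = np.linalg.norm(centers - x, axis=1)
    return r @ weights

# Toy example: 8 points on the unit circle constrained to value 0
# (on-surface), plus the center constrained to -1 (interior
# off-surface point), mimicking the signed constraints above.
angles = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
pts = np.c_[np.cos(angles), np.sin(angles)]
centers = np.vstack([pts, [[0.0, 0.0]]])
values = np.array([0.0] * 8 + [-1.0])
w = fit_rbf(centers, values)
```

The zero level set of `eval_rbf` then approximates the circle; because the representation is a function, gradients (hence surface normals) fall out analytically, as the abstract notes.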
July 1, 2002 | Robust Epsilon Visibility |
Presented By | Xuejun Hao, University of Maryland, College Park |
Comments | Graphics Seminar Series |
Abstract |
Robust Epsilon Visibility By Florent Duguet and George Drettakis Analytic visibility algorithms, for example methods which compute a subdivided mesh to represent shadows, are notoriously unrobust and hard to use in practice. We present a new method based on a generalized definition of extremal stabbing lines, which are the extremities of shadow boundaries. We treat scenes containing multiple edges or vertices in degenerate configurations (e.g., collinear or coplanar). We introduce a robust epsilon method to determine whether each generalized extremal stabbing line is blocked, or is touched by these scene elements and thus added to the line's generators. We develop robust blocker predicates for polygons which are smaller than epsilon. For larger epsilon values, small shadow features merge and eventually disappear. We can thus robustly connect generalized extremal stabbing lines in degenerate scenes to form shadow boundaries. We show that our approach is consistent, and that shadow boundary connectivity is preserved when features merge. We have implemented our algorithm, and show that we can robustly compute analytic shadow boundaries to the precision of our chosen epsilon threshold for non-trivial models, containing numerous degeneracies. |
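The flavor of the epsilon predicates described above can be illustrated with the simplest case: a scene vertex within distance epsilon of a candidate stabbing line is classified as touching it (a generator) rather than strictly blocking or missing it. This is a hypothetical sketch of the idea, not the paper's actual predicates, and the function names are assumptions.

```python
import numpy as np

def point_line_distance(p, a, b):
    """Distance from point p to the infinite line through a and b."""
    d = b - a
    t = np.dot(p - a, d) / np.dot(d, d)   # parameter of closest point
    return np.linalg.norm(p - (a + t * d))

def classify_vertex(p, a, b, eps):
    """Epsilon classification: within eps of the line counts as
    touching (generator), otherwise the vertex is clear of it."""
    return "touching" if point_line_distance(p, a, b) <= eps else "clear"

# Candidate stabbing line along the x-axis.
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
```

Growing epsilon makes more nearly-degenerate elements classify as touching, which is the mechanism behind the abstract's observation that small shadow features merge and eventually disappear for larger epsilon.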
June 24, 2002 | Interactive Global Illumination in Dynamic Scenes |
Presented By | F. Betul Atalay, University of Maryland, College Park |
Comments | Graphics Seminar Series |
Abstract |
Interactive Global Illumination in Dynamic Scenes By Parag Tole, Fabio Pellacini, Bruce Walter and Donald P. Greenberg In this paper, we present a system for interactive computation of global illumination in dynamic scenes. Our system uses a novel scheme for caching the results of a high quality pixel-based renderer such as a bidirectional path tracer. The Shading Cache is an object-space hierarchical subdivision mesh with lazily computed shading values at its vertices. A high frame rate display is generated from the Shading Cache using hardware-based interpolation and texture mapping. An image space sampling scheme refines the Shading Cache in regions that have the most interpolation error or those that are most likely to be affected by object or camera motion. Our system handles dynamic scenes and moving light sources efficiently, providing useful feedback within a few seconds and high quality images within a few tens of seconds, without the need for any pre-computation. Our approach allows us to significantly outperform other interactive systems based on caching ray-tracing samples, especially in dynamic scenes. Based on our results, we believe that the Shading Cache will be an invaluable tool in lighting design and modelling while rendering. |
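The refinement loop the abstract describes — lazily shade vertices, keep patches prioritized by interpolation error, refine the worst patch first — can be sketched in one dimension. The 1D "patches" and the toy `shade` function are stand-ins for the paper's object-space hierarchical mesh and its path-traced samples, not the actual system.

```python
import heapq

def shade(x):
    """Stand-in for an expensive high-quality shading sample."""
    return x * x

def interp_error(a, b):
    """Midpoint error of linearly interpolating shade over [a, b]."""
    mid = 0.5 * (a + b)
    return abs(shade(mid) - 0.5 * (shade(a) + shade(b)))

def refine(a, b, budget):
    """Spend a fixed shading budget where interpolation error is worst."""
    heap = [(-interp_error(a, b), a, b)]      # max-heap via negation
    while budget > 0 and heap:
        _, lo, hi = heapq.heappop(heap)       # worst patch first
        mid = 0.5 * (lo + hi)
        budget -= 1                           # one newly shaded vertex
        for seg in ((lo, mid), (mid, hi)):
            heapq.heappush(heap, (-interp_error(*seg), *seg))
    return sorted([lo for _, lo, _ in heap] + [max(hi for _, _, hi in heap)])

verts = refine(0.0, 1.0, budget=3)
```

Because refinement is driven by an error priority rather than a fixed schedule, the budget (here, three new samples) lands where interpolation is least trustworthy — the same principle that lets the system give useful feedback within seconds.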
June 17, 2002 | A Procedural Approach to Authoring Solid Models |
Presented By | Chang Ha Lee, University of Maryland, College Park |
Comments | Graphics Seminar Series |
Abstract |
A Procedural Approach to Authoring Solid Models By Barbara Cutler, Julie Dorsey, Leonard McMillan, Matthias Muller, and Robert Jagnow We present a procedural approach to authoring layered, solid models. Using a simple scripting language, we define the internal structure of a volume from one or more input meshes. Sculpting and simulation operators are applied within the context of the language to shape and modify the model. Our framework treats simulation as a modeling operator rather than simply as a tool for animation, thereby suggesting a new paradigm for modeling as well as a new level of abstraction for interacting with simulation environments. Capturing real-world effects with standard modeling techniques is extremely challenging. Our key contribution is a concise procedural approach for seamlessly building and modifying complex solid geometry. We present an implementation of our language using a flexible tetrahedral representation. We show a variety of complex objects modeled in our system using tools that interface with finite element method and particle system simulations. |
June 10, 2002 | Perspective Shadow Maps |
Presented By | Xuejun Hao, University of Maryland, College Park |
Comments | Graphics Seminar Series |
Abstract |
Perspective Shadow Maps By Marc Stamminger and George Drettakis Shadow maps are probably the most widely used means for the generation of shadows, despite their well-known aliasing problems. In this paper we introduce perspective shadow maps, which are generated in normalized device coordinate space, i.e., after perspective transformation. This results in an important reduction of shadow map aliasing with almost no overhead. We correctly treat light source transformations and show how to include all objects which cast shadows in the transformed spaces. Perspective shadow maps can directly replace standard shadow maps for interactive hardware accelerated rendering as well as in high-quality, offline renderers. |
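For background, the per-pixel test that any shadow map performs is a depth comparison; the paper's contribution is only the space in which the map is *rendered* (post-perspective rather than world space), while the lookup below is unchanged. A minimal sketch with illustrative resolution and bias values:

```python
import numpy as np

RES, BIAS = 4, 1e-3   # illustrative texture resolution and depth bias

depth_map = np.full((RES, RES), np.inf)   # nearest depth seen from light

def splat(u, v, depth):
    """Record a blocker depth into shadow-map texel (u, v) in [0, 1)."""
    i, j = int(v * RES), int(u * RES)
    depth_map[i, j] = min(depth_map[i, j], depth)

def in_shadow(u, v, depth):
    """A point is shadowed if something closer to the light was
    recorded at its texel (bias avoids self-shadowing acne)."""
    i, j = int(v * RES), int(u * RES)
    return depth > depth_map[i, j] + BIAS

splat(0.3, 0.3, depth=1.0)   # a blocker near the light
```

The aliasing the abstract targets comes from the fixed texel grid: applying the camera's perspective transform before rendering the map effectively concentrates texels where the viewer needs them.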
June 3, 2002 | Light Field Mapping: Efficient Representation and Hardware Rendering of Surface Light Fields |
Presented By | Aravind Kalaiah, University of Maryland, College Park |
Comments | Graphics Seminar Series |
Abstract |
Light Field Mapping: Efficient Representation and Hardware Rendering of Surface Light Fields By Wei-Chao Chen, Jean-Yves Bouguet, Michael H. Chu, Radek Grzeszczuk A light field parameterized on the surface offers a natural and intuitive description of the view-dependent appearance of scenes with complex reflectance properties. To enable the use of surface light fields in real-time rendering we develop a compact representation suitable for an accelerated graphics pipeline. We propose to approximate the light field data by partitioning it over elementary surface primitives and factorizing each part into a small set of lower-dimensional functions. We show that our representation can be further compressed using standard image compression techniques leading to extremely compact data sets that are up to four orders of magnitude smaller than the input data. Finally, we develop an image-based rendering method, light field mapping, that can visualize surface light fields directly from this compact representation at interactive frame rates on a personal computer. We also implement a new method of approximating the light field data that produces positive-only factors, allowing for faster rendering using simpler graphics hardware than earlier methods. We demonstrate the results for a variety of non-trivial synthetic scenes and physical objects scanned through 3D photography. |
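The factorization step — splitting each surface primitive's slab of light field data into a small set of lower-dimensional functions — can be sketched with a truncated SVD, which gives the optimal low-rank approximation. The data below is synthetic and the matrix sizes are assumptions; the paper additionally develops a positive-only factorization that this sketch does not show.

```python
import numpy as np

rng = np.random.default_rng(0)

def factorize(slab, k):
    """Rank-k approximation of a (view x surface-position) data slab:
    k 'view maps' times k 'surface maps', via truncated SVD."""
    u, s, vt = np.linalg.svd(slab, full_matrices=False)
    view_maps = u[:, :k] * s[:k]      # functions of view direction
    surface_maps = vt[:k, :]          # functions of surface position
    return view_maps, surface_maps

# Synthetic near-low-rank slab: rank-2 signal plus small noise,
# mimicking view-dependent appearance that factorizes well.
slab = (rng.standard_normal((32, 2)) @ rng.standard_normal((2, 48))
        + 0.01 * rng.standard_normal((32, 48)))
views, surfs = factorize(slab, k=2)
approx = views @ surfs
err = np.linalg.norm(slab - approx) / np.linalg.norm(slab)
```

Storing two small factor maps instead of the full slab is what makes the representation hardware-friendly: the factors can be loaded as textures and combined per fragment at render time.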