Date | Speaker | Title |
---|---|---|
Oct 8 | Ravi Ramamoorthi | Signal-Theoretic Representations of Appearance |
Oct 1 | Chang Ha Lee | Light Collages: Lighting Design for Effective Visualization |
Sep 30 | Michael Kass | Physical Simulation at Pixar |
Sep 10 | Xuejun Hao | Efficient Geometry and Illumination Representations for Interactive Protein Visualization |
Jun 9 | Benjamin Watson | Improving Realism in Interactive Display |
Feb 20 | Ramesh Raskar | Vision, Graphics and HCI Research at MERL |
Oct 8 | Signal-Theoretic Representations of Appearance |
Presented By | Ravi Ramamoorthi (Columbia University) |
Abstract |
Many problems in computer graphics require compact and accurate representations
of the appearance of objects, and the mathematical algorithms to manipulate them.
For instance, high quality real-time rendering needs models for appearance effects
like natural illumination from wide-area light sources such as skylight,
realistic material properties like velvet, satin, paints, or wood, and
shading effects like soft shadows.
These effects are also important in many computer vision problems like recognition
and surface reconstruction. In these problems, we must often deal with complex
high-dimensional spaces.
For instance, for real-time relighting in computer graphics, or for lighting-insensitive
recognition in computer vision, we must consider the space of images of an object under
all possible lighting conditions. Since the illumination can in principle come
from anywhere, the appearance manifold would seem to be infinite-dimensional.
However, one can find lower-dimensional and more compact structures that lead to
efficient algorithms.
In this talk, we discuss a signal-theoretic approach to representing appearance, where the illumination and reflection function are signals and filters, and we apply many signal-processing tools such as convolution, wavelet-based representation and non-linear approximation. These representations and tools are applicable to a variety of problems in computer graphics and vision, and we will present examples in real-time rendering in computer graphics, as well as image-based and inverse problems, multiple scattering for volumetric effects, and efficient sampling for image synthesis. |
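As a concrete instance of the filtering view (reflection as a convolution of the lighting signal with the BRDF kernel), the Lambertian case collapses to just nine spherical-harmonic coefficients, per Ramamoorthi and Hanrahan's irradiance environment map formula. A minimal Python sketch, assuming the incident lighting has already been projected onto spherical harmonics:

```python
import numpy as np

# Constants from Ramamoorthi & Hanrahan, "An Efficient Representation for
# Irradiance Environment Maps" (SIGGRAPH 2001).
c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def lambertian_irradiance(L, n):
    """Diffuse irradiance at unit normal n = (x, y, z), given the first nine
    spherical-harmonic coefficients of the lighting, L[(l, m)].  Reflection
    acts as a low-pass filter, so higher-order lighting terms contribute
    negligibly for a Lambertian surface."""
    x, y, z = n
    return (c1 * L[(2, 2)] * (x * x - y * y)
            + c3 * L[(2, 0)] * z * z
            + c4 * L[(0, 0)]
            - c5 * L[(2, 0)]
            + 2 * c1 * (L[(2, -2)] * x * y + L[(2, 1)] * x * z + L[(2, -1)] * y * z)
            + 2 * c2 * (L[(1, 1)] * x + L[(1, -1)] * y + L[(1, 0)] * z))

# Sanity check: a constant environment of unit radiance has L00 = 2*sqrt(pi)
# and all other coefficients zero, so the irradiance is pi at every normal.
L = {(l, m): 0.0 for l in range(3) for m in range(-l, l + 1)}
L[(0, 0)] = 2.0 * np.sqrt(np.pi)
assert abs(lambertian_irradiance(L, (0.0, 0.0, 1.0)) - np.pi) < 1e-4
```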
Bio |
Ravi Ramamoorthi has been an assistant professor of Computer Science
at Columbia University since August 2002, when he received his PhD in computer
science from Stanford University. He is interested in many aspects of computer
graphics and vision, including mathematical foundations, real-time photorealistic
rendering, image-based and inverse rendering, and lighting and appearance in
computer vision.
|
Oct 1 | Light Collages: Lighting Design for Effective Visualization |
Presented By | Chang Ha Lee |
Abstract |
We introduce Light Collages - a lighting design system for effective
visualization based on principles of human perception. Artists and
illustrators enhance perception of features with lighting that is locally
consistent and globally inconsistent. Inspired by these techniques, we
design the placement of light sources to convey a greater sense of realism
and better perception of shape with globally inconsistent lighting. Our
algorithm segments the objects into local surface patches and uses a number
of perceptual heuristics, such as highlights, shadows, and silhouettes, to
enhance the perception of shape. We show our results on scientific and
sculptured datasets.
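A toy illustration of the locally consistent, globally inconsistent idea, assuming the segmentation into patches and a set of candidate light directions are already given; the variance criterion below is only a stand-in for the perceptual heuristics used in the actual system:

```python
import numpy as np

def pick_patch_lights(patch_normals, candidate_dirs):
    """For each surface patch, pick the candidate light direction that
    maximizes the spread of Lambertian shading over the patch's normals,
    a crude proxy for "this light reveals the patch's shape well".
    Lighting stays consistent within a patch but differs across patches.

    patch_normals:  list of (k_i, 3) arrays of unit normals, one per patch
    candidate_dirs: (m, 3) array of unit light directions
    """
    chosen = []
    for normals in patch_normals:
        shading = np.clip(normals @ candidate_dirs.T, 0.0, None)  # (k_i, m) values of max(n.l, 0)
        chosen.append(candidate_dirs[np.argmax(shading.var(axis=0))])
    return chosen
```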
|
Sep 30 | Physical Simulation at Pixar |
Presented By | Michael Kass (Pixar) |
Abstract |
"Monsters Inc." marked Pixar's first extensive use of physical
simulation in a feature film. Pixar animators directly controlled the
movements of the characters' bodies and faces, but much of their hair and
clothing movement was computed using simulations of Newtonian physics. In
Pixar's forthcoming film, "The Incredibles," this technology was extended
and taken to a new level with large numbers of human characters, styles
of hair, and different garments. In this talk, some of the details of the
simulations will be described, as well as their impact on the production
process. Physical simulation allowed a degree of realism of motion that
would not have been possible with traditional methods. Nonetheless,
adding this type of simulation into the Pixar production pipeline
sometimes caused surprising and amusing results -- both successes and
bloopers will be shown. One of the key developments that allowed clothing
simulation to go smoothly during the production was a set of algorithms
for untangling simulated clothing when it was excessively tortured by
the animators. The algorithms allowed the simulator to handle a range
of non-physical situations like character interpenetrations without
producing unpleasant visual artifacts. Details of the algorithms, and
examples of their use will be presented.
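A minimal sketch of the kind of non-physical correction involved, assuming a single spherical collider stands in for a character body part; the production untangling algorithms are considerably more sophisticated:

```python
import numpy as np

def push_out_of_sphere(verts, center, radius, margin=1e-3):
    """Project any cloth vertex found inside a spherical collider back to
    just outside its surface.  This only illustrates repairing an
    interpenetrated, non-physical configuration without introducing
    large velocities or visual artifacts."""
    offsets = verts - center                                   # (n, 3)
    dist = np.maximum(np.linalg.norm(offsets, axis=1), 1e-12)  # avoid divide-by-zero
    inside = dist < radius
    verts[inside] = center + offsets[inside] / dist[inside][:, None] * (radius + margin)
    return verts
```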
|
Bio |
Michael Kass is a Senior Scientist at Pixar Animation Studios
where he worked with David Baraff and Andrew Witkin to develop the
physically-based clothing and hair animation software that was used
in "Monsters Inc." and Pixar's forthcoming film "The Incredibles" He
received his B.A. from Princeton in 1982, his M.S. from M.I.T. in 1984,
and his Ph. D. from Stanford in 1988. Dr. Kass has received numerous
awards for his research on physically-based methods in computer graphics
and computer vision including several conference best paper awards, the
Prix Ars Electronica for the image "Reaction Diffusion Texture Buttons,"
and the Imagina Grand Prix for the animation "Splash Dance." Before
joining Pixar in 1995, Dr. Kass held research positions at Schlumberger
Palo Alto Research and Apple Computer.
|
Sep 10 | Efficient Geometry and Illumination Representations for Interactive Protein Visualization |
Presented By | Xuejun Hao |
Abstract |
We explore techniques for designing efficient geometric and illumination data
representations for the development of algorithms that achieve better interactivity
in visual and computational proteomics, as well as in graphics rendering.
In particular, we will talk about efficient computation and visualization
of molecular electrostatics, together with interactive rendering of translucent materials.
Molecular electrostatics is important for studying the structures and interactions
of proteins, and is vital in many computational biology applications, such as protein
folding and rational drug design. We have developed a system to efficiently solve the
non-linear Poisson-Boltzmann equation governing molecular electrostatics.
Our system simultaneously improves the accuracy and the efficiency of the solution
by adaptively refining the computational grid near the solute-solvent interface.
In addition, we have explored the possibility of mapping the PBE solution onto GPUs.
We use pre-computed accumulation of transparency with spherical-harmonics-based
compression to accelerate volume rendering of molecular electrostatics.
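For reference, in one common dimensionless form the nonlinear Poisson-Boltzmann equation reads

$$\nabla \cdot \big(\epsilon(\mathbf{x})\,\nabla\phi(\mathbf{x})\big) \;-\; \bar{\kappa}^{2}(\mathbf{x})\,\sinh\phi(\mathbf{x}) \;=\; -4\pi\,\rho_f(\mathbf{x}),$$

where $\epsilon$ is the position-dependent dielectric, $\bar{\kappa}$ the modified Debye-Huckel screening parameter, and $\rho_f$ the fixed solute charge density; unit conventions vary between formulations.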
In addition, we present a compact mathematical model to efficiently represent
the six-dimensional integrals of bidirectional surface scattering reflectance
distribution functions (BSSRDFs) to render scattering effects in translucent
materials interactively. Our analysis first reduces the complexity and dimensionality
of the problem by decomposing the reflectance field into non-scattered and
subsurface-scattered reflectance fields. While the non-scattered reflectance field
can be described by 4D bidirectional reflectance distribution functions (BRDFs),
we show that the scattered reflectance field can also be represented by a
4D field through pre-processing the neighborhood scattering radiance transfer integrals.
We use a novel reference-points scheme to compactly represent the pre-computed
integrals using a hierarchical and progressive spherical harmonics representation.
Our algorithm scales linearly with the number of mesh vertices.
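Schematically, once the scattering integrals are precomputed, run-time shading reduces to a per-vertex dot product; the sketch below uses illustrative names rather than the exact quantities of the reference-points scheme:

```python
import numpy as np

def shade_mesh(light_sh, transfer_sh, brdf_terms):
    """Per-vertex shading after precomputation: the subsurface contribution
    is a dot product between the lighting's spherical-harmonic coefficients
    and each vertex's precomputed transfer vector, added to the non-scattered
    (BRDF) contribution, so the cost is linear in the number of vertices.

    light_sh:    (k,) SH coefficients of the distant lighting
    transfer_sh: (n_vertices, k) precomputed per-vertex transfer vectors
    brdf_terms:  (n_vertices,) non-scattered reflectance contribution
    """
    return brdf_terms + transfer_sh @ light_sh
```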
|
Jun 9 | Improving Realism in Interactive Display |
Presented By | Benjamin Watson (Northwestern) |
Abstract |
Making computer imagery more "real" is one of the most basic goals of graphics researchers.
But what does "realism" mean, and how can we get more of it? I will describe two recent
components of our research that address these questions.
The first is a basic investigation of visual sensitivity and level of detail (LOD).
Many are already familiar with the contrast sensitivity function (CSF), which
describes the relationship of the thresholds of perceptibility to viewed spatial
frequency and contrast. Many systems base control of LOD on the CSF, despite the
fact that their manipulations take place well above perceptibility thresholds.
We find strong evidence that supra-threshold LOD control should not be based on
threshold perceptibility. Indeed, we find that the spatial frequency of detail should
play only a minor role in supra-threshold LOD control, and that often, detail should
be increased (not decreased) in low contrast or peripheral display regions.
The second is research in temporally adaptive display. Computer graphics has long
studied techniques for making rendering spatially adaptive within the frame.
How could rendering be improved if it sampled some regions not only more densely,
but also more often? Our nearly interactive prototype extends Bishop et al.'s
frameless rendering with new sampling and reconstruction techniques. We use closed
loop feedback to guide sampling to image regions that change significantly over space or time.
Adaptive reconstruction emphasizes older samples in static settings, resulting in
sharper images; and new samples in dynamic settings, resulting in images that may
be blurry but are up-to-date. In terms of peak signal-to-noise ratio, this prototype
produces much better image streams than framed or non-adaptive frameless renderers
with the same simulated sampling rates.
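A toy version of such a reconstruction policy (the parameter names and exact weighting are illustrative, not those of the prototype): samples are weighted by spatial proximity and by age, with age penalized more heavily where the image is changing.

```python
import numpy as np

def reconstruct_pixel(samples, now, local_change, sigma_s=1.0):
    """Weighted average of cached frameless-rendering samples near a pixel.
    In static regions (local_change near 0) old samples keep high weight,
    giving a sharper result; in dynamic regions recent samples dominate,
    giving a blurrier but up-to-date result.

    samples: iterable of (offset(2,), color(3,), timestamp) tuples, offsets
             measured from the pixel being reconstructed
    local_change: estimate in [0, 1] of how quickly this region is changing
    """
    tau = 1.0 / (0.05 + local_change)   # small tau => old samples decay quickly
    colors, weights = [], []
    for offset, color, t in samples:
        offset = np.asarray(offset, dtype=float)
        w_space = np.exp(-(offset @ offset) / (2.0 * sigma_s ** 2))
        w_time = np.exp(-(now - t) / tau)
        colors.append(color)
        weights.append(w_space * w_time)
    return np.average(np.asarray(colors, dtype=float), axis=0, weights=weights)
```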
|
Bio |
Dr. Benjamin Watson is an Assistant Professor in the Computer Science
Department at Northwestern University in Evanston, Illinois, USA.
There he leads the Realism Lab, which strives to bring realistic complexity
to interactive display. His research interests address understanding, generating,
displaying, and interacting with realism, and therefore include topics such as
measuring visual similarity, procedural modeling of human artifacts, temporally
adaptive rendering, simplification, visualization and 3D interaction.
His work has been applied to digital entertainment and training, corporate and
national intelligence, medical therapy and assessment, and education.
Dr. Watson earned his Ph.D. at Georgia Tech's GVU Center, co-chaired the
Graphics Interface 2001 conference, and chaired the IEEE Virtual Reality 2004 conference.
|
Feb 20 | Vision, Graphics and HCI Research at MERL |
Presented By | Ramesh Raskar (Mitsubishi Electric Research Laboratories) |
Abstract |
As computing has moved off the desktop, research in interaction,
computer graphics, computer vision and usability has become more
interdisciplinary and requires teams of very diverse composition.
I will illustrate this point with a brief description of selected
projects at MERL. They include algorithms for smart elevators,
quantifying presence and flow of people, multi-user touch screens,
LED based communication and chemical sensing. I will also describe
my group's work in locale-aware mobile projectors, composite RFIDs,
image fusion, and a multi-flash depth-edge-detecting camera for
interaction, display and augmentation. (The projects above represent
the research efforts of many members of the MERL staff. For
appropriate credits, please visit the MERL web site,
http://www.merl.com).
|
Bio |
Ramesh Raskar joined MERL as a Research Scientist in 2000
after his doctoral research at the University of North Carolina at
Chapel Hill, where he developed a framework for projector-based
displays. Dr. Raskar's work spans a range of topics in computer
vision and graphics including projective geometry,
non-photorealistic rendering and intelligent user interfaces.
He has developed algorithms for image projection on planar,
non-planar and quadric curved surfaces that simplify constraints
on conventional displays and has proposed Shader Lamps, a new
approach for projector-based augmented reality. Current projects
include composite RFID, multi-flash non-photorealistic camera for
depth edge detection, locale-aware mobile projectors, high dynamic
range video, image fusion for context enhancement and quadric
transfer methods for multi-projector curved screen displays.
Dr. Raskar received the Mitsubishi Electric Information Technology
R&D Award in June 2003. Recently, he was named a winner of the
Global Indus Technovator Award, instituted at MIT to recognize
the top 20 Indian technology innovators on the globe. His papers
have appeared in SIGGRAPH, Eurographics, IEEE Visualization, CVPR
and many other graphics and vision conferences. He has taught
courses and has served as a member of international program
committees at major conferences. He is a member of the ACM and IEEE.
http://www.merl.com/people/raskar/raskar.html
|