Date | Speaker | Title |
---|---|---|
Apr 25 | Christopher R. Johnson | Biomedical Computing and Visualization |
Apr 7 | Joao Comba | Broad-Phase Collision Detection Using Semi-Adjusting BSP-trees |
Mar 14 | David Luebke | THE FUTURE IS NOT FRAMED |
Apr 25 | Biomedical Computing and Visualization |
---|---|
Presented By | Christopher R. Johnson (University of Utah) |
Abstract
Computational problems in biomedicine often require a researcher to apply diverse
skills in confronting problems involving very large data sets, three-dimensional
complex geometries, large-scale computing, scientific problem solving environments,
imaging, and large-scale visualization. In this presentation, I will provide examples
of recent research in biomedical computing and visualization in
cardiology (medical device design), neuroscience (epilepsy localization
techniques and surgical planning), and imaging (new methods for interactive
visualization of large-scale 3D MRI and CT volumes, and new methods for diffusion
tensor imaging).
Bio
Professor Johnson directs the Scientific Computing and Imaging Institute
at the University of Utah, where he is a Distinguished Professor of Computer Science
and holds faculty appointments in the Departments of Physics and Bioengineering.
His research interests are in the area of scientific computing.
Particular interests include inverse and imaging problems, adaptive methods,
problem solving environments, biomedical computing, and scientific visualization.
Dr. Johnson founded the SCI research group in 1992, which has since grown to become
the SCI Institute employing over 100 faculty, staff and students.
Professor Johnson serves on several international journal editorial boards,
as well as on advisory boards to several national research centers.
Professor Johnson was awarded a Young Investigator's (FIRST) Award from the NIH in 1992,
the NSF National Young Investigator (NYI) Award in 1994,
and the NSF Presidential Faculty Fellow (PFF) award from President Clinton in 1995.
In 1996 he received a DOE Computational Science Award and in 1997 received
the Par Excellence Award from the University of Utah Alumni Association and
the Presidential Teaching Scholar Award. In 1999, Professor Johnson was awarded
the Governor's Medal for Science and Technology from Governor Michael Leavitt.
In 2003 he received the Distinguished Professor Award from the University of Utah.
Apr 7 | Broad-Phase Collision Detection Using Semi-Adjusting BSP-trees |
---|---|
Presented By | Joao Comba (UFRGS, Brazil) |
Abstract
The broad-phase step of collision detection in scenes composed of n moving objects
is a challenging problem because enumerating collision pairs has an inherent O(n^2)
complexity. Spatial data structures are designed to accelerate this process,
but often their static nature makes it difficult to handle dynamic scenes.
In this work we propose a new semi-adjusting algorithm for BSP-trees representing
scenes composed of thousands of moving objects. The algorithm evaluates locations
where the BSP-tree becomes unbalanced, uses several strategies to alter cutting planes,
and defers updates based on their re-structuring cost. We show that the tree does not
require a complete re-structuring even in highly dynamic scenes, but adjusts itself
while maintaining desirable balancing and height properties.
This work will appear at I3D 2005, and is joint work with Rodrigo Luque and Carla Freitas.
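To make the deferral idea concrete, here is a minimal sketch, in C++, of one way a node might decide between adjusting its cutting plane now and deferring the work. The single split axis, the imbalance measure, the threshold, and the median-based adjustment are all illustrative assumptions, not the strategies or cost model of the paper itself.

```cpp
// Sketch: a BSP node measures its imbalance and only shifts its cutting plane
// when that imbalance exceeds a threshold, deferring the update otherwise.
// Names (BSPNode, maybe_adjust) and the heuristic are illustrative assumptions.
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Object { float lo, hi; };   // object's extent along this node's split axis

struct BSPNode {
    float plane;                    // current cutting-plane position
    std::vector<Object> objs;       // moving objects in this node's region

    // Classify objects as in front of, behind, or straddling the plane.
    void classify(int& front, int& back, int& straddle) const {
        front = back = straddle = 0;
        for (const Object& o : objs) {
            if (o.lo > plane)       ++front;
            else if (o.hi < plane)  ++back;
            else                    ++straddle;
        }
    }
};

// Shift the plane only when the imbalance exceeds the threshold; otherwise the
// update is deferred, amortizing re-structuring cost across frames.
bool maybe_adjust(BSPNode& n, float threshold = 0.25f) {
    int f, b, s;
    n.classify(f, b, s);
    const int total = f + b + s;
    if (total == 0) return false;
    const float imbalance = float(std::abs(f - b) + s) / float(total);
    if (imbalance <= threshold) return false;        // cheap path: defer the update
    // Illustrative adjustment: move the plane to the median object center.
    std::vector<float> centers;
    for (const Object& o : n.objs) centers.push_back(0.5f * (o.lo + o.hi));
    auto mid = centers.begin() + centers.size() / 2;
    std::nth_element(centers.begin(), mid, centers.end());
    n.plane = *mid;
    return true;
}

int main() {
    // Three objects have drifted behind the plane, one in front: unbalanced node.
    BSPNode n{0.0f, {{-3.0f, -2.0f}, {-2.0f, -1.0f}, {-1.5f, -0.5f}, {1.0f, 2.0f}}};
    if (maybe_adjust(n)) std::printf("plane adjusted to %.2f\n", n.plane);
}
```

The point of the sketch is the cheap early-out: in mostly-stable regions `maybe_adjust` returns without touching the plane, which is what lets a tree like this avoid complete re-structuring even in highly dynamic scenes.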
Bio
Joao Comba is an assistant professor in the Graphics Group at the Informatics
department of the Federal University of Rio Grande do Sul (UFRGS), Brazil.
He received a Ph.D. in Computer Science from Stanford University under the
supervision of Leonidas J. Guibas. Before that, he received a master's degree
in Computer Science from the Federal University of Rio de Janeiro, Brazil,
working with Ronaldo Marinho Persiano, and a bachelor's degree in Computer
Science from the Federal University of Rio Grande do Sul, Brazil.
Mar 14 | THE FUTURE IS NOT FRAMED |
---|---|
Presented By | David Luebke (Univ. of Virginia) |
Abstract
The ultimate display will not show "images". To drive the display of the future, we must abandon our traditional concepts of pixels, and of images as grids of coherent pixels, and of imagery as a sequence of images.
So what is this ultimate display? One thing is obvious: the display of the future will have incredibly high resolution. A typical monitor today has 100 dpi, far below a satisfactory printer. Several technologies offer the prospect of much higher resolutions; even today you can buy a 300 dpi e-book. Accounting for hyperacuity, one can make the argument that a "perfect" desktop-sized monitor would require about 6000 dpi, call it 11 gigapixels. Even if we don't seek a perfect monitor, we do want large displays. The very walls of our offices should be active display surfaces, addressable to a resolution comparable to or better than current monitors.

It's not just spatial resolution, either. We need higher temporal resolution: hardcore gamers already use single buffering to reduce delays. The human factors literature justifies this: even 15 ms of delay can harm task performance. Exotic technologies (holographic, autostereoscopic...) just increase the spatial, temporal, and directional resolution required. Suppose we settle for 1 gigapixel displays that can refresh at 240 Hz, roughly 4000x typical display bandwidths today. Recomputing and refreshing every pixel every time is a Bad Idea, for power and thermal reasons if nothing else.

I will present an alternative: discard the frame. Send the display streams of samples (location+color) instead of sequences of images. Build hardware into the display to buffer and reconstruct images from these samples. Exploit temporal coherence: send samples less often where imagery is changing slowly. Exploit spatial coherence: send fewer samples where imagery is low-frequency. Without the rigid sampling patterns of framed renderers, sampling and reconstruction can adapt with very fine granularity to spatio-temporal image change. Sampling uses closed-loop feedback to guide sampling toward edges or motion in the image. A temporally deep buffer stores all the samples created over a short time interval for use in reconstruction. Reconstruction responds both to sampling density and spatio-temporal color gradients. I argue that this will reduce bandwidth requirements by 1-2 orders of magnitude, and show results from our preliminary experiments.
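As a back-of-the-envelope check on the numbers above, 1 gigapixel refreshed at 240 Hz is 240 gigasamples per second, about 4000x the roughly 60 megapixels per second a typical monitor of the day pushed. Below is a minimal sketch, in C++, of the sample-stream idea: a temporally deep buffer of (location, time, color) samples, and a reconstruction that weights samples by spatial proximity and recency. The Gaussian kernel, exponential decay, and all constants are invented for illustration; the reconstruction described in the talk also adapts to sampling density and spatio-temporal gradients.

```cpp
// Sketch: the display receives a stream of samples, keeps a temporally deep
// buffer of recent ones, and reconstructs each pixel as a weighted average
// favoring nearby, recent samples. All names and constants are assumptions.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Sample { float x, y, t, r, g, b; };   // location + timestamp + color

// Temporally deep buffer: retain every sample newer than `window` seconds.
struct DeepBuffer {
    std::vector<Sample> samples;
    void insert(const Sample& s, float now, float window = 0.1f) {
        samples.push_back(s);
        samples.erase(std::remove_if(samples.begin(), samples.end(),
                          [&](const Sample& q) { return now - q.t > window; }),
                      samples.end());
    }
};

// Reconstruct one pixel: Gaussian falloff in space, exponential decay in time,
// so stale samples fade out where newer information has arrived.
void reconstruct(const DeepBuffer& buf, float px, float py, float now, float out[3]) {
    float wsum = 0.0f, acc[3] = {0.0f, 0.0f, 0.0f};
    for (const Sample& s : buf.samples) {
        const float d2 = (s.x - px) * (s.x - px) + (s.y - py) * (s.y - py);
        const float w = std::exp(-d2 / 2.0f) * std::exp(-(now - s.t) / 0.05f);
        acc[0] += w * s.r; acc[1] += w * s.g; acc[2] += w * s.b;
        wsum += w;
    }
    for (int c = 0; c < 3; ++c) out[c] = (wsum > 0.0f) ? acc[c] / wsum : 0.0f;
}

int main() {
    DeepBuffer buf;
    buf.insert({0.0f, 0.0f, 0.00f, 1.0f, 0.0f, 0.0f}, 0.02f);  // older red sample
    buf.insert({0.2f, 0.1f, 0.02f, 0.0f, 0.0f, 1.0f}, 0.02f);  // newer blue sample
    float rgb[3];
    reconstruct(buf, 0.1f, 0.05f, 0.02f, rgb);
    std::printf("reconstructed pixel: %.2f %.2f %.2f\n", rgb[0], rgb[1], rgb[2]);
}
```

Because reconstruction runs per pixel over whatever samples happen to be in the buffer, a renderer is free to send samples sparsely where imagery is static and densely near edges or motion, which is exactly the coherence argument the abstract makes.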
Bio
David Luebke is an Assistant Professor in the Department of Computer Science at the University of Virginia. He earned his Ph.D. in Computer Science at the University of North Carolina under Frederick P. Brooks, Jr., and earned his bachelor's degree in Chemistry at the Colorado College. Professor Luebke's principal research interest is interactive computer graphics, particularly the problem of acquiring and rendering very complex real-world scenes at interactive rates. Specific projects include polygonal level of detail (LOD), temperature-aware graphics architecture, scientific computation on graphics hardware, advanced reflectance and illumination models for real-time rendering, and image-based acquisition of real-world environments. Funded by the National Science Foundation, Professor Luebke and his students worked with colleagues at the University of North Carolina to create the Virtual Monticello museum exhibit. This exhibit ran for over 3 months and helped attract over 110,000 visitors as a centerpiece of the major exhibition Jefferson's America and Napoleon's France at the New Orleans Museum of Art. Luebke is also a co-author of the book "Level of Detail for 3D Graphics" with U. Maryland's own Amitabh Varshney.