This appendix summarizes the research
interests of our faculty. While the
research program is highly interdisciplinary, we have organized this section
according to the major core computer science research areas in which we have
critical mass. These include:
Algorithms and Theory of Computation
Artificial Intelligence
Databases
Computer Vision and Graphics
Scientific Computing
Programming Languages and Software
Engineering
Human-Computer Interactions
Systems and Networks
As some faculty conduct research in multiple areas, and as there is widespread collaboration within and between areas, some faculty members' research is described in multiple parts below for clarity.
This group's research interests are broadly
in the areas of algorithms, logic, complexity theory, cryptography,
computational geometry, data structures, learning theory and parallel
computing. Several faculty in this group
have been working with other faculty in application areas such as
graphics, security, networking, image processing and information retrieval. Over the last five
years, we have hired two new faculty (Katz and Srinivasan), and published
numerous journal and conference papers, including ten papers in the most
prestigious and highly competitive FOCS and STOC conferences. In addition, many
papers have been published in other highly selective conferences such as SODA,
ICALP, CRYPTO and Computational Geometry.
Many of our theory faculty have given invited talks on this research at conferences and at other universities.
Bill Gasarch studies primarily complexity theory and games. The ultimate goal of complexity theory is, given a problem, to determine how hard it is under a variety of measures; this involves logic and combinatorics. Games are of interest for mathematics and computer science education as they can be used to illustrate many concepts in a fun way.
Jonathan Katz studies cryptography,
network/computer security, and distributed algorithms. One recent
representative project is the design of efficient cryptographic algorithms
which remain secure in the face of key exposure. He has explored cryptosystems
in which the secret key evolves over time, and for which exposure of the secret key during some time periods has minimal effect on the security of the system at all other times. As part of this work, he has
constructed the first forward-secure public-key encryption scheme (answering a
problem which had been open for 5 years), introduced the notion of
"key-insulated security" and designed encryption/signature schemes
secure in this sense, and designed the first intrusion-resilient public-key
encryption scheme. Other work in this area has included the development of new
threshold cryptosystems and consideration of "forward-secrecy" in
password-authenticated key exchange.
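As an illustration of the key-evolution idea, the following is a minimal sketch (not Katz's constructions, which are public-key schemes) showing how a symmetric key can be evolved with a one-way function so that exposing the current key reveals nothing about earlier periods; all names and parameters are illustrative.

```python
# A minimal sketch of the forward-secrecy idea behind key-evolving schemes:
# the key for period t+1 is derived from the key for period t with a one-way
# function, and the old key is erased.  Compromising the period-t key then
# reveals nothing about keys for earlier periods.  (This is a symmetric-key
# simplification; the schemes described above are public-key constructions.)
import hashlib, hmac

def evolve(key: bytes) -> bytes:
    """Derive the next period's key; the caller must discard the old key."""
    return hashlib.sha256(b"evolve" + key).digest()

def mac(key: bytes, message: bytes) -> bytes:
    """Authenticate a message under the current period's key."""
    return hmac.new(key, message, hashlib.sha256).digest()

key = hashlib.sha256(b"initial secret").digest()
for period in range(3):
    tag = mac(key, f"message in period {period}".encode())
    print(period, tag.hex()[:16])
    key = evolve(key)   # old key is overwritten; earlier periods stay secure
```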
Several methods have been developed, and patent applications are currently pending. In addition, Dr. Khuller has been exploring data movement applications over the Internet, using network-flow-based methods in the context of the Bistro Project (a system for uploading data over wide-area networks such as the Internet).
Clyde Kruskal studies parallel computers and
parallel algorithms. In addition, he has been working on discrete geometry
problems related to coloring the plane.
David Mount investigates the design of
efficient algorithms and data structures for geometric problems. His principal interest has been problems with
applications in areas such as image processing, pattern recognition,
information retrieval, and computer graphics.
The major thrust of his recent work has been in approximation
algorithms. This work is motivated by
applications in which existing exact algorithms are unacceptably slow or use
excessive space. He has shown that by
accepting a small error relative to exact solutions, it is possible to achieve
significantly faster algorithms using relatively simple algorithms and data
structures. Examples of problems for
which such efficient solutions have been found include nearest-neighbor
searching in multidimensional Euclidean spaces, clustering multidimensional
data sets, performing pattern matching and registration in digital images, and
robustly fitting lines and curves to noisy data. These projects have resulted in theoretical
advancements, and they have also been publicly released as software systems,
such as ANN, a C++ library for approximate nearest neighbor searching.
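As an illustration of the approximation trade-off, the sketch below uses SciPy's k-d tree (not the ANN library itself) with an error tolerance eps, so the returned neighbor is within a (1+eps) factor of the true nearest distance; the data set and parameters are made up for the example.

```python
# Illustration of approximate nearest-neighbor search with an error tolerance
# eps: the returned neighbor is guaranteed to be within (1 + eps) times the
# true nearest distance.  This uses SciPy's k-d tree, not the ANN library
# itself, purely to show the accuracy/speed trade-off.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((100_000, 8))      # 100k points in 8 dimensions
queries = rng.random((1_000, 8))

tree = cKDTree(points)
d_exact, _ = tree.query(queries, k=1, eps=0.0)   # exact search
d_approx, _ = tree.query(queries, k=1, eps=0.5)  # allow 50% relative error

print("worst observed ratio:", float(np.max(d_approx / d_exact)))
```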
Carl Smith studies algorithmic learning
theory, the process whereby computers learn from examples; discovery science,
the process by which computers can automatically discover interesting facts
from huge data sets; and models of computation, a study of computation from a
variety of view points.
Aravind Srinivasan conducts research on the
design and (theoretical/experimental) analysis of algorithms with applications
in networking, combinatorial optimization, information retrieval, and related
areas, with an emphasis on probabilistic methods. He has developed rigorous
approaches for the design and analysis of approximation algorithms through
randomization; the focus here has been on efficiently solving a "relaxation" of a given (difficult) combinatorial optimization problem, and using randomization to repair the violated constraints. This has led to
the current-best approximation algorithms for low-congestion packet routing and
scheduling, scheduling broadcasts in pull-based systems, minimal-redundancy
storage in distributed networks, etc. He has also developed algorithms for
security and multicast in peer-to-peer networks. His interests also include
distributed algorithms: a representative recent result is on fast distributed algorithms for constructing "backbones" for communication in wireless ad hoc networks.
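A minimal sketch of the relax-and-round paradigm is shown below, using set cover as a stand-in problem (the routing and scheduling results above use the same idea with more careful rounding); the instance and constants are illustrative.

```python
# A minimal sketch of the "relax and randomly round" paradigm, on set cover:
# solve the LP relaxation, then include each set independently with
# probability equal to its fractional value, repeating O(log n) times so that
# every element is covered with high probability.
import math
import numpy as np
from scipy.optimize import linprog

sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}, {1, 4}]
n_elements = 6

# LP relaxation: minimize sum x_i subject to, for each element e,
#   sum_{i : e in S_i} x_i >= 1,  with 0 <= x_i <= 1
A = np.array([[-(e in s) for s in sets] for e in range(n_elements)], dtype=float)
b = -np.ones(n_elements)
res = linprog(c=np.ones(len(sets)), A_ub=A, b_ub=b, bounds=(0, 1))
x = res.x

# Randomized rounding, repeated ceil(log n) + 1 times.
rng = np.random.default_rng(1)
chosen = set()
for _ in range(math.ceil(math.log(n_elements)) + 1):
    for i, xi in enumerate(x):
        if rng.random() < xi:
            chosen.add(i)

covered = set().union(*(sets[i] for i in chosen))
print("fractional optimum:", res.fun, "chosen sets:", sorted(chosen),
      "covers all:", covered == set(range(n_elements)))
```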
Uzi Vishkin studies parallel algorithms and architectures. The ongoing PRAM-On-Chip project is a direct outgrowth of theoretical work. The project offers a concrete agenda for challenging the von Neumann architecture of 1946 by aligning the massive knowledge base developed by the parallel algorithms community with the roadmap for CMOS VLSI.
The AI research group's interests, summarized below, span multilingual natural language processing, planning, common sense reasoning, uncertainty management, and biologically inspired computation.
Bonnie Dorr and her colleagues focus on several areas of broad-scale multilingual processing, e.g., machine translation,
scalable translingual document detection, and cross-language information
retrieval. They have investigated the
problem of creating new statistical models that are linguistically informed,
leading to higher quality output for a wide range of languages while still
being practical to train and use. She has developed the technique of "divergence unraveling", i.e., the detection and resolution of language-pair mismatches in which the realization of a concept is distributed differently in different languages. This technique is
currently being explored as a vehicle for translating into English from
languages as diverse as Arabic, Chinese, and Spanish. This approach (and sub-components thereof) is
being tested in a wide range of applications including statistical machine
translation, interactive cross-lingual browsing, and headline generation.
Dana Nau, V. Subrahmanian, and their colleagues work on AI planning and agent systems. This work is closely tied to the ontology and uncertainty work listed below and to the planning work listed above. IMPACT has been used in several DoD applications.
Don Perlis works in the areas of common sense
reasoning, cognitive modeling, and computational theories of the conscious
mind. The underlying methods that he brings to bear on all of these are principally
those of metareasoning and time-sensitive reasoning. Together these two provide very powerful
tools for tackling many real-world aspects of intelligence, including reasoning
in the presence of contradictory information, and with applications to natural-language
human-computer dialog, planning with tight deadlines, and recognition and
repair of mistakes.
Jim Reggia and his students are working on
three aspects of biologically-inspired computation and AI. First, in
computational neuroscience, his research group is studying neural models with a
goal of understanding their self-organization, response to sudden damage, the
processing of temporal sequences, and neural network learning in general. Second, in evolutionary computation, this
group has been using genetic algorithms and related methods to evolve neural
networks (as a means of better understanding biological neural circuits and
architectures) and multi-agent communication systems in artificial worlds. Finally, cellular automata models of
self-replicating machines have been developed to study the fundamental information
processing principles involved in self-replication, and how self-replication
might arise spontaneously from non-replicating components.
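A toy sketch of the evolutionary loop is shown below: a genetic algorithm evolving the weights of a tiny 2-2-1 network to compute XOR. The group's models are far richer; everything here (population size, mutation rate, the XOR task) is illustrative.

```python
# A toy sketch of evolving neural-network weights with a genetic algorithm:
# individuals are weight vectors for a 2-2-1 network, fitness is (negative)
# error on XOR, and each generation applies selection, crossover, and mutation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w, x):
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]; W2 = w[6:8]; b2 = w[8]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)   # negative mean squared error

pop = rng.normal(size=(60, 9))
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]          # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        cut = rng.integers(1, 9)
        child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
        child += rng.normal(scale=0.2, size=9) * (rng.random(9) < 0.3)  # mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print("XOR outputs:", np.round(forward(best, X), 2))
```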
V. Subrahmanian is interested in uncertainty
management. He has developed logic
programming languages to handle uncertainty, as well as uncertainty and time.
In addition, he has developed extensions of the relational algebra to handle
uncertain reasoning in relational DBMSs and object oriented DBMSs. Database
support for temporal uncertainty has also been studied.
Sudarshan Chawathe's primary research
interest is semistructured data, which is data whose structure is irregular,
incomplete, and frequently changing.
Examples of such data include structured documents (e.g., memos, legal
briefs, forms) and data obtained by integrating disparate information sources
(e.g., Web sites). With the adoption of
XML, semistructured data is growing in both quantity and variety. Semistructured data (and its XML
serialization) is modeled as an edge- and node-labeled rooted graph. His SEuS
data mining system for efficiently determining frequent substructures in graph
data is based on using a concise summary of
data as a filter to prune the search for frequent structures. This approach is several orders of magnitude
faster than comparable methods, and can efficiently process gigabytes of
disk-resident data. He is also working on methods for processing streaming
data, which is data that is accessible only in a serial order determined by the
data source. Each data item is presented
only once. It is not possible to seek
forward or backward in the data stream, and data cannot be recalled unless
explicitly buffered. He has developed
automaton-based methods for evaluating XPath queries on streaming XML data, and
implemented them in the XSQ system. He
is currently working on expanding this work to more powerful query languages
for streaming data. He is also working on differencing, summarization, and
visualization of graph-structured data.
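As a much-simplified illustration of streaming XPath evaluation, the sketch below handles only a single child-axis path using event-based parsing and a stack; XSQ's automata additionally handle predicates, closures, and aggregation. The document and path are made up.

```python
# A simplified sketch of streaming evaluation of a child-axis XPath such as
# /library/book/title: the parser emits start/end events, a stack tracks the
# current path, and matching text is reported without buffering the document.
import io
import xml.etree.ElementTree as ET

XML = """<library>
  <book><title>Algorithms</title><year>1998</year></book>
  <book><title>Eigensystems</title><year>2001</year></book>
</library>"""

def stream_path(source, path):
    target = path.strip("/").split("/")
    stack = []
    for event, elem in ET.iterparse(source, events=("start", "end")):
        if event == "start":
            stack.append(elem.tag)
        else:
            if stack == target and elem.text:
                yield elem.text
            stack.pop()
            elem.clear()          # discard processed subtrees: constant memory

for title in stream_path(io.StringIO(XML), "/library/book/title"):
    print(title)
```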
Louiqa Raschid's interests include
architectures for wide-area computation
with heterogeneous information servers; publishing and locating sources based on quality and content
metadata using the Web and XML; and
query planning and optimization. Wide-area applications utilize the wide-area
network to connect hundreds of servers with thousands of clients. Such
applications face significant challenges.
This dynamic network may result in a wide variability in end-to-end
latency. Similarly, as cached resources
become obsolete, the staleness of delivered information may vary. Her research
is undertaking a comprehensive study of the changing behavior of resources. Her objective is to develop appropriate
resource profiles to characterize this behavior and to use these profiles to
customize service and information delivery to clients. She is also working on
managing the rapidly growing and diverse datasets available to the biological
enterprise. Such data presents
significant opportunities and challenges for data integration and seamless
access. Her research applies prior expertise in developing data integration architectures based on wrappers and mediators to provide seamless access to heterogeneous Web-accessible sources.
She is developing techniques from areas such as query optimization, adaptive
query evaluation, machine learning and schema mapping and integration of
heterogeneous databases, to solve problems of data integration.
Nick Roussopoulos' recent research focuses on
data warehousing, dynamic Web content, and network intrusion detection. This
work has developed storage architectures and indexing for efficient computation
and management of data cubes generated from very large multidimensional data
sets. Data cubes provide summaries and
aggregations of all possible views of data, and their sizes grow
exponentially. The latest result of this storage technology, the Dwarf Cube, achieves data reduction of up to one million to one. This technology is being evaluated by the largest database
companies. The WebViews project aims at improving the performance of
database-backed Web servers which are commonly used to generate dynamic content
on the Web today. Such content drains
Web server resources and inhibits scalability.
This work has shown that, by using our WebView technology, servers can be scaled up by as much as two orders of magnitude without sacrificing the timeliness of the served information. Nick Roussopoulos is also studying data acquisition for
network intrusion detection. The objective of his effort is to design an efficient, adaptive, and decentralized event data acquisition system for tactical intrusion detection.
Techniques are being developed in three basic
areas: (a) online correlation and value dependency discovery over stream data
using a limited amount of memory and instructions per data item, (b)
compression of data cubes for log data, and (c) delivery of compressed
aggregated/correlated data under limited bandwidth.
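As a back-of-the-envelope illustration of why cube sizes grow exponentially, the sketch below computes a naive full cube (one aggregate per subset of the dimensions, hence 2^d group-bys) over a toy table; the Dwarf Cube avoids this blow-up by coalescing shared prefixes and suffixes, which the sketch does not attempt.

```python
# A naive sketch of a full data cube: one aggregate table per subset of the
# d dimensions, hence 2^d group-bys, which is why cube sizes grow
# exponentially for high-dimensional data sets.
from itertools import combinations
from collections import defaultdict

dimensions = ("store", "product", "quarter")
rows = [
    ("NY", "laptop", "Q1", 12), ("NY", "phone", "Q1", 30),
    ("MD", "laptop", "Q2", 7),  ("MD", "phone", "Q2", 22),
]

cube = {}
for k in range(len(dimensions) + 1):
    for dims in combinations(range(len(dimensions)), k):
        table = defaultdict(int)
        for row in rows:
            key = tuple(row[i] for i in dims)       # project onto these dims
            table[key] += row[-1]                   # aggregate the measure
        cube[tuple(dimensions[i] for i in dims)] = dict(table)

print(len(cube), "group-bys for", len(dimensions), "dimensions")
print(cube[("store",)])         # e.g. total sales per store
```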
V.S. Subrahmanian is working on several major
topics in databases as well as AI. First, he is developing ontology-based
models for semantic integration of diverse, distributed heterogeneous
databases. Second, he is developing
models of databases (relational, object oriented and semistructured) to
represent and manipulate time, uncertainty and spatial data. Third, he is developing models of databases
to support sophisticated AI planning applications. Fourth, he is developing extensions of the
relational algebra to query and summarize multimedia data sets. Finally, he is focusing on efficiently
scaling heterogeneous databases via a variety of techniques.
Yiannis Aloimonos' research during the last
five years has concentrated on the problem of visual motion, and specifically
on the recovery of three-dimensional models of the world from multiple views.
One basic result is a characterization of how the error in building 3D models from video depends on the camera's field of view. As a result, sensors with a full field of view can better estimate 3D motion and thus acquire better 3D models.
This has led to several efforts in building a variety of panoramic sensors. In Aloimonos' lab, study of the geometry of eye design has led to two principles governing eye design as it relates to the ability to recover 3D models from the processing of the images acquired by the eye. An outcome of
this study was the understanding of geometric constraints characterizing the
moving plenoptic function. Finally, a new mathematical framework was introduced
recently under the heading of Harmonic Computational Geometry as a tool for
addressing the correspondence problem (matching) by relating properties of the
signal to the 3D geometry. This work bridges the gap between harmonic analysis
(signal processing) and geometry.
Larry Davis's principal research area is
computer vision, with a focus on visual surveillance. He and his colleagues have been developing
new vision algorithms and systems for detection and tracking of people from
collections of fixed surveillance cameras, and for analysis of their actions
and interactions. His most recent work in surveillance involved the development
of kernel density estimation techniques
for detection using background models and tracking using combined spatial-color
models; multiperspective Bayesian methods for 3D detection and tracking of
people in cluttered environments; methods for fitting 3D density models of
articulated objects to 3D volumetric reconstructions; and various methodologies
for detection of people from moving camera platforms.
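A minimal sketch of the kernel-density-estimation idea for background modeling is given below: each pixel keeps recent background samples and is labeled foreground when the kernel density of its current value falls below a threshold. The actual systems use joint spatial-color models and tracking on top of this; the bandwidth, threshold, and synthetic frames here are illustrative.

```python
# A minimal sketch of kernel-density-estimation background modeling: each
# pixel keeps its N most recent background samples, and a new frame's pixel
# is labeled foreground when the Gaussian-kernel density of its value under
# those samples falls below a threshold.
import numpy as np

def foreground_mask(frame, samples, bandwidth=10.0, threshold=1e-3):
    """frame: (H, W) grayscale; samples: (N, H, W) recent background frames."""
    diff = frame[None, :, :] - samples                    # (N, H, W)
    kernel = np.exp(-0.5 * (diff / bandwidth) ** 2)
    density = kernel.mean(axis=0) / (bandwidth * np.sqrt(2 * np.pi))
    return density < threshold

rng = np.random.default_rng(0)
background = 100 + rng.normal(scale=5, size=(20, 48, 64))  # 20 sample frames
frame = background.mean(axis=0).copy()
frame[10:20, 10:30] = 220                                   # synthetic "person"

mask = foreground_mask(frame, background)
print("foreground pixels detected:", int(mask.sum()))
```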
A. Varshney's research in the Graphics and
Visual Informatics Laboratory deals with a range of issues in visual computing.
These include multiresolution modeling and rendering, with view-dependent and view-independent simplification hierarchies as well as topology-preserving and topology-reducing hierarchies. The goal here is graphics acceleration with
minimal visual degradation as well as enabling 3D graphics over low-bandwidth
networks. Recent work also includes
developing a lighting model to incorporate subsurface scattering effects within
the local illumination framework. In the
area of computational biology, he is developing visual informatics tools and technologies that will give scientists deeper insight into the relationships between form and function in various biological proteins. He is
developing new methods for efficiently computing and displaying electrostatic
potentials by explicitly generating and incorporating the solvent interface.
Among many factors involved in protein-protein interactions, shape complementarity is of major importance. The
goal of ongoing research in shape complementarity is to develop fast and
reliable methods for finding docking sites and corresponding transformations to
align the two molecules into complementary conformations. In research on
display technologies, he has built a tiled-display system that achieves
geometric alignment for 3D graphics applications by pre-warping 3D objects. An
ultrasonic tracker is used for user interactions with the displayed objects.
Numerical analysis work by this group appears in journals such as Linear Algebra and Its Applications, Numerische Mathematik, and Mathematics of Computation.
Pete Stewart has published two volumes of a
projected 5-volume series on Matrix
Algorithms: Basic Decompositions (1998)
and Eigensystems (2001). We have
published 38 journal articles during 1998-2002, with 6 more scheduled for
publication.
Howard Elman, Diane O'Leary and Pete Stewart
are all investigating various aspects of
Krylov subspace methods for solving linear
equations and eigenvalue problems. Krylov subspace methods (e.g., the conjugate
gradient and GMRES methods for solving linear systems and the
Lanczos and Arnoldi methods for the
eigenproblem) are the workhorse algorithms for large matrices, and our group
has contributed to the understanding of these methods. Elman and O'Leary have studied problems for which GMRES makes no progress in its initial iterations. The tool for this analysis was a nonlinear system of equations, the stagnation system, that characterizes this behavior, and we developed several new results on when and why stagnation occurs. O'Leary and Stewart are pursuing
some new phenomena in the convergence of Krylov subspace methods computed with error. These new techniques promise to drastically reduce the work involved in some variants of these methods.
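The stagnation phenomenon is easy to reproduce. The sketch below is a small, self-contained GMRES (Arnoldi plus a least-squares solve), not the group's analysis code, applied to the cyclic shift matrix with right-hand side e_1, a standard example in which the residual does not decrease at all until the final iteration.

```python
# A small, self-contained GMRES (Arnoldi plus a least-squares solve) applied
# to the n-by-n cyclic shift matrix with right-hand side e_1: a standard
# example in which GMRES makes no progress until the n-th iteration, the kind
# of complete stagnation characterized by the stagnation system.
import numpy as np

def gmres_residuals(A, b, steps):
    n = len(b)
    Q = np.zeros((n, steps + 1))
    H = np.zeros((steps + 1, steps))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    residuals = []
    for k in range(steps):
        v = A @ Q[:, k]                       # Arnoldi: expand the Krylov space
        for j in range(k + 1):
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = v / H[k + 1, k]
        e1 = np.zeros(k + 2); e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        residuals.append(np.linalg.norm(H[:k + 2, :k + 1] @ y - e1))
    return residuals

n = 8
A = np.roll(np.eye(n), 1, axis=0)             # cyclic shift matrix
b = np.zeros(n); b[0] = 1.0
print([round(r, 3) for r in gmres_residuals(A, b, n)])
# the residual norm stays at 1.0 for the first n-1 steps, then drops to 0
```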
Elman, O'Leary, and colleagues have developed
a multigrid algorithm for numerical solution of acoustic scattering problems
that overcomes the breakdown of the standard algorithm as the
wavenumber in the Helmholtz equation
increases. They also developed efficient numerical methods for solving the
Helmholtz equation when the data or the forcing function is stochastic.
O'Leary, Stewart, and their students have
studied algorithms for computing regularized solutions to ill-posed problems,
with emphasis on the restoration of blurred images. Recent work has focused on blind
deconvolution (where the blurring function is not known exactly) and on tools
for computing and displaying the uncertainty in reconstructed images (joint
work with James Nagy at Emory).
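As a one-dimensional illustration of regularization, the sketch below blurs a signal with a known Gaussian kernel, adds noise, and compares Tikhonov solutions for several regularization parameters; blind deconvolution additionally estimates the blur itself, which this sketch assumes known. The problem sizes and noise levels are made up.

```python
# A one-dimensional sketch of regularized deblurring: a signal is blurred by
# a known Gaussian kernel and corrupted by noise, and the Tikhonov solution
# minimizes ||A x - b||^2 + lambda ||x||^2.  A too-small lambda amplifies the
# noise; a modest lambda trades a little bias for stability.
import numpy as np

n = 200
t = np.linspace(0, 1, n)
x_true = (np.abs(t - 0.3) < 0.1).astype(float) + 0.5 * (np.abs(t - 0.7) < 0.05)

# Gaussian blurring matrix A (rows normalized to sum to one)
sigma = 0.02
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
A /= A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
b = A @ x_true + 1e-3 * rng.normal(size=n)        # blurred, noisy observation

def tikhonov(A, b, lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(len(b)), A.T @ b)

for lam in (1e-10, 1e-6, 1e-3):
    err = np.linalg.norm(tikhonov(A, b, lam) - x_true)
    print(f"lambda = {lam:g}: reconstruction error = {err:.2f}")
```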
Howard Elman and his collaborators have
developed new algorithms for solving the incompressible Navier-Stokes
equations. The approach is based on preconditioning methodologies that take
advantage of the saddle point structure of the problem and account for
couplings of physical quantities (velocities and pressure) while
maintaining computational efficiency.
Diane O'Leary and her collaborators have
worked on document retrieval and summarization through linear algebra. For retrieval, she developed the
semi-discrete matrix decomposition for use in latent semantic indexing (LSI).
For summarization, she uses hidden Markov models plus a pivoted QR
decomposition.
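A small latent semantic indexing sketch follows. It uses an ordinary truncated SVD rather than the semi-discrete decomposition (whose factors are restricted to entries in {-1, 0, +1} to give a much smaller index), purely to keep the example short; the term-document data are invented.

```python
# A small latent semantic indexing sketch: build a term-document matrix, take
# a rank-k truncated SVD, and compare a query to documents in the reduced
# space.  (Plain SVD stands in here for the semi-discrete decomposition.)
import numpy as np

terms = ["matrix", "eigenvalue", "krylov", "image", "blur", "restoration"]
docs = {
    "d1": "matrix eigenvalue krylov matrix",
    "d2": "krylov matrix eigenvalue",
    "d3": "image blur restoration image",
    "d4": "blur image restoration",
}

A = np.array([[doc.split().count(t) for doc in docs.values()] for t in terms],
             dtype=float)                      # terms x documents

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_k = np.diag(s[:k]) @ Vt[:k, :]            # documents in k-dim latent space

query = np.array([1.0 if t in ("image", "restoration") else 0.0 for t in terms])
query_k = U[:, :k].T @ query                   # project the query

scores = (query_k @ docs_k) / (
    np.linalg.norm(query_k) * np.linalg.norm(docs_k, axis=0))
for name, score in zip(docs, scores):
    print(name, round(float(score), 3))        # d3 and d4 rank highest
```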
Pete Stewart and a student are developing a
Fortran 95 wrapper called Matwrap for matrix operations that should make it easy to turn code from matrix-oriented languages, such as MATLAB, into highly efficient programs.
These three inter-related areas are covered
by a single field committee and include
a diverse set of activities, from work on compiler optimization, to reconfiguring software on the fly, to the empirical evaluation of software processes and products, to the development of visualization techniques. There are several faculty collaborations and
research groups and laboratories, such as the Human Computer Interaction Laboratory
and the Experimental Software Engineering Group.
Victor Basili's primary interest is an
empirical understanding of the relationship between software processes and
products. He has studied the application of techniques in software development
organizations and used the data collected to build models that characterize,
evaluate, predict and improve the techniques and their effects
(Goal/Question/Metric Approach, Quality Improvement Paradigm). He is involved
in the development of methods and tools that support the analysis, synthesis,
and feedback of project information to support organizational learning
(Experience Factory). Current activities involve the development of an
experience base of empirical studies, the development of stakeholder-based models
for dependable systems, and the evaluation of the maturity of technologies for
application in building dependable systems. He has developed families of
techniques for abstracting information from software artifacts of various
kinds, including requirements, OO designs, and code.
Jeff Foster's research focuses on programming
languages and program analysis with applications to software engineering. His goal is to help programmers increase the
safety and reliability of software while simultaneously making programs easier
to write and maintain. His most recent
work proposes type qualifiers as a lightweight, specification-based mechanism
for improving the quality of software. Using novel, constraint-based type
inference algorithms allows type qualifier systems to scale to analysis of
hundreds of thousands of lines of code.
As part of this research, type qualifiers have been used to find
security vulnerabilities in C programs and to find deadlocks in the Linux
kernel.
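A toy sketch of constraint-based qualifier inference is given below for a "tainted" qualifier: each flow in a program yields a subtyping constraint over the two-point lattice untainted <= tainted, constraints are solved by propagation to a fixed point, and an error is reported when tainted data reaches an untainted annotation. Real qualifier systems operate over full C types; the variable names and flows here are invented.

```python
# A toy sketch of constraint-based type-qualifier inference for a "tainted"
# qualifier: each flow yields a constraint qual(src) <= qual(dst) in the
# lattice untainted <= tainted; qualifiers are propagated to a fixed point,
# and an error is reported if a tainted value reaches a variable annotated
# untainted (e.g., a format-string argument).
TAINTED, UNTAINTED = "tainted", "untainted"

annotations = {"user_input": TAINTED, "fmt_arg": UNTAINTED}
flows = [("user_input", "s"), ("s", "t"), ("t", "fmt_arg")]   # assignments/calls

qual = {v: annotations.get(v, UNTAINTED)
        for pair in flows for v in pair}

changed = True
while changed:                                   # propagate taint to a fixed point
    changed = False
    for src, dst in flows:
        if qual[src] == TAINTED and qual[dst] != TAINTED:
            qual[dst] = TAINTED
            changed = True

for var, required in annotations.items():
    if required == UNTAINTED and qual[var] == TAINTED:
        print(f"error: tainted data flows into untainted '{var}'")
```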
Michael Hicks' research is oriented primarily
towards learning how to develop more flexible, reliable, and secure software.
His work bridges the areas of "systems" and programming languages, in
that he has frequently applied or developed language-based technology to solve
systems problems, particularly in networking and distributed systems. His most recent research emphasis is the
areas of on-the-fly software reconfigurability, programmable networking,
garbage collection and memory management, and designing and implementing safe
low-level programming languages.
Atif
Memon's work focuses primarily on the
development of techniques for state-based testing of event-driven systems such
as network protocol systems, graphical user interfaces (GUIs), and web
interfaces. He has developed techniques to test GUIs and used case studies to
show that the GUI testing techniques are both practical and effective. He plans to conduct detailed experiments to provide further empirical evidence of the strengths and weaknesses of these techniques. To this end, he has packaged the techniques into a comprehensive
tool for GUI testing, GUITAR. The tool
will be available to researchers and practitioners and its deployment will
provide opportunities for further research including the development of new
testing paradigms for event-based systems. He is also investigating the
tailoring of these techniques for protocol testing, web testing, security
testing, and configuration space reduction.
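As a simplified illustration of test generation from an event-flow graph, the sketch below enumerates bounded-length event sequences starting from an initial event; GUITAR constructs the graph from the GUI automatically and applies much richer coverage criteria. The events and graph are made up.

```python
# A toy sketch of generating GUI test cases from an event-flow graph: nodes
# are events, an edge (a, b) means b can be executed immediately after a, and
# test cases are event sequences of bounded length starting from an initial
# event.
event_flow = {
    "open_file_menu": ["click_open", "click_save"],
    "click_open":     ["type_filename"],
    "click_save":     ["type_filename"],
    "type_filename":  ["press_ok", "press_cancel"],
    "press_ok":       [],
    "press_cancel":   [],
}

def test_cases(graph, start, max_length):
    """Enumerate all event sequences of length <= max_length from start."""
    stack = [[start]]
    while stack:
        seq = stack.pop()
        yield seq
        if len(seq) < max_length:
            for nxt in graph[seq[-1]]:
                stack.append(seq + [nxt])

for case in test_cases(event_flow, "open_file_menu", 4):
    print(" -> ".join(case))
```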
Adam Porter's research interests include
empirical methods for identifying and eliminating bottlenecks in industrial
development processes, the experimental evaluation of fundamental software
engineering hypotheses, and development of tools that demonstrably improve the software
development process. The goal of a new 5-year, multi-million dollar NSF grant
involving a multidisciplinary team from five universities and research
institutions is to enable the dynamic analysis of software systems, around the world and around the clock, leveraging fielded resources during local, off-peak hours. To do this the team is devising analysis techniques that
are both highly distributed and lightweight from the perspective of individual
system users and that are incremental and adaptive in the sense that they
change their behavior over time based on earlier results. This approach is
expected to give software developers unprecedented insight into the behavior of
their systems as they actually run in the field.
Marvin Zelkowitz is looking at the problem of
understanding new software development technologies and specifically how those
technologies get transferred into industrial practices. All too often new
technology is promoted by hype with little empirical data supporting it. A
scientific approach to technology validation often needs experimental
approaches to validating this new technology. Recently completed activities include 25 years of experience with the NASA GSFC Software Engineering Laboratory, from 1976 through 2001, as well as a study of return on investment from independent verification and validation (IV&V) in the
NASA space shuttle program. A new 5-year research project involves
understanding high dependability within the NASA domain. He is also Chief Scientist of the Fraunhofer Center for Experimental Software Engineering, Maryland.
Ben Shneiderman works on the development and
application of information visualization tools, including treemaps and
starfield displays. Many of these tools have had commercial success in a
variety of applications; e.g., treemaps have been applied to stock market and business analysis, and starfield displays spawned a commercial product called Spotfire. He continues to develop new algorithms for treemaps, refining the
techniques of dynamic queries and extending them to new domains such as time
series data. Current projects include
TimeSearcher, a general purpose tool for exploration and pattern identification
in time series data, and Microarray, a set of experiments used to examine
changes in gene expression over time where data sets are analyzed using
clusters, self-organizing maps, heat maps, and other standard microarray
analysis tools. TimeSearcher is based on the use of timeboxes - rectangular,
direct-manipulation queries - to support interactive exploration via dynamic
queries and provides overviews of query results and drag-and-drop support for
query-by-example.
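For reference, the sketch below implements the original slice-and-dice treemap layout: each node's rectangle receives area proportional to its total size, and the split direction alternates with depth. Later treemap algorithms (squarified, ordered) improve aspect ratios; the input hierarchy here is invented.

```python
# A compact sketch of the slice-and-dice treemap layout: each node's
# rectangle gets area proportional to its total size, and the split direction
# alternates between vertical and horizontal with tree depth.
def total(node):
    return node["size"] if "size" in node else sum(total(c) for c in node["children"])

def layout(node, x, y, w, h, depth=0, out=None):
    out = [] if out is None else out
    out.append((node["name"], round(x, 2), round(y, 2), round(w, 2), round(h, 2)))
    if "children" in node:
        offset = 0.0
        for child in node["children"]:
            frac = total(child) / total(node)
            if depth % 2 == 0:    # split vertically at even depths
                layout(child, x + offset * w, y, w * frac, h, depth + 1, out)
            else:                 # split horizontally at odd depths
                layout(child, x, y + offset * h, w, h * frac, depth + 1, out)
            offset += frac
    return out

tree = {"name": "portfolio", "children": [
    {"name": "tech", "children": [{"name": "AAA", "size": 6}, {"name": "BBB", "size": 2}]},
    {"name": "energy", "children": [{"name": "CCC", "size": 4}]},
]}

for name, x, y, w, h in layout(tree, 0, 0, 100, 100):
    print(f"{name:10s} x={x:6.2f} y={y:6.2f} w={w:6.2f} h={h:6.2f}")
```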
Francois Guimbretiere is investigating novel
interaction techniques for interactive surfaces. While his previous work
focused on large vertical interactive surfaces such as the whiteboard-like Stanford Interactive Mural, his current project will explore horizontal interactive surfaces such as digital tables, tablet computers, and digital paper
systems such as the Anoto pen. Over the next few years, he will implement and
compare interfaces designed for each of these systems to understand how to
narrow the gap between computers and paper.
Ben Bederson works on interaction and
visualization techniques, focusing on three areas: Zoomable User Interfaces
(ZUIs), mobile devices, and interfaces for children. ZUIs are dynamic contextual information
displays that use spatial representations and smooth zooming for
navigation. They use animated zooming to
present information in context. ZUIs
have been applied to tree browsing, web history navigation, presentations, and
photo browsing. There are
general-purpose ZUI toolkits (Jazz, Piccolo) that are broadly used. ZUIs are useful in many contexts where there
is more information than fits on the screen, e.g., mobile devices with small
displays. He has applied ZUIs and other
techniques to PDAs for applications such as calendars, menu selection, and
photo browsing. He has applied ZUIs as a
base technology to a broad set of applications for children - the most recent
of which is the International Children's Digital Library, a collaborative
effort to provide access to thousands of books from around the world to
children.
Ashok Agrawala and Udaya Shankar are studying primarily networking systems and network performance, including location-based systems, and have developed systems such as NetDyn and Rover. NetDyn has been used to monitor the end-to-end behavior of packet losses and round-trip delays and has uncovered irregular behavior of network components such as routers.
Udaya Shankar is also studying performance of
large networking systems involving development of efficient techniques for
performance evaluation. This approach is
based on the Z-iteration method, which is applicable to time-dependent queueing systems and yields the time evolution of various instantaneous probabilistic measures (e.g., blocking probabilities, average number of customers at a resource and in service, etc.) several orders of magnitude faster than numerical or simulation approaches. He is developing a
compositional design and analysis framework for the analysis and testing of
correctness properties of concurrent systems, including real-time and security
properties. The approach used is layered
compositionality and assertional techniques.
The testing framework is in Java and is being applied to undergraduate
networking classes.
Bobby Bhattacharjee and Pete Keleher are
studying multi-party security. They have developed new techniques for securing
large group communications over the
Internet. The work includes scalable techniques
for re-keying of TerraDir, which is a
distributed peer-to-peer directory protocol that can be used as the basis for implementing customized directories for
Internet applications. Bobby Bhattacharjee is also working on other network
security projects.
For example, NICE is a cooperative framework
for scalably implementing distributed applications over the Internet.
Applications in NICE are cooperative: they devote a part of their own resources
to be used by any member of a cooperative group. The goal of NICE is to show
that cooperative applications can achieve overall better performance than
applications that do not cooperate. They
have developed a set of protocols for application-layer multicast,
application-layer distance estimation, and secure multicast within the NICE
framework.
The unique aspect of the Active Harmony work
is the emphasis on adapting to heterogeneous and changing environments. The
primary result of this research will be an infrastructure and a set of
algorithms that permit global resource optimization under changing conditions.
Alan Sussman is also working in the area of
high performance computing. Some of his
work focuses on runtime and compiler support for data-intensive applications.
The goal of this research is to build a common set of software tools and
infrastructure that can support the development of many classes of parallel and
distributed data intensive applications.
He is addressing the problem of coordinating the various components of
the application, namely computation, I/O and interprocessor communication. He has been investigating these issues in
both tightly coupled parallel environments and the distributed heterogeneous
Computational Grid environment across multiple application domains, and has
built both an object-oriented framework and a component-based software environment for creating high performance data intensive applications.
Sussman is also studying interoperability of
data parallel programs. While in sequential programs applications can use
simple abstractions for moving data between address spaces, such as sockets,
pipes or shared memory segments, no such facilities have existed for parallel
programs. He has been working on a
meta-library approach to solving the problem, and has built prototype software
that enables exchange of data between separate (sequential or parallel)
programs, and can also be used to allow data transfers between data managed by
different data parallel regions in the same program.
Bill Arbaugh is working primarily in the area
of computer security. One of his projects is wireless mobility and
security. In this effort the areas being investigated include fast hand-offs, probabilistic ad hoc routing, and ad hoc service discovery. The research to date has resulted in several widely used software artifacts and widely read technical reports and publications. Bill Arbaugh is also working on platform security and
configuration management.
Rather than take the standard approach to
securing platforms, i.e., trusted operating systems, this work is focusing on
improving systems management by providing a dynamic and independent auditing
capability that is OS independent.
Liviu Iftode works in the area of operating
systems and distributed computing. His Split-OS is a new operating system architecture for the next generation of Internet servers, built as clusters of intelligent devices. The lab hosts
several projects related to Split-OS covering networking and file system issues
as well as highly available services. In another project, called Smart Messages, he is developing a system architecture to support computation diffusion on ad hoc networks. He is also actively
pursuing research in the area of massive networks of embedded devices such as
networks of sensors.