If you have any questions about this research, or problems downloading papers, please contact Richard Gerber.
Dong-In Kang, Richard Gerber and Manas Saksena.
Technical Report CS-TR-3885, UMIACS-TR-98-18.
Department of Computer Science, University of Maryland.
College Park, March 1998.
Note: An earlier report on this work appeared as
"Performance-Based Design of Distributed Real-Time Systems,"
in the Proceedings of the IEEE Real-Time Technology and Applications
Symposium, June 1997.
Abstract: This paper presents a design synthesis method for distributed embedded
systems. In such systems, computations can flow through long
pipelines of interacting software components, hosted on a variety of
resources, each of which is managed by a local scheduler. Our method
automatically calibrates the local resource schedulers to achieve the
system's global end-to-end performance requirements.
A system is modeled as a set of distributed task chains (or
pipelines), where each task represents an activity requiring nonzero
load from some CPU or network resource. Task load requirements can
vary stochastically, due to second-order effects like cache memory
behavior, DMA interference, pipeline stalls, bus arbitration delays,
transient head-of-line blocking, etc. We aggregate these effects --
along with a task's per-service load demand -- and model them via a
single random variable, ranging over an arbitrary discrete probability
distribution. Load models can be obtained via profiling tasks in
isolation, or simply by using an engineer's hypothesis about the
system's projected behavior.
The end-to-end performance requirements are posited in terms of
throughput and delay constraints. Specifically, a pipeline's delay
constraint is an upper bound on the total latency a computation can
accumulate, from input to output. The corresponding throughput
constraint mandates the pipeline's minimum acceptable output rate --
counting only outputs which meet their delay constraints. Since
per-component loads can be generally distributed, and since resources
host stages from multiple pipelines, meeting all of the system's
end-to-end constraints is a nontrivial problem.
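A back-of-the-envelope illustration of these two constraints (our sketch, not the paper's synthesis method): if each stage's delay is modeled as an independent discrete random variable, the fraction of outputs meeting the end-to-end delay bound can be estimated by convolving the per-stage distributions. The distributions and bound below are hypothetical.

    from itertools import product

    def convolve(d1, d2):
        # Convolve two discrete distributions given as {value: probability}.
        out = {}
        for (v1, p1), (v2, p2) in product(d1.items(), d2.items()):
            out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
        return out

    def success_rate(stage_delays, delay_bound):
        # Probability that the summed per-stage delays stay within the bound.
        total = {0: 1.0}
        for dist in stage_delays:
            total = convolve(total, dist)
        return sum(p for d, p in total.items() if d <= delay_bound)

    stages = [
        {2: 0.7, 5: 0.3},   # CPU stage: 2 ms typical, 5 ms under contention
        {1: 0.9, 4: 0.1},   # network channel
        {3: 0.8, 6: 0.2},   # consumer stage
    ]
    print(success_rate(stages, delay_bound=10))   # fraction meeting the bound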
Our approach involves solving two sub-problems in tandem: (A) finding
an optimal proportion of load to allocate to each task and channel; and
(B) deriving the best combination of service intervals over which all
load proportions can be guaranteed. The design algorithms use
analytic approximations to quickly estimate output rates and
propagation delays for candidate solutions.
When all parameters are synthesized, the estimated end-to-end
performance metrics are re-checked by simulation. If needed, the
per-component load reservations can be increased and the synthesis
algorithms re-run to improve performance. At that point the system can be
configured according to the synthesized scheduling parameters -- and
then re-validated via on-line profiling.
In this paper we demonstrate our technique on an example system, and
compare the estimated performance to its simulated on-line behavior.
Dong-In Kang, Richard Gerber and Manas Saksena.
IEEE Real-time Technology and Applications Symposium (RTAS) '97.
Montreal, June 1997.
Best Student Paper Award (Dong-In Kang)
Also in Technical Report CS-TR-3769, UMIACS-TR-97-29.
Department of Computer Science, University of Maryland.
College Park, March 1997.
Abstract:
This paper presents a design method for distributed systems with
statistical, end-to-end real-time constraints, and with underlying
stochastic resource requirements. In our framework, a system is
modeled as a set of transaction chains. A chain is a distributed
pipeline of tasks, which communicate in pairwise, producer-consumer
relationships. A task can represent any activity requiring nonzero
load from some CPU or network resource. Every chain has two
end-to-end performance requirements, which are postulated between its
input and output points: Its delay constraint denotes the maximum
amount of time a computation can take to flow through the pipeline, from
input to output; computations that exceed this time are dropped. A
chain's quality constraint mandates a minimum allowable success rate
for outputs that meet their delay constraints.
Since a task's resource requirements can be arbitrarily distributed,
and since resources can host multiple tasks from many chains, meeting
all of the system's delay and quality constraints is a nontrivial
problem. Our design method solves it by using techniques based on
real-time scheduling theory and Markovian analysis. Specifically,
these techniques derive (1) a fixed proportion of resource load to
give each task; and (2) a deterministic processing rate for every
chain, where the objective is to optimize the output success rate
(as determined by an analytical approximation). If a chain fails to
meet its minimal success requirement, its load is increased, and the
algorithm is run again. The final result is a schedulable set of
periodic tasks with statistical guarantees on the end-to-end
processing behavior.
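This outer loop can be pictured with a small sketch of ours; the success-rate estimator below is only a stand-in for the paper's Markovian approximation, and all parameters are hypothetical.

    def calibrate_chain(estimate_success, quality, load=0.1, step=0.05):
        # Increase the chain's reserved load proportion until the estimated
        # output success rate meets the quality constraint.
        while load <= 1.0:
            if estimate_success(load) >= quality:
                return load          # schedulable reservation found
            load += step
        return None                  # infeasible at any reservation level

    # Hypothetical estimator: success rate grows with reserved load.
    print(calibrate_chain(lambda u: 1.0 - 0.5 / (1.0 + 10.0 * u), quality=0.95))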
We demonstrate our technique on an example system, and compare
the estimated success rates with those derived via
simulated on-line behavior.
Richard Gerber, Seongsoo Hong, Dong-In Kang and Manas Saksena.
In Formal Methods in Real-Time Computing, D. Mandrioli and
C. Heitmeyer, editors, John Wiley & Sons, 1996.
Also in UMD Technical Report CS-TR-3476, UMIACS TR 95-61, May 1995.
Abstract: This chapter presents a comprehensive design tool for guaranteeing end-to-end requirements of real-time systems. The tool works as follows. First the system is designed via a graphical interface, and a binding engine connects tasks to their communication points. Then an optimization algorithm, whose objective is to minimize CPU utilization, transforms the end-to-end requirements into a set of intermediate rate constraints on the tasks. If the algorithm fails, a restructuring tool attempts to eliminate bottlenecks by transforming the application, which is then re-submitted to the assignment algorithm. The final result is a schedulable set of fully periodic tasks, which collaboratively maintain the end-to-end constraints.
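One simplified instance of the rate-assignment step (an illustration under assumptions the chapter does not necessarily make): if a chain's end-to-end delay is bounded by the sum of its task periods, then minimizing total utilization sum(c_i/T_i) subject to sum(T_i) <= D has the closed form T_i = D*sqrt(c_i)/sum_j sqrt(c_j).

    import math

    def assign_periods(costs, deadline):
        # Utilization-minimizing periods under sum(T_i) <= deadline.
        norm = sum(math.sqrt(c) for c in costs)
        return [deadline * math.sqrt(c) / norm for c in costs]

    costs = [2.0, 8.0, 18.0]                 # worst-case execution times (ms)
    periods = assign_periods(costs, 120.0)   # -> [20.0, 40.0, 60.0]
    print(sum(c / t for c, t in zip(costs, periods)))   # utilization 0.6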
R. Gerber and S. Hong.
ACM Transactions on Programming Languages and Systems, May 1997.
Also in UMD Technical Report CS-TR-3477, UMIACS TR 95-62, May 1995.
Abstract:
In this paper we present an automated, compiler-based technique to
help developers synthesize correct real-time systems. The domain we
consider is that of multi-programmed real-time applications, in which
periodic tasks control a physical system by interacting with
external sensors and actuators. While a system is up and running,
these operations must be performed as specified -- otherwise the
system may fail.
Correctness depends not only on each program individually, but also on
complex task interactions, which are usually exposed only at runtime.
Errors at this point are typically remedied by a costly process of
instrumentation, measurement and code tuning.
We describe a static alternative to this process, which relies on
well-accepted technologies from optimizing compilers and
fixed-priority scheduling. Specifically, when an application is found
to be overloaded, the scheduling component determines good candidate
tasks to get transformed via program slicing. The slicing engine
decomposes each of the selected tasks into two fragments: one that is
``time-critical,'' and the other ``unobservable.'' The unobservable
part is then spliced to the end of the time-critical code, with the
semantics being maintained. The benefit is that the scheduler may
postpone the unobservable code beyond its original deadline, which can
enhance overall schedulability. While the optimization is completely
local, the improvement is realized globally, for the entire task set.
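A hypothetical before/after of this transformation, in illustrative Python rather than the paper's slicing engine; the task, sensor, actuator and state objects are invented.

    def control_task_original(sensor, actuator, state):
        x = sensor.read()
        cmd = state.law(x)           # control law for this period
        state.update(x)              # bookkeeping, not observable this period
        actuator.write(cmd)          # observable event, delayed by the update

    def control_task_sliced(sensor, actuator, state):
        x = sensor.read()
        cmd = state.law(x)
        actuator.write(cmd)          # time-critical slice meets its deadline
        state.update(x)              # unobservable slice, spliced to the end;
                                     # the scheduler may postpone it past the
                                     # original deadline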
R. Gerber, S. Hong and M. Saksena.
IEEE Transactions on Software Engineering 21(7), July 1995.
Abstract:
This paper presents a comprehensive design methodology for
guaranteeing end-to-end requirements of real-time
systems. Applications are structured as a set of process
components connected by asynchronous channels, whose
endpoints are the system's
external inputs and outputs. Timing constraints are then
postulated between these inputs and outputs; they express
properties such as end-to-end propagation delay, temporal
input-sampling correlation, and allowable separation
times between updated output values.
The automated design method works as follows:
First the end-to-end requirements are transformed into a set of
intermediate rate constraints on the tasks, and new
tasks are created to correlate related inputs.
The intermediate constraints are then solved by an
optimization algorithm, whose objective is to
minimize CPU utilization. If the algorithm fails, a
restructuring tool attempts to eliminate bottlenecks by
transforming the application, which is then re-submitted
to the assignment algorithm.
The final result is a schedulable
set of fully periodic tasks, which collaboratively
maintain the end-to-end constraints.
R. Gerber.
Presented at ACM SIGPLAN Workshop on Language, Compiler, and Tool Support
for Real-Time Systems, June 1994.
Also published as University of Maryland Technical Report
CS-TR-3362, UMIACS-TR-94-117, October 1994.
Abstract:
This report summarizes two talks I gave at
the ACM SIGPLAN Workshop on Language, Compiler, and Tool Support
for Real-Time Systems, which took place on June 21, 1994, in
Orlando, Florida.
The workshop was held in concert with the ACM SIGPLAN Conference on
Programming Language Design and Implementation.
The first talk (``Statements about Real-Time: Truth or Bull?'') was
given in the early morning. At the behest of the workshop's organizers,
its primary function was to seed the ongoing discourse and provoke some
debate. Besides asking controversial questions and
positing opinions, the talk also identified several
fertile research areas that might interest PLDI attendees.
The second talk (``Languages and Transformations: Some Solutions'') was
more technical, and it reviewed our research on program optimizations
for real-time domains. However, I tried as much as possible to
revisit the research problems raised in the morning talk,
and present some possible approaches to them.
The following paragraphs contain the text from my
viewgraphs, laced with some commentary.
Since so much work has been done in real-time systems -- and
even more in programming languages -- my references are by
necessity incomplete.
R. Gerber and S. Hong.
IEEE Transactions on Software Engineering, May 1995.
Also as UMD CS-TR-3323, UMIACS-TR-94-90, July 1994.
Abstract:
We present a programming language called TCEL
(Time-Constrained Event Language),
whose semantics is based on time-constrained
relationships between observable events.
Such a semantics infers only those timing constraints necessary
to achieve real-time correctness, without over-constraining
the system. Moreover, an optimizing compiler
can exploit this looser semantics to help tune the
code, so that its worst-case execution time is consistent with its real-time
requirements.
In this paper we describe such a transformation system, which works
in two phases. First the TCEL source code is translated into
an intermediate representation. Then
an instruction-scheduling algorithm rearranges selected unobservable
operations, and synthesizes tasks guaranteed to
respect the original event-based constraints.
S. Hong.
Ph.D. Dissertation. Also available as
UMD CS-TR-3436, UMIACS-TR-95-33, December 1994.
Abstract:
Developing a real-time system requires finding a balance
between the timing constraints and the functional requirements.
Achieving this balance often requires last-minute, low-level
intervention in the code modules -- via intensive hardware-based
instrumentation and manual program optimizations.
In this dissertation we present an automated, static alternative
to this kind of human-intensive work.
Our approach is motivated by recent advances in compiler
technologies, which we extend to address two specific issues
in real-time programming: feasibility and
schedulability.
A task is infeasible if its execution time stretches
past its deadline. To eliminate such faults,
we have developed a synthesis method that
(1) inspects all infeasible paths, and then (2) moves
instructions out of those paths to shorten the execution time.
On the other hand, schedulability of a task set denotes
the ability to guarantee the deadlines of all tasks
in the application. This property is affected by
interactions between the tasks, as well as their
individual execution times and deadlines.
To address the schedulability problem,
we have developed a task transformation method based on
program slicing. The method decomposes
a task into two subthreads: the IO-handler component
that must meet the original deadline, and the state-update
component that can be postponed past the deadline.
This delayed-deadline approach contributes to the
schedulability of the overall application.
We also present a new fixed-priority preemptive scheduling
strategy, which yields both a feasible priority ordering
and a feasible task-slicing metric.
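For the schedulability side, a standard response-time test for fixed-priority preemptive tasks (a textbook check, not the dissertation's algorithm) looks like the sketch below; the task parameters are hypothetical.

    import math

    def response_time(i, tasks):
        # tasks: (cost, period) pairs, sorted by descending priority.
        # Iterate R = C_i + sum_{j < i} ceil(R / T_j) * C_j to a fixed point.
        c_i, t_i = tasks[i]
        r = c_i
        while True:
            r_next = c_i + sum(math.ceil(r / t) * c for c, t in tasks[:i])
            if r_next == r:
                return r
            if r_next > t_i:
                return None          # deadline missed; stop iterating
            r = r_next

    tasks = [(1, 4), (2, 6), (3, 12)]
    print([response_time(i, tasks) for i in range(len(tasks))])   # [1, 3, 10]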
M. Saksena.
Ph.D. Dissertation. Also available as
UMD CS-TR-3321, UMIACS-TR-94-88, July 1994.
Abstract:
In this dissertation we address the problem of scheduling real-time
tasks with inter-task temporal dependencies. While such
constraints occur frequently in timing specifications, they have not
received much attention in scheduling theory. In the presence of such
constraints, a strategy to create a schedule based on maximum
execution times is often not adequate, and one must account for the
variation in execution times of tasks. We address this issue by
developing the notion of parametric scheduling, in which a
schedule adapts itself to varying execution times. This is achieved by
creating dispatch functions for tasks, which give the start time of a
task as a function of the start and execution times of previous tasks
in the schedule. The parametric scheduling paradigm improves upon the
traditional time-driven approach by providing better support for
complex timing constraints, as well as by giving the run-time
dispatcher flexibility to manage ``spare'' time.
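A minimal sketch of what such a dispatch function might look like at run time (our assumed form, with invented constraint parameters; the dissertation's calendars are more general):

    def dispatch_window(prev_start, prev_exec, min_gap, max_sep):
        # Start-time bounds for the next task, computed from the previous
        # task's observed start time and actual execution time.
        earliest = prev_start + prev_exec + min_gap   # follow the predecessor
        latest = prev_start + max_sep                 # relative deadline
        return None if earliest > latest else (earliest, latest)

    # The dispatcher may place the task anywhere in the window, e.g. as late
    # as possible to leave "spare" time for non-critical work.
    print(dispatch_window(prev_start=0.0, prev_exec=3.2,
                          min_gap=1.0, max_sep=10.0))    # (4.2, 10.0)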
We tackle the problem by decomposing it into three parts: (i) a
parametric scheduler for totally ordered task sets, (ii) an order
generator, and (iii) a pre-processor. The parametric scheduler
generates the dispatch functions used by the run-time dispatcher for an
ordered set of tasks. We show that the dispatch functions can be
generated efficiently, and that the run-time dispatching overhead is
small for realistic constraints. The ordering of the tasks is created
by the order generator, which uses deterministic scheduling for this
purpose. Since the problem of deterministic scheduling for arbitrary,
unordered task sets is NP-hard, we develop a technique to pre-process
the timing constraints so as to prune infeasible parts of the search
space. We also develop complexity results and algorithms for the
special case of merging two sequential task sets, which is useful for
incremental scheduling. Our results show that it is feasible to
construct efficient parametric schedules, and that parametric
scheduling offers an attractive alternative for designing real-time
systems.
R. Gerber, S. Hong and M. Saksena.
In 1994 IEEE Real-Time Systems Symposium, December 1994,
and in UMD CS-TR-3274, UMIACS-TR-94-58, May 1994.
The extended version of this paper appeared
in IEEE Transactions on Software Engineering.
Abstract: Identical to the extended IEEE Transactions on Software Engineering version listed above.
R. Gerber and S. Hong.
Principles of Real-Time Systems,
edited by Sang H. Son, Prentice-Hall, 1994.
Also available as UMD CS-TR-3217, UMIACS-TR-94-15, January 1994.
Abstract: We present a compiler-based approach to automatically assist in constructing real-time systems. In this approach, source programs are written in TCEL (Time-Constrained Event Language), which possesses high-level timing constructs, and whose semantics characterizes time-constrained relationships between observable events. A TCEL program infers only those timing constraints necessary to achieve real-time correctness, without over-constraining the system. We exploit this looser semantics to help transform programs to automatically achieve schedulability. In this article we present two such transformations. The first is trace-scheduling, which we use to achieve consistency between a program's worst-case execution time and its real-time requirements. The second is program-slicing, which we use to automatically tune application programs driven by rate-monotonic scheduling.
R. Gerber and S. Hong.
In 1993 IEEE Real-Time Systems Symposium, December 1993.
Also available as UMD CS-TR-3071, UMIACS-TR-93-37, May 1993.
Abstract: We present TCEL (Time-Constrained Event Language), whose timing semantics is based solely on the constrained relationships between observable events. Using this semantics, the unobservable code can be automatically moved to convert an unschedulable task set into a schedulable one. We illustrate this by an application of program-slicing, which we use to automatically tune control-domain systems driven by rate-monotonic scheduling.
S. Hong and R. Gerber.
In the ACM SIGPLAN Conference on Programming Language Design and
Implementation, June 1993.
Also available as UMD CS-TR-3001, UMIACS-TR-92-127.
Abstract: We present a programming language with first-class timing constructs, whose semantics is based on time-constrained relationships between observable events. Since a system specification postulates timing relationships between events, realizing the specification in a program becomes a more straightforward process. Using these constraints, as well as those imposed by data and control flow properties, our objective is to transform the code so that its worst-case execution time is consistent with its real-time requirements. To accomplish this goal we first translate an event-based source program into intermediate code, in which the timing constraints are imposed on the code itself, and then use a compilation technique which synthesizes feasible code from the original source program.
M. Saksena, R. Gerber and A. Agrawala.
In IEEE Workshop on Real-Time Operating Systems and
Software, May 1993.
S. Hong and R. Gerber.
In IEEE Workshop on Real-Time Operating Systems and
Software, May 1993.
Abstract: We present TCEL (Time-Constrained Event Language), which possesses first-class timing constructs, and whose semantics is based on time-constrained relationships between observable events. Since the only timing constraints are imposed by observable events, the unobservable code can be automatically moved to achieve consistency between the program's worst-case execution time and its real-time requirements. Based on our initial success in applying compiler transformations to sequential programs, we present a task transformation technique for control domain applications driven by rate monotonic scheduling.
R. Gerber, W. Pugh and M. Saksena.
IEEE Transactions on Computers, 44(3), March 1995.
Also available as UMD CS-TR-2985, UMIACS TR-92-118.
Abstract:
In many real-time systems, relative timing constraints are imposed on a set of
tasks. Generating a correct ordering for the tasks and deriving their proper
start-time assignments is an NP-hard problem; it subsumes the Non-preemptive
Scheduling Problem.
If a total order is imposed on the tasks and the exact execution times of tasks
are known a priori, we can statically determine feasible start times using
linear-programming techniques. However, if a total order is imposed but we can
only obtain upper and lower bounds on execution time, verifying that the
constraints can be satisfied requires proving a formula with alternating
existential and universal quantifiers.
We present the technique of parametric dispatching to enforce such timing
constraints. In an offline component, we check whether the constraints can be
guaranteed. If so, a calendar is produced that allows our online algorithm to
generate upper and lower bounds on the start time of each task, based on the
start times and execution times of previous tasks. A suitable start time for
the task may then be selected taking into account the presence of other
non-critical tasks in the system.
For arbitrary constraints, our algorithm has worst-case exponential time
complexity. But we have also identified a subclass of constraints for which the
offline component requires $O(n^3)$ time and the online component requires
worst-case linear time to generate the bounds on the start times for each task.
The subclass is general enough to encompass many timing constraints envisioned
for real-time systems.
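The quantifier alternation can be illustrated with a toy case of our own (not the paper's algorithm): a constraint that must hold for every execution time e in [lo, hi] can be checked by substituting the worst-case endpoint, which is how a universal quantifier over a bounded variable is eliminated when the constraint is linear in e.

    # Constraint: s1 + e1 <= s2 must hold for all e1 in [lo, hi].
    # Since e1 appears with a positive coefficient, e1 = hi is the worst case.

    def eliminate_universal(coeff, lo, hi):
        return hi if coeff > 0 else lo

    lo, hi = 2.0, 5.0
    s1 = 0.0
    print(s1 + eliminate_universal(+1, lo, hi))   # any s2 >= 5.0 is safe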
Tevfik Bultan and Richard Gerber.
Technical Report CS-TR-3871, UMIACS-TR-98-08.
Department of Computer Science,
University of Maryland, College Park, February 1998.
Abstract: We present a new symbolic model checking
technique, which analyzes temporal properties in multi-typed transition
systems. Specifically, the method uses multiple
type-specific data encodings to represent system states, and
it carries out fixpoint computations via the corresponding
type-specific symbolic operations.
In essence, different symbolic encodings are unified
into one composite model checker.
Any type-specific language
can be included in this framework -- provided that
the language is closed under Boolean connectives,
propositions can be checked for satisfiability,
and relational images can be computed.
Our technique relies on conjunctive partitioning of
the transition relations of atomic events,
based on the variable types involved, which allows
independent computation of one-step
pre- and post-conditions for each variable type.
In this paper we demonstrate the effectiveness of our method
on a nontrivial data-transfer protocol, which contains a
mixture of integer and Boolean-valued variables.
The protocol operates over an unreliable channel that can
lose, duplicate or reorder messages. Moreover,
the protocol's send and receive window sizes are not specified in advance;
rather, they are represented as symbolic constants.
The resulting system was automatically
verified using our composite model checking approach,
in concert with a conservative approximation technique.
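The per-type fixpoint computation can be pictured with a toy sketch, in which explicit Python sets stand in for the type-specific symbolic encodings (BDDs, arithmetic constraints); the model below is invented.

    ALL_STATES = {(b, n) for b in (False, True) for n in range(4)}

    transitions = [
        lambda s: (not s[0], s[1]),           # toggle the Boolean flag
        lambda s: (s[0], min(s[1] + 1, 3)),   # bump the bounded counter
    ]

    def pre_image(states):
        # One-step predecessors under any atomic transition.
        return {s for s in ALL_STATES for t in transitions if t(s) in states}

    # Backward fixpoint for EF(target): states that can reach the target.
    reach = {(True, 3)}
    while True:
        new = reach | pre_image(reach)
        if new == reach:
            break
        reach = new
    print(len(reach))   # 8: every state can reach (True, 3)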
Tevfik Bultan, Richard Gerber and William Pugh.
Technical Report CS-TR-3870, UMIACS-TR-98-07.
Department of Computer Science.
University of Maryland, College Park, February 1998.
Abstract: Model checking is a powerful technique for analyzing large,
finite-state systems. In an
infinite-state system, however, many basic properties are
undecidable. In this paper, we present a new symbolic model checker
which conservatively evaluates safety and liveness properties on
infinite-state programs. We use Presburger formulas to
symbolically encode a program's transition system, as well as its
model-checking computations. All fixpoint calculations are executed
symbolically, and their convergence is guaranteed by using
approximation techniques.
We demonstrate the promise of this technology on some well-known
infinite-state concurrency problems.
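The convergence trick can be shown in miniature (our sketch; a single integer upper bound stands in for the Presburger representation). Exact iteration on a loop like x := x + 1 never stabilizes, so widening jumps the unstable bound to infinity, yielding a safe over-approximation.

    INF = float("inf")

    def post(hi):
        return hi + 1                # successor bound for 'x := x + 1'

    def widen(old, new):
        # Keep stable bounds; push unstable ones to infinity.
        return old if new <= old else INF

    hi = 0                           # initially x = 0
    while True:
        nxt = widen(hi, max(hi, post(hi)))
        if nxt == hi:
            break
        hi = nxt
    print(hi)                        # inf: the invariant 0 <= x still holds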
Tevfik Bultan, Richard Gerber and Christopher League.
ACM SIGSOFT International Symposium on Software Testing
and Analysis, March 1998. Also in
Technical Report CS-TR-3822, UMIACS-TR-97-62.
Department of Computer Science,
University of Maryland, College Park, August 1997.
Abstract:
Symbolic model-checking has proved highly successful for large
finite-state systems, in which states can be compactly encoded using
binary decision diagrams (BDDs) or their variants. The inherent
limitation of this approach is that it cannot be applied to systems
with an infinite number of states -- even those with a single unbounded
integer.
Alternatively, we recently proposed a model-checker for integer-based
systems that uses Presburger constraints as the underlying
state representation. While this approach easily verified some
subtle, infinite-state concurrency problems, it proved inefficient in
its treatment of Boolean and (unordered) enumerated types -- which
possess no natural mapping to the Euclidean coordinate space.
In this paper we describe a model-checker which combines the strengths
of both approaches. We use a composite model, in which a formula's
valuations are encoded in a mixed BDD-Presburger form, depending on
the variables used. We demonstrate our technique's effectiveness on a
nontrivial requirements specification, which includes a mixture of
Booleans, integers and enumerated types.
Tevfik Bultan, Richard Gerber, and William Pugh.
Conference on Computer-Aided Verification (CAV 97), Haifa, Israel,
June 1997.
Also in Technical
Report CS-TR-3687, UMIACS-TR-96-66, Department of Computer Science,
University of Maryland, College Park, September 1996.
Abstract:
Model checking is a powerful technique for analyzing large, finite-state
systems. In an infinite transition system, however, many basic properties
are undecidable. In this paper we present a new symbolic model checker
which conservatively evaluates safety and liveness properties on
infinite-state programs. We use Presburger formulas to symbolically
encode a program's transition system, as well as its model-checking
computations. All fixpoint calculations are executed symbolically, and
their convergence is guaranteed by using approximation techniques. We
demonstrate the promise of this technology on some well-known infinite-state
concurrency problems.
Tevfik Bultan, Jeffrey Fischer and Richard Gerber.
ACM SIGSOFT International Symposium on Software Testing
and Analysis, January 1996. Also in
UMD Technical Report CS-TR-3541, UMIACS-TR-95-98, October 1995.
Abstract:
Many concurrent systems are required to maintain certain
safety and liveness properties.
One emerging method of achieving confidence in such systems is
to statically verify them using model checking.
In this approach an abstract, finite-state model of the system is
constructed; then an automatic
check is made to ensure that the requirements
are satisfied by the model.
In practice, however, this method is limited by
the state space explosion problem.
We have developed a compositional method that directly addresses this
problem in the context of multi-tasking programs. Our
solution depends on three key space-saving ingredients:
(1) checking for counter-examples, which leads to simpler search
algorithms; (2) automatic extraction of interfaces, which
allows a refinement of the finite model -- even
before its communicating partners have been compiled; and
(3) using propositional ``strengthening assertions'' for
the sole purpose of reducing state space.
In this paper we present our compositional approach,
and describe the software tools that support it.
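Ingredient (1) in miniature (a generic reachability search of our own, not the paper's tool): searching directly for a counter-example lets the checker stop at the first violating state instead of building the full composed model.

    def find_counter_example(initial, successors, violates):
        # Depth-first search; returns a violating path, or None if safe.
        stack, seen = [(initial, [initial])], {initial}
        while stack:
            state, path = stack.pop()
            if violates(state):
                return path                      # counter-example trace
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, path + [nxt]))
        return None

    # Toy two-task model: each step flips one task in or out of its
    # critical section; the "bad" state has both inside at once.
    succ = lambda s: [(1 - s[0], s[1]), (s[0], 1 - s[1])]
    print(find_counter_example((0, 0), succ, lambda s: s == (1, 1)))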
Jeffrey Fischer.
Ph.D. Dissertation, Department of Computer Science,
University of Maryland, June 1996.
Abstract:
Many concurrent systems are required to maintain certain safety and liveness
properties. One emerging method of achieving confidence in such systems is
to statically verify them using model checking. In this approach an abstract,
finite-state model of the system is constructed; then an automatic check is
made to ensure that the requirements are satisfied by the model. In practice,
however, this method is limited by the state space explosion problem.
We have developed a compositional method that directly addresses this problem
in the context of multi-tasking programs. Our solution depends on three key
space-saving ingredients: (1) checking for counter-examples, which leads to
simpler search algorithms; (2) automatic extraction of interfaces, which allows
a refinement of the finite model -- even before its communicating partners have
been compiled; and (3) using propositional ``strengthening assertions'' for the
sole purpose of reducing state space.
J. Fischer and R. Gerber.
In the 9th Annual IEEE Conference on Computer Assurance.
Also available as UMD CS-TR-3225, UMIACS-TR-94-19, January 1994.
Abstract: Model checking [CES, 86] has proven to be an effective analysis tool for domains such as hardware circuits and communication protocols. However, it has not yet been widely applied to more general concurrent systems, such as those realized by Ada multi-tasking programs. A major impediment to the use of model checking in such systems is the exponential growth of the state space, which results from the parallel composition of component tasks. Various compositional approaches have been proposed to address this problem, in which the parts of a system are analyzed separately, and then the results are combined into inferences about the whole. One of the more promising of these techniques is called compositional minimization [Chiodo et al, 92], which eliminates each component's ``uninteresting'' states as the model checking proceeds; this in turn can lead to a significant reduction in the composite state-space. In this paper we evaluate the application of this approach to Ada multi-tasking programs, particularly highlighting the design choices made to accommodate Ada's semantics. We also discuss the types of systems (and properties) for which this method produces significant time/space savings, as well as those for which the savings are less pronounced.
I. Lee, P. Bremond-Gregoire and R. Gerber.
In Proceedings of the IEEE, January 1994.
Abstract: Recently, significant progress has been made in the development of timed process algebras for the specification and analysis of real-time systems. This paper describes a timed process algebra called ACSR, which supports synchronous timed actions and asynchronous instantaneous events. Timed actions are used to represent the usage of resources and to model the passage of time. Events are used to capture synchronization between processes. To be able to specify real systems accurately, ACSR supports a notion of priority that can be used to arbitrate among timed actions competing for the use of resources and among events that are ready for synchronization. The paper also includes a brief overview of other timed process algebras and discusses similarities and differences between them and ACSR.
R. Gerber and I. Lee.
Information and Computation, 113(1):102-142, August 1994.
Also available as UMD CS-TR-2984, UMIACS-TR-92-117.
Abstract: The behavior of concurrent, real-time systems can be specified using a process algebra called CCSR. The underlying computation model of CCSR is resource-based, in which multiple resources execute synchronously, while processes assigned to the same resource are interleaved according to their priorities. CCSR allows the algebraic specification of timeouts, interrupts, periodic behaviors and exceptions. This paper develops a natural treatment of preemption, which is based not only on priority, but also on resource utilization and inter-resource synchronization. The preemption ordering leads to a term equivalence based on strong bisimulation, which is also a congruence with respect to the operators. Consequently the equivalence yields a compositional proof system, which is illustrated in the verification of a resource-sharing, producer-consumer problem.
R. Gerber and I. Lee.
IEEE Transactions on Software Engineering 18(9),
September 1992.
Abstract: We describe a layered approach to the specification and verification of real-time systems. Application processes are specified in the CSR Application Language, which includes high-level language constructs such as timeouts, deadlines, periodic processes, interrupts and exception-handling. Then, a configuration schema is used to map the processes to system resources, and to specify the communication links between them. We automatically translate the result of the mapping into the CCSR process algebra, which characterizes CSR's resource-based computation model by a prioritized transition system. For the purposes of verification, we have implemented a reachability analyzer based on the CCSR semantics. This tool mechanically evaluates the correctness of the CSR specification, by checking whether an exception state can be reached in its corresponding CCSR term. We illustrate the effectiveness of this technique on a multi-sensor robot example.
Ladan Gharai and Richard Gerber.
Multimedia Computing and Networking 1998 (MMCN98).
Also in Technical Report CS-TR-3749, Department of Computer Science,
University of Maryland, College Park, January 1997.
Abstract: We describe a compositional simulation system, which predicts streamed video performance on multiple platform configurations. System behavior is modeled with a set of key deterministic and stochastic variables, each of which characterizes part of a ``virtual system component,'' e.g., an IO device, a particular CPU, a codec, etc. These variables are profiled in isolation, by inserting lightweight instrumentation code into the main threads and upcalls involved in the playout datapath. Then, a post-processor converts the derived samples into synthesized probability distribution functions, after which the results are stored in the simulator's library. At simulation time, the user selects a set of ``virtual components,'' which are then composed into an ``end-user system.'' The resulting model is used to predict the performance of any video, which usually requires no more than a few seconds. The output is a list of frame-display times, accompanied by statistics on the mean-playout rate, variance, jitter, etc. This scheme lets developers extend the range of their target platforms, by automatically benchmarking new components, storing the results, and then simulating an entirely new set of systems. So, if a developer possesses a large set of pre-profiled components, he or she can quickly estimate a video's performance on a huge spectrum of target platforms -- without ever having to actually assemble them. In this paper we present evidence that our method achieves a reasonable degree of accuracy, when compared to actual on-line playout. We present results for a generic, streamed QuickTime video system -- subjected to multiple configurations. These were assembled (combinatorially) using four different CPUs, three types of SCSI devices, two common codecs (Radius Cinepak and Intel Indeo), and two full-frame video masters. On most configurations tested, the simulator's on-line predictions were accurate to within a 15% margin of error.
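The simulator's inner loop can be approximated by a short sketch of ours; the component names, distributions and rates below are invented, and the real system composes many more profiled variables.

    import random

    def sample(dist):
        # Draw from a discrete distribution [(delay_ms, probability), ...].
        r, acc = random.random(), 0.0
        for delay, p in dist:
            acc += p
            if r <= acc:
                return delay
        return dist[-1][0]

    components = {                        # a hypothetical "end-user system"
        "scsi_read":      [(4.0, 0.8), (12.0, 0.2)],
        "cinepak_decode": [(18.0, 0.9), (30.0, 0.1)],
        "blit":           [(3.0, 1.0)],
    }

    clock, times = 0.0, []
    for _ in range(300):                  # simulate 300 frames
        clock += sum(sample(d) for d in components.values())
        times.append(clock)
    print(1000.0 * len(times) / times[-1], "frames/sec mean playout rate")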
Richard Gerber and Ladan Gharai.
In ACM Sigmetrics '96. Also in
UMD Technical Report CS-TR-3551, UMIACS TR 95-103, October 1995.
Abstract:
In this paper we describe our experiments on digital video applications,
concentrating on the static and dynamic tradeoffs involved in
video playback. Our results were extracted from a controlled
series of 272 tests, which we ran in three stages.
In the first stage of 120 tests, we used a simple player-monitor tool
to evaluate the effects of various static parameters:
compression type, frame size,
digitized rate, spatial quality and keyframe distribution.
The tests were carried out on two Apple Macintosh
platforms: at the lower end a
Quadra 950, and at the higher end, a Power PC 7100/80. Our quantitative
metrics included average playback rate, as well as the rate's variance
over one-second intervals.
The first set of experiments unveiled several anomalous latencies. To
track them down we ran an additional 120 tests, from which we
concluded that the video and IO operations were insufficiently tuned
to each other.
In the next step we attempted to correct this problem by implementing
our own video playback software and accompanying device-level
handlers. Our emphasis was on achieving a controlled, deterministic
coordination between the various system components. An additional set
of 32 experiments was carried out on our platforms, showing
frame-rate increases of up to 325%, with associated reductions in rate
variance.