Regression testing means
rerunning test cases from existing test suites to build confidence that
software changes have no unintended side-effects. The
“ideal” process would be to create an extensive test suite
and run it after each and every change. Unfortunately, for many
projects this is just impossible because test suites are too large,
because changes come in too fast, because humans are in the testing
loop, because scarce, highly in-demand simulation laboratories are
needed, or because testing must be done on many different hardware and
OS platforms.
Researchers have tried to make regression testing more effective and
efficient by developing regression test selection (RTS) techniques, but
many problems remain, such as:
- Unpredictable performance.
RTS techniques sometimes save time and money, but they sometimes select
most or all of the original test cases. In such cases, developers pay the
cost of the selection analysis without reducing the testing effort, and
can find themselves worse off for having used RTS at all.
- Incompatible process assumptions.
Testing time is often limited (e.g., must be done overnight). RTS
techniques do not consider such constraints and, therefore, can and do
select more test cases than can be run.
- Inappropriate evaluation models.
RTS techniques try to maximize average regression testing performance
rather than optimize aggregate performance over many testing sessions.
However, companies that test frequently might accept less effective but
cheaper individual testing sessions, provided that the system is
nonetheless well tested over some short period of time.
These and other issues have not been adequately considered in current
research, yet they strongly affect the applicability of proposed
regression testing processes. Moreover, we believe that solutions to
these problems can be exploited, singly and in combination, to
dramatically improve the cost-effectiveness of the
regression testing process.
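To make the time-budget issue concrete, here is a minimal sketch, in Python, of history-based test prioritization under a fixed testing window. It is illustrative only, not the algorithm from the papers below: the decay weighting, the greedy selection, and all names and numbers (TestCase, priority, select_within_budget, the example suite) are hypothetical. Tests whose recent runs failed are ranked first, and tests are then selected greedily until the time budget is exhausted.

    # Illustrative sketch only; not the authors' published technique.
    from dataclasses import dataclass

    @dataclass
    class TestCase:
        name: str
        runtime_minutes: float
        failure_history: list[bool]  # True = failed; most recent run last

    def priority(test: TestCase, decay: float = 0.5) -> float:
        """Exponentially weight past failures, favoring recent ones."""
        score = 0.0
        for failed in test.failure_history:
            score = decay * score + (1.0 if failed else 0.0)
        return score

    def select_within_budget(tests: list[TestCase],
                             budget_minutes: float) -> list[TestCase]:
        """Greedily pick the highest-priority tests that fit in the budget."""
        chosen, remaining = [], budget_minutes
        for t in sorted(tests, key=priority, reverse=True):
            if t.runtime_minutes <= remaining:
                chosen.append(t)
                remaining -= t.runtime_minutes
        return chosen

    # Example: a four-hour nightly testing window (hypothetical data).
    suite = [
        TestCase("login", 30, [False, True, True]),
        TestCase("checkout", 120, [True, False, False]),
        TestCase("search", 200, [False, False, False]),
    ]
    for t in select_within_budget(suite, budget_minutes=240):
        print(t.name)  # selects "login" and "checkout"; "search" no longer fits

Under this kind of scheme an individual session may skip tests, but tests that keep failing (or that have not run recently, under a richer priority function) rise to the top of later sessions, which is the aggregate-performance view discussed above.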
Some of our recent work is described in:
- Jung-Min Kim and Adam Porter. A History-Based Test Prioritization Technique for Regression Testing in Resource Constrained Environments. In Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), Orlando, FL, May 2002.
- Todd Graves, Mary Jean Harrold, Jung-Min Kim, Adam Porter, and Gregg Rothermel. An Empirical Study of Regression Test Selection Techniques. ACM Transactions on Software Engineering and Methodology, 10(2):184-208, 2001.