Course Number: CMSC737.
Meeting Times: Tue. Thu. 9:30AM - 10:45AM (CSIC 2120)
Office Hours: Tue. Thu. 10:45AM - 12:00PM (4115 A. V. Williams Building)
Catalog Course Description: This course will examine fundamental software testing and related program analysis techniques. In particular, the important phases of testing will be reviewed, emphasizing the significance of each phase when testing different types of software. The course will also include concepts such as test generation, test oracles, test coverage, regression testing, mutation testing, program analysis (e.g., program-flow and data-flow analysis), and test prioritization.
Course Summary: This course will examine fundamental software testing and program analysis techniques. In particular, the important phases of testing will be reviewed, emphasizing the significance of each phase when testing different types of software. Students will learn the state of the art in testing technology for object-oriented, component-based, concurrent, distributed, graphical-user interface, and web software. In addition, closely related concepts such as mutation testing and program analysis (e.g., program-flow and data-flow analysis) will also be studied. Emerging concepts such as test-case prioritization and their impact on testing will be examined. Students will gain hands-on testing/analysis experience via a multi-phase course project. By the end of this course, students should be familiar with the state-of-the-art in software testing. Students should also be aware of the major open research problems in testing.
Student Presentations: All students are strongly encouraged to present a topic related to software testing. You must prepare slides and select a date for your presentation. Group presentations are encouraged if the selected topic is broad enough. Topics include, but are not limited to:
- Combining and Comparing Testing Techniques
- Defect and Failure Estimation and Analysis
- Testing Embedded Software
- Fault Injection
- Load Testing
- Testing for Security
- Software Architectures and Testing
- Test Case Prioritization
- Testing Concurrent Programs
- Testing Database Applications
- Testing Distributed Systems
- Testing Evolving Software
- Testing Interactive Systems
- Testing Object-Oriented Software
- Testing Spreadsheets
- Testing vs. Other Quality Assurance Techniques
- Usability Testing
- Web Testing
The course grade will be determined as follows: 25% midterm, 25% final exam, 50% project.
Credits: 3
Prerequisites: Software Engineering (CMSC435) or equivalent.
Status with respect to graduate program: MS qualifying course (Midterm+Final exam), PhD core (Software Engineering).
Syllabus: The following topics will be discussed.
[Mar. 25] Generating test cases for GUI responsibilities using complete interaction sequences. White, L.; Almezen, H. Proceedings of the 11th International Symposium on Software Reliability Engineering (ISSRE 2000), 8-11 Oct. 2000, pages 110-121.
Date | Topic | Presented by
Apr. 8 | Test Oracle Generation. Read [1] http://citeseer.ist.psu.edu/peters95generating.html [2] http://citeseer.ist.psu.edu/181183.html [3] http://rupak.info/article/33/test-oracles [4] http://www.cse.ogi.edu/PacSoft/conf/jvw02/slides/jmlunit.pdf | Alford, Ronald Wayne
Apr. 8 | Object Oriented Software Testing. | Bahety, Anand Baldeodas & Bucatanschi, Dan George
Apr. 10 | Protocol Testing. Paper: "ASPIRE: Automated Systematic Protocol Implementation Robustness Evaluation" by Arunchandar Vasan and Atif M. Memon. (Questions; Slides; Slides in pdf format) | Chandra, Deepti Jagdish
Apr. 10 | Testing Concurrent Software. Paper: Reachability testing of concurrent programs, Lei, Y.; Carver, R.H., IEEE Transactions on Software Engineering, Volume 32, Issue 6, June 2006, pages 382-403. (Questions; Slides; Slides in pdf format) | Huynh, Thuan Quang
Apr. 15 | Using Execution Feedback in Test Case Generation. Paper: Using GUI Run-Time State as Feedback to Generate Test Cases. | Nguyen, Bao Ngoc
Apr. 15 | Testing Embedded Software. Paper: Pao-Ann Hsiung, Shang-Wei Lin, Chih-Hao Tseng, Trong-Yen Lee, Jih-Ming Fu, Win-Bin See, "VERTAF: An Application Framework for the Design and Verification of Embedded Real-Time Software," IEEE Transactions on Software Engineering, vol. 30, no. 10, pp. 656-674, Oct. 2004. (Questions; Slides; Slides in pdf format) | Konda, Shravya Reddy
Apr. 17 | | Reisner, Elnatan Benjamin
Apr. 17 | Database Testing. Paper: A framework for testing database applications (http://portal.acm.org/citation.cfm?id=348954&dl=GUIDE&dl=ACM) | Liu, Liping
Apr. 22 | Web Testing. Paper: VeriWeb: Automatically testing dynamic web sites, by Michael Benedikt, Juliana Freire, Patrice Godefroid. In: Proceedings of the 11th International World Wide Web Conference (WWW2002). (Questions; Slides; Slides in pdf format) | Wongsuphasawat, Krist
Apr. 22 | A Theory of Fault-Based Testing. (Questions; Slides; Slides in pdf format) | Lee, Joonghoon
Apr. 24 | Pip: Detecting the Unexpected in Distributed Systems. (Questions; Slides; Slides in pdf format) | Schulman, Aaron David
Apr. 24 | Machine Learning Approaches for Statistical Software Testing. Paper: Saraph, P.; Last, M.; Kandel, A., "Test case generation and reduction by automated input-output analysis," Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 2003. (Questions; Slides; Slides in pdf format) | Sharara, Hossam Samy Elsai
Apr. 29 | PhD Proposal: https://forum.cs.umd.edu/calendar.php?do=getinfo&day=2008-4-29&c=5 | Jaymie Strecker
May 1 | Using Formal Software Specifications for Testing. Paper: Boyapati, C.; Khurshid, S.; Marinov, D. 2002. Korat: automated testing based on Java predicates. In Proceedings of the 2002 ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2002). | Stuckman, Jeff
May 1 | Fault Injection. Paper: "Software Fault Injection for Survivability" by Jeffrey M. Voas & Anup K. Ghosh. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=821531 (Questions; Slides; Slides in pdf format) | Teoh, Alison Lui Koon
May 6 | Web Testing. Paper: Alessandro Marchetto, Paolo Tonella and Filippo Ricca. State-Based Testing of Ajax Web Applications. | Thakor, Shashvat Advait
May 6 | Network Security Testing Techniques; source is this report. (Questions; Slides; Slides in pdf format) | Vador, Sachin Shashikant
May 8 | Security Testing. Paper: Bypass testing of web applications. J. Offutt, Y. Wu, X. Du, H. Huang. Proc. of the IEEE International Symposium on Software Reliability Engineering, 2004. http://cs.gmu.edu/~offutt/rsrch/papers/bypass-issre.pdf (Questions; Slides; Slides in pdf format) | Donlon, Eileen Merle
May 8 | Usability Testing. Reading list: (1) Nielsen, Molich: Heuristic evaluation of user interfaces (8 pages): http://portal.acm.org/citation.cfm?id=97243.97281 (2) Spool, Schroeder: Testing web sites: five users is nowhere near enough (2 pages): http://portal.acm.org/citation.cfm?id=634067.634236 (Questions; Slides; Slides in pdf format) | Zazworka, Nico
Phase 1
Goal: Black-box test-case generation.
Procedure: Take at least three subject applications from the TerpOffice web-site; select five methods, each with at least 5 parameters (you may select methods with 3 or 4 parameters only if the methods are reasonably complex with at least two branches); create JUnit test cases for these methods using the category-partition method. Reduce the number of test cases using pair-wise testing. Compare the statement coverage of the original and reduced suites.
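For illustration, here is a minimal sketch of what one category-partition-derived JUnit test class might look like. The class under test (TextUtils.wrapText with five parameters) is invented for this example; in the project, the methods come from your chosen TerpOffice subject applications, and each test method should exercise one combination of choices from your category/choice document.

    import static org.junit.Assert.*;
    import org.junit.Test;

    // Hypothetical example. Categories for wrapText(text, width, preserveWhitespace, padChar, tabSize):
    //   text   : {empty, shorter than width, longer than width}
    //   width  : {1, typical, very large}
    //   preserveWhitespace : {true, false}   padChar : {' ', non-space}   tabSize : {0, 4}
    // Each test realizes one combination of choices; a pair-wise (covering-array)
    // tool then picks the reduced subset of combinations to keep.
    public class WrapTextCategoryPartitionTest {

        @Test
        public void emptyText_typicalWidth_noPreserve() {
            // Choices: text=empty, width=typical, preserveWhitespace=false, padChar=' ', tabSize=4
            assertEquals("", TextUtils.wrapText("", 40, false, ' ', 4));
        }

        @Test
        public void longText_minimumWidth_preserve() {
            // Choices: text longer than width, width=1 (boundary), preserveWhitespace=true
            String wrapped = TextUtils.wrapText("a b c", 1, true, ' ', 0);
            assertTrue(wrapped.contains("\n"));
        }
    }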
Deliverables: Source code of the five methods and all the JUnit test cases, plus a document describing the categories/choices and constraints. At least two sets of test cases, each sufficient to satisfy the pair-wise testing criterion. Coverage reports. Also, a one-page document describing the difficulties you faced with the subject applications (downloading, installing, etc.) and how you handled them.
Due on Feb. 26 in class. [Late submission policy: you lose 20% of the maximum points per day]
Phase 2
Goal: Make each JUnit test fail by seeding artificial faults in the method source code.
Procedure: Examine each JUnit test case and the method source code. Obtain a set of source-code changes (from this paper) that will cause each test case to fail. In the method, insert a comment of the form /*FAULT## FAILURE INDUCING CODE */ at line N, such that a simple string replacement of line N with the FAILURE INDUCING CODE causes the JUnit test case to fail. If a fault requires changes to multiple lines, replace ## with an integer and use the same integer on every line belonging to that fault. Write a script to perform the string replacement automatically, activating one fault at a time to avoid fault interaction. Classify the faults using the scheme from this paper.
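As a sketch of the seeding convention under one reading of the description above (the clamp method, the fault, and the file names are invented for illustration):

    // Hypothetical fault-seeded method: the correct statement stays in place and the
    // comment carries the failure-inducing replacement for fault 1.
    public static int clamp(int value, int min, int max) {
        if (value < min) {
            return min; /*FAULT1 return max; */
        }
        if (value > max) {
            return max;
        }
        return value;
    }

A minimal activation script in Java (one of many ways to do the string replacement); it rewrites every line tagged with the requested fault id and leaves all other faults dormant:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;
    import java.util.List;
    import java.util.regex.*;
    import java.util.stream.Collectors;

    // Usage: java ActivateFault Clamp.java 1
    // Writes Clamp.java.fault1 with the failure-inducing code for fault 1 swapped in.
    public class ActivateFault {
        public static void main(String[] args) throws IOException {
            Path src = Paths.get(args[0]);
            String id = args[1];
            Pattern tag = Pattern.compile("/\\*FAULT" + id + "\\s+(.*?)\\s*\\*/");
            List<String> rewritten = Files.readAllLines(src, StandardCharsets.UTF_8).stream()
                .map(line -> {
                    Matcher m = tag.matcher(line);
                    // Replace the whole line with the embedded failure-inducing code, if tagged.
                    return m.find() ? m.group(1) : line;
                })
                .collect(Collectors.toList());
            Files.write(Paths.get(args[0] + ".fault" + id), rewritten, StandardCharsets.UTF_8);
        }
    }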
Deliverables: The modified method source, the script, and the JUnit test cases. A standalone executable that demonstrates the entire process automatically.
Due on Mar. 28. [Late submission policy: you lose 20% of the maximum points per day]
Phase 3
Goal: Run the pair-wise test cases on the fault-seeded code, and run the application (via the GUI) to detect the seeded faults.
Procedure: Execute all your pair-wise test cases on the methods seeded with faults. Report the number of faults that you detected and compare with the full suite. For the second part of this phase, compile and run the application (with one fault turned ON at a time) and devise an interaction (sequence of GUI events) that reveals the fault, i.e., produces an outcome different from that of the original code.
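A minimal sketch of one way to automate the first part, assuming JUnit 4's JUnitCore runner and the hypothetical test class from the Phase 1 sketch; it is invoked once per seeded fault, after the Phase 2 script has activated that fault and the sources have been recompiled:

    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;
    import org.junit.runner.notification.Failure;

    // Runs the reduced (pair-wise) suite against a build with exactly one fault active
    // and reports whether the suite detected it. args[0] is the id of the active fault.
    public class RunSuiteAgainstFault {
        public static void main(String[] args) {
            String faultId = args.length > 0 ? args[0] : "?";
            Result result = JUnitCore.runClasses(WrapTextCategoryPartitionTest.class);
            for (Failure f : result.getFailures()) {
                System.out.println("Fault " + faultId + " exposed by: " + f.getTestHeader());
            }
            System.out.println("Fault " + faultId + ": "
                + (result.getFailureCount() > 0 ? "detected" : "missed")
                + " by the pair-wise suite");
        }
    }

Repeating the runs with the full (unreduced) suite gives the comparison point asked for above.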
Deliverables: The failure report and the event-sequences.
Due on Apr. 29 in class. [Late submission policy: you lose 20% of the maximum points per day]