Last Updated: 2024-11-20 Wed 12:07

CMSC216 Project 4: Chester, a C High-level Tester

CODE/TEST DISTRIBUTION: p4-code.zip

VIDEO OVERVIEW: https://youtu.be/ZD43Hy12Uc4

CHANGELOG:

Wed Nov 20 12:06:40 PM EST 2024
As per Post 1917 the deadline for P4 has been extended by 2 days to Sun 24-Nov-2024 at 11:59pm.
Mon Nov 18 04:56:50 PM EST 2024
An optional MAKEUP credit problem has been added to the project. The problem is described in the updated Makeup Credit section of the project specification. To get the tests and data files necessary to run and evaluate the Makeup credit problem, run make update.
Thu Nov 14 01:36:43 PM EST 2024

A minor update to the project testing files is now available via the command:

  >> make update

The update corrects the following two issues.

Post 1677 reported that unreadable files that are intentionally created during testing raise errors with VS Code / SFTP syncing. These should not be harmful, just annoying, but the test update ensures that on finishing the tests, the unreadable files are removed so that syncing will not raise errors.

Post 1679 identified typos in test_prob5.org which have been corrected.

Wed Nov 13 02:29:19 PM EST 2024

Post 1662 pointed out a mismatch between the Documentation Comments and the test cases on the file name patterns Chester should create. The Doc Comments were incorrect: file names should follow the pattern

  TESTDIR/PREFIX-output-05.txt

and the Doc Comments are now updated to reflect that. The same applies to other files such as input files and results files.

Wed Nov 13 01:44:21 PM EST 2024
A video overview of P4 has been posted here: https://youtu.be/ZD43Hy12Uc4
Wed Nov 13 12:43:31 PM EST 2024
The Grading Criteria for P4 have been updated with Manual Inspection criteria for all problems. Staff have assembled a C Coding Style Guide which will be used to assign style points on the project. Students may wish to examine this style guide ahead of time and configure their editor of choice to make it easy to meet the style requirements. The Guide contains a section on configuring VS Code to automatically adjust style to meet expectations.

1 Overview

Software testing has gained a central role in the maintenance and development of reliable computing systems. By automating the task of checking at least some aspects of correctness as code evolves, testing frameworks enable new features to be added while verifying that old functionality is not compromised.

This project will build a simple testing framework in C. It is a "high-level" tester which allows the tests to be specified in a textual format and focuses on testing full programs. Being a C High-level Tester, it is dubbed Chester.

To complete Chester, a variety of systems programming techniques will be utilized, including the following.

  • Creating child processes to execute external commands
  • Coordination of child processes with the Chester parent process
  • Creation of files and directories
  • Redirection of input/output
  • Reading and writing data from files to evaluate test results

2 Download Code and Setup

Download the code pack linked at the top of the page. Unzip it to create a project folder. Create new files in this folder. Ultimately you will re-zip this folder to submit it.

File                 State     Notes
Source Files
chester.h            Provided  Header file for Chester
chester_util.c       Provided  Utility functions
chester_parse.c      Provided  Parsing functions to read input files
chester_parse.h      Provided  Header for parsing functions
chester_parse.peg    Provided  Parser generator used to create the parser

chester_funcs.c      CREATE    Functions that operate on data in the Chester system
chester_main.c       CREATE    main() function for the Chester program
Build/Testing Files
Makefile             Provided  Build file to compile all programs
testy                Testing   Test running script
test_chester.c       Testing   Unit tests for Chester
test_prob1.org       Testing   Tests for Problem 1
test_prob2.org       Testing   Tests for Problem 2
test_prob3.org       Testing   Tests for Problem 3
test_prob4.org       Testing   Tests for Problem 4
test_prob5.org       Testing   Tests for Problem 5
data/                Testing   Subdirectory with files / programs used during testing
data/four_tests.md   Testing   One of several sample input files

3 Overview of Chester

3.1 Chester Testing Files

Chester uses input files that contain the tests to run, with each test containing a description, a program to run, expected output for the program, optional input for the program, and options that adjust Chester's behavior. The file format is roughly Markdown with each heading titling a test and code blocks used for input/output of the test. An example comes from data/two_tests.md:

 1: !prefix=two-tests
 2: !testdir=chester-test-two-tests
 3: 
 4: # Basic bash Test
 5: Checks that bash properly produces output.
 6: 
 7: !program=bash -c 'echo Chester is; echo a Tester'
 8: 
 9: ```output
10: Chester is
11: a Tester
12: ```
13: 
14: # Count chars with wc
15: Checks that wc (word count) works with provided
16: input.
17: 
18: !program=wc
19: 
20: ```input
21: This is a test.
22: This is only a test.
23: Keep calm and carry on.
24: ```
25: 
26: ```output
27:  3 14 61
28: ```

The file specifies two tests:

  • Test 0: Basic bash Test runs the program

    bash -c 'echo Chester is; echo a Tester'
    

    and checks that the output matches the output indicated in the output block

  • Test 1: Count chars with wc runs the wc program to count the lines, words, and characters in the provided input block. The expected output is given.

Running the completed Chester program on this test file runs the tests and checks them against the expected output, producing the following screen output and results files.

>> chester data/two_tests.md               ## RUN CHESTER
data/two_tests.md : running 2 / 2 tests
Running with single process: .. Done
 0) Basic bash Test      : ok
 1) Count chars with wc  : ok
Overall: 2 / 2 tests passed

>> ls chester-test-two-tests               ## SHOW TESTING DIRECTORY CONTENTS 
two-tests-input-01.txt	 two-tests-output-01.txt  two-tests-result-01.md
two-tests-output-00.txt  two-tests-result-00.md

                                           ## SHOW ONE OF THE RESULTS FILE
>> cat chester-test-two-tests/two-tests-result-00.md
# TEST 0: Basic bash Test (ok)
## DESCRIPTION
Checks that bash properly produces output.

## PROGRAM: bash -c 'echo Chester is; echo a Tester'

## INPUT: None

## OUTPUT: ok

## EXIT CODE: ok

## RESULT: ok
>> 
  • All of the tests pass here but this will not always be so
  • Note that the top of two_tests.md file contains the directives

      !prefix=two-tests
      !testdir=chester-test-two-tests
    

    These cause files associated with testing to be created in the directory chester-test-two-tests, referred to as the Test Directory, with all files within it named with the prefix two-tests.

This setup should feel familiar: it is modeled after the behavior of the testing framework used throughout the course so far to perform automated tests on all submitted programs.

3.2 Data Structures

There are 2 central data structures used in Chester which are found in the header chester.h.

test_t
Contains all information about a specific test to run such as the program to run in the test, input for the program, expected and actual output, files associated with testing, and the results of running the test.
suite_t
A collection of tests along with which of them to run and where to store the testing results. The most important field in a suite is its tests[] array, which contains the test_t structs that are the possible tests to run.

It is worthwhile to spend some time acquainting yourself with the documentation on these structs provided in chester.h. Refer back to the header often as you need to recall parts of the data. Both structs have quite a few fields but almost all of them will be used at some point during the project.

3.3 Strings in Chester

The data structures contain many char * fields which are pointers to strings. All strings associated with structs are malloc()'d and need to be free()'d eventually. Expect to do this in the following way.

  • ALLOCATION: Most strings will initially be stored in non-heap locations and can be quickly copied using the strdup() function. An example for the suite_t infile_name field appears in chester_util.c in the following function:

      int suite_init_from_file_peg(suite_t *suite, char *fname){
        suite_init(suite);
        FILE *infile = fopen(fname,"r");
        if(infile == NULL){
          printf("Unable to open file '%s'\n",fname);
          return -1;
        }
        suite->infile_name = strdup(fname); // malloc() a copy of the string 
        ...;
    

    Here strdup() is used to create a heap-allocated copy of the string that "belongs" to the suite structure.

  • DEALLOCATION: Later, the provided suite_dealloc() function is used to deallocate memory associated with the suite. Any fields that are non-NULL will be free()'d:

      void suite_dealloc(suite_t *suite){
        if(suite->infile_name != NULL) free(suite->infile_name);
        if(suite->prefix != NULL)      free(suite->prefix);
        ...;
    

3.4 Outline of chester_funcs.c

The primary implementation files required are chester_funcs.c and chester_main.c. As the name suggests, chester_main.c will contain the main() function and is part of the final problem.

chester_funcs.c has a number of "service" functions which manipulate Suite and Test data. Each of these is required and will be tested. The outline and some brief documentation for them are below.

// chester_funcs.c: Service functions for chester primarily operating
// upon suite_t structs.

#include "chester.h"

////////////////////////////////////////////////////////////////////////////////
// PROBLEM 1 Functions
////////////////////////////////////////////////////////////////////////////////

int suite_create_testdir(suite_t *suite);
// PROBLEM 1: Creates the testing results directory according to the
// name in the suite `testdir` field. If testdir does not exist, it is
// created as a directory with permissions of User=read/write/execute
// then returns 1. If testdir already exists and is a directory, does
// nothing and returns 0. If a non-directory file named testdir
// already exists, print an error message and return -1 to indicate
// testing cannot proceed. The error message is:
//
// ERROR: Could not create test directory 'XXX'
//        Non-directory file with that name already exists
//
// with XXX substituted with the value of testdir
//
// CONSTRAINT: This function must be implemented using low-level
// system calls. Use of high-level calls like system("cmd") will be
// reduced to 0 credit. Review system calls like stat() and mkdir()
// for use here. The access() system call may be used but keep in mind
// it does not distinguish between regular files and directories.

int suite_test_set_outfile_name(suite_t *suite, int testnum);
// PROBLEM 1: Sets the field `outfile_name` for the numbered
// test. The filename is constructed according to the pattern
//
// TESTDIR/PREFIX-output-05.txt
//
// with TESTDIR and PREFIX replaced by the testdir and prefix fields
// in the suite and the 05 replaced by the test number. The test
// number is formatted as indicated: printed in a width of 2 with 0
// padding for single-digit test numbers. The sprintf() or snprintf()
// functions are useful to create the string. The string is then
// duplicated into the heap via strdup() and a pointer to it saved in
// `outfile_name`. The file is not created but the name will be used
// when starting a test as output will be redirected into
// outfile_name. This function should always return 0.

int suite_test_create_infile(suite_t *suite, int testnum);
// PROBLEM 1: Creates a file that is used as input for the numbered
// test. The file will contain the contents of the `input` field. If
// that field is NULL, this function immediately returns. Otherwise, a
// file named like
//
//   TESTDIR/PREFIX-input-05.txt
//
// is created with TESTDIR and PREFIX replaced by the `testdir` field
// and `prefix` fields of the suite and the 05 replaced by the test
// number. A copy of this filename is duplicated and retained in the
// `infile_name` field for the test. After opening this file, the
// contents of the `input` field are then written to this file before
// closing the file and returning
// 0. The testing directory is assumed to exist by this function. The
// options associated with the file are to be the following:
// - Open write only
// - Create the file if it does not exist
// - Truncate the file if it does exist
// - Created files have the User Read/Write permission set
// If the function cannot create the input file due to open() failing,
// an error message is printed and -1 is returned; the error message is
// printed using perror() and will appear as:
//
//   Could not create input file : CAUSE
//
// with the portion to the right being added by perror() to show the
// system cause

int suite_test_read_output_actual(suite_t *suite, int testnum);
// PROBLEM 1: Reads the contents of the file named in field
// `outfile_name` for the given testnum into heap-allocated space and
// assigns the output_actual field to that space. Uses a combination
// of stat() and read() to efficiently read in the entire contents of
// a file into a malloc()'d block of memory, null terminates it (\0)
// so that the contents may be treated as a valid C string. Returns the
// total number of bytes read from the file on success (this is
// also the length of the `output_actual` string). If the file could
// not be opened or read, the `output_actual` field is not changed and
// -1 is returned.
//
// CONSTRAINT: This function should perform at most 1 heap allocation;
// use of the realloc() function is barred. System calls like stat()
// MUST be used to determine the amount of memory needed before
// allocation. Failure to do so will lead to loss of credit.

////////////////////////////////////////////////////////////////////////////////
// PROBLEM 2 Functions
////////////////////////////////////////////////////////////////////////////////

int suite_test_start(suite_t *suite, int testnum);
//
// PROBLEM 2: Start a child process that will run the program in the
// indicated test number. The parent process first sets the
// outfile_name and creates infile_name with the program input. It
// then creates a child process, sets the test field `child_pid` to
// the child process ID and returns 0.
//
// The child sets up output redirection so that the standard out AND
// standard error streams for the child process are channeled into the
// file named in field `outfile_name`. Note that standard out and
// standard error are "merged" so that they both go to the same
// `outfile_name`. This file should have the same options used when
// opening it as described in suite_test_create_infile(). If
// infile_name is non-NULL, input redirection is also set up with
// input coming from the file named in field `infile_name`. Uses the
// split_into_argv() function to create an argv[] array which is
// passed to an exec()-family system call.
//
// Any errors in the child during input redirection setup, output
// redirection setup, or exec()'ing print error messages and cause an
// immediate exit() with an associated error code. These are as
// follows:
//
// | CONDITION            | EXIT WITH CODE         |                                             |
// |----------------------+------------------------+---------------------------------------------|
// | Input redirect fail  | exit(TESTFAIL_INPUT);  |                                             |
// | Output redirect fail | exit(TESTFAIL_OUTPUT); |                                             |
// | Exec failure         | exit(TESTFAIL_EXEC);   | Prints 'ERROR: test program failed to exec' |
//
// Since output redirection is being set up, printing error messages
// in the child process becomes unreliable. Instead, the exit_code for
// the child process should be checked for one of the above values to
// determine what happened.
//
// NOTE: When correctly implemented, this function should never return
// in the child process though the compiler may require a `return ??`
// at the end to match the int return type. NOT returning from this
// function in the child is important as if a child manages to return,
// there will now be two instances of chester running with the child
// starting its own series of tests which will not end well...

int suite_test_finish(suite_t *suite, int testnum, int status);
// PROBLEM 2
//
// Processes a test after its child process has completed and
// determines whether the test passes / fails.
//
// The `status` parameter comes from a wait()-style call and is used to
// set the `exit_code_actual` of the test. `exit_code_actual` is one
// of the following two possibilities:
// - 0 or positive integer: test program exited normally and the exit
//   code/status is stored.
// - Negative integer: The tested program exited abnormally due to
//   being signaled and the negative of the signal number is
//   stored. Ex: child received SIGSEGV=11 so exit_code_actual is -11.
// If `status` indicates neither a normal nor abnormal exit, this
// function prints an error and returns (this case is not tested).
//
// Output produced by the test is read into the `output_actual` field
// using previously written functions.
//
// The test's `state` field is set to one of TEST_PASSED or
// TEST_FAILED. Comparisons are done between the fields:
// - output_expect vs output_actual (strings)
// - exit_code_expect vs exit_code_actual (int)
//
// If there is a mismatch with these, the test has failed and its
// `state` is set to TEST_FAILED. If both sets of fields match, the
// state of the test becomes `TEST_PASSED` and the suite's
// `tests_passed` field is incremented.
//
// Special Case: if output_expect is NULL, there is no expected output
// and comparison to output_actual should be skipped. This covers
// testing cases where a program is being run to examine only whether
// it returns the correct exit code or avoids segfaults.

////////////////////////////////////////////////////////////////////////////////
// PROBLEM 3 Functions
////////////////////////////////////////////////////////////////////////////////

void print_window(FILE *out, char *str, int center, int lrwidth);
// PROBLEM 3
// 
// Print part of the string that contains index center to the given
// out file. Print characters in the string between
// [center-lrwidth,center+lrwidth] with the upper bound being
// inclusive. If either the start or stop point is out of bounds,
// truncate the printing: the minimum starting point is index 0, the
// maximum stopping point is the string length.
//
// EXAMPLES:
// char *s = "ABCDEFGHIJKL";
// //         012345678901
// print_window(stdout, s, 4, 3);
// // BCDEFGH
// // 1234567
// print_window(stdout, s, 2, 5);
// // ABCDEFGH
// // 01234567
// print_window(stdout, s, 8, 4);
// // EFGHIJKL
// // 45678901
//
// NOTE: this function is used when creating test results to show
// where expected and actual output differ

int differing_index(char *strA, char *strB);
// PROBLEM 3
// 
// Finds the lowest index where different characters appear in strA and
// strB. If the strings are identical except that one is longer than
// the other, the index returned is the length of the shorter
// string. If the strings are identical, returns -1.
//
// EXAMPLES:
// differing_index("01234567","0123x567") -> 4
// differing_index("012345","01234567")   -> 6
// differing_index("012345","01x34567")   -> 2
// differing_index("012345","012345")     -> -1
// 
// NOTE: this function is used when creating test results to show
// where expected and actual output differ

int suite_test_make_resultfile(suite_t *suite, int testnum);
// PROBLEM 3
//
// Creates a result file for the given test. The general format is shown in the example below.
//   # TEST 6: wc 1 to 10 (FAIL)              // testnum and test title, print "ok" for passed tests
//   ## DESCRIPTION
//   Checks that wc works with input          // description field of test
//
//   ## PROGRAM: wc                           // program field of test
//
//   ## INPUT:                                // input field of test, "INPUT: None" for NULL input
//   1
//   2
//   3
//   4
//   5
//   6
//   7
//   8
//   9
//   10
//                                            // if output_expect is NULL, print "OUTPUT: skipped check"
//   ## OUTPUT: MISMATCH at char position 3   // results of differing_index() between 
//   ### Expect                               // output_expect and output_actual fields
//   10 10 21                                 // output_expect via calls to print_window()
//
//   ### Actual
//   10  9 20                                 // output_actual via calls to print_window()
//                                            // if no MISMATCH in output, prints ## OUTPUT: ok
//
//   ## EXIT CODE: ok                         // MISMATCH if exit_code_expect and actual don't match and
//                                            // prints Expect/Actual values
//   ## RESULT: FAIL                          // "ok" for passed tests
//
// The file to create is named according to the pattern
//
// TESTDIR/PREFIX-result-05.md
//
// with TESTDIR and PREFIX substituted with the `testdir` and
// `prefix` fields of the suite and 05 for the testnum (width 2 and
// 0-padded). Note the use of the .md extension to identify the output
// as Markdown formatted text.
//
// The output file starts with a heading which prints the testnum
// and title in it along with ok/FAIL based on the
// `state` of the test. Then 6 sections are printed which are
// 1. DESCRIPTION
// 2. PROGRAM
// 3. INPUT
// 4. OUTPUT (comparing output_expect and output_actual)
// 5. EXIT CODE (comparing exit_code_expect and exit_code_actual)
// 6. RESULT
//
// In the OUTPUT section, if a difference is detected at position N
// via the differing_index() function, then a window around position N
// is printed into the file for both the expected and actual
// output. The window width used is defined in the header via the
// constant TEST_DIFFWIDTH and is passed to print_window() function.
//
// If the output_expect field is NULL, the OUTPUT section header has
// the message "skipped check" printed next to it.
//
// In the EXIT CODE section, if there is a mismatch between the
// expected and actual exit_code, then they are both printed as in:
// ## EXIT CODE: MISMATCH
// - Expect: 0
// - Actual: 1
//
// The final RESULT section prints either ok / FAIL depending on the
// test state.
//
// If the result file cannot be opened/created, this function prints the
// error message
//   ERROR: Could not create result file 'XXX'
// with XXX substituted for the file name and returns -1. Otherwise
// the function returns 0 on successfully creating the resultfile.

////////////////////////////////////////////////////////////////////////////////
// PROBLEM 4 Functions
////////////////////////////////////////////////////////////////////////////////

int suite_run_tests_singleproc(suite_t *suite);
// PROBLEM 4
//
// Runs tests in the suite one at a time. Before beginning the tests,
// creates the testing directory with a call to
// suite_create_testdir().  If the directory cannot be created, this
// function returns -1 without further action.
//
// The tests with indices in the field `tests_torun[]` are run in the
// order that they appear there. This is done in a loop.
// `suite_test_start(..)` is used to start tests and wait()-style
// system calls are used to suspend execution until the child process
// is finished. Additional functions previously written are then used
// to
// - Assign the exit_code for the child
// - Read the actual output into the test struct
// - Set the pass/fail state
// - Produce a results file for the test
//
// Prints "Running with single process:" and prints a "." on the
// screen as each test completes to give an indication of
// progress. "Done" is printed when all tests complete so that a full
// line which runs 8 tests looks like
//
//    Running with single process: ........ Done
//
// If errors arise such as with waiting for a child process, failures
// with getting the test output, or other items, error messages should
// be printed but the loop should continue. No specific error messages
// are required and no testing is done; error messages are solely to
// aid with debugging problems.

void suite_print_results_table(suite_t *suite);
// PROBLEM 4
//
// Prints a table of test results formatted like the following.
//
//  0) echo check           : FAIL -> see chester-test/prob1-result-00.txt
//  1) sleep 2s             : ok
//  2) pwd check            : FAIL -> see chester-test/prob1-result-02.txt
//  3) seq check            : ok
//  4) ls check             : FAIL -> see chester-test/prob1-result-04.txt
//  5) ls not there         : ok
//  6) wc 1 to 10           : FAIL -> see chester-test/prob1-result-06.txt
//  7) date runs            : ok
//
// The test number at the beginning of the line is printed with width
// 2 and space padded. The Test title is printed with a width of 20,
// left-aligned using capabilities of printf().  If the test passes,
// the message "ok" is added while if it fails, a FAIL appears and the
// result file associated with the test is indicated. This function
// honors the `tests_torun[]` array and will only print table results
// for tests with indices in this array.

////////////////////////////////////////////////////////////////////////////////
// PROBLEM 5 Functions
////////////////////////////////////////////////////////////////////////////////

int main(int argc, char *argv[]);
// PROBLEM 5
//
// Defined in the file chester_main.c. Entry point for the Chester
// application which may be invoked with one of the following command
// line forms along with expected output.
//
// >> ./chester tests.md        # RUNS ALL TESTS
// tests.md : running 8 / 8 tests
// Running with single process: ........ Done
//  0) echo check           : FAIL -> see chester-test/prob1-result-00.txt
//  1) sleep 2s             : ok
//  2) pwd check            : FAIL -> see chester-test/prob1-result-02.txt
//  3) seq check            : ok
//  4) ls check             : FAIL -> see chester-test/prob1-result-04.txt
//  5) ls not there         : ok
//  6) wc 1 to 10           : FAIL -> see chester-test/prob1-result-06.txt
//  7) date runs            : ok
// Overall: 4 / 8 tests passed
//
// >> ./chester tests.md 2 4 6  # RUNS ONLY 3 TESTS NUMBERED 2 4 6
// tests.md : running 3 / 8 tests
// Running with single process: ... Done
//  2) pwd check            : FAIL -> see chester-test/prob1-result-02.txt
//  4) ls check             : FAIL -> see chester-test/prob1-result-04.txt
//  6) wc 1 to 10           : FAIL -> see chester-test/prob1-result-06.txt
// Overall: 0 / 3 tests passed
//
// main() parses the indicated input file to create a test suite
// struct. It then determines if all tests or only specified tests
// will be run by analyzing the command line argument structure. The
// `suite.tests_torun[]` and `suite.test_torun_count` fields are set
// according to which tests will be run: either specified tests only
// or 0..tests_count-1 for all tests. 
//
// Before running tests, output lines are printed indicating the test
// file and number of tests to be run versus the total number of tests
// in the file. The tests are then run and an output table is produced
// using appropriate functions. The "Overall" line is printed with the
// count of tests passed and that were actually run.
//
// MAKEUP CREDIT: Support the following additional invocations that
// support concurrent test execution.
//
// >> ./chester -max_procs 4 tests.md         # RUN ALL TESTS WITH
// tests.md : running 8 / 8 tests             # 4 CONCURRENT PROCESSES
// Running with 4 processes: ........ Done
//  0) echo check           : FAIL -> see chester-test/prob1-result-00.txt
//  1) sleep 2s             : ok
//  2) pwd check            : FAIL -> see chester-test/prob1-result-02.txt
//  3) seq check            : ok
//  4) ls check             : FAIL -> see chester-test/prob1-result-04.txt
//  5) ls not there         : ok
//  6) wc 1 to 10           : FAIL -> see chester-test/prob1-result-06.txt
//  7) date runs            : ok
// Overall: 4 / 8 tests passed
//
// >> ./chester -max_procs 3 tests.md 2 4 6   # RUN 3 SELECTED TESTS WITH
// tests.md : running 3 / 8 tests             # 3 CONCURRENT PROCESSES
// Running with 3 processes: ... Done
//  2) pwd check            : FAIL -> see chester-test/prob1-result-02.txt
//  4) ls check             : FAIL -> see chester-test/prob1-result-04.txt
//  6) wc 1 to 10           : FAIL -> see chester-test/prob1-result-06.txt
// Overall: 0 / 3 tests passed
//
// Concurrent test runs are handled by the associated
// `suite_run_tests_multiproc()` function. The `-max_procs` command
// line flag sets the `suite.max_procs` field which is used in
// `suite_run_tests_multiproc()` to launch multiple processes to speed
// up test completion.

int suite_testnum_with_pid(suite_t *suite, pid_t pid);
// MAKEUP CREDIT: Finds the test that has child_pid equal to pid and
// returns its index. If no test with the given pid is found,
// returns -1. This function is used during
// suite_run_tests_multiproc() to look up the test associated with a
// completed child process.

int suite_run_tests_multiproc(suite_t *suite);
// MAKEUP CREDIT: Like suite_run_tests_singleproc() but uses up to
// suite.max_procs concurrent processes to do so. This will speed up
// tests if multiple processors are available or if tests are bounded
// by I/O operations. The general algorithm to achieve concurrent
// execution is described in the project description and should be
// consulted carefully when implementing this function.

3.5 Provided Utility Functions

The emphasis of the project is on utilizing some system calls to achieve an interesting effect. To that end, some code is provided to focus attention on these aspects of the project. Provided code is in the chester_util.c file and the functions there may be used elsewhere to ease the implementation of the remainder of the program.

Notable functions are:

Dprintf()
Print a debugging message which is only shown if an environment variable is set. Extremely useful during debugging and alleviates the need to remove debugging messages after the fact.
split_into_argv()
Splits a string into an array of tokens that may be used as the argv[] in an exec() call
suite_init() / suite_dealloc()
Initialize suite_t structs to default values and deallocate memory associated with them.
suite_init_from_file_peg()
Reads a file and initializes a suite_t struct with its contents.
suite_do_global_directive() / suite_do_local_directive()
Functions used when parsing Chester input files to process directives like !testdir=some-directory. These functions would only be used by students if they implement the optional MAKEUP problems described later.

Below is an outline of the provided functions. The full source can be viewed in the indicated source file.

// chester_util.c: Provided functions that do not require
// implementation.

#include "chester.h"

#include "chester_parse.h"

void Dprintf(const char* format, ...);
// Prints out a message if the environment variable DEBUG is set;
// Try running as `DEBUG=1 ./some_program`

int split_into_argv(char *line, char *argv[], int *argc_ptr);
// Splits `line` into tokens with pointers to each token stored in
// argv[] and argc_ptr set to the number of tokens found. This
// function is in the style of strtok() and destructively modifies
// `line`. A limited amount of "quoting" is supported to allow single-
// or double-quoted strings to be present. The function is useful for
// splitting lines into an argv[] / argc pair in preparation for an
// exec() call.  0 is returned on success while an error message is
// printed and 1 is returned if splitting fails due to problems with
// the string.
//
// EXAMPLES:
// char line[128] = "Hello world 'I feel good' today";
// char *set_argv[32];
// int set_argc;
// int ret = split_into_argv(line, set_argv, &set_argc);
// // set_argc: 4
// // set_argv[0]: Hello
// // set_argv[1]: world
// // set_argv[2]: I feel good
// // set_argv[3]: today

char *test_state_str(test_state_t state);
// Returns a string constant representing the test_state_t for
// easy printing

void suite_init(suite_t *suite);
// Initialize fields of `suite` to default values.

void suite_dealloc(suite_t *suite);
// Deallocate internal memory in the suite. All strings must be
// free()'d in the suite

int suite_init_from_file_peg(suite_t *suite, char *fname);
// Initialize `suite` from a file using the instructor-provided PEG
// parser. The file name `fname` is strdup()'d into the appropriate
// field as are various other fields.

int suite_do_global_directive(suite_t *suite, const char *key, const char *val);
// Processes a global directive which changes attributes of the suite
// struct. This function is used during parsing for !key=val
// directives in the global directive section.

int suite_do_local_directive(suite_t *suite, const char *key, const char *val);
// Processes a local directive in the suite that changes attributes of
// the tests[] entry at index suite.tests_count.

3.6 Sample Usage

Below is an example use of the complete chester program to perform testing activities using input files in the provided data/ directory.

 1: # Demonstration of chester 
 2: >> cd p4-code
 3: 
 4: >> make chester
 5: gcc -Wall -Wno-comment -Werror -g  -c chester_main.c
 6: gcc -Wall -Wno-comment -Werror -g  -c chester_funcs.c
 7: gcc -Wall -Wno-comment -Werror -g  -c chester_util.c
 8: gcc -Wall -Wno-comment -Werror -g  -c chester_parse.c
 9: gcc -Wall -Wno-comment -Werror -g  -o chester chester_main.o chester_funcs.o chester_util.o chester_parse.o
10: 
11: >> chester data/four_tests.md                      # RUN TESTS IN four_tests.md
12: data/four_tests.md : running 4 / 4 tests
13: Running with single process: .... Done
14:  0) seq check            : ok                      # USE DIRECTORY  chester-test BY DEFAULT
15:  1) wc 1 to 10           : FAIL -> see chester-test/four-tests-result-01.md
16:  2) bash with output     : ok
17:  3) tail with input      : FAIL -> see chester-test/four-tests-result-03.md
18: Overall: 2 / 4 tests passed
19: 
20: >> cat chester-test/four-tests-result-01.md        # SHOW RESULTS OF ONE TEST THAT FAILED
21: # TEST 1: wc 1 to 10 (FAIL)
22: ## DESCRIPTION
23: Checks that wc works with input; should fail as the input is slightly
24: mangled.
25: 
26: ## PROGRAM: wc
27: 
28: ## INPUT:
29: 1
30: 2
31: 3
32: 4
33: 
34: 6
35: 7
36: 8
37: 9
38: 10
39: 
40: ## OUTPUT: MISMATCH at char position 3
41: ### Expect
42: 10 10 21
43: 
44: ### Actual
45: 10  9 20
46: 
47: 
48: ## EXIT CODE: ok
49: 
50: ## RESULT: FAIL
51: 
52: >> chester data/special_cases.md                   # RUN special_cases.md TESTS
53: data/special_cases.md : running 9 / 9 tests
54: Running with single process: ......... Done        # DIRECTIVE USES A DIFFERENT TEST DIRECTORY
55:  0) Segfault Test A      : FAIL -> see chester-test-special/special-cases-result-00.md
56:  1) Segfault Test B      : ok
57:  2) Error Redirect       : ok
58:  3) Empty Input          : ok
59:  4) Term Signal A        : FAIL -> see chester-test-special/special-cases-result-04.md
60:  5) Term Signal B        : FAIL -> see chester-test-special/special-cases-result-05.md
61:  6) Term Signal C        : ok
62:  7) Ignore Output        : ok
63:  8) Empty Description    : ok
64: Overall: 6 / 9 tests passed
65:                                                    # SHOW A FAILED TEST RESULT
66: >> cat chester-test-special/special-cases-result-00.md
67: # TEST 0: Segfault Test A (FAIL)
68: ## DESCRIPTION
69: Checks that the data/raise_sigsegv.sh program runs and the return code
70: is properly handled. The test should fail.
71: 
72: ## PROGRAM: bash data/raise_sigsegv.sh
73: 
74: ## INPUT: None
75: 
76: ## OUTPUT: ok
77: 
78: ## EXIT CODE: MISMATCH
79: - Expect: 0
80: - Actual: -11                                      # SEGFAULT IN PROGRAM DETECTED
81: 
82: ## RESULT: FAIL
83: 
84: >> chester data/special_cases.md 2 4 7             # SPECIFY ONLY 3 TESTS TO RUN ON
85: data/special_cases.md : running 3 / 9 tests        # THE COMMAND LINE RATHER THAN ALL
86: Running with single process: ... Done
87:  2) Error Redirect       : ok
88:  4) Term Signal A        : FAIL -> see chester-test-special/special-cases-result-04.md
89:  7) Ignore Output        : ok
90: Overall: 2 / 3 tests passed

4 Problem 1: Test Service Functions

The functions here are meant to acquaint students with some of the data types and functionalities required in Chester and the conventions around string usage. None of the functions is terribly long; they should be considered a "warm-up" for later work.

4.1 Creating the Test Directory

Chester requires creation of a variety of files for each test:

  • Input to the program
  • Output from the program
  • Results of the test

To keep a source code directory reasonably clean, these are stored in a Test Directory that is created by Chester. The following function is used to create the testing directory early on in a Chester run.

int suite_create_testdir(suite_t *suite);
// PROBLEM 1: Creates the testing results directory according to the
// name in the suite `testdir` field. If testdir does not exist, it is
// created as a directory with permissions of User=read/write/execute
// then returns 1. If testdir already exists and is a directory, does
// nothing and returns 0. If a non-directory file named testdir
// already exists, print an error message and return -1 to indicate
// testing cannot proceed. The error message is:
//
// ERROR: Could not create test directory 'XXX'
//        Non-directory file with that name already exists
//
// with XXX substituted with the value of testdir
//
// CONSTRAINT: This function must be implemented using low-level
// system calls. Use of high-level calls like system("cmd") will be
// reduced to 0 credit. Review system calls like stat() and mkdir()
// for use here. The access() system call may be used but keep in mind
// it does not distinguish between regular files and directories.
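
Below is a minimal sketch of one way to satisfy the above using stat() and mkdir(); it is a starting point under the documented behavior, not the official solution, and omits checking mkdir() for failure.

  // SKETCH: create the test directory if needed (suite_t from chester.h)
  #include <sys/stat.h>   // stat(), mkdir(), S_ISDIR, S_IRWXU
  #include <stdio.h>

  int suite_create_testdir(suite_t *suite){
    struct stat sb;
    if(stat(suite->testdir, &sb) == 0){       // something with that name exists
      if(S_ISDIR(sb.st_mode)){
        return 0;                             // already a directory: nothing to do
      }
      printf("ERROR: Could not create test directory '%s'\n", suite->testdir);
      printf("       Non-directory file with that name already exists\n");
      return -1;                              // non-directory blocks testing
    }
    mkdir(suite->testdir, S_IRWXU);           // User=read/write/execute
    return 1;                                 // directory newly created
  }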

4.2 Setting the Output File Name

The output for all tests will be captured in files. The following function sets the field with the name for this file. It introduces a pattern of formatting strings using sprintf() / snprintf() and then copying the result using strdup() that will recur throughout the project.

int suite_test_set_outfile_name(suite_t *suite, int testnum);
// PROBLEM 1: Sets the field `outfile_name` for the numbered
// test. The filename is constructed according to the pattern
//
// TESTDIR/PREFIX-output-05.txt
//
// with TESTDIR and PREFIX replaced by the testdir and prefix fields
// in the suite and the 05 replaced by the test number. The test
// number is formatted as indicated: printed in a width of 2 with 0
// padding for single-digit test numbers. The sprintf() or snprintf()
// functions are useful to create the string. The string is then
// duplicated into the heap via strdup() and a pointer to it saved in
// `outfile_name`. The file is not created but the name will be used
// when starting a test as output will be redirected into
// outfile_name. This function should always return 0.
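
A short sketch of the snprintf()-then-strdup() pattern described above; it assumes the per-test structs live in the suite's tests[] array as documented in chester.h.

  // SKETCH: format the output file name and store a heap copy of it
  #include <stdio.h>      // snprintf()
  #include <string.h>     // strdup()

  int suite_test_set_outfile_name(suite_t *suite, int testnum){
    char buf[1024];
    snprintf(buf, sizeof(buf), "%s/%s-output-%02d.txt",  // %02d: width 2, 0-padded
             suite->testdir, suite->prefix, testnum);
    suite->tests[testnum].outfile_name = strdup(buf);    // heap copy, free()'d later
    return 0;
  }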

4.3 Creating Input Files

If a test has input, then parsing will place it in the input field of a test_t. The following function creates a file with the contents of the input. This file will be used later to feed input to a tested program.

int suite_test_create_infile(suite_t *suite, int testnum);
// PROBLEM 1: Creates a file that is used as input for the numbered
// test. The file will contain the contents of the `input` field. If
// that field is NULL, this function immediately returns. Otherwise, a
// file named like
//
//   TESTDIR/PREFIX-input-05.txt
//
// is created with TESTDIR and PREFIX replaced by the `testdir` field
// and `prefix` fields of the suite and the 05 replaced by the test
// number. A copy of this filename is duplicated and retained in the
// `infile_name` field for the test. After opening this file, the
// contents of the `input` field are then written to this file before
// closing the file and returning
// 0. The testing directory is assumed to exist by this function. The
// options associated with the file are to be the following:
// - Open write only
// - Create the file if it does not exist
// - Truncate the file if it does exist
// - Created files have the User Read/Write permission set
// If the function cannot create the input file due to open() failing,
// an error message is printed and -1 is returned; the error message is
// printed using perror() and will appear as:
//
//   Could not create input file : CAUSE
//
// with the portion to the right being added by perror() to show the
// system cause
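
The sketch below shows open() flags matching the documented options; the field names follow the documentation above and error checking on write() is omitted for brevity.

  // SKETCH: write the test input to TESTDIR/PREFIX-input-NN.txt
  #include <fcntl.h>      // open(), O_WRONLY, O_CREAT, O_TRUNC
  #include <sys/stat.h>   // S_IRUSR, S_IWUSR
  #include <unistd.h>     // write(), close()
  #include <stdio.h>
  #include <string.h>

  int suite_test_create_infile(suite_t *suite, int testnum){
    test_t *test = &suite->tests[testnum];
    if(test->input == NULL){                      // no input: nothing to create
      return 0;
    }
    char buf[1024];
    snprintf(buf, sizeof(buf), "%s/%s-input-%02d.txt",
             suite->testdir, suite->prefix, testnum);
    int fd = open(buf, O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
    if(fd == -1){
      perror("Could not create input file ");     // perror() appends ": CAUSE"
      return -1;
    }
    test->infile_name = strdup(buf);              // retain name for redirection
    write(fd, test->input, strlen(test->input));  // file contents = input field
    close(fd);
    return 0;
  }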

4.4 Reading Output Files

The output from a program will be stored in a file. In order to determine if the program produced the correct output, its output is read from that file into memory and compared against the expected output. The below function is responsible for reading in the contents of an output file and filling in the output_actual field. It is required to use certain system programming techniques outlined in the documentation string to ensure it operates as efficiently as possible.

HINT: stat() is essential to use to complete this function correctly.

int suite_test_read_output_actual(suite_t *suite, int testnum);
// PROBLEM 1: Reads the contents of the file named in field
// `outfile_name` for the given testnum into heap-allocated space and
// assigns the output_actual field to that space. Uses a combination
// of stat() and read() to efficiently read in the entire contents of
// a file into a malloc()'d block of memory, null terminates it (\0)
// so that the contents may be treated as a valid C string. Returns the
// total number of bytes read from the file on success (this is
// also the length of the `output_actual` string). If the file could
// not be opened or read, the `output_actual` field is not changed and
// -1 is returned.
//
// CONSTRAINT: This function should perform at most 1 heap allocation;
// use of the realloc() function is barred. System calls like stat()
// MUST be used to determine the amount of memory needed before
// allocation. Failure to do so will lead to loss of credit.
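
A sketch of the required stat()-then-read() technique: one malloc() sized via stat(), one read() of the whole file, then null termination. Field names follow the documentation above.

  // SKETCH: read an entire output file with a single heap allocation
  #include <sys/stat.h>   // stat()
  #include <fcntl.h>      // open()
  #include <unistd.h>     // read(), close()
  #include <stdlib.h>     // malloc(), free()

  int suite_test_read_output_actual(suite_t *suite, int testnum){
    test_t *test = &suite->tests[testnum];
    struct stat sb;
    if(stat(test->outfile_name, &sb) == -1){  // size known before allocating
      return -1;
    }
    int fd = open(test->outfile_name, O_RDONLY);
    if(fd == -1){
      return -1;
    }
    char *buf = malloc(sb.st_size + 1);       // single allocation, +1 for '\0'
    ssize_t nread = read(fd, buf, sb.st_size);
    close(fd);
    if(nread == -1){
      free(buf);                              // leave output_actual unchanged
      return -1;
    }
    buf[nread] = '\0';                        // terminate: now a valid C string
    test->output_actual = buf;
    return (int)nread;                        // bytes read == string length
  }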

5 Problem 2: Start/Finish Tests

5.1 Starting Tests

Tests are run by forking a child process and later executing the program indicated in the program field of the test struct. Before executing the program, several items of setup are done.

  • The parent process must first create the input file for the tested program
  • Once a child is forked, the parent process retains the PID for the child process and sets its state to TEST_RUNNING
  • The child process redirects its standard input to come from the input file and standard output/error to go into the specified output file. This requires the use of system calls like open() and dup2()
  • In order to execute the program, the child must also set up an argv[] array using the provided split_into_argv() function.

Along the way, a variety of things may go wrong, in which case the child process may exit prematurely with exit codes that indicate the problems that arose.

The function below is meant to manage this whole process.

int suite_test_start(suite_t *suite, int testnum);
// PROBLEM 2: Start a child process that will run the program in the
// indicated test number. The parent process first sets the
// outfile_name and creates infile_name with the program input. It
// then creates a child process, sets the test field `child_pid` to
// the child process ID and returns 0.
//
// The child sets up output redirection so that the standard out AND
// standard error streams for the child process are channeled into the
// file named in field `outfile_name`. Note that standard out and
// standard error are "merged" so that they both go to the same
// `outfile_name`. This file should have the same options used when
// opening it as described in suite_test_create_infile(). If
// infile_name is non-NULL, input redirection is also set up with
// input coming from the file named in field `infile_name`. Uses the
// split_into_argv() function to create an argv[] array which is
// passed to an exec()-family system call.
//
// Any errors in the child during input redirection setup, output
// redirection setup, or exec()'ing print error messages and cause an
// immediate exit() with an associated error code. These are as
// follows:
//
// | CONDITION            | EXIT WITH CODE         |                                             |
// |----------------------+------------------------+---------------------------------------------|
// | Input redirect fail  | exit(TESTFAIL_INPUT);  |                                             |
// | Output redirect fail | exit(TESTFAIL_OUTPUT); |                                             |
// | Exec failure         | exit(TESTFAIL_EXEC);   | Prints 'ERROR: test program failed to exec' |
//
// Since output redirection is being set up, printing error messages
// in the child process becomes unreliable. Instead, the exit_code for
// the child process should be checked for one of the above values to
// determine what happened.
//
// NOTE: When correctly implemented, this function should never return
// in the child process though the compiler may require a `return ??`
// at the end to match the int return type. NOT returning from this
// function in the child is important as if a child manages to return,
// there will now be two instances of chester running with the child
// starting its own series of tests which will not end well...
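
A sketch of the fork()/dup2()/exec() skeleton for this function appears below. It assumes split_into_argv() NULL-terminates the argv[] array it fills (required by execvp()) and abbreviates the parent-side setup of outfile_name/infile_name described above.

  // SKETCH: fragment of suite_test_start(); assumes chester.h and the
  // usual system headers (fcntl.h, unistd.h, stdlib.h) are included
  test_t *test = &suite->tests[testnum];
  pid_t pid = fork();
  if(pid == 0){                                     // CHILD
    if(test->infile_name != NULL){                  // input redirection
      int in_fd = open(test->infile_name, O_RDONLY);
      if(in_fd == -1 || dup2(in_fd, STDIN_FILENO) == -1){
        exit(TESTFAIL_INPUT);
      }
    }
    int out_fd = open(test->outfile_name,           // same options as input files
                      O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
    if(out_fd == -1 ||
       dup2(out_fd, STDOUT_FILENO) == -1 ||         // merge stdout and stderr
       dup2(out_fd, STDERR_FILENO) == -1){          // into the single output file
      exit(TESTFAIL_OUTPUT);
    }
    char *child_argv[32]; int child_argc;
    split_into_argv(test->program, child_argv, &child_argc);
    execvp(child_argv[0], child_argv);              // only returns if exec fails
    printf("ERROR: test program failed to exec\n");
    exit(TESTFAIL_EXEC);
  }
  test->child_pid = pid;                            // PARENT records child's PID
  return 0;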

5.2 Finishing Tests

After a child process completes running a program, the parent process will need to collect information about the test results. This happens after a wait()-style system call has indicated the child has finished. Activities here include gathering the program output, comparing it and the child exit code to expected values, and setting the state of the test to either TEST_PASSED or TEST_FAILED. The following function handles these activities.

int suite_test_finish(suite_t *suite, int testnum, int status);
// PROBLEM 2
//
// Processes a test after its child process has completed and
// determines whether the test passes / fails.
//
// The `status` parameter comes from a wait()-style call and is used to
// set the `exit_code_actual` of the test. `exit_code_actual` is one
// of the following two possibilities:
// - 0 or positive integer: test program exited normally and the exit
//   code/status is stored.
// - Negative integer: The tested program exited abnormally due to
//   being signaled and the negative of the signal number is
//   stored. Ex: child received SIGSEGV=11 so exit_code_actual is -11.
// If `status` indicates neither a normal nor abnormal exit, this
// function prints an error and returns (this case is not tested).
//
// Output produced by the test is read into the `output_actual` field
// using previously written functions.
//
// The test's `state` field is set to one of TEST_PASSED or
// TEST_FAILED. Comparisons are done between the fields:
// - output_expect vs output_actual (strings)
// - exit_code_expect vs exit_code_actual (int)
//
// If there is a mismatch with these, the test has failed and its
// `state` is set to TEST_FAILED. If both sets of fields match, the
// state of the test becomes `TEST_PASSED` and the suite's
// `tests_passed` field is incremented.
//
// Special Case: if output_expect is NULL, there is no expected output
// and comparison to output_actual should be skipped. This covers
// testing cases where a program is being run to examine only whether
// it returns the correct exit code or avoids segfaults.
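
The status decoding can be done with the standard macros from <sys/wait.h>; a sketch of that fragment is below (the return value on the error path is not tested, so any sensible value works).

  // SKETCH: fragment of suite_test_finish() decoding the wait() status
  #include <sys/wait.h>   // WIFEXITED, WEXITSTATUS, WIFSIGNALED, WTERMSIG

  test_t *test = &suite->tests[testnum];
  if(WIFEXITED(status)){
    test->exit_code_actual = WEXITSTATUS(status);   // normal exit: 0 or positive
  }
  else if(WIFSIGNALED(status)){
    test->exit_code_actual = -WTERMSIG(status);     // signaled: e.g. SIGSEGV -> -11
  }
  else{
    printf("ERROR: unexpected status for test %d\n", testnum);
    return -1;                                      // error path (not tested)
  }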

6 Problem 3: Creating Test Results

It is handy for folks running tests to receive both summary information about tests being run and detailed results for individual tests. The functions in this problem deal with the latter: they construct a results file for an individual test indicating whether it passed and, if not, the reasons for its failure.

6.1 String Comparison Utilities

When comparing expected and actual test output, the following two functions will be used to home in on differences between them. These functions are used by the result file function in the next section.

void print_window(FILE *out, char *str, int center, int lrwidth);
// PROBLEM 3
// 
// Print part of the string that contains index center to the given
// out file. Print characters in the string between
// [center-lrwidth,center+lrwidth] with the upper bound being
// inclusive. If either the start or stop point is out of bounds,
// truncate the printing: the minimum starting point is index 0, the
// maximum stopping point is the string length.
//
// EXAMPLES:
// char *s = "ABCDEFGHIJKL";
// //         012345678901
// print_window(stdout, s, 4, 3);
// // BCDEFGH
// // 1234567
// print_window(stdout, s, 2, 5);
// // ABCDEFGH
// // 01234567
// print_window(stdout, s, 8, 4);
// // EFGHIJKL
// // 45678901
//
// NOTE: this function is used when creating test results to show
// where expected and actual output differ

int differing_index(char *strA, char *strB);
// PROBLEM 3
// 
// Finds the lowest index where different characters appear in strA and
// strB. If the strings are identical except that one is longer than
// the other, the index returned is the length of the shorter
// string. If the strings are identical, returns -1.
//
// EXAMPLES:
// differing_index("01234567","0123x567") -> 4
// differing_index("012345","01234567")   -> 6
// differing_index("012345","01x34567")   -> 2
// differing_index("012345","012345")     -> -1
// 
// NOTE: this function is used when creating test results to show
// where expected and actual output differ
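
Both functions are short exercises in index arithmetic. The sketches below match the documented examples and assume non-NULL string arguments.

  #include <stdio.h>      // FILE, fputc()
  #include <string.h>     // strlen()

  // SKETCH: first index at which two strings differ
  int differing_index(char *strA, char *strB){
    int i = 0;
    while(strA[i] != '\0' && strB[i] != '\0'){
      if(strA[i] != strB[i]){
        return i;                       // first mismatching position
      }
      i++;
    }
    if(strA[i] != strB[i]){             // one string is a prefix of the other
      return i;                         // length of the shorter string
    }
    return -1;                          // strings are identical
  }

  // SKETCH: print a window of str around index center, bounds truncated
  void print_window(FILE *out, char *str, int center, int lrwidth){
    int len   = strlen(str);
    int start = center - lrwidth;
    int stop  = center + lrwidth;       // inclusive upper bound
    if(start < 0){   start = 0;   }     // truncate to the string bounds
    if(stop  > len){ stop  = len; }
    for(int i = start; i <= stop && str[i] != '\0'; i++){
      fputc(str[i], out);
    }
  }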

6.2 Results Files

Results files for individual tests are stored in the Testing Directory and are named according to the test number and prefix for the suite. An example result file with some comments is shown below. This is the same example that appears in the documentation comments for the following function. Result files are in Markdown format just like input files.

# TEST 6: wc 1 to 10 (FAIL)              <!--- testnum and test title, print "ok" for passed tests -->
## DESCRIPTION
Checks that wc works with input          <!--- description field of test -->

## PROGRAM: wc                           <!--- program field of test -->

## INPUT:                                <!--- input field of test, "INPUT: None" for NULL input -->
1
2
3
4
5
6
7
8
9
10
                                         <!--- if output_expect is NULL, print "OUTPUT: skipped check" -->
## OUTPUT: MISMATCH at char position 3   <!--- results of differing_index() between  -->
### Expect                               <!--- output_expect and output_actual fields -->
10 10 21                                 <!--- output_expect via calls to print_window() -->

### Actual
10  9 20                                 <!--- output_actual via calls to print_window() -->
                                         <!--- if no MISMATCH in output, prints ## OUTPUT: ok -->

## EXIT CODE: ok                         <!--- MISMATCH if exit_code_expect and actual don't match and -->
                                         <!--- prints Expect/Actual values -->
## RESULT: FAIL                          <!--- "ok" for passed tests -->

The function below produces a result file for a given test.

int suite_test_make_resultfile(suite_t *suite, int testnum);
// PROBLEM 3
//
// Creates a result file for the given test. The general format is shown in the example below.
//   # TEST 6: wc 1 to 10 (FAIL)              // testnum and test title, print "ok" for passed tests
//   ## DESCRIPTION
//   Checks that wc works with input          // description field of test
//
//   ## PROGRAM: wc                           // program field of test
//
//   ## INPUT:                                // input field of test, "INPUT: None" for NULL input
//   1
//   2
//   3
//   4
//   5
//   6
//   7
//   8
//   9
//   10
//                                            // if output_expect is NULL, print "OUTPUT: skipped check"
//   ## OUTPUT: MISMATCH at char position 3   // results of differing_index() between 
//   ### Expect                               // output_expect and output_actual fields
//   10 10 21                                 // output_expect via calls to print_window()
//
//   ### Actual
//   10  9 20                                 // output_actual via calls to print_window()
//                                            // if no MISMATCH in output, prints ## OUTPUT: ok
//
//   ## EXIT CODE: ok                         // MISMATCH if exit_code_expect and actual don't match and
//                                            // prints Expect/Actual values
//   ## RESULT: FAIL                          // "ok" for passed tests
//
// The file to create is named according to the pattern
//
// TESTDIR/PREFIX-result-05.md
//
// with TESTDIR and PREFIX substituted with the `testdir` and
// `prefix` fields of the suite and 05 for the testnum (width 2 and
// 0-padded). Note the use of the .md extension to identify the output
// as Markdown formatted text.
//
// The output file starts with a heading which prints the testnum
// and title in it along with ok/FAIL based on the
// `state` of the test. Then 6 sections are printed which are
// 1. DESCRIPTION
// 2. PROGRAM
// 3. INPUT
// 4. OUTPUT (comparing output_expect and output_actual)
// 5. EXIT CODE (comparing exit_code_expect and exit_code_actual)
// 6. RESULT
//
// In the OUTPUT section, if a difference is detected at position N
// via the differing_index() function, then a window around position N
// is printed into the file for both the expected and actual
// output. The window width used is defined in the header via the
// constant TEST_DIFFWIDTH and is passed to the print_window() function.
//
// If the output_expect field is NULL, the OUTPUT section header has
// the message "skipped check" printed next to it.
//
// In the EXIT CODE section, if there is a mismatch between the
// expected and actual exit_code, then they are both printed as in:
// ## EXIT CODE: MISMATCH
// - Expect: 0
// - Actual: 1
//
// The final RESULT section prints either ok / FAIL depending on the
// test state.
//
// If the result file cannot be opened/created, this function prints
// the error message
//   ERROR: Could not create result file 'XXX'
// with XXX substituted for the file name and returns -1. Otherwise
// the function returns 0 on successfully creating the resultfile.
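
Below is a minimal sketch of the naming and error-handling skeleton for this function. The testdir, prefix, title, and resultfile_name fields follow the doc comments, but the test_t type name, the TEST_PASSED enum value, and the local buffer size are illustrative assumptions; consult chester.h for the actual declarations.

char name[256];                                       // buffer size is an assumption
snprintf(name, sizeof(name), "%s/%s-result-%02d.md",  // width 2, 0-padded testnum
         suite->testdir, suite->prefix, testnum);
test_t *test = &suite->tests[testnum];                // type/field names assumed
test->resultfile_name = strdup(name);                 // retain the name in the test struct
FILE *fout = fopen(name, "w");                        // high-level output for convenience
if(fout == NULL){
  printf("ERROR: Could not create result file '%s'\n", name);
  return -1;
}
fprintf(fout, "# TEST %d: %s (%s)\n", testnum, test->title,
        test->state == TEST_PASSED ? "ok" : "FAIL");  // enum value assumed
// ... print the DESCRIPTION, PROGRAM, INPUT, OUTPUT, EXIT CODE, RESULT sections ...
fclose(fout);
return 0;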

7 Problem 4: Running Suite Tests

Running the tests in a suite is performed by the following function. Note that it uses the tests_torun[] array to determine which tests to run: this array contains indices of tests to run and may be a subset of all tests. The array is set before this function runs, e.g., in main().

int suite_run_tests_singleproc(suite_t *suite);
// PROBLEM 4
//
// Runs tests in the suite one at a time. Before beginning the tests,
// creates the testing directory with a call to
// suite_create_testdir().  If the directory cannot be created, this
// function returns -1 without further action.
//
// The tests with indices in the field `tests_torun[]` are run in the
// order that they appear there. This is done in a loop.
// `suite_test_start(..)` is used to start tests and wait()-style
// system calls are used to suspend execution until the child process
// is finished. Additional functions previously written are then used
// to
// - Assign the exit_code for the child
// - Read the actual output into the test struct
// - Set the pass/fail state
// - Produce a results file for the test
//
// Prints "Running with single process:" and, as each test
// completes, prints a "." on the screen to give an indication of
// progress. "Done" is printed when all tests complete so that a full
// line which runs 8 tests looks like
//
//    Running with single process: ........ Done
//
// If errors arise such as with waiting for a child process, failures
// with getting the test output, or other items, error messages should
// be printed but the loop should continue. No specific error messages
// are required and no testing is done; error messages are solely to
// aid with debugging problems.
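
A minimal sketch of this loop appears below. It requires <sys/wait.h>; the tests_torun_count field name and the exact signatures of suite_test_start() and suite_test_finish() are assumptions, so consult chester.h for the actual declarations.

if(suite_create_testdir(suite) == -1){          // can't proceed without the directory
  return -1;
}
printf("Running with single process: ");
for(int i=0; i<suite->tests_torun_count; i++){  // field name assumed
  int testnum = suite->tests_torun[i];
  suite_test_start(suite, testnum);             // fork a child to run the test
  int status;
  pid_t pid = wait(&status);                    // suspend until the child finishes
  if(pid == -1){
    perror("wait");                             // report the error but keep looping
    continue;
  }
  suite_test_finish(suite, testnum, status);    // exit code, output, pass/fail (signature assumed)
  suite_test_make_resultfile(suite, testnum);   // write the per-test result file
  printf(".");
}
printf(" Done\n");
return 0;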

After the specified tests are run, a summary table is useful and can be printed using the following function.

void suite_print_results_table(suite_t *suite);
// PROBLEM 4
//
// Prints a table of test results formatted like the following.
//
//  0) echo check           : FAIL -> see chester-test/prob1-result-00.md
//  1) sleep 2s             : ok
//  2) pwd check            : FAIL -> see chester-test/prob1-result-02.md
//  3) seq check            : ok
//  4) ls check             : FAIL -> see chester-test/prob1-result-04.md
//  5) ls not there         : ok
//  6) wc 1 to 10           : FAIL -> see chester-test/prob1-result-06.md
//  7) date runs            : ok
//
// The test number at the beginning of the line is printed with width
// 2 and space padded. The test title is printed with a width of 20,
// left-aligned using capabilities of printf().  If the test passes,
// the message "ok" is added while if it fails, a FAIL appears and the
// result file associated with the test is indicated. This function
// honors the `tests_torun[]` array and will only print table results
// for tests with indices in this array.
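
The column alignment can be produced directly with printf() width specifiers. The standalone snippet below, with hard-coded example values, demonstrates the formatting:

#include <stdio.h>

int main(void){
  // %2d  prints the test number with width 2, space padded
  // %-20s prints the title with width 20, left-aligned
  printf("%2d) %-20s: %s\n", 0, "echo check", "FAIL -> see chester-test/prob1-result-00.md");
  printf("%2d) %-20s: %s\n", 1, "sleep 2s", "ok");
  return 0;
}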

8 Problem 5: Chester Main

The final step in constructing Chester is to provide a main() function in chester_main.c which sequences all the previous functionality properly.

Parsing the Input File

A function is provided in chester_util.c to parse input files and fill in a suite_t struct; it should be used to avoid the need for manual parsing. A typical invocation will be:

{
  char *infilename = ...;
  suite_t mysuite;
  int ret = suite_init_from_file(&mysuite, infilename);
  if(ret == -1){
    // error case;
  }
  ...
}

Note that this opts to place the suite on the stack, which is typical but not required.

Command Line Options

For convenience, chester can be run with two variations:

>> chester tests.md             # (A) run all tests in the input file
>> chester tests.md 2 5 9 11    # (B) specify which tests to run on the command line

Processing of the command line options will need to take place in main() to set up the tests_torun[] array to reflect these two possibilities.
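
A sketch of this processing is shown below. It assumes a tests_torun_count field alongside tests_torun[] (the exact field names may differ in chester.h) and that the suite has already been loaded from argv[1].

if(argc == 2){                                  // form (A): run every test
  for(int i=0; i<suite.tests_count; i++){
    suite.tests_torun[i] = i;
  }
  suite.tests_torun_count = suite.tests_count;  // field name assumed
}
else{                                           // form (B): indices given on the command line
  for(int i=2; i<argc; i++){
    suite.tests_torun[i-2] = atoi(argv[i]);     // requires <stdlib.h>
  }
  suite.tests_torun_count = argc - 2;
}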

Output

The main program should print a title message like:

>> chester data/special_cases.md 2 4 7
data/special_cases.md : running 3 / 9 tests
...

The title message indicates the input file name and how many tests are being run versus how many are in the input file.

The program should then call suite_run_tests_singleproc() to run the indicated tests and use suite_print_results_table() to print a results table. Errors encountered while running tests, such as the testing directory failing to be created, may cause the program to bail early without printing any test results.

At the end of testing, a summary message should be printed like the following:

Overall: 2 / 3 tests passed

which is the number of tests passed versus the number that were run.

9 Grading Criteria (100 points)

Weight Criteria
50 AUTOMATED TESTS TOTAL: 1 point per test
   
10 Problem 1: make test-prob1 runs tests from test_prob1.org for correctness
15 Problem 2: make test-prob2 runs tests from test_prob2.org for correctness
10 Problem 3: make test-prob3 runs tests from test_prob3.org for correctness
10 Problem 4: make test-prob4 runs tests from test_prob4.org for correctness
5 Problem 5: make test-prob5 runs tests from test_prob5.org for correctness
50 MANUAL INSPECTION TOTAL
   
10 PROBLEM 1 chester_funcs.c
  suite_create_testdir() : Makes use of system calls like stat() / mkdir() to create the test directory
  suite_test_set_outfile_name(): Uses sprintf()-style function and strdup() to fill the outfile_name field
  suite_test_create_infile(): Fills the infile_name field and uses system calls like open() / write() to create the input file
   
  CRITERIA for suite_test_read_output_actual()
  Makes use of system calls like stat() / open() / read() to read the input efficiently
  Determines file size ahead of time and uses a single memory allocation to create a buffer for the file contents
  Properly null-terminates the output_actual field; closes open file descriptors
   
  CRITERIA for ALL FUNCTIONS
  ERROR CHECKING: Functions perform error checking on system calls looking for -1 returns which indicate the system calls failed
  CODE STYLE: Functions adhere to CMSC 216 C Coding Style Guide which dictates reasonable indentation, commenting, consistency of curly usage, etc.
   
10 PROBLEM 2 chester_funcs.c
  CRITERIA for suite_test_start()
  Parent process uses previously defined functions to set the output file name and create the input file prior to forking
  Uses fork() to create a child process and has distinct code paths for parent and child processes
  Child process redirects standard output and error streams into outfile_name using system calls like open() / dup2()
  Child process may redirect standard input to come from infile_name
  Child process sets up an argv[] array using provided functions and uses an exec()-style system call to run the test program
  Error checking is done on I/O redirection and exec() calls with appropriate exit codes for failures here
  Explanatory COMMENTS are present to help a reader through the long function
   
  CRITERIA for suite_test_finish()
  Macros associated with wait() are used to determine if a child process exited normally or abnormally and to set the test exit_code field
  Output for the test is read using a previously defined function
  The test status is updated to indicate a pass / fail based on comparing expect/actual output and exitcodes; suite tests_passed is updated
   
  CRITERIA for ALL FUNCTIONS
  Error Checking: Functions perform error checking on system calls looking for -1 returns which indicate the system calls failed
  Code Style: Functions adhere to CMSC 216 C Coding Style Guide which dictates reasonable indentation, commenting, consistency of curly usage, etc.
   
10 PROBLEM 3 chester_funcs.c
  print_window(): Concise and clear code is used for printing a window of the string
  differing_index(): Concise and short code is used to identify the first index of difference between strings
   
  CRITERIA for suite_test_make_resultfile()
  sprintf() / strdup() are used to populate the resultfile_name test field
  Low-level and high-level output functions are not mingled; high-level C output like fopen() / fprintf() is preferred for its convenience
  Function has clear sections that print different parts of the output file like title, input if present, expected output, etc.
  Explanatory COMMENTS are present to help a reader through the long function
   
  CRITERIA for ALL FUNCTIONS
  Code Style: Functions adhere to CMSC 216 C Coding Style Guide which dictates reasonable indentation, commenting, consistency of curly usage, etc.
   
10 PROBLEM 4 chester_funcs.c
  CRITERIA for suite_run_tests_singleproc()
  Previously defined function used to create the testing directory if needed; error checking in case this fails
  Clear loop over only the tests specified in the tests_torun[] field, NOT all tests
  Previously defined functions used to start tests running and finish their processing when complete
  Use of wait() / waitpid() system calls to pause the process until a child process has finished.
  Error checking case of failures on wait() / waitpid() or finishing tests
   
  suite_print_results_table(): Clear loop to print test results for only tests that were run, use of printf() features to nicely format the table
   
  CRITERIA for ALL FUNCTIONS
  Error Checking: Functions perform error checking on system calls looking for -1 returns which indicate the system calls failed
  Code Style: Functions adhere to CMSC 216 C Coding Style Guide which dictates reasonable indentation, commenting, consistency of curly usage, etc.
   
10 PROBLEM 5 chester_main.c
  main() function makes use of provided functions in chester_util.c for parsing input files and de-allocating suite memory
  Clear processing of command line arguments is present with distinct cases for the two forms of running chester:
  (1) No command line arguments beyond the input file defaults to running all tests
  (2) Command line arguments are present beyond the input file and these are specific tests to run
  Use of previously defined functions to run all tests and print a results table
  Code Style: Functions adhere to CMSC 216 C Coding Style Guide which dictates reasonable indentation, commenting, consistency of curly usage, etc.

10 Optional Makeup Credit

A great advantage in testing is the ability to run tests concurrently. When multiple CPUs or cores are available, tests will run in parallel, and any I/O latency will be masked to some degree by overlapping runs. In short, test results are obtained faster with concurrent runs.

This Optional MAKEUP problem studies one approach to this in two parts.

  1. A new function called suite_run_tests_multiproc() is implemented which starts multiple processes running tests up to a limit.
  2. The main() function is modified to allow a command line parameter to dictate how many concurrent processes should be used to run tests.

10.1 Multiprocess Testing

The required suite_run_tests_singleproc() runs tests one at a time, waiting for each to finish. However, the infrastructure around running tests is designed to allow concurrent child processes to handle tests. The following function performs the same task using multiple concurrent processes.

int suite_run_tests_multiproc(suite_t *suite);
// MAKEUP CREDIT: Like suite_run_tests_singleproc() but uses up to
// suite.max_procs concurrent processes to do so. This will speed up
// tests if multiple processors are available or if tests are bounded
// by I/O operations. The general algorithm to achieve concurrent
// execution is described in the project description and should be
// consulted carefully when implementing this function.

The algorithm alluded to in the documentation comments is roughly as follows.

CAVEAT: The pseudocode given runs all tests but actual implementations should only run tests with indices in suite.tests_torun[].

# RUN ALL TESTS CONCURRENTLY USING suite.max_procs CONCURRENT PROCESSES
create test directory
tests_count = suite.tests_count
tests_index = 0
tests_complete = 0
procs_max = suite.max_procs
procs_running = 0
while tests_complete < tests_count :
  if procs_running < procs_max and tests_index < tests_count:
    # under the maximum processes and tests remain to start
    start tests[tests_index]
    tests_index += 1
    procs_running += 1
  else:
    # running max procs or no tests remain to start
    test_pid = wait()
    tests_complete += 1
    procs_running -= 1
    testnum = test associated to test_pid
    finish testnum, create its results file
    print a "." to indicate progress

CAVEAT REPEATED: The pseudocode given runs all tests but actual implementations should only run tests with indices in suite.tests_torun[].
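
Translated to C, the loop might look like the sketch below. The tests_torun_count field, the signatures of suite_test_start() / suite_test_finish(), and the helper testnum_for_pid() that maps a child PID back to its test index are assumptions for illustration only.

if(suite_create_testdir(suite) == -1){
  return -1;
}
int torun_count = suite->tests_torun_count;     // field name assumed
int next = 0;                                   // next entry of tests_torun[] to start
int complete = 0;                               // tests finished so far
int running = 0;                                // child processes currently alive
printf("Running with %d processes: ", suite->max_procs);
while(complete < torun_count){
  if(running < suite->max_procs && next < torun_count){
    suite_test_start(suite, suite->tests_torun[next]);  // launch another child
    next++;
    running++;
  }
  else{
    int status;
    pid_t pid = wait(&status);                  // block until some child finishes
    if(pid == -1){
      perror("wait");
      continue;
    }
    int testnum = testnum_for_pid(suite, pid);  // hypothetical helper
    suite_test_finish(suite, testnum, status);  // signature assumed
    suite_test_make_resultfile(suite, testnum);
    printf(".");
    complete++;
    running--;
  }
}
printf(" Done\n");
return 0;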

The output for the function is the same as for suite_run_tests_singleproc() except that the number of processes being used is shown. Below is an example:

Running with XX processes: .... Done

XX is substituted with the value of suite.max_procs and, as indicated in the pseudocode, a dot (.) is printed for each test that completes.

10.2 Chester Main Support

Modify the main() function in chester_main.c to support the following new command line invocations which allow the number of processes to use to be specified on the command line.

# PREVIOUS FORMS
>> chester tests.md             # (A) run all tests in the input file
>> chester tests.md 2 5 9 11    # (B) specify which tests to run on the command line

# NEW FORMS
>> chester -max_procs 4 tests.md           # (C) run all tests, use up to 4 concurrent child processes 
>> chester -max_procs 2 tests.md 2 5 9 11  # (D) run specified tests, use up to 2 concurrent child processes 

These new forms will require main() to check for the presence of -max_procs as an option and use it to set the suite.max_procs field after loading the suite from the indicated file.

Finally, when the max_procs option is present and the suite.max_procs field is set to a positive integer, main() should call suite_run_tests_multiproc(). Otherwise, main() should call suite_run_tests_singleproc().
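
A sketch of this option handling, assuming -max_procs always precedes the input file name as in forms (C) and (D) above:

// requires <string.h> and <stdlib.h>
int arg_idx = 1;
int max_procs = 0;                              // 0 selects single-process mode
if(argc > 2 && strcmp(argv[arg_idx], "-max_procs") == 0){
  max_procs = atoi(argv[arg_idx+1]);            // number of concurrent children
  arg_idx += 2;                                 // input file follows the option
}
char *infilename = argv[arg_idx];
// ... initialize the suite from infilename, fill tests_torun[] from any
// remaining arguments, then select the run function:
suite.max_procs = max_procs;
int ret = (max_procs > 0) ? suite_run_tests_multiproc(&suite)
                          : suite_run_tests_singleproc(&suite);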

10.3 Makeup Grading Criteria

Weight Criteria
10 AUTOMATED TESTS:
  make test-makeup runs tests from test_makeup.org for correctness
  Tests evaluate suite_run_tests_multiproc() and chester -max_procs N invocation
  1 point per test
   
5 MANUAL INSPECTION
  suite_run_tests_multiproc() uses the suggested structure / algorithm to launch and manage multiple processes
  Previous functions are used to create the testing directory, start tests, and finish tests
  A wait()-style system call is used to suspend Chester until a child process finishes
  There is clear code to determine which test has finished based on a child PID, perhaps using a helper function
  The max_procs field of suite is honored: only max_procs child processes are created at one time
  NOTE: solutions that do not honor max_procs will not receive much credit
  main() has new sections to honor the -max_procs N command line option and sets the suite max_procs field
  main() will call suite_run_tests_multiproc() when max_procs is above 0 and suite_run_tests_singleproc() otherwise
  main() handles -max_procs N while still allowing individual tests to be specified as additional command line arguments
  Code Style: Functions adhere to CMSC 216 C Coding Style Guide which dictates reasonable indentation, commenting, consistency of curly usage, etc.
   
15 TOTAL

11 Assignment Submission

11.1 Submit to Gradescope

Refer to the Project 1 instructions, adapting them as needed, for details of how to submit to Gradescope. In summary they are

  1. Type make zip in the project directory to create p4-complete.zip
  2. Log into Gradescope, select Project 4, and upload p4-complete.zip

11.2 Late Policies

You may wish to review the policy on late project submission which will cost 1 Engagement Point per day late. No projects will be accepted more than 48 hours after the deadline.

https://www.cs.umd.edu/~profk/216/syllabus.html#late-submission


Author: Chris Kauffman (profk@umd.edu)
Date: 2024-11-20 Wed 12:07