Monday, July 11, 2011

The Art of Software Testing by Glenford Myers : Chapter 6

In this chapter, the author focuses on the part of testing that begins when human testing ends. He first discusses where errors in software originate: since software development is largely a process of communicating information about the program and translating it from one form to another, the vast majority of software errors are attributable to breakdowns, noise, and mistakes in that communication. The author then describes the flow of the software development process and suggests building more precision into it, with a separate verification step at the end of each stage. He also suggests matching distinct testing processes to distinct development processes, i.e., focusing each testing process on a particular translation step. The author then discusses testing at different stages of the development cycle:
  1. Function Testing: It is the process of finding discrepancies between the program and its external specification (a short code sketch follows this list).
  2. System Testing:
    • System testing is the process of attempting to demonstrate that the program as a whole does not meet its objectives.
    • System testing is impossible if the project has not produced a written set of measurable objectives for its product.
    • The author then mentions the following categories of test cases:
    • Facility Testing: It is the determination of whether each facility mentioned in the objectives was actually implemented.
    • Volume Testing: The program is subjected to heavy volumes of data.
    • Stress Testing: It involves testing a program under heavy load in a short span of time.
    • Usability Testing: The author provides the following aspects of usability testing:
      • Does the system contain an excessive number of options?
      • Does the system return acknowledgments for all inputs?
      • Is the program easy to use?
      • Where accuracy is vital, is sufficient redundancy present?
      • Are the outputs of the program meaningful?
    • Security Testing: It is a process of attempting to devise test cases to subvert the program's security checks.
    • Performance Testing: Test cases should be designed to show that the program does not meet its performance objectives, such as response times and throughput rates (a second code sketch follows this list).
    • Storage Testing: Test cases should be designed to show that the program does not meet storage objectives in terms of main and secondary storage and spill/temporary files.
    • Configuration Testing: The program should be tested with each kind of hardware device and with the minimum and maximum configuration. Each possible configuration of the program should also be tested.
    • Compatibility/Conversion Testing: Test cases should be designed to make sure the program meets its objectives for compatibility with, and conversion procedures from, the existing system.
    • Installability Testing: The system's installation procedures should themselves be tested.
    • Reliability Testing: If the program's objectives contain specific statements about reliability, tests should be devised to show that these objectives are not met.
    • Recovery Testing: Tests should be designed to show how the system recovers from programming errors, hardware failures, and data errors.
    • Serviceability Testing: Serviceability objectives may be defined in terms of service aids, the mean time to debug a problem, maintenance procedures, and the quality of internal-logic documentation; the system should be tested against these objectives.
    • Documentation Testing: User documentation should be subject to an inspection for accuracy and clarity. Examples in the documentation should be part of test cases used to test the program.
    • Procedure Testing: Any human procedures involved in large programs should be tested.
    • The author clearly states that system testing should not be performed by the programmers who wrote the program, nor by the organization that developed it.
  3. Acceptance Testing: This is carried out by running tests designed to show that the program does not do what it is contracted to do.
  4. Installation Testing: This may include test cases to check that all files have been created and contain the necessary contents, and that all parts of the system exist and are working.
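
To make the function-testing idea above concrete, here is a minimal sketch in Python (pytest style). The `my_program` module, the `parse_date` function, and its specified behaviour are hypothetical stand-ins, not anything from the book; the point is only that each test case is derived from the external specification rather than from the code.

```python
import pytest

from my_program import parse_date  # hypothetical module and function under test

# Hypothetical external specification: parse_date("YYYY-MM-DD") returns a
# (year, month, day) tuple and rejects out-of-range months and days.

def test_valid_date_from_spec():
    # Derived directly from an example input in the specification.
    assert parse_date("2011-07-11") == (2011, 7, 11)

def test_month_upper_boundary():
    # Boundary-value analysis of the specification: month 12 is valid...
    assert parse_date("2011-12-31") == (2011, 12, 31)

def test_month_just_beyond_boundary():
    # ...and month 13 must be rejected, per the specification.
    with pytest.raises(ValueError):
        parse_date("2011-13-01")
```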
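In the same spirit, the performance-testing category can be sketched as a test that tries to show the program misses a stated objective. The 200 ms response-time objective and the `handle_request` function are assumptions chosen purely for illustration.

```python
import time

from my_program import handle_request  # hypothetical function under test

RESPONSE_TIME_OBJECTIVE = 0.2  # assumed objective: 200 ms per request

def test_response_time_objective():
    # Time a representative request; the test "succeeds" in Myers' sense
    # (i.e., finds an error) if the measured time exceeds the objective.
    start = time.perf_counter()
    handle_request("representative input")
    elapsed = time.perf_counter() - start
    assert elapsed <= RESPONSE_TIME_OBJECTIVE, (
        f"response took {elapsed:.3f}s, objective is {RESPONSE_TIME_OBJECTIVE}s"
    )
```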
The author then discusses test planning and lists the following components of a good test plan:
  1. Objectives: Each testing phase should have an objective.
  2. Completion criteria: Criteria for deciding when each test phase is complete need to be specified.
  3. Schedules: The schedules for when the test cases will be designed, written and executed should be created.
  4. Responsibilities: The responsibilities of people regarding testing and fixing of errors should be clearly defined.
  5. Test-case libraries and standards: Systematic methods of identifying, writing and storing test cases are necessary.
  6. Tools: The required test tools must be identified, including who is responsible for their development or acquisition, how they will be used, and when they are needed.
  7. Computer time: Each testing phase's required computer time should be calculated.
  8. Hardware configuration: The hardware configurations and devices needed to perform the testing should be described.
  9. Integration: A system integration plan should be in place that defines the order of integration and the functional capability of each version of the system.
  10. Tracking procedures: There should be tracking of errors and the estimation of progress with respect to schedule, resources and completion criteria.
  11. Debugging procedures: Mechanisms must be in place to track the progress of corrections and adding them to the system.
  12. Regression testing: Regression testing is important because changes and error corrections tend to be more error-prone than the original code. Its purpose is to determine whether a change has regressed other aspects of the program (a small driver sketch follows below).
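
One common way to act on the regression-testing item above is to keep all earlier test cases in a library and rerun the whole library after every change or correction. Below is a minimal driver sketch, assuming (purely for illustration) that the test-case library lives in a `tests/` directory and is runnable with pytest.

```python
# Minimal regression driver: rerun the full stored test-case library after a
# change, so a fix in one area cannot silently break another.
import subprocess
import sys

def run_regression_suite() -> int:
    # Assumes the test-case library is stored under tests/ and uses pytest;
    # both are assumptions for this sketch.
    result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_regression_suite())
```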
The author then discusses test-completion criteria; the two most commonly used criteria are:
  1. Scheduled time for testing expires.
  2. When all the test cases execute without detecting any errors.
The author explains why both of these criteria are useless and instead offers three better ones:
  1. The first one is based on the use of specific test-case-design methodologies:
    • Completion of module testing can be declared when the test cases are derived from the multicondition-coverage criterion and a boundary-value analysis of the module's interface specification, and all resultant test cases are eventually unsuccessful (in Myers' terms, an unsuccessful test case is one that detects no errors).
    • Completion of function testing can be declared when the test cases are derived from cause-effect graphing, boundary-value analysis, and error guessing, and all resultant test cases are unsuccessful.
  2. The second criterion is to state the completion requirements in positive terms, e.g., testing of a module is not complete until x errors have been found or y months of calendar time have elapsed. The author discusses the problem of estimating the number of errors in a program and offers several ways to do so, such as:
    • Experience with previous programs
    • Apply predictive models
    • Use industry-wide averages
  3. The third criterion is to plot the number of errors found per unit of time during the test phase; by examining the shape of this curve, one can decide whether to stop or continue testing (a small plotting sketch follows this list).
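
The third criterion is easy to visualise. Below is a small sketch using matplotlib; the weekly error counts are made-up illustration data, not figures from the book.

```python
import matplotlib.pyplot as plt

# Hypothetical data: number of errors found in each week of the test phase.
weeks = list(range(1, 9))
errors_found = [3, 11, 18, 14, 9, 5, 2, 1]

# Plot the error-detection rate; a curve that is still rising (or has only
# just peaked) suggests testing should continue, while a long decline
# suggests the phase may be nearing completion.
plt.plot(weeks, errors_found, marker="o")
plt.xlabel("Week of test phase")
plt.ylabel("Errors found per week")
plt.title("Error-detection rate over the test phase")
plt.show()
```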
The author again emphasizes hiring an independent test agency to test the program.
