Implement Test Suite

  • To arrange or assemble collections of tests to be executed together in a valuable way
  • To facilitate both breadth and depth of the test effort by exercising as many required test combinations as possible
Frequency: This activity is typically conducted multiple times per iteration.
Role: Tester
More Information:
  • Work Guidelines: Maintaining Automated Test Suites

  • Workflow Details:

    Examine candidate Test Suites

    Purpose: To understand the Test Suites and select which candidates will be implemented.

    Start by reviewing any existing Test Suite outlines, and determine which Test Suites are good candidates for implementation at the current time. Use the Iteration Test Plan, Test Ideas Lists and Test Cases as a basis for making your decision.

    Examine related Test Scripts and Target Test Items

    Purpose: To understand the relationships between the Target Test Items and the available Test Scripts.

    For each Test Suite selected, identify what Target Test Items and associated Test Scripts are candidates for the Test Suite.

    Identify script dependencies

    Purpose: To identify any dependencies the Test Scripts have, both in terms of system state and in terms of other Test Scripts.

    Begin by considering the Test Environment Configuration and specific system start state. Consider what specific setup requirements there will be, such as the starting data set for dependent databases. Where one Target Environment Configuration will be used for various Test Suites, identify any configuration settings that may need to be managed by each Test Suite, such as Display Resolution or Regional settings.
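One way each Test Suite can manage the configuration settings it changes is to save them during suite setup and restore them during teardown. The sketch below uses Python's unittest class fixtures; the settings dictionary is a stand-in for real operating-system configuration such as Display Resolution or Regional settings.

```python
import unittest

class RegionalSettingsSuite(unittest.TestCase):
    """Suite-managed configuration: the suite snapshots the settings it
    changes and restores them afterwards, so several Test Suites sharing
    one Target Environment Configuration do not disturb each other.
    (Sketch; the settings dict stands in for real OS configuration.)"""

    settings = {"resolution": "1024x768", "region": "en-US"}

    @classmethod
    def setUpClass(cls):
        cls.saved = dict(cls.settings)      # snapshot the current state
        cls.settings["region"] = "de-DE"    # suite-specific setting

    @classmethod
    def tearDownClass(cls):
        cls.settings.clear()
        cls.settings.update(cls.saved)      # restore for the next suite

    def test_region_applied(self):
        self.assertEqual(self.settings["region"], "de-DE")
```

The same save-and-restore pattern applies to manually executed suites: record the starting configuration in the suite's setup instructions and restore it in the wrap-up steps.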

    Now determine any specific relationships between the Test Scripts. Look for dependencies where the execution of one Test Script included in the Test Suite will result in a system state change required as a precondition of another Test Script.

    Once you've identified all of these dependencies, determine the correct sequence of execution for the dependent Test Scripts.
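Determining that sequence is a topological-sort problem over the state dependencies. A minimal sketch, assuming hypothetical script names, using Python's standard-library graphlib (Python 3.9+):

```python
from graphlib import TopologicalSorter

# Illustrative dependency map: each Test Script lists the scripts whose
# resulting system state it requires as a precondition.
dependencies = {
    "create_account": [],
    "deposit_funds":  ["create_account"],
    "transfer_funds": ["deposit_funds"],
    "close_account":  ["transfer_funds"],
}

# static_order() yields a sequence in which every script runs only after
# the scripts it depends on have completed.
execution_order = list(TopologicalSorter(dependencies).static_order())
print(execution_order)
# → ['create_account', 'deposit_funds', 'transfer_funds', 'close_account']
```

A cycle in the dependency map raises graphlib.CycleError, which is itself useful: it flags a set of Test Scripts whose preconditions can never be satisfied in any order.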

    Identify opportunities for reuse

    Purpose: To improve Test Suite maintainability, both by reusing existing assets and consolidating new assets.

    One of the main challenges in maintaining a Test Suite, especially an automated one, is ensuring that ongoing changes are easy to make. Where possible and useful, maintain a central point of modification for elements that are used in multiple places, especially elements that are likely to change.

    While the Test Scripts themselves form natural units of modularity, assembling the Test Scripts into a Test Suite often identifies duplicate procedural elements across multiple Test Scripts that could be maintained more effectively if they were consolidated. Take the opportunity to identify any Test Scripts that might be refactored to assist ongoing maintenance.
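For example, a login sequence repeated verbatim in several Test Scripts can be consolidated into one shared procedure, giving a single point of modification if the login flow changes. The script and helper names below are hypothetical:

```python
def login(session, user="tester"):
    """Consolidated login steps: one point of modification for every
    Test Script that needs an authenticated session. (Illustrative;
    the session dict stands in for a real application session.)"""
    session["user"] = user
    session["authenticated"] = True
    return session

def transfer_funds_script():
    session = login({})   # reused, not duplicated inline
    # ... transfer-specific steps would go here ...
    return session["authenticated"]

def close_account_script():
    session = login({})   # reused, not duplicated inline
    # ... close-account-specific steps would go here ...
    return session["authenticated"]
```

If the login flow later gains a second factor, only login() changes; every consuming Test Script picks up the new behavior automatically.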

    Apply necessary infrastructure utilities

    Purpose: To factor out complex test implementation support behavior as simplified utility functions.

    Most test efforts require the use of one or more "utilities" that generate, gather, diagnose, convert and compare information used during test implementation and execution. These utilities typically simplify both complex and laborious tasks that would be prone to error if performed manually. This step relates to applying existing utility functions within the Test Suite, and identifying new utilities that are required.

    It's a good idea to simplify the interfaces to these utilities, encapsulating as much complexity as possible within the private implementation of the utility. It's also a good idea to develop the utility in such a way that it can be reused where required for both manual and automated test efforts.

    We recommend you don't hide the information that characterizes an individual test within these utilities. Instead, limit the utility to the complex mechanics of gathering information, comparing actual values to expected results, and so on. Where possible, pass the specific characteristics of each individual test in as input from, and return the individual actual results as output to, a controlling Test Script or Test Suite.
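As a sketch of that division of responsibility: the comparison utility below encapsulates the diffing mechanics but knows nothing about any individual test; the controlling Test Script supplies the expected data and receives the actual result. All names are illustrative.

```python
def compare_records(actual, expected):
    """Comparison utility: encapsulates the mechanics of diffing two
    record sets, but carries no test-specific knowledge of its own."""
    missing = [r for r in expected if r not in actual]
    surplus = [r for r in actual if r not in expected]
    return {
        "passed": not missing and not surplus,
        "missing": missing,   # expected but not observed
        "surplus": surplus,   # observed but not expected
    }

# The controlling Test Script passes the test's characteristics in as
# input and gets the individual actual results back as output.
result = compare_records(actual=["a", "b", "x"], expected=["a", "b", "c"])
print(result["passed"], result["missing"], result["surplus"])
# → False ['c'] ['x']
```

Because the utility's interface is just two lists in and a result out, the same function serves automated Test Suites and manual testers checking results by hand.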

    Determine recovery requirements

    Purpose: To enable Test Suites to be recovered without requiring the complete re-execution of the Test Suite.

    Determine the appropriate points within the Test Suite to provide recovery if the Test Suite fails during execution. This step gains importance where the Test Suite will contain a large number of Test Scripts, or will run for an extended period of time—often unattended. While most often identified as a requirement for automated Test Suites, it is also important to consider recovery points for manually executed Test Suites.

    In addition to recovery or restart points, in the case of automated Test Suites you may also want to consider automated Test Suite recovery. Two approaches to auto-recovery are: 1) basic recovery, where the Test Suite self-recovers from a minor error that occurs in one of its Test Scripts, typically resuming execution at the next Test Script in the Test Suite; or 2) sophisticated recovery, which cleans up after the failed Test Script, resetting appropriate system state, including operating system reboot and data restoration if necessary. As in the first approach, the Test Suite then determines which script failed and selects the next Test Script to execute.
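The basic-recovery approach can be sketched as a suite runner that checkpoints each completed Test Script, so a restarted run resumes after the last completed script rather than re-executing the whole Test Suite. The checkpoint file name and script structure are assumptions for illustration:

```python
import json
import os

CHECKPOINT = "suite_checkpoint.json"  # hypothetical checkpoint file

def run_suite(scripts):
    """Run (name, script) pairs with basic auto-recovery. Completed
    scripts are checkpointed; a failed script is skipped and execution
    resumes at the next Test Script. (Sketch; sophisticated recovery
    would also reset system state before continuing.)"""
    completed = []
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            completed = json.load(f)      # resume a restarted run
    for name, script in scripts:
        if name in completed:
            continue                      # finished in an earlier run
        try:
            script()
        except Exception:
            continue                      # basic recovery: next script
        completed.append(name)
        with open(CHECKPOINT, "w") as f:  # record progress for restart
            json.dump(completed, f)
    return completed
```

The same checkpoint idea works for manual Test Suites: a tester records the last completed script on the run log and restarts from the following one.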

    Implement recovery requirements

    Purpose: To implement and verify that the recovery process works as required.

    Depending on the level of sophistication required, implementing and stabilizing recovery processing will take effort. You'll need to allow time to simulate a number of likely (and a few unlikely) failures to prove that the recovery processing works.

    In the case of automated recovery, both approaches outlined in the previous step have strengths and weaknesses. You should consider carefully the cost of sophisticated automated recovery, both in terms of initial development but also ongoing maintenance effort. Sometimes manual recovery is good enough.

    Stabilize the Test Suite

    Purpose: To resolve any dependency problems both in terms of System State and Test Script execution sequences.

    You should take time to stabilize the Test Suite through one or more trial test executions where possible. The difficulty of achieving stability increases with the complexity of the Test Suite, and where there is excessively tight coupling between unrelated Test Scripts and low cohesion between related ones.

    Errors can occur when Test Scripts are executed together within a given Test Suite that were not encountered when the individual Test Scripts were executed independently. These errors are often the most difficult to track down and diagnose, especially when they are encountered halfway through a lengthy automated test run. Where practical, it's a good idea to rerun the Test Suite regularly as you add additional Test Scripts. This will help you isolate a small number of candidate Test Scripts to diagnose in order to identify the problem.
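A minimal illustration of such an interaction error, with hypothetical script names: each script passes against a fresh system state, but residue left by one breaks the other when they execute in sequence within one suite.

```python
def add_record_script(state):
    # Leaves residue in the shared system state.
    state["records"] = state.get("records", 0) + 1
    return True

def empty_report_script(state):
    # Passes on a clean system; silently assumes there are no records.
    return state.get("records", 0) == 0

# Executed independently, each against fresh state, both scripts pass:
print(add_record_script({}), empty_report_script({}))   # → True True

# Executed together within one Test Suite, the shared state exposes an
# interaction error the independent runs never encountered:
shared_state = {}
results = [add_record_script(shared_state),
           empty_report_script(shared_state)]
print(results)                                          # → [True, False]
```

Rerunning the suite each time a script is added means the failure first appears immediately after the script that introduced it, narrowing diagnosis to one candidate.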

    Maintain traceability relationships

    Purpose: To enable impact analysis and assessment reporting to be performed on the traced items.

    Using the Traceability requirements outlined in the Test Plan, update the traceability relationships as necessary. Test Suites might be traced to defined Test Cases or to Test Ideas. Optionally, they may be traced to Use Cases, software specification elements, Implementation Model elements and to one or more measures of Test Coverage.
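However the relationships are stored, impact analysis then reduces to querying them. A sketch with invented suite, Test Case, and Use Case identifiers:

```python
# Illustrative traceability records: each Test Suite lists the items
# it is traced to. Real projects would keep these in a requirements
# or test-management tool rather than in code.
traceability = {
    "SmokeSuite":      {"test_cases": ["TC-01", "TC-02"],
                        "use_cases":  ["UC-Login"]},
    "RegressionSuite": {"test_cases": ["TC-02", "TC-07"],
                        "use_cases":  ["UC-Login", "UC-Transfer"]},
}

def impacted_suites(changed_use_case):
    """Impact analysis: which Test Suites trace to a changed Use Case
    and therefore need review or re-execution?"""
    return sorted(name for name, links in traceability.items()
                  if changed_use_case in links["use_cases"])

print(impacted_suites("UC-Transfer"))   # → ['RegressionSuite']
```

The same records support assessment reporting in the other direction, for example listing which Test Cases are not yet covered by any suite.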

    Evaluate and verify your results

    Purpose: To verify that the activity has been completed appropriately and that the resulting artifacts are acceptable.

    Now that you have completed the work, it is beneficial to verify that the work was of sufficient value, and that you did not simply consume vast quantities of paper. You should evaluate whether your work is of appropriate quality, and that it is complete enough to be useful to those team members who will make subsequent use of it as input to their work. Where possible, use the checklists provided in RUP to verify that quality and completeness are "good enough".

    Have the people performing the downstream activities that rely on your work as input take part in reviewing your interim work. Do this while you still have time available to take action to address their concerns. You should also evaluate your work against the key input artifacts to make sure you have represented them accurately and sufficiently. It may be useful to have the author of the input artifact review your work on this basis.

    Try to remember that RUP is an iterative process and that in many cases artifacts evolve over time. As such, it is not usually necessary, and is often counterproductive, to fully form an artifact that will only be partially used or will not be used at all in immediately subsequent work. This is because there is a high probability that the situation surrounding the artifact will change, and the assumptions made when the artifact was created will be proven incorrect, before the artifact is used, resulting in wasted effort and costly rework. Also avoid the trap of spending too many cycles on presentation to the detriment of content value. In project environments where presentation has importance and economic value as a project deliverable, you might want to consider using an administrative resource to perform presentation tasks.

    Copyright © 1987 - 2001 Rational Software Corporation
