Roles and Activities > Tester Role Set > Tester > Implement Test Suite
Start by reviewing any existing Test Suite outlines, and determine which Test Suites are good candidates for implementation at the current time. Use the Iteration Test Plan, Test Ideas Lists and Test Cases as a basis for making your decision.
For each Test Suite selected, identify what Target Test Items and associated Test Scripts are candidates for the Test Suite.
Begin by considering the Test Environment Configuration and specific system start state. Consider what specific setup requirements there will be, such as the starting data set for dependent databases. Where one Target Environment Configuration will be used for various Test Suites, identify any configuration settings that may need to be managed by each Test Suite, such as Display Resolution or Regional settings.
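Managing suite-specific configuration settings can be sketched as a save-apply-restore wrapper. The following is a minimal illustration only: the settings store and setting names are hypothetical placeholders, and in a real Test Environment Configuration these would wrap actual operating-system or application configuration APIs.

```python
from contextlib import contextmanager

# Hypothetical shared Target Environment Configuration; the keys and
# values here are illustrative placeholders.
environment = {"display_resolution": "1920x1080", "region": "en-US"}

@contextmanager
def suite_configuration(**overrides):
    """Apply settings a Test Suite needs, then restore the shared
    configuration so subsequent Test Suites start from a known state."""
    saved = {key: environment[key] for key in overrides}
    environment.update(overrides)
    try:
        yield environment
    finally:
        environment.update(saved)

with suite_configuration(display_resolution="800x600", region="de-DE") as env:
    assert env["region"] == "de-DE"   # suite executes under its own settings

assert environment["region"] == "en-US"  # shared configuration restored
```

Restoring settings in a `finally` block means the shared configuration is put back even if a Test Script in the suite fails mid-run.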
Now determine any specific relationships between the Test Scripts. Look for dependencies where the execution of one Test Script included in the Test Suite will result in a system state change required as a precondition of another Test Script.
Once you've identified all of these dependencies, determine the correct sequence of execution for the dependent Test Scripts.
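Once the dependencies are recorded, deriving a correct execution sequence is a topological-sort problem. A small sketch using Python's standard library (the Test Script names are hypothetical):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each entry maps a Test Script to the scripts that must execute first,
# because they establish a system state it requires as a precondition.
dependencies = {
    "verify_order": {"create_order"},
    "create_order": {"create_account"},
    "delete_account": {"verify_order"},
}

execution_order = list(TopologicalSorter(dependencies).static_order())
# The sorter also raises CycleError if the dependencies are circular,
# which flags a Test Suite that can never be sequenced correctly.
```

Here `execution_order` places `create_account` first and `delete_account` last, matching the precondition chain.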
One of the main challenges in maintaining a Test Suite (especially an automated one) is ensuring that ongoing changes are easy to make. It's a good idea, when possible and deemed useful, to maintain a central point of modification for elements that are used in multiple places. That's especially true if those same elements are likely to change.
While the Test Scripts themselves form natural units of modularity, assembling the Test Scripts into a Test Suite often identifies duplicate procedural elements across multiple Test Scripts that could be more effectively maintained if they were consolidated. Take the opportunity to identify any Test Scripts that might be refactored to assist ongoing maintenance.
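As a sketch of that consolidation, suppose (hypothetically) that a login sequence appeared verbatim in several Test Scripts; extracting it into one shared step gives a single point of modification if the login procedure changes:

```python
# Hypothetical shared step; the session structure and script names are
# illustrative, not taken from any specific project.
def log_in(session, user="test_user"):
    """Login procedure formerly duplicated across several Test Scripts."""
    session["user"] = user
    session["authenticated"] = True
    return session

def script_place_order():
    session = log_in({})        # reuse the shared step instead of copying it
    session["order"] = "ORD-001"
    return session

def script_cancel_order():
    session = log_in({})
    session["cancelled"] = "ORD-001"
    return session
```

If the login procedure later changes, only `log_in` needs editing, not every Test Script that authenticates.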
Most test efforts require the use of one or more "utilities" that generate, gather, diagnose, convert and compare information used during test implementation and execution. These utilities typically simplify both complex and laborious tasks that would be prone to error if performed manually. This step relates to applying existing utility functions within the Test Suite, and identifying new utilities that are required.
It's a good idea to simplify the interfaces to these utilities, encapsulating as much complexity as possible within the private implementation of the utility. It's also a good idea to develop the utility in such a way that it can be reused where required for both manual and automated test efforts.
We recommend you don't hide the information that characterizes an individual test within these utilities: instead, limit the utility to the complex mechanics of gathering information, comparing actual values to expected results, and so on. Where possible, pass the specific characteristics of each individual test in as input from, and return the individual actual results as output to, a controlling Test Script or Test Suite.
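The recommended split can be sketched as a comparison utility whose interface stays simple: the mechanics live inside the utility, while the expected values that characterize each test are supplied by the controlling Test Script. The function name and data shapes are assumptions for illustration.

```python
def compare_results(actual, expected, tolerance=0.0):
    """Utility: encapsulates the mechanics of comparison. The values
    that characterize an individual test are passed in, not hidden here."""
    mismatches = []
    for key, want in expected.items():
        got = actual.get(key)
        if isinstance(want, float) and isinstance(got, float):
            ok = abs(got - want) <= tolerance   # numeric comparison mechanics
        else:
            ok = got == want
        if not ok:
            mismatches.append((key, want, got))
    return mismatches

# The controlling Test Script supplies the test-specific expected data
# and interprets the returned actual results.
assert compare_results({"total": 10.0}, {"total": 10.0}) == []
assert compare_results({"total": 9.5}, {"total": 10.0}, tolerance=0.25) == [("total", 10.0, 9.5)]
```

Because nothing test-specific is baked in, the same utility can serve both manual test efforts (called ad hoc) and automated ones (called from a Test Suite).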
Determine the appropriate points within the Test Suite to provide recovery if the Test Suite fails during execution. This step gains importance where the Test Suite will contain a large number of Test Scripts, or will run for an extended period of time, often unattended. While most often identified as a requirement for automated Test Suites, it is also important to consider recovery points for manually executed Test Suites.
In addition to recovery or restart points, you may also want (in the case of automated Test Suites) to consider automated Test Suite recovery. There are two approaches to auto-recovery: 1) basic recovery, where the existing Test Suite can self-recover from a minor error that occurs in one of its Test Scripts, typically resuming execution at the next Test Script in the Test Suite; or 2) sophisticated recovery, which cleans up after the failed Test Script, resetting appropriate system state, including operating system reboot and data restoration if necessary. As in the first approach, the Test Suite then determines the script that failed and selects the next Test Script to execute.
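The basic auto-recovery approach can be sketched as a suite runner that catches a Test Script failure, invokes a state-reset step, and continues with the next script. This is an illustrative skeleton only; in practice `reset_state` would perform the real cleanup (data restoration, reboot, etc. in the sophisticated variant).

```python
def run_suite(scripts, reset_state):
    """Basic auto-recovery: a failing Test Script does not abort the
    suite; state is reset and execution resumes at the next script."""
    results = {}
    for name, script in scripts:
        try:
            script()
            results[name] = "passed"
        except Exception as exc:
            results[name] = f"failed: {exc}"
            reset_state()   # sophisticated recovery would do far more here
    return results

def failing_script():
    raise RuntimeError("boom")

resets = []
results = run_suite(
    [("script_a", lambda: None),
     ("script_b", failing_script),
     ("script_c", lambda: None)],
    reset_state=lambda: resets.append("reset"),
)
# script_b fails, state is reset once, and script_c still executes.
```

Logging which script failed, as `results` does, is what lets the suite (or a human) decide where to resume.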
Depending on the level of sophistication required, implementing and stabilizing recovery processing will take effort. You'll need to allow time to simulate a number of likely (and a few unlikely) failures to prove that the recovery processing works.
In the case of automated recovery, both approaches outlined in the previous step have strengths and weaknesses. You should consider carefully the cost of sophisticated automated recovery, in terms of both initial development and ongoing maintenance effort. Sometimes manual recovery is good enough.
You should take time to stabilize the Test Suite through one or more trial test executions where possible. The difficulty in achieving stability increases with the complexity of the Test Suite, and where there is excessively tight coupling between unrelated Test Scripts and low cohesion between related ones.
Errors may occur when Test Scripts are executed together within a given Test Suite that were not encountered when the individual Test Scripts were executed independently. These errors are often the most difficult to track down and diagnose, especially when they are encountered halfway through a lengthy automated test run. Where practical, it's a good idea to rerun the Test Suite regularly as you add additional Test Scripts. This will help you isolate a small number of candidate Test Scripts to be diagnosed to identify the problem.
Using the Traceability requirements outlined in the Test Plan, update the traceability relationships as necessary. Test Suites might be traced to defined Test Cases or to Test Ideas. Optionally, they may be traced to Use Cases, software specification elements, Implementation Model elements and to one or more measures of Test Coverage.
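One simple way to picture these relationships is as a record linking a Test Suite to the items it traces to. All names and identifiers below are hypothetical placeholders; real traceability is usually maintained in a test-management tool rather than hand-built structures.

```python
# Hypothetical traceability record for one Test Suite.
traceability = {
    "OrderProcessingSuite": {
        "test_cases": ["TC-014", "TC-027"],   # defined Test Cases
        "test_ideas": ["TI-102"],             # Test Ideas List entries
        "use_cases": ["UC-03"],               # optional Use Case links
    },
}

def trace_for(suite):
    """Return everything a given Test Suite is traced to."""
    return traceability.get(suite, {})
```

Keeping such links current is what makes it possible to report Test Coverage against the elements the Test Plan designates.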
Now that you have completed the work, it is beneficial to verify that the work was of sufficient value, and that you did not simply consume vast quantities of paper. You should evaluate whether your work is of appropriate quality, and that it is complete enough to be useful to those team members who will make subsequent use of it as input to their work. Where possible, use the checklists provided in RUP to verify that quality and completeness are "good enough".
Have the people performing the downstream activities that rely on your work as input take part in reviewing your interim work. Do this while you still have time available to take action to address their concerns. You should also evaluate your work against the key input artifacts to make sure you have represented them accurately and sufficiently. It may be useful to have the author of the input artifact review your work on this basis.
Try to remember that RUP is an iterative process and that in many cases artifacts evolve over time. As such, it is not usually necessary (and is often counterproductive) to fully form an artifact that will only be partially used, or will not be used at all, in immediately subsequent work. This is because there is a high probability that the situation surrounding the artifact will change (and the assumptions made when the artifact was created be proven incorrect) before the artifact is used, resulting in wasted effort and costly rework. Also avoid the trap of spending too many cycles on presentation to the detriment of content value. In project environments where presentation has importance and economic value as a project deliverable, you might want to consider using an administrative resource to perform presentation tasks.
Rational Unified Process