Guidelines: Test Plan

The test plan contains information about the purpose and goals of testing within the project. Additionally, the test plan identifies the strategies to be used to implement and execute testing, and the resources needed.
The purpose of the test plan is to communicate the intent of the testing activities. It is critical that this document be created as early as possible; generating this artifact in one of the first iterations of the Elaboration phase is not too early. It may be desirable to develop the test plan iteratively, adding sections as the information becomes available.
Care should be taken to clearly communicate the scope of testing, the requirements for test, the test strategies, and the resource needs. This information identifies the purpose and boundaries of the test effort, what will be tested, how it will be tested, and what resources are needed for testing. Stating this information clearly will expedite the review, feedback, and approval of the test effort.
At the outset of the project, a test plan identifying the overall intended testing for the project should be created, called the "Master Test Plan." As each iteration is planned, a more precise "Iteration Test Plan" is created (or several test plans, organized by type of test), containing only the data (requirements for test, test strategies, resources, etc.) that pertain to the iteration. Alternately, this information may be included in the Iteration Plan, if it does not make the iteration plan too difficult to manage or use.
Below are some guidelines to better identify and communicate the requirements for test, test risks and priorities, and test strategies.
Requirements for test identify what will be tested. They are the specific target of a test. There are a few general rules to apply when deriving requirements for test:
The requirements for test may be derived from many sources, including use cases, use-case models, supplemental specifications, design requirements, business cases, interviews with end-users, and the software architecture document. All of these should be reviewed to gather information that is used to identify the requirements for test.
Functional requirements for test, as their name implies, are derived from descriptions of the target-of-test's functional behaviors. At a minimum, each use case should derive at least one requirement for test. A more detailed list of requirements for test would include at least one requirement for test for each use case flow of events.
Performance requirements for test are derived from the target-of-test's specified performance behaviors. Typically, performance is stated as a measure of response time and/or resource usage, as measured under various conditions, including
Requirements for performance are described in the Supplementary Specifications. Review these materials, paying special attention to statements that include the following:
You should derive at least one requirement for test for each statement in the specification that reflects information such as that listed above.
Reliability requirements for test are derived from several sources, typically described in the Supplementary Specifications, User-Interface Guidelines, Design Guidelines, and Programming Guidelines.
Review these artifacts and pay special attention to statements that include the following:
At least one requirement for test should be derived from each statement in the artifacts that reflects information listed above.
Successful testing requires balancing factors such as resource constraints and risks. To accomplish this, the test effort should be prioritized so that the most important, significant, or riskiest use cases or components are tested first. To prioritize the test effort, a risk assessment and an operational profile are performed and used as the basis for establishing the test priority.
The following sections describe how to determine test priority.
Identifying the requirements for test is only part of identifying what will be tested. You should also prioritize what will be tested and in what order. This step is performed for several reasons, including:
There are three steps to assessing risk and establishing the test priorities:
Guidelines for each of these three steps are provided below:
Establishing the priority for test begins with the assessment of risk. Use cases or components that pose the greatest risk in the event of failure, or that have a high probability of failure, should be among the first tested.
Begin by identifying and describing the risk magnitude indicators that will be used, such as:
After identifying the risk magnitude indicators, list each use case or component in the target-of-test. For each use case or component in your list, identify a risk magnitude indicator, and justify (in a brief statement) the value you selected.
There are three perspectives that can be used for assessing risk:
Select one perspective, identify a risk magnitude indicator, and justify your selection. It is not necessary to identify an indicator for each risk perspective. However, if a low indicator was identified, it is suggested that you evaluate the item from a different risk perspective to ensure the item really is a low risk.
Below are greater details on assessing risk by these three perspectives.
To assess risk by Effect, identify a condition, event, or action and try to determine its impact. Ask the question:
"What would happen if ___________?"
For example:
Below is a sample justification matrix for these items:
Description | Risk Magnitude Indicator | Justification |
Insufficient disk space during install | H | Installing the software provides the user with the first impression of the product. Any undesirable outcomes, such as those listed below, would degrade the user's system or the installed software, and communicate a negative impression to the user: |
Internet connection lost during inquiry | L | No damage resulting from the lost connection is done to the data or database. It is recognized that a lost connection may communicate a negative impression to the user. |
Internet connection lost during purchase | H | Any lost connections or transactions that result in the outcomes listed below are unacceptable, as they increase the overhead costs and decrease profits: |
Unexpected value entered | H | Any transactions that result in the outcomes listed below are unacceptable: |
Assessing risk by Cause is the opposite of assessing by Effect. Begin by stating an undesirable event or condition, and identify the set of events that could have permitted that condition to exist. Ask a question such as:
"How could ___________ happen?"
For example:
Below is a sample justification matrix for these items:
Description | Risk Magnitude Indicator | Justification |
Missing application files and registry entries | H | Renders the application (and potentially the system) unusable. Installation is the first view of the application seen by the users. If installation fails, for any reason, the user views the software unfavorably. Possible causes of this condition include:
Of these causes, only the last one cannot be detected and handled by the installation process. |
Partial order | H | Partial orders cannot be fulfilled, resulting in lost revenue and lost customers. Possible causes include: |
Corrupt data / database | H | Corrupt data cannot be tolerated for any reason. Possible causes include: |
Replicated orders | H | Replicated orders increase the company overhead and diminish profits via the costs associated with shipping, handling, and restocking. Possible causes include: |
Inaccurate data for an order | H | Any orders that cannot be completed or that incur additional overhead costs are not acceptable. Possible causes include: |
Wrong number of records reflected in statement | H | Business decisions and accounts receivable are dependent upon the accuracy of these reports. Possible causes include: |
Assessing risk by Likelihood means determining the probability that a use case (or a component implementing a use case) will fail. The probability is usually based on external factors such as:
Note that when using this risk perspective, the risk magnitude indicators relate to the probability of a failure, not to the effect or impact the failure has on the organization, as was the case when assessing risk by Effect and Cause.
Correlations between these factors and the probability of a failure exist, as identified below:
External Factor | Probability |
Failure discovery rate and / or density | The probability of a failure increases as the failure discovery rate or density increases. Defects tend to congregate; therefore, as the rate of discovery or the number of defects (density) increases in a use case or component, the probability of finding another defect also increases. Discovery rates and densities from previous releases should also be considered when assessing risk using this factor, as previous high discovery rates or densities indicate a high probability of additional failures. |
Rate of change | The probability of a failure increases as the rate of change to the use case or component increases. Every time a change is made to the code, there is a risk of "injecting" another defect into it; therefore, as the number of changes increases, so too does the probability that a defect has been introduced. |
Complexity | The probability of a failure increases as the measure of complexity of the use case or component increases. |
Origination / Originator | Knowledge and experience of where the code originated and by whom can increase or decrease the probability of a failure. The use of third-party components typically decreases the probability of failure; however, this is only true if the third-party component has been certified (meets your requirements, either through formal test or experience). The probability of failure typically decreases with the increased knowledge and skills of the implementer. However, factors such as the use of new tools or technologies, or acting in multiple roles, may increase the probability of a failure even for the best team members. |
For example:
Below is a sample justification matrix for these items:
Description | Risk Magnitude Indicator | Justification |
Installing new software | H | We are writing our own installation utility. A failed installation renders the application unusable. Installation is the first view of the application seen by the users. If installation fails, for any reason, the user views the software unfavorably. |
Installing new software | L | We are using a commercially successful installation utility. While a failed installation renders the application unusable, the installation utility selected is from a vendor that has achieved the number one market share with its product and has been in business for over four years. Our evaluation of their product indicates that it meets our needs, and clients are satisfied with the product, the vendor, and their level of service and support. |
High failure discovery rates / defect densities in use cases 1, 10, 12. | H | Due to the previous high failure discovery rates and defect densities, use cases 1, 10, and 12 are considered high risk. |
Change Requests in use cases 14 and 19. | H | A high number of changes to these use cases increases the probability of injecting defects into the code. |
The next step in assessing risk and establishing a test priority is to determine the target-of-test's operational profile.
Begin by identifying and describing the operational profile magnitude indicators that will be used, such as:
The operational profile indicator you select should be based upon the frequency a use case or component is executed, including:
Typically, the greater the number of times a use case or component is used, the higher the operational profile indicator.
After identifying the operational profile magnitude indicators to be used, list each use case or component in the target-of-test. Determine an operational profile indicator for each item in your list and state your justification for the indicator value. Information from the workload analysis document (see Artifact: Workload Analysis Document) may be used for this assessment.
Examples:
Description | Operational Profile Indicator | Justification |
Installing new software | H | Performed once (typically), but by many users. Without installation however, application is unusable. |
Ordering items from the catalog | H | This is the most common use case executed by users. |
Customers inquiring about orders | L | Few customers inquire about their orders after they are placed. |
Item selection dialog | H | This dialog is used by customers for placing orders and by inventory clerks to replenish stock. |
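As an illustration, an operational profile indicator can be derived mechanically from measured or estimated execution frequencies. The thresholds and use-case names below are assumptions chosen for the example, not part of this guideline:

```python
# Illustrative only: map an execution frequency to an operational profile
# indicator (H / M / L). The thresholds are assumptions for this example;
# choose values that fit your own workload analysis.
def profile_indicator(executions_per_day):
    if executions_per_day >= 100:
        return "H"
    if executions_per_day >= 10:
        return "M"
    return "L"

# Hypothetical frequencies for two of the use cases discussed above.
usage = {
    "Ordering items from the catalog": 500,   # most common use case
    "Customers inquiring about orders": 5,    # rarely executed
}
for use_case, count in usage.items():
    print(use_case, "->", profile_indicator(count))
```

The point is only that the indicator should rise with frequency of use, as the guideline states; the cut-off values are a project decision.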
The last step in assessing risk and establishing a test priority is to set the test priority itself.
Begin by identifying and describing the test priority magnitude indicators that will be used, such as:
After identifying the test priority magnitude indicators to be used, list each use case or component in the target-of-test. Determine a test priority indicator for each item in your list and state your justification. Below are some guidelines for determining a test priority indicator.
Consider the following when determining the test priority indicators for each item:
Strategies for establishing a test priority include:
Examples:
Item | Risk | Operational Profile | Actor | Contract | Priority |
Installing new software | H | H | L | H | H |
Ordering items from catalog | H | H | H | H | H |
Customer Inquiries | L | L | L | L | L |
Item Selection Dialog | L | H | L | L | H |
Item | Risk | Operational Profile | Actor | Contract | Priority |
Installing new software | H | H | L | H | H |
Ordering items from catalog | H | H | H | H | H |
Customer Inquiries | L | L | L | L | L |
Item Selection Dialog | L | H | L | L | L |
(Note: in the matrix below, H = 5, M = 3, and L = 1. A total weighted value greater than 30 is a High priority test item, values between 20 and 30 inclusive are Medium priority, and values less than 20 are Low.)
Item | Risk (x 3) | Operational Profile (x 2) | Actor (x 1) | Contract (x 3) | Weighted Value | Priority |
Installing new software | 5 (15) | 5 (10) | 1 (1) | 5 (15) | 41 | H (2) |
Ordering items from catalog | 5 (15) | 5 (10) | 5 (5) | 5 (15) | 45 | H (1) |
Customer Inquiries | 1 (3) | 1 (2) | 1 (1) | 1 (3) | 9 | L (4) |
Item Selection Dialog | 1 (3) | 5 (10) | 1 (1) | 1 (3) | 17 | L (3) |
The Test Strategy describes the general approach and objectives of a specific test effort.
A good test strategy should contain the following:
State clearly the type of test being implemented and the objective of the test. Explicitly stating this information reduces confusion and minimizes misunderstandings (especially since some tests may look very similar). The objective should state clearly why the test is being executed.
Examples:
"Functional Test. The functional test focuses on executing the following use cases implemented in the target-of-test, from the user interface."
"Performance Test. The performance test for the system will focus on measuring response time for use cases 2, 4, and 8 - 10. For these tests, a workload of one actor, executing these use cases without any other workload on the test system will be used."
"Configuration Test. Configuration testing will be implemented to identify and evaluate the behavior of the target-of-test on three different configurations, comparing the performance characteristics to our benchmark configuration."
Clearly state the stage in which the test will be executed. Identified below are the stages in which common tests are executed:
Type of Tests | Unit | Integration | System | Acceptance |
Functional Tests (Configuration, Function, Installation, Security, Volume) | X | X | X | X |
Performance Tests (performance profiles of individual components) | X | X | (X) optional, or when system performance tests disclose defects | |
Performance Tests (Load, Stress, Contention) | | | X | X |
Reliability (Integrity, Structure) | X | X | (X) optional, or when other tests disclose defects | |
The technique should describe how testing will be implemented and executed. Include what will be tested, the major actions to be taken during test execution, and the method(s) used to evaluate the results.
Example:
Functional Test:
- For each use case flow of events, a representative set of transactions will be identified, each representing the actions taken by the actor when the use case is executed.
- A minimum of two test cases will be developed for each transaction; one test case to reflect the positive condition and one to reflect the negative (unacceptable) condition.
- In the first iteration, use cases 1 - 4 and 12 will be tested in the following manner:
- Use Case 1:
- Use Case 1 begins with the actor already logged into the application and at the main window, and terminates when the user has specified SAVE.
- Each test case will be implemented and executed using Rational Robot.
- Verification and assessment of execution for each test case will be done using the following methods:
- Test script execution (did each test script execute successfully and as desired?)
- Window Existence, or Object Data verification methods (implemented in the test scripts) will be used to verify that key windows display and specified data is captured / displayed by the target-of-test during test execution.
- The target-of-test's database (using Microsoft Access) will be examined before the test and again after the test to verify that the changes executed during the test are accurately reflected in the data.
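The before-and-after database comparison in the last step can be sketched generically. This is an illustration of the technique only, not the Rational Robot or Access mechanism; the snapshot format (a dict mapping primary key to a row tuple) is an assumption for the example:

```python
# Generic before/after database verification: confirm that the only
# differences between two snapshots are the changes the test was
# expected to make. A snapshot maps primary key -> row tuple.
def verify_changes(before, after, expected_changes):
    actual_changes = {}
    for key in set(before) | set(after):
        if before.get(key) != after.get(key):
            # Record (old_row, new_row); None marks an insert or delete.
            actual_changes[key] = (before.get(key), after.get(key))
    return actual_changes == expected_changes

before = {1: ("widget", 10)}
after = {1: ("widget", 9), 2: ("gadget", 5)}         # one update, one insert
expected = {1: (("widget", 10), ("widget", 9)),
            2: (None, ("gadget", 5))}
print(verify_changes(before, after, expected))       # True
```

Any difference the test did not anticipate (a missing update, an extra row) makes the check fail, which is the point of snapshotting the database on both sides of the run.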
Performance Test:
- For each use case, a representative set of transactions, as identified in the workload analysis document, will be implemented and executed using Rational Suite PerformanceStudio (vu scripts) and Rational Robot (GUI scripts).
- At least three workloads will be reflected in the test scripts and test execution schedules including the following:
- Stressed workload: 750 users (15% managers, 50% sales, 35% marketing)
- Peak workload: 350 users (10% managers, 60% sales, 30% marketing)
- Nominal workload: 150 users (2% managers, 75% sales, 23% marketing)
- Test scripts used to execute each transaction will include the appropriate timers to capture response times, such as total transaction time (as defined in the workload analysis document), and key transaction activity or process times.
- The test scripts will execute the workloads for one hour (unless noted differently by the workload analysis document).
- Verification and assessment of execution for each test execution (of a workload) will include:
- Test execution will be monitored using state histograms (to verify that the test and workloads are executing as expected and desired)
- Test script execution (did each test script execute successfully and as desired?)
- Capture and evaluation of the identified response times using the following reports:
- Performance Percentile
- Response Time
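The workload definitions in the performance test strategy above can be sanity-checked mechanically. The sketch below (using the totals and role mixes listed earlier) verifies that each role mix sums to 100% and derives approximate per-role user counts:

```python
# Sanity-check the workload definitions: each role mix must total 100 %,
# and per-role user counts follow from the workload's total user count.
workloads = {
    "stressed": (750, {"managers": 15, "sales": 50, "marketing": 35}),
    "peak":     (350, {"managers": 10, "sales": 60, "marketing": 30}),
    "nominal":  (150, {"managers": 2,  "sales": 75, "marketing": 23}),
}

for name, (total_users, mix) in workloads.items():
    assert sum(mix.values()) == 100, f"{name}: role mix must total 100%"
    counts = {role: round(total_users * pct / 100) for role, pct in mix.items()}
    print(name, total_users, counts)
```

Catching an inconsistent mix here is cheaper than discovering mid-run that the virtual-user schedule does not match the workload analysis document.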
Completion criteria are stated for two purposes:
A clear statement of completion criteria should include the following items:
Example 1
Example 2
Example 3
This section should identify any influences or dependencies that may impact the test effort described in the test strategy. Influences might include:
Examples:
Rational Unified Process