Activity: Define Test Details
Role: Test Analyst
Frequency: This activity is typically conducted multiple times per iteration.
Purpose: To gain a more detailed understanding of the Target Test Item based on the possible Test Ideas.
Using the Test-Ideas List as context, examine the available information about the Target Test Item. The Use Case and related artifacts (e.g. Use-Case Realization, Use-Case Storyboard and Use-Case Scenarios) are usually good sources to begin with, in addition to any Supplementary Specifications, Business Rules and design artifacts.
Where limited information is available to you, you may need to discuss the Target Test Item with the development staff directly.
Purpose: To determine a manageable subset of tests to define that are of most benefit in the current context.
Review the Test-Ideas List and pick the test ideas that you will design detailed tests for. In most cases you will pick a subset of the test ideas, based on time constraints, the relevance of the test ideas to the current test cycle, the completeness of the Target Test Item, and so forth. The number of test ideas you take forward into design will differ on a case-by-case basis, depending on the specific context of your situation.
We recommend that you avoid designing for all test ideas the first time you design from a given Test-Ideas List. Instead, take an incremental and iterative approach, focusing your efforts on the few ideas that you think are most likely to produce useful evaluation information for the given test cycle. This mitigates the risk of devoting too much time to a single Target Test Item to the neglect of other items, and minimizes the risk of expending effort on designs for test ideas that may later prove of little interest.
Purpose: To define the key characteristics of each test that is to be derived from the Test-Ideas List.
Using the information you have gathered, consider each of the following aspects of the test.
Considering the test from a "black-box" perspective, identify the key externally visible characteristics that define the test. Identify what inputs will be required to stimulate the test, and what resulting outputs are to be expected. Also enumerate the key execution condition(s); the "how" of each execution condition does not have to be explained or understood for this step.
Note that Inputs and Expected Outputs will, depending on the specific test, range from simple data-type values (e.g., "A", "1") to complex multidimensional data (e.g., a sound clip, an object). It is better to define the qualifiers behind a particular Input or Expected Output than to give only specific values. This provides the person subsequently implementing or executing the test with the required understanding of the reasoning behind the Test Data, allowing them to choose replacement or substitute values to vary the test in any given execution.
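The following sketch is one hypothetical way to capture such a black-box definition in executable form; it is not part of RUP, and the validate_age function, the qualifier labels, and the value ranges are invented for illustration. Each entry records the qualifier behind an input alongside a sample value and the expected output.

```python
import pytest

# Each test idea records the qualifier (equivalence class) behind the input,
# not just a literal value, so an implementer can substitute other values
# that satisfy the same condition.
TEST_IDEAS = [
    # (qualifier describing the input, sample input, expected output)
    ("age at lower boundary of valid range", 18, True),
    ("age just below lower boundary",        17, False),
    ("age well inside valid range",          35, True),
    ("non-numeric input",                    "abc", False),
]

def validate_age(value):
    """Hypothetical Target Test Item: accepts ages 18..120 inclusive."""
    return isinstance(value, int) and 18 <= value <= 120

@pytest.mark.parametrize("qualifier, sample_input, expected", TEST_IDEAS)
def test_validate_age(qualifier, sample_input, expected):
    # The qualifier travels with the test so the reasoning behind the
    # chosen Test Data stays visible at execution time.
    assert validate_age(sample_input) == expected, qualifier
```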
A point of observation is a point during the execution of a test at which you wish to observe some aspect of the state of the test environment. Given what you know of the execution condition(s) and the inputs and expected outputs, identify what specific points should be observed during test execution, and what data should be observed at each.
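A hypothetical sketch of how points of observation might appear in an automated test; the transfer function and the account structures are invented for illustration, and the intermediate assertions mark the points at which the state of the test environment is observed.

```python
def transfer(source, target, amount):
    """Hypothetical Target Test Item: move funds between two account balances."""
    source["balance"] -= amount
    target["balance"] += amount

def test_transfer_with_observation_points():
    source = {"balance": 100}
    target = {"balance": 50}

    transfer(source, target, 30)
    # Point of observation: intermediate state after the first transfer.
    assert (source["balance"], target["balance"]) == (70, 80)

    transfer(target, source, 10)
    # Point of observation: final state after the second transfer.
    assert (source["balance"], target["balance"]) == (80, 70)
```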
A point of control is a point during the execution of a test at which you wish to make a decision from multiple choices regarding the test's flow of control. Investigate the Test Scenarios that are available, and for each consider the points at which control will vary through different executions of the test. Collate all of the different points of control and reduce the list to those needed for the current test cycle.
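The sketch below is an invented illustration of a point of control: the payment_method parameter decides which branch of the (hypothetical) checkout flow each execution of the test drives.

```python
import pytest

def checkout(cart_total, payment_method):
    """Hypothetical Target Test Item with two control-flow branches."""
    if payment_method == "credit":
        return {"status": "authorized", "fee": round(cart_total * 0.02, 2)}
    return {"status": "paid", "fee": 0.0}

# The payment method is a point of control: it determines which branch of
# the checkout flow each execution of the test exercises.
@pytest.mark.parametrize("payment_method, expected_status", [
    ("credit", "authorized"),
    ("invoice", "paid"),
])
def test_checkout_control_point(payment_method, expected_status):
    result = checkout(100.00, payment_method)
    assert result["status"] == expected_status
```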
A test oracle combines both the expected output values to be tested for, and the means by which those values can be divined: it's both the response given and the medium through which it is given. For example, to verify the accurate representation of fonts used in a word processing package, print preview might be used as the medium by which the font presentation can be verified. The test oracle identifies aspects of both form and function that are necessary to verify the actual results of the test against the expected results.
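As one illustrative sketch (not a RUP prescription), a trusted reference implementation can act as a programmatic test oracle, supplying both the expected output and the means of deriving it for any input the test chooses; the custom_sort routine here is an invented stand-in for a real Target Test Item.

```python
import random

def custom_sort(items):
    """Hypothetical Target Test Item: a simple insertion sort under test."""
    result = []
    for item in items:
        position = len(result)
        while position > 0 and result[position - 1] > item:
            position -= 1
        result.insert(position, item)
    return result

def oracle(items):
    """Test oracle: a trusted reference that defines the expected output
    for any input the test supplies."""
    return sorted(items)

def test_custom_sort_against_oracle():
    random.seed(42)  # keep the generated Test Data reproducible
    for _ in range(100):
        data = [random.randint(-1000, 1000) for _ in range(20)]
        assert custom_sort(data) == oracle(data)
```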
Purpose: To define the required Test Data values, including appropriate sources for that data.
As mentioned previously, Test Data comes in many shapes and forms.
Where complex data-interdependencies are likely, try to make use of Domain Experts to specify appropriate Test Data conditions. Some test productivity tools provide features or utilities that enable simplified generation of Test Data sets.
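A minimal sketch, not tied to any particular test productivity tool, of generating a simple tabular Test Data set programmatically; the file name, field names, and value ranges are assumptions made purely for illustration.

```python
import csv
import random

def generate_customer_test_data(path, rows=50, seed=7):
    """Generate a reproducible, illustrative set of customer Test Data records."""
    random.seed(seed)
    regions = ["North", "South", "East", "West"]
    with open(path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["customer_id", "region", "credit_limit"])
        for customer_id in range(1, rows + 1):
            writer.writerow([customer_id,
                             random.choice(regions),
                             random.choice([500, 1000, 5000, 10000])])

generate_customer_test_data("customer_test_data.csv")
```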
Purpose: To source and record sufficient valid Test Data to support the test.
The accurate generation or collation of appropriate Test Data is one of the most arduous and time-consuming tasks in defining a test. This is especially true where the system under test is of a class that is data-intensive.
We recommend recording Test Data in Microsoft Excel or another product with a tabular data management interface, such as Microsoft Access.
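If the recorded Test Data is exported to a neutral tabular format such as CSV, a data-driven test can consume it directly. The sketch below assumes the hypothetical customer_test_data.csv produced in the previous sketch and an invented business rule that every credit limit must be positive.

```python
import csv

def load_test_data(path):
    """Load Test Data rows recorded in a tabular file (e.g., a CSV export
    from a spreadsheet) so tests can iterate over them."""
    with open(path, newline="") as handle:
        return list(csv.DictReader(handle))

def test_credit_limits_are_positive():
    # Assumed rule for illustration: every recorded credit limit is positive.
    for row in load_test_data("customer_test_data.csv"):
        assert int(row["credit_limit"]) > 0, row["customer_id"]
```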
Purpose: To enable impact analysis and assessment reporting to be performed on the traced items.
Using the Traceability requirements outlined in the Test Plan, update the traceability relationships as required.
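Traceability is usually maintained in a dedicated requirements or test management tool; purely as an illustrative sketch, a simple mapping can relate test cases back to their originating test ideas and motivating requirements to support impact analysis. All identifiers below are invented.

```python
# Illustrative traceability table: test case -> originating test idea and
# motivating requirement. Identifiers are invented for the example.
TRACEABILITY = {
    "test_validate_age":               {"test_idea": "TI-07", "requirement": "UC-03 Register User"},
    "test_checkout_control_point":     {"test_idea": "TI-12", "requirement": "UC-05 Check Out"},
    "test_credit_limits_are_positive": {"test_idea": "TI-21", "requirement": "BR-02 Credit Policy"},
}

def tests_affected_by(requirement_id):
    """Support impact analysis: list tests tracing to a changed requirement."""
    return [test for test, links in TRACEABILITY.items()
            if links["requirement"].startswith(requirement_id)]

print(tests_affected_by("UC-05"))  # -> ['test_checkout_control_point']
```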
Purpose: To verify that the activity has been completed appropriately and that the resulting artifacts are acceptable.
Now that you have completed the work, it is beneficial to verify that the work was of sufficient value, and that you did not simply consume vast quantities of paper. You should evaluate whether your work is of appropriate quality, and that it is complete enough to be useful to those team members who will make subsequent use of it as input to their work. Where possible, use the checklists provided in RUP to verify that quality and completeness are "good enough".
Have the people performing the downstream activities that rely on your work as input take part in reviewing your interim work. Do this while you still have time available to take action to address their concerns. You should also evaluate your work against the key input artifacts to make sure you have represented them accurately and sufficiently. It may be useful to have the author of the input artifact review your work on this basis.
Try to remember that RUP is an iterative process and that in many cases artifacts evolve over time. As such, it is not usually necessary (and is often counterproductive) to fully form an artifact that will only be partially used, or will not be used at all, in immediately subsequent work. This is because there is a high probability that the situation surrounding the artifact will change (and the assumptions made when the artifact was created will be proven incorrect) before the artifact is used, resulting in wasted effort and costly rework. Also avoid the trap of spending too many cycles on presentation to the detriment of content value. In project environments where presentation has importance and economic value as a project deliverable, you might want to consider using an administrative resource to perform presentation tasks.