
Requirements Tracking & Reporting.
The image below, Requirements Tracking & Reporting, depicts the
flow of the requirements tracking and reporting activities involved in producing
the Software Test Report.
The contract spells out that requirements and the status of defects
and test cases must be tracked. Two common types of test reporting
by the Test Manager are the weekly test status report and the test report. The
reports are tailored for each project. The weekly status report covers the types
and status of defects and the status of test cases; it should be distributed
to the other team leads. The test report is prepared at the end of each test
phase, i.e., function, system, installation, and acceptance.
Why track requirements?
The reason is that Formal Test is based on the baselined requirements. All requirements
except those identified in the test disclaimer must be tested during function/FQT
[CSCI] and system test. Test Engineering needs to be able to prove at
each step of test development that the requirements are covered by the test work products.
The corresponding test documentation must include a requirements traceability
matrix showing which test work products (i.e., the test definitions of
MIL-STD-2167A and the test cases) contain the requirements. Each test work
product belongs to one of the test phases (function, system, installation, and
acceptance).
Reporting on traceability.
The image shown above, Requirements Tracking & Reporting,
shows three levels of requirements coverage:
- From the baselined requirements documents to the first level of test work
product, where the requirements are logically/functionally grouped.
- From the first level of test work products to the test cases.
- From the baselined requirements to the test cases.
Two types of requirements tracking can be considered: static
and dynamic tracking. A COTS requirements management tool (such as Rational
RequisitePro) is recommended to support this process. Static requirements tracking consists
of storing and maintaining a master list of requirements. As the requirements
are included in test documentation, a tester must make a manual entry
to reflect the coverage. As work products add, delete, or modify requirement
numbers, entries in the master list must be manually updated. The margin
for error in this process is significant, and it requires a person dedicated
to the task. Dynamic requirements tracking consists of storing and maintaining
the master list of requirements in a database. The database can then
be manipulated with scripts and forms that search the work products and
merge the lists for daily updates and monitoring, as sketched below.
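As an illustration only, the following minimal sketch shows how dynamic
tracking might scan test work products for requirement identifiers and compare
them against the master list. The REQ-nnn identifier convention, the file
names, and the directory layout are assumptions made for this example, not
part of any prescribed process or tool.

    # Sketch of dynamic requirements tracking: scan work products for
    # requirement IDs and compare against the master list. The "REQ-nnn"
    # convention and file layout are assumptions for illustration.
    import re
    from pathlib import Path

    REQ_ID = re.compile(r"REQ-\d+")

    def load_master_list(path):
        """Read the master requirements list, one requirement ID per line."""
        return set(Path(path).read_text().split())

    def scan_work_products(root):
        """Map each requirement ID to the work product files that cite it."""
        coverage = {}
        for wp in Path(root).rglob("*.txt"):
            for req in REQ_ID.findall(wp.read_text()):
                coverage.setdefault(req, set()).add(wp.name)
        return coverage

    def coverage_report(master_path, work_product_root):
        master = load_master_list(master_path)
        coverage = scan_work_products(work_product_root)
        untested = master - coverage.keys()   # requirements in no work product
        unknown = coverage.keys() - master    # cited IDs not in the master list
        return coverage, untested, unknown

    coverage, untested, unknown = coverage_report("master_reqs.txt", "test_work_products")
    print(f"covered: {len(coverage)}, untested: {len(untested)}, unknown: {len(unknown)}")

A real implementation would also write the merged results back to the
database so that the daily updates and monitoring described above can be
automated.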
Reporting.
Three types of reporting documentation should be prepared: defect
status reports, test case procedures, and a summary test report. Generating
and distributing a status report is part of Formal Test. It is the responsibility
of the Test Manager to generate the reports and share summary information
on defects and test cases with the other team leads. Rational ClearQuest
is an example of an available defect-tracking tool. Defect reports and test
cases are examples of the types of summary information included in
the report.
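As a minimal sketch only, the following shows how the weekly defect summary
might be tallied from records exported out of a defect-tracking tool. The
field names ("severity", "status") are assumptions; the actual export format
depends on the tool in use.

    # Sketch of a weekly defect status tally. The "severity" and "status"
    # field names are assumed; a real export format is tool-specific.
    from collections import Counter

    def defect_summary(defects):
        """Tally defects by (severity, status) for the weekly status report."""
        return Counter((d["severity"], d["status"]) for d in defects)

    defects = [
        {"id": 101, "severity": "high", "status": "open"},
        {"id": 102, "severity": "low",  "status": "closed"},
        {"id": 103, "severity": "high", "status": "open"},
    ]
    for (severity, status), count in sorted(defect_summary(defects).items()):
        print(f"{severity:<8}{status:<10}{count:>5}")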
The contents of the Test Summary Report should include the following:
- Tests executed & completion status. There is usually a table
in the front of the document providing a summary of the number
of test cases executed, how many were successful, and how many failed.
The content of this table varies from project to project, but the objective
is to summarize the data (a sketch of such a summary appears after this list).
- An appendix containing a copy of each test case, with the log of the
results and all signatures.
- List of problems found. This is usually in a tabular format, the
objective again being to summarize. The Test Manager’s goal is to have minimal
problems found during Formal Test execution for the client.
- Unresolved & non-reproducible problems. Some master test plans
call out a method that allows the client to sign off on problems and
have the product delivered as is. These are typically cosmetic problems
or functional problems with limited reach. Additionally, some master
test plans call out a procedure for handling non-reproducible errors.
- Evaluation. Provides an overall analysis of the software capabilities
demonstrated by the test results. It identifies any remaining deficiencies,
limitations, or constraints in the software that were detected by the
tests and their impact on the software and system performance.
- Recommendations. Lists any recommended improvements to the design,
operation, or testing of the software.
- Traceability. A traceability matrix from source requirements to test
cases must be provided; this is the third (and final) level of traceability.
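To make the first item concrete, here is a minimal sketch of how the
tests-executed table might be derived from test case records. The "result"
field name and its pass/fail values are assumptions for illustration.

    # Sketch of the "tests executed & completion status" summary table.
    # The "result" field (pass/fail/None) is an assumed record layout.
    def completion_summary(test_cases):
        """Summarize executed test cases for the front-of-document table."""
        executed = [t for t in test_cases if t["result"] is not None]
        passed = sum(1 for t in executed if t["result"] == "pass")
        return {"executed": len(executed),
                "passed": passed,
                "failed": len(executed) - passed,
                "not run": len(test_cases) - len(executed)}

    test_cases = [
        {"id": "TC-1", "result": "pass"},
        {"id": "TC-2", "result": "fail"},
        {"id": "TC-3", "result": None},   # not yet executed
    ]
    for label, count in completion_summary(test_cases).items():
        print(f"{label:>8}: {count}")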
Metrics.
Metrics should be collected for process and product improvement and
to provide data for estimating the effort to complete the project. Metrics
also provide an estimate of the impact of engineering change proposals/requests.
A basic set of metrics follows (a sketch computing the defect arrival and
closure rates appears after the list):
- Number of work products per week/per tester versus planned
  - Test definitions, test condition matrices, & test cases
- Number of defects opened per week per category and/or severity (defect arrival rate)
- Number of defects resolved per week per category and/or severity (defect closure rate)
- Number of test cases executed per week versus planned
  - Exposed, successful, & failed
- Average amount of time to resolve defects per category and/or severity
- Number of peer reviews completed
- Number of defects per software module
- Lines of code count (source code, GUI, 4GL, etc.)
- Number of test groupings
- Number of test cases
- Number of hours for work product development & execution
  - Test groupings & cases
  - Implement charge numbers
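As a minimal sketch only, the following computes the defect arrival and
closure rates from defect records with assumed "opened" and "closed"
ISO-format date fields (closed is None while a defect remains open).

    # Sketch of defect arrival/closure rates grouped by ISO week. The
    # "opened"/"closed" date fields are an assumed record layout.
    from datetime import date

    def week_of(iso_date):
        """Return an (ISO year, ISO week) key for grouping by week."""
        year, week, _ = date.fromisoformat(iso_date).isocalendar()
        return (year, week)

    def weekly_rates(defects):
        arrivals, closures = {}, {}
        for d in defects:
            wk = week_of(d["opened"])
            arrivals[wk] = arrivals.get(wk, 0) + 1
            if d["closed"] is not None:
                wk = week_of(d["closed"])
                closures[wk] = closures.get(wk, 0) + 1
        return arrivals, closures

    defects = [
        {"id": 1, "opened": "2024-03-04", "closed": "2024-03-12"},
        {"id": 2, "opened": "2024-03-05", "closed": None},
    ]
    arrivals, closures = weekly_rates(defects)
    print("arrival rate:", arrivals)
    print("closure rate:", closures)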
Quality Assurance Responsibilities for Formal Test.
The Test Manager needs to ensure that quality is built into the test
process, test work products, and resources. The Test Manager's responsibilities
are the following:
- Verification of test cases.
  - Procedural portion of test cases: verified during unit test? Integration test? Dry runs?
  - Dry run by someone other than the author.
- Change control.
  - The Test Manager needs to be a member of the Configuration Control
Board (CCB). This ensures that the test team provides inputs on the impact of
changes and is alerted to approved changes at the earliest possible
time.
  - Test engineering needs to be in the loop for all change control, both
prior to and after baselining. For example, test engineering may be working
on unbaselined requirements during the requirements analysis phase.
- Consistency issues. All test work products need to be consistent
in format and terminology.
- Process needs to be easily accessible. Anything relating to the test
effort, how to perform a task, or where to find additional information
needs to be easily accessible.
- Tracking. Requirements, defects, and test cases.
Configuration Management.
There are two levels of Configuration Management (CM) control: formal
control by the CM team and test-controlled CM, which is under the Test
Manager’s responsibility. The image below, Configuration Management,
depicts the flow of events and activities for the two levels of CM.
The responsibilities of the two levels of CM are the following:
- Formal CM.
  - Software configuration management load build. The CM team will be responsible
for controlling the software being tested and letting the Test Manager
know when a new build is available. This ensures the software test environment
can be duplicated. Formal Test is always conducted on configuration-controlled
software.
  - Test documentation. The test documentation falls under the same
CM guidelines as other project/program deliverables.
  - Change control. The CM team will be required to track the forms
associated with project/program change control.
  - Tools. All project/program-related tools should be formally tracked
by the CM team.
- Test-controlled CM. The Test Manager should address CM as though the test
effort were its own development effort. A directory structure for the organization,
utilities to get files in and out of CM control, and monitoring/enforcement
of check-in comments and change control are needed (a sketch of a comment
check appears below).
  - Test work products. All test work products need to be put under
CM control, and document check-ins need to be monitored.
  - Tools. Tools that assist in the test process should be maintained
under CM control and in a database. As tools are written, supporting
documentation should also be written and put under CM control. Rational
ClearCase is an example of a COTS tool for the CM process.
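As an illustration of the monitoring/enforcement point above, here is a
minimal sketch of a check-in comment check. The CCF-nnn change-control form
convention and the minimum comment length are assumptions invented for this
example, not features of any particular CM tool.

    # Sketch of check-in comment enforcement for test-controlled CM. The
    # "CCF-nnn" form convention and length threshold are assumed rules.
    import re

    CCF_REF = re.compile(r"CCF-\d+")

    def validate_checkin_comment(comment):
        """Reject check-in comments that lack a change-control reference."""
        if len(comment.strip()) < 10:
            return False, "comment too short to be meaningful"
        if not CCF_REF.search(comment):
            return False, "no change-control form (CCF-nnn) referenced"
        return True, "ok"

    print(validate_checkin_comment("Updated TC-7 per CCF-42"))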
Miscellaneous test management.
Not all test management issues have been covered by the previous paragraphs.
There are traditional management issues that are pertinent but not addressed
directly in a project's process plan. The following are other management
concerns:
- Team players. The Test Manager frequently deals with personnel issues:
(1) testers who are overly aggressive in citing problems or overly
opinionated in discussing potential problems with software engineering,
and (2) software engineers who feel they have the right to vent on
testers because of the number of problems being found, the severity
of the problems being found, etc. Resolving these issues often
comes down to how effectively the test and software engineering managers
are able to communicate. A "little bit" of an adversarial
relationship between the two teams is good, but the Test Manager and Software
Engineering Manager MUST GET ALONG!
- Unstructured or ad hoc testing. This is an activity during Formal
Test when testers sit down at the system without test cases and try
to break the software. Any problem found requires that a test case be
written and the problem logged. This is extremely beneficial because
it allows testers who have focused on other areas to be creative in
an area new to them. It utilizes the test technique of "error guessing":
where you think there might be an error, follow through on hunches.
Two cautions: (1) ad hoc testing should only be done during dry runs, not during Formal
Test when the client may be witnessing, and (2) ad hoc testing also
introduces the issue of repeatability when a defect is found.
- Test suspension & resumption. The plans for this should
be identified in the master test plan. Common causes: equipment failure,
critical software errors, and a significant engineering change proposal/request.
- Risk identification & mitigation. The Test Manager needs to have
risk identification & mitigation tracking in place that feeds into
the project/program risk tracking. Guidelines: elevate the risk to the
project/program level when the internal test schedule/budget/personnel
cannot alleviate the risk and it affects other project teams.
- Testing special environments. Considerations include being able to use
the real platform versus a test platform, and the shifts during which the
platform is available for use.

In Unit 10 you learned that there are four phases of Formal Test. These
phases are:
- Function test followed by a regression test
- System test followed by a regression test
- Installation test followed by a regression test
- Acceptance test followed by a regression test
The project should have approved the Master Test Plan before going into
the Formal Test phase.
Test engineering should become involved during requirements analysis.