Purpose:
To identify and outline one or more strategies that will facilitate the required assessment
process.
Now that you have a better understanding of the assessment and traceability requirements, and of the
constraints placed on them by the desired quality level and available process and tool support, you need to
consider the potential assessment or evaluation strategies you could employ. For a more detailed treatment
of possible strategies, we suggest you read Cem Kaner's paper "Measurement of the Extent of
Testing," October 2000.
Sub-topics: Test Coverage, Test Results, Defects
There are many different approaches to test coverage, and no one coverage measure alone provides all the
coverage information necessary to form an assessment of the extent or completeness of the test effort. Note
that different coverage strategies take different amounts of effort to implement, and that within
any particular measurement category there is usually a depth of analysis beyond which it becomes
uneconomic to record more detailed information.
Some categories of test coverage measurement include: Requirements, Source Code, Product Claims and
Standards. We recommend you consider incorporating more than one coverage category in your test assessment
strategy. In most cases, test coverage refers in the first instance to the planning and
implementation of specific tests. However, coverage metrics and their analysis are also useful to
consider in conjunction with test results or defect analysis.
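As an illustration of one such measure, the sketch below (in Python) computes a simple
requirements-coverage figure from a traceability mapping of requirements to tests. The requirement
and test IDs, and the data shapes, are hypothetical rather than taken from any particular tool:

    # A minimal sketch: requirements coverage as the fraction of requirements
    # that have at least one implemented test. All data here is hypothetical.
    requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

    # Traceability mapping: requirement ID -> IDs of tests that exercise it
    tests_for = {
        "REQ-1": ["TC-101", "TC-102"],
        "REQ-2": ["TC-201"],
        "REQ-4": [],  # traced, but no test implemented yet
    }

    covered = [r for r in requirements if tests_for.get(r)]
    coverage = len(covered) / len(requirements)
    print(f"Requirements coverage: {coverage:.0%} ({len(covered)} of {len(requirements)})")

The same shape of calculation applies to the other coverage categories; for example, substitute
product claims or standards clauses for requirements.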
A common approach to test results analysis is simply to report the number of tests that passed or
failed as a percentage of the total number of tests run. Our opinion, and that of other
practitioners in the test community, is that this is a simplistic and incomplete way to analyze
test results.
Instead, we recommend you analyze your test results in terms of their trend over time. Within each
test cycle, consider the relative distribution of test failures across different dimensions, such
as the functional area being tested, the type of quality risk being explored, the relative
complexity of the tests, and the test resources applied to each functional area.
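To make this concrete, here is a minimal sketch (in Python) that tallies failures by functional
area for each test cycle, so the failure distribution can be compared from cycle to cycle; the
record layout and the example data are assumptions made for illustration:

    from collections import Counter

    # Hypothetical test-result records: (cycle, functional_area, passed)
    results = [
        (1, "billing", False), (1, "billing", True), (1, "search", False),
        (2, "billing", True),  (2, "search", False), (2, "search", False),
    ]

    # Failure distribution per cycle, broken down by functional area
    for cycle in sorted({c for c, _, _ in results}):
        failures = Counter(area for c, area, passed in results
                           if c == cycle and not passed)
        total = sum(1 for c, _, _ in results if c == cycle)
        print(f"Cycle {cycle}: {dict(failures)} failures out of {total} tests")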
While defects themselves are obviously related to the results of the test effort, a raw defect
count by itself does not provide any useful information about the progress of the test effort or
the completeness or thoroughness of that effort. Nevertheless, a mistake made by some test teams
and project managers is to use the current defect count to measure the progress of testing or as a
gauge of the quality of the software under development. Our opinion, and that of other
practitioners in the test community, is that this is a meaningless approach.
Instead, we recommend you analyze the relative trend of the defect count over time to provide a
measure of relative stability. For example, assuming the test effort remains relatively constant,
you would typically expect the new-defect discovery rate, measured per regular time period, to
follow a "bell curve" over the course of the iteration: an increasing discovery rate that peaks and
then tails off toward the end of the iteration. However, you'll need to present this information in
conjunction with an analysis of other defect metrics, such as defect resolution rates (including an
analysis of resolution type), the distribution of defects by severity, and the distribution of
defects by functional area.
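As a sketch of this kind of trend analysis, the following Python code buckets hypothetical
defect-discovery dates into weeks and reports the new-defect discovery rate per week; the dates,
the one-week bucket size, and the iteration start are illustrative assumptions:

    from collections import Counter
    from datetime import date

    # Hypothetical discovery dates for defects found during one iteration
    discovered = [
        date(2024, 3, 4),  date(2024, 3, 6),  date(2024, 3, 11), date(2024, 3, 12),
        date(2024, 3, 13), date(2024, 3, 19), date(2024, 3, 20), date(2024, 3, 27),
    ]
    iteration_start = date(2024, 3, 4)

    # New defects discovered per week of the iteration; over a healthy
    # iteration you would expect these counts to rise, peak, then tail off.
    per_week = Counter((d - iteration_start).days // 7 + 1 for d in discovered)
    for week in sorted(per_week):
        print(f"Week {week}: {per_week[week]} new defects")

Plotted over the iteration, these weekly counts are what you would compare against the expected
rise, peak, and tail-off.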
With sophisticated tool support, you can perform complex analysis of defect data relatively easily; without
appropriate tool support it is a much more difficult proposition.