Test Metrics Life Cycle
Test Summary Report
Test Tracking and Efficiency
Test Effort
Test Effectiveness
Test Coverage
Test Economics
Test Team Metrics
Test Management Tools
Test Automation Metrics
Examples
2. WHY DO WE NEED TEST METRICS?
• Test Planning and Control
• Helps improve overall project planning
• Helps analyze the associated risks more deeply
• Enforces a clearer link between test coverage and risk
To measure the quality, cost, and effectiveness of the project and its processes
3. To control testing, you need to measure it at all stages.
You need Test Metrics.
YOU CAN'T CONTROL WHAT YOU CAN'T MEASURE
9. Test Effectiveness
Using defect containment efficiency:
Defect Detection Percentage (DDP) =
(Defects found by testing stage / Total defects found in all stages) × 100%
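A minimal sketch of the DDP calculation; the stage and total counts below are invented for illustration, not from the slides:

```python
def defect_detection_percentage(stage_defects: int, total_defects: int) -> float:
    """DDP: share of all defects that a given testing stage caught."""
    return stage_defects / total_defects * 100

# Hypothetical example: system testing found 45 of the 60 defects
# found across all stages combined.
ddp = defect_detection_percentage(45, 60)
print(f"DDP = {ddp:.1f}%")  # DDP = 75.0%
```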
11. Test Coverage

Test Cases by Requirement:
REQ     TC Name    Test Result
REQ 1   TC Name1   Pass
REQ 2   TC Name2   Failed
REQ 3   TC Name3   Incomplete

Requirement Defect Density:
Req name   Total # of Defects
Req A      25
Req B      2
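One way to derive a coverage summary from such a traceability table is to tally results per requirement. The rows below mirror the sample table; the "all tests must pass" convention is one possible choice, not something the slides prescribe:

```python
from collections import defaultdict

# Requirement -> test result, mirroring the sample traceability table.
results = [
    ("REQ 1", "TC Name1", "Pass"),
    ("REQ 2", "TC Name2", "Failed"),
    ("REQ 3", "TC Name3", "Incomplete"),
]

by_requirement = defaultdict(list)
for req, tc, outcome in results:
    by_requirement[req].append(outcome)

# A requirement counts as covered only if all of its tests passed
# (one possible convention -- adjust to your own process).
covered = [req for req, outcomes in by_requirement.items()
           if all(o == "Pass" for o in outcomes)]
coverage = len(covered) / len(by_requirement) * 100
print(f"Requirements fully passing: {coverage:.0f}%")  # 33%
```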
12. Test Economics
Total allocated costs for testing
Actual cost of testing
Budget variance
Cost per bug fix
Cost of not testing
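The cost figures above are typically combined as simple differences and ratios; the numbers in this sketch are invented for illustration:

```python
# Invented figures for illustration only.
allocated_cost = 50_000   # total allocated costs for testing
actual_cost = 46_500      # actual cost of testing
bugs_fixed = 155          # defects fixed in the period

budget_variance = allocated_cost - actual_cost
cost_per_bug_fix = actual_cost / bugs_fixed

print(f"Budget variance: {budget_variance}")        # 3500
print(f"Cost per bug fix: {cost_per_bug_fix:.2f}")  # 300.00
```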
13. Test Team Metrics
Distribution of open defects for retest per test team member
Test cases allocated per test team member
14. Defect Density = No. of Defects identified / Size

No. of Defects identified: 30
No. of requirements (size): 5
Defect Density = 30 / 5 = 6 defects per requirement

Others
Defect Removal Efficiency (DRE):
DRE = [No. of Defects found while Testing / (No. of Defects found while Testing + No. of Defects found by User)] × 100%

No. of Defects found while Testing: 120
No. of Defects found by User: 42
DRE = [120 / (120 + 42)] × 100 = [120 / 162] × 100 = 74.07%
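Both calculations can be reproduced directly; the numbers are the slides' own examples:

```python
def defect_density(defects: int, size: int) -> float:
    """Defects identified divided by size (here, number of requirements)."""
    return defects / size

def dre(found_in_testing: int, found_by_user: int) -> float:
    """Defect Removal Efficiency, as a percentage."""
    return found_in_testing / (found_in_testing + found_by_user) * 100

# The slides' examples: 30 defects over 5 requirements,
# then 120 defects found in testing vs. 42 found by the user.
print(defect_density(30, 5))         # 6.0
print(f"DRE = {dre(120, 42):.2f}%")  # DRE = 74.07%
```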
15. Others
Defect Leakage = (No. of Defects found by User / No. of Defects found while Testing) × 100%

No. of Defects found while Testing: 120
No. of Defects found by User: 42
Defect Leakage = (42 / 120) × 100% = 35%
Defect Gap = (Total No. of Defects Fixed / Total No. of Valid Defects Reported) × 100%

Example: the QA team reported 100 defects, of which 20 were invalid (not bugs, duplicates, etc.) and only 65 were fixed.
Defect Gap % = (65 / (100 − 20)) × 100 = 81.25% ≈ 81%
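The leakage and gap calculations above can be sketched together; the figures are the slides' own examples:

```python
def defect_leakage(found_by_user: int, found_in_testing: int) -> float:
    """Share of defects that escaped testing and reached the user."""
    return found_by_user / found_in_testing * 100

def defect_gap(fixed: int, reported: int, invalid: int) -> float:
    """Share of valid reported defects that were actually fixed."""
    return fixed / (reported - invalid) * 100

# 42 user-found vs. 120 testing-found defects; 65 fixed out of
# 100 reported, of which 20 were invalid.
print(f"Leakage = {defect_leakage(42, 120):.0f}%")  # Leakage = 35%
print(f"Gap = {defect_gap(65, 100, 20):.2f}%")      # Gap = 81.25%
```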
16. Others
Test Case Productivity = Total Raw Test Steps / Efforts (hours), in steps/hour

Test Case Name   Raw Steps
XYZ_1            30
XYZ_2            32
XYZ_3            40
XYZ_4            36
XYZ_5            45
Total Raw Steps  183

The effort to write the 183 steps was 8 hours.
TCP = 183 / 8 = 22.875
Test case productivity ≈ 23 steps/hour
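The table and the TCP result can be reproduced as follows, using the step counts from the slide:

```python
# Raw step counts from the slide's table.
raw_steps = {"XYZ_1": 30, "XYZ_2": 32, "XYZ_3": 40, "XYZ_4": 36, "XYZ_5": 45}
effort_hours = 8

total_steps = sum(raw_steps.values())
tcp = total_steps / effort_hours
print(total_steps, round(tcp))  # 183 23
```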
19. Test Automation Metrics
Automation Progress:
AP (%) = (# of actual test cases automated / # of test cases automatable) × 100

Test Progress:
TP = # of test cases (attempted or completed) / time (days/weeks/months, etc.)

Percent of Automated Test Coverage:
PTC (%) = (automation coverage / total coverage) × 100
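A sketch of the two percentage metrics; the counts below are invented for illustration, not from the slides:

```python
def automation_progress(automated: int, automatable: int) -> float:
    """AP (%): share of automatable test cases already automated."""
    return automated / automatable * 100

def percent_automated_coverage(automation_cov: float, total_cov: float) -> float:
    """PTC (%): automated coverage as a share of total coverage."""
    return automation_cov / total_cov * 100

# Hypothetical figures: 150 of 200 automatable cases automated, and
# automation exercising 60 of the 80 covered requirements.
print(automation_progress(150, 200))       # 75.0
print(percent_automated_coverage(60, 80))  # 75.0
```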
20. What really matters
"Hey, I have 3,896 test cases, and I'm 30 percent complete on test execution."
So, what does that mean for time, quality, and cost, along with on-time delivery to your end user?
21. What doesn’t really matter
Number of Test Cases Executed
Number of Bugs Found Per Tester
Percentage Pass Rate
Unit Test Code Coverage
Percentage of Automation
22. What really matters
Committed stories vs. delivered results
meeting "doneness" criteria
User satisfaction
Continuous improvement
23. Tips for using Metrics
'Make everything as simple as possible, but not simpler.'
Keep it Simple
Make It Meaningful
Track It
Use It
25. ISTQB Exam Examples
Given the following figures for the testing on a project, and assuming the failure rate for
initial tests remains constant and that all retests pass, what number of tests remain to
be run?
Tests planned 1320
Initial tests run 660
Initial tests passed 480
Retests run 100
A. 900
B. 920
C. 740
D. 840
Explanation:
If 660 tests have been run and 480 passed, then 180 failed; hence the failure rate is 0.27
(180/660). Of the 660 run, the 180 that failed need to be rerun, and 100 of these have
already been rerun, so of the first 660, just 80 (180 − 100) are left to rerun. Assuming
the same failure rate (0.27), for the second 660 tests we can expect 180 to fail again,
and these will need to be rerun, giving 660 + 180 = 840 tests. If we add the 80 remaining
from the first 660, we get 920 (840 + 80), so the answer is B.
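The explanation can be checked mechanically with the figures from the question:

```python
planned = 1320
initial_run = 660
initial_passed = 480
retests_run = 100

failed = initial_run - initial_passed            # 180 initial failures
failure_rate = failed / initial_run              # ~0.27
reruns_left_first_batch = failed - retests_run   # 80 reruns still owed
second_batch = planned - initial_run             # 660 tests not yet run
expected_failures_second = round(second_batch * failure_rate)  # 180 expected

remaining = second_batch + expected_failures_second + reruns_left_first_batch
print(remaining)  # 920
```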
26. ISTQB Exam Examples
An insurance company has a dedicated testing team consisting of a small number of core testers that maintain and lead testing projects as part of the internally run software projects. They write and execute tests together with non-professional testers from the different departments that will later use the new or changed products. The testing team has defined the following measurement goal to improve their testing processes:

Analyze all test cases in the test case database
For the purpose of understanding
With respect to reusability
From the viewpoint of the core testing team
In the context of starting new testing projects

The following questions and metrics were proposed by the testing team members. Which of these should best be used in the GQM plan to fulfill the given goal?
Q.1 “Is it better to rework and then reuse test cases or to write them from scratch?”
Q.2 “How much time and money is spent on reworking test cases?”
Q.3 “What can we do to improve reusability?”
Q.4 “How many test cases must be archived and can never be reused again?”
Q.5 “Who is better at writing effective test cases – the core team testers or the non-professional testers?”
Q.6 “Which kinds of projects produce which grade of reusability?”
Q.7 “Which training do non-professional testers need to write better test cases?”
M.01 Number of reused test cases (#)
M.02 Time spent writing new test cases (hours)
M.03 Time spent reworking test cases (hours)
M.04 Group test case author belongs to ([core, dept])
M.05 Money spent reworking test cases ([$])
M.06 Number of outliers in reusability chart
M.07 Percentage of test cases that has been reworked ([0%, 20%, 50%, 80%, 100%])
M.08 Part of test case that has been reworked ([administrative data, input values, preconditions, test steps, expected results])
M.09 Number of changed revisions of test case (#)
M.10 Type of project ([Class A, Class B, Class C])
M.11 Number of rejected test cases in reviews (#)
M.12 Root causes for non-reusable test cases
Answer Set:
A. Q.1–Q.7 and M.01–M.12, as all are equally necessary
B. Q.1, Q.2, Q.4 and Q.6 with M.01–M.05, M.07–M.08, M.10, as exactly these adhere to the given goal
C. Q.3 is sufficient, and all of M.01–M.12 answer that question
D. Q.1, Q.5 and Q.7 are the only questions that need to be answered; M.02–M.04, M.09, M.11–M.12 answer these questions
Justification:
A. Incorrect – Q.3 is not "Understanding" but the higher level "Improving"; M.06 is on the level "Controlling", M.12 on the level "Improving".
B. Correct – Q.1, Q.2, Q.4 and Q.6 directly refer to the given goal; M.01–M.05, M.07–M.08, M.10 answer these questions.
C. Incorrect – Q.3 is not "Understanding" but the higher level "Improving". Moreover, typically several questions are needed to fulfill one goal (remember the GQM graphic); M.06 is on the level of (statistical) "Controlling", M.12 on the level "Improving". M.09 does not deal only with reusability; it could very well be that changed requirements during a project's lifetime caused new revisions (therefore it is not a good metric to use). M.11 is a metric to answer Q.5, which is not directly related to reusability.
D. Incorrect – Answers to Q.5 and Q.7 do not help in fulfilling the given goal.
27. ISTQB Exam Examples
After release, the client found 9 defects. During testing, the testers reported 201 defects,
namely:
47 Performance defects
16 Design defects
13 Critical defects
20 Invalid defects
58 Requirements defects
17 Review defects
15 Calculation Errors
15 System test defects
What is the percentage of Defect Leakage if 3 design defects were recognized
as not bugs before release?
Defect Leakage = (Defects Found by User / Valid Defects Reported) × 100
Valid Defects Reported = 201 − 20 − 3 = 178
Defect Leakage = (9 / 178) × 100 ≈ 5.06%
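Working the figures through the stated formula (valid defects = 201 reported − 20 invalid − 3 later invalidated), the arithmetic comes out as follows:

```python
reported = 201
invalid = 20
later_invalidated = 3   # design defects recognized as not bugs
found_by_user = 9

valid_reported = reported - invalid - later_invalidated  # 178
leakage = found_by_user / valid_reported * 100
print(f"Defect Leakage = {leakage:.2f}%")  # Defect Leakage = 5.06%
```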