This slide presentation is from the QAI Quest-sponsored webinar with XBOSoft in which Philip Lew covered the do's and don'ts of software quality and software testing metrics.
This webinar discussed some of the most common mistakes in using software quality metrics and measurements. The primary takeaway is to learn from the mistakes of others, particularly where to use and not use metrics to measure your testing and QA efforts. The last thing you want is to measure the wrong thing and create unwanted behavior. With knowledge of what not to do, we'll then dive into how to develop a measurement and metrics framework that aligns with the organization's business objectives. This means taking on a manager's viewpoint so that your metrics don't just measure testing progress, but also measure product quality and its impact on an organization's bottom line. As part of the webinar, we'll discuss a variety of metrics that can be used to track work effort against results and enable you to plan and forecast your testing needs.
2. Webinar Spirit and Expectations
• Interactive – I hope you post questions
• We’ll have a couple of polls to get
your ideas flowing as we go along
• I won’t read the slides…
• Slides for you as a take-away
• I may ask questions
– And I hope you post answers :)
3. Understand, Evaluate and Improve
• If our end goal is improvement then what?
• To improve, we need to evaluate
• In order to evaluate, we must understand what
we are evaluating
• To do this… We need metrics
Can you think of other examples in our lives
where this applies? Where do you use metrics
to evaluate and improve?
4. Metrics in real life
• Food eaten: calories, fat, carbohydrates, protein, vitamins, time of day, …
• Weight/health: blood pressure, cholesterol, blood glucose, red cell count, white cell count, hematocrit, hemoglobin, body fat %, …
• Race results: placing, …
• Performance: effort/power, heart rate/watts, speed, time
• Context: training, sleep, intelligence, finesse
5. DO
Think about the process you are measuring
and measure all along the way at each step in
the process.
6. Metrics - Benefits
• Understand how QA, testing, and their processes work,
and where the problems are
• Evaluate the process and the product in the right
context
• Predict and control process and product qualities
• Reuse successful experiences
– Feed back experience to current and future projects
• Monitor how something is performing
• Analyze the information provided to drive
improvements
7. How can measurement help us (YOU)?
• Create an organizational memory – baselines of current
practices and situation
• Determine strengths and weaknesses of the current
process and product
– What types of errors are most common?
• Develop reasoning for adopting/refining techniques
– What techniques will minimize the problems?
• Assess the impact of techniques
– Does more/less functional testing reduce defects?
• Evaluate the quality of the process/product
– Are we applying inspections appropriately?
– What is the reliability of the product before/after
delivery?
9. Why do we need to measure?
• Our bosses want us to…
• They want someone to point fingers at
• They want to fire some people and save money
• They need to report to their managers
• They want some basis on which to evaluate us and give us a raise!
• We need to figure out a way to do better!
• We want to improve our work and improve software quality
10. The Metrics Conundrum
• QA and Testing Language
– Defects
– Execution status
– Test cases
– Pass/fail rates
– DRE…
• Business Language
– Cost effective
– ROI
– Cost of ownership
– Cost of poor quality
– Productivity
– Calls to help desk
– Customer satisfaction
– Customer retention
11. In your organization…
• What measurements do you take in your
organization and why?
• Who uses them and for what?
12. POLL: How many metrics are you collecting on a regular basis within your organization?
A. 1-5
B. 6-10
C. 11-15
D. 0
Quest 2014
13. The Metric Reality
• Measurement and metrics are like dinner. It
takes 2-3 hours to make dinner, and 15 minutes
to consume…
• But… many metrics are never reviewed or
analyzed (consumed)
• WHY?
14. The Metric Conundrum (cont.)
• Test leads and test managers rarely have the
right metrics to show or quantify value
• Metric collection and reporting are a drag
• QA metrics usually focus only on test execution
• Test tools don’t have most of the metrics we
want
• Reports generated by QA are only rarely
reviewed
• Metrics are not connected to anything of value/
meaningful for ________.
15. Let's look at some of the most common mistakes in implementing metrics.
16. Don’t – Measure the wrong thing
• Oftentimes we get an idea for a software quality metric from a
person, company, or article and begin using it without asking, "What
am I trying to measure and why?" In the end, we sometimes get
measurements that don't matter relative to our goal.
• Some sample metrics to review:
– Test Coverage = Number of units (KLOC/FP) tested / total size of
the system
– Test density (number of tests per unit size) = Number of test
cases / KLOC (or FP)
– Acceptance criteria test coverage = Acceptance criteria tested /
total acceptance criteria
– Defects per size = Defects detected / system size
– Test cost (in %) = Cost of testing / total cost *100
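As an illustration (not from the webinar), the sample ratios above can be sketched in a few lines of Python. All of the figures below are hypothetical, chosen only to show the arithmetic:

```python
# Illustrative sketch of the sample test metrics listed above.
# Every number here is a hypothetical example, not real project data.

def ratio(numerator, denominator):
    """Guard against division by zero when a size or cost is missing."""
    return numerator / denominator if denominator else 0.0

system_size_kloc = 120          # total system size in KLOC (assumed)
units_tested_kloc = 90          # size of the units covered by tests (assumed)
test_cases = 840
acceptance_criteria_total = 50
acceptance_criteria_tested = 42
defects_detected = 36
testing_cost = 40_000.0
total_cost = 250_000.0

test_coverage = ratio(units_tested_kloc, system_size_kloc)            # 0.75
test_density = ratio(test_cases, system_size_kloc)                    # 7.0 tests/KLOC
acceptance_coverage = ratio(acceptance_criteria_tested,
                            acceptance_criteria_total)                # 0.84
defect_density = ratio(defects_detected, system_size_kloc)            # 0.3 defects/KLOC
test_cost_pct = ratio(testing_cost, total_cost) * 100                 # 16.0 %
```

Note that each metric is a ratio normalized by a size or cost, which is what makes comparisons across releases meaningful at all.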
17. Don't – Forget to differentiate between quality and defects
• Metric becomes the goal
• Organizations concentrate on "the metrics" and forget to
understand each metric's relationship to the goal or objective.
• Defect counts need to be incorporated into an overall
evaluation, because quality is ultimately measured in the
eyes of the end user.
18. Don’t – Forget about context
• Metrics don’t have consistent context so they are
unreliable – Context needs to be defined and
then maintained for measurements to be
meaningful.
• Difficult in today’s environment with changing
test platforms and other contextual factors.
19. What contextual factors could there be?
• Release complexity
• Development methodology
• Software maturity
• Development team maturity and expertise
• Development team and QA integration
• Resources available
• User base
All metrics need to be normalized for proper interpretation
20. Metrics need context to tell the whole story
• Normalized per function point (or per LOC)
• At product delivery (first X months or first year of
operation)
• Ongoing (per year of operation)
• By level of severity
– Gross numbers don’t tell much
• By category or cause, e.g.: requirements defect,
design defect, code defect, documentation/online
help defect, defect introduced by fixes, etc.
– Total numbers alone tell nothing
21. Don't – Be sporadic or irregular
• Measurements used are not consistent – Just as
context needs to be consistent, so do the
measurements, methods, and time intervals at which
you collect the measurements and calculate the
metrics.
• Just as in weighing yourself, it doesn’t make
sense to drink 2 gallons one day and weigh in,
and go jogging 10 miles the next day and weigh
in.
22. Don't – Calculate metrics that don't answer specific questions
• Metrics don’t answer the questions you had to
begin with
• You run off collecting measurements and
calculating metrics without asking, "What answers
will I get from this information?"
23. Poll: How many of you collect metrics that you don't need or use?
24. Don't – Collect measurements that no one wants
• Metrics have no audience – As a corollary to the
previous factor, if there is no question to be
answered, then there will also be no audience
for the metric.
• Metrics need to have an audience in order to
have meaning.
• How many of the metrics and reports that you
generate are read?
25. Do - Collect what “they” want
• Ratios and percentages rather than absolutes
• Comparisons over time, or by release
• Report on typical project constraints:
– Costs
– Time
– Quality
26. Do - Collect what they want
Costs (Some potential metrics include):
• Business losses per defect that occurs during operation
• Business interruption costs
• Costs of work-arounds
• Costs of reviews, inspections and preventive measures
• Costs of test planning and preparation
• Costs of test execution, defect tracking, version and change control
• Costs of test tools and tool support
• Costs of test case library maintenance
• Costs of testing & QA education associated with the product
• Re-work effort (hours, as a percentage of the original coding hours)
• Lost sales or goodwill
• Annual QA and testing cost (per function point)
27. Do - Collect what they want
Time-Resources (Some potential metrics include):
• Labor hours/defect fix
• Turnaround time for defect fixes, by level of
severity
• Time for minor vs. major enhancements
– actual vs. planned elapsed time
• Effort for minor vs. major enhancements
– actual vs. planned effort hours
28. Do - Collect what they want
Quality (Some potential metrics include):
• Survey before, after (and ongoing) product delivery
• # system enhancement requests per year
• # maintenance fix requests per year
• User problems: call volume to customer service/Tech support
• User Satisfaction
– training time per new user, time to reach task time of x
– # errors per new user
• # product recalls or fix/patch releases/year
• # production re-runs
• Availability (time system is available/ time the system is needed to
be available)
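The availability ratio above is simple arithmetic, sketched here with hypothetical figures (an assumed 30-day window and an assumed outage total, for illustration only):

```python
# Illustrative sketch: availability as the fraction of required service
# time during which the system was actually up. Figures are hypothetical.

required_hours = 24 * 30      # system needed around the clock for a 30-day month
downtime_hours = 3.6          # total outage during that window (assumed)
available_hours = required_hours - downtime_hours

availability = available_hours / required_hours
print(f"{availability:.3%}")  # prints 99.500%
```

Because the denominator is "time the system is needed", planned maintenance outside required hours does not count against availability under this definition.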
29. Collect what they want
• Show them in combination and relative to each
other
– Cost vs. quality
– Cost vs. time
– Quality vs. time
30. Don't – Make the collection effort the end game
• Measurements are too hard to get – If you have
designed the right metric to answer the right
question, it may be worth several person-days to
gather the data and do the calculations.
• But unless the decisions made from these metrics
deliver considerable value, they'll soon be
abandoned.
31. Poll: How many of you started to collect metrics but then found it was too difficult or time consuming and quit?
32. Don’t – Forget indicators
• Metrics have no indicators so cannot evaluate
– You collect mounds of data but then what?
– How do you determine what is ‘good’ or ‘bad’?
– Before you get started collecting and calculating you
need to put together a way to evaluate the numbers
you get with meaningful indicators that can be used
as benchmarks as your metrics program matures.
33. Conclusions
• Designing and implementing a software quality
metrics program requires careful thought and
planning.
• First step is finding out the questions that you
want to answer or goals of using metrics.
– Many refer to this as the goal-question-metric
paradigm, but in simple terms, what are you going to
do with the numbers once you get them?
• Most of the “Don’ts” are related to not thinking
about the objectives of the metrics and actions
you will take based on them.
34. Solutions
• Develop a list of stakeholders and their goals and
objectives
• Develop a list of questions that, if answered,
would determine if the goals are met
• Develop a catalogue of metrics (that answer the
questions) that can mix and match to apply to
the goals depending on the stakeholder
• Develop and collect metrics that accompany
each part of the development process, not just
testing
– There are many "defects" not directly in dev.
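The solution steps above follow the goal-question-metric idea mentioned in the conclusions, and can be sketched as a simple catalogue structure. The stakeholders, goals, questions, and metric names below are hypothetical examples, not from the webinar:

```python
# Illustrative GQM (goal-question-metric) catalogue sketch.
# All stakeholders, goals, questions, and metric names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    metrics: list  # names of metrics that help answer this question

@dataclass
class Goal:
    stakeholder: str
    objective: str
    questions: list = field(default_factory=list)

catalogue = [
    Goal(
        stakeholder="Test manager",
        objective="Shorten defect turnaround",
        questions=[
            Question("How long do fixes take, by severity?",
                     metrics=["labor hours per defect fix",
                              "turnaround time by severity"]),
        ],
    ),
    Goal(
        stakeholder="Product owner",
        objective="Improve delivered quality",
        questions=[
            Question("Are users hitting fewer problems?",
                     metrics=["help-desk call volume",
                              "user satisfaction survey score"]),
        ],
    ),
]

def metrics_for(stakeholder):
    """Mix and match: every metric relevant to a given stakeholder."""
    return sorted({m for g in catalogue if g.stakeholder == stakeholder
                     for q in g.questions for m in q.metrics})
```

Keeping metrics attached to questions, and questions to a stakeholder's goal, is what prevents the "metrics with no audience" problem from the earlier slides.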
35. Coming up at the conference
[Diagram: stakeholders, measurement framework, improvement decisions]
Questions and Answers