Risk-based testing is a commonly used technique for prioritizing tests when time is short. However, the technique is not perfect and carries risks of its own. This presentation lists 13 ways a tester can be "fooled by risk."
2. At The Outset
• We believe in risk-based testing!
• We have designed and applied risk-based test
approaches for the past 20+ years.
3. However…
• We have been fooled by risk
assessments before.
• Some risks were higher than
assessed and have exhibited
more defects than expected.
• Other risks were lower than
expected and it wasn’t clear if
the tests were weak or if the
software was better than
expected.
4. Questions to Ponder
• How can we be “fooled” by
inaccurate risk assessments?
• How reliable is risk-based
testing?
• How can we build a safety net
in case of inaccurate risk
assessments?
• What can we learn about risk
from other risk-based
industries?
5. Risk Assessment: Formal vs. Informal Methods
• Informal: based on intuition and experience
• Formal: prediction models, checklists/rankings, interviews, taxonomies
6. Causes of Missing Risks
• Unexpected events
• That’s why insurance policies
have clauses for “Acts of God”,
war and terrorism, to keep from
paying claims far outside the
norm.
• In software, however, we can’t
transfer the risk that easily.
7. May 3, 1999 Oklahoma City Tornado
[Photo; Randy’s house marked]
8. Causes of Missing Risks (2)
• Incorrect information
• People sometimes provide
inaccurate information (lie) on
insurance applications, just like they
do in software risk assessments.
• In the insurance business, the
company is protected by the policy
contract against false statements
used to obtain coverage.
• In the software business, we have
no such protection.
9. Causes of Missing Risks (3)
• Flawed assumptions
• Professional risk assessors, such as insurance underwriters,
actuaries, and loan officers, have significant experience in
assessing risk.
• Their methods are pretty solid, although not perfect.
• In the software business, our assumptions tend to depend on
the situation at hand.
10. Ways We Can Be “Fooled” by
Inaccurate Risk Assessments
11. #1 – No Software “Physics” for Defects
• Even when our risk assessment indicates that a
software function should not fail, the software
may not behave that way.
• An example of this is the complexity of
software.
• As testers, we need to understand that all
the tricks and techniques we use may be
helpful, but are not guaranteed to be
totally accurate or effective.
12. #2 - We Can't Think of Everything
• Unexpected things can happen that can
change the risk equation by a little or a lot.
• Risk, by its very nature, implies a degree of
uncertainty.
• To address this risk, always remember that you
don’t know everything.
• Use the “Columbo” approach and ask questions even
when the answers may seem obvious.
• Obtain other perspectives in the risk assessment to
evaluate your own conclusions.
13. #3 - People Do Not Always Provide Accurate Information
• When we base a risk assessment on
information obtained from people, there
is always the possibility the information
could be skewed, inaccurate, or
misleading.
• Ask multiple people the same questions in the
same way.
• Ask the same question in a slightly different way.
• Look for inconsistencies and similarities to
determine a confidence level.
14. #4 - The "I Want to Believe" Syndrome
• There are times when we don't have a
rational reason to believe in
something, but we really would
like to.
• Risk denial is one approach to dealing
with risk, just not a good one.
15. #5 - The "High Risk" Effect
• This is the opposite of the "I
Want to Believe" syndrome.
• So many things are seen as
high risks that the value of
risk assessment is lost.
• To deal with this problem we
must find ways to make the
assessment criteria more
specific, objective and
detailed.
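One way to make the criteria more specific and objective is to score each risk item against a small set of weighted, concrete factors rather than a single gut-feel label. A minimal sketch in Python; the factor names, weights, and sample ratings below are illustrative assumptions, not part of the presentation:

```python
# Illustrative weighted risk scoring: several concrete, documented
# criteria replace a single subjective "high/low" judgment.
# Factor names, weights, and ratings are hypothetical examples.

CRITERIA = {
    "recent_changes": 0.3,   # amount of change this release (0-10)
    "defect_history": 0.3,   # defects found here in past releases (0-10)
    "usage_frequency": 0.2,  # how often users exercise the function (0-10)
    "business_impact": 0.2,  # cost of a production failure (0-10)
}

def risk_score(ratings: dict) -> float:
    """Weighted average of 0-10 ratings; higher means riskier."""
    return sum(CRITERIA[name] * ratings[name] for name in CRITERIA)

# Two items that might both be labeled "high risk" informally:
login = {"recent_changes": 2, "defect_history": 3,
         "usage_frequency": 9, "business_impact": 8}
billing = {"recent_changes": 9, "defect_history": 8,
           "usage_frequency": 5, "business_impact": 9}

print(round(risk_score(login), 2))    # 4.9
print(round(risk_score(billing), 2))  # 7.9
```

Written-down weights also give you a documented rationale to point to later if a risk-based decision must be defended.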
16. #6 - Flawed Risk Assessment Methods
• Applying someone else's methods that don't
fit your context
• Devising an inaccurate and unproven method
on your own
• Misapplying a good method because of a lack of
understanding
17. #7 – Using Informal Methods
• You can be fooled by informal risk assessment
methods, but at least you are aware that the
assessment may be a guess.
• Perhaps an informed one, but still a guess.
• A major problem is that you have nothing upon
which to base risk assumptions.
• To balance this risk, add some structure to your
risk assessment.
• You don’t have to eliminate intuition and experience, just
document your rationale for your risk-based decisions.
18. #8 - Failing to Incorporate Intuition
• Many times I followed a hunch and found
defects even when the risk assessment
indicated a low risk of failure.
• To address this risk, take a step back from
any risk assessment and ask critical
questions, such as:
• “What looks odd about this?”
• “What looks too good to be true?”
• “What am I not seeing or hearing?”
19. #9 - Only Performing the Risk Assessment Once
• Risk assessment is a
snapshot taken at a given
point in time.
• The problem is that risks change
throughout a project.
• Ideally, there should be predefined
risk assessment checkpoints
throughout the project:
• concept, requirements, design,
code, test, and deployment.
20. #10 - Failing to Report Risk Assessment Results Accurately and Promptly
• The longer a known risk remains
unreported, the less time is available to
address it.
• The risk may increase or decrease over time.
• Risk assessment results are like any
other form of measurement and can be
manipulated to suit the objectives of
the presenter or the receiver.
• Example: The situation where the presenter of
the results is fearful of giving bad news for
political or legal reasons.
21. #11 - Failing to Act on Assessment Results
• Unless you take action on a
risk, the risk assessment is
little more than an exercise.
• To address this risk, your
testing process should
specifically include the
results from your risk
assessment.
22. #12 - A Limited View of Risk
• If you view the project from
only one perspective (the
“seasoned veteran user”
perspective or the “novice”
perspective), then you will
only identify risks in that
profile.
• Remember to view risk from
multiple perspectives to get
an accurate assessment.
23. #13 – The “Cry Wolf” Effect
• In this scenario, a risk is raised so many
times without incident that people start
to disregard it.
• Then, when the risk actually manifests,
people are unprepared.
24. Example: Adjusting Your Risk Assessment Methods
[Risk matrix: Likelihood of Failure on the horizontal axis (Low–High),
Impact of Failure on the vertical axis (Low–High), divided into
quadrants 1 - Low Risk, 2 - Moderate Risk, 3 - High Risk, and
4 - Very High Risk; requirements ACB001, ACB002, ACB003, ACB004,
and ACB005 are plotted in the matrix]
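The matrix reduces to a small classification rule: rate each item's likelihood and impact, then map the pair to one of the four risk levels. A sketch, assuming the common convention that high impact with low likelihood outranks high likelihood with low impact; the binary ratings and sample item names are illustrative:

```python
# Map a (likelihood, impact) pair to the matrix's four risk levels.
# The "low"/"high" ratings and the sample items are assumptions.

def risk_level(likelihood: str, impact: str) -> int:
    """Return 1-4: 4 = Very High Risk ... 1 = Low Risk."""
    high_l = likelihood == "high"
    high_i = impact == "high"
    if high_l and high_i:
        return 4  # Very High Risk
    if high_i:
        return 3  # High Risk: high impact, low likelihood
    if high_l:
        return 2  # Moderate Risk: high likelihood, low impact
    return 1      # Low Risk

# Hypothetical ratings for items in the style of ACB001..ACB005:
ratings = {"REQ-A": ("high", "high"), "REQ-B": ("low", "high"),
           "REQ-C": ("low", "low")}
for name, (l, i) in ratings.items():
    print(name, risk_level(l, i))
```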
25. Example: Adjusting Your Risk Assessment Methods (2)
[Same matrix axes: Likelihood of Failure vs. Impact of Failure, with a
test approach assigned to each risk level]
• 4 - Very High Risk: complete regression testing, 100% path coverage
• 3 - High Risk: high level of regression testing, 100% path coverage
• 2 - Moderate Risk: partial regression testing, 100% branch coverage
• 1 - Low Risk: test changes plus the most critical cases, moderate code coverage
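The second matrix then becomes a lookup table: once an item's risk level is known, the test approach follows mechanically. A minimal sketch using the levels from the slide (the function name is an assumption):

```python
# Test approach per risk level, taken from the slide's matrix.
TEST_APPROACH = {
    4: "Complete regression testing, 100% path coverage",
    3: "High level of regression testing, 100% path coverage",
    2: "Partial regression testing, 100% branch coverage",
    1: "Test changes plus the most critical cases, moderate code coverage",
}

def plan_for(level: int) -> str:
    """Look up the test approach for a 1-4 risk level."""
    return TEST_APPROACH[level]

print(plan_for(2))  # Partial regression testing, 100% branch coverage
```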
26. The Safety Net
• There is a word often used in
conjunction with risk that
people sometimes omit:
“contingency”
• a possibility that must be prepared for.
• Contingencies are needed
because we have a rich history of
seeing how real life events may
not match the risk assessment.
• Think of a contingency as your "Plan B"
in case the unexpected happens.
27. An Example From the Insurance Industry
• Reserves are established to
cover higher levels of loss
than normal premiums would
cover.
• Minimal reserve levels are set
by law.
• An insurer may set higher levels if
they need more assurance they
can cover unexpected losses.
28. However, in the Software Industry…
• There are no regulations about such reserves for projects.
• To make matters worse, in software projects, people tend
to be optimistic that reserves won’t be needed.
• “In fact,” some will say, “we will beat our estimates and finish
the project early. That will get us bonuses for sure!”
29. What About “Padding” Estimates?
• Some feel that the estimate
should be carefully calculated
as accurately as possible and
that should be the actual
working estimate.
• Others feel that this approach
is a recipe for disaster because
there is no room for dealing
with contingencies.
30. Reframing the Debate
• Instead of “padding”, let’s call these "project reserves".
• When used as intended, project reserves help us deal with
the unexpected.
• Problems arise, however, when people abuse reserves.
32. Plan “B”
• Reserves are just time and money - they don't tell you what
to do, but a contingency plan does.
• Contingency plans can be created for just about any project
activity.
33. Major Project Activities Deserve Priority Consideration
• What if…
• The requirements are inadequate?
• The degree of requirements
change is excessive?
• High levels of defects are
discovered during testing or
reviews?
• Severe problems are encountered
during implementation?
34. A Risk Mitigation Strategy
• Should address:
• How risks will be addressed
• Who will address the risks
• When risks will be addressed
• When risks will be reassessed
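The four questions above can be captured as one record per risk, so the strategy is written down rather than implied. A minimal sketch; the field names and sample values are illustrative assumptions:

```python
# One entry of a risk mitigation strategy, covering the slide's four
# questions: how, who, when addressed, and when reassessed.
# Field names and sample values are hypothetical.
from dataclasses import dataclass

@dataclass
class MitigationEntry:
    risk: str         # the risk being addressed
    how: str          # how the risk will be addressed
    who: str          # who will address it
    when: str         # when it will be addressed
    reassess_at: str  # when the risk will be reassessed

entry = MitigationEntry(
    risk="Excessive requirements change",
    how="Time-boxed change control with impact analysis",
    who="Test lead and business analyst",
    when="Each iteration planning meeting",
    reassess_at="End of each iteration",
)
print(entry.reassess_at)  # End of each iteration
```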
35. A reasonable conclusion is
that every risk assessment
should also address project
reserves and contingencies.
36. Summary
• Hopefully, this doesn’t discourage
you from applying risk-based
testing.
• Many times our biases form
the basis of our perceptions.
• This causes us to fail to recognize
important risks, and in some
cases, entire classes of risks.
37. Summary (2)
• The key to dealing with risk is not to
rely on just one aspect of the risk
picture.
• We must also balance risk with
contingencies to have a safety net for
any risk assessment approach.
Editor’s Notes
For many software developers and testers, our view of risk is often shaped by time and resources instead of the nature of the risks themselves. For example, the prospect of meeting an aggressive deadline (with its accompanying bonus) may cause us to take higher risks than we normally would.
For many years, complexity has been considered to be a major contributor to software failure. The premise is that the more complex the code, the harder it is to understand and test completely. However, complex code doesn’t always fail. On the other hand, I have seen simple code fail in spectacular ways. Like the small code change in a simple software module that implemented incorrect checking account service charges for over 100 banks. The code was simple, the change was simple, but the impact was huge.
After all, if we knew the exact outcome of a future event, there would be no risk at all. We would know exactly which parts of an application would fail and what those failures would be (and we would fix them). Another way to say this is "Sometimes you don't know what you don't know." To address this risk, always remember that you don’t know everything. Use the “Columbo” approach and ask questions even when the answers may seem obvious.
A typical example of this is when the deadline is quickly approaching and we re-prioritize our tests (based on risk, of course), but choose not to test certain aspects of the software because 1) we’re feeling confident based on past tests (even though the software has been changed), and 2) time is short. Plus, if the software does fail, we’ll let the help desk and developers deal with it.
Nothing falls into the "low" or "moderate" categories of risk. People may tend to favor the importance of their own areas and believe that if that fails, it will be the end of civilization as we know it. If everything is truly a “high” or “very high” risk, the leader of the risk assessment effort should make sure this information is documented in writing (status reports, test plans, etc.) and understood by the project leader and sponsor. In this case, other methods besides testing will be needed to provide a high level of confidence in the software or system.
It takes time and practice to get good at assessing risk. It’s not wrong to try new methods or to invent ones on your own. Just realize that any risk assessment method can be flawed, or we can use it in an incorrect way. Always ask yourself if there are things that just don’t look right.
If later you ever need to defend a risk-based decision, without a method you are left with little defense of how you arrived at the decision.
Unfortunately, this is not something that can be easily taught, but must be learned over time. Experience forms the basis of a lot of what we consider intuition and we need to learn how to listen to the inner voice that tells us to consider things perhaps not indicated in a risk assessment.
To get an accurate view of risk, assuming your method is reasonably sound, the assessment should be performed on a regular basis. In fact, the assessment should continue even after system deployment because risks are still present and still changing. The checkpoints shown on the slide are all good times to reassess risk. Some people find that even within each of these project activities multiple risk assessment snapshots may be needed.
When risk assessment results are conveyed with missing, incorrect or ambiguous information, any conclusion based on them is at risk of being incorrect.
For project risk assessments, it’s helpful if the project manager takes responsibility for the risk assessment since they are often in the best position to take action on the findings.
I blame senior management and customers for the root cause of this debate. The problem is that management and customers are notorious for taking any estimate and reducing it by X%. Some people believe that all estimates contain padding, therefore all estimates can (and must) be reduced to "more reasonable" levels.
An example of this is when people steal time from reserves to compensate for poor project decisions. It's one thing to use a reserve for extra needed time because a supplier is late, but another thing to use the reserve because developers are creating highly defective software.
Whether a beginner or a veteran practitioner, the more we understand about how we can be fooled by risk assessment, the better we can anticipate problems during the testing effort.