Full-day Tutorials
5/5/2014 8:30:00 AM
A Rapid Introduction to
Rapid Software Testing
Presented by:
Michael Bolton
DevelopSense
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
Michael Bolton
DevelopSense
Tester, consultant, and trainer Michael Bolton is the co-author (with James Bach) of Rapid
Software Testing, a course that presents a methodology and mindset for testing software
expertly in uncertain conditions and under extreme time pressure. Michael is a leader in the
context-driven software testing movement, with twenty years of experience testing,
developing, managing, and writing about software. Currently, he leads DevelopSense, a
Toronto-based consultancy. Prior to DevelopSense, he was with Quarterdeck Corporation,
where he managed the company’s flagship products and directed project and testing
teams—both in-house and worldwide. Contact Michael at michael@developsense.com.
A Rapid Introduction
to Rapid Software Testing
James Bach, Satisfice, Inc.
james@satisfice.com
www.satisfice.com
+1 (360) 440-1435
Michael Bolton, DevelopSense
mb@developsense.com
www.developsense.com
+1 (416) 656-5160
Acknowledgements
• Some of this material was developed in collaboration with
Dr. Cem Kaner, of the Florida Institute of Technology. See
www.kaner.com and www.testingeducation.org.
• Doug Hoffman (www.softwarequalitymethods.com) has also
contributed to and occasionally teaches from this material.
• Many of the ideas in this presentation were also inspired by or
augmented by other colleagues including Jonathan Bach, Bret
Pettichord, Brian Marick, Dave Gelperin, Elisabeth Hendrickson, Jerry
Weinberg, Noel Nyman, and Mary Alton.
• Some of our exercises were introduced to us by Payson Hall, Jerry
Weinberg, Ross Collard, James Lyndsay, Dave Smith, Earl Everett,
Brian Marick, Cem Kaner and Joe McMahon.
• Many ideas were improved by students who took earlier versions of
the class going back to 1996.
Assumptions About You
• You test software, or any other complex human
creation.
• You have at least some control over the design of your
tests and some time to create new tests.
• You are worried that your test process is spending too
much time and resources on things that aren’t
important.
• You test under uncertainty and time pressure.
• Your major goal is to find important problems quickly.
• You want to get very good at (software) testing.
A Question
What makes
testing harder or
slower?
Premises of Rapid Testing
1. Software projects and products are relationships
between people.
2. Each project occurs under conditions of uncertainty
and time pressure.
3. Despite our best hopes and intentions, some
degree of inexperience, carelessness, and
incompetence is normal.
4. A test is an activity; it is a performance, not an artifact.
Premises of Rapid Testing
5. Testing’s purpose is to discover the status of the
product and any threats to its value, so that our
clients can make informed decisions about it.
6. We commit to performing credible, cost-effective
testing, and we will inform our clients of anything that
threatens that commitment.
7. We will not knowingly or negligently mislead our
clients and colleagues or ourselves.
8. Testers accept responsibility for the quality of their
work, although they cannot control the quality of the
product.
Rapid Testing
Rapid testing is a mind-set
and a skill-set of testing
focused on how to do testing
more quickly,
less expensively,
with excellent results.
This is a general testing
methodology. It adapts to
any kind of project or product.
How does Rapid Testing compare
with other kinds of testing?
Rapid testing may not be exhaustive, but it is
thorough enough and quick enough. It’s less work
than ponderous testing. It might be less work
than slapdash testing.
It fulfills the mission of testing.
Management likes to talk
about exhaustive testing, but
they don’t want to fund it and
they don’t know how to do it.
You can always test
quickly...
But it might be poor
testing.
When testing is turned into an
elaborate set of rote tasks,
it becomes ponderous without
really being thorough.
Axes: More Work & Time (Cost) vs. Better Thinking & Better Testing (Value)
• Slapdash: much faster, cheaper, and easier
• Ponderous: slow, expensive, and easier
• Rapid: faster, less expensive, still challenging
• Exhaustive: slow, very expensive, and difficult
Excellent Rapid Technical Work
Begins with You
When the ball comes to you…
Do you know you have the ball?
Can you receive the pass?
Do you know what your
role and mission is?
Do you know where
your teammates are?
Are you ready to act, right now?
Can you let your teammates help you?
Do you know your options?
Is your equipment ready?
Can you read the
situation on the field?
Are you aware of the
criticality of the situation?
• Rapid test teams are about diverse talents cooperating
• We call this the elliptical team, as opposed to the team of
perfect circles.
• Some important dimensions to vary:
• Technical skill
• Domain expertise
• Temperament (e.g. introvert vs. extrovert)
• Testing experience
• Project experience
• Industry experience
• Product knowledge
• Educational background
• Writing skill
• Diversity makes exploration far more powerful
• Your team is powerful because of your unique contribution
…but you don’t have to be great at everything.
What It Means To Test Rapidly
• Since testing is about finding a potentially infinite
number of problems in an infinite space in a finite
amount of time, we must…
• understand our mission and obstacles to fulfilling it
• know how to recognize problems quickly
• model the product and the test space to know where to look
for problems
• prefer inexpensive, lightweight, effective tools
• reduce dependence on expensive, time-consuming artifacts,
while getting value from the ones we’ve got
• do nothing that wastes time or effort
• tell a credible story about all that
One Big Problem in Testing
Formality Bloat
• Much of the time, your testing doesn’t need to be very formal*
• Even when your testing does need to be formal, you’ll need to
do substantial amounts of informal testing in order to figure out
how to do excellent formal testing.
• Who says? The FDA. See http://www.satisfice.com/blog/archives/602
• Even in a highly regulated environment, you do formal testing primarily
for the auditors. You do informal testing to make sure you don’t
lose money, blow things up, or kill people.
* Formal testing means testing that must be done to verify a specific fact,
or that must be done in a specific way.
EXERCISE
Test the Famous Triangle
What is testing?
Serving Your Client
If you don’t have an understanding of and an agreement
on the mission of your testing, then doing it
“rapidly” would be pointless.
Not Enough
Product and Project Information?
Where do we get
test information?
What Is A Problem?
A problem is…
How Do We Recognize Problems?
An oracle is…
a way to recognize
a problem.
Learn About Heuristics
Heuristics are fallible, “fast and frugal” methods of solving
problems, making decisions, or accomplishing tasks.
“The engineering method is
the use of heuristics
to cause the best change
in a poorly understood situation
within the available resources.”
Billy Vaughan Koen
Discussion of the Method
Heuristics: Generating Solutions
Quickly and Inexpensively
• Heuristic (adjective):
serving to discover or learn
• Heuristic (noun):
a fallible method for solving a problem
or making a decision
“Heuristic reasoning is not regarded as final and strict
but as provisional and plausible only, whose purpose
is to discover the solution to the present problem.”
- George Polya, How to Solve It
Oracles
An oracle is a heuristic principle or mechanism
by which we recognize a problem.
“...it appeared at least once to meet some
requirement to some degree”
“...uh, when I ran it”
“...that one time”
“...on my machine.”
“It works!”
really means…
Familiar Problems
If a product is consistent with problems we’ve seen before,
we suspect that there might be a problem.
Explainability
If a product is inconsistent with our ability to explain it
(or someone else’s), we suspect that there might be a problem.
World
If a product is inconsistent with the way the world works,
we suspect that there might be a problem.
History
If a product is inconsistent with previous versions of itself,
we suspect that there might be a problem.
Okay,
so how the
#&@ do I print
now?
Image
If a product is inconsistent with an image that
the company wants to project, we suspect a problem.
Comparable Products
(example: WordPad vs. Word)
When a product seems inconsistent with a product that is
in some way comparable, we suspect that there might be a problem.
Claims
When a product is inconsistent with claims that important
people make about it, we suspect a problem.
User Expectations
When a product is inconsistent with expectations that a
reasonable user might have, we suspect a problem.
Purpose
When a product is inconsistent with its designers’ explicit
or implicit purposes, we suspect a problem.
Product
When a product is inconsistent internally—as when it
contradicts itself—we suspect a problem.
Statutes and Standards
When a product is inconsistent with laws or widely
accepted standards, we suspect a problem.
Consistency (“this agrees with that”)
an important theme in oracle principles
• Familiarity: The system is not consistent with the pattern of any familiar problem.
• Explainability: The system is consistent with our ability to describe it clearly.
• World: The system is consistent with things that we recognize in the world.
• History: The present version of the system is consistent with past versions of it.
• Image: The system is consistent with an image that the organization wants to project.
• Comparable Products: The system is consistent with comparable systems.
• Claims: The system is consistent with what important people say it’s supposed to be.
• Users’ Expectations: The system is consistent with what users want.
• Product: Each element of the system is consistent with comparable elements in the
same system.
• Purpose: The system is consistent with its purposes, both explicit and implicit.
• Standards and Statutes: The system is consistent with applicable laws, or relevant
implicit or explicit standards.
Consistency heuristics rely on the quality of your
models of the product and its context.
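The consistency heuristics above can be expressed as fallible checks in code. Here is a minimal sketch, not from the course materials, of the History and Comparable Products oracles; every function and value below is hypothetical, and each check reports only a *suspicion*, never a verdict.

```python
# Two consistency oracles as fallible checks. A None result means "no
# inconsistency noticed" -- which is not proof that there is no problem.
import re

def history_oracle(current_output, previous_output):
    """History: inconsistency with a previous version suggests a problem."""
    if current_output != previous_output:
        return "might be a problem: differs from the previous version"
    return None

def comparable_product_oracle(our_output, comparable_output):
    """Comparable Products: inconsistency with a comparable product suggests a problem."""
    if our_output != comparable_output:
        return "might be a problem: differs from a comparable product"
    return None

# Hypothetical feature under test: a word counter, checked against its own
# previous version and against a comparable implementation (a regex here).
def word_count_v2(text):
    return len(text.split())

def word_count_v1(text):                      # older version split only on spaces
    return len([w for w in text.split(" ") if w])

text = "rapid\tsoftware testing"              # a tab exposes the difference
print(history_oracle(word_count_v2(text), word_count_v1(text)))
print(comparable_product_oracle(word_count_v2(text),
                                len(re.findall(r"\S+", text))))
```

The first check flags a possible problem (the versions disagree on tab-separated words); the second notices nothing, which tells us only that this one oracle was not triggered.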
An oracle doesn’t tell you that there IS a problem.
An oracle tells you that you might be seeing a problem.
Rely solely on documented, anticipated sources of oracles,
and your testing will likely be slower and weaker.
Train your mind to recognize patterns of oracles
and your testing will likely be faster
and your ability to spot problems will be sharper.
All Oracles Are Heuristic
An oracle can alert you to a possible problem,
but an oracle cannot tell you that there is no problem.
General Examples of Oracles
things that suggest “problem” or “no problem”
• A person whose opinion matters.
• An opinion held by a person who matters.
• A disagreement among people who matter.
• A reference document with useful information.
• A known good example output.
• A known bad example output.
• A process or tool by which the output is checked.
• A process or tool that helps a tester identify patterns.
• A feeling like confusion or annoyance.
• A desirable consistency between related things.
(categories: people, mechanisms, feelings, principles)
Oracles from the Inside Out
(diagram: oracles mapped along two axes, tacit vs. explicit and tester vs. other
people; the quadrants hold your feelings and mental models, shared artifacts
such as specs and tools, stakeholders’ feelings and mental models, and
observable consistencies, linked by experience, reference, conference, and
inference)
Oracle Cost and Value
• Some oracles are more authoritative
• but less responsive to change
• Some oracles are more consistent
• but maybe not up to date
• Some oracles are more immediate
• but less reliable
• Some oracles are more precise
• but the precision may be misleading
• Some oracles are more accurate
• but less precise
• Some oracles are more available
• but less authoritative
• Some oracles are easier to interpret
• but more narrowly focused
Feelings As Heuristic Triggers For Oracles
• An emotional reaction is a trigger to attention
and learning
• Without emotion, we don’t reason well
• See Damasio, The Feeling of What Happens
• When you find yourself mildly concerned about
something, someone else could be very
concerned about it
• Observe emotions to help overcome your
biases and to evaluate significance
An emotion is a signal; consider looking into it
All Oracles Are Heuristic
• We often do not have oracles that establish a definite correct or incorrect
result, in advance. Oracles may reveal themselves to us on the fly, or later.
That’s why we use abductive inference.
• No single oracle can tell us whether a program (or a feature) is working
correctly at all times and in all circumstances.
That’s why we use a variety of oracles.
• Any program that looks like it’s working, to you, may in fact be failing in
some way that happens to fool all of your oracles. That’s why we proceed
with humility and critical thinking.
• We never know when a test is finished.
That’s why we try to maintain uncertainty when everyone else on the
project is sure.
• You (the tester) can’t know the deep truth about any result.
That’s why we report whatever seems likely to be a bug.
Oracles are Not Perfect
And Testers are Not Judges
• You don’t need to know for sure if something is a bug;
it’s not your job to decide if something is a bug; it’s your
job to decide if it’s worth reporting.
• You do need to form a justified belief that it MIGHT be
a threat to product value in the opinion of someone
who matters.
• And you must be able to say why you think so; you must
be able to cite good oracles… or you will lose credibility.
MIP’ing vs. Black Flagging
Coping With Difficult Oracle Problems
• Ignore the Problem
• Ask “so what?” Maybe the value of the information doesn’t justify the cost.
• Simplify the Problem
• Ask for testability. It usually doesn’t happen by accident.
• Built-in oracle. Internal error detection and handling.
• Lower the standards. You may be using an unreasonable standard of correctness.
• Shift the Problem
• Parallel testing. Compare with another instance of a comparable algorithm.
• Live oracle. Find an expert who can tell if the output is correct.
• Reverse the function. (e.g. 2 x 2 = 4, then 4/2 = 2)
• Divide and Conquer the Problem
• Spot check. Perform a detailed inspection on one instance out of a set of outputs.
• Blink test. Compare or review overwhelming batches of data for patterns that stand
out.
• Easy input. Use input for which the output is easy to analyze.
• Easy output. Some output may be obviously wrong, regardless of input.
• Unit test first. Learn about the pieces that make the whole.
• Test incrementally. Learn about the product by testing over a period of time.
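The “reverse the function” idea above can be sketched in a few lines: when the correct output is hard to predict directly, invert the operation and check that you recover the input. The function under test here is a hypothetical stand-in; the tolerance check is the point.

```python
# Reverse-the-function oracle: square the result of a square-root routine
# and compare against the original input, within floating-point tolerance.
import math
import random

def sqrt_under_test(x):
    # stand-in for an implementation whose output is hard to verify directly
    return math.sqrt(x)

random.seed(42)                                  # reproducible sampling
for _ in range(1000):
    x = random.uniform(0.0, 1e9)
    y = sqrt_under_test(x)
    # A mismatch here is a *possible* problem, to be investigated, not a verdict.
    assert math.isclose(y * y, x, rel_tol=1e-9), f"possible problem at x={x}"

print("1000 reversed checks, no inconsistencies noticed")
```

Note that passing tells us only that this one oracle was never triggered; an implementation could still be wrong in ways that squaring happens not to expose.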
“Easy Input”
• Fixed Markers. Use distinctive fixed input patterns that are easy to
spot in the output.
• Statistical Markers. Use populations of data that have
distinguishable statistical properties.
• Self-Referential Data. Use data that embeds metadata about itself.
(e.g. counterstrings)
• Easy Input Regions. For specific inputs, the correct output may be
easy to calculate.
• Outrageous Values. For some inputs, we expect error handling.
• Idempotent Input. Try a case where the output will be the same as
the input.
• Match. Do the “same thing” twice and look for a match.
• Progressive Mismatch. Do progressively differing things over time
and account for each difference. (code-breaking technique)
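The “self-referential data” item above mentions counterstrings, an idea associated with James Bach’s PerlClip tool. One way to generate one is sketched below (this construction is an assumption, not the tool’s actual code): each `*` records its own 1-based position, so if a field silently truncates the input, the last visible number shows roughly where the cut happened.

```python
# Generate a counterstring of exactly n characters, working backwards so
# that each marker lands at the position its preceding digits name.
def counterstring(n, marker="*"):
    pieces = []
    pos = n
    while pos > 0:
        chunk = str(pos) + marker      # this marker should land at position pos
        if len(chunk) > pos:
            chunk = chunk[-pos:]       # truncate from the left to fit exactly
        pieces.append(chunk)
        pos -= len(chunk)
    return "".join(reversed(pieces))

print(counterstring(10))               # *3*5*7*10*
print(len(counterstring(256)))         # 256
```

Paste `counterstring(5000)` into a text field: if only “…2045*” survives, the field truncated at about 2046 characters, with no counting on your part.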
Oracles Are Linked To Threats
To Quality Criteria
Any inconsistency may represent diminished value.
Many test approaches focus on capability (functionality)
and underemphasize the other criteria.
Capability Scalability
Reliability Compatibility
Usability Performance
Charisma Installability
Security Development
Oracles Are Linked To Threats
To Quality Criteria
Any inconsistency may represent diminished value.
Many test approaches focus on capability (functionality)
and underemphasize the other criteria.
Supportability
Testability
Maintainability
Portability
Localization
Focusing on Preparation and Skill
Can Reduce Documentation Bloat
3.0 Test Procedures
3.1 General testing protocol.
• In the test descriptions that follow, the word "verify" is used to highlight specific items
that must be checked. In addition to those items a tester shall, at all times, be alert for
any unexplained or erroneous behavior of the product. The tester shall bear in mind
that, regardless of any specific requirements for any specific test, there is the
overarching general requirement that the product shall not pose an unacceptable risk
of harm to the patient, including an unacceptable risk arising from reasonably
foreseeable misuse.
• Test personnel requirements: The tester shall be thoroughly familiar with the
generator and workstation FRS, as well as with the working principles of the devices
themselves. The tester shall also know the working principles of the power test jig and
associated software, including how to configure and calibrate it and how to recognize if
it is not working correctly. The tester shall have sufficient skill in data analysis and
measurement theory to make sense of statistical test results. The tester shall be
sufficiently familiar with test design to complement this protocol with exploratory
testing, in the event that anomalies appear that require investigation. The tester shall
know how to keep test records to a credible, professional standard.
Remember…
For skilled testers,
good testing isn’t just about
pass vs. fail.
For skilled testers,
testing is about
problem vs. no problem.
Where Do We Look For Problems?
Coverage is…
how much of the
product has been tested.
What IS Coverage?
Coverage is “how much of the product we have tested.”
It’s the extent to which we have
traveled over some map of the product.
MODELS
Models
• A model is an idea, activity, or object…
such as an idea in your mind, a diagram, a list of words, a spreadsheet,
a person, a toy, an equation, a demonstration, or a program
• …that heuristically represents (literally, re-presents) another idea,
activity, or object…
such as something complex that you need to work with or study
• …whereby understanding something about the model may help you to
understand or manipulate the thing that it represents.
- A map is a model that helps to navigate across a terrain.
- 2+2=4 is a model for adding two apples to a basket that already has two apples.
- Atmospheric models help predict where hurricanes will go.
- A fashion model helps understand how clothing would look on actual humans.
- Your beliefs about what you test are a model of what you test.
There are as many kinds of test coverage as there are
ways to model the system.
Intentionally OR Incidentally
One Way to Model Coverage:
Product Elements (with Quality Criteria)
Quality criteria: Capability, Reliability, Usability, Charisma, Security,
Scalability, Compatibility, Performance, Installability, Supportability,
Testability, Maintainability
Product elements:
• Structure
• Function
• Data
• Interfaces
• Platform
• Operations
• Time
To test a very simple product meticulously,
part of a complex product meticulously,
or to maximize test integrity…
1. Start the test from a known (clean) state.
2. Prefer simple, deterministic actions.
3. Trace test steps to a specified model.
4. Follow established and consistent lab procedures.
5. Make specific predictions, observations and records.
6. Make it easy to reproduce (automation may help).
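A few of the six points above can be sketched as code: a known starting state, simple deterministic actions, specific predictions, and a record that makes the run easy to reproduce. Everything named below is hypothetical; it is an illustration of the discipline, not a prescribed framework.

```python
# A meticulous, reproducible check: seeded inputs (known state), simple
# deterministic actions, explicit predictions, and a log for reproduction.
import random

def run_meticulous_test(product_fn, seed=1234):
    random.seed(seed)                                      # known, clean state
    inputs = [random.randint(0, 100) for _ in range(5)]    # simple actions
    log = {"seed": seed, "inputs": inputs, "results": []}  # record + reproduce
    for x in inputs:
        predicted = x * 2                                  # a specific prediction
        observed = product_fn(x)
        log["results"].append((x, predicted, observed))
        assert observed == predicted, f"fail: doubled({x}) -> {observed}"
    return log

def doubled(x):          # trivial stand-in for the feature under test
    return x + x

log = run_meticulous_test(doubled)
print("reproduce with seed", log["seed"], "inputs", log["inputs"])
```

Because the seed is logged, anyone can rerun exactly the same sequence, which is the "easy to reproduce" property the slide asks for.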
General Focusing Heuristics
• use test-first approach or unit testing for better code
coverage
• work from prepared test coverage outlines and risk lists
• use diagrams, state models, and the like, and cover them
• apply specific test techniques to address particular coverage
areas
• make careful observations and match to expectations
To do this more rapidly, make preparation and artifacts fast and frugal:
leverage existing materials and avoid repeating yourself.
Emphasize doing; relax planning. You’ll make discoveries along the way!
To find unexpected problems,
elusive problems that occur in sustained field use,
or more problems quickly in a complex product…
1. Start from different states (not necessarily clean).
2. Prefer complex, challenging actions.
3. Generate tests from a variety of models.
4. Question your lab procedures and tools.
5. Try to see everything with open expectations.
6. Make the test hard to pass, instead of easy to reproduce.
That’s a PowerPoint bug!
General Defocusing Heuristics
• diversify your models; intentional coverage in one area can lead
to unintentional coverage in other areas—this is a Good Thing
• diversify your test techniques
• be alert to problems other than the ones that you’re actively
looking for
• welcome and embrace productive distraction
• do some testing that is not oriented towards a specific risk
• use high-volume, randomized automated tests
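The last heuristic above, high-volume randomized checking, is easy to sketch when a trusted parallel implementation can serve as the oracle. The names below are hypothetical; the pattern is generate, run both, compare.

```python
# High-volume randomized checks against a slower-but-trusted parallel oracle.
import random

def fast_sort(xs):                 # the implementation under test (stand-in)
    return sorted(xs)

def oracle_sort(xs):               # trusted parallel implementation
    out = list(xs)
    for i in range(len(out)):      # insertion sort: slow but easy to trust
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

random.seed(7)                     # log the seed so failures are reproducible
for trial in range(10_000):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert fast_sort(xs) == oracle_sort(xs), f"possible problem on {xs}"

print("10,000 random trials, no inconsistencies noticed")
```

The volume is the point: inputs no one would think to script (empty lists, duplicates, all-equal runs) get exercised for free, which suits defocusing.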
DISCUSSION
How Many Test Cases?
What About Quantifying Coverage Overall?
• A nice idea, but we don’t know how to do it in a way
that is consistent with basic measurement theory
• If we describe coverage by counting test cases, we’re
committing reification error.
• If we use percentages to quantify coverage, we need to
establish what 100% looks like.
• But we might do that with respect to some specific models.
• Complex systems may display emergent behaviour.
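The point that percentages only make sense “with respect to some specific models” can be made concrete: pick one explicit model and count against it. The tiny state machine below is hypothetical; 100% is well-defined here only because the model is.

```python
# Coverage quantified against one explicit model: the set of transitions
# in a small (hypothetical) login state machine.
MODEL_TRANSITIONS = {
    ("logged_out", "login_ok"): "logged_in",
    ("logged_out", "login_bad"): "locked",
    ("logged_in", "logout"): "logged_out",
    ("locked", "reset"): "logged_out",
}

def transition_coverage(executed):
    """Percent of modeled transitions exercised by the tests run so far."""
    covered = set(executed) & set(MODEL_TRANSITIONS)
    return 100.0 * len(covered) / len(MODEL_TRANSITIONS)

executed = [("logged_out", "login_ok"), ("logged_in", "logout")]
print(f"{transition_coverage(executed):.0f}% of modeled transitions covered")  # 50%
```

The number says nothing about coverage of any *other* model (data, platforms, timing), which is exactly the slide’s caution about claiming overall coverage.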
Extent of Coverage
• Smoke and sanity
• Can this thing even be tested at all?
• Common, core, and critical
• Can this thing do the things it must do?
• Does it handle happy paths and regular input?
• Can it work?
• Complex, harsh, extreme and exceptional
• Will this thing handle challenging tests, complex data flows,
and malformed input, etc.?
• Will it work?
How Might We Organize,
Record, and Report Coverage?
• automated tools (e.g. profilers, coverage tools)
• annotated diagrams and mind maps
• coverage matrices
• bug taxonomies
• Michael Hunter’s You Are Not Done Yet list
• James Bach’s Heuristic Test Strategy Model
• described at www.satisfice.com
• articles about it at www.developsense.com
• Mike Kelly’s MCOASTER model
• product coverage outlines and risk lists
• session-based test management
• http://www.satisfice.com/sbtm
See three articles here:
http://www.developsense.com/publications.html#coverage
What Does Rapid Testing Look Like?
Concise Documentation Minimizes Waste
(diagram: general reference materials, including a coverage model, a risk model,
a test strategy, testing heuristics, and a risk catalog, feed project-specific
artifacts: a testing playbook, a status dashboard, a schedule, bugs, and issues)
Rapid Testing Documentation
• Recognize
• a requirements document is not the requirements
• a test plan document is not a test plan
• a test script is not a test
• doing, rather than planning, produces results
• Determine where your documentation is on the continuum:
product or tool?
• Keep your tools sharp and lightweight
• Obtain consensus from others as to what’s necessary and what’s
excess in products
• Ask whether reporting test results takes priority over
obtaining test results
• note that in some contexts, it might
• Eliminate unnecessary clerical work
Visualizing Test Progress
Visualizing Test Progress
Visualizing Test Progress
See “A Sticky Situation”, Better Software, February 2012
What IS Exploratory Testing?
• Simultaneous test design, test
execution, and learning.
• James Bach, 1995
But maybe it would be a good idea to underscore
why that’s important…
What IS Exploratory Testing?
• I follow (and to some degree contributed to) Kaner’s definition, which
was refined over several peer conferences through 2007:
Exploratory software testing is…
• a style of software testing
• that emphasizes the personal freedom and responsibility
• of the individual tester
• to continually optimize the value of his or her work
• by treating test design, test execution, test result interpretation,
and test-related learning
• as mutually supportive activities
• that run in parallel
• throughout the project.
See Kaner, “Exploratory Testing After 23 Years”, www.kaner.com/pdfs/ETat23.pdf
So maybe it would be
a good idea to keep it
brief most of the
time…
Why Exploratory Approaches?
• Systems are far
more than
collections of
functions
• Systems typically
depend upon and
interact with
many external
systems
Why Exploratory Approaches?
• Systems are too
complex for
individuals to
comprehend and
describe
• Products evolve
rapidly in ways
that cannot be
anticipated
In the future, developers will likely do more verification and validation at the
unit level than they have done before.
Testers must explore, discover, investigate, and learn about the system.
Why Exploratory Approaches?
• Developers are using tools and frameworks that
make programming more productive, but that
may manifest more emergent behaviour.
• Developers are increasingly adopting unit testing
and test-driven development.
• The traditional focus is on verification, validation,
and confirmation.
The new focus must be on exploration, discovery,
investigation, and learning.
Why Exploratory Approaches?
• We don’t have time to waste
• preparing wastefully elaborate written plans
• for complex products
• built from many parts
• and interacting with many systems
• (many of which we don’t control…
• …or even understand)
• where everything is changing over time
• and there’s so much learning to be done
• and the result, not the plan, is paramount.
Questions About Scripts…
arrows and cycles
Where do scripts
come from?
What happens when the
unexpected happens
during a script?
What do we do
with what we
learn?
Will everyone follow the same
script the same way?
(task performing)
Questions About Exploration…
arrows and cycles
(value seeking)
Where does
exploration
come from?
What happens when
the unexpected
happens during
exploration?
What do we do
with what we
learn?
Will everyone
explore the same way?
Exploration is Not Just Action
arrows and cycles
You can put them together!
arrows and cycles
What Exploratory Testing Is Not
• Touring
• http://www.developsense.com/blog/2011/12/what-exploratory-testing-is-not-part-1-
touring/
• After-Everything-Else Testing
• http://www.developsense.com/blog/2011/12/what-exploratory-testing-is-not-part-2-after-
everything-else-testing/
• Tool-Free Testing
• http://www.developsense.com/blog/2011/12/what-exploratory-testing-is-not-part-3-tool-
free-testing/
• Quick Tests
• http://www.developsense.com/blog/2011/12/what-exploratory-testing-is-not-part-4-quick-
tests/
• Undocumented Testing
• http://www.developsense.com/blog/2011/12/what-exploratory-testing-is-not-part-5-
undocumented-testing/
• “Experienced-Based” Testing
• http://www.satisfice.com/blog/archives/664
• defined by any specific example of exploratory testing
• http://www.satisfice.com/blog/archives/678
Exploratory Testing
• IS NOT “random testing” (or sloppy,
or slapdash testing)
• IS NOT “unstructured testing”
• IS NOT procedurally structured
• IS NOT unteachable
• IS NOT unmanageable
• IS NOT scripted
• IS NOT a technique
• IS “ad hoc”, in the dictionary sense,
“to the purpose”
• IS structured and rigorous
• IS cognitively structured
• IS highly teachable
• IS highly manageable
• IS chartered
• IS an approach
The way we practice and teach it, exploratory testing…
Contrasting Approaches
Scripted Testing
• Is directed from elsewhere
• Is determined in advance
• Is about confirmation
• Is about controlling tests
• Emphasizes predictability
• Emphasizes decidability
• Like making a speech
• Like playing from a score
Exploratory Testing
• Is directed from within
• Is determined in the moment
• Is about investigation
• Is about improving test design
• Emphasizes adaptability
• Emphasizes learning
• Like having a conversation
• Like playing in a jam session
Exploratory Testing IS Structured
• Exploratory testing, as we teach it, is a structured process conducted by a
skilled tester, or by lesser skilled testers or users working under supervision.
• The structure of ET comes from many sources:
• Test design heuristics
• Chartering
• Time boxing
• Perceived product risks
• The nature of specific tests
• The structure of the product being tested
• The process of learning the product
• Development activities
• Constraints and resources afforded by the project
• The skills, talents, and interests of the tester
• The overall mission of testing
In other words,
it’s not “random”,
but systematic.
Not procedurally
structured, but
cognitively structured.
In excellent exploratory testing, one structure tends to
dominate all the others:
Exploratory testers construct a compelling story of
their testing. It is this story that
gives ET a backbone.
Exploratory Testing IS Structured
To test is to compose, edit, narrate, and justify
THREE stories.
A story about the status of the PRODUCT…
…about what it does, how it failed, and how it might fail...
…in ways that matter to your various clients.
A story about HOW YOU TESTED it…
…how you configured, operated and observed it…
…how you recognized problems…
…about what you have and haven’t tested yet…
…and what you won’t test at all (unless the client objects)…
A story about how GOOD that testing was…
…the risks and costs of (not) testing…
…what made testing harder or slower…
…how testable (or not) the product is…
…what you need and what you recommend.
Bugs
Issues
Product any good?
How do you know?
Why should I be pleased
with your work?
What does “taking advantage of resources” mean?
• Mission
• The problem we are here to solve for our customer.
• Information
• Information about the product or project that is needed for testing.
• Developer relations
• How you get along with the programmers.
• Team
• Anyone who will perform or support testing.
• Equipment & tools
• Hardware, software, or documents required to administer testing.
• Schedule
• The sequence, duration, and synchronization of project events.
• Test Items
• The product to be tested.
• Deliverables
• The observable products of the test project.
“Ways to test…”?
General Test Techniques
• Function testing
• Domain testing
• Stress testing
• Flow testing
• Scenario testing
• Claims testing
• User testing
• Risk testing
• Automatic checking
• Happy Path
• Tour the Product
• Sample Data
• Variables
• Files
• Complexity
• Menus & Windows
• Keyboard & Mouse
A quick test is a cheap test that has some value
but requires little preparation, knowledge,
or time to perform.
• Interruptions
• Undermining
• Adjustments
• Dog Piling
• Continuous Use
• Feature Interactions
• Click on Help
Cost as a Simplifying Factor
Try quick tests as well as careful tests
• Input Constraint Attack
• Click Frenzy
• Shoe Test
• Blink Test
• Error Message Hangover
A quick test is a cheap test that has some value
but requires little preparation, knowledge,
or time to perform.
• Resource Starvation
• Multiple Instances
• Crazy Configs
• Cheap Tools
Cost as a Simplifying Factor
Try quick tests as well as careful tests
Touring the Product:
Mike Kelly’s FCC CUTS VIDS
• Feature tour
• Complexity tour
• Claims tour
• Configuration tour
• User tour
• Testability tour
• Scenario tour
• Variability tour
• Interoperability tour
• Data tour
• Structure tour
Summing Up:
Themes of Rapid Testing
• Put the tester's mind at the center of testing.
• Learn to deal with complexity and ambiguity.
• Learn to tell a compelling testing story.
• Develop testing skills through practice, not just talk.
• Use heuristics to guide and structure your process.
• Replace “check for…” with “look for problems in…”
• Be a service to the project community, not an obstacle.
• Consider cost vs. value in all your testing activity.
• Diversify your team and your tactics.
• Dynamically manage the focus of your work.
• Your context should drive your choices, both of which
evolve over time.
Designed by James Bach, Satisfice, Inc.
Version 5.2.1, 9/17/2013
james@satisfice.com
www.satisfice.com
Copyright 1996-2013, Satisfice, Inc.
The Heuristic Test Strategy Model is a set of patterns for designing a test strategy. The immediate purpose of
this model is to remind testers of what to think about when they are creating tests. Ultimately, it is intended to
be customized and used to facilitate dialog and direct self-learning among professional testers.
Project Environment includes resources, constraints, and other elements in the project that may enable or
hobble our testing. Sometimes a tester must challenge constraints, and sometimes accept them.
Product Elements are things that you intend to test. Software is complex and invisible. Take care to cover all of
it that matters, not just the parts that are easy to see.
Quality Criteria are the rules, values, and sources that allow you as a tester to determine if the product has
problems. Quality criteria are multidimensional and often hidden or self-contradictory.
Test Techniques are heuristics for creating tests. All techniques involve some sort of analysis of project
environment, product elements, and quality criteria.
Perceived Quality is the result of testing. You can never know the "actual" quality of a software product, but
through the application of a variety of tests, you can make an informed assessment of it.
General Test Techniques
A test technique is a heuristic for creating tests. There are many interesting techniques. The list includes nine general
techniques. By “general technique” we mean that the technique is simple and universal enough to apply to a wide variety
of contexts. Many specific techniques are based on one or more of these nine. And an endless variety of specific test
techniques may be constructed by combining one or more general techniques with coverage ideas from the other lists in
this model.
Function Testing
Test what it can do
1. Identify things that the product can do (functions and
sub-functions).
2. Determine how you’d know if a function was capable of
working.
3. Test each function, one at a time.
4. See that each function does what it’s supposed to do and
not what it isn’t supposed to do.
Claims Testing
Verify every claim
1. Identify reference materials that include claims about
the product (implicit or explicit). Consider SLAs, EULAs,
advertisements, specifications, help text, manuals, etc.
2. Analyze individual claims, and clarify vague claims.
3. Verify that each claim about the product is true.
4. If you’re testing from an explicit specification, expect it
and the product to be brought into alignment.
Domain Testing
Divide and conquer the data
1. Look for any data processed by the product. Look at
outputs as well as inputs.
2. Decide which particular data to test with. Consider
things like boundary values, typical values, convenient
values, invalid values, or best representatives.
3. Consider combinations of data worth testing together.
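Step 2 above can be sketched in code. This is an illustrative sketch only: the valid range 1..100 and the function name `boundary_values` are invented for the example, not part of the technique's definition.

```python
# Hypothetical sketch of step 2: picking boundary, typical, and
# invalid representative values for a numeric input with an
# assumed valid range of 1..100.

def boundary_values(lo, hi):
    """Return classic domain-testing picks for an integer range."""
    return {
        # on-boundary, just-inside, and a typical mid-range value
        "valid": [lo, lo + 1, (lo + hi) // 2, hi - 1, hi],
        # best representatives of the invalid neighbouring ranges
        "invalid": [lo - 1, hi + 1],
    }

picks = boundary_values(1, 100)
print(picks["valid"])    # [1, 2, 50, 99, 100]
print(picks["invalid"])  # [0, 101]
```

Step 3 (combinations) would then pair values from several such partitions rather than testing each field in isolation.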
User Testing
Involve the users
1. Identify categories and roles of users.
2. Determine what each category of user will do (use
cases), how they will do it, and what they value.
3. Get real user data, or bring real users in to test.
4. Otherwise, systematically simulate a user (be careful—
it’s easy to think you’re like a user even when you’re
not).
5. Powerful user testing is that which involves a variety of
users and user roles, not just one.
Stress Testing
Overwhelm the product
1. Look for sub-systems and functions that are vulnerable
to being overloaded or “broken” in the presence of
challenging data or constrained resources.
2. Identify data and resources related to those sub-systems and functions.
3. Select or generate challenging data, or resource
constraint conditions to test with: e.g., large or complex
data structures, high loads, long test runs, many test
cases, low memory conditions.
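Step 3 (generating challenging data) might look like this minimal sketch; the generators and the particular sizes are invented for illustration, not prescriptions.

```python
# Hypothetical generators for "challenging" stress-test inputs.
import random
import string

def big_string(n):
    """A large blob of arbitrary printable characters."""
    return "".join(random.choice(string.printable) for _ in range(n))

def deep_structure(depth):
    """A deeply nested list; may expose recursion or parser limits."""
    node = []
    for _ in range(depth):
        node = [node]
    return node

challenges = {
    "huge_text": big_string(1_000_000),   # large data
    "deep_list": deep_structure(10_000),  # complex structure
}
```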
Risk Testing
Imagine a problem, then look for it.
1. What kinds of problems could the product have?
2. Which kinds matter most? Focus on those.
3. How would you detect them if they were there?
4. Make a list of interesting problems and design tests
specifically to reveal them.
5. It may help to consult experts, design documentation,
past bug reports, or apply risk heuristics.
Flow Testing
Do one thing after another
1. Perform multiple activities connected end-to-end; for
instance, conduct tours through a state model.
2. Don’t reset the system between actions.
3. Vary timing and sequencing, and try parallel threads.
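As a sketch of step 1 (conducting tours through a state model) combined with step 2 (no resets), here is a toy random walk; the states, actions, and transition table are invented for the example.

```python
# Hypothetical sketch: a random tour through a toy state model,
# chaining actions end-to-end without resetting between them.
import random

TRANSITIONS = {
    "logged_out": ["log_in"],
    "logged_in": ["open_doc", "log_out"],
    "editing": ["save", "close_doc"],
}
NEXT_STATE = {
    "log_in": "logged_in",
    "log_out": "logged_out",
    "open_doc": "editing",
    "save": "editing",
    "close_doc": "logged_in",
}

def tour(start="logged_out", steps=20, seed=1):
    rng = random.Random(seed)  # seeded so the tour is reproducible
    state, path = start, []
    for _ in range(steps):
        action = rng.choice(TRANSITIONS[state])
        path.append(action)
        state = NEXT_STATE[action]  # no reset: actions chain together
    return path

print(tour())
```

Varying the seed, step count, or timing between actions (step 3) produces different flows through the same model.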
Automatic Checking
Check a million different facts
1. Look for or develop tools that can perform a lot of actions and check a lot of things.
2. Consider tools that partially automate test coverage.
3. Consider tools that partially automate oracles.
4. Consider automatic change detectors.
5. Consider automatic test data generators.
6. Consider tools that make human testing more powerful.
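The core idea of checking many facts can be sketched minimally as follows; `round_trip` is a stand-in behaviour invented for the example, not a real product function.

```python
# Hypothetical sketch: a tiny harness that exercises one behaviour
# over many inputs and records every mismatch instead of stopping
# at the first failure.

def round_trip(n):
    # Stand-in for the behaviour under test: encode, then decode.
    return int(str(n))

def check_many(fn, inputs):
    """Run fn over all inputs; return (input, expected, actual) mismatches."""
    failures = []
    for x in inputs:
        expected = x          # the "fact" being checked: round-trip fidelity
        actual = fn(x)
        if actual != expected:
            failures.append((x, expected, actual))
    return failures

failures = check_many(round_trip, range(100_000))
print(f"{len(failures)} mismatches out of 100000 checks")
```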
Scenario Testing
Test to a compelling story
1. Begin by thinking about everything going on around the
product.
2. Design tests that involve meaningful and complex
interactions with the product.
3. A good scenario test is a compelling story of how
someone who matters might do something that matters
with the product.
Project Environment
Creating and executing tests is the heart of the test project. However, there are many factors in the project environment
that are critical to your decision about what particular tests to create. In each category, below, consider how that factor
may help or hinder your test design process. Try to exploit every resource.
Mission. Your purpose on this project, as understood by you and your customers.
 Do you know who your customers are? Whose opinions matter? Who benefits or suffers from the work you do?
 Do you know what your customers expect of you on this project? Do you agree?
 Maybe your customers have strong ideas about what tests you should create and run.
 Maybe they have conflicting expectations. You may have to help identify and resolve those.
Information. Information about the product or project that is needed for testing.
 Whom can we consult with to learn about this project?
 Are there any engineering documents available? User manuals? Web-based materials? Specs? User stories?
 Does this product have a history? Old problems that were fixed or deferred? Pattern of customer complaints?
 Is your information current? How are you apprised of new or changing information?
 Are there any comparable products or projects from which we can glean important information?
Developer Relations. How you get along with the programmers.
 Hubris: Does the development team seem overconfident about any aspect of the product?
 Defensiveness: Is there any part of the product the developers seem strangely opposed to having tested?
 Rapport: Have you developed a friendly working relationship with the programmers?
 Feedback loop: Can you communicate quickly, on demand, with the programmers?
 Feedback: What do the developers think of your test strategy?
Test Team. Anyone who will perform or support testing.
 Do you know who will be testing? Do you have enough people?
 Are there people not on the “test team” that might be able to help? People who’ve tested similar products before and might
have advice? Writers? Users? Programmers?
 Are there particular test techniques that the team has special skill or motivation to perform?
 Is any training needed? Is any available?
 Who is co-located and who is elsewhere? Will time zones be a problem?
Equipment & Tools. Hardware, software, or documents required to administer testing.
 Hardware: Do you have all the equipment you need to execute the tests? Is it set up and ready to go?
 Automation: Are any test tools needed? Are they available?
 Probes: Are any tools needed to aid in the observation of the product under test?
 Matrices & Checklists: Are any documents needed to track or record the progress of testing?
Schedule. The sequence, duration, and synchronization of project events.
 Test Design: How much time do you have? Are there tests better to create later than sooner?
 Test Execution: When will tests be executed? Are some tests executed repeatedly, say, for regression purposes?
 Development: When will builds be available for testing, features added, code frozen, etc.?
 Documentation: When will the user documentation be available for review?
Test Items. The product to be tested.
 Scope: What parts of the product are and are not within the scope of your testing responsibility?
 Availability: Do you have the product to test? Do you have test platforms available? When do you get new builds?
 Volatility: Is the product constantly changing? What will be the need for retesting?
 New Stuff: What has recently been changed or added in the product?
 Testability: Is the product functional and reliable enough that you can effectively test it?
 Future Releases: What part of your tests, if any, must be designed to apply to future releases of the product?
Deliverables. The observable products of the test project.
 Content: What sort of reports will you have to make? Will you share your working notes, or just the end results?
 Purpose: Are your deliverables provided as part of the product? Does anyone else have to run your tests?
 Standards: Is there a particular test documentation standard you’re supposed to follow?
 Media: How will you record and communicate your reports?
Product Elements
Ultimately a product is an experience or solution provided to a customer. Products have many dimensions. So, to test well,
we must examine those dimensions. Each category, listed below, represents an important and unique aspect of a product.
Testers who focus on only a few of these are likely to miss important bugs.
Structure. Everything that comprises the physical product.
 Code: the code structures that comprise the product, from executables to individual routines.
 Hardware: any hardware component that is integral to the product.
 Non-executable files: any files other than multimedia or programs, like text files, sample data, or help files.
 Collateral: anything beyond software and hardware that is also part of the product, such as paper documents, web links and content,
packaging, license agreements, etc.
Function. Everything that the product does.
 Application: any function that defines or distinguishes the product or fulfills core requirements.
 Calculation: any arithmetic function or arithmetic operations embedded in other functions.
 Time-related: time-out settings; daily or month-end reports; nightly batch jobs; time zones; business holidays; interest calculations; terms and
warranty periods; chronograph functions.
 Transformations: functions that modify or transform something (e.g. setting fonts, inserting clip art, withdrawing money from account).
 Startup/Shutdown: each method and interface for invocation and initialization as well as exiting the product.
 Multimedia: sounds, bitmaps, videos, or any graphical display embedded in the product.
 Error Handling: any functions that detect and recover from errors, including all error messages.
 Interactions: any interactions between functions within the product.
 Testability: any functions provided to help test the product, such as diagnostics, log files, asserts, test menus, etc.
Data. Everything that the product processes.
 Input: any data that is processed by the product.
 Output: any data that results from processing by the product.
 Preset: any data that is supplied as part of the product, or otherwise built into it, such as prefabricated databases, default values, etc.
 Persistent: any data that is stored internally and expected to persist over multiple operations. This includes modes or states of the product,
such as options settings, view modes, contents of documents, etc.
 Sequences/Combinations: any ordering or permutation of data, e.g. word order, sorted vs. unsorted data, order of tests.
 Cardinality: Numbers of objects or fields may vary (e.g. zero, one, many, max, open limit). Some may have to be unique (e.g. database keys).
 Big/Little: variations in the size and aggregation of data.
 Noise: any data or state that is invalid, corrupted, or produced in an uncontrolled or incorrect fashion.
 Lifecycle: transformations over the lifetime of a data entity as it is created, accessed, modified, and deleted.
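The cardinality idea above ("zero, one, many, max") can be turned into test data mechanically. This sketch is illustrative; `make_item` and `maximum` are invented names, not part of the model.

```python
# Hypothetical sketch: generating list-valued test data that covers
# the cardinality cases "zero, one, many, max".
def cardinality_cases(make_item, maximum):
    return {
        "zero": [],
        "one":  [make_item(0)],
        "many": [make_item(i) for i in range(3)],
        "max":  [make_item(i) for i in range(maximum)],
    }

cases = cardinality_cases(lambda i: f"item-{i}", maximum=5)
print({name: len(data) for name, data in cases.items()})
# {'zero': 0, 'one': 1, 'many': 3, 'max': 5}
```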
Interfaces. Every conduit by which the product is accessed or expressed.
 User Interfaces: any element that mediates the exchange of data with the user (e.g. displays, buttons, fields, whether physical or virtual).
 System Interfaces: any interface with something other than a user, such as other programs, hard disk, network, etc.
 API/SDK: Any programmatic interfaces or tools intended to allow the development of new applications using this product.
 Import/export: any functions that package data for use by a different product, or interpret data from a different product.
Platform. Everything on which the product depends (and that is outside your project).
 External Hardware: hardware components and configurations that are not part of the shipping product, but are required (or optional) in order for the product to work: systems, servers, memory, keyboards, the Cloud.
 External Software: software components and configurations that are not a part of the shipping product, but are required (or optional) in order for the product to work: operating systems, concurrently executing applications, drivers, fonts, etc.
 Internal Components: libraries and other components that are embedded in your product but are produced outside your project.
Operations. How the product will be used.
 Users: the attributes of the various kinds of users.
 Environment: the physical environment in which the product operates, including such elements as noise, light, and distractions.
 Common Use: patterns and sequences of input that the product will typically encounter. This varies by user.
 Disfavored Use: patterns of input produced by ignorant, mistaken, careless or malicious use.
 Extreme Use: challenging patterns and sequences of input that are consistent with the intended use of the product.
Time. Any relationship between the product and time.
 Input/Output: when input is provided, when output created, and any timing relationships (delays, intervals, etc.) among them.
 Fast/Slow: testing with “fast” or “slow” input; fastest and slowest; combinations of fast and slow.
 Changing Rates: speeding up and slowing down (spikes, bursts, hangs, bottlenecks, interruptions).
 Concurrency: more than one thing happening at once (multi-user, time-sharing, threads, and semaphores, shared data).
Quality Criteria Categories
A quality criterion is some requirement that defines what the product should be. By thinking about different kinds of
criteria, you will be better able to plan tests that discover important problems fast. Each of the items on this list can be
thought of as a potential risk area. For each item below, determine if it is important to your project, then think how you
would recognize if the product worked well or poorly in that regard.
Capability. Can it perform the required functions?
Reliability. Will it work well and resist failure in all required situations?
 Robustness: the product continues to function over time without degradation, under reasonable conditions.
 Error handling: the product resists failure in the case of errors, is graceful when it fails, and recovers readily.
 Data Integrity: the data in the system is protected from loss or corruption.
 Safety: the product will not fail in such a way as to harm life or property.
Usability. How easy is it for a real user to use the product?
 Learnability: the operation of the product can be rapidly mastered by the intended user.
 Operability: the product can be operated with minimum effort and fuss.
 Accessibility: the product meets relevant accessibility standards and works with O/S accessibility features.
Charisma. How appealing is the product?
 Aesthetics: the product appeals to the senses.
 Uniqueness: the product is new or special in some way.
 Necessity: the product possesses the capabilities that users expect from it.
 Usefulness: the product solves a problem that matters, and solves it well.
 Entrancement: users get hooked, have fun, are fully engaged when using the product.
 Image: the product projects the desired impression of quality.
Security. How well is the product protected against unauthorized use or intrusion?
 Authentication: the ways in which the system verifies that a user is who he says he is.
 Authorization: the rights that are granted to authenticated users at varying privilege levels.
 Privacy: the ways in which customer or employee data is protected from unauthorized people.
 Security holes: the ways in which the system cannot enforce security (e.g. social engineering vulnerabilities).
Scalability. How well does the deployment of the product scale up or down?
Compatibility. How well does it work with external components & configurations?
 Application Compatibility: the product works in conjunction with other software products.
 Operating System Compatibility: the product works with a particular operating system.
 Hardware Compatibility: the product works with particular hardware components and configurations.
 Backward Compatibility: the product works with earlier versions of itself.
 Resource Usage: the product doesn’t unnecessarily hog memory, storage, or other system resources.
Performance. How speedy and responsive is it?
Installability. How easily can it be installed onto its target platform(s)?
 System requirements: Does the product recognize if some necessary component is missing or insufficient?
 Configuration: What parts of the system are affected by installation? Where are files and resources stored?
 Uninstallation: When the product is uninstalled, is it removed cleanly?
 Upgrades/patches: Can new modules or versions be added easily? Do they respect the existing configuration?
 Administration: Is installation a process that is handled by special personnel, or on a special schedule?
Development. How well can we create, test, and modify it?
 Supportability: How economical will it be to provide support to users of the product?
 Testability: How effectively can the product be tested?
 Maintainability: How economical is it to build, fix or enhance the product?
 Portability: How economical will it be to port or reuse the technology elsewhere?
 Localizability: How economical will it be to adapt the product for other places?
 
Presented at Ford's 2017 Global IT Learning Summit (GLITS)
Presented at Ford's 2017 Global IT Learning Summit (GLITS)Presented at Ford's 2017 Global IT Learning Summit (GLITS)
Presented at Ford's 2017 Global IT Learning Summit (GLITS)Ron Lazaro
 
A Happy Marriage between Context-Driven and Agile
A Happy Marriage between Context-Driven and AgileA Happy Marriage between Context-Driven and Agile
A Happy Marriage between Context-Driven and AgileIlari Henrik Aegerter
 
Huib Schoots Testing in modern times - a story about Quality and Value - Test...
Huib Schoots Testing in modern times - a story about Quality and Value - Test...Huib Schoots Testing in modern times - a story about Quality and Value - Test...
Huib Schoots Testing in modern times - a story about Quality and Value - Test...FiSTB
 
Michael Bolton - Two Futures of Software Testing
Michael Bolton - Two Futures of Software TestingMichael Bolton - Two Futures of Software Testing
Michael Bolton - Two Futures of Software TestingTEST Huddle
 
Why unvalidated assumption is the enemy of good product
Why unvalidated assumption is the enemy of good productWhy unvalidated assumption is the enemy of good product
Why unvalidated assumption is the enemy of good productSeb Agertoft
 
Try: Fail, Try: Succeed by Tim Grant
Try: Fail, Try: Succeed by Tim GrantTry: Fail, Try: Succeed by Tim Grant
Try: Fail, Try: Succeed by Tim GrantQA or the Highway
 
LEAN: Dream Maker Developments
LEAN: Dream Maker DevelopmentsLEAN: Dream Maker Developments
LEAN: Dream Maker DevelopmentsVadim Davydov
 
Вадим Давидов та Людмила Гребенюк “LEAN: Dream Maker Developments” Kharkiv Pr...
Вадим Давидов та Людмила Гребенюк “LEAN: Dream Maker Developments” Kharkiv Pr...Вадим Давидов та Людмила Гребенюк “LEAN: Dream Maker Developments” Kharkiv Pr...
Вадим Давидов та Людмила Гребенюк “LEAN: Dream Maker Developments” Kharkiv Pr...Lviv Startup Club
 
Agile for Me- CodeStock 2009
Agile for Me- CodeStock 2009Agile for Me- CodeStock 2009
Agile for Me- CodeStock 2009Adrian Carr
 
How do we fix testing
How do we fix testingHow do we fix testing
How do we fix testingPeter Varhol
 
Exploratory Testing Explained
Exploratory Testing ExplainedExploratory Testing Explained
Exploratory Testing ExplainedTechWell
 
Agile bodensee - Agile Testing: Bug prevention vs. bug detection
Agile bodensee - Agile Testing: Bug prevention vs. bug detectionAgile bodensee - Agile Testing: Bug prevention vs. bug detection
Agile bodensee - Agile Testing: Bug prevention vs. bug detectionMichael Palotas
 
Rapid Software Testing: Strategy
Rapid Software Testing: StrategyRapid Software Testing: Strategy
Rapid Software Testing: StrategyTechWell
 
Quality, Testing & Agile Methodologies
Quality, Testing & Agile MethodologiesQuality, Testing & Agile Methodologies
Quality, Testing & Agile MethodologiesJohan Hoberg
 
Intro to Product Discovery
Intro to Product DiscoveryIntro to Product Discovery
Intro to Product DiscoveryMatthew Godfrey
 

Ähnlich wie A Rapid Introduction to Rapid Software Testing (20)

A Rapid Introduction to Rapid Software Testing
A Rapid Introduction to Rapid Software TestingA Rapid Introduction to Rapid Software Testing
A Rapid Introduction to Rapid Software Testing
 
Testing the unknown: the art and science of working with hypothesis
Testing the unknown: the art and science of working with hypothesisTesting the unknown: the art and science of working with hypothesis
Testing the unknown: the art and science of working with hypothesis
 
Rapid software testing
Rapid software testingRapid software testing
Rapid software testing
 
Presented at Ford's 2017 Global IT Learning Summit (GLITS)
Presented at Ford's 2017 Global IT Learning Summit (GLITS)Presented at Ford's 2017 Global IT Learning Summit (GLITS)
Presented at Ford's 2017 Global IT Learning Summit (GLITS)
 
A Happy Marriage between Context-Driven and Agile
A Happy Marriage between Context-Driven and AgileA Happy Marriage between Context-Driven and Agile
A Happy Marriage between Context-Driven and Agile
 
Huib Schoots Testing in modern times - a story about Quality and Value - Test...
Huib Schoots Testing in modern times - a story about Quality and Value - Test...Huib Schoots Testing in modern times - a story about Quality and Value - Test...
Huib Schoots Testing in modern times - a story about Quality and Value - Test...
 
Michael Bolton - Two Futures of Software Testing
Michael Bolton - Two Futures of Software TestingMichael Bolton - Two Futures of Software Testing
Michael Bolton - Two Futures of Software Testing
 
Why unvalidated assumption is the enemy of good product
Why unvalidated assumption is the enemy of good productWhy unvalidated assumption is the enemy of good product
Why unvalidated assumption is the enemy of good product
 
ATD2K16
ATD2K16ATD2K16
ATD2K16
 
Try: Fail, Try: Succeed by Tim Grant
Try: Fail, Try: Succeed by Tim GrantTry: Fail, Try: Succeed by Tim Grant
Try: Fail, Try: Succeed by Tim Grant
 
LEAN: Dream Maker Developments
LEAN: Dream Maker DevelopmentsLEAN: Dream Maker Developments
LEAN: Dream Maker Developments
 
Вадим Давидов та Людмила Гребенюк “LEAN: Dream Maker Developments” Kharkiv Pr...
Вадим Давидов та Людмила Гребенюк “LEAN: Dream Maker Developments” Kharkiv Pr...Вадим Давидов та Людмила Гребенюк “LEAN: Dream Maker Developments” Kharkiv Pr...
Вадим Давидов та Людмила Гребенюк “LEAN: Dream Maker Developments” Kharkiv Pr...
 
Agile for Me- CodeStock 2009
Agile for Me- CodeStock 2009Agile for Me- CodeStock 2009
Agile for Me- CodeStock 2009
 
How do we fix testing
How do we fix testingHow do we fix testing
How do we fix testing
 
Exploratory Testing Explained
Exploratory Testing ExplainedExploratory Testing Explained
Exploratory Testing Explained
 
Agile bodensee - Agile Testing: Bug prevention vs. bug detection
Agile bodensee - Agile Testing: Bug prevention vs. bug detectionAgile bodensee - Agile Testing: Bug prevention vs. bug detection
Agile bodensee - Agile Testing: Bug prevention vs. bug detection
 
Rapid Software Testing: Strategy
Rapid Software Testing: StrategyRapid Software Testing: Strategy
Rapid Software Testing: Strategy
 
Quality, Testing & Agile Methodologies
Quality, Testing & Agile MethodologiesQuality, Testing & Agile Methodologies
Quality, Testing & Agile Methodologies
 
Dallas Techologies
Dallas TechologiesDallas Techologies
Dallas Techologies
 
Intro to Product Discovery
Intro to Product DiscoveryIntro to Product Discovery
Intro to Product Discovery
 

Mehr von TechWell

Failing and Recovering
Failing and RecoveringFailing and Recovering
Failing and RecoveringTechWell
 
Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization TechWell
 
Test Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTest Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTechWell
 
System-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartSystem-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartTechWell
 
Build Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyBuild Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyTechWell
 
Testing Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for SuccessTesting Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for SuccessTechWell
 
Implement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowImplement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowTechWell
 
Develop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityDevelop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityTechWell
 
Eliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyEliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyTechWell
 
Transform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTransform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTechWell
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipTechWell
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsTechWell
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GameTechWell
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsTechWell
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationTechWell
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessTechWell
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateTechWell
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessTechWell
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTechWell
 

Mehr von TechWell (20)

Failing and Recovering
Failing and RecoveringFailing and Recovering
Failing and Recovering
 
Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization
 
Test Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTest Design for Fully Automated Build Architecture
Test Design for Fully Automated Build Architecture
 
System-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartSystem-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good Start
 
Build Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyBuild Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test Strategy
 
Testing Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for SuccessTesting Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for Success
 
Implement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowImplement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlow
 
Develop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityDevelop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your Sanity
 
Ma 15
Ma 15Ma 15
Ma 15
 
Eliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyEliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps Strategy
 
Transform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTransform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOps
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—Leadership
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile Teams
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile Game
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps Implementation
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery Process
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to Automate
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for Success
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile Transformation
 

Kürzlich hochgeladen

Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Wonjun Hwang
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
The Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfThe Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfSeasiaInfotech2
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 

Kürzlich hochgeladen (20)

Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
The Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfThe Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdf
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 

A Rapid Introduction to Rapid Software Testing

3
Assumptions About You
• You test software, or any other complex human creation.
• You have at least some control over the design of your tests and some time to create new tests.
• You are worried that your test process is spending too much time and resources on things that aren’t important.
• You test under uncertainty and time pressure.
• Your major goal is to find important problems quickly.
• You want to get very good at (software) testing.

A Question
What makes testing harder or slower?
Premises of Rapid Testing
1. Software projects and products are relationships between people.
2. Each project occurs under conditions of uncertainty and time pressure.
3. Despite our best hopes and intentions, some degree of inexperience, carelessness, and incompetence is normal.
4. A test is an activity; it is a performance, not an artifact.

Premises of Rapid Testing (continued)
5. Testing’s purpose is to discover the status of the product and any threats to its value, so that our clients can make informed decisions about it.
6. We commit to performing credible, cost-effective testing, and we will inform our clients of anything that threatens that commitment.
7. We will not knowingly or negligently mislead our clients, our colleagues, or ourselves.
8. Testers accept responsibility for the quality of their work, although they cannot control the quality of the product.
7
Rapid Testing
Rapid testing is a mind-set and a skill-set of testing focused on how to do testing more quickly and less expensively, with excellent results. This is a general testing methodology; it adapts to any kind of project or product.

8
How does Rapid Testing compare with other kinds of testing?
Rapid testing may not be exhaustive, but it is thorough enough and quick enough. It’s less work than ponderous testing, and it might be less work than slapdash testing. It fulfills the mission of testing.
Management likes to talk about exhaustive testing, but they don’t want to fund it and they don’t know how to do it. You can always test quickly, but it might be poor testing. When testing is turned into an elaborate set of rote tasks, it becomes ponderous without really being thorough.
Plotted against More Work & Time (Cost) versus Better Thinking & Better Testing (Value):
• Slapdash: much faster, cheaper, and easier
• Ponderous: slow, expensive, and easier
• Rapid: faster, less expensive, still challenging
• Exhaustive: slow, very expensive, and difficult
9
Excellent Rapid Technical Work Begins with You
When the ball comes to you…
• Do you know you have the ball?
• Can you receive the pass?
• Do you know what your role and mission are?
• Do you know where your teammates are?
• Are you ready to act, right now?
• Can you let your teammates help you?
• Do you know your options?
• Is your equipment ready?
• Can you read the situation on the field?
• Are you aware of the criticality of the situation?

10
• Rapid test teams are about diverse talents cooperating.
• We call this the elliptical team, as opposed to the team of perfect circles.
• Some important dimensions to vary:
  • Technical skill
  • Domain expertise
  • Temperament (e.g., introvert vs. extrovert)
  • Testing experience
  • Project experience
  • Industry experience
  • Product knowledge
  • Educational background
  • Writing skill
• Diversity makes exploration far more powerful.
• Your team is powerful because of your unique contribution… but you don’t have to be great at everything.
What It Means To Test Rapidly
• Since testing is about finding a potentially infinite number of problems in an infinite space in a finite amount of time, we must…
  • understand our mission and obstacles to fulfilling it
  • know how to recognize problems quickly
  • model the product and the test space to know where to look for problems
  • prefer inexpensive, lightweight, effective tools
  • reduce dependence on expensive, time-consuming artifacts, while getting value from the ones we’ve got
  • do nothing that wastes time or effort
  • tell a credible story about all that

11
One Big Problem in Testing: Formality Bloat
• Much of the time, your testing doesn’t need to be very formal.*
• Even when your testing does need to be formal, you’ll need to do substantial amounts of informal testing in order to figure out how to do excellent formal testing.
• Who says? The FDA. See http://www.satisfice.com/blog/archives/602
• Even in a highly regulated environment, you do formal testing primarily for the auditors. You do informal testing to make sure you don’t lose money, blow things up, or kill people.
* Formal testing means testing that must be done to verify a specific fact, or that must be done in a specific way.
• 9. 7 EXERCISE Test the Famous Triangle 14 What is testing? Serving Your Client If you don’t have an understanding of, and agreement on, the mission of your testing, then doing it “rapidly” would be pointless.
  • 10. 8 Not Enough Product and Project Information? Where do we get test information? What Is A Problem? A problem is…
  • 11. 9 How Do We Recognize Problems? An oracle is… a way to recognize a problem. Learn About Heuristics Heuristics are fallible, “fast and frugal” methods of solving problems, making decisions, or accomplishing tasks. “The engineering method is the use of heuristics to cause the best change in a poorly understood situation within the available resources.” Billy Vaughan Koen Discussion of the Method
  • 12. 10 Heuristics: Generating Solutions Quickly and Inexpensively • Heuristic (adjective): serving to discover or learn • Heuristic (noun): a fallible method for solving a problem or making a decision “Heuristic reasoning is not regarded as final and strict but as provisional and plausible only, whose purpose is to discover the solution to the present problem.” - George Polya, How to Solve It Oracles An oracle is a heuristic principle or mechanism by which we recognize a problem. “...it appeared at least once to meet some requirement to some degree” “...uh, when I ran it” “...that one time” “...on my machine.” “It works!” really means…
  • 13. 11 Familiar Problems If a product is consistent with problems we’ve seen before, we suspect that there might be a problem. Explainability If a product is inconsistent with our ability to explain it (or someone else’s), we suspect that there might be a problem.
  • 14. 12 World If a product is inconsistent with the way the world works, we suspect that there might be a problem. History If a product is inconsistent with previous versions of itself, we suspect that there might be a problem. Okay, so how the #&@ do I print now?
  • 15. 13 Image If a product is inconsistent with an image that the company wants to project, we suspect a problem. Comparable Products WordPad Word When a product seems inconsistent with a product that is in some way comparable, we suspect that there might be a problem.
  • 16. 14 Claims When a product is inconsistent with claims that important people make about it, we suspect a problem. User Expectations When a product is inconsistent with expectations that a reasonable user might have, we suspect a problem.
  • 17. 15 Purpose When a product is inconsistent with its designers’ explicit or implicit purposes, we suspect a problem. Product When a product is inconsistent internally—as when it contradicts itself—we suspect a problem.
  • 18. 16 Statutes and Standards When a product is inconsistent with laws or widely accepted standards, we suspect a problem. 32 Consistency (“this agrees with that”) an important theme in oracle principles • Familiarity: The system is not consistent with the pattern of any familiar problem. • Explainability: The system is consistent with our ability to describe it clearly. • World: The system is consistent with things that we recognize in the world. • History: The present version of the system is consistent with past versions of it. • Image: The system is consistent with an image that the organization wants to project. • Comparable Products: The system is consistent with comparable systems. • Claims: The system is consistent with what important people say it’s supposed to be. • Users’ Expectations: The system is consistent with what users want. • Product: Each element of the system is consistent with comparable elements in the same system. • Purpose: The system is consistent with its purposes, both explicit and implicit. • Standards and Statutes: The system is consistent with applicable laws, or relevant implicit or explicit standards. Consistency heuristics rely on the quality of your models of the product and its context.
  • 19. 17 Consistency heuristics rely on the quality of your models of the product and its context. An oracle doesn’t tell you that there IS a problem. An oracle tells you that you might be seeing a problem. Rely solely on documented, anticipated sources of oracles, and your testing will likely be slower and weaker. Train your mind to recognize patterns of oracles and your testing will likely be faster and your ability to spot problems will be sharper. All Oracles Are Heuristic An oracle can alert you to a possible problem, but an oracle cannot tell you that there is no problem. • A person whose opinion matters. • An opinion held by a person who matters. • A disagreement among people who matter. • A reference document with useful information. • A known good example output. • A known bad example output. • A process or tool by which the output is checked. • A process or tool that helps a tester identify patterns. • A feeling like confusion or annoyance. • A desirable consistency between related things. General Examples of Oracles things that suggest “problem” or “no problem” 34 People Mechanisms Feelings Principles
• 20. 18 Oracles from the Inside Out [diagram: a tacit/explicit by tester/other-people grid relating your feelings & mental models, shared artifacts (specs, tools, etc.), and stakeholders’ feelings & mental models through experience, inference, reference, conference, and observable consistencies] Oracle Cost and Value • Some oracles are more authoritative • but less responsive to change • Some oracles are more consistent • but maybe not up to date • Some oracles are more immediate • but less reliable • Some oracles are more precise • but the precision may be misleading • Some oracles are more accurate • but less precise • Some oracles are more available • but less authoritative • Some oracles are easier to interpret • but more narrowly focused
  • 21. 19 Feelings As Heuristic Triggers For Oracles • An emotional reaction is a trigger to attention and learning • Without emotion, we don’t reason well • See Damasio, The Feeling of What Happens • When you find yourself mildly concerned about something, someone else could be very concerned about it • Observe emotions to help overcome your biases and to evaluate significance An emotion is a signal; consider looking into it 38 All Oracles Are Heuristic • We often do not have oracles that establish a definite correct or incorrect result, in advance. Oracles may reveal themselves to us on the fly, or later. That’s why we use abductive inference. • No single oracle can tell us whether a program (or a feature) is working correctly at all times and in all circumstances. That’s why we use a variety of oracles. • Any program that looks like it’s working, to you, may in fact be failing in some way that happens to fool all of your oracles. That’s why we proceed with humility and critical thinking. • We never know when a test is finished. That’s why we try to maintain uncertainty when everyone else on the project is sure. • You (the tester) can’t know the deep truth about any result. That’s why we report whatever seems likely to be a bug.
  • 22. 20 39 Oracles are Not Perfect And Testers are Not Judges • You don’t need to know for sure if something is a bug; it’s not your job to decide if something is a bug; it’s your job to decide if it’s worth reporting. • You do need to form a justified belief that it MIGHT be a threat to product value in the opinion of someone who matters. • And you must be able to say why you think so; you must be able to cite good oracles… or you will lose credibility. MIP’ing VS. Black Flagging 40 Coping With Difficult Oracle Problems • Ignore the Problem • Ask “so what?” Maybe the value of the information doesn’t justify the cost. • Simplify the Problem • Ask for testability. It usually doesn’t happen by accident. • Built-in oracle. Internal error detection and handling. • Lower the standards. You may be using an unreasonable standard of correctness. • Shift the Problem • Parallel testing. Compare with another instance of a comparable algorithm. • Live oracle. Find an expert who can tell if the output is correct. • Reverse the function. (e.g. 2 x 2 = 4, then 4/2 = 2) • Divide and Conquer the Problem • Spot check. Perform a detailed inspection on one instance out of a set of outputs. • Blink test. Compare or review overwhelming batches of data for patterns that stand out. • Easy input. Use input for which the output is easy to analyze. • Easy output. Some output may be obviously wrong, regardless of input. • Unit test first. Learn about the pieces that make the whole. • Test incrementally. Learn about the product by testing over a period of time.
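The “reverse the function” idea above can be sketched in a few lines. A minimal, hypothetical Python example (sqrt stands in for the function under test; the helper name is mine, not from the course materials):

```python
import math

def reverse_function_check(values):
    """'Reverse the function' oracle: if sqrt is the function under test,
    squaring its output should recover the input (within floating-point
    tolerance), even when no table of expected outputs exists.
    Returns the inputs that failed the round trip."""
    problems = []
    for x in values:
        y = math.sqrt(x)  # function under test
        if not math.isclose(y * y, x, rel_tol=1e-9):  # reversed function as oracle
            problems.append(x)
    return problems

# reverse_function_check([0.25, 1.0, 2.0, 9.0, 1e6]) → []
```

Note the hedge built into the oracle idea itself: a round trip that passes only tells you that this particular oracle saw no problem.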
  • 23. 21 41 “Easy Input” • Fixed Markers. Use distinctive fixed input patterns that are easy to spot in the output. • Statistical Markers. Use populations of data that have distinguishable statistical properties. • Self-Referential Data. Use data that embeds metadata about itself. (e.g. counterstrings) • Easy Input Regions. For specific inputs, the correct output may be easy to calculate. • Outrageous Values. For some inputs, we expect error handling. • Idempotent Input. Try a case where the output will be the same as the input. • Match. Do the “same thing” twice and look for a match. • Progressive Mismatch. Do progressively differing things over time and account for each difference. (code-breaking technique) 42 Oracles Are Linked To Threats To Quality Criteria Any inconsistency may represent diminished value. Many test approaches focus on capability (functionality) and underemphasize the other criteria. Capability Scalability Reliability Compatibility Usability Performance Charisma Installability Security Development
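The “Self-Referential Data” idea above is easiest to see in code. This is a sketch of a counterstring generator in the spirit of James Bach’s counterstrings (the function name and exact construction are my own, not taken from the course):

```python
def counterstring(length, marker="*"):
    """Build self-referential test data: each marker's 1-based position in
    the final string equals the number written immediately before it, so a
    truncated field shows at a glance how many characters survived.
    Built back-to-front so the recorded positions come out correct."""
    parts = []
    pos = length
    while pos > 0:
        chunk = str(pos) + marker  # e.g. "10*" records a marker at position 10
        if len(chunk) > pos:
            chunk = chunk[-pos:]   # not enough room left; keep only the tail
        parts.append(chunk)
        pos -= len(chunk)
    return "".join(reversed(parts))

# counterstring(10) → "*3*5*7*10*"  (each "*" sits at the position its number names)
```

Paste such a string into an input field: whatever number precedes the last visible marker tells you exactly where the field truncated it.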
• 24. 22 43 Oracles Are Linked To Threats To Quality Criteria Any inconsistency may represent diminished value. Many test approaches focus on capability (functionality) and underemphasize the other criteria. Supportability Testability Maintainability Portability Localization Focusing on Preparation and Skill Can Reduce Documentation Bloat 3.0 Test Procedures 3.1 General testing protocol. • In the test descriptions that follow, the word “verify” is used to highlight specific items that must be checked. In addition to those items a tester shall, at all times, be alert for any unexplained or erroneous behavior of the product. The tester shall bear in mind that, regardless of any specific requirements for any specific test, there is the overarching general requirement that the product shall not pose an unacceptable risk of harm to the patient, including an unacceptable risk under reasonably foreseeable misuse. • Test personnel requirements: The tester shall be thoroughly familiar with the generator and workstation FRS, as well as with the working principles of the devices themselves. The tester shall also know the working principles of the power test jig and associated software, including how to configure and calibrate it and how to recognize if it is not working correctly. The tester shall have sufficient skill in data analysis and measurement theory to make sense of statistical test results. The tester shall be sufficiently familiar with test design to complement this protocol with exploratory testing, in the event that anomalies appear that require investigation. The tester shall know how to keep test records to a credible, professional standard.
  • 25. 23 Remember… For skilled testers, good testing isn’t just about pass vs. fail. For skilled testers, testing is about problem vs. no problem. Where Do We Look For Problems? Coverage is… how much of the product has been tested.
• 26. 24 Coverage is “how much of the product we have tested.” What IS Coverage? It’s the extent to which we have traveled over some map of the product. MODELS Models • A model is an idea, activity, or object… such as an idea in your mind, a diagram, a list of words, a spreadsheet, a person, a toy, an equation, a demonstration, or a program… • …that heuristically represents (literally, re-presents) another idea, activity, or object… such as something complex that you need to work with or study… • …whereby understanding something about the model may help you to understand or manipulate the thing that it represents. - A map is a model that helps to navigate across a terrain. - 2+2=4 is a model for adding two apples to a basket that already has two apples. - Atmospheric models help predict where hurricanes will go. - A fashion model helps understand how clothing would look on actual humans. - Your beliefs about what you test are a model of what you test.
  • 27. 25 There are as many kinds of test coverage as there are ways to model the system. Intentionally OR Incidentally One Way to Model Coverage: Product Elements (with Quality Criteria) Capability Reliability Usability Charisma Security Scalability Compatibility Performance Installability Supportability Testability Maintainability • Structure • Function • Data • Interfaces • Platform • Operations • Time
  • 28. 26 51 To test a very simple product meticulously, part of a complex product meticulously, or to maximize test integrity… 1. Start the test from a known (clean) state. 2. Prefer simple, deterministic actions. 3. Trace test steps to a specified model. 4. Follow established and consistent lab procedures. 5. Make specific predictions, observations and records. 6. Make it easy to reproduce (automation may help). General Focusing Heuristics • use test-first approach or unit testing for better code coverage • work from prepared test coverage outlines and risk lists • use diagrams, state models, and the like, and cover them • apply specific test techniques to address particular coverage areas • make careful observations and match to expectations To do this more rapidly, make preparation and artifacts fast and frugal: leverage existing materials and avoid repeating yourself. Emphasize doing; relax planning. You’ll make discoveries along the way!
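The meticulous-testing steps above map naturally onto a conventional automated check. A small sketch, assuming pytest-style bare assertions (the shuffled-list subject is a hypothetical stand-in for the product):

```python
import random

def test_shuffle_preserves_elements():
    """Known clean state, a simple deterministic action, and a specific
    prediction recorded in the assertion (steps 1, 2, and 5 above)."""
    rng = random.Random(42)                  # 1. start from a known (clean) state
    data = list(range(10))
    rng.shuffle(data)                        # 2. simple, deterministic action
    assert sorted(data) == list(range(10))   # 5. specific prediction and observation
    # 6. the fixed seed makes the exact shuffled order reproducible as well
```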
  • 29. 27 53 To find unexpected problems, elusive problems that occur in sustained field use, or more problems quickly in a complex product… 1. Start from different states (not necessarily clean). 2. Prefer complex, challenging actions. 3. Generate tests from a variety of models. 4. Question your lab procedures and tools. 5. Try to see everything with open expectations. 6. Make the test hard to pass, instead of easy to reproduce. That’s a PowerPoint bug! General Defocusing Heuristics • diversify your models; intentional coverage in one area can lead to unintentional coverage in other areas—this is a Good Thing • diversify your test techniques • be alert to problems other than the ones that you’re actively looking for • welcome and embrace productive distraction • do some testing that is not oriented towards a specific risk • use high-volume, randomized automated tests
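The last defocusing heuristic, high-volume randomized automated testing, can be as small as this sketch (the hex round trip is a stand-in oracle; in real use the encode/decode pair would be the product’s own code):

```python
import random

def run_random_checks(trials=1000, seed=1234):
    """Fire many randomized inputs at a function under test and apply a
    cheap round-trip oracle; return whatever inputs provoked a problem."""
    rng = random.Random(seed)  # seeded so any failure can be replayed
    failures = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        # Oracle: decoding what we just encoded must give the input back.
        if bytes.fromhex(data.hex()) != data:
            failures.append(data)
    return failures
```

Any non-empty result is a candidate for investigation and a bug report, not a verdict.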
  • 30. 28 DISCUSSION How Many Test Cases? What About Quantifying Coverage Overall? • A nice idea, but we don’t know how to do it in a way that is consistent with basic measurement theory • If we describe coverage by counting test cases, we’re committing reification error. • If we use percentages to quantify coverage, we need to establish what 100% looks like. • But we might do that with respect to some specific models. • Complex systems may display emergent behaviour.
  • 31. 29 Extent of Coverage • Smoke and sanity • Can this thing even be tested at all? • Common, core, and critical • Can this thing do the things it must do? • Does it handle happy paths and regular input? • Can it work? • Complex, harsh, extreme and exceptional • Will this thing handle challenging tests, complex data flows, and malformed input, etc.? • Will it work? How Might We Organize, Record, and Report Coverage? • automated tools (e.g. profilers, coverage tools) • annotated diagrams and mind maps • coverage matrices • bug taxonomies • Michael Hunter’s You Are Not Done Yet list • James Bach’s Heuristic Test Strategy Model • described at www.satisfice.com • articles about it at www.developsense.com • Mike Kelly’s MCOASTER model • product coverage outlines and risk lists • session-based test management • http://www.satisfice.com/sbtm See three articles here: http://www.developsense.com/publications.html#coverage
• 32. 30 59 What Does Rapid Testing Look Like? Concise Documentation Minimizes Waste Risk Model Coverage Model Test Strategy Reference Risk Catalog Testing Heuristics General Project-Specific Testing Playbook Status Dashboard Schedule Bugs Issues Rapid Testing Documentation • Recognize • a requirements document is not the requirements • a test plan document is not a test plan • a test script is not a test • doing, rather than planning, produces results • Determine where your documentation is on the continuum: product or tool? • Keep your tools sharp and lightweight • Obtain consensus from others as to what’s necessary and what’s excess in products • Ask whether reporting test results takes priority over obtaining test results • note that in some contexts, it might • Eliminate unnecessary clerical work
  • 34. 32 Visualizing Test Progress See “A Sticky Situation”, Better Software, February 2012 What IS Exploratory Testing? • Simultaneous test design, test execution, and learning. • James Bach, 1995 But maybe it would be a good idea to underscore why that’s important…
  • 35. 33 What IS Exploratory Testing? • I follow (and to some degree contributed to) Kaner’s definition, which was refined over several peer conferences through 2007: Exploratory software testing is… • a style of software testing • that emphasizes the personal freedom and responsibility • of the individual tester • to continually optimize the value of his or her work • by treating test design, test execution, test result interpretation, and test-related learning • as mutually supportive activities • that run in parallel • throughout the project. See Kaner, “Exploratory Testing After 23 Years”, www.kaner.com/pdfs/ETat23.pdf So maybe it would be a good idea to keep it brief most of the time… Why Exploratory Approaches? • Systems are far more than collections of functions • Systems typically depend upon and interact with many external systems
  • 36. 34 Why Exploratory Approaches? • Systems are too complex for individuals to comprehend and describe • Products evolve rapidly in ways that cannot be anticipated In the future, developers will likely do more verification and validation at the unit level than they have done before. Testers must explore, discover, investigate, and learn about the system. Why Exploratory Approaches? • Developers are using tools and frameworks that make programming more productive, but that may manifest more emergent behaviour. • Developers are increasingly adopting unit testing and test-driven development. • The traditional focus is on verification, validation, and confirmation. The new focus must be on exploration, discovery, investigation, and learning.
  • 37. 35 Why Exploratory Approaches? • We don’t have time to waste • preparing wastefully elaborate written plans • for complex products • built from many parts • and interacting with many systems • (many of which we don’t control… • …or even understand) • where everything is changing over time • and there’s so much learning to be done • and the result, not the plan, is paramount. Questions About Scripts… arrows and cycles Where do scripts come from? What happens when the unexpected happens during a script? What do we do with what we learn? Will everyone follow the same script the same way? (task performing)
  • 38. 36 Questions About Exploration… arrows and cycles (value seeking) Where does exploration come from? What happens when the unexpected happens during exploration? What do we do with what we learn? Will everyone explore the same way? Exploration is Not Just Action arrows and cycles
• 39. 37 You can put them together! arrows and cycles
• 40. 38 What Exploratory Testing Is Not • Touring • http://www.developsense.com/blog/2011/12/what-exploratory-testing-is-not-part-1-touring/ • After-Everything-Else Testing • http://www.developsense.com/blog/2011/12/what-exploratory-testing-is-not-part-2-after-everything-else-testing/ • Tool-Free Testing • http://www.developsense.com/blog/2011/12/what-exploratory-testing-is-not-part-3-tool-free-testing/ • Quick Tests • http://www.developsense.com/blog/2011/12/what-exploratory-testing-is-not-part-4-quick-tests/ • Undocumented Testing • http://www.developsense.com/blog/2011/12/what-exploratory-testing-is-not-part-5-undocumented-testing/ • “Experienced-Based” Testing • http://www.satisfice.com/blog/archives/664 • defined by any specific example of exploratory testing • http://www.satisfice.com/blog/archives/678 Exploratory Testing • IS NOT “random testing” (or sloppy, or slapdash testing) • IS NOT “unstructured testing” • IS NOT procedurally structured • IS NOT unteachable • IS NOT unmanageable • IS NOT scripted • IS NOT a technique • IS “ad hoc”, in the dictionary sense, “to the purpose” • IS structured and rigorous • IS cognitively structured • IS highly teachable • IS highly manageable • IS chartered • IS an approach The way we practice and teach it, exploratory testing…
  • 41. 39 Contrasting Approaches Scripted Testing • Is directed from elsewhere • Is determined in advance • Is about confirmation • Is about controlling tests • Emphasizes predictability • Emphasizes decidability • Like making a speech • Like playing from a score Exploratory Testing • Is directed from within • Is determined in the moment • Is about investigation • Is about improving test design • Emphasizes adaptability • Emphasizes learning • Like having a conversation • Like playing in a jam session Exploratory Testing IS Structured • Exploratory testing, as we teach it, is a structured process conducted by a skilled tester, or by lesser skilled testers or users working under supervision. • The structure of ET comes from many sources: • Test design heuristics • Chartering • Time boxing • Perceived product risks • The nature of specific tests • The structure of the product being tested • The process of learning the product • Development activities • Constraints and resources afforded by the project • The skills, talents, and interests of the tester • The overall mission of testing In other words, it’s not “random”, but systematic. Not procedurally structured, but cognitively structured.
  • 42. 40 79 In excellent exploratory testing, one structure tends to dominate all the others: Exploratory testers construct a compelling story of their testing. It is this story that gives ET a backbone. Exploratory Testing IS Structured 80 To test is to compose, edit, narrate, and justify THREE stories. A story about the status of the PRODUCT… …about what it does, how it failed, and how it might fail... …in ways that matter to your various clients. A story about HOW YOU TESTED it… …how you configured, operated and observed it… …how you recognized problems… …about what you have and haven’t tested yet… …and what you won’t test at all (unless the client objects)… A story about how GOOD that testing was… …the risks and costs of (not) testing… …what made testing harder or slower… …how testable (or not) the product is… …what you need and what you recommend. Bugs Issues Product any good? How do you know? Why should I be pleased with your work?
  • 43. 41 81 What does “taking advantage of resources” mean? • Mission • The problem we are here to solve for our customer. • Information • Information about the product or project that is needed for testing. • Developer relations • How you get along with the programmers. • Team • Anyone who will perform or support testing. • Equipment & tools • Hardware, software, or documents required to administer testing. • Schedule • The sequence, duration, and synchronization of project events. • Test Items • The product to be tested. • Deliverables • The observable products of the test project. 82 “Ways to test…”? General Test Techniques • Function testing • Domain testing • Stress testing • Flow testing • Scenario testing • Claims testing • User testing • Risk testing • Automatic checking
  • 44. 42 83 • Happy Path • Tour the Product • Sample Data • Variables • Files • Complexity • Menus & Windows • Keyboard & Mouse A quick test is a cheap test that has some value but requires little preparation, knowledge, or time to perform. • Interruptions • Undermining • Adjustments • Dog Piling • Continuous Use • Feature Interactions • Click on Help Cost as a Simplifying Factor Try quick tests as well as careful tests 84 • Input Constraint Attack • Click Frenzy • Shoe Test • Blink Test • Error Message Hangover A quick test is a cheap test that has some value but requires little preparation, knowledge, or time to perform. • Resource Starvation • Multiple Instances • Crazy Configs • Cheap Tools Cost as a Simplifying Factor Try quick tests as well as careful tests
  • 45. 43 Touring the Product: Mike Kelly’s FCC CUTS VIDS • Feature tour • Complexity tour • Claims tour • Configuration tour • User tour • Testability tour • Scenario tour • Variability tour • Interoperability tour • Data tour • Structure tour
  • 46. 44 88 Summing Up: Themes of Rapid Testing • Put the tester's mind at the center of testing. • Learn to deal with complexity and ambiguity. • Learn to tell a compelling testing story. • Develop testing skills through practice, not just talk. • Use heuristics to guide and structure your process. • Replace “check for…” with “look for problems in…” • Be a service to the project community, not an obstacle. • Consider cost vs. value in all your testing activity. • Diversify your team and your tactics. • Dynamically manage the focus of your work. • Your context should drive your choices, both of which evolve over time.
  • 47. - 1 - Designed by James Bach Version 5.2.1 james@satisfice.com 9/17/2013 www.satisfice.com Copyright 1996-2013, Satisfice, Inc. The Heuristic Test Strategy Model is a set of patterns for designing a test strategy. The immediate purpose of this model is to remind testers of what to think about when they are creating tests. Ultimately, it is intended to be customized and used to facilitate dialog and direct self-learning among professional testers. Project Environment includes resources, constraints, and other elements in the project that may enable or hobble our testing. Sometimes a tester must challenge constraints, and sometimes accept them. Product Elements are things that you intend to test. Software is complex and invisible. Take care to cover all of it that matters, not just the parts that are easy to see. Quality Criteria are the rules, values, and sources that allow you as a tester to determine if the product has problems. Quality criteria are multidimensional and often hidden or self-contradictory. Test Techniques are heuristics for creating tests. All techniques involve some sort of analysis of project environment, product elements, and quality criteria. Perceived Quality is the result of testing. You can never know the "actual" quality of a software product, but through the application of a variety of tests, you can make an informed assessment of it.
• 48. - 2 - General Test Techniques A test technique is a heuristic for creating tests. There are many interesting techniques. The list includes nine general techniques. By “general technique” we mean that the technique is simple and universal enough to apply to a wide variety of contexts. Many specific techniques are based on one or more of these nine. And an endless variety of specific test techniques may be constructed by combining one or more general techniques with coverage ideas from the other lists in this model. Function Testing Test what it can do 1. Identify things that the product can do (functions and sub-functions). 2. Determine how you’d know if a function was capable of working. 3. Test each function, one at a time. 4. See that each function does what it’s supposed to do and not what it isn’t supposed to do. Claims Testing Verify every claim 1. Identify reference materials that include claims about the product (implicit or explicit). Consider SLAs, EULAs, advertisements, specifications, help text, manuals, etc. 2. Analyze individual claims, and clarify vague claims. 3. Verify that each claim about the product is true. 4. If you’re testing from an explicit specification, expect it and the product to be brought into alignment. Domain Testing Divide and conquer the data 1. Look for any data processed by the product. Look at outputs as well as inputs. 2. Decide which particular data to test with. Consider things like boundary values, typical values, convenient values, invalid values, or best representatives. 3. Consider combinations of data worth testing together. User Testing Involve the users 1. Identify categories and roles of users. 2. Determine what each category of user will do (use cases), how they will do it, and what they value. 3. Get real user data, or bring real users in to test. 4. Otherwise, systematically simulate a user (be careful—it’s easy to think you’re like a user even when you’re not). 5. Powerful user testing is that which involves a variety of users and user roles, not just one. Stress Testing Overwhelm the product 1. Look for sub-systems and functions that are vulnerable to being overloaded or “broken” in the presence of challenging data or constrained resources. 2. Identify data and resources related to those sub-systems and functions. 3. Select or generate challenging data, or resource constraint conditions to test with: e.g., large or complex data structures, high loads, long test runs, many test cases, low memory conditions. Risk Testing Imagine a problem, then look for it. 1. What kinds of problems could the product have? 2. Which kinds matter most? Focus on those. 3. How would you detect them if they were there? 4. Make a list of interesting problems and design tests specifically to reveal them. 5. It may help to consult experts, design documentation, past bug reports, or apply risk heuristics. Flow Testing Do one thing after another 1. Perform multiple activities connected end-to-end; for instance, conduct tours through a state model. 2. Don’t reset the system between actions. 3. Vary timing and sequencing, and try parallel threads. Automatic Checking Check a million different facts 1. Look for or develop tools that can perform a lot of actions and check a lot of things. 2. Consider tools that partially automate test coverage. 3. Consider tools that partially automate oracles. 4. Consider automatic change detectors. 5. Consider automatic test data generators. 6. Consider tools that make human testing more powerful. Scenario Testing Test to a compelling story 1. Begin by thinking about everything going on around the product. 2. Design tests that involve meaningful and complex interactions with the product. 3. A good scenario test is a compelling story of how someone who matters might do something that matters with the product.
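Domain testing’s “boundary values, typical values, … or best representatives” advice above lends itself to a tiny helper. A sketch for an inclusive integer range (a hypothetical helper of my own, not part of the Heuristic Test Strategy Model):

```python
def boundary_candidates(lo, hi):
    """Classic domain-testing picks for an inclusive [lo, hi] integer range:
    just outside, on, and just inside each boundary, plus one interior
    representative. Duplicates are dropped for very narrow ranges."""
    picks = [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]
    seen, out = set(), []
    for p in picks:
        if p not in seen:  # preserve order, drop repeats
            seen.add(p)
            out.append(p)
    return out

# boundary_candidates(1, 100) → [0, 1, 2, 50, 99, 100, 101]
```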
• 49. - 3 - Project Environment Creating and executing tests is the heart of the test project. However, there are many factors in the project environment that are critical to your decision about what particular tests to create. In each category below, consider how that factor may help or hinder your test design process. Try to exploit every resource. Mission. Your purpose on this project, as understood by you and your customers.  Do you know who your customers are? Whose opinions matter? Who benefits or suffers from the work you do?  Do you know what your customers expect of you on this project? Do you agree?  Maybe your customers have strong ideas about what tests you should create and run.  Maybe they have conflicting expectations. You may have to help identify and resolve those. Information. Information about the product or project that is needed for testing.  Whom can we consult to learn about this project?  Are there any engineering documents available? User manuals? Web-based materials? Specs? User stories?  Does this product have a history? Old problems that were fixed or deferred? Pattern of customer complaints?  Is your information current? How are you apprised of new or changing information?  Are there any comparable products or projects from which we can glean important information? Developer Relations. How you get along with the programmers.  Hubris: Does the development team seem overconfident about any aspect of the product?  Defensiveness: Is there any part of the product the developers seem strangely opposed to having tested?  Rapport: Have you developed a friendly working relationship with the programmers?  Feedback loop: Can you communicate quickly, on demand, with the programmers?  Feedback: What do the developers think of your test strategy? Test Team. Anyone who will perform or support testing.  Do you know who will be testing? Do you have enough people? 
 Are there people not on the “test team” that might be able to help? People who’ve tested similar products before and might have advice? Writers? Users? Programmers?  Are there particular test techniques that the team has special skill or motivation to perform?  Is any training needed? Is any available?  Who is co-located and who is elsewhere? Will time zones be a problem? Equipment & Tools. Hardware, software, or documents required to administer testing.  Hardware: Do we have all the equipment you need to execute the tests? Is it set up and ready to go?  Automation: Are any test tools needed? Are they available?  Probes: Are any tools needed to aid in the observation of the product under test?  Matrices & Checklists: Are any documents needed to track or record the progress of testing? Schedule. The sequence, duration, and synchronization of project events  Test Design: How much time do you have? Are there tests better to create later than sooner?  Test Execution: When will tests be executed? Are some tests executed repeatedly, say, for regression purposes?  Development: When will builds be available for testing, features added, code frozen, etc.?  Documentation: When will the user documentation be available for review? Test Items. The product to be tested.  Scope: What parts of the product are and are not within the scope of your testing responsibility?  Availability: Do you have the product to test? Do you have test platforms available? When do you get new builds?  Volatility: Is the product constantly changing? What will be the need for retesting?  New Stuff: What has recently been changed or added in the product?  Testability: Is the product functional and reliable enough that you can effectively test it?  Future Releases: What part of your tests, if any, must be designed to apply to future releases of the product? Deliverables. The observable products of the test project.  Content: What sort of reports will you have to make? 
Will you share your working notes, or just the end results?  Purpose: Are your deliverables provided as part of the product? Does anyone else have to run your tests?  Standards: Is there a particular test documentation standard you’re supposed to follow?  Media: How will you record and communicate your reports?
50
Product Elements
Ultimately a product is an experience or solution provided to a customer. Products have many dimensions. So, to test well, we must examine those dimensions. Each category, listed below, represents an important and unique aspect of a product. Testers who focus on only a few of these are likely to miss important bugs.

Structure. Everything that comprises the physical product.
• Code: the code structures that comprise the product, from executables to individual routines.
• Hardware: any hardware component that is integral to the product.
• Non-executable files: any files other than multimedia or programs, like text files, sample data, or help files.
• Collateral: anything beyond software and hardware that is also part of the product, such as paper documents, web links and content, packaging, license agreements, etc.

Function. Everything that the product does.
• Application: any function that defines or distinguishes the product or fulfills core requirements.
• Calculation: any arithmetic function or arithmetic operations embedded in other functions.
• Time-related: time-out settings; daily or month-end reports; nightly batch jobs; time zones; business holidays; interest calculations; terms and warranty periods; chronograph functions.
• Transformations: functions that modify or transform something (e.g. setting fonts, inserting clip art, withdrawing money from an account).
• Startup/Shutdown: each method and interface for invocation and initialization, as well as exiting the product.
• Multimedia: sounds, bitmaps, videos, or any graphical display embedded in the product.
• Error Handling: any functions that detect and recover from errors, including all error messages.
• Interactions: any interactions between functions within the product.
• Testability: any functions provided to help test the product, such as diagnostics, log files, asserts, test menus, etc.

Data. Everything that the product processes.
• Input: any data that is processed by the product.
• Output: any data that results from processing by the product.
• Preset: any data that is supplied as part of the product, or otherwise built into it, such as prefabricated databases, default values, etc.
• Persistent: any data that is stored internally and expected to persist over multiple operations. This includes modes or states of the product, such as options settings, view modes, contents of documents, etc.
• Sequences/Combinations: any ordering or permutation of data, e.g. word order, sorted vs. unsorted data, order of tests.
• Cardinality: numbers of objects or fields may vary (e.g. zero, one, many, max, open limit). Some may have to be unique (e.g. database keys).
• Big/Little: variations in the size and aggregation of data.
• Noise: any data or state that is invalid, corrupted, or produced in an uncontrolled or incorrect fashion.
• Lifecycle: transformations over the lifetime of a data entity as it is created, accessed, modified, and deleted.

Interfaces. Every conduit by which the product is accessed or expressed.
• User Interfaces: any element that mediates the exchange of data with the user (e.g. displays, buttons, fields, whether physical or virtual).
• System Interfaces: any interface with something other than a user, such as other programs, hard disk, network, etc.
• API/SDK: any programmatic interfaces or tools intended to allow the development of new applications using this product.
• Import/export: any functions that package data for use by a different product, or interpret data from a different product.

Platform. Everything on which the product depends (and that is outside your project).
• External Hardware: hardware components and configurations that are not part of the shipping product, but are required (or optional) in order for the product to work: systems, servers, memory, keyboards, the Cloud.
• External Software: software components and configurations that are not a part of the shipping product, but are required (or optional) in order for the product to work: operating systems, concurrently executing applications, drivers, fonts, etc.
• Internal Components: libraries and other components that are embedded in your product but are produced outside your project.

Operations. How the product will be used.
• Users: the attributes of the various kinds of users.
• Environment: the physical environment in which the product operates, including such elements as noise, light, and distractions.
• Common Use: patterns and sequences of input that the product will typically encounter. This varies by user.
• Disfavored Use: patterns of input produced by ignorant, mistaken, careless, or malicious use.
• Extreme Use: challenging patterns and sequences of input that are consistent with the intended use of the product.

Time. Any relationship between the product and time.
• Input/Output: when input is provided, when output is created, and any timing relationships (delays, intervals, etc.) among them.
• Fast/Slow: testing with “fast” or “slow” input; fastest and slowest; combinations of fast and slow.
• Changing Rates: speeding up and slowing down (spikes, bursts, hangs, bottlenecks, interruptions).
• Concurrency: more than one thing happening at once (multi-user, time-sharing, threads, semaphores, shared data).
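The “Cardinality” and “Big/Little” data heuristics above lend themselves to a small test-data generator. The sketch below is an illustration added here, not part of the original handout; the function names and the choice of 17 as “many” are arbitrary assumptions:

```python
def cardinality_cases(max_count, make_item=lambda i: f"item{i}"):
    """Yield (label, data) pairs covering classic cardinality heuristics:
    zero, one, many, and the stated maximum the product claims to allow."""
    yield "zero", []
    yield "one", [make_item(0)]
    yield "many", [make_item(i) for i in range(17)]  # an arbitrary "many"
    yield "max", [make_item(i) for i in range(max_count)]

# Feed each case to a (hypothetical) function under test:
for label, data in cardinality_cases(max_count=100):
    print(label, len(data))  # prints: zero 0, one 1, many 17, max 100
```

Parameterizing `make_item` lets the same four cardinality cases be reused for different data shapes (records, filenames, database keys) without rewriting the generator.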
51
Quality Criteria Categories
A quality criterion is some requirement that defines what the product should be. By thinking about different kinds of criteria, you will be better able to plan tests that discover important problems fast. Each of the items on this list can be thought of as a potential risk area. For each item below, determine if it is important to your project, then think how you would recognize if the product worked well or poorly in that regard.

Capability. Can it perform the required functions?

Reliability. Will it work well and resist failure in all required situations?
• Robustness: the product continues to function over time without degradation, under reasonable conditions.
• Error handling: the product resists failure in the case of errors, is graceful when it fails, and recovers readily.
• Data Integrity: the data in the system is protected from loss or corruption.
• Safety: the product will not fail in such a way as to harm life or property.

Usability. How easy is it for a real user to use the product?
• Learnability: the operation of the product can be rapidly mastered by the intended user.
• Operability: the product can be operated with minimum effort and fuss.
• Accessibility: the product meets relevant accessibility standards and works with O/S accessibility features.

Charisma. How appealing is the product?
• Aesthetics: the product appeals to the senses.
• Uniqueness: the product is new or special in some way.
• Necessity: the product possesses the capabilities that users expect from it.
• Usefulness: the product solves a problem that matters, and solves it well.
• Entrancement: users get hooked, have fun, are fully engaged when using the product.
• Image: the product projects the desired impression of quality.

Security. How well is the product protected against unauthorized use or intrusion?
• Authentication: the ways in which the system verifies that a user is who he says he is.
• Authorization: the rights that are granted to authenticated users at varying privilege levels.
• Privacy: the ways in which customer or employee data is protected from unauthorized people.
• Security holes: the ways in which the system cannot enforce security (e.g. social engineering vulnerabilities).

Scalability. How well does the deployment of the product scale up or down?

Compatibility. How well does it work with external components & configurations?
• Application Compatibility: the product works in conjunction with other software products.
• Operating System Compatibility: the product works with a particular operating system.
• Hardware Compatibility: the product works with particular hardware components and configurations.
• Backward Compatibility: the product works with earlier versions of itself.
• Resource Usage: the product doesn’t unnecessarily hog memory, storage, or other system resources.

Performance. How speedy and responsive is it?

Installability. How easily can it be installed onto its target platform(s)?
• System requirements: Does the product recognize if some necessary component is missing or insufficient?
• Configuration: What parts of the system are affected by installation? Where are files and resources stored?
• Uninstallation: When the product is uninstalled, is it removed cleanly?
• Upgrades/patches: Can new modules or versions be added easily? Do they respect the existing configuration?
• Administration: Is installation a process that is handled by special personnel, or on a special schedule?

Development. How well can we create, test, and modify it?
• Supportability: How economical will it be to provide support to users of the product?
• Testability: How effectively can the product be tested?
• Maintainability: How economical is it to build, fix, or enhance the product?
• Portability: How economical will it be to port or reuse the technology elsewhere?
• Localizability: How economical will it be to adapt the product for other places?
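As one small illustration of turning a quality criterion into a check, the “Performance” question above can be probed with a timing harness. This sketch is an addition for illustration; the one-second budget and the measured operation are hypothetical stand-ins for a real product operation and a real requirement:

```python
import time

def fastest_run(fn, repeats=5):
    """Run fn several times and return the fastest wall-clock duration.
    The minimum is less noisy than the mean for micro-measurements."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical check: the operation should finish within a one-second budget.
elapsed = fastest_run(lambda: sum(range(100_000)))
assert elapsed < 1.0, f"performance budget exceeded: {elapsed:.3f}s"
```

Note that a pass or fail here is only as meaningful as the budget; deciding what “speedy and responsive” means for the people who matter is the harder part of the criterion.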