4. CROSS-FUNCTIONAL TEAMS
Working in cross-functional teams has broadened the
responsibility of professionals
Focus is on what the team delivers as a whole
Makes work more dynamic and interesting, but also means
everyone has to be prepared in fields that may have
“belonged” to someone else
Quality can’t be tested in, so it must be built in
Even with their first hello world program, developers
would execute it and test it
5. DEVELOPERS
Developers don’t need to be testing experts.
Separate testers avoid any bias that the developers
might have, and they bring a specific skill set
Developer in scrum development teams:
all the team members
Developer in this book:
A person whose primary responsibility is to write source
code
Developers don’t want to give code that just compiles,
they want to give code that works and works well
Write code in a way which makes verification possible
6. TESTING
Testing- an activity performed to ensure correctness
and quality of software
Developer testing activities
Unit testing
Integration testing
Maintenance
Continuous integration
Test automation
7. UNIT TESTING
Unit testing- developers write these
The earliest, fastest, and most convenient way for
developers to verify their code
100% developer owned
Can be done before or after writing code
Before to drive its design
After to verify it works as expected
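The "after, to verify" case above can be sketched as a minimal unit test. The function and its discount rule are invented for illustration, not taken from the book:

```python
# A made-up function under test: apply a percentage discount to a price.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests in pytest style: small, fast, and 100% developer owned.
def test_ten_percent_off():
    assert apply_discount(200.0, 10) == 180.0

def test_invalid_percent_is_rejected():
    try:
        apply_discount(200.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Written before the implementation, the same tests would instead drive the function's design (test-driven development).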
8. INTEGRATION
TESTING
Integration testing- a more complex meaning, defined in a
later chapter
For now, acknowledge that some tests are more complex
than unit tests
Benefit from being written by developers
More sophisticated setup
May execute slower
9. MAINTENANCE
Maintenance- the majority of a system's life cycle is
spent on maintenance
2 categories of maintenance
Maintenance of a system under development
Patching and bug fixes
Legacy code- code without tests
The safe way to work with it is to pin down its behavior
before making changes (characterization tests)
Time-consuming, hard, and not always exciting
Necessary, because it's better than just wishing that
nothing breaks
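A characterization-test sketch of the idea above, assuming a made-up legacy function whose surprising behavior must be preserved until a deliberate change is made:

```python
# Stand-in for legacy code: requirements unknown, no existing tests.
def legacy_format_name(first, last):
    # Note the surprising branch for an empty last name; a
    # characterization test records it rather than "fixing" it.
    if not last:
        return first.upper()
    return last + ", " + first

# Characterization tests pin down current behavior, odd cases included.
def test_characterize_normal_case():
    assert legacy_format_name("Ada", "Lovelace") == "Lovelace, Ada"

def test_characterize_empty_last_name():
    # Surprising, but this is what the code does today.
    assert legacy_format_name("Ada", "") == "ADA"
```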
10. MAINTENANCE OF A SYSTEM UNDER DEVELOPMENT
The system is already running in production while new
features are being added to it
Collectively owned code that lots of people want to
work on
Needs to stay in working condition while new additions
are made, so that other people can also work on it
efficiently
MAINTENANCE TYPE
11. PATCHING AND BUG FIXES
The system has been stable for quite a while and
requires little intervention, but once in a while a defect
pops up and a bug fix is required
Many people want to rush to fix the bug, but it is better
to first write a test that fails in the presence of the bug
and passes once it is fixed
The test then stays in the codebase and ensures the
presence and correctness of the fix
MAINTENANCE
TYPE
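The fix-with-a-test workflow above can be sketched as follows; the leap-year function and its "bug" are illustrative, not from the book:

```python
# Fixed version of the function. The reported (illustrative) bug was
# that century years such as 1900 were wrongly treated as leap years.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# This test failed while the bug was present; it now lives in the
# codebase and guards against the defect resurfacing.
def test_century_years_are_not_leap():
    assert not is_leap_year(1900)
    assert is_leap_year(2000)
```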
13. TECHNICAL
SIDE
Made up of the processes and infrastructure
needed to achieve an automated stable build
Before committing anything to the version control
system, the developer fetches the latest version of
the code, merges it with the local changes, and runs
the test suite on their machine
If all tests pass, the developer commits the new code
to the version control system.
The build server picks up the changes, fetches the
latest version of the code, compiles it, and runs its
unit tests (a bare-bones setup for teams just starting out)
Long running tests run nightly or as often as the
load on the CI server permits
CONTINUOUS INTEGRATION
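The server-side steps above might look like the following hypothetical CI configuration (GitHub Actions syntax). All job names, make targets, and the nightly schedule are assumptions, not from the book:

```yaml
# Hypothetical CI pipeline sketch; adjust names and targets to your setup.
name: build
on:
  push:
  schedule:
    - cron: "0 2 * * *"             # nightly run for long-running tests
jobs:
  build:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # fetch the latest version of the code
      - run: make build             # compile
      - run: make unit-tests        # bare-bones suite on every push
  long-tests:
    if: github.event_name == 'schedule'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make long-tests        # slower tests, run as load permits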
14. SOCIAL
DIMENSION
About following the practices to the letter: actually
running the tests locally before committing, and
committing frequently
Reacting to broken builds and fixing them immediately,
before committing any other work
requires discipline and dedication
CONTINUOUS INTEGRATION
15. DEVELOPER OR TESTER
No straightforward rule for when developers should test
and when testers should test
Depends on a variety of factors, e.g. application domain,
complexity, legal regulations, or team composition
Some typical questions asked by those leaning toward
developer testing:
How much testing, if any, should developers do?
What kind of testing will give the best return on
investment for this particular system?
Why is testability important, and how can it be achieved?
Why does a method/class/component seem untestable,
and how can it be made testable?
What is “testable” code anyway?
How “good” should test code be?
16. TESTER TESTS
Tests usually not done by developers
Performance testing
Security testing
Usability testing
Testing the untypical and pathological cases
18. ANSWERS
Team 1
Developers should write code
in such a way that makes
verification possible
Team 2
One type of maintenance
mentioned was Maintenance
of a System Under
Development OR Patching
and Bug Fixes
20. TESTING AND CHECKING
Testing- an activity that requires curiosity, flexibility,
and the ability to draw conclusions
Definition of testing by author
testing is the process of evaluating a product by learning
about it through exploration and experimentation, which
includes to some degree: questioning, study, modeling,
observation, inferences, etc.
Checking- a tedious process that compares the outcome
of performing some action to an expected result
usually best left to a machine
Tool-based techniques are better at verifying assumptions
than at uncovering new bugs or new insights
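Handing such a check over to a machine can be sketched like this; the function under check is invented for the example:

```python
# A made-up function whose outcome a human would otherwise eyeball.
def total_with_tax(amount, rate):
    return round(amount * (1 + rate), 2)

# The tedious "compare outcome to expected result" step, stated once
# in code so a machine repeats it on every run.
def check_total_with_tax():
    assert total_with_tax(100.0, 0.25) == 125.0
    assert total_with_tax(19.99, 0.06) == 21.19
```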
21. DEVELOPER
TESTING
Largely about developers writing code with automated
checks constantly in mind
Testing time isn’t wasted on checking
Developer testing turns human checking into machine
checking
Resulting in testable software and freeing up time for
more interesting and intellectually demanding activities
2 approaches to testing- teams usually don't operate at
either extreme, but one usually prevails
Testing to critique
Testing to support
22. TESTING TO CRITIQUE
Test something that’s finished and needs evaluating
following questions answered:
Will the user be delighted by the software?
Is the scope reasonable?
Has any functionality been forgotten?
Is the software compliant with legal regulations?
Vocabulary:
Tester mindset- wants to break the code
Developer mindset- wants to create unbreakable code
Developers see code as part of themselves, are biased,
and make excuses:
“Nobody would ever do that”
“Works on my machine”
“I didn't even touch that bit of code”
TESTING APPROACH
23. TESTING TO SUPPORT
About safety, sustainable pace, and the team’s ability
to work fast and without fear of introducing defects
during development
Purpose- provide feedback and help the team achieve
immediate and constant confidence in the software it
produces
Examples:
Test automation
Test driven development
Activities that aim to stabilize
TESTING APPROACH
25. TRADITIONAL
TESTING
Testing is thought of as a verification phase after a construction
phase
Something gets built, and then is verified to work correctly
Assumes that there’s a master blueprint to guide all aspects of
the construction
Downfall- can create an environment where testers and
developers don't like each other
Lack of communication
Each group interprets the blueprint in its own way and
alters it in their minds
When the groups come together, it's a mess
Fundamental test process
Test planning and control
Test analysis and design
Implementation and execution
Evaluating exit criteria and reporting
Test closure activities
(Image: fundamental test process, p. 13)
TESTING STYLES
26. AGILE TESTING
Testing that enables agile development
About empowering the tester and increasing collaboration within
the team and with external stakeholders
Both developers and testers play a part in the development
process
Several immediate advantages:
No testing crunch
No handovers
Local testing expertise
Everyone is responsible for creating the customer's requested
software, but testers spend more time with the customer to
clarify requirements
Developers still write unit tests but will always have a colleague
to ask about test design
How will you test this?
What else will you test?
TESTING STYLES
27. BDD, ATDD, AND SPECIFICATION BY EXAMPLE
Behavior driven development- offers advice on the actual design of
the code
Acceptance test driven development
Specification by example
All address the problem of different stakeholders using different
vocabularies, which in turn results in incorrect interpretation of
requirements and discrepancies between code, tests, and customer
expectations
All three practices incorporate the following:
Before starting to implement a story, the team makes sure that everybody
is on the same page (customer, tester, developer)
Ubiquitous language- the language of the customer is retained
and used all the time by everybody
Even nontechnical stakeholders can verify tests
Also use examples
Textual scenarios
Tabular form
Example on p. 16 (next slide)
TESTING STYLES
28. EXAMPLE (P. 16)
Given that I’m a loyal customer, when my order exceeds
$99, I get a free gift
Confusing words like “loyal customer” and “exceeds” must be
clarified
Clarify that customers are considered loyal if they've placed
at least 3 orders
Purchases made so far | Purchase amount | Get gift
1                     | $150            | NO
3                     | $99.01          | YES
10                    | $99             | NO
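The clarified rule and its example rows translate directly into a table-driven check; the function name and structure are illustrative:

```python
# Gift rule from the example: loyal means at least 3 prior orders,
# and the purchase must exceed $99.
def gets_free_gift(purchases_so_far, purchase_amount):
    is_loyal = purchases_so_far >= 3
    return is_loyal and purchase_amount > 99

# Table-driven test mirroring the tabular examples on the slide.
def test_gift_rule_examples():
    cases = [                 # (purchases so far, amount, expected gift)
        (1, 150.00, False),   # not loyal yet
        (3, 99.01, True),     # loyal, exceeds $99
        (10, 99.00, False),   # loyal, but does not exceed $99
    ]
    for purchases, amount, expected in cases:
        assert gets_free_gift(purchases, amount) == expected
```

Because the examples use the customer's own words and numbers, even nontechnical stakeholders can verify the table.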
30. ANSWERS
Team 1
What are 2 testing styles?
Traditional Testing
Agile Testing
BDD, ATDD, and Specification
by Example
Team 2
What are the 2 approaches to
testing?
Testing to Critique
Testing to Support
32. TERMINOLOGY
Terminology in a team always varies
Ideally the team decides on what terms mean, then
puts them up on a poster in the office
Common terms: errors, defects in the software, bugs (named
for bugs trapped in ancient hardware), software failures
33. WHITE BOX AND BLACK BOX TESTING
White Box- Have access to the source code and can
inspect it for verification or inspiration for new tests
Black Box- Only access to external interface, so no way to
observe internal state (most frequent)
What is its interface to the outside world?
What inputs does it take?
How does it communicate success or failure?
How does it react to bad input?
Does it surprise by doing something unexpected or unusual?
TERMINOLOGY
34. CLASSIFYING TESTS (LEVELS)
Unit test- authoring fast, low level tests that target a
small part of the system
Integration test- the line between unit tests and
integration tests is blurry
Not a unit test and not a system test
Definitely the developer's job
System test- the activity of verifying that the entire
system works
Often executed from a black box perspective
Acceptance test
PAST- activity performed by the end users to validate
that the software they received conforms to the
specifications and their expectations and is ready for use
NOW- automated black box testing performed by a
framework to ensure that a story or part of a story has
been correctly implemented
35. CLASSIFYING TESTS (TYPES)
Test types- refers to the purpose of the test and its
objectives
Functional testing- most prevalent
Executing the software and checking whether its
behavior matches explicit expectations, feeding it
different inputs, and comparing the results with the
specification
Answers “WHAT”
Nonfunctional testing
Targets a solution’s quality attributes such as
usability, reliability, performance, maintainability, and
portability
Answers “HOW”
Performance testing
Focuses on a system's responsiveness, throughput, and
reliability given different loads
36. CLASSIFYING TESTS (TYPES)
Security testing
Can be performed as an audit
validate policies
More aggressively
compromise the system using black hat techniques
Performed by trained security professionals
CIA triad (confidentiality, integrity, availability)
Regression testing
Establish whether changes in the system have
broken existing functionality or caused old defects
to resurface
Rerunning a number of, or all, test cases after a
change in the code has been made
37. BENEFITS
Benefits of putting test levels and test types to work
All cards on the table
The team clearly sees what activities there are to consider
and may plan accordingly
Team gets to talk about its combined skill set, as the
various tests require different levels of effort, time,
resources, training, and experience
Avoid misunderstandings, omissions, blame, and
potential conflicts
TEST LEVELS & TEST TYPES
38. AGILE TESTING QUADRANTS
The Agile Testing Quadrants
Focus on business facing and technology facing tests,
and also guide development and critique the product
Business facing- tests that make sense to a person
responsible for business decisions
Technology facing- expressed using technical terms and
implemented by the developers
39. OTHER TYPES OF TESTING
Smoke Testing- one or a few simple tests executed
immediately after the system has been deployed
End to End Testing- purpose is to include the entire
execution path or process through a system, which may
involve actions outside the system
Characterization Testing- kind of testing you’re forced to
engage in when changing old code that supposedly works
but it’s unclear what requirements it’s based on
Positive and Negative Testing
Positive- purpose is to verify that whatever is tested works
as expected and behaves like it’s supposed to
Negative- purpose is to verify that the system behaves
correctly if supplied with invalid values
Small, Medium, and Large Tests- correspond to limitations
set on the tests
Used by Google to classify tests and avoid the confusion
around other terminology used to describe tests
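The positive and negative cases described above can be sketched against a made-up input parser:

```python
# Illustrative function: parse a non-negative integer age.
def parse_age(text):
    value = int(text)           # raises ValueError for non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

# Positive test: valid input, expected behavior.
def test_positive_valid_age_is_parsed():
    assert parse_age("42") == 42

# Negative tests: invalid values must be handled correctly.
def test_negative_invalid_input_is_rejected():
    for bad in ["-1", "999", "abc"]:
        try:
            parse_age(bad)
            assert False, "expected ValueError for " + bad
        except ValueError:
            pass
```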
40. TEAM
QUESTIONS
Team 1
Ideally, how is terminology
determined?
Team 2
What is the difference
between White Box testing
and Black Box testing?
41. ANSWERS
Team 1
Ideally, how is terminology
determined?
By the team members
Team 2
What is the difference between
White Box testing and Black Box
testing?
White Box- can see source code
Black Box- can’t see source code
43. TESTABLE SOFTWARE
Encourages the existence of tests
Big Ball of Mud- the opposite of testable software
Developers won't want to test it
e.g. long start-up time
Testers will struggle with it
44. BENEFITS
Its functionality can be verified
Mechanical: find bug, fix bug, run some tests, instead of a
guessing game
It comes with few surprises
We want to know progress
Saying 95% done isn't credible if it doesn't include fixing bugs
With tests that catch bugs along the way, 95% done is
accurate and credible
It can be changed
Developers nervous to change code if it might break
something else in the software
Tests find these breaks and the bugs quickly
TESTABLE SOFTWARE
45. TESTABLE
Testable- can be put in a known state, acted on, and
then observed
Observability
Controllability
Smallness
46. OBSERVABILITY AND CONTROLLABILITY
Observability- Observe the output
Developers should add observation points
Controllability- The ability to put something in a
certain state
Isolability- being able to isolate a program element
under testing
Deployability- a measure of the amount of work
needed to deploy the system into production
TESTABLE
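A small sketch of controllability and observability together, with all names invented for illustration: the time source is injected so a test can force a known state, and the outcome is exposed as an observation point:

```python
class DiscountService:
    def __init__(self, clock):
        self._clock = clock        # controllability: inject the time source
        self.last_discount = None  # observability: expose the last outcome

    def discount_for(self):
        weekday = self._clock()    # 0 = Monday ... 6 = Sunday
        self.last_discount = 0.10 if weekday >= 5 else 0.0
        return self.last_discount

def test_weekend_discount_is_controllable_and_observable():
    service = DiscountService(clock=lambda: 6)  # force "Sunday"
    assert service.discount_for() == 0.10       # act
    assert service.last_discount == 0.10        # observe
```

Injecting the clock also gives isolability: the class can be tested without the real system time.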
47. SMALLNESS
The smaller the software, the better the testability,
because there is less to test
Singularity- Only one instance of something
When we want to make a change we only need to
make it in one place
Level of abstraction- Determined by the choice of
programming language and framework
Higher level of abstraction, fewer tests needed
Efficiency- The ability to express intent in the
programming language in an idiomatic way and
making use of that language’s functionality to keep
the code expressive and concise
Reuse- Making use of third party components to
avoid reinventing the wheel
Don’t need to test credible code from outside sources
TESTABLE
48. MAINTAINABILITY
All the following need to be used together to keep
maintainability in mind
Too little or too much of one can make a system difficult or
impossible to maintain
49. TEAM
QUESTIONS
Team 1
Why is controllability
important for testable
software?
Team 2
Why is smallness important for
testable software?
50. ANSWERS
Team 1
Why is controllability important
for testable software?
Having the ability to put
something in a certain state
Team 2
Why is smallness important for
testable software?
The smaller the software, the
better the testability