Software Testing Life Cycle
1. Software Testing Foundations
Testing in the Lifecycle
Course modules: 1 Principles, 2 Lifecycle, 3 Static testing, 4 Dynamic test techniques, 5 Management, 6 Tools
2. Lifecycle
Contents
Models for testing, economics of testing
High level test planning
Component Testing
Integration testing in the small
System testing (non-functional and functional)
Integration testing in the large
Acceptance testing
Maintenance testing
3. V-Model: test levels
[V-model diagram: each development level on the left has a corresponding test level on the right]
- Business Requirements ↔ Acceptance Testing
- Project Specification ↔ Integration Testing in the Large
- System Specification ↔ System Testing
- Design Specification ↔ Integration Testing in the Small
- Code ↔ Component Testing
4. V-Model: late test design
“We don’t have time to design tests early”
[V-model diagram: the same development and test levels as before, but test design (“Tests”) is left until just before each test level is executed]
5. V-Model: early test design
[V-model diagram: the same development and test levels, with tests designed as each specification is written and run later at the corresponding test level]
6. Early test design
test design finds faults
faults found early are cheaper to fix
most significant faults found first
faults prevented, not built in
no additional effort, re-schedule test design
changing requirements caused by test design
Early test design helps to build quality, stops fault multiplication
7. Experience report: Phase 1
Plan: 2 mo dev, 2 mo test; “has to go in”
Actual: fraught, lots of dev overtime; went in but didn't work
150 faults found in test, 50 faults found by users in the 1st month
Quality: users not happy
8. Experience report: Phase 2
Phase 1 plan: 2 mo dev, 2 mo test; “has to go in”; acceptance test: half day
Phase 2 plan: 2 mo dev, 6 wks test; acceptance test: full week (vs half day)
Actual, Phase 1: fraught, lots of dev overtime; 150 faults found in test, 50 faults found by users in the 1st month; quality: users not happy
Actual, Phase 2: on time; smooth, not much for dev to do; 500 faults found in test, 0 faults found by users in the 1st month; quality: users happy!
Source: Simon Barlow & Alan Veitch, Scottish Widows, Feb 96
9. VV&T
Verification
• the process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase [BS 7925-1]
Validation
• determination of the correctness of the products of software development with respect to the user needs and requirements [BS 7925-1]
Testing
• the process of exercising software to verify that it satisfies specified requirements and to detect faults
11. How would you test this spec?
A computer program plays chess with one user. It displays the board and the pieces on the screen. Moves are made by dragging pieces.
12. “Testing is expensive”
Compared to what?
What is the cost of NOT testing, or of faults missed that should have been found in test?
- Cost to fix faults escalates the later the fault is found
- Poor quality software costs more to use
• users take more time to understand what to do
• users make more mistakes in using it
• morale suffers
• => lower productivity
Do you know what it costs your organisation?
13. What do software faults cost?
Have you ever accidentally destroyed a PC?
- knocked it off your desk?
- poured coffee into the hard disc drive?
- dropped it out of a 2nd storey window?
How would you feel?
How much would it cost?
18. How expensive for you?
Do your own calculation (a sketch follows below)
- calculate cost of testing
• people’s time, machines, tools
- calculate cost to fix faults found in testing
- calculate cost to fix faults missed by testing
Estimate if no data available
- your figures will be the best your company has!
(10 minutes)
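A minimal sketch of such a calculation in Python; every figure below is a hypothetical placeholder, to be replaced with your organisation's own data:

```python
# Illustrative cost comparison - all figures are made-up placeholders.
HOURLY_RATE = 60              # loaded cost per person-hour
test_effort_hours = 800       # people's and machines' time for one release
tools_and_machines = 5_000    # licences, environments

faults_found_in_test = 150
fix_cost_in_test = 4 * HOURLY_RATE        # average fix cost while still in test

faults_missed_by_test = 50
fix_cost_in_live_use = 40 * HOURLY_RATE   # escalated cost once the fault is live

cost_of_testing = (test_effort_hours * HOURLY_RATE
                   + tools_and_machines
                   + faults_found_in_test * fix_cost_in_test)
cost_of_missed_faults = faults_missed_by_test * fix_cost_in_live_use

print(f"cost of testing:       {cost_of_testing:,}")
print(f"cost of missed faults: {cost_of_missed_faults:,}")
```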
19. Lifecycle
Contents
Models for testing, economics of testing
High level test planning
Component Testing
Integration testing in the small
System testing (non-functional and functional)
Integration testing in the large
Acceptance testing
Maintenance testing
20. (Before planning for a set of tests)
set organisational test strategy
identify people to be involved (sponsors, testers, QA, development, support, et al.)
examine the requirements or functional specifications (test basis)
set up the test organisation and infrastructure
define test deliverables & reporting structure
See: Structured Testing, an introduction to TMap®, Pol & van Veenendaal, 1998
21. High level test planning
What is the purpose of a high level test plan?
- Who does it communicate to?
- Why is it a good idea to have one?
What information should be in a high level test plan?
- What is your standard for contents of a test plan?
- Have you ever forgotten something important?
- What is not included in a test plan?
22. Test Plan 1
1 Test Plan Identifier
2 Introduction
- software items and features to be tested
- references to project authorisation, project plan, QA plan, CM plan, relevant policies & standards
3 Test items
- test items including version/revision level
- how transmitted (net, disc, CD, etc.)
- references to software documentation
Source: ANSI/IEEE Std 829-1998, Test Documentation
23. Test Plan 2
4 Features to be tested
- identify test design specification / techniques
5 Features not to be tested
- reasons for exclusion
24. Test Plan 3
6 Approach
- activities, techniques and tools
- detailed enough to estimate
- specify degree of comprehensiveness (e.g. coverage) and other completion criteria (e.g. faults)
- identify constraints (environment, staff, deadlines)
7 Item Pass/Fail Criteria
8 Suspension criteria and resumption criteria
- for all or parts of testing activities
- which activities must be repeated on resumption
25. Test Plan 4
9 Test Deliverables
- Test plan
- Test design specification
- Test case specification
- Test procedure specification
- Test item transmittal reports
- Test logs
- Test incident reports
- Test summary reports
26. Test Plan 5
10 Testing tasks
- including inter-task dependencies & special skills
11 Environment
- physical, hardware, software, tools
- mode of usage, security, office space
12 Responsibilities
- to manage, design, prepare, execute, witness, check, resolve issues, provide the environment, provide the software to test
27. Test Plan 6
13 Staffing and Training Needs
14 Schedule
- test milestones in project schedule
- item transmittal milestones
- additional test milestones (environment ready)
- what resources are needed when
15 Risks and Contingencies
- contingency plan for each identified risk
16 Approvals
- names and when approved
28. Lifecycle
Contents
Models for testing, economics of testing
High level test planning
Component Testing
Integration testing in the small
System testing (non-functional and functional)
Integration testing in the large
Acceptance testing
Maintenance testing
29. Component testing
lowest level
tested in isolation
most thorough look at detail
- error handling
- interfaces
usually done by the programmer
also known as unit, module, or program testing
30. Component test strategy 1
specify test design techniques and rationale
- from Section 3 of the standard*
specify criteria for test completion and rationale
- from Section 4 of the standard
document the degree of independence for test design
- component author, another person, from a different section, from a different organisation, non-human
*Source: BS 7925-2, Software Component Testing Standard
31. Component test strategy 2
component integration and environment
- isolation, top-down, bottom-up, or a mixture
- hardware and software
document test process and activities
- including inputs and outputs of each activity
affected activities are repeated after any fault fixes or changes
project component test plan
- dependencies between component tests
32. Component Test Document Hierarchy
[Diagram: document hierarchy]
- Component Test Strategy
- Project Component Test Plan
- Component Test Plan
- Component Test Specification
- Component Test Report
Source: BS 7925-2, Software Component Testing Standard, Annex A
33. Component test process
BEGIN → Component Test Planning → Component Test Specification → Component Test Execution → Component Test Recording → Checking for Component Test Completion → END
34. Component test process
Component test planning
- how the test strategy and project test plan apply to the component under test
- any exceptions to the strategy
- all software the component will interact with (e.g. stubs and drivers)
35. Component test process
Component test specification
- test cases are designed using the test case design techniques specified in the test plan (Section 3)
- Test case: objective, initial state of component, input, expected outcome (see the sketch below)
- test cases should be repeatable
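As an illustration only (BS 7925-2 does not prescribe any notation or tool), the four elements of a test case might be recorded for a hypothetical `withdraw` component like this, sketched with Python's unittest:

```python
import unittest

def withdraw(balance, amount):
    """Hypothetical component under test: returns the new balance."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawComponentTest(unittest.TestCase):
    def test_withdraw_within_balance(self):
        # Objective: a withdrawal within the balance is deducted correctly.
        balance = 100                         # initial state of component
        new_balance = withdraw(balance, 30)   # input
        self.assertEqual(new_balance, 70)     # expected outcome

if __name__ == "__main__":
    unittest.main()
```

Because the test fixes its own initial state and inputs, it is repeatable.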
36. Component test process
Component test execution
- each test case is executed
- the standard does not specify whether tests are executed manually or using a test execution tool
37. Component test process
Component test recording
- identities & versions of component and test specification
- actual outcome recorded & compared to expected outcome
- discrepancies logged
- repeat test activities to establish removal of the discrepancy (fault in test, or verify fix)
- record coverage levels achieved for the test completion criteria specified in the test plan
- sufficient to show test activities were carried out
38. Component test process
Checking for component test completion
- check test records against the specified test completion criteria
- if not met, repeat test activities
- may need to repeat test specification to design test cases that meet the completion criteria (e.g. white box)
39. Test design techniques
(legend: is the technique also a measurement technique? Yes / No)
“Black box”
- Equivalence partitioning (example below)
- Boundary value analysis
- State transition testing
- Cause-effect graphing
- Syntax testing
- Random testing
“White box”
- Statement testing
- Branch / Decision testing
- Data flow testing
- Branch condition testing
- Branch condition combination testing
- Modified condition decision testing
- LCSAJ testing
How to specify other techniques
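As a small worked example (the age rule and its limits are invented for illustration), equivalence partitioning and boundary value analysis applied to a rule "valid ages are 18 to 65 inclusive" would give test inputs like these:

```python
# Hypothetical rule: an applicant's age is valid from 18 to 65 inclusive.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# Equivalence partitions: below the range, within it, above it.
partition_cases = [(10, False), (40, True), (70, False)]

# Boundary values: either side of each boundary.
boundary_cases = [(17, False), (18, True), (65, True), (66, False)]

for age, expected in partition_cases + boundary_cases:
    assert is_valid_age(age) == expected, f"age {age}: expected {expected}"
print("all partition and boundary cases passed")
```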
40. Lifecycle
Contents
Models for testing, economics of testing
High level test planning
Component Testing
Integration testing in the small
System testing (non-functional and functional)
Integration testing in the large
Acceptance testing
Maintenance testing
41. Integration testing in the small
more than one (tested) component
communication between components
what the set can perform that is not possible individually
non-functional aspects if possible
integration strategy: big-bang vs incremental (top-down, bottom-up, functional)
done by designers, analysts, or independent testers
42. Big-Bang Integration
In theory:
- if we have already tested components, why not just combine them all at once? Wouldn’t this save time?
- (based on the false assumption of no faults)
In practice:
- takes longer to locate and fix faults
- re-testing after fixes is more extensive
- end result? takes more time
43. Incremental Integration
Baseline 0: tested component
Baseline 1: two components
Baseline 2: three components, etc.
Advantages:
- easier fault location and fix
- easier recovery from disaster / problems
- interfaces should have been tested in component tests, but ..
- add to tested baseline
44. Top-Down Integration
[Component hierarchy diagram: a at the top; b and c below; then d, e, f, g; then h, i, j, k, l, m; n and o at the lowest level]
Baselines:
- baseline 0: component a
- baseline 1: a + b
- baseline 2: a + b + c
- baseline 3: a + b + c + d
- etc.
Need to call lower level components not yet integrated
Stubs: simulate missing components
45. Stubs
A stub replaces a called component for integration testing (see the sketch below)
Keep it Simple
- print/display name (I have been called)
- reply to calling module (single value)
- computed reply (variety of values)
- prompt for reply from tester
- search list of replies
- provide timing delay
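A minimal sketch of a stub in Python; the component names are invented for illustration. The stub announces it was called, searches a small list of canned replies, and falls back to a single default value:

```python
# Stub standing in for a lower-level component that is not yet integrated.
def exchange_rate_stub(currency: str) -> float:
    print(f"stub called: exchange_rate({currency!r})")   # "I have been called"
    canned_replies = {"USD": 1.25, "EUR": 1.15}           # search a list of replies
    return canned_replies.get(currency, 1.0)              # single default value

# Component under integration test, calling the stub instead of the real service.
def price_in_local_currency(price: float, currency: str,
                            rate_lookup=exchange_rate_stub) -> float:
    return round(price * rate_lookup(currency), 2)

print(price_in_local_currency(10.0, "USD"))   # 12.5
```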
46. Pros & cons of top-down approach
Advantages:
- critical control structure tested first and most often
- can demonstrate the system early (show working menus)
Disadvantages:
- needs stubs
- detail left until last
- may be difficult to “see” detailed output (but should have been tested in component test)
- may look more finished than it is
47. Bottom-up Integration
[Component hierarchy diagram: same structure as for top-down integration]
Baselines:
- baseline 0: component n
- baseline 1: n + i
- baseline 2: n + i + o
- baseline 3: n + i + o + d
- etc.
Needs drivers to call the baseline configuration
Also needs stubs for some baselines
48. Drivers
Driver: test harness: scaffolding
specially written or general purpose (commercial tools)
- invoke baseline
- send any data the baseline expects
- receive any data the baseline produces (print)
each baseline has different requirements from the test driving software (see the sketch below)
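A minimal driver sketch in Python, with invented names: the driver invokes the baseline, sends it the data it expects, and prints whatever the baseline produces:

```python
# Hypothetical baseline: the lowest-level component integrated so far.
def calculate_discount(order_total: float) -> float:
    return order_total * 0.1 if order_total > 100 else 0.0

def driver() -> None:
    """Specially written test driver for this baseline."""
    test_inputs = [50.0, 100.0, 150.0, 1000.0]        # data the baseline expects
    for order_total in test_inputs:
        result = calculate_discount(order_total)      # invoke the baseline
        print(f"calculate_discount({order_total}) -> {result}")  # print its output

if __name__ == "__main__":
    driver()
```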
49. Pros & cons of bottom-up approach
Advantages:
- lowest levels tested first and most thoroughly (but should have been tested in unit testing)
- good for testing interfaces to the external environment (hardware, network)
- visibility of detail
Disadvantages:
- no working system until the last baseline
- needs both drivers and stubs
- major control problems found last
50. Minimum Capability Integration
(also called Functional)
[Component hierarchy diagram: same structure as for top-down integration]
Baselines:
- baseline 0: component a
- baseline 1: a + b
- baseline 2: a + b + d
- baseline 3: a + b + d + i
- etc.
Needs stubs
Shouldn't need drivers (if top-down)
51. Pros & cons of Minimum Capability
Advantages:
- control level tested first and most often
- visibility of detail
- real working partial system earliest
Disadvantages:
- needs stubs
52. Thread Integration
(also called functional)
[Component hierarchy diagram: same structure as for top-down integration]
the order of processing some event (interrupt, user transaction) determines the integration order
minimum capability in time
advantages:
- critical processing first
- early warning of performance problems
disadvantages:
- may need complex drivers and stubs
53. Integration Guidelines
minimise support software needed
integrate each component only once
each baseline should produce an easily verifiable result
integrate small numbers of components at once
- one at a time for critical or fault-prone components
- combine simple related components
54. Integration Planning
integration should be planned in the architectural design phase
the integration order then determines the build order
- components completed in time for their baseline
- component development and integration testing can be done in parallel - saves time
55. Lifecycle
Contents
Models for testing, economics of testing
High level test planning
Component Testing
Integration testing in the small
System testing (non-functional and functional)
Integration testing in the large
Acceptance testing
Maintenance testing
56. System testing
last integration step
functional
- functional requirements and requirements-based testing
- business process-based testing
non-functional
- as important as functional requirements
- often poorly specified
- must be tested
often done by an independent test group
57. Functional system testing
Functional requirements
- a requirement that specifies a function that a system or system component must perform (ANSI/IEEE Std 729-1983, Software Engineering Terminology)
Functional specification
- the document that describes in detail the characteristics of the product with regard to its intended capability (BS 4778 Part 2, BS 7925-1)
58. Requirements-based testing
Uses the specification of requirements as the basis for identifying tests
- the table of contents of the requirements spec provides an initial inventory of test conditions
- for each section / paragraph / topic / functional area:
• risk analysis to identify the most important / critical areas
• decide how deeply to test each functional area
59. Business process-based testing
Expected user profiles
- what will be used most often?
- what is critical to the business?
Business scenarios
- typical business transactions (birth to death)
Use cases
- prepared cases based on real situations
60. Non-functional system testing
different types of non-functional system tests:
- usability
- security
- documentation
- storage
- volume
- configuration / installation
- reliability / qualities
- back-up / recovery
- performance, load, stress
61. Performance Tests
Timing Tests (see the sketch below)
- response and service times
- database back-up times
Capacity & Volume Tests
- maximum amount or processing rate
- number of records on the system
- graceful degradation
Endurance Tests (24-hr operation?)
- robustness of the system
- memory allocation
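As one illustration of a timing test, a script might compare measured response times against a service-level target; the operation and the 2-second target below are assumptions, not part of the course material:

```python
import time

def process_request() -> None:
    """Hypothetical operation whose response time is being measured."""
    time.sleep(0.05)   # stand-in for real work

MAX_RESPONSE_SECONDS = 2.0   # assumed service-level target
SAMPLES = 20

worst = 0.0
for _ in range(SAMPLES):
    start = time.perf_counter()
    process_request()
    worst = max(worst, time.perf_counter() - start)

print(f"worst response time over {SAMPLES} samples: {worst:.3f}s")
assert worst <= MAX_RESPONSE_SECONDS, "response time exceeds the service target"
```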
62. Multi-User Tests
Concurrency Tests
- small numbers, large benefits
- detect record locking problems
Load Tests
- the measurement of system behaviour under realistic multi-user load
Stress Tests
- go beyond the limits for the system - know what will happen
- particular relevance for e-commerce
Source: Sue Atkins, Magic Performance Management
63. Usability Tests
messages tailored and meaningful to (real) users?
coherent and consistent interface?
sufficient redundancy of critical information?
within the “human envelope”? (7±2 choices)
feedback (wait messages)?
clear mappings (how to escape)?
Who should design / perform these tests?
64. Security Tests
passwords
encryption
hardware permission devices
levels of access to information
authorisation
covert channels
physical security
65. Configuration and Installation
Configuration Tests
- different hardware or software environments
- configuration of the system itself
- upgrade paths - may conflict
Installation Tests
- distribution (CD, network, etc.) and timings
- physical aspects: electromagnetic fields, heat, humidity, motion, chemicals, power supplies
- uninstall (removing installation)
66. Reliability / Qualities
Reliability
- “system will be reliable” - how to test this?
- “2 failures per year over ten years”
- Mean Time Between Failures (MTBF)
- reliability growth models
Other Qualities
- maintainability, portability, adaptability, etc.
67. Back-up and Recovery
Back-ups
- computer functions
- manual procedures (where are tapes stored?)
Recovery
- the real test of a back-up
- manual procedures unfamiliar
- should be regularly rehearsed
- documentation should be detailed, clear and thorough
68. Documentation Testing
Documentation review
- check for accuracy against other documents
- gain consensus about content
- documentation exists, in the right format
Documentation tests
- is it usable? does it work?
- user manual
- maintenance documentation
69. Lifecycle
Contents
Models for testing, economics of testing
High level test planning
Component Testing
Integration testing in the small
System testing (non-functional and functional)
Integration testing in the large
Acceptance testing
Maintenance testing
70. Integration testing in the large
Tests the completed system working in conjunction with other systems, e.g.
- LAN / WAN, communications middleware
- other internal systems (billing, stock, personnel, overnight batch, branch offices, other countries)
- external systems (stock exchange, news, suppliers)
- intranet, internet / www
- 3rd party packages
- electronic data interchange (EDI)
71. Approach
Identify risks
- which areas missing or malfunctioning would be most critical - test them first
“Divide and conquer”
- test the outside first (at the interface to your system, e.g. test a package on its own)
- test the connections one at a time first (your system and one other)
- combine incrementally - safer than “big bang” (non-incremental)
72. Planning considerations
resources
- identify the resources that will be needed (e.g. networks)
co-operation
- plan co-operation with other organisations (e.g. suppliers, technical support team)
development plan
- the integration (in the large) test plan could influence the development plan (e.g. conversion software needed early on to exchange data formats)
73. Lifecycle
Contents
Models for testing, economics of testing
High level test planning
Component Testing
Integration testing in the small
System testing (non-functional and functional)
Integration testing in the large
Acceptance testing
Maintenance testing
74. User acceptance testing
Final stage of validation
- customer (user) should perform or be closely involved
- customer can perform any test they wish, usually based on their business processes
- final user sign-off
Approach
- mixture of scripted and unscripted testing
- ‘Model Office’ concept sometimes used
75. Why customer / user involvement
Users know:
- what really happens in business situations
- complexity of business relationships
- how users would do their work using the system
- variants to standard tasks (e.g. country-specific)
- examples of real cases
- how to identify sensible work-arounds
Benefit: detailed understanding of the new system
76. User Acceptance testing
[Diagram: acceptance testing effort is distributed over the 80% of function delivered by 20% of the code; system testing effort is distributed over the 20% of function delivered by 80% of the code]
77. Contract acceptance testing
Contract to supply a software system
- agreed at contract definition stage
- acceptance criteria defined and agreed
- may not have been kept up to date with changes
Contract acceptance testing is against the contract and any documented agreed changes
- not what the users wish they had asked for!
- this system, not the wish system
78. Alpha and Beta tests: similarities
Testing by [potential] customers or representatives of your market
- not suitable for bespoke software
When software is stable
Use the product in a realistic way in its operational environment
Give comments back on the product
- faults found
- how the product meets their expectations
- improvement / enhancement suggestions?
79. Alpha and Beta tests: differences
Alpha testing
- simulated or actual operational testing at an in-house site not otherwise involved with the software developers (i.e. the developers’ site)
Beta testing
- operational testing at a site not otherwise involved with the software developers (i.e. the testers’ site, their own location)
80. Acceptance testing motto
If you don't have patience to test the system
the system will surely test your patience
81. Lifecycle
Contents
Models for testing, economics of testing
High level test planning
Component Testing
Integration testing in the small
System testing (non-functional and functional)
Integration testing in the large
Acceptance testing
Maintenance testing
82. Maintenance testing
Testing to preserve quality:
- different sequence
• development testing executed bottom-up
• maintenance testing executed top-down
• different test data (live profile)
- breadth tests to establish overall confidence
- depth tests to investigate changes and critical areas
- predominantly regression testing
83. What to test in maintenance testing
Test any new or changed code
Impact analysis
- what could this change have an impact on?
- how important is a fault in the impacted area?
- test what has been affected, but how much?
• most important affected areas?
• areas most likely to be affected?
• whole system?
The answer: “It depends”
84. Poor or missing specifications
Consider what the system should do
- talk with users
Document your assumptions
- ensure other people have the opportunity to review them
Improve the current situation
- document what you do know and find out
Track the cost of working with poor specifications
- to make the business case for better specifications
85. What should the system do?
Alternatives
- the way the system works now must be right (except for the specific change) - use the existing system as the baseline for regression tests
- look in user manuals or guides (if they exist)
- ask the experts - the current users
Without a specification, you cannot really test, only explore. You can validate, but not verify.
86. Lifecycle
Summary: Key Points
V-model shows test levels, early test design
High level test planning
Component testing using the standard
Integration testing in the small: strategies
System testing (non-functional and functional)
Integration testing in the large
Acceptance testing: user responsibility
Maintenance testing to preserve quality