2. IEEE 829 - Standard for Software Test Documentation
TEST PLANNING AND CONTROL: Master Test Plan, Level Test Plan
TEST ANALYSIS AND DESIGN: Level Test Design, Level Test Case
TEST IMPLEMENTATION AND EXECUTION: Level Test Procedure, Bug Report
EVALUATING EXIT CRITERIA AND REPORTING: Interim Test Status Report, Test Log, Level Test Report
TEST CLOSURE ACTIVITIES: Master Test Report
3. LEVELS OF TEST PLAN
Develop Master Test Plan
Develop Detailed Test Plans:
Unit Test Plan
Integration Test Plan
System Test Plan
Acceptance Test Plan
The level of a test plan defines what the test plan is being created for. The test plan document follows the same structure for each level of test plan; the only difference is the content and level of detail.
TEST PLANNING AND CONTROL
4. MASTER TEST PLAN
The purpose of the MTP is to:
provide the overall framework for all the testing activities
define the scope of the testing
identify whether there is any unnecessary duplication of testing taking place
identify the departures from the Test Process documentation set
define the approach to each stage of testing
specify the ABC project staff's responsibilities for testing activities at each stage of testing
5. Test Plan
A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
Master test plan: a test plan that typically addresses multiple test levels.
Level test plan: a test plan that typically addresses one test phase.
6. The format and content of a software test plan vary depending on the processes, standards, and test management tools being used. The following format provides a summary of what a test plan can/should contain.
1) Test Plan ID: unique number or name
2) Introduction: provide an overview of the test plan; specify the goals/objectives.
3) Test Items: modules / functions / services / features / etc.
4) Features to be tested: modules in scope for test design
5) Features not to be tested: which features are not to be tested, and why?
6) Approach: list of selected testing techniques to be applied to the above modules, with reference to the TRM (Test Responsibility Matrix)
7) Feature pass or fail criteria: description of when a feature is considered passed or failed
8) Suspension criteria
9) Test Environment: software & hardware required to test the above features
10) Test Deliverables: testing documents required to be prepared
11) Testing Tasks: necessary tasks to do before starting the testing of each feature
12) Staff & Training: names of selected test engineers & their training requirements
13) Responsibilities: work allocation to every member of the team
14) Schedule: dates & times for testing the modules
15) Risks & Mitigation: possible testing-level risks & solutions to overcome them
16) Approvals: signatures of the test plan authors & Project Manager / QA
What to test? How to test? When to test?
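For teams that keep test plans under version control, the structure above can also be captured as a machine-readable skeleton. The sketch below is a minimal, hypothetical Python example; field names and values are illustrative only and are not mandated by IEEE 829.
```python
# Minimal, hypothetical test plan skeleton mirroring the fields above.
# All names and values are illustrative.
test_plan = {
    "test_plan_id": "TP-001",
    "introduction": "Goals/objectives of testing the login module.",
    "test_items": ["Login service", "Password reset"],
    "features_to_be_tested": ["Authentication", "Session handling"],
    "features_not_to_be_tested": {"Reporting": "Owned by a separate team"},
    "approach": ["Equivalence partitioning", "Boundary value analysis"],
    "pass_fail_criteria": "All high-priority test cases pass.",
    "suspension_criteria": "Build deployment to the test environment fails.",
    "test_environment": {"os": "Ubuntu 22.04", "browser": "Chrome"},
    "test_deliverables": ["Test cases", "Test log", "Test summary report"],
    "schedule": {"start": "2024-03-01", "end": "2024-03-15"},
    "risks_and_mitigation": {"Late build delivery": "Re-plan the schedule"},
    "approvals": ["Test plan author", "Project Manager / QA"],
}
```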
7. TEST DESIGN LEVELS STRUCTURE BASED ON THE V-MODEL
Overall business requirements -> Acceptance Test Design -> Acceptance Test Execution
Software requirements -> System Test Design -> System Test Execution
High-level requirements -> Integration Test Design -> Integration Test Execution
Low-level requirements -> Component Test Design -> Component Test Execution
Coding -> Unit Test Design -> Unit Test Execution
TEST ANALYSIS AND DESIGN
8. TEST DESIGN
Test Design Phase – in software engineering, the test design phase is the process of reviewing and analyzing the test basis, selecting test design techniques, and creating the test cases, checklists and scenarios for testing the software.
Test Design Specification – a document that describes the features to be tested and specifies the list of all test scenarios or test cases that should be designed to provide the testing of the software. Basically, test design is the act of creating and writing test suites for testing software.
Test design may require all or some of:
1) Knowledge of the software, and the business area it operates in
2) Knowledge of the functionality being tested
3) Knowledge of testing techniques
4) Planning skills, to schedule the order in which the test cases should be designed given the effort, time and cost needed, or the consequences for the most important and/or risky features
9. BASIC TEST DESIGN STEPS
1. Review and analyze the test basis (Test Plan, Requirements, Mock-ups)
2. Select test design techniques
3. Create the Test Design Specification
4. Create the Test Case Specification
10. Choosing a Test Design Technique
• The internal factors that influence the decision about which technique to use are:
– Tester knowledge and experience
– Expected defects
– Test objectives
– Documentation
– Life cycle model
• The external factors are:
– Risks
– Customer requirements
– System type
– Time and budget
11. According to the IEEE 829 standard, the template structure looks as follows:
1. Test Design Specification Identifier
1.1 Purpose
1.2 References
1.3 Definitions, acronyms and abbreviations
2. Features to be Tested
3. Approach Refinements
4. Test Identification
4.1 <Test Item 1>
4.2 <Test Item …>
4.3 <Test Item N>
5. Feature Pass/Fail Criteria
Test Design Specification Structure (explained below)
12. Test Design Specification Structure
1) Test Design Specification Identifier section covers:
– Purpose of the document
– Scope of the document
– List of references, which should include references to the test plan, functional specification, test case specification, etc.
– Definitions, acronyms and abbreviations used in the Test Design Specification
2) Features to be Tested identifies the test items and describes the features and combinations of features that are the object of this design specification. A reference to the Functional Specification for each feature or combination of features should be included.
3) Approach Refinements section describes the following:
– Specific test techniques to be used for testing features or combinations of features
– Types of testing which will be provided
– Methods of analyzing test results
– Test results reporting
– Whether automation of test cases will be provided or not
– Any other information which describes the approach to testing
4) Feature Pass/Fail Criteria specifies the criteria to be used to determine whether the feature or feature combination has passed or failed. The following items can be considered as "pass/fail criteria":
1) Feature works according to stated requirements
2) Feature works correctly on the test platforms
3) Feature works correctly with other modules of the application
4) All issues with High and Medium priority are verified and closed
13. 5) Test Identification section is separated into sub-sections according to the number of test items, identifying the future documentation which will be created for testing the features or combinations of features that are the object of this design specification.
Features can be covered by test objectives in different ways depending on project needs, the approach to testing, etc. Three examples of such coverage:
– Feature covered by test cases
– Feature covered by test scenarios
– Feature covered by checklists
14. Test Case Specification
Test Specification – a detailed summary of what scenarios will be tested, how they will be tested, how often they will be tested, and so forth, for a given feature.
Contents of a Test Specification:
Revision History – this section contains information such as who created the test specification, when it was created, and when it was last updated.
Feature Description – a brief description of what area is being tested.
What is tested? – an overview of what scenarios are tested.
What is not tested? – any areas that are not being tested.
Nightly Test Cases – a list of the test cases, with a high-level description of what is tested whenever a new build becomes available.
Breakout of Major Test Areas – the most interesting part of the test specification, where testers arrange test cases according to what they are testing.
Specific Functionality Tests – tests to verify the feature is working according to the design specification. This area also includes verifying error conditions.
Security Tests – any tests that are related to security.
Accessibility Tests – any tests that are related to accessibility.
Performance Tests – tests verifying any performance requirements for the feature.
Localization / Globalization – tests to ensure the product's local and international requirements are met.
NOTE: the Test Specification document should make it easy to prioritize test cases, e.g. nightly test cases, weekly test cases and a full test pass; a small selection sketch follows below:
Nightly – must run whenever a new build is available.
Weekly – other major functionality tests, run once every three or four builds.
Lower priority – run once every major coding milestone.
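As an illustration of the prioritization noted above, the following sketch (hypothetical Python, not part of IEEE 829) selects which test cases to include in a given kind of test run; the priority tags and run types are assumptions.
```python
# Hypothetical sketch: pick test cases for a run based on their priority tag.
# Priorities follow the note above: nightly, weekly, lower (a full pass runs everything).
test_cases = [
    {"id": "TC-01", "title": "Login with valid credentials", "priority": "nightly"},
    {"id": "TC-02", "title": "Password reset e-mail", "priority": "weekly"},
    {"id": "TC-03", "title": "Profile page layout", "priority": "lower"},
]

def select_for_run(cases, run_type):
    """Return the test cases that belong in the given run type."""
    allowed = {
        "nightly": {"nightly"},
        "weekly": {"nightly", "weekly"},          # weekly runs include nightly cases
        "full": {"nightly", "weekly", "lower"},   # full test pass runs everything
    }[run_type]
    return [case for case in cases if case["priority"] in allowed]

print([c["id"] for c in select_for_run(test_cases, "weekly")])  # ['TC-01', 'TC-02']
```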
15. Test Cases
Test cases are a set of conditions or variables under which a tester will determine whether a requirement of an application is partially or fully satisfied. There must be at least one test case for each requirement for traceability.
Test Step – specifies an action to perform and the expected response of the application under test. For example: Action: type the password in the password box. Expected result: the password should be dotted / hidden.
Test Case – a list of test steps. Also defines the environmental situation and may link to related bugs, requirements, etc.
Test Scenario – usually comes directly from a business requirement or user story. A scenario contains a list of test cases and often their sequence.
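The scenario → case → step hierarchy can be modeled directly in code. The sketch below is a minimal Python illustration (class and field names are assumptions, not defined by IEEE 829), reusing the password example above.
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    action: str            # what the tester does
    expected_result: str   # what the application should do in response

@dataclass
class TestCase:
    case_id: str
    title: str
    steps: List[TestStep]
    environment: str = ""                                  # environmental situation
    linked_requirements: List[str] = field(default_factory=list)

@dataclass
class TestScenario:
    name: str              # usually derived from a user story
    cases: List[TestCase]  # often in a defined sequence

# The password example from the slide as one step inside a case and scenario.
login_case = TestCase(
    case_id="TC-LOGIN-01",
    title="Password is masked while typing",
    steps=[TestStep(action="Type the password in the password box",
                    expected_result="The password should be dotted / hidden")],
    environment="Chrome on Windows 11",
    linked_requirements=["REQ-AUTH-03"],
)
login_scenario = TestScenario(name="User logs in", cases=[login_case])
```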
16. Standard fields of sample test case template:
Test case ID: Unique ID for each test case.
Test priority (Low/Medium/High): This is useful during test execution. Test priority for business rules and functional test cases can be medium or higher, whereas minor user interface cases can be low priority.
Module Name – Mention name of main module or sub module.
Test Designed By: Name of tester
Test Designed Date: Date when the test was written.
Test Executed By: Name of tester who executed this test. To be filled in after test execution.
Test Execution Date: Date when the test was executed.
Test Title/Name: Test case title.
Test Summary/Description: Describe test objective in brief.
Pre-condition: Any prerequisite that must be fulfilled before execution of this test case. List all pre-
conditions in order to successfully execute this test case.
Dependencies: Mention any dependencies on other test cases or test requirement.
Test Steps: List all test execution steps in detail. Write the test steps in the order in which they should be executed. Make sure to provide as much detail as you can.
Test Data: Test data to be used as input for this test case. You can provide different data sets with exact values to be used as input.
Expected Result: What should be the system output after test execution?
Status (Pass/Fail): If the actual result does not match the expected result, mark this test as failed; otherwise mark it as passed.
Notes/Comments/Questions: Supports the above fields: if there are special conditions which can't be described in any of the above fields, or there are questions related to the expected or actual results, mention them here.
17. Fields of sample test case template, if necessary:
Defect ID/Link: If test status is fail, then include the link to
defect log or mention the defect number.
Test Type/Keywords: This field can be used to classify tests
based on test types. E.g. functional, usability, business rules
etc.
Requirements: Requirements for which this test case is being
written. Preferably the exact section number of the
requirement doc.
Attachments/References: This field is useful for complex
test scenarios.
Automation? (Yes/No): Whether this test case is automated
or not. Useful to track automation status when test cases are
automated.
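A minimal sketch of how the standard and optional fields above might be recorded, with the Status field derived by comparing expected and actual results; the field names and values are illustrative assumptions.
```python
# Hypothetical test case record using a subset of the fields listed above.
record = {
    "test_case_id": "TC-42",
    "module": "Login",
    "priority": "High",
    "test_steps": ["Open login page", "Enter valid credentials", "Click Login"],
    "test_data": {"user": "demo_user", "password": "********"},
    "expected_result": "User is redirected to the dashboard",
    "automation": True,            # optional field: is the case automated?
    "requirements": ["SRS 3.2.1"], # optional field: requirement reference
}

def mark_status(expected, actual):
    """Pass if the actual result matches the expected result, otherwise fail."""
    return "Pass" if actual == expected else "Fail"

record["actual_result"] = "User is redirected to the dashboard"
record["status"] = mark_status(record["expected_result"], record["actual_result"])
print(record["status"])  # Pass
```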
18. Level Test Procedure
The purpose of an LTPr (Level Test Procedure) is to specify the steps for executing a set of test cases or, more generally, the steps used to exercise a software product or software-based system item in order to evaluate a set of features.
A Test Procedure Specification should have the following elements:
1. Identification (each Test Procedure Specification should be assigned a unique identifier)
2. Purpose (explain the purpose of the test procedure and reference any test cases it executes)
3. Special Requirements (list any special hardware, software or training requirements for this procedure)
4. Procedure Steps (the actual steps of the procedure are described; IEEE lists several steps):
Log: explain the methods / formats for logging the results of test execution.
Set up: actions necessary to prepare for the execution of the procedure.
Start: actions necessary to begin execution of the procedure.
Proceed: actions necessary during the execution of the procedure.
Measure: explain how test measurements need to be made.
Shutdown: actions necessary to suspend testing when some unscheduled event happens.
Restart: actions necessary to restart the procedure.
Stop: actions necessary to bring execution to a systematic halt.
TEST IMPLEMENTATION AND EXECUTION
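The Procedure Steps section maps naturally onto a scripted procedure. The sketch below is a hypothetical Python outline of a subset of those steps (function names and logged messages are assumptions for illustration, not prescribed by the standard).
```python
# Hypothetical outline of a level test procedure following the steps listed above.
import logging

logging.basicConfig(filename="test_procedure.log", level=logging.INFO)

def set_up():
    """Set up: actions necessary to prepare for execution (e.g. deploy the test build)."""
    logging.info("Environment prepared")

def start():
    """Start: actions necessary to begin execution of the procedure."""
    logging.info("Procedure started")

def proceed():
    """Proceed: actions performed during execution (run the referenced test cases)."""
    logging.info("Executing test cases TC-01..TC-05")

def measure():
    """Measure: how test measurements are made (e.g. record response times)."""
    logging.info("Response time recorded: 120 ms")

def stop():
    """Stop: bring execution to a systematic halt and record the outcome."""
    logging.info("Procedure completed")

if __name__ == "__main__":
    set_up(); start(); proceed(); measure(); stop()
```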
19. ID Unique identifier given to the defect.
Project Project name.
Release Version Release version of the product. (e.g. 1.2.3)
Module Specific module of the product where the defect was detected.
Detected Build Version Build version of the product where the defect was detected (e.g. 1.2.3.5)
Summary Summary of the defect. Keep this clear and concise.
Description Detailed description of the defect. Describe as much as possible but without repeating anything
or using complex words. Keep it simple but comprehensive.
Steps to Reproduce Step by step description of the way to reproduce the defect. Number the steps.
Actual Result The actual result you received when you followed the steps.
Expected Results The expected results.
Attachments Attach any additional information like screenshots and logs.
Remarks Any additional comments on the defect.
Defect Severity The seriousness of the defect with respect to functionality:
High (Show Stopper): without fixing this defect, the tester is not able to continue testing.
Medium: testing can continue, but the defect must be fixed.
Low: testing can continue; the defect may or may not be fixed.
Defect Priority The importance of defect fixing with respect to customer interest.
Reported By The name of the person who reported the defect.
Assigned To The name of the person that is assigned to analyze/fix the defect.
Status The status of the defect. (new/reopen)
Fixed Build Version Build version of the product where the defect was fixed (e.g. 1.2.3.9)
DEFECT REPORT
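A defect report with the fields above is usually tracked as a structured record in a defect-tracking tool. The sketch below is a hypothetical Python representation; field names mirror the table and all values are invented for illustration.
```python
# Hypothetical defect report record; all values are illustrative.
defect = {
    "id": "DEF-1024",
    "project": "Online Store",
    "release_version": "1.2.3",
    "module": "Checkout",
    "detected_build_version": "1.2.3.5",
    "summary": "Order total not updated after removing an item",
    "steps_to_reproduce": [
        "Add two items to the cart",
        "Open the cart and remove one item",
        "Observe the order total",
    ],
    "actual_result": "Total still includes the removed item",
    "expected_result": "Total reflects only the remaining item",
    "severity": "Medium",     # able to continue testing, but mandatory to fix
    "priority": "High",       # importance of fixing with respect to customer interest
    "reported_by": "Tester A",
    "assigned_to": "Developer B",
    "status": "new",
    "fixed_build_version": None,  # filled in once the defect is fixed
}
```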
20. Test Log
The Test Log details which test cases were run, who ran the tests, in what order they were run, and whether individual tests passed or failed.
The purpose of the Level Test Log is to provide a
chronological record of relevant details about the
execution of tests. An automated tool may capture
all or part of this information.
EVALUATING EXIT CRITERIA AND REPORTING
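An automated harness can capture this chronological record itself. A minimal sketch, assuming a simple CSV layout (the columns are an assumption, not an IEEE-mandated format):
```python
# Minimal sketch of a chronological test log written as CSV.
import csv
from datetime import datetime, timezone

def log_execution(path, test_id, executed_by, outcome):
    """Append one execution record: timestamp, test case, tester, pass/fail."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), test_id, executed_by, outcome]
        )

log_execution("test_log.csv", "TC-LOGIN-01", "Tester A", "Pass")
log_execution("test_log.csv", "TC-LOGIN-02", "Tester A", "Fail")
```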
21. Interim Test Status Report
The purpose of the ITSR is to summarize the results of the designated testing activities and optionally to provide evaluations and recommendations based on these results.
Eight interim reports:
1. Functional Testing Status
2. Functions Working Timeline
3. Expected versus Actual Defects Uncovered Timeline
4. Defects Uncovered versus Corrected Gap Timeline
5. Average Age of Uncorrected Defects by Type
6. Defect Distribution
7. Relative Defect Distribution
8. Testing Action
22. Functional Testing Status Report
This report will show the percentages of the functions which have been:
– Fully tested
– Tested with open defects
– Not tested
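These percentages can be computed directly from the per-function test status. A small illustrative sketch in Python; the status labels and function names are assumptions.
```python
# Illustrative calculation of the Functional Testing Status percentages.
from collections import Counter

# Status of each function: "fully_tested", "open_defects" or "not_tested".
function_status = {
    "login": "fully_tested",
    "checkout": "open_defects",
    "search": "fully_tested",
    "reports": "not_tested",
}

counts = Counter(function_status.values())
total = len(function_status)
for status in ("fully_tested", "open_defects", "not_tested"):
    print(f"{status}: {100 * counts[status] / total:.0f}%")
# fully_tested: 50%, open_defects: 25%, not_tested: 25%
```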
23. Functions Working Timeline
This report will show the actual plan to have all functions working versus the current status of functions working. An ideal format could be a line graph.
24. Expected versus Actual Defects Uncovered Timeline
This report compares the number of defects actually being uncovered against the number of defects expected from the planning stage.
25. This report, ideally in a line graph format, will
show the number of defects uncovered versus
the number of defects being corrected and
accepted by the testing group.
If the gap grows too large, the project may not
be ready when originally planned.
Defects Detected versus Corrected Gap
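The gap is simply the cumulative count of defects uncovered minus the cumulative count corrected, tracked over time. A small sketch of that arithmetic; the weekly figures are invented example data.
```python
# Illustrative gap calculation: cumulative uncovered minus cumulative corrected.
from itertools import accumulate

uncovered_per_week = [12, 15, 10, 8]   # defects found each week (example data)
corrected_per_week = [5, 10, 12, 11]   # defects fixed and accepted each week

cum_uncovered = list(accumulate(uncovered_per_week))
cum_corrected = list(accumulate(corrected_per_week))
gap = [u - c for u, c in zip(cum_uncovered, cum_corrected)]
print(gap)  # [7, 12, 10, 7] -- a growing gap signals schedule risk
```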
26. This report will show the average days of
outstanding defects by type. In the planning stage, it
is beneficial to determine the acceptable open days
by defect type.
Average Age of Uncorrected Defects by Type
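Average age is the mean number of days each still-open defect of a given type has been outstanding. A small sketch of the calculation under assumed example data:
```python
# Illustrative average-age calculation for uncorrected (still open) defects, by type.
from datetime import date
from collections import defaultdict

today = date(2024, 6, 1)  # report date (example)
open_defects = [
    {"type": "functional", "opened": date(2024, 5, 20)},
    {"type": "functional", "opened": date(2024, 5, 28)},
    {"type": "usability", "opened": date(2024, 5, 10)},
]

ages_by_type = defaultdict(list)
for d in open_defects:
    ages_by_type[d["type"]].append((today - d["opened"]).days)

for defect_type, ages in ages_by_type.items():
    print(defect_type, sum(ages) / len(ages))
# functional 8.0 (days), usability 22.0 (days)
```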
27. This report will show the defect distribution by function or module. It can also show items such
as numbers of tests completed.
Defect Distribution
28. This report takes the previous report (Defect Distribution) and normalizes the level of defects. For example, one application might be more in-depth than another and would probably have a higher number of defects; however, when normalized over the number of functions or lines of code, it would show a more accurate level of defects.
Relative Defect Distribution
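Normalization simply divides the raw defect count by a size measure such as the number of functions or lines of code, so that larger or deeper modules can be compared fairly. A small sketch with invented figures (defect density per KLOC):
```python
# Illustrative normalization of defect counts by module size (defects per KLOC).
modules = {
    # module: (defects found, thousands of lines of code)
    "billing":   (40, 20.0),
    "reporting": (12, 3.0),
}

for name, (defects, kloc) in modules.items():
    print(f"{name}: {defects} defects, {defects / kloc:.1f} defects/KLOC")
# billing has more raw defects, but reporting has the higher defect density (4.0 vs 2.0)
```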
This report can show many different things, including possible shortfalls in
testing. Examples of data to show might be number of Sev 1 defects, tests that
are behind schedule, and other information that would present an accurate testing
picture.
Testing Action
29. Level Test Report (LTR)
The purpose of the Level Test Report (LTR) is to summarize the results of the designated testing activities and to provide evaluations and recommendations based on these results.
Master Test Report (MTR)
The purpose of the MTR is to summarize the results of the levels of the designated testing activities and to provide evaluations based on these results. This report may be used by any organization using the MTP. Whenever an MTP is generated and implemented, there needs to be a corresponding MTR that describes the results of the MTP implementation.
TEST CLOSURE ACTIVITIES