2. Independent and integrated testing
We talked about independent testing from the perspective of individual tester
psychology. In this chapter, we'll look at the organizational and managerial
implications of independence.
The approaches to organizing a test team vary, as do the places in the
organization structure where the test team fits. Since testing is an assessment of
quality, and since that assessment is not always positive, many organizations
strive to create an organizational climate where testers can deliver an inde-
pendent, objective assessment of quality.
3. When thinking about how independent the test team is, recognize that
independence is not an either/or condition, but a continuum. At one end
of the continuum lies the absence of independence, where the
programmer performs testing within the programming team.
Moving toward independence, you find an integrated tester or group of
testers working alongside the programmers, but still within and reporting
to the development manager. You might find a team of testers who are
independent and outside the development team, but reporting to project
management.
4. Working as a test leader
We have seen that the location of a test team within a project
organization can vary widely. Similarly there is wide variation in the
roles that people within the test team play. Some of these roles occur
frequently, some infrequently. Two roles that are found within many
test teams are those of the test leader and the tester, though the same
people may play both roles at various points during the project. Let's
take a look at the work done in these roles, starting with the test
leader.
5. Defining the skills test staff need
People involved in testing need basic professional and social qualifications
such as literacy, the ability to prepare and deliver written and verbal
reports, the ability to communicate effectively, and so on. Going beyond
that, when we think of the skills that testers need, three main areas come
to mind:
• Application or business domain: A tester must understand the intended
behavior, the problem the system will solve, the process it will automate
and so forth, in order to spot improper behavior while testing and
recognize the 'must work' functions and features.
• Technology: A tester must be aware of issues, limitations and
capabilities of the chosen implementation technology, in order to
effectively and efficiently locate problems and recognize the 'likely to
fail' functions and features.
• Testing: A tester must know the testing topics discussed in this book -
and often more advanced testing topics - in order to effectively and
efficiently carry out the test tasks assigned.
6. TEST PLANS, ESTIMATES AND STRATEGIES
Let's look closely at how to prepare a test plan, examining
issues related to planning for a project, for a test level or phase, for a
specific test type and for test execution. We'll examine typical
factors that influence the effort related to testing, and see two
different estimation approaches: metrics-based and expert-based.
We'll discuss selecting test strategies and ways to establish adequate
exit criteria for testing. In addition, we'll look at various tasks related
to test preparation and execution that need planning.
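The difference between the two estimation approaches mentioned above can be illustrated with a small sketch. The function below is a metrics-based estimate only: it projects effort for a new test cycle from historical averages. All figures, the function name, and the rework factor are hypothetical, invented for illustration, not a standard formula.

```python
# A minimal sketch of a metrics-based test estimate: project the effort for a
# new test cycle from historical averages. All figures are hypothetical.

def metrics_based_estimate(num_test_cases: int,
                           hours_per_case: float,
                           rework_factor: float) -> float:
    """Estimate total test effort in person-hours.

    hours_per_case: historical average effort to design, run and log one case.
    rework_factor:  multiplier covering re-testing after defect fixes
                    (e.g. 1.3 means 30% extra effort for confirmation tests).
    """
    return num_test_cases * hours_per_case * rework_factor

# Example: 200 test cases, 1.5 hours each on past projects, 30% rework.
effort = metrics_based_estimate(200, 1.5, 1.3)
print(f"{effort:.0f} person-hours")  # prints: 390 person-hours
```

An expert-based estimate, by contrast, would come from asking the people who will do the work; in practice the two approaches are often used together as a cross-check.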
7. The purpose and substance of test plans
While people tend to have different definitions of what goes in a test plan,
for us a test plan is the project plan for the testing work to be done. It is
not a test design specification, a collection of test cases or a set of test
procedures; in fact, most of our test plans do not address that level of
detail.
Why do we write test plans? We have three main reasons.
First, writing a test plan guides our thinking. We find that if we can
explain something in words, we understand it. If not, there's a good chance
we don't.
Writing a test plan forces us to confront the challenges that await us and
focus our thinking on important topics. In Chapter 2 of his brilliant and
essential book on software engineering management, The Mythical Man-Month,
Fred Brooks explains the importance of careful estimation and planning
for testing.
8. CONFIGURATION MANAGEMENT
Configuration management is a topic that often perplexes new practitioners,
but, if you ever have the bad luck to work as a tester on a project where this
critical activity is handled poorly, you'll never forget how important it is. Briefly
put, configuration management is in part about determining clearly what the
items are that make up the software or system. These items include source code,
test scripts, third-party software, hardware, data and both development and test
documentation. Configuration management is also about making sure that these
items are managed carefully, thoroughly and attentively throughout the entire
project and product life cycle.
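The core idea, determining clearly which items make up the system under test and tracking them throughout the life cycle, can be sketched in a few lines. This is a toy illustration, not a real configuration-management tool; the item names and version labels are invented.

```python
# A toy sketch of the core configuration-management idea: every item that
# makes up the test object is identified and version-tracked, so we can
# always answer "exactly what are we testing?" All items are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class ConfigurationItem:
    name: str        # e.g. a source file, test script, or document
    kind: str        # "source", "test script", "document", "data", ...
    version: str     # unique, traceable version label

baseline = [
    ConfigurationItem("billing.c", "source", "1.4"),
    ConfigurationItem("smoke_test.py", "test script", "2.0"),
    ConfigurationItem("test_plan.doc", "document", "1.1"),
]

def report(items):
    """Map each identified item to the version in the current baseline."""
    return {f"{item.kind}:{item.name}": item.version for item in items}

print(report(baseline))
```

In a real project this bookkeeping is done by version-control and build tools rather than by hand, but the question the report answers is the same.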
9. RISK AND TESTING
Risks and levels of risk
Risk is a word we all use loosely, but what exactly is risk? Simply put,
it's the possibility of a negative or undesirable outcome. In the future, a
risk has some likelihood between 0% and 100%; it is a possibility, not a
certainty.
Product risks
You can think of a product risk as the possibility that the system or
software might fail to satisfy some reasonable customer, user, or
stakeholder expectation.
Project risks
To deal with the project risks that apply to testing, we can use the same
concepts we apply to identifying, prioritizing and managing product risks.
Tying it all together for risk management
We can deal with test-related risks to the project and product by applying
some straightforward, structured risk management techniques.
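One common structured technique of the kind referred to above is to score each identified risk by likelihood and impact and test the highest-scoring risks first. The sketch below assumes simple 1-5 scales and invented risk items; real risk analyses use whatever scales and categories the team agrees on.

```python
# A sketch of one structured risk-management technique: score each product
# risk by likelihood and impact, then prioritize testing by the product of
# the two. Scales (1-5) and risk items are invented for illustration.

risks = [
    # (risk description, likelihood 1-5, impact 1-5)
    ("Payment calculation rounds incorrectly", 4, 5),
    ("Report footer misaligned",               3, 1),
    ("Login fails under heavy load",           2, 5),
]

def prioritize(risk_table):
    """Order risks by priority number = likelihood * impact, highest first."""
    return sorted(risk_table, key=lambda r: r[1] * r[2], reverse=True)

for description, likelihood, impact in prioritize(risks):
    print(f"{likelihood * impact:>2}  {description}")
```

The ordering then drives the test effort: the payment-calculation risk gets the earliest and most thorough testing, while the cosmetic footer issue may get only a quick check.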
10. INCIDENT MANAGEMENT
When running a test, you might observe actual results that vary from
expected results. This is not a bad thing - one of the major goals of
testing is to find problems. Different organizations have different names
to describe such situations. Commonly, they're called incidents, bugs,
defects, problems or issues.
To be precise, we sometimes draw a distinction between
incidents on the one hand and defects or bugs on the other. An
incident is any situation where the system exhibits
questionable behavior, but often we refer to an incident as a
defect only when the root cause is some problem in the item
we're testing.
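The distinction above can be made concrete with a small sketch: everything questionable is logged as an incident, and only after root-cause analysis is it classified as a defect in the test object. The class, field names, and cause labels below are hypothetical, chosen for illustration rather than taken from any incident-management tool.

```python
# A sketch of the incident-vs-defect distinction: an incident is any
# questionable behavior; it becomes a "defect" only once the root cause is
# traced to the item under test. Names and cause labels are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Incident:
    summary: str
    root_cause: Optional[str] = None   # unknown until analysed

    def classification(self) -> str:
        if self.root_cause is None:
            return "incident (cause not yet known)"
        if self.root_cause == "test object":
            return "defect"
        return f"incident ({self.root_cause})"

report = Incident("Total shown as -1 on summary screen")
print(report.classification())   # prints: incident (cause not yet known)
report.root_cause = "test object"
print(report.classification())   # prints: defect
```

Note that an incident whose root cause turns out to lie elsewhere, say in the test environment or the test data, remains an incident worth recording, even though it is not a defect in the item being tested.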