TD AM Tutorial
4/30/13, 8:30 AM

Management Issues in Test
Automation
Presented by:
Dorothy Graham
Software Test Consultant

Brought to you by:

340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
Dorothy Graham
With more than thirty years in testing, Dorothy Graham is coauthor of four books—Software Inspection, Software Test Automation, Foundations of Software Testing, and Experiences of Test Automation: Case Studies of Software Test Automation. Dot was a founding member of the ISEB Software Testing Board, a member of the working party that developed the first ISTQB Foundation Syllabus, and has served on the boards of conferences and publications in software testing. A popular and entertaining speaker at conferences and seminars worldwide, she has been coming to STAR conferences since the first one in 1992. Dot holds the European Excellence Award in Software Testing. Learn more about Dot at DorothyGraham.co.uk.
Management Issues in Test Automation

Contents
Session 0: Introduction to the tutorial
Tutorial objectives
What we cover (and don’t cover) today
Session 1: Planning and Managing Test Automation
Responsibilities
Pilot project
Test automation objectives (and exercise)
Return on Investment (ROI)
Session 2: Technical Issues for Managers
Testware architecture
Scripting, keywords and Domain-Specific Test Language (DSTL)
Automating more than execution

Session 3: Final Advice, Strategy and Conclusion
Final advice
Strategy exercise
Conclusion

Appendix (useful stuff)
That’s no reason to automate (Better Software article)
Man and Machine, Jonathan Kohl (Better Software)
Technical vs non-technical skills in test automation

Management Issues
in Test Automation
Prepared and presented by

Dorothy Graham
www.DorothyGraham.co.uk
email: info@dorothygraham.co.uk
Twitter: @DorothyGraham
© Dorothy Graham 2013

Objectives of this tutorial
•  help you achieve better success in automation
–  independent of any particular tool

•  mainly management but a few technical issues
–  responsibilities, pilot project
–  objectives for automation
–  Return on Investment (ROI)
–  critical technical issues for managers
–  what works in practice (case studies)

•  help you plan an effective automation strategy


Tutorial contents

1) Planning & Managing Test Automation
2) Technical Issues for Managers
3) Final advice and Conclusion


Shameless commercial plug

Part 1: How to do automation – still relevant today, though we plan to update it at some point.
New book!

www.DorothyGraham.co.uk
info@dorothygraham.co.uk


What is today about? (and not about)
•  test execution automation (not other tools)
•  I will NOT cover:
–  demos of tools (time, which one, expo)
–  comparative tool info / selecting a tool*

•  at the end of the day
–  understand management issues
–  be aware of critical technical issues
–  have your own automation objectives
–  plan your own automation strategy
* I will email you Ch 10 on request – info@dorothygraham.co.uk


About you
•  your Summary and Strategy document
–  where are you now with your automation?
–  what are your most pressing automation
problems?
–  why are you here today?
•  your objectives for this tutorial



Management Issues in Test Automation

Planning & Managing Test Automation


Management Issues in Test Automation – Managing

Contents
Responsibilities
Pilot project
Test automation objectives
Return on Investment (ROI)


What is an automated test?
•  a test!
–  designed by a tester for a purpose

•  test is executed
–  implemented / constructed to run automatically
using a tool
–  could be run manually also

•  who decides what tests to run?
•  who decides how a test is run?

Existing perceptions of automation skills
•  many books & articles don’t mention
automation skills
–  or assume that they must be acquired by testers

•  test automation is technical in some ways
•  using the test execution tool directly (script writing)
•  designing the testware architecture (framework /
regime)
•  debugging automation problems

–  this work requires technical skill
–  most people now realise this (but many still don’t)
See article: “Technical vs non-technical skills in test automation”


Responsibilities

Testers
•  test the software
–  design tests
–  select tests for automation
•  requires planning / negotiation
•  execute automated tests
–  should not need detailed technical expertise
•  analyse failed automated tests
–  report bugs found by tests
–  problems with the tests may need help from the automation team

Automators
•  automate tests (requested by testers)
•  support automated testing
–  allow testers to execute tests
–  help testers debug failed tests
–  provide additional tools (homegrown)
•  predict
–  maintenance effort for software changes
–  cost of automating new tests
•  improve the automation
–  more benefits, less cost

Test manager’s dilemma
•  who should undertake automation work
–  not all testers can automate (well)
–  not all testers want to automate
–  not all automators want to test!

•  conflict of responsibilities
–  automate tests vs. run tests manually

•  get additional resources as automators?
–  contractors? borrow a developer? tool vendor?


Roles within the automation team
•  Testware architect
–  designs the overall structure for the automation

•  Champion
–  “sells” automation to managers and testers

•  Tool specialist
–  technical aspects, licensing, updates to the tool

•  Automated test (& script) developers
–  write new keyword scripts as needed
–  debug automation problems

Agile automation: Lisa Crispin
–  starting point: buggy code, new functionality
needed, whole team regression tests manually
–  testable architecture: (open source tools)
•  want unit tests automated (TDD), start with new code
•  start with GUI smoke tests - regression
•  business logic in middle level with FitNesse

–  100% regression tests automated in one year
•  selected set of smoke tests for coverage of stories

–  every 6 months, an engineering sprint on the automation
–  key success factors
•  management support & communication
•  whole team approach, celebration & refactoring


Automation and agile
•  agile automation: apply agile principles to
automation
–  multidisciplinary team
–  automation sprints
–  refactor when needed

•  fitting automation into agile development
–  ideal: automation is part of “done” for each sprint
•  Test-Driven Design = write and automate tests first

–  alternative: automation in the following sprint ->
•  may be better for system level tests
See www.satisfice.com/articles/agileauto-paper.pdf (James Bach)


Automation in agile/iterative development

[Diagram: successive releases contain features A; A+B; A+B+C; … up to A–F. For each release, testers manually test the new features of that release; automators automate the best of those tests for regression; testers then run the automated regression tests against each later release.]


Requirements for agile test framework
•  Support manual and automated testing
–  using the same test construction process

•  Support fully manual execution at any time
–  requires good naming convention for components

•  Support manual + automated execution
–  so test can be used before it is 100% automated

•  Implement reusable objects
•  Allow “stubbing” objects before GUI available
Source: Dave Martin, LDSChurch.org, email

Management Issues in Test Automation – Managing

Contents
Responsibilities
Pilot project
Test automation objectives
Return on Investment (ROI)


A tale of two projects: Ane Clausen
–  Project 1: 5 people part-time, within test group
•  no objectives, no standards, no experience, unstable
•  after 6 months was closed down

–  Project 2: 3 people full time, 3-month pilot
•  worked on two (easy) insurance products, end to end
•  1st month: learn and plan, 2nd & 3rd months: implement
•  started with simple, stable, positive tests, easy to do
•  close cooperation with business, developers, delivery
•  weekly delivery of automated Business Process Tests

–  after 6 months, automated all insurance products

Pilot project
•  reasons
–  you’re unique
–  many variables /
unknowns at start

•  benefits
–  find the best way for you
(best practice)
–  solve problems once
–  establish confidence
(based on experience)
–  set realistic targets

•  objectives
–  demonstrate tool value
–  gain experience / skills
in the use of the tool
–  identify changes to
existing test process
–  set internal standards
and conventions
–  refine assessment of
costs and achievable
benefits


What to explore in the pilot
•  build / implement automated tests (architecture)
–  different ways to build stable tests (e.g. 10 – 20)

•  maintenance
–  different versions of the application
–  reduce maintenance for most likely changes

•  failure analysis
–  support for identifying bugs
–  coping with common bugs affecting many
automated tests
Also: naming conventions, reporting results, measurement


After the pilot…
•  having processes & standards is only the start
–  30% on new process
–  70% on deployment

Source: Erik van Veenendaal, Successful Test Process Improvement

•  marketing, training, coaching
•  feedback, focus groups, sharing what’s been done

•  the (psychological) Change Equation
–  change only happens if (x + y + z) > w
x = dissatisfaction with the current state
y = shared vision of the future
z = knowledge of the steps to take to get from here to there
w = psychological / emotional cost to change for this person


Management Issues in Test Automation – Managing

Contents
Responsibilities
Pilot project
Test automation objectives
Return on Investment (ROI)

An automation effort
•  is a project
–  with goals, responsibilities, and monitoring
–  but not just a project – ongoing effort is needed

•  not just one effort – different projects
–  when acquiring a tool – pilot project
–  when anticipated benefits have not materialized
–  different projects at different times
•  with different objectives

•  objectives are important for automation efforts
–  where are we going? are we getting there?


Efficiency and effectiveness

[Quadrant diagram: effectiveness (low to high) against efficiency (low to high).]
•  good but slow testing: manual testing
•  good, fast testing: automated – the greatest benefit
•  poor, fast testing: not good, but common
•  poor, slow testing: the worst quadrant

Good objectives for automation?
–  run regression tests evenings and weekends
–  give testers a new skill / enhance their image
–  run tests tedious and error-prone if run manually
–  gain confidence in the system
–  reduce the number of defects found by users

Test Automation Objectives Exercise

The following are some possible test automation objectives. Evaluate each objective – is it a suitable objective for automation? If not, why not? Which are already in place in your own organisation?

For each objective below, note whether it is a good automation objective (if not, why not) and whether it is already in place. The first is answered as an example:

•  Achieve faster performance for the system
   NO – this is not an objective for test execution automation, nor is it an objective for performance testing! Performance test tools may help by giving the measurements to see whether the system is faster.
•  Achieve good results and quick payback with no additional resources, effort or time
•  Automate all tests
•  Build a long-lasting automation regime that is easy to maintain
•  Easy to add new automated tests
•  Ensure repeatability of regression tests
•  Ensure that we meet our release deadlines
•  Find more bugs
•  Find defects in less time
•  Free testers from repeated (boring) test execution to spend more time in test design
•  Improve our testing
•  Reduce elapsed time for testing by x%
•  Reduce the cost and time for test design
•  Reduce the number of test staff
•  Run more tests
•  Run regression tests more often
•  Run tests every night on all PCs
•  Achieve a positive Return on Investment in no more than <x> test iterations (where x = ?)
•  Other objectives:

Reduce test execution time

[Diagram: relative effort for edit tests (maintenance), set-up, execute, analyse failures, and clear-up, shown for manual testing, the same tests automated, and more mature automation. Automation shrinks the execute portion, but the other activities still take time until the automation matures.]

Automate x% of the tests
•  are your existing tests worth automating?
–  if testing is in chaos, automating gives you faster
chaos

•  which tests to automate (first)?
•  what % of manual tests should be automated?
–  “100%” sounds impressive but may not be wise

•  what else can be automated
–  automation can do things not possible or practical
in manual testing!


Manual vs automated

[Diagram: two overlapping sets, manual tests and automated tests. The overlap is manual tests automated (% manual). Manual-only areas: tests not automated yet, and tests not worth automating. Automated-only area: tests (& verification) not possible to do manually, e.g. exploratory test automation.]

Success = find lots of bugs?
•  tests find bugs, not automation
•  automation is a mechanism for running tests
•  the bug-finding ability of a test is not affected
by the manner in which it is executed
•  this can be a dangerous objective
–  especially for regression automation!


When is “find more bugs” a good
objective for automation?
•  objective is “fewer regression bugs missed”
•  when the first run of a given test is
automated
–  MBT, Exploratory test automation, automated
test design
–  keyword-driven (e.g. users populate
spreadsheet)

•  find bugs in parts we wouldn’t have tested?

Good objectives for test automation
•  become measurable quality attributes for
automation
•  realistic and achievable
•  short and long term
•  regularly re-visited and revised
•  should be different objectives for testing and
for automation
•  automation should support testing activities


Quality attributes for automation
•  related to objectives
•  measurable (see Tom Gilb’s work)
•  examples
–  maintenance time for testware
–  failure analysis time
–  improved support for testers
–  coverage of system tested by automation
–  increasing EMTE

EMTE – what is it?
•  Equivalent Manual Test Effort
–  given a set of automated tests,
–  how much effort would it take
•  IF those tests were run manually

•  note
–  you would not actually run these tests manually
–  EMTE = what you could have tested manually
•  and what you did test automatically

–  used to show test automation benefit


EMTE – how does it work?

[Diagram: with manual testing there is only time to run the tests 1.5 times in the period. Automating the same tests and still running them 1.5 times doesn’t make sense – automated tests can be run more often.]

EMTE – how does it work? (2)

[Diagram: the automated tests are run many times over in the same period; each run adds the effort the tests would have cost manually, so the accumulated EMTE far exceeds what manual testing could have covered.]


EMTE example
•  example
–  automated tests take 2 hours
–  if those same tests were run manually: 4 days

•  frequency
–  automated tests run every day for 2 weeks
(including once at the weekend), 11 times

•  calculation
–  EMTE = 11 runs × 4 days = 44 days of equivalent manual test effort (for 22 hours of machine time)
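This arithmetic is easy to keep as a running total; a minimal sketch in Python, using the figures from the example above:

  # EMTE sketch: each automated run of a test set earns the effort the
  # same tests would have taken to run manually.
  manual_effort_days = 4     # one full manual run of these tests
  automated_runs = 11        # daily for 2 weeks, including one weekend run

  emte_days = automated_runs * manual_effort_days
  print(f"EMTE = {automated_runs} runs x {manual_effort_days} days = {emte_days} days")
  # -> EMTE = 11 runs x 4 days = 44 days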

Management Issues in Test Automation – Managing

Contents
Responsibilities
Pilot project
Test automation objectives
Return on Investment (ROI)


Is this Return on Investment (ROI)?
•  tests are run more often
•  tests take less time to run
•  it takes less human effort to run tests
•  we can test (cover) more of the system
•  we can run the equivalent of days / weeks of manual testing in a few minutes / hours
•  faster time to market

ROI = (benefit – cost) / cost

these are (good) benefits, but they are not ROI
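As a worked instance of the formula, a minimal sketch in Python (round figures similar to the bank case study later in this session; exact definitions of benefit and cost depend on your context):

  # ROI sketch: benefit = value of testing effort saved or gained;
  # cost = total automation spend (tooling, build, maintenance).
  cost = 850_000        # illustrative automation spend
  benefit = 8_000_000   # illustrative savings attributed to automation

  roi = (benefit - cost) / cost
  print(f"ROI = {roi:.0%}")   # -> ROI = 841%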

How important is ROI?
•  ROI can be dangerous
–  easiest way to measure: tester time
–  may give impression that tools replace people

•  “automation is an enabler for success, not a
cost reduction tool”
•  Yoram Mizrachi, “Planning a mobile test automation
strategy that works”, ATI magazine, July 2012

•  many achieve lasting success without
measuring ROI (depends on your context)
–  need to be aware of benefits (and publicize them)


An example comparative benefits chart

[Bar chart comparing manual vs automated testing on four measures: execution speed – 14 x faster; times run – 5 x more often; data variety – 4 x more data; tester work – 12 x less effort.]

ROI spreadsheet – email me for a copy

Why measure automation ROI?
•  to justify and confirm starting automation
–  business case for purchase/investment decision,
to confirm ROI has been achieved e.g. after pilot
–  both compare manual vs automated testing

•  to monitor on-going automation
–  for increased efficiency, continuous improvement
–  build time, maintenance time, failure analysis time,
refactoring time
•  on-going costs – what are the benefits?


MBT @ ESA: Stefan Mohacsi, Armin Beer
–  home-grown tool interfaced to commercial tools
•  Model-Based Testing and Test Case Generation
•  layers of abstraction for maintainability

–  define model before software is ready
•  capture and assign GUI objects later
•  developers build in testability

–  ROI calculations
•  invest 460 hours in automation infrastructure
•  break-even after 4 test cycles

Example ROI graph using MBT

[Line graph: people-hours (0–1400) over six test cycles, manual hours vs automated hours. Manual effort grows steadily each cycle; automated effort starts higher because of the infrastructure investment but grows more slowly, crossing below the manual line at break-even after the fourth cycle.]

Source: Stefan Mohacsi & Armin Beer

Database testing: Henri van de Scheur
–  tool developed in-house (now open source)
•  agreed requirements with relevant people up front
•  9 months, 4 developers in Java (right people)
•  good architecture, start with quick wins
–  flexible configuration, good reporting, metrics used to improve

–  results: 2400 times more efficient
•  from: 20 people run 40 tests on 6 platforms in 4 days
•  to: 1 person runs 200 tests on 10 platforms in 1 day
•  quick dev tests, nightly regression, release tests
•  life cycle of automated tests
•  little maintenance, machines used 24x7, better quality

Large S Africa bank: Michael Snyman
•  was project-based, too late, lessons not learned
–  “our shelves were littered with tools..”

•  2006: automation project, resourced, goals
–  formal automation process

•  ROI after 3 years
–  US$4m on testing project, automation $850K
–  savings $8m, ROI 900%
•  20 testers for 4 weeks to 2 in 1 week

–  automation ROI justified the testing project
•  only initiative that was measured accurately


Example ROI graph

[Graph: savings (%) from –200% to 100% against number of tests (0–2500), with one curve each for tests run monthly, weekly, and daily. Savings start negative while automation costs dominate; the more frequently the tests are run, the fewer tests are needed before savings turn positive.]

Source: Lars Wahlberg, Chapter 18 in “Experiences of Test Automation”

Sample ‘starter kit’ for metrics for test
automation (and testing)
•  some measure of benefit
–  e.g. EMTE or coverage

•  average time to automate a test (or set of
related tests)
•  total effort spent on maintaining automated
tests (expressed as an average per test)
•  also measure testing, e.g. Defect Detection
Percentage (DDP) – test effectiveness
–  more info on DDP on my web site & blog
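DDP itself is simple to compute: the defects found by a test stage divided by the total known defects, including those that escaped and were found later. A minimal sketch (illustrative numbers):

  # Defect Detection Percentage (DDP) sketch.
  found_by_testing = 90
  found_later = 10      # escaped defects, e.g. found by users

  ddp = found_by_testing / (found_by_testing + found_later)
  print(f"DDP = {ddp:.0%}")   # -> DDP = 90%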


Recommendations
•  don’t measure everything!
•  choose three or four measures
–  applicable to your most important objectives

•  monitor for a few months
–  see what you learn

•  change measures if they don’t give useful
information


Management Issues in Test Automation – Managing

Summary: key points
•  Assign responsibility for automation (and testing)
•  Use a pilot project to explore the best ways of doing things
•  Know your automation objectives
•  Measure what’s important to you
•  Show ROI from automation


Good objectives for automation?
(answers)
–  run regression tests evenings and weekends
not a good objective, unless they are worthwhile tests!

–  give testers a new skill / enhance their image
not a good objective, could be a useful by-product

–  run tests tedious and error-prone if run manually
good objective

–  gain confidence in the system
an objective for testing, but automated regression tests help achieve it

–  reduce the number of defects found by users
not a good objective for automation, good objective for testing!

Test Automation Objectives Solution

We have given some ideas as to which objectives are good and why the others are not.

•  Achieve faster performance for the system
   NO – this is not an objective for test execution automation, nor is it an objective for performance testing! Performance test tools may help by giving the measurements to see whether the system is faster.

•  Achieve good results and quick payback with no additional resources, effort or time
   NO – this is totally unrealistic – expecting a miracle with no investment!

•  Automate all tests
   NO – automating ALL tests is neither realistic nor sensible. Automate only those tests that are worth automating.

•  Build a long-lasting automation regime that is easy to maintain
   YES – this is an excellent objective for test automation, and it is measurable.

•  Easy to add new automated tests
   YES – with a good automation regime, it can be easier to add a new automated test than to run that test manually.

•  Ensure repeatability of regression tests
   YES – the tools will run the same test in the same way every time.

•  Ensure that we meet our release deadlines
   NO – automation may help to run some tests that are required before release, but there are many more factors that go into a release decision.

•  Find more bugs
   NO – automation just runs tests. It is the tests that find the bugs, whether they are run manually or are automated.

•  Find defects in less time
   NOT REALLY – some types of defects (regression bugs) will be found more quickly by automated tests, but it may actually take longer to analyse the failures found.

•  Free testers from repeated (boring) test execution to spend more time in test design
   YES – this is a good objective for test execution automation.
•  Improve our testing
   NO – better testing practices and better use of techniques will improve testing.

•  Reduce elapsed time for testing by x%
   NO – elapsed time depends on many factors, and not much on whether tests are automated (see further explanation in the slides).

•  Reduce the cost and time for test design
   NO – test design is independent of automation; the time spent in design is not affected by how those tests are executed.

•  Reduce the number of test staff
   NO – you will need more staff to implement the automation, not fewer. It can make existing staff more productive by letting them spend more time on test design.

•  Run more tests
   YES, but only long term. Short term, you may actually run fewer tests because of the effort taken to automate them.

•  Run regression tests more often
   YES – this is what the test execution tools do best.

•  Run tests every night on all PCs
   NO – it may look impressive, but what tests are being run? Are they useful? If not, this is a waste of electricity.

•  Achieve a positive Return on Investment in no more than <x> test iterations
   YES – this is a good objective, if the number of iterations is a reasonable number (e.g. 6).

•  Other objectives:
Test Automation Objectives: Selection and Measurement

On this page, record the test automation objectives that would be most appropriate for your organisation (and why), and how you will measure them (what to measure and how to measure it). I suggest that you include at least one about showing Return on Investment. If you currently have automation objectives in place in your organisation that are not good ones, make sure that they are removed and replaced by the better ones below!

Proposed test automation objective (with justification) | What to measure and how to measure it

Add any comments or thoughts here or on the back of this page.

Management Issues in Test Automation

Technical Issues for Managers


Management Issues in Test Automation – Technical

Contents
Testware architecture
Scripting, keywords and DSTL
Automating more than execution


Testware architecture

[Diagram: Testers write tests (in a DSTL) using High Level Keywords – abstraction at this level makes automated tests easier to write, so it is widely used. Test automator(s) build Structured Scripts and structured testware below the keywords – abstraction at this level makes the testware easier to maintain and makes it possible to change tools, giving the automation a long life. The Test Execution Tool runs the scripts.]

Easy way out: use the tool’s architecture
•  tool will have its own way of organising tests
–  where to put things (for the convenience of the
tool!)
–  will “lock you in” to that tool – good for vendors!

•  a better way (gives independence from tools)
–  organise your tests to suit you
–  as part of pre-processing, copy files to where the
tool needs (expects) to find them
–  as part of post-processing, copy back to where
you want things to live
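A minimal sketch of that pre/post-processing idea in Python (the paths and directory layout are hypothetical):

  # Keep the testware organised to suit you; copy files to and from the
  # tool's expected locations around each run.
  import shutil
  from pathlib import Path

  OUR_TESTWARE = Path("testware/suite_a")   # organised for us
  TOOL_WORKDIR = Path("tool/workspace")     # where the tool expects files

  def pre_process():
      # copy scripts and data to where the tool needs to find them
      shutil.copytree(OUR_TESTWARE / "scripts", TOOL_WORKDIR / "scripts", dirs_exist_ok=True)
      shutil.copytree(OUR_TESTWARE / "data", TOOL_WORKDIR / "data", dirs_exist_ok=True)

  def post_process():
      # copy logs and results back to where we want them to live
      shutil.copytree(TOOL_WORKDIR / "results", OUR_TESTWARE / "results", dirs_exist_ok=True)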


Tool-specific script ratio

[Diagram: two architectures between Testers and the Test Execution Tool. One consists mostly of tool-specific scripts, giving high maintenance and/or tool-dependence. The better one keeps most of the testware non-tool-specific, with only a thin tool-specific layer next to the tool.]

Key issues
•  scale
–  the number of scripts, data files, results files,
benchmark files, etc. will be large and growing

•  shared scripts and data
–  efficient automation demands reuse of scripts and
data through sharing, not multiple copies

•  multiple versions
–  as the software changes so too will some tests but
the old tests may still be required

•  multiple environments / platforms


Terms – testware artefacts

[Diagram: Testware splits into Test Materials and Test Results. Test Materials (products): inputs, scripts, documentation (specifications), data, environment, utilities, and expected results. Test Results (by-products): logs, actual results, status, differences, and summary.]

Benefits of standard approach
•  tools can assume knowledge (architecture)
–  they need less information; are easier to use;
fewer errors will be made

•  can automate many tasks
–  checking (completeness, interdependencies);
documentation (summaries, reports); browsing

•  portability of tests
–  between people, projects, organisations, etc.

•  shorter learning curve

Management Issues in Test Automation – Technical

Contents
Testware architecture
Scripting, keywords and DSTL
Automating more than execution

Levels of scripting
•  capture replay → high maintenance costs
•  structured scripts use programming constructs
–  modular, calling structure, loops, IF statements
–  few scripts affected by software changes

•  data-driven: control scripts process spreadsheets / databases
–  easy to add new similar tests

•  keyword-driven / DSTL / Framework
–  one control script processes actions and data
–  including verification actions


Data-driven example

[Diagram: data files TestCase1 and TestCase2 each hold one action per record with its data columns, e.g. FILE countries / FILE Europe; ADD Sweden, USA, Norway / ADD France, Germany; MOVE 2,7 / 4,1; DELETE 1; … A single control script processes them:]

For each TESTCASE
  OpenDataFile(TESTCASEn)
  For each record
    ReadDataFile(RECORD)
    Case (Column(RECORD))
      FILE:   OpenFile(INPUTFILE)
      ADD:    AddItem(ITEM)
      MOVE:   MoveItem(FROM, TO)
      DELETE: DeleteItem(ITEM)
      …
  Next record
Next TESTCASE
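In Python, the control-script pattern might look like this minimal sketch (the CSV record format and the application-driving API are assumptions):

  # Data-driven control script: one script reads records and dispatches on
  # the action column, so a new similar test is just a new data file.
  import csv

  def run_test_case(path, app):
      with open(path, newline="") as f:
          for record in csv.reader(f):
              action, *data = record
              if action == "FILE":
                  app.open_file(data[0])
              elif action == "ADD":
                  for item in data:
                      app.add_item(item)
              elif action == "MOVE":
                  app.move_item(data[0], data[1])
              elif action == "DELETE":
                  app.delete_item(data[0])
              else:
                  raise ValueError(f"unknown action: {action}")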

About keywords
•  single control script (Interactive Test Environment)
–  improvements to this benefit all tests (ROI)
–  extracts high-level instructions from scripts

•  ‘test definition’
–  independent of tool scripting language
–  a language tailored to testers’ requirements
•  software design
•  application domain
•  business processes

•  more tests, fewer scripts


Comparison of data files

data-driven approach (columns FILE, ADD, MOVE, DELETE, SAVE):
Europe | France, Italy | 1,3  2,2  5,2 | 1 | Test2

keyword approach:
ScribbleOpen Europe
AddToList France Italy
MoveItem 1 to 3
MoveItem 2 to 2
DeleteItem 1
MoveItem 5 to 2
SaveAs Test2

which is easier to read/understand? what happens when the test becomes large and complex? the keyword version looks more like a test
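A keyword interpreter can be one small dispatcher; here is a minimal sketch in Python using the keyword names from this slide (the application-driving calls are hypothetical):

  # Keyword-driven: a single control script interprets high-level keywords,
  # so any improvement to it benefits every test.
  def run_keyword_test(lines, app):
      keywords = {
          "ScribbleOpen": lambda name: app.open_document(name),
          "AddToList":    lambda *items: app.add_to_list(*items),
          "MoveItem":     lambda frm, _to, to: app.move_item(int(frm), int(to)),
          "DeleteItem":   lambda item: app.delete_item(int(item)),
          "SaveAs":       lambda name: app.save_as(name),
      }
      for line in lines:
          keyword, *args = line.split()
          keywords[keyword](*args)    # e.g. "MoveItem 1 to 3"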

Execution-tool-independent framework

[Diagram: test procedures/definitions, written in a tool-independent scripting language, feed a framework with script libraries; some tests can also be run manually. Everything above the framework is tool independent. Below it sit the tool-dependent parts: the test execution tool(s) – including another test tool adopted later – each driving the software under test.]

Management Issues in Test Automation – Technical

Contents
Testware architecture
Scripting, keywords and DSTL
Automating more than execution

Automated tests / automated testing

Automated tests (manual process around automated execution):
  Select / identify test cases to run
  Set-up test environment: create test environment, load test data
  Repeat for each test case: set-up test pre-requisites, execute, compare results, log results, analyse test failures, report defect(s), clear-up after test case
  Clear-up test environment: delete unwanted data, save important data
  Summarise results

Automated testing (automated process):
  Select / identify test cases to run
  Set-up test environment: create test environment, load test data
  Repeat for each test case: set-up test pre-requisites, execute, compare results, log results, clear-up after test case
  Clear-up test environment: delete unwanted data, save important data
  Summarise results
  Analyse test failures
  Report defects


Two types of comparison
•  dynamic comparison
–  done during test
execution
–  performed by the test tool
–  can be used to direct the
progress of the test
•  e.g. if this fails, do that
instead

–  fail information written to
test log (usually)

•  post-execution
comparison
–  done after the test
execution has completed
–  good for comparing files
or databases
–  can be separated from
test execution
–  can have different levels
of comparison
•  e.g. compare in detail if all
high level comparisons
pass
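A minimal sketch of a dynamic comparison step in Python (the log and recovery hooks are hypothetical):

  # Dynamic comparison: a check made during execution can direct the
  # progress of the test - "if this fails, do that instead".
  def dynamic_check(log, expected, actual, on_fail=None):
      if actual == expected:
          return True
      log.write(f"FAIL: expected {expected!r}, got {actual!r}\n")
      if on_fail is not None:
          on_fail()    # recover and continue down an alternative path
      return False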

Sensitive versus specific (robust) tests

[Diagram: a test is supposed to change only one field of the test outcome. A specific (robust) test verifies that field only; a sensitive test verifies the entire outcome, so it also notices an unexpected change elsewhere.]


Too much sensitivity = redundancy

[Diagram: three tests, each changing a different field, with the same unexpected change occurring for every test. If all tests are specific, the unexpected change is missed; if all tests are sensitive, they all fail, showing the same unexpected change three times.]

Comparison is not simple
–  your expected results (“golden version”)
–  masking/filtering (e.g. date test is run)
•  may take significant effort to compare what you want
and exclude what you don’t

–  different order of output
–  false fail (should pass)
•  e.g. bitmap comparison on images, can eat time

–  false pass (should fail)
•  gives unjustified confidence (“zombie tests”)

–  make your automated tests red until proved green
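A minimal sketch of post-execution comparison with masking in Python (the timestamp format is an assumption; real filters depend on your output):

  # Mask fields that legitimately differ between runs (e.g. the date the
  # test was run) before diffing actual output against the golden version.
  import re

  RUN_STAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}")

  def normalise(text):
      return RUN_STAMP.sub("<TIMESTAMP>", text)

  def outputs_match(actual, expected):
      return normalise(actual) == normalise(expected)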


Outside the box: Jonathan Kohl
–  task automation (throw-away scripts)
•  entering data sets to 2 browsers (verify by watching)
•  install builds, copy test data

–  support manual exploratory testing
–  testing under the GUI to the database (“side door”)
–  don’t believe everything you see
•  1000s of automated tests pass too quickly
•  monitoring tools to see what was happening
•  “if there’s no error message, it must be ok”
–  defects didn’t make it to the test harness
–  overloaded system ignored data that was wrong

Automation +

[Diagram: around the core of execution and comparison sit the other parts of a good regime: a structured testware architecture, a DSTL, disposable scripts, utilities (e.g. data load), pre- & post-processing, metrics (e.g. EMTE), loosening your oracles, and exploratory test automation (ETA) / monkeys.]

Management Issues in Test Automation – Technical

Summary: key points
•  Structure your automation testware to suit you
•  Use the highest level of scripting that you need
–  e.g. keyword / Domain-Specific Test Language
•  Automate more than execution


Management Issues in Test Automation

Final Advice and Conclusion


Management Issues in Test Automation – Conclusion

Contents
Final advice
Your strategy
Conclusion


Dealing with high level management
•  management support
–  building good automation takes time and effort
–  set realistic expectations

•  benefits and ROI
–  make benefits visible (charts on the walls)
–  metrics for automation
•  to justify it, compare to manual test costs over iterations
•  on-going continuous improvement
–  build cost, maintenance cost, failure analysis cost
–  coverage of system tested

Dealing with developers
•  critical aspect for successful automation
–  automation is development
•  may need help from developers
•  automation needs development standards to work
–  testability is critical for automatability
–  why should they work to new standards if there is “nothing in it
for them”?

–  seek ways to cooperate and help each other
•  run tests for them
–  in different environments
–  rapid feedback from smoke tests

•  help them design better tests?


Standards and technical factors
•  standards for the testware architecture
–  where to put things
–  what to name things
–  how to do things
•  but allow exceptions if needed

•  new technology can be great
–  but only if the context is appropriate for it (e.g.
Model-Based Testing)

•  use automation “outside the box”

On-going automation
•  you are never finished
–  don’t “stand still” - schedule regular review and refactoring of the automation
–  change tools, hardware when needed
–  re-structure if your current approach is causing
problems

•  regular “pruning” of tests
–  don’t have “tenured” test suites
•  check for overlap, removed features
•  each test should earn its place


Information and web sites
–  Automated Testing Institute (and magazine)
•  www.automatedtestinginstitute.com

–  SQE (Software Quality Engineering sqe.com)
•  www.stickyminds.com
•  Linda Hayes automation course
•  Hans Buwalda’s tutorial

–  Randy Rice: presentation on Free and Cheap tools and
automation course
•  www.riceconsulting.com (search on “free tools”)

–  FreeTest Conference (Trondheim, Norway)
•  http://free-test.org

–  LinkedIn has a test automation group

Management Issues in Test Automation – Conclusion

Contents
Final advice
Your strategy
Conclusion


What next?
•  we have looked at a number of ideas about
test automation today
•  what is your situation?
–  what are the most important things for you now?
–  where do you want to go?
–  how will you get there?

•  make a start on your test automation
strategy now
–  adapt it to your own situation tomorrow

Strategy exercise
•  your automation strategy / action plan
–  review your objectives for today (p1)
–  review your “take-aways” so far (p2)
–  identify the top 3 changes you want to make to your
automation (top of p3)
–  note your plans now on p3


Management Issues in Test Automation – Conclusion

Summary: key points
•  Management issues: staffing, pilot, objectives, Return on Investment (ROI)
•  Technical issues: testware architecture, scripting, others
•  Final advice
•  Your Objectives and Strategy

any more questions?
please email me!
info@DorothyGraham.co.uk
Thank you for coming today
I hope this was / will be useful for you
All the best in your automation!

Appendix: “That’s no reason to automate” (Better Software magazine, July/August 2009, www.StickyMinds.com)
“Why automate?”

This seems such
an easy question to answer; yet many
people don’t achieve the success they
hoped for. If you are aiming in the wrong
direction, you will not hit your target!
This article explains why some
testing objectives don’t work for automation, even though they may be very
sensible goals for testing in general. We
take a look at what makes a good test
automation objective; then we examine
six commonly held—but misguided—
objectives for test execution automation,
explaining the good ideas behind them,
where they fail, and how these objectives
can be modified for successful test automation.

Good Objectives for Test
Automation
A good objective for test automation
should have a number of characteristics.
First of all, it should be measurable so
that you can tell whether or not you
have achieved it.
Objectives for test automation should
support testing activities but should not
be the same as the objectives for testing.
Testing and automation are different and
distinct activities.
Objectives should be realistic and
achievable; otherwise, you will set yourself up for failure. It is better to have
smaller-scale goals that can be met than
far-reaching goals that seem impossible.
Of course, many small steps can take
you a long way!
Automation objectives should be
both short and long term. The short-term goals should focus on what can be
achieved in the next month or quarter.
The long-term goals focus on where you
want to be in a year or two.
Objectives should be regularly revised
in the light of experience.

Misguided Objectives for
Test Automation
Objective 1: Find More Bugs
Good ideas behind this objective:
•	 Testing should find bugs, so automated testing should find them
quicker.
•	 Since tests are run quicker, we can
run more tests and find even more
bugs.
	

•	 We can test more of the system
so we should also find bugs in
the parts we weren’t able to test
manually.
Basing the success of automation on
finding bugs—especially the automation of regression tests—is not a good
thing to do for several reasons. First, it
is the quality of the tests that determines
whether or not bugs are found, and this
has very little, if anything, to do with
automation. Second, if tests are first run
manually, any bugs will be found then,
and they may be fixed by the time the
automated tests are run. Finally, it sets
an expectation that the main purpose of
test automation is to find bugs, but this
is not the case: A repeated test is much
less likely to find a new bug than a new
test. If the software is really good, automation may be seen as a waste of time
and resources.
Regression testing looks for unexpected, detrimental side effects in unchanged software. This typically involves running a lot of tests, many of
which will not find any defects. This is
ideal ground for test automation as it
can significantly reduce the burden of
this repetitive work, freeing the testers
to focus on running manual tests where
more defects are likely to be. It is the
testing that finds bugs—not the automation. It is the testers who may be able to
find more bugs, if the automation frees
them from mundane repetitive work.
The number of bugs found is a misleading measure for automation in any
case. A better measure would be the percentage of regression bugs found (compared to a currently known total). This
is known as the defect detection percentage (DDP). See the StickyNotes for
more information.
Sometimes this objective is phrased
in a slightly different way: “Improve
the quality of the software.” But identifying bugs does nothing to improve
software—it is the fixing of bugs that
improves the software, and this is a development task.
If finding more bugs is something that
you want to do, make it an objective for
measuring the value of testing, not for
measuring the value of automation.
Better automation objective: Help testers find more regression bugs (so fewer
regression failures occur in operation).
This could be measured by increased
DDP for regression bugs, together with
a rating from the testers about how well
the automation has supported their objectives.

Objective 2: Run Regression Tests
Overnight and on Weekends
Good ideas behind this objective:
•	 We have unused resources (evenings and weekends).
•	 We could run automated tests
“while we sleep.”
At first glance, this seems an excellent
objective for test execution automation,
and it does have some good points.
Once you have a good set of automated regression tests, it is a good idea
to run the tests unattended overnight
and on weekends, but resource use is not
the most important thing.
What about the value of the tests
that are being run? If the regression
tests that would be run “off peak” are
really valuable tests, giving confidence
that the main areas of the system are still
working correctly, then this is useful.
But the focus needs to be on supporting
good testing.
It is too easy to meet this stated objective by just running any test, whether it
is worth running or not. For example, if
you ran the same one test over and over
again every night and every weekend,
you would have achieved the goal as
stated, but it is a total waste of time
and electricity. In fact, we have heard of
someone who did just this! (We think he
left the company soon after.)
Of course, automated tests can be run
much more often, and you may want
some evidence of the increased test execution. One way to measure this is using
equivalent manual test effort (EMTE).
For all automated tests, estimate how
long it would have taken to run those
tests manually (even though you have no
intention of doing so). Then each time
the test is run automatically, add that
EMTE to your running total.
Better automation objective: Run the
most important or most useful tests, employing under-used computer resources
when possible. This could be partially
measured by the increased use of resources and by EMTE, but should also
include a measure of the value of the
tests run, for example, the top 25 percent of the current priority list of most
important tests (priority determined by
the testers for each test cycle).

Objective 3: Reduce Testing Staff
Good ideas behind this objective:
•	 We are spending money on the
tool, so we should be able to save
elsewhere.
•	 We want to reduce costs overall,
and staff costs are high.
This is an objective that seems to
be quite popular with managers. Some
managers may go even further and think
that the tool will do the testing for them,
so they don’t need the testers—this is
just wrong. Perhaps managers also think
that a tool won’t be as argumentative as
a tester!
It is rare that staffing levels are reduced when test automation is introduced; on the contrary, more staff are
usually needed, since we now need
people with test script development skills
in addition to people with testing skills.
You wouldn’t want to let four testers go
and then find that you need eight test automators to maintain their tests!
Automation supports testing activities; it does not usurp them. Tools cannot
make intelligent decisions about which
tests to run, when, and how often. This
is a task for humans able to assess the
current situation and make the best use
of the available time and resources.
Furthermore, automated testing is
not automatic testing. There is much
work for people to do in building the automated tests, analyzing the results, and
maintaining the testware.
Having tests automated does—or at
least should—make life better for testers.
The most tedious and boring tasks are
the ones that are most amenable to automation, since the computer will happily do repetitive tasks more consistently
and without complaining. Automation
can make test execution more efficient,
but it is the testers who make the tests
themselves effective. We have yet to see
a tool that can think up tests as well as a
human being can!

The objective as stated is a management objective, not an appropriate objective for automation. A better management objective is “Ensure that everyone
is performing tasks they are good at.”
This is not an automation objective
either, nor is “Reducing the cost of
testing.” These could be valid objectives,
but they are related to management, not
automation.
Better automation objective: The total
cost of the automation effort should be
significantly less than the total testing effort saved by the automation. This could
be partially measured by an increase in
tests run or coverage achieved per hour
of human effort.

Objective 4: Reduce Elapsed Time
for Testing
Good ideas behind this objective:
•	 Reduce deadline pressure—any
way we can save time is good.
•	 Testing is a bottleneck, so faster
testing will help overall.
•	 We want to be quicker to market.
This one seems very sensible at first
and sometimes it is even quantified—
“Reduce elapsed time by X%”—which
sounds even more impressive. However,
this objective can be dangerous because
of confusion between “testing” and “test
execution.”
The first problem with this objective is that there are much easier ways
to achieve it: run fewer tests, omit long
tests, or cut regression testing. These are
not good ideas, but they would achieve
the objective as stated.
The second problem with this objective is its generality. Reducing the
elapsed time for “testing” gives the impression we are talking about reducing
the elapsed time for testing as a whole.
However, test execution automation
tools are focused on the execution of
the tests (the clue is in the name!) not
the whole of testing. The total elapsed
time for testing may be reduced only if
the test execution time is reduced sufficiently to make an impact on the whole.
What typically happens, though, is that
the tests are run more frequently or
more tests are run. This can result in
more bugs being found (a good thing),
that take time to fix (a fact of life), and
increase the need to run the tests again
(an unavoidable consequence).
The third problem is that there are
many factors other than execution that
contribute to the overall elapsed time
for testing: How long does it take to set
up the automated run and clear up after
it? How long does it take to recognize a
test failure and find out what is actually
wrong (test fault, software fault, environment problem)? When you are testing
manually, you know the context—you
know what you have done just before
the bug occurs and what you were doing
in the previous ten minutes. When a tool
identifies a bug, it just tells you about the
actual discrepancy at that time. Whoever
analyzes the bug has to put together the
context for the bug before he or she can
really identify the bug.
In figures 1 and 2, the blocks represent the relative effort for the different
activities involved in testing. In manual
testing, there is time taken for editing
tests, maintenance, set up of tests, executing the tests (the largest component
of manual testing), analyzing failures,
and clearing up after tests have completed. In figure 1, when those same tests
are automated, we see the illusion that
automating test execution will save us
a lot of time, since the relative time for
execution is dramatically reduced. However, figure 2 shows us the true picture—
total elapsed time for testing may actually increase, even though the time for
test execution has been reduced. When
test automation is more mature, then the
total elapsed time for all of the testing
activities may decrease below what it
was initially for manual testing. Note
that this is not to scale; the effects may
be greater than we have illustrated.

[Figure 1]

[Figure 2]
We now can see that the total elapsed
time for testing depends on too many
things that are outside the control or influence of the test automator.
The main thing that causes increased
testing time is the quality of the software—the number of bugs that are already there. The more bugs there are,
the more often a test fails, the more bug
reports need to be written up, and the
more retesting and regression testing
are needed. This has nothing to do with
whether or not the tests are automated or
manual, and the quality of the software
is the responsibility of the developers,
not the testers or the test automators.
Finally, how much time is spent maintaining the automated tests? Depending
on the test infrastructure, architecture,
or framework, this could add considerably to the elapsed time for testing.
Maintenance of the automated tests for
later versions of the software can consume a lot of effort that also will detract
from the savings made in test execution.
This is particularly problematic when
the automation is poorly implemented,
without thought for maintenance issues
when designing the testware architecture. We may achieve our goal with the
first release of software, but later versions may fail to repeat the success and

may even become worse.
Here is how the automator and tester
should work together: The tester may
request automated support for things
that are difficult or time consuming, for
example, a comparison or ensuring that
files are in the right place before a test
runs. The automator would then provide utilities or ways to do them. But the
automator, by observing what the tester
is doing, may suggest other things that
could be supported and “sell” additional
tool support to the tester. The rationale
is to make life easier for the tester and
to make the testing faster, thus reducing
elapsed time.
Better automation objective: Reduce
the elapsed time for all tool-supported testing activities. This is an ongoing
objective for automation, seeking to
improve both manual and existing automated testing. It could be measured by
elapsed time for specified testing activities, such as maintenance time or failure
analysis time.

Objective 5: Run More Tests
Good ideas behind this objective:
•	 Testing more of the software gives
better coverage.
•	 Testing is good, so more testing
must be better.
More is not better! Good testing is
not found in the number of tests run, but
in the value of the tests that are run. In
fact, the fewer tests for the same value,
the better. It is definitely the quality of
the tests that counts, not the quantity.
Automating a lot of poor tests gives you
maintenance overhead with little return.
Automating the best tests (however many
that is) gives you value for the time and
money spent in automating them.
If we do want to run more tests, we
need to be careful when choosing which
additional tests to run. It may be easier
to automate tests for one area of the
software than for another. However, if it
is more valuable to have automated tests
for this second area than the first, then
automating a few of the more difficult
tests is better than automating many of
the easier (and less useful) tests.
A raw count of the number of automated tests is a fairly useless way of
gauging the contribution of automation
to testing. For example, suppose testers
decide there is a particular set of tests
that they would like to automate. The
real value of automation is not that the
tests are automated but the number of
times they are run. It is possible that the
testers make the wrong choice and end
up with a set of automated tests that
they hardly ever use. This is not the fault
of the automation, but of the testers’
choice of which tests to automate.
It is important that automation is
responsive, flexible, and able to automate different tests quickly as needed.
Although we try to plan which tests to
automate and when, we should always
start automating the most important
tests first. Once we are running the tests,
the testers may discover new information
that shows that different tests should be
automated rather than the ones that had
been planned. The automation regime
needs to be able to cope with a change of
direction without having to start again
from the beginning.
During the journey to effective test
automation, it may take far longer to automate a test than to run that test manually. Hence, trying to automate may
lead, in the short term at least, to running fewer tests, and this may be OK.
Better automation objective: Automate the optimum number of the most
useful and valuable tests, as identified
by the testers. This could be measured
as the number or percentage automated
out of the valuable tests identified.

Objective 6: Automate X% of
Testing
Good ideas behind this objective:
•	 We should measure the progress
of our automation effort.
•	 We should measure the quality of
our automation.
This objective is often seen as “Automate 100 percent of testing.” In this
form, it looks very decisive and macho!
The aim of this objective is to ensure
that a significant proportion of existing
manual tests is automated, but this may
not be the best idea.
A more important and fundamental
point is to ask about the quality of the
tests that you already have, rather than
how many of them should be automated. The answer might be none—let’s
have better tests first! If they are poor
tests that don’t do anything for you,
automating them still doesn't do anything for you (but faster!). As Dorothy Graham is often quoted as saying, "Automated chaos is just faster chaos."
If the objective is to automate 50 percent of the tests, will the right 50 percent
be automated? The answer to this will
depend on who is making the decisions
and what criteria they apply. Ideally, the
decision should be made through negotiation between the testers and the automators. This negotiation should weigh
the cost of automating individual tests
or sets of tests, and the potential costs of
maintaining the tests, against the value of automating those tests. We've heard of one automated test taking two weeks to build when running the test manually took only thirty minutes—and it was only run once a month. It is difficult to see how the cost of automating this test will ever be repaid!

Figure 3
What percentage of tests could be automated? First, eliminate those tests that
are actually impossible or totally impractical to automate. For example, a test
that consists of assessing whether the
screen colors work well together is not
a good candidate for automation. Automating 2 percent of your most important
and often-repeated tests may give more
benefit than automating 50 percent of
tests that don’t provide much value.
Measuring the percentage of manual
tests that have been automated also
leaves out a potentially greater benefit of
automation—there are tests that can be
done automatically that are impossible
or totally impractical to do manually. In
figure 3 we see that the best automation
includes tests that don’t make sense as
manual tests and does not include tests
that make sense only as manual tests.
Automation provides tool support
for testing; it should not simply automate tests. For example, a utility could
be developed by the automators to make
comparing results easier for the testers.
This does not automate any tests but
may be a great help to the testers, save
them a lot of time, and make things
much easier for them. This is good automation support.

Better automation objective: Automation should provide valuable support to
testing. This could be measured by how
often the testers used what was provided
by the automators, including automated
tests run and utilities and other support.
It could also be measured by how useful
the testers rated the various types of support provided by the automation team.
Another objective could be: The number
of additional verifications made that
couldn’t be checked manually. This could
be related to the number of tests, in the
form of a ratio that should be increasing.
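As a quick illustration (the counts here are invented), the ratio is simple arithmetic to track from release to release:

    # Illustrative only: the ratio of automation-only verifications to tests
    # run should be increasing over time.
    extra_verifications = 480   # e.g. log, memory, or database checks impractical manually
    tests_run = 120
    print(f"{extra_verifications / tests_run:.1f} automation-only verifications per test")
    # -> 4.0 automation-only verifications per test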
What are your objectives for test
execution automation? Are they good
ones? If not, this may seriously impact
the success of your automation efforts.
Don’t confuse objectives for testing with
objectives for automation. Choose more
appropriate objectives and measure the
extent to which you are achieving them,
and you will be able to show how your
automation efforts benefit your organization. {end}

Sticky Notes
For more on the following topics go to www.StickyMinds.com/bettersoftware.
•	Dorothy Graham's blog on DDP and test automation
•	Software Test Automation
---

Technical versus non-technical skills in test automation
Dorothy Graham
Software Testing Consultant

info@DorothyGraham.co.uk
SUMMARY
In this paper, I discuss the roles of testers and test automators in test automation. Technical skills are needed by test automators, but testers who do not have technical skills should not be prohibited from writing and running automated tests.

Keywords
Tester, test automator, test automation, skills.
1. INTRODUCTION
Test automation is a popular topic in software testing, and an area where a number of organizations have
had good success. Tests that may take days to run manually can be executed in hours, running overnight
and at weekends, with greater accuracy and repeatability. Tests can be run more often, giving immediate
feedback for new builds.
Yet despite the obvious potential, many organizations are still struggling to achieve good benefits from
automation. I believe that one reason for this is the role of the “test automator”. There is a common
misperception that testers should take on this role. This paper explains why this may not be the best
solution.
It is popular for testers to be encouraged to develop programming skills. For example, at EuroSTAR 2012, a keynote speaker advised all testers to learn to code. I don't agree with this, and this paper, originally written for the CAST conference in 2010, explains why.
2. TERMS
I will start by defining the terms I use in this paper.
Test automation: the computer-assisted running of software tests, i.e. the automation of test execution.
Test automator: A person who builds and maintains the testware associated with automated tests. [4]
Tester: A person who identifies test conditions, designs test cases and verifies test results. A tester may
also build and execute tests and compare test results. [4]
Testware: The artifacts required to plan, design and execute tests, such as documentation, scripts, inputs,
expected outcomes, set-up and clear-up procedures, files, databases, environments, and any additional
software or utilities used in testing. [4]

3. TEST AUTOMATION SKILLS
3.1 Existing perceptions
The automation of test execution is a popular application of computer technology to itself. There are a
number of books about test automation [1,2,3,4,7,8,10,11,12]. Many of them do not appear to mention the skills needed (or, if they did, it was not obvious). There is a general perception that testers must be or
become technical, i.e. programmers, if they are to become involved in automation, although there are a
few exceptions that mention a distinction between testers and automators.
Linda Hayes in her useful booklet on automation [7] says: “… developing test scripts is essentially a
form of programming; for this role, a more technical background is needed.” She distinguishes between
“Test Developers” i.e. testers, and “Script Developers”, which is part of the role of a test automator.
Dustin et al. in [3] say: "When people think of AST [Automated Software Testing], they often think the
skill set required is one of a ‘tester’, and that any manual tester can pick up and learn how to use an
automated testing tool. Although the skills [of a tester] … are still needed to implement AST, a
complement of skills similar to the broad range of skill sets needed to develop the software product itself
is needed.” (p 225)
A paper by Mosaic [12] mentions three roles: "Manual Test Engineer", "Automation Test Engineer" and
“Lead Automator”. In this model, the design of tests (i.e. the tester’s role) is done by both test engineers;
the automation work (i.e. test automator’s role) is done by the lead automator and automation test
engineer. The key distinction is who designs the tests, which in my view is best done by the tester, but
collaborating with the test automator for tests that are to be automated.
3.2 Is test automation a technical task?
The answer to this question depends on what you include as part of “test automation”. If you view it as
the direct use of a test execution tool, i.e. writing, editing and running scripts written in the tool’s
scripting language, then it is a technical task, and programming (i.e. scripting) skills are needed.
Another technical aspect of test automation is the design of the testware architecture – the structure and
relationship of all of the items of testware that comprise the artefacts required for automated tests to
successfully run. The design of the testware architecture is a critical aspect for successful test
automation, and the skills needed for this include technical expertise, as well as knowledge of how the
tests are to be used. The person who designs the testware architecture may be called a test automator,
test architect, or lead automator.
3.3 Constructing automated tests is not entirely a technical process
The construction of the automation architecture, and the scripts and other testware that will be used to
run automated tests is a technical task, but automated testing is not just the structure of the architecture
and scripts.
The whole purpose of test automation is to make it possible to run tests with minimal human
involvement in test execution (and comparison).
There is a need for testers to be able to use automated tests, both to write tests to be run automatically,
and to run those tests and view the results. The tests that are to be automated could be technical tests,
such as those written by developers as part of Test-Driven Development or unit or integration testing,
but system and acceptance tests can also be automated, and the testers who write those tests are not
always technical (i.e. software developers).
The content of the test needs to be determined, and this is a task for the tester; the implementation of the test is the task of the automator.
4. TESTERS TO AUTOMATORS?
4.1 Testers become automators?
I have seen it work well to have a team of manual testers embarking on an automation project, where all
(or nearly all) of the testers effectively become programmers, i.e. test programmers, or scripters. At a
former colleague’s company, five out of the team of six testers went on the tool vendor’s training course
and became familiar with the tool’s scripting language. One tester decided he didn’t want to become
technical, so he concentrated on manual testing, but the others all became good test automators. There
were two interesting side-effects of the testers’ newly acquired skillset. First, they had a lot more
sympathy for the developers, as they now understood first-hand the frustrations of trying to get the
computer to do what you wanted it to do. Second, they found that the developers treated them with a bit
more respect, as they now also had some development skills. This led to a better relationship between
the developers and testers.
Another example where it worked very well to have all of the testers become automators is described in
a chapter by Lisa Crispin [2] in our forthcoming book. An agile team of 9 to 12 people were all involved
in doing manual regression testing, so were highly motivated to automate 20% of their work, and
everyone became involved in the automation.
4.2 A separate team of test automators?
I have seen other organizations where a separate team is set up to automate tests, leaving the testers free
to concentrate on designing tests and running manual tests. As the automation team gets going, they
automate tests nominated by the testers, freeing the testers from having to do those tests manually. The
automation team provides a service to the testers, designing the testware architecture and structure of the
tests, and assisting where needed when problems are encountered with the automated tests. For example,
if an automated test fails, it could be because of a software fault (in which case the tester would have
found a bug), but it could fail for a technical reason such as a problem with the environment, a missing
testware item (i.e. a bug in the automated testware), or a problem with the tool itself. The tester, not
being technical, will need technical assistance to identify the source of the problem.
So we have the situation where test automation does require technical skills, but we have testers who do
not have those skills – can this really work? Yes it can, but it needs two key separations or layers of
abstraction.
5. AUTOMATION SUCCESS NEEDS LAYERS OF ABSTRACTION
5.1 Technical Layer
Technical aspects are very important for test automation. A good testware architecture will have two
layers of abstraction [6]. The technical layer will implement good software development practices for
the testware, separating the tool itself and the direct scripting of the tool from the software or scriptware
that calls and uses the lower level scripts. Modularity and reuse are key factors in minimizing
maintenance of automated testware. If something changes in the software, the testware will need to
reflect that change. With lower levels of scripting (a recorded test or linear script being the lowest), a
small change to a screen can result in making “magnetic trash” [9] of the automated tests.
If possible, the testware should be designed so that it can cope with changes in the software under test
without needing any changes to the testware. If this is not possible, the effects of any change to the
software being tested should be confined to only one testware artefact (or a minimum number if this is
not practical).
This layer gives good maintainability to the automated test regime.
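To make the idea concrete, here is a minimal Python sketch (the tool call and field names are invented stand-ins; the paper itself names no tool). Test scripts go through a small layer that is the only place that knows concrete screen details, so a renamed field is absorbed by one edit:

    # Stand-in for a real test execution tool's low-level call.
    def raw_tool_type(field_id, text):
        print(f"typing '{text}' into {field_id}")

    # Technical layer: the only artefact that knows concrete field ids.
    FIELD_IDS = {"customer_name": "txtCustName01"}   # if the screen changes, edit here only

    def enter_customer_name(name):
        raw_tool_type(FIELD_IDS["customer_name"], name)

    # Test script: unaffected by a renamed field.
    def test_new_customer():
        enter_customer_name("Ada Lovelace")

    test_new_customer()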
5.2 Tester Layer
If all of the testers are technical, such as developers who are doing Test-Driven Development or unit testing,
then this layer is not as critical. The Tester layer of abstraction is needed when system testers or user
acceptance testers want to use test automation, but do not want to become technical, i.e. programmers.
In order to achieve this, the non-technical testers must be able both to write tests (that can then be run
automatically) and also to run tests, i.e. to “kick off” a set of automated tests.
If the testware architecture uses a keyword-driven approach [1,4,5,6], the testers can write tests using
keywords that are related to the business knowledge or domain knowledge that they are familiar with.
Yes, they do have to follow the correct syntax for the keywords, but tools enable this to be relatively
easy to do, for example by providing a drop-down list of valid keywords and checking the syntax of
parameters entered to the keywords.
The keywords are implemented (i.e. programmed) by test automators, using the scripting language of the tool or any other programming language they know that is appropriate. The testers are not involved in the implementation of the keywords, but they are able to use them to write tests.
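A minimal sketch of this division of labour, assuming Python and invented keyword names: the automator implements the functions; the tester writes (and reads) only the keyword rows.

    # Keyword implementations: written by the test automator.
    def open_account(name):
        print(f"opening account for {name}")

    def deposit(name, amount):
        print(f"depositing {amount} into {name}'s account")

    def check_balance(name, expected):
        print(f"checking that {name}'s balance is {expected}")

    KEYWORDS = {"OpenAccount": open_account,
                "Deposit": deposit,
                "CheckBalance": check_balance}

    # The test itself: keyword rows a non-technical tester can write.
    test = [
        ("OpenAccount", "Alice"),
        ("Deposit", "Alice", "100"),
        ("CheckBalance", "Alice", "100"),
    ]

    for keyword, *args in test:
        KEYWORDS[keyword](*args)   # dispatch each row to its implementation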
The testers also need to be able to select a set of tests to be run automatically. This can be implemented by the test automators to make it easy for the testers to kick off a set of tests, for example by providing options in a user-friendly interface to the automation.
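For instance (again a sketch with invented suite names), the automators might expose named test sets so that starting a run needs no scripting at all:

    # Named test sets, defined by the automators, selected by the testers.
    SUITES = {
        "smoke": ["login_ok", "transfer_funds"],
        "nightly": ["login_ok", "transfer_funds", "statement_export"],
    }

    def run_suite(name):
        for test in SUITES[name]:
            print(f"running {test}")

    run_suite("smoke")   # all a tester needs to type, or pick from a menu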
The testers also need to receive and understand the results of the automated tests, and the way in which
this information is communicated to them is also designed by the test automator.
This separation of the tester from the automation is needed for the automation to grow within an
organization and to give long-lasting benefits and widespread acceptance.
6. SUMMARY AND CONCLUSION
Test automation does need technical skill – for those who are closest to the tool itself.
The skills of the tester and the skills of the test automator may be found in the same person, but it may
work better to have different people performing the two roles.
The test automator's role is critical in establishing a modular and well-structured testware architecture, separating the tool from the testware, and providing a tester-friendly interface to the testware for non-technical testers.
Not every tester can or should become a test automator. Many non-technical people are very good
testers; they should be able to use test automation without needing to have technical skills. Getting to
this point, however, does require good technical support, but that support does not have to be provided
by the tester.
7. REFERENCES
[1] Buwalda, H., Janssen, D. and Pinkster, I. 2002. Integrated Test Design and Automation. Addison Wesley/Pearson Education, London.
[2] Crispin, L. 2010. Zero to 100% Regression Test Automation in One Year: an Agile Approach to Automation. In Graham, D. and
Fewster, M. Experiences of Test Automation. [Publisher not yet determined]
[3] Dustin, E., Garrett, T. and Gauf, B. 2009. Implementing Automated Software Testing. Addison Wesley/Pearson Education, Boston,
MA.
[4] Fewster, M. and Graham, D. 1999. Software Test Automation. Addison Wesley/Pearson Education, ACM Press, NY.
[5] Gijsen, M. 2009. Effective Automated Testing with a DSTL [Domain Specific Test Language]. Paper from the author and
http://www.linkedin.com/ppl/webprofile?action=ctu&id=5550465&pvs=pp&authToken=7sp6&authType=name&trk=ppro_getintr&ln
k=cnt_dir
[6] Graham, D. and Fewster, M. 2012 Experiences of Test Automation, Addison Wesley/Pearson Education, Boston, MA.
[7] Hoffman, D and Strooper, P. 1995. Software Design, Automated Testing, and Maintenance. International Thompson Computer Press,
Boston, MA.
[8] Kaner, C., Falk, J. and Nguyen, H. Q. 1993. Testing Computer Software. Van Nostrand Reinhold, NY.
[9] Mosley, D. J. and Posey, Bruce. A. 2002. Just Enough Software Test Automation. Yourdon Press/Pearson Education, Upper Saddle
River, NJ.
[10] Siteur, M.M. 2005. Automate your testing! Sdu Uitgevers bv, Den Haag.
[11] Stottlemyer, D. 2001. Automated Web Testing Toolkit. Wiley, NY.
[12] [author unknown] 2002. Staffing your test automation team. Mosaic Inc, Chicago IL.
www.mosaicinc.com/mosaicinc/successful_test.htm

© Dorothy Graham, 2013
Management Issues in Test Automation

Weitere ähnliche Inhalte

Was ist angesagt?

Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010
Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010
Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010TEST Huddle
 
Better Test Designs to Drive Test Automation Excellence
Better Test Designs to Drive Test Automation ExcellenceBetter Test Designs to Drive Test Automation Excellence
Better Test Designs to Drive Test Automation ExcellenceTechWell
 
CICS TS for z/OS, From Waterfall to Agile using Rational Jazz Technology - no...
CICS TS for z/OS, From Waterfall to Agile using Rational Jazz Technology - no...CICS TS for z/OS, From Waterfall to Agile using Rational Jazz Technology - no...
CICS TS for z/OS, From Waterfall to Agile using Rational Jazz Technology - no...IBM Danmark
 
Agile Testing – embedding testing into agile software development lifecycle
Agile Testing – embedding testing into agile software development lifecycle Agile Testing – embedding testing into agile software development lifecycle
Agile Testing – embedding testing into agile software development lifecycle Kari Kakkonen
 
Agile vs Iterative vs Waterfall models
Agile vs Iterative vs Waterfall models Agile vs Iterative vs Waterfall models
Agile vs Iterative vs Waterfall models Marraju Bollapragada V
 
System-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartSystem-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartTechWell
 
Software testing 2012 - A Year in Review
Software testing 2012 - A Year in ReviewSoftware testing 2012 - A Year in Review
Software testing 2012 - A Year in ReviewJohan Hoberg
 
Vipul Kocher - Software Testing, A Framework Based Approach
Vipul Kocher - Software Testing, A Framework Based ApproachVipul Kocher - Software Testing, A Framework Based Approach
Vipul Kocher - Software Testing, A Framework Based ApproachTEST Huddle
 
Continuous Deployment and Testing Workshop from Better Software West
Continuous Deployment and Testing Workshop from Better Software WestContinuous Deployment and Testing Workshop from Better Software West
Continuous Deployment and Testing Workshop from Better Software WestCory Foy
 
From Gatekeeper to Partner by Kelsey Shannahan
From Gatekeeper to Partner by Kelsey ShannahanFrom Gatekeeper to Partner by Kelsey Shannahan
From Gatekeeper to Partner by Kelsey ShannahanQA or the Highway
 
Unit4 Proof of Correctness, Statistical Tools, Clean Room Process and Quality...
Unit4 Proof of Correctness, Statistical Tools, Clean Room Process and Quality...Unit4 Proof of Correctness, Statistical Tools, Clean Room Process and Quality...
Unit4 Proof of Correctness, Statistical Tools, Clean Room Process and Quality...Reetesh Gupta
 
Agile Testing Strategy
Agile Testing StrategyAgile Testing Strategy
Agile Testing Strategytharindakasun
 
ATD-2018_kroth_agile_thinking
ATD-2018_kroth_agile_thinkingATD-2018_kroth_agile_thinking
ATD-2018_kroth_agile_thinkingNorbertKroth
 
Testing in Agile Projects
Testing in Agile ProjectsTesting in Agile Projects
Testing in Agile Projectssriks7
 
'Growing to a Next Level Test Organisation' by Tim Koomen
'Growing to a Next Level Test Organisation' by Tim Koomen'Growing to a Next Level Test Organisation' by Tim Koomen
'Growing to a Next Level Test Organisation' by Tim KoomenTEST Huddle
 

Was ist angesagt? (20)

Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010
Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010
Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010
 
Better Test Designs to Drive Test Automation Excellence
Better Test Designs to Drive Test Automation ExcellenceBetter Test Designs to Drive Test Automation Excellence
Better Test Designs to Drive Test Automation Excellence
 
Agile test tools
Agile test toolsAgile test tools
Agile test tools
 
Agile Testing
Agile Testing Agile Testing
Agile Testing
 
CICS TS for z/OS, From Waterfall to Agile using Rational Jazz Technology - no...
CICS TS for z/OS, From Waterfall to Agile using Rational Jazz Technology - no...CICS TS for z/OS, From Waterfall to Agile using Rational Jazz Technology - no...
CICS TS for z/OS, From Waterfall to Agile using Rational Jazz Technology - no...
 
Agile Testing – embedding testing into agile software development lifecycle
Agile Testing – embedding testing into agile software development lifecycle Agile Testing – embedding testing into agile software development lifecycle
Agile Testing – embedding testing into agile software development lifecycle
 
Agile vs Iterative vs Waterfall models
Agile vs Iterative vs Waterfall models Agile vs Iterative vs Waterfall models
Agile vs Iterative vs Waterfall models
 
System-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartSystem-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good Start
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
Software testing 2012 - A Year in Review
Software testing 2012 - A Year in ReviewSoftware testing 2012 - A Year in Review
Software testing 2012 - A Year in Review
 
Vipul Kocher - Software Testing, A Framework Based Approach
Vipul Kocher - Software Testing, A Framework Based ApproachVipul Kocher - Software Testing, A Framework Based Approach
Vipul Kocher - Software Testing, A Framework Based Approach
 
Continuous Deployment and Testing Workshop from Better Software West
Continuous Deployment and Testing Workshop from Better Software WestContinuous Deployment and Testing Workshop from Better Software West
Continuous Deployment and Testing Workshop from Better Software West
 
From Gatekeeper to Partner by Kelsey Shannahan
From Gatekeeper to Partner by Kelsey ShannahanFrom Gatekeeper to Partner by Kelsey Shannahan
From Gatekeeper to Partner by Kelsey Shannahan
 
Unit4 Proof of Correctness, Statistical Tools, Clean Room Process and Quality...
Unit4 Proof of Correctness, Statistical Tools, Clean Room Process and Quality...Unit4 Proof of Correctness, Statistical Tools, Clean Room Process and Quality...
Unit4 Proof of Correctness, Statistical Tools, Clean Room Process and Quality...
 
Agile Testing Strategy
Agile Testing StrategyAgile Testing Strategy
Agile Testing Strategy
 
ATD-2018_kroth_agile_thinking
ATD-2018_kroth_agile_thinkingATD-2018_kroth_agile_thinking
ATD-2018_kroth_agile_thinking
 
Testing in Agile Projects
Testing in Agile ProjectsTesting in Agile Projects
Testing in Agile Projects
 
Presentation on Agile Testing
Presentation on Agile TestingPresentation on Agile Testing
Presentation on Agile Testing
 
'Growing to a Next Level Test Organisation' by Tim Koomen
'Growing to a Next Level Test Organisation' by Tim Koomen'Growing to a Next Level Test Organisation' by Tim Koomen
'Growing to a Next Level Test Organisation' by Tim Koomen
 
What is Agile Testing?
What is Agile Testing? What is Agile Testing?
What is Agile Testing?
 

Andere mochten auch

Connecting with Customers
Connecting with CustomersConnecting with Customers
Connecting with CustomersTechWell
 
Cause-Effect Graphing: Rigorous Test Case Design
Cause-Effect Graphing: Rigorous Test Case DesignCause-Effect Graphing: Rigorous Test Case Design
Cause-Effect Graphing: Rigorous Test Case DesignTechWell
 
Building Successful Test Teams
Building Successful Test TeamsBuilding Successful Test Teams
Building Successful Test TeamsTechWell
 
Creating Great User Experiences: Tips and Techniques
Creating Great User Experiences: Tips and TechniquesCreating Great User Experiences: Tips and Techniques
Creating Great User Experiences: Tips and TechniquesTechWell
 
Dealing with Estimation, Uncertainty, Risk, and Commitment
Dealing with Estimation, Uncertainty, Risk, and CommitmentDealing with Estimation, Uncertainty, Risk, and Commitment
Dealing with Estimation, Uncertainty, Risk, and CommitmentTechWell
 
Make the Cloud Less Cloudy: A Perspective for Software Development Teams
Make the Cloud Less Cloudy: A Perspective for Software Development TeamsMake the Cloud Less Cloudy: A Perspective for Software Development Teams
Make the Cloud Less Cloudy: A Perspective for Software Development TeamsTechWell
 
It’s All Fun and Games: Using Play to Improve Tester Creativity
It’s All Fun and Games: Using Play to Improve Tester CreativityIt’s All Fun and Games: Using Play to Improve Tester Creativity
It’s All Fun and Games: Using Play to Improve Tester CreativityTechWell
 
Build Your Mobile Testing Knowledge
Build Your Mobile Testing KnowledgeBuild Your Mobile Testing Knowledge
Build Your Mobile Testing KnowledgeTechWell
 
Measurement and Metrics for Test Managers
Measurement and Metrics for Test ManagersMeasurement and Metrics for Test Managers
Measurement and Metrics for Test ManagersTechWell
 
Designing Self-maintaining UI Tests for Web Applications
Designing Self-maintaining UI Tests for Web ApplicationsDesigning Self-maintaining UI Tests for Web Applications
Designing Self-maintaining UI Tests for Web ApplicationsTechWell
 
Collaboration without Chaos
Collaboration without ChaosCollaboration without Chaos
Collaboration without ChaosTechWell
 
Test Management for Cloud-based Applications
Test Management for Cloud-based ApplicationsTest Management for Cloud-based Applications
Test Management for Cloud-based ApplicationsTechWell
 
Adaptive Leadership: Accelerating Enterprise Agility
Adaptive Leadership: Accelerating Enterprise AgilityAdaptive Leadership: Accelerating Enterprise Agility
Adaptive Leadership: Accelerating Enterprise AgilityTechWell
 
Data Collection and Analysis for Better Requirements
Data Collection and Analysis for Better RequirementsData Collection and Analysis for Better Requirements
Data Collection and Analysis for Better RequirementsTechWell
 
Six Free Ideas to Improve Agile Success
Six Free Ideas to Improve Agile SuccessSix Free Ideas to Improve Agile Success
Six Free Ideas to Improve Agile SuccessTechWell
 
Risk-based Testing: Not for the Fainthearted
Risk-based Testing: Not for the FaintheartedRisk-based Testing: Not for the Fainthearted
Risk-based Testing: Not for the FaintheartedTechWell
 
Test Automation for Packaged Systems: Yes, You Can!
Test Automation for Packaged Systems: Yes, You Can!Test Automation for Packaged Systems: Yes, You Can!
Test Automation for Packaged Systems: Yes, You Can!TechWell
 
Agile at Scale with Scrum: The Good, the Bad, and the Ugly
Agile at Scale with Scrum: The Good, the Bad, and the UglyAgile at Scale with Scrum: The Good, the Bad, and the Ugly
Agile at Scale with Scrum: The Good, the Bad, and the UglyTechWell
 

Andere mochten auch (18)

Connecting with Customers
Connecting with CustomersConnecting with Customers
Connecting with Customers
 
Cause-Effect Graphing: Rigorous Test Case Design
Cause-Effect Graphing: Rigorous Test Case DesignCause-Effect Graphing: Rigorous Test Case Design
Cause-Effect Graphing: Rigorous Test Case Design
 
Building Successful Test Teams
Building Successful Test TeamsBuilding Successful Test Teams
Building Successful Test Teams
 
Creating Great User Experiences: Tips and Techniques
Creating Great User Experiences: Tips and TechniquesCreating Great User Experiences: Tips and Techniques
Creating Great User Experiences: Tips and Techniques
 
Dealing with Estimation, Uncertainty, Risk, and Commitment
Dealing with Estimation, Uncertainty, Risk, and CommitmentDealing with Estimation, Uncertainty, Risk, and Commitment
Dealing with Estimation, Uncertainty, Risk, and Commitment
 
Make the Cloud Less Cloudy: A Perspective for Software Development Teams
Make the Cloud Less Cloudy: A Perspective for Software Development TeamsMake the Cloud Less Cloudy: A Perspective for Software Development Teams
Make the Cloud Less Cloudy: A Perspective for Software Development Teams
 
It’s All Fun and Games: Using Play to Improve Tester Creativity
It’s All Fun and Games: Using Play to Improve Tester CreativityIt’s All Fun and Games: Using Play to Improve Tester Creativity
It’s All Fun and Games: Using Play to Improve Tester Creativity
 
Build Your Mobile Testing Knowledge
Build Your Mobile Testing KnowledgeBuild Your Mobile Testing Knowledge
Build Your Mobile Testing Knowledge
 
Measurement and Metrics for Test Managers
Measurement and Metrics for Test ManagersMeasurement and Metrics for Test Managers
Measurement and Metrics for Test Managers
 
Designing Self-maintaining UI Tests for Web Applications
Designing Self-maintaining UI Tests for Web ApplicationsDesigning Self-maintaining UI Tests for Web Applications
Designing Self-maintaining UI Tests for Web Applications
 
Collaboration without Chaos
Collaboration without ChaosCollaboration without Chaos
Collaboration without Chaos
 
Test Management for Cloud-based Applications
Test Management for Cloud-based ApplicationsTest Management for Cloud-based Applications
Test Management for Cloud-based Applications
 
Adaptive Leadership: Accelerating Enterprise Agility
Adaptive Leadership: Accelerating Enterprise AgilityAdaptive Leadership: Accelerating Enterprise Agility
Adaptive Leadership: Accelerating Enterprise Agility
 
Data Collection and Analysis for Better Requirements
Data Collection and Analysis for Better RequirementsData Collection and Analysis for Better Requirements
Data Collection and Analysis for Better Requirements
 
Six Free Ideas to Improve Agile Success
Six Free Ideas to Improve Agile SuccessSix Free Ideas to Improve Agile Success
Six Free Ideas to Improve Agile Success
 
Risk-based Testing: Not for the Fainthearted
Risk-based Testing: Not for the FaintheartedRisk-based Testing: Not for the Fainthearted
Risk-based Testing: Not for the Fainthearted
 
Test Automation for Packaged Systems: Yes, You Can!
Test Automation for Packaged Systems: Yes, You Can!Test Automation for Packaged Systems: Yes, You Can!
Test Automation for Packaged Systems: Yes, You Can!
 
Agile at Scale with Scrum: The Good, the Bad, and the Ugly
Agile at Scale with Scrum: The Good, the Bad, and the UglyAgile at Scale with Scrum: The Good, the Bad, and the Ugly
Agile at Scale with Scrum: The Good, the Bad, and the Ugly
 

Ähnlich wie Management Issues in Test Automation

Management Issues in Test Automation
Management Issues in Test AutomationManagement Issues in Test Automation
Management Issues in Test AutomationTechWell
 
When Testers Feel Left Out in the Cold
When Testers Feel Left Out in the ColdWhen Testers Feel Left Out in the Cold
When Testers Feel Left Out in the ColdTechWell
 
Intelligent Mistakes in Test Automation
Intelligent Mistakes in Test AutomationIntelligent Mistakes in Test Automation
Intelligent Mistakes in Test AutomationTechWell
 
Blunders in Test Automation
Blunders in Test AutomationBlunders in Test Automation
Blunders in Test AutomationTechWell
 
It Seemed a Good Idea at the Time: Intelligent Mistakes in Test Automation
It Seemed a Good Idea at the Time: Intelligent Mistakes in Test AutomationIt Seemed a Good Idea at the Time: Intelligent Mistakes in Test Automation
It Seemed a Good Idea at the Time: Intelligent Mistakes in Test AutomationTechWell
 
Why Automation Fails—in Theory and Practice
Why Automation Fails—in Theory and PracticeWhy Automation Fails—in Theory and Practice
Why Automation Fails—in Theory and PracticeTechWell
 
Blunders in Test Automation
Blunders in Test AutomationBlunders in Test Automation
Blunders in Test AutomationTechWell
 
How to make Automation an asset for Organization
How to make Automation an asset for OrganizationHow to make Automation an asset for Organization
How to make Automation an asset for Organizationanuvip
 
Blunders in Test Automation
Blunders in Test AutomationBlunders in Test Automation
Blunders in Test AutomationTechWell
 
Top Ten Tips for Tackling Test Automation Webinar Presentation.pptx
Top Ten Tips for Tackling Test Automation Webinar Presentation.pptxTop Ten Tips for Tackling Test Automation Webinar Presentation.pptx
Top Ten Tips for Tackling Test Automation Webinar Presentation.pptxInflectra
 
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and MoreThe Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and MoreTechWell
 
How to manage your testing automation project ttm methodology
How to manage your testing automation project   ttm methodologyHow to manage your testing automation project   ttm methodology
How to manage your testing automation project ttm methodologyRam Yonish
 
Tune Agile Test Strategies to Project and Product Maturity
Tune Agile Test Strategies to Project and Product MaturityTune Agile Test Strategies to Project and Product Maturity
Tune Agile Test Strategies to Project and Product MaturityTechWell
 
Why Test Automation Fails
Why Test Automation FailsWhy Test Automation Fails
Why Test Automation FailsRanorex
 
DevOps Tactical Adoption Theory: Continuous Testing
DevOps Tactical Adoption Theory: Continuous TestingDevOps Tactical Adoption Theory: Continuous Testing
DevOps Tactical Adoption Theory: Continuous TestingBerk Dülger
 
Presentation1
Presentation1Presentation1
Presentation1anuvip
 
How To Transform the Manual Testing Process to Incorporate Test Automation
How To Transform the Manual Testing Process to Incorporate Test AutomationHow To Transform the Manual Testing Process to Incorporate Test Automation
How To Transform the Manual Testing Process to Incorporate Test AutomationRanorex
 
Introduction to Automation Testing
Introduction to Automation TestingIntroduction to Automation Testing
Introduction to Automation TestingFayis-QA
 
Automation Culture: Essential to Agile Success
Automation Culture: Essential to Agile SuccessAutomation Culture: Essential to Agile Success
Automation Culture: Essential to Agile SuccessTechWell
 

Ähnlich wie Management Issues in Test Automation (20)

Management Issues in Test Automation
Management Issues in Test AutomationManagement Issues in Test Automation
Management Issues in Test Automation
 
When Testers Feel Left Out in the Cold
When Testers Feel Left Out in the ColdWhen Testers Feel Left Out in the Cold
When Testers Feel Left Out in the Cold
 
Intelligent Mistakes in Test Automation
Intelligent Mistakes in Test AutomationIntelligent Mistakes in Test Automation
Intelligent Mistakes in Test Automation
 
Blunders in Test Automation
Blunders in Test AutomationBlunders in Test Automation
Blunders in Test Automation
 
It Seemed a Good Idea at the Time: Intelligent Mistakes in Test Automation
It Seemed a Good Idea at the Time: Intelligent Mistakes in Test AutomationIt Seemed a Good Idea at the Time: Intelligent Mistakes in Test Automation
It Seemed a Good Idea at the Time: Intelligent Mistakes in Test Automation
 
Why Automation Fails—in Theory and Practice
Why Automation Fails—in Theory and PracticeWhy Automation Fails—in Theory and Practice
Why Automation Fails—in Theory and Practice
 
Blunders in Test Automation
Blunders in Test AutomationBlunders in Test Automation
Blunders in Test Automation
 
How to make Automation an asset for Organization
How to make Automation an asset for OrganizationHow to make Automation an asset for Organization
How to make Automation an asset for Organization
 
Blunders in Test Automation
Blunders in Test AutomationBlunders in Test Automation
Blunders in Test Automation
 
Top Ten Tips for Tackling Test Automation Webinar Presentation.pptx
Top Ten Tips for Tackling Test Automation Webinar Presentation.pptxTop Ten Tips for Tackling Test Automation Webinar Presentation.pptx
Top Ten Tips for Tackling Test Automation Webinar Presentation.pptx
 
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and MoreThe Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More
 
Automation testing
Automation testingAutomation testing
Automation testing
 
How to manage your testing automation project ttm methodology
How to manage your testing automation project   ttm methodologyHow to manage your testing automation project   ttm methodology
How to manage your testing automation project ttm methodology
 
Tune Agile Test Strategies to Project and Product Maturity
Tune Agile Test Strategies to Project and Product MaturityTune Agile Test Strategies to Project and Product Maturity
Tune Agile Test Strategies to Project and Product Maturity
 
Why Test Automation Fails
Why Test Automation FailsWhy Test Automation Fails
Why Test Automation Fails
 
DevOps Tactical Adoption Theory: Continuous Testing
DevOps Tactical Adoption Theory: Continuous TestingDevOps Tactical Adoption Theory: Continuous Testing
DevOps Tactical Adoption Theory: Continuous Testing
 
Presentation1
Presentation1Presentation1
Presentation1
 
How To Transform the Manual Testing Process to Incorporate Test Automation
How To Transform the Manual Testing Process to Incorporate Test AutomationHow To Transform the Manual Testing Process to Incorporate Test Automation
How To Transform the Manual Testing Process to Incorporate Test Automation
 
Introduction to Automation Testing
Introduction to Automation TestingIntroduction to Automation Testing
Introduction to Automation Testing
 
Automation Culture: Essential to Agile Success
Automation Culture: Essential to Agile SuccessAutomation Culture: Essential to Agile Success
Automation Culture: Essential to Agile Success
 

Mehr von TechWell

Failing and Recovering
Failing and RecoveringFailing and Recovering
Failing and RecoveringTechWell
 
Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization TechWell
 
Test Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTest Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTechWell
 
Build Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyBuild Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyTechWell
 
Testing Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for SuccessTesting Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for SuccessTechWell
 
Implement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowImplement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowTechWell
 
Develop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityDevelop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityTechWell
 
Eliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyEliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyTechWell
 
Transform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTransform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTechWell
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipTechWell
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsTechWell
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GameTechWell
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsTechWell
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationTechWell
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessTechWell
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateTechWell
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessTechWell
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTechWell
 
Scale: The Most Hyped Term in Agile Development Today
Scale: The Most Hyped Term in Agile Development TodayScale: The Most Hyped Term in Agile Development Today
Scale: The Most Hyped Term in Agile Development TodayTechWell
 

Mehr von TechWell (20)

Failing and Recovering
Failing and RecoveringFailing and Recovering
Failing and Recovering
 
Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization
 
Test Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTest Design for Fully Automated Build Architecture
Test Design for Fully Automated Build Architecture
 
Build Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyBuild Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test Strategy
 
Testing Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for SuccessTesting Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for Success
 
Implement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowImplement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlow
 
Develop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityDevelop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your Sanity
 
Ma 15
Ma 15Ma 15
Ma 15
 
Eliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyEliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps Strategy
 
Transform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTransform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOps
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—Leadership
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile Teams
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile Game
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps Implementation
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery Process
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to Automate
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for Success
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile Transformation
 
Scale: The Most Hyped Term in Agile Development Today
Scale: The Most Hyped Term in Agile Development TodayScale: The Most Hyped Term in Agile Development Today
Scale: The Most Hyped Term in Agile Development Today
 

Kürzlich hochgeladen

"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Wonjun Hwang
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesZilliz
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 

Kürzlich hochgeladen (20)

"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Management Issues in Test Automation

  • 9. 1-Managing Management Issues in Test Automation Planning & Managing Test Automation 1 Managing 2 Technical 3 Conclusion 1-1 Management Issues in Test Automation Managing 1 2 3 Contents Responsibilities Pilot project Test automation objectives Return on Investment (ROI) 1-2 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 10. 1-Managing What is an automated test? •  a test! –  designed by a tester for a purpose •  test is executed –  implemented / constructed to run automatically using a tool –  could be run manually also •  who decides what tests to run? •  who decides how a test is run? 1-3 Existing perceptions of automation skills •  many books & articles don’t mention automation skills –  or assume that they must be acquired by testers •  test automation is technical in some ways •  using the test execution tool directly (script writing) •  designing the testware architecture (framework / regime) •  debugging automation problems –  this work requires technical skill –  most people now realise this (but many still don’t) See article: “Technical vs non-technical skills in test automation” presented by Dorothy Graham info@dorothygraham.co.uk 1-4 © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 11. 1-Managing Responsibilities Testers •  test the software –  design tests –  select tests for automation •  requires planning / negotiation Automators •  automate tests (requested by testers) •  support automated testing –  allow testers to execute tests –  help testers debug failed tests –  provide additional tools (homegrown) •  execute automated tests –  should not need detailed technical expertise •  analyse failed automated tests –  report bugs found by tests –  problems with the tests may need help from the automation team •  predict –  maintenance effort for software changes –  cost of automating new tests •  improve the automation –  more benefits, less cost 1-5 Test manager’s dilemma •  who should undertake automation work –  not all testers can automate (well) –  not all testers want to automate –  not all automators want to test! •  conflict of responsibilities –  automate tests vs. run tests manually •  get additional resources as automators? –  contractors? borrow a developer? tool vendor? 1-6 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 12. 1-Managing Roles within the automation team •  Testware architect –  designs the overall structure for the automation •  Champion –  “sells” automation to managers and testers •  Tool specialist –  technical aspects, licensing, updates to the tool •  Automated test (& script) developers –  write new keyword scripts as needed –  debug automation problems 1-7 Agile automation: Lisa Crispin –  starting point: buggy code, new functionality needed, whole team regression tests manually –  testable architecture: (open source tools) •  want unit tests automated (TDD), start with new code •  start with GUI smoke tests - regression •  business logic in middle level with FitNesse –  100% regression tests automated in one year •  selected set of smoke tests for coverage of stories –  every 6 mos, engineering sprint on the automation –  key success factors •  management support & communication •  whole team approach, celebration & refactoring presented by Dorothy Graham info@dorothygraham.co.uk 1-8 © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 13. 1-Managing Automation and agile •  agile automation: apply agile principles to automation –  multidisciplinary team –  automation sprints –  refactor when needed •  fitting automation into agile development –  ideal: automation is part of “done” for each sprint •  Test-Driven Development = write and automate tests first –  alternative: automation in the following sprint •  may be better for system level tests See www.satisfice.com/articles/agileauto-paper.pdf (James Bach) 1-9 Automation in agile/iterative development: [diagram – as releases A, B, C, … arrive, testers manually test the current release, automators automate the best tests from the previous release, and testers run the accumulated automated tests against the new one] 1-10 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 14. 1-Managing Requirements for agile test framework •  Support manual and automated testing –  using the same test construction process •  Support fully manual execution at any time –  requires good naming convention for components •  Support manual + automated execution –  so test can be used before it is 100% automated •  Implement reusable objects •  Allow “stubbing” objects before GUI available Source: Dave Martin, LDSChurch.org, email Management Issues in Test Automation Managing 1 2 1-11 3 Contents Responsibilities Pilot project Test automation objectives Return on Investment (ROI) 1-12 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 15. 1-Managing A tale of two projects: Ane Clausen –  Project 1: 5 people part-time, within test group •  no objectives, no standards, no experience, unstable •  after 6 months was closed down –  Project 2: 3 people full time, 3-month pilot •  worked on two (easy) insurance products, end to end •  1st month: learn and plan, 2nd & 3rd months: implement •  started with simple, stable, positive tests, easy to do •  close cooperation with business, developers, delivery •  weekly delivery of automated Business Process Tests –  after 6 months, automated all insurance products 1-13 Pilot project •  reasons –  you’re unique –  many variables / unknowns at start •  benefits –  find the best way for you (best practice) –  solve problems once –  establish confidence (based on experience) –  set realistic targets •  objectives –  demonstrate tool value –  gain experience / skills in the use of the tool –  identify changes to existing test process –  set internal standards and conventions –  refine assessment of costs and achievable benefits 1-14 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 16. 1-Managing What to explore in the pilot •  build / implement automated tests (architecture) –  different ways to build stable tests (e.g. 10 – 20) •  maintenance –  different versions of the application –  reduce maintenance for most likely changes •  failure analysis –  support for identifying bugs –  coping with common bugs affecting many automated tests Also: naming conventions, reporting results, measurement 1-15 After the pilot… •  having processes & standards is only the start –  30% on new process –  70% on deployment Source: Eric Van Veenendaal, successful test process improvement •  marketing, training, coaching •  feedback, focus groups, sharing what’s been done •  the (psychological) Change Equation –  change only happens if (x + y + z) > w x = dissatisfaction with the current state y = shared vision of the future z = knowledge of the steps to take to get from here to there w = psychological / emotional cost to change for this person 1-16 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 17. 1-Managing Management Issues in Test Automation Managing 1 2 3 Contents Responsibilities Pilot project Test automation objectives Return on Investment (ROI) 1-17 An automation effort •  is a project –  with goals, responsibilities, and monitoring –  but not just a project – ongoing effort is needed •  not just one effort – different projects –  when acquiring a tool – pilot project –  when anticipated benefits have not materialized –  different projects at different times •  with different objectives •  objectives are important for automation efforts –  where are we going? are we getting there? presented by Dorothy Graham info@dorothygraham.co.uk 1-18 © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 18. 1-Managing Efficiency and effectiveness: [2×2 chart, Effectiveness across vs Efficiency up – good slow testing = manual testing; good fast testing = automated testing (the greatest benefit); poor fast testing = not good, but common; poor slow testing = the worst of both] 1-19 Good objectives for automation? –  run regression tests evenings and weekends –  give testers a new skill / enhance their image –  run tests tedious and error-prone if run manually –  gain confidence in the system –  reduce the number of defects found by users 1-20 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 19. Test Automation Objectives Exercise
The following are some possible test automation objectives. Evaluate each objective – is it a suitable objective for automation? If not, why not? Which are already in place in your own organisation? (For each, mark: Good automation objective? (If not, why not) – and: Already in place?)
–  Achieve faster performance for the system: NO – this is not an objective for test execution automation, nor is it an objective for performance testing! Performance test tools may help by giving the measurements to see whether the system is faster.
–  Achieve good results and quick payback with no additional resources, effort or time
–  Automate all tests
–  Build a long-lasting automation regime that is easy to maintain
–  Easy to add new automated tests
–  Ensure repeatability of regression tests
–  Ensure that we meet our release deadlines
–  Find more bugs
–  Find defects in less time
–  Free testers from repeated (boring) test execution to spend more time in test design
© Dorothy Graham, 2011 STA1110126 Page 1 of 5
  • 21. Test Automation Objectives Exercise
Possible test automation objectives (continued; mark each: Good automation objective? (If not, why not) – Already in place?):
–  Improve our testing
–  Reduce elapsed time for testing by x%
–  Reduce the cost and time for test design
–  Reduce the number of test staff
–  Run more tests
–  Run regression tests more often
–  Run tests every night on all PCs
–  Achieve a positive Return on Investment in no more than <x> test iterations (where x = ?)
–  Other objectives:
© Dorothy Graham, 2011 STA1110126 Page 2 of 5
  • 23. 1-Managing Reduce test execution time: [chart – relative effort for edit tests (maintenance), set-up, execute, analyse failures and clear-up, compared across manual testing, the same tests automated, and more mature automation] 1-21 Automate x% of the tests •  are your existing tests worth automating? –  if testing is in chaos, automating gives you faster chaos •  which tests to automate (first)? •  what % of manual tests should be automated? –  “100%” sounds impressive but may not be wise •  what else can be automated –  automation can do things not possible or practical in manual testing! 1-22 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 24. 1-Managing Manual vs automated: [diagram – the set of manual tests overlaps the set of automated tests: some manual tests are automated (the “% manual” measure), some are not automated yet, some are not worth automating; automation also covers tests (& verification) not possible to do manually, e.g. exploratory test automation] 1-23 Success = find lots of bugs? •  tests find bugs, not automation •  automation is a mechanism for running tests •  the bug-finding ability of a test is not affected by the manner in which it is executed •  this can be a dangerous objective –  especially for regression automation! 1-24 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 25. 1-Managing When is “find more bugs” a good objective for automation? •  objective is “fewer regression bugs missed” •  when the first run of a given test is automated –  MBT, Exploratory test automation, automated test design –  keyword-driven (e.g. users populate spreadsheet) •  find bugs in parts we wouldn’t have tested? 1-25 Good objectives for test automation •  become measurable quality attributes for automation •  realistic and achievable •  short and long term •  regularly re-visited and revised •  should be different objectives for testing and for automation •  automation should support testing activities 1-26 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 26. 1-Managing Quality attributes for automation •  related to objectives •  measurable (see Tom Gilb’s work) •  examples –  maintenance time for testware –  failure analysis time –  improved support for testers –  coverage of system tested by automation –  increasing EMTE 1-27 EMTE – what is it? •  Equivalent Manual Test Effort –  given a set of automated tests, –  how much effort would it take •  IF those tests were run manually •  note –  you would not actually run these tests manually –  EMTE = what you could have tested manually •  and what you did test automatically –  used to show test automation benefit 1-28 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 27. 1-Managing EMTE – how does it work? [diagram – a manual test occupies a block of manual testing time; once automated, it takes only the time to run the test, so automating just to run the tests 1.5 times doesn’t make sense – the benefit comes from being able to run them more] 1-29 EMTE – how does it work? (2) [diagram – each automated run clocks up the manual effort it replaces, so repeated automated runs accumulate EMTE] 1-30 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 28. 1-Managing EMTE example •  example –  automated tests take 2 hours –  if those same tests were run manually, 4 days •  frequency –  automated tests run every day for 2 weeks (including once at the weekend), 11 times •  calculation –  EMTE = 1-31 Management Issues in Test Automation Managing 1 2 3 Contents Responsibilities Pilot project Test automation objectives Return on Investment (ROI) 1-32 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
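To fill in the blank calculation above (the arithmetic is mine, applying the slide’s own definition of EMTE, not text printed on the slide): each automated run stands in for 4 days of manual effort, and the tests were run 11 times, so EMTE = 11 × 4 days = 44 days of equivalent manual test effort – achieved with 11 × 2 hours = 22 hours of largely unattended machine time.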
  • 29. 1-Managing Is this Return on Investment (ROI)? •  tests are run more often •  tests take less time to run •  it takes less human effort to run tests •  we can test (cover) more of the system •  we can run the equivalent of days / weeks of manual testing in a few minutes / hours •  faster time to market. ROI = (benefit – cost) / cost. These are (good) benefits but are not ROI. 1-33 How important is ROI? •  ROI can be dangerous –  easiest way to measure: tester time –  may give impression that tools replace people •  “automation is an enabler for success, not a cost reduction tool” •  Yoram Mizrachi, “Planning a mobile test automation strategy that works”, ATI magazine, July 2012 •  many achieve lasting success without measuring ROI (depends on your context) –  need to be aware of benefits (and publicize them) 1-34 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
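To make the formula concrete (these numbers are hypothetical, not from the tutorial): if automation costs 400 hours to build and maintain over a year, and in that year it saves 1,200 hours of manual test execution effort, then ROI = (1200 – 400) / 400 = 2.0, i.e. a 200% return. Anything above zero means the investment has paid for itself; the benefits listed on the slide only become ROI once they are costed like this.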
  • 30. 1-Managing An example comparative benefits chart: [bar chart, manual vs automated – exec speed: 14 x faster; times run: 5 x more often; data variety: 4 x more data; tester work: 12 x less effort] ROI spreadsheet – email me for a copy 1-35 Why measure automation ROI? •  to justify and confirm starting automation –  business case for purchase/investment decision, to confirm ROI has been achieved e.g. after pilot –  both compare manual vs automated testing •  to monitor on-going automation –  for increased efficiency, continuous improvement –  build time, maintenance time, failure analysis time, refactoring time •  on-going costs – what are the benefits? 1-36 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 31. 1-Managing MBT @ ESA: Stefan Mohacsi, Armin Beer –  home-grown tool interfaced to commercial tools •  Model-Based Testing and Test Case Generation •  layers of abstraction for maintainability –  define model before software is ready •  capture and assign GUI objects later •  developers build in testability –  ROI calculations •  invest 460 hours in automation infrastructure •  break-even after 4 test cycles 1-37 Example ROI graph using MBT: [graph – people-hours per test cycle (cycles 1 to 6) for manual vs automated testing, showing the break-even after about four cycles] Source: Stefan Mohacsi & Armin Beer 1-38 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 32. 1-Managing Database testing: Henri van de Scheur –  tool developed in-house (now open source) •  agreed requirements with relevant people up front •  9 months, 4 developers in Java (right people) •  good architecture, start with quick wins –  flexible configuration, good reporting, metrics used to improve –  results: 2400 times more efficient •  from: 20 people run 40 tests on 6 platforms in 4 days •  to: 1 person runs 200 tests on 10 platforms in 1 day •  quick dev tests, nightly regression, release tests •  life cycle of automated tests •  little maintenance, machines used 24x7, better quality 1-39 Large S Africa bank: Michael Snyman •  was project-based, too late, lessons not learned –  “our shelves were littered with tools..” •  2006: automation project, resourced, goals –  formal automation process •  ROI after 3 years –  US$4m on testing project, automation $850K –  savings $8m, ROI 900% •  20 testers for 4 weeks to 2 in 1 week –  automation ROI justified the testing project •  only initiative that was measured accurately presented by Dorothy Graham info@dorothygraham.co.uk 1-40 © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 33. 1-Managing Example ROI graph: [graph – savings % (from -200% to +100%) against number of tests (0 to 2500), with separate curves for tests run monthly, weekly and daily; the more often the tests are run, the fewer tests are needed before savings turn positive] Source: Lars Wahlberg, Chapter 18 in “Experiences of Test Automation” 1-41 Sample ‘starter kit’ for metrics for test automation (and testing) •  some measure of benefit –  e.g. EMTE or coverage •  average time to automate a test (or set of related tests) •  total effort spent on maintaining automated tests (expressed as an average per test) •  also measure testing, e.g. Defect Detection Percentage (DDP) – test effectiveness –  more info on DDP on my web site & blog 1-42 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
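For reference, DDP (Defect Detection Percentage) – the test-effectiveness measure mentioned above – is usually defined in Dorothy Graham’s material as the proportion of the defects present that the testing actually found, counting those that escaped and were found later:

DDP = defects found by testing / (defects found by testing + defects found afterwards)

A worked illustration with hypothetical numbers: if the automated regression tests find 90 defects and users later report 10 regression defects that the tests missed, DDP = 90 / (90 + 10) = 90%.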
  • 34. 1-Managing Recommendations •  don’t measure everything! •  choose three or four measures –  applicable to your most important objectives •  monitor for a few months –  see what you learn •  change measures if they don’t give useful information 1-43 Managing 1 2 3 Management Issues in Test Automation Summary: key points •  •  •  •  •  Assign responsibility for automation (and testing) Use a pilot project to explore the best ways of doing things Know your automation objectives Measure what’s important to you Show ROI from automation 1-44 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 35. 1-Managing Good objectives for automation? (answers) –  run regression tests evenings and weekends not a good objective, unless they are worthwhile tests! –  give testers a new skill / enhance their image not a good objective, could be a useful by-product –  run tests tedious and error-prone if run manually good objective –  gain confidence in the system an objective for testing, but automated regression tests help achieve it –  reduce the number of defects found by users not a good objective for automation, good objective for testing! 1-45 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 37. Test Automation Objectives Solution
We have given some ideas as to which objectives are good and why the others are not.
–  Achieve faster performance for the system: NO – this is not an objective for test execution automation, nor is it an objective for performance testing! Performance test tools may help by giving the measurements to see whether the system is faster.
–  Achieve good results and quick payback with no additional resources, effort or time: NO – this is totally unrealistic – expecting a miracle with no investment!
–  Automate all tests: NO – automating ALL tests is neither realistic nor sensible. Automate only those tests that are worth automating.
–  Build a long-lasting automation regime that is easy to maintain: YES – this is an excellent objective for test automation, and it is measurable.
–  Easy to add new automated tests: YES. With a good automation regime, it can be easier to add a new automated test than to run that test manually.
–  Ensure repeatability of regression tests: YES. The tools will run the same test in the same way every time.
–  Ensure that we meet our release deadlines: NO. Automation may help to run some tests that are required before release, but there are many more factors that go into a release decision.
–  Find more bugs: NO. Automation just runs tests. It is the tests that find the bugs, whether they are run manually or are automated.
–  Find defects in less time: Not really. Some types of defects (regression bugs) will be found more quickly by automated tests, but it may actually take longer to analyse the failures found.
–  Free testers from repeated (boring) test execution to spend more time in test design: YES. This is a good objective for test execution automation.
© Dorothy Graham, 2011 STA110126 Page 3 of 5
  • 39. Test Automation Objectives Solution
–  Improve our testing: NO. Better testing practices and better use of techniques will improve testing.
–  Reduce elapsed time for testing by x%: NO. Elapsed time depends on many factors, and not much on whether tests are automated (see further explanation in the slides).
–  Reduce the cost and time for test design: NO. Test design is independent from automation – the time spent in design is not affected by how those tests are executed.
–  Reduce the number of test staff: NO. You will need more staff to implement the automation, not less. It can make existing staff more productive by spending more time on test design.
–  Run more tests: YES but only long term. Short term, you may actually run fewer tests because of the effort taken to automate them.
–  Run regression tests more often: YES – this is what the test execution tools do best.
–  Run tests every night on all PCs: NO. It may look impressive, but what tests are being run? Are they useful? If not, this is a waste of electricity.
–  Achieve a positive Return on Investment in no more than <6> test iterations: YES. This is a good objective, if the number of iterations is a reasonable number (e.g. 6).
–  Other objectives:
© Dorothy Graham, 2011 STA110126 Page 4 of 5
  • 41. Test Automation Objectives Solution Test Automation Objectives: Selection and Measurement On this page, record the test objectives that would be most appropriate for your organisation (and why), and how you will measure them (what to measure and how to measure it). I suggest that you include at least one about showing Return on Investment. If you currently have automation objectives in place in your organisation that are not good ones, make sure that they are removed and replaced by the better ones below! Proposed test automation objective (with justification) What to measure and how to measure it Add any comments or thoughts here or on the back of this page. © Dorothy Graham, 2011 STA110126 Page 5 of 5
  • 43. 2-Technical Management Issues in Test Automation Technical Issues for Managers 1 Managing 2 Technical 3 Conclusion 2-1 Technical 1 2 Management Issues in Test Automation 3 Contents Testware architecture Scripting, keywords and DSTL Automating more than execution 2-2 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 44. 2-Technical Testware architecture: [diagram – testers write tests in a DSTL using high-level keywords (abstraction here: easier to write automated tests; widely used); test automator(s) maintain the structured testware / structured scripts (abstraction here: easier to maintain, and to change tools; long life); the test execution tool runs the scripts] 2-3 Easy way out: use the tool’s architecture •  tool will have its own way of organising tests –  where to put things (for the convenience of the tool!) –  will “lock you in” to that tool – good for vendors! •  a better way (gives independence from tools) –  organise your tests to suit you –  as part of pre-processing, copy files to where the tool needs (expects) to find them –  as part of post-processing, copy back to where you want things to live (see the sketch below) 2-4 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
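A minimal sketch of that pre-/post-processing idea, assuming an invented directory layout (none of these paths come from the tutorial, and Python here stands in for whatever scripting the team uses):

import shutil
from pathlib import Path

# Our own structure is the master copy of the testware; the tool's working
# directory is wherever the execution tool insists on finding things.
OUR_TESTWARE = Path("testware/smoke_suite")   # assumed layout, for illustration
TOOL_WORKDIR = Path("tool_projects/current")  # assumed tool location
RESULTS_HOME = Path("results/smoke_suite")

def pre_process():
    # Copy scripts and data to where the tool needs (expects) to find them.
    for part in ("scripts", "data"):
        src = OUR_TESTWARE / part
        if src.exists():
            shutil.copytree(src, TOOL_WORKDIR / part, dirs_exist_ok=True)

def post_process():
    # Copy logs and results back to where we want them to live.
    for part in ("logs", "results"):
        src = TOOL_WORKDIR / part
        if src.exists():
            shutil.copytree(src, RESULTS_HOME / part, dirs_exist_ok=True)

pre_process()
# ... invoke the test execution tool here ...
post_process()

If the tool is ever replaced, only these two wrapper steps need to change; the testware itself stays where, and how, the team wants it.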
  • 45. 2-Technical Tool-specific script ratio: [diagram – when most of the testware between the testers and the test execution tool consists of tool-specific scripts, the result is high maintenance and/or tool-dependence; keeping most of the testware non-tool-specific, with only a thin tool-specific layer above the test execution tool, keeps both down] 2-5 Key issues •  scale –  the number of scripts, data files, results files, benchmark files, etc. will be large and growing •  shared scripts and data –  efficient automation demands reuse of scripts and data through sharing, not multiple copies •  multiple versions –  as the software changes so too will some tests but the old tests may still be required •  multiple environments / platforms 2-6 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 46. 2-Technical Terms - Testware artefacts: [diagram – Testware divides into Test Materials (the products / inputs: scripts, doc (specifications), data, env, utilities, expected results) and Test Results (the by-products: logs, actual results, status, differences, differences summary)] 2-7 Benefits of standard approach •  tools can assume knowledge (architecture) –  they need less information; are easier to use; fewer errors will be made •  can automate many tasks –  checking (completeness, interdependencies); documentation (summaries, reports); browsing •  portability of tests –  between people, projects, organisations, etc. •  shorter learning curve 2-8 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 47. 2-Technical Technical 1 2 Management Issues in Test Automation 3 Contents Testware architecture Scripting, keywords and DSTL Automating more than execution 2-9 Levels of scripting •  capture replay → high maintenance costs •  structured scripts use programming constructs –  modular, calling structure, loops, IF statements –  few scripts affected by software changes •  data-driven: control scripts process spreadsheets / databases –  easy to add new similar tests •  keyword-driven / DSTL / Framework –  one control script processes actions and data –  including verification actions 2-10 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 48. 2-Technical Data-driven example: [diagram – a single control script drives every test case from its data file:

For each TESTCASE
  OpenDataFile(TESTCASEn)
  For each RECORD in the file: ReadDataFile(RECORD)
    Case (Column(RECORD))
      FILE:   OpenFile(INPUTFILE)
      ADD:    AddItem(ITEM)
      MOVE:   MoveItem(FROM, TO)
      DELETE: DeleteItem(ITEM)
      …
  Next RECORD
Next TESTCASE

The data files (TestCase1, TestCase2, …) hold one action per row with its data, e.g. FILE countries, ADD Sweden, MOVE 4,1, DELETE 2.] 2-11 About keywords •  single control script (Interactive Test Environment) –  improvements to this benefit all tests (ROI) –  extracts high-level instructions from scripts •  ‘test definition’ –  independent of tool scripting language –  a language tailored to testers’ requirements •  software design •  application domain •  business processes •  more tests, fewer scripts 2-12 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
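A runnable sketch of that data-driven control script in plain Python (the action functions are stand-ins for real tool calls, and the file name and row layout are assumptions for illustration):

import csv
from pathlib import Path

def open_file(name):     print(f"open input file {name}")   # stand-ins for
def add_item(item):      print(f"add item {item}")          # real tool actions
def move_item(src, dst): print(f"move item {src} -> {dst}")
def delete_item(item):   print(f"delete item {item}")

# The keyword in the first column selects the action; the rest is its data.
ACTIONS = {
    "FILE":   lambda args: open_file(args[0]),
    "ADD":    lambda args: add_item(args[0]),
    "MOVE":   lambda args: move_item(args[0], args[1]),
    "DELETE": lambda args: delete_item(args[0]),
}

def run_test_case(data_file):
    with open(data_file, newline="") as f:
        for record in csv.reader(f):
            if not record:
                continue
            keyword, *args = [field.strip() for field in record]
            ACTIONS[keyword](args)   # one control script for every test case

# Create and run a sample data file (rows as on the slide).
Path("TestCase1.csv").write_text("FILE,countries\nADD,Sweden\nMOVE,4,1\nDELETE,2\n")
run_test_case("TestCase1.csv")

Adding a new, similar test is then just a matter of writing another data file – no new code is needed.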
  • 49. 2-Technical Comparison of data files – which is easier to read/understand? what happens when the test becomes large and complex? [side-by-side table:

data-driven approach          keyword approach
FILE    Europe                ScribbleOpen Europe
ADD     France                AddToList France
ADD     Italy                 AddToList Italy
MOVE    1,3                   MoveItem 1 to 3
MOVE    2,2                   MoveItem 2 to 2
DELETE  1                     DeleteItem 1
MOVE    5,2                   MoveItem 5 to 2
SAVE    Test2                 SaveAs Test2

– the keyword version looks more like a test] 2-13 Execution-tool-independent framework: [diagram – test procedures / definitions, written in a tool-independent scripting language, drive a framework of script libraries; only thin adapters for each test execution tool (Test Tool, Another Test Tool) are tool-dependent, each driving the software under test; some tests can also be run manually] 2-14 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
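And a matching sketch of a single control script interpreting the keyword version (the Scribble keywords are taken from the slide; everything else is an illustrative assumption):

# The keyword lines from the slide, as a tester might write them.
TEST = """\
ScribbleOpen Europe
AddToList France
AddToList Italy
MoveItem 1 to 3
MoveItem 2 to 2
DeleteItem 1
MoveItem 5 to 2
SaveAs Test2
"""

def scribble_open(name):      print(f"open scribble document {name}")
def add_to_list(item):        print(f"add {item} to list")
def move_item(src, _to, dst): print(f"move item {src} to {dst}")
def delete_item(pos):         print(f"delete item {pos}")
def save_as(name):            print(f"save as {name}")

KEYWORDS = {
    "ScribbleOpen": scribble_open,
    "AddToList":    add_to_list,
    "MoveItem":     move_item,      # receives the three tokens "1", "to", "3"
    "DeleteItem":   delete_item,
    "SaveAs":       save_as,
}

for line in TEST.splitlines():
    if line.strip():
        keyword, *args = line.split()
        KEYWORDS[keyword](*args)    # the single control script

Improving this one dispatcher (logging, error handling, verification) benefits every test written in the keyword language.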
  • 50. 2-Technical Technical 1 2 Management Issues in Test Automation 3 Contents Testware architecture Scripting, keywords and DSTL Automating more than execution 2-15 Automated tests / automated testing: [two columns]
Automated tests (a manual process around them): select / identify test cases to run; set up the test environment (create test environment, load test data); repeat for each test case: set up test pre-requisites, execute, compare results, log results, analyse test failures, report defect(s), clear up after test case; clear up the test environment (delete unwanted data, save important data); summarise results.
Automated testing (an automated process): the same steps, but the per-test-case loop contains only set-up, execute, compare, log and clear-up; test failures are analysed and defects reported outside the loop, after the environment clear-up and the results summary. A sketch of this follows below. 2-16 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
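A skeletal harness makes the “automated testing” column concrete – everything inside the loop runs unattended, and only failure analysis and defect reporting are left to people (all helpers are placeholder stubs, not a real tool API):

# Skeleton of an unattended test run; every helper is a placeholder stub.
def set_up_environment():       print("create environment, load test data")
def clear_up_environment():     print("delete unwanted data, save important data")
def set_up_prerequisites(test): print(f"set up pre-requisites for {test}")
def execute(test):              return f"output of {test}"
def expected_result(test):      return f"output of {test}"
def log_result(test, passed):   print(f"{test}: {'PASS' if passed else 'FAIL'}")
def clear_up_after(test):       print(f"clear up after {test}")

def run_suite(test_cases):
    set_up_environment()
    results = []
    for test in test_cases:                               # unattended loop
        set_up_prerequisites(test)
        passed = execute(test) == expected_result(test)   # compare results
        log_result(test, passed)
        clear_up_after(test)
        results.append((test, passed))
    clear_up_environment()
    print(f"{sum(ok for _, ok in results)}/{len(results)} passed")  # summarise
    return [test for test, ok in results if not ok]

# Failure analysis and defect reporting stay with people, after the run.
failures = run_suite(["login_test", "search_test"])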
  • 51. 2-Technical Two types of comparison •  dynamic comparison –  done during test execution –  performed by the test tool –  can be used to direct the progress of the test •  e.g. if this fails, do that instead –  fail information written to test log (usually) •  post-execution comparison –  done after the test execution has completed –  good for comparing files or databases –  can be separated from test execution –  can have different levels of comparison •  e.g. compare in detail if all high level comparisons pass 2-17 Sensitive versus specific (robust) tests: [diagram – the test is supposed to change only one field of the test outcome; a specific test verifies this field only, while a sensitive test verifies the entire outcome, so when an unexpected change occurs elsewhere only the sensitive test catches it] 2-18 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 52. 2-Technical Too much sensitivity = redundancy: [diagram – three tests, each changing a different field, with an unexpected change occurring in every test outcome; if all tests are specific, the unexpected change is missed, but if all tests are sensitive, they all show the unexpected change] 2-19 Comparison is not simple –  your expected results (“golden version”) –  masking/filtering (e.g. date test is run) •  may take significant effort to compare what you want and exclude what you don’t –  different order of output –  false fail (should pass) •  e.g. bitmap comparison on images, can eat time –  false pass (should fail) •  gives unjustified confidence (“zombie tests”) –  make your automated tests red until proved green 2-20 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
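A small sketch of post-execution comparison with masking, in the spirit of the bullets above (the masked patterns and the sample output are illustrative assumptions):

import re

# Volatile fields to exclude before comparing, e.g. the date the test is run.
MASKS = [
    (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
    (re.compile(r"session id: \w+"), "session id: <ID>"),
]

def normalise(text):
    for pattern, placeholder in MASKS:
        text = pattern.sub(placeholder, text)
    return text

def compare_post_execution(actual_text, expected_text):
    # Compare only what we mean to check; report differences for analysis.
    actual, expected = normalise(actual_text), normalise(expected_text)
    if actual == expected:
        return True
    for i, (a, e) in enumerate(zip(actual.splitlines(), expected.splitlines()), 1):
        if a != e:
            print(f"line {i}: expected {e!r}, got {a!r}")
    return False

ok = compare_post_execution("run at 2013-04-30 08:30:00 result=42",
                            "run at 2013-01-01 00:00:00 result=42")
print("PASS" if ok else "FAIL")   # PASS: the timestamps are masked out

Getting the masks right is where the “significant effort” goes: too little masking gives false fails; too much gives false passes.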
  • 53. 2-Technical Outside the box: Jonathan Kohl –  task automation (throw-away scripts) •  entering data sets to 2 browsers (verify by watching) •  install builds, copy test data –  support manual exploratory testing –  testing under the GUI to the database (“side door”) –  don’t believe everything you see •  1000s of automated tests pass too quickly •  monitoring tools to see what was happening •  “if there’s no error message, it must be ok” –  defects didn’t make it to the test harness –  overloaded system ignored data that was wrong 2-21 Automation + : [word cloud – DSTL, structured testware architecture, disposable scripts, execution, comparison, utilities (e.g. data load), pre- & post-processing, metrics (e.g. EMTE), loosen your oracles, ETA, monkeys] 2-22 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
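A sketch of the “side door” idea – verifying under the GUI, directly in the database, that an action really arrived (the schema and names are invented for illustration, with SQLite standing in for the application’s store):

import sqlite3

# A throw-away in-memory database standing in for the application's store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO orders (status) VALUES ('SUBMITTED')")
conn.commit()

def check_order_recorded(order_id, expected_status):
    # Bypass the GUI: ask the database whether the data actually landed.
    row = conn.execute("SELECT status FROM orders WHERE id = ?",
                       (order_id,)).fetchone()
    if row is None:
        print(f"order {order_id}: missing from database")
        return False
    if row[0] != expected_status:
        print(f"order {order_id}: status {row[0]!r}, expected {expected_status!r}")
        return False
    return True

print("PASS" if check_order_recorded(1, "SUBMITTED") else "FAIL")

A quiet GUI is not proof of a working system; a check like this catches the case where the front end swallowed or silently dropped the data.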
  • 54. 2-Technical Technical 1 2 3 Management Issues in Test Automation Summary: key points •  •  Structure your automation testware to suit you Use the highest level of scripting that you need •  •  e.g. keyword / Domain-Specific Test Language Automate more than execution 2-23 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 55. 3-Conclusion Management Issues in Test Automation Final Advice and Conclusion 1 Managing 2 Technical 3 Conclusion 2-1 Conclusion 1 2 Management Issues in Test Automation 3 Contents Final advice Your strategy Conclusion 2-2 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 56. 3-Conclusion Dealing with high level management •  management support –  building good automation takes time and effort –  set realistic expectations •  benefits and ROI –  make benefits visible (charts on the walls) –  metrics for automation •  to justify it, compare to manual test costs over iterations •  on-going continuous improvement –  build cost, maintenance cost, failure analysis cost –  coverage of system tested 2-3 Dealing with developers •  critical aspect for successful automation –  automation is development •  may need help from developers •  automation needs development standards to work –  testability is critical for automatability –  why should they work to new standards if there is “nothing in it for them”? –  seek ways to cooperate and help each other •  run tests for them –  in different environments –  rapid feedback from smoke tests •  help them design better tests? 2-4 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 57. 3-Conclusion Standards and technical factors •  standards for the testware architecture –  where to put things –  what to name things –  how to do things •  but allow exceptions if needed •  new technology can be great –  but only if the context is appropriate for it (e.g. Model-Based Testing) •  use automation “outside the box” 2-5 On-going automation •  you are never finished –  don’t “stand still” - schedule regular review and refactoring of the automation –  change tools, hardware when needed –  re-structure if your current approach is causing problems •  regular “pruning” of tests –  don’t have “tenured” test suites •  check for overlap, removed features •  each test should earn its place 2-6 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 58. 3-Conclusion Information and web sites –  Automated Testing Institute (and magazine) •  www.automatedtestinginstitute.com –  SQE (Software Quality Engineering sqe.com) •  www.stickyminds.com •  Linda Hayes automation course •  Hans Buwalda’s tutorial –  Randy Rice: presentation on Free and Cheap tools and automation course •  www.riceconsulting.com (search on “free tools”) –  FreeTest Conference (Trondheim, Norway) •  http://free-test.org –  LinkedIn has a test automation group 2-7 Conclusion 1 2 Management Issues in Test Automation 3 Contents Final advice Your strategy Conclusion 2-8 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 59. 3-Conclusion What next? •  we have looked at a number of ideas about test automation today •  what is your situation? –  what are the most important things for you now? –  where do you want to go? –  how will you get there? •  make a start on your test automation strategy now –  adapt it to your own situation tomorrow 2-9 Strategy exercise •  your automation strategy / action plan –  review your objectives for today (p1) –  review your “take-aways” so far (p2) –  identify the top 3 changes you want to make to your automation (top of p3) –  note your plans now on p3 2-10 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 60. 3-Conclusion Conclusion 1 2 3 Management Issues in Test Automation Summary: key points •  Management issues: •  •  Technical issues: •  •  •  staffing, pilot, objectives, Return on Investment (ROI) testware architecture, scripting, others Final advice Your Objectives and Strategy 2-11 any more questions? please email me! info@DorothyGraham.co.uk Thank you for coming today I hope this was / will be useful for you All the best in your automation! 2-12 presented by Dorothy Graham info@dorothygraham.co.uk © Dorothy Graham 2013 www.DorothyGraham.co.uk
  • 62. “Why automate?” This seems such an easy question to answer; yet many people don’t achieve the success they hoped for. If you are aiming in the wrong direction, you will not hit your target! This article explains why some testing objectives don’t work for automation, even though they may be very sensible goals for testing in general. We take a look at what makes a good test automation objective; then we examine six commonly held—but misguided—objectives for test execution automation, explaining the good ideas behind them, where they fail, and how these objectives can be modified for successful test automation.
Good Objectives for Test Automation
A good objective for test automation should have a number of characteristics. First of all, it should be measurable so that you can tell whether or not you have achieved it. Objectives for test automation should support testing activities but should not be the same as the objectives for testing. Testing and automation are different and distinct activities. Objectives should be realistic and achievable; otherwise, you will set yourself up for failure. It is better to have smaller-scale goals that can be met than far-reaching goals that seem impossible. Of course, many small steps can take you a long way! Automation objectives should be both short and long term. The short-term goals should focus on what can be achieved in the next month or quarter. The long-term goals focus on where you want to be in a year or two. Objectives should be regularly revised in the light of experience.
Misguided Objectives for Test Automation
Objective 1: Find More Bugs
Good ideas behind this objective: • Testing should find bugs, so automated testing should find them quicker. • Since tests are run quicker, we can run more tests and find even more bugs. • We can test more of the system so we should also find bugs in the parts we weren’t able to test manually.
Basing the success of automation on finding bugs—especially the automation of regression tests—is not a good thing to do for several reasons. First, it is the quality of the tests that determines whether or not bugs are found, and this has very little, if anything, to do with automation. Second, if tests are first run manually, any bugs will be found then, and they may be fixed by the time the automated tests are run. Finally, it sets an expectation that the main purpose of test automation is to find bugs, but this is not the case: A repeated test is much less likely to find a new bug than a new test. If the software is really good, automation may be seen as a waste of time and resources. Regression testing looks for unexpected, detrimental side effects in unchanged software. This typically involves running a lot of tests, many of which will not find any defects. This is ideal ground for test automation as it can significantly reduce the burden of this repetitive work, freeing the testers to focus on running manual tests where more defects are likely to be. It is the testing that finds bugs—not the automation. It is the testers who may be able to find more bugs, if the automation frees them from mundane repetitive work. The number of bugs found is a misleading measure for automation in any case. A better measure would be the percentage of regression bugs found (compared to a currently known total). This is known as the defect detection percentage (DDP). See the StickyNotes for more information.
Sometimes this objective is phrased in a slightly different way: “Improve the quality of the software.” But identifying bugs does nothing to improve software—it is the fixing of bugs that improves the software, and this is a development task. If finding more bugs is something that you want to do, make it an objective for measuring the value of testing, not for measuring the value of automation.
Better automation objective: Help testers find more regression bugs (so fewer regression failures occur in operation). This could be measured by increased DDP for regression bugs, together with a rating from the testers about how well the automation has supported their objectives.
Objective 2: Run Regression Tests Overnight and on Weekends
Good ideas behind this objective: • We have unused resources (evenings and weekends). • We could run automated tests “while we sleep.”
At first glance, this seems an excellent objective for test execution automation, and it does have some good points. Once you have a good set of automated regression tests, it is a good idea to run the tests unattended overnight and on weekends, but resource use is not the most important thing. What about the value of the tests that are being run? If the regression tests that would be run “off peak” are really valuable tests, giving confidence that the main areas of the system are still working correctly, then this is useful. But the focus needs to be on supporting good testing. It is too easy to meet this stated objective by just running any test, whether it is worth running or not. For example, if you ran the same one test over and over again every night and every weekend, you would have achieved the goal as stated, but it is a total waste of time and electricity. In fact, we have heard of someone who did just this! (We think he left the company soon after.) Of course, automated tests can be run much more often, and you may want some evidence of the increased test execution. One way to measure this is using equivalent manual test effort (EMTE). For all automated tests, estimate how long it would have taken to run those tests manually (even though you have no intention of doing so). Then each time the test is run automatically, add that EMTE to your running total.
Better automation objective: Run the most important or most useful tests, employing under-used computer resources when possible. This could be partially
  • 63. measured by the increased use of resources and by EMTE, but should also include a measure of the value of the tests run, for example, the top 25 percent of the current priority list of most important tests (priority determined by the testers for each test cycle).
Objective 3: Reduce Testing Staff
Good ideas behind this objective: • We are spending money on the tool, so we should be able to save elsewhere. • We want to reduce costs overall, and staff costs are high.
This is an objective that seems to be quite popular with managers. Some managers may go even further and think that the tool will do the testing for them, so they don’t need the testers—this is just wrong. Perhaps managers also think that a tool won’t be as argumentative as a tester! It is rare that staffing levels are reduced when test automation is introduced; on the contrary, more staff are usually needed, since we now need people with test script development skills in addition to people with testing skills. You wouldn’t want to let four testers go and then find that you need eight test automators to maintain their tests! Automation supports testing activities; it does not usurp them. Tools cannot make intelligent decisions about which tests to run, when, and how often. This is a task for humans able to assess the current situation and make the best use of the available time and resources. Furthermore, automated testing is not automatic testing. There is much work for people to do in building the automated tests, analyzing the results, and maintaining the testware. Having tests automated does—or at least should—make life better for testers. The most tedious and boring tasks are the ones that are most amenable for automation, since the computer will happily do repetitive tasks more consistently and without complaining. Automation can make test execution more efficient, but it is the testers who make the tests themselves effective. We have yet to see a tool that can think up tests as well as a human being can! The objective as stated is a management objective, not an appropriate objective for automation. A better management objective is “Ensure that everyone is performing tasks they are good at.” This is not an automation objective either, nor is “Reducing the cost of testing.” These could be valid objectives, but they are related to management, not automation.
Better automation objective: The total cost of the automation effort should be significantly less than the total testing effort saved by the automation. This could be partially measured by an increase in tests run or coverage achieved per hour of human effort.
Objective 4: Reduce Elapsed Time for Testing
Good ideas behind this objective: • Reduce deadline pressure—any way we can save time is good. • Testing is a bottleneck, so faster testing will help overall. • We want to be quicker to market.
This one seems very sensible at first and sometimes it is even quantified—“Reduce elapsed time by X%”—which sounds even more impressive. However, this objective can be dangerous because of confusion between “testing” and “test execution.” The first problem with this objective is that there are much easier ways to achieve it: run fewer tests, omit long tests, or cut regression testing. These are not good ideas, but they would achieve the objective as stated. The second problem with this objective is its generality.
Reducing the elapsed time for “testing” gives the impression we are talking about reducing the elapsed time for testing as a whole. However, test execution automation tools are focused on the execution of the tests (the clue is in the name!) not the whole of testing. The total elapsed time for testing may be reduced only if the test execution time is reduced sufficiently to make an impact on the whole. What typically happens, though, is that the tests are run more frequently or more tests are run. This can result in more bugs being found (a good thing), that take time to fix (a fact of life), and increase the need to run the tests again (an unavoidable consequence). The third problem is that there are many factors other than execution that contribute to the overall elapsed time for testing: How long does it take to set up the automated run and clear up after it? How long does it take to recognize a test failure and find out what is actually wrong (test fault, software fault, environment problem)? When you are testing manually, you know the context—you know what you have done just before the bug occurs and what you were doing in the previous ten minutes. When a tool identifies a bug, it just tells you about the actual discrepancy at that time. Whoever analyzes the bug has to put together the context for the bug before he or she can really identify the bug.
In figures 1 and 2, the blocks represent the relative effort for the different activities involved in testing. In manual testing, there is time taken for editing tests, maintenance, set up of tests, executing the tests (the largest component of manual testing), analyzing failures, and clearing up after tests have completed. In figure 1, when those same tests are automated, we see the illusion that automating test execution will save us a lot of time, since the relative time for execution is dramatically reduced. However, figure 2 shows us the true picture—total elapsed time for testing may actually increase, even though the time for test execution has been reduced. When test automation is more mature, then the total elapsed time for all of the testing activities may decrease below what it was initially for manual testing. Note that this is not to scale; the effects may be greater than we have illustrated.
We now can see that the total elapsed time for testing depends on too many things that are outside the control or influence of the test automator. The main thing that causes increased testing time is the quality of the software—the number of bugs that are already there. The more bugs there are, the more often a test fails, the more bug reports need to be written up, and the more retesting and regression testing are needed. This has nothing to do with whether or not the tests are automated or manual, and the quality of the software
  • 64. is the responsibility of the developers, not the testers or the test automators. Finally, how much time is spent maintaining the automated tests? Depending on the test infrastructure, architecture, or framework, this could add considerably to the elapsed time for testing. Maintenance of the automated tests for later versions of the software can consume a lot of effort that also will detract from the savings made in test execution. This is particularly problematic when the automation is poorly implemented, without thought for maintenance issues when designing the testware architecture. We may achieve our goal with the first release of software, but later versions may fail to repeat the success and may even become worse.
Here is how the automator and tester should work together: The tester may request automated support for things that are difficult or time consuming, for example, a comparison or ensuring that files are in the right place before a test runs. The automator would then provide utilities or ways to do them. But the automator, by observing what the tester is doing, may suggest other things that could be supported and “sell” additional tool support to the tester. The rationale is to make life easier for the tester and to make the testing faster, thus reducing elapsed time.
Better automation objective: Reduce the elapsed time for all tool-supported testing activities. This is an ongoing objective for automation, seeking to improve both manual and existing automated testing. It could be measured by elapsed time for specified testing activities, such as maintenance time or failure analysis time.
[Figures 1 and 2: relative effort for each testing activity – manual testing, the same tests automated, and more mature automation.]
Objective 5: Run More Tests
Good ideas behind this objective: • Testing more of the software gives better coverage. • Testing is good, so more testing must be better.
More is not better! Good testing is not found in the number of tests run, but in the value of the tests that are run. In fact, the fewer tests for the same value, the better. It is definitely the quality of the tests that counts, not the quantity. Automating a lot of poor tests gives you maintenance overhead with little return. Automating the best tests (however many that is) gives you value for the time and money spent in automating them. If we do want to run more tests, we need to be careful when choosing which additional tests to run. It may be easier to automate tests for one area of the software than for another. However, if it is more valuable to have automated tests for this second area than the first, then automating a few of the more difficult tests is better than automating many of the easier (and less useful) tests. A raw count of the number of automated tests is a fairly useless way of gauging the contribution of automation to testing. For example, suppose testers decide there is a particular set of tests that they would like to automate. The real value of automation is not that the tests are automated but the number of times they are run. It is possible that the testers make the wrong choice and end up with a set of automated tests that they hardly ever use. This is not the fault of the automation, but of the testers’ choice of which tests to automate. It is important that automation is responsive, flexible, and able to automate different tests quickly as needed. Although we try to plan which tests to automate and when, we should always start automating the most important tests first. Once we are running the tests,
  • 65. the testers may discover new information that shows that different tests should be automated rather than the ones that had been planned. The automation regime needs to be able to cope with a change of direction without having to start again from the beginning. During the journey to effective test automation, it may take far longer to automate a test than to run that test manually. Hence, trying to automate may lead, in the short term at least, to running fewer tests, and this may be OK.
Better automation objective: Automate the optimum number of the most useful and valuable tests, as identified by the testers. This could be measured as the number or percentage automated out of the valuable tests identified.
Objective 6: Automate X% of Testing
Good ideas behind this objective: • We should measure the progress of our automation effort. • We should measure the quality of our automation.
This objective is often seen as “Automate 100 percent of testing.” In this form, it looks very decisive and macho! The aim of this objective is to ensure that a significant proportion of existing manual tests is automated, but this may not be the best idea. A more important and fundamental point is to ask about the quality of the tests that you already have, rather than how many of them should be automated. The answer might be none—let’s have better tests first! If they are poor tests that don’t do anything for you, automating them still doesn’t do anything for you (but faster!). As Dorothy Graham has often been quoted, “Automated chaos is just faster chaos.” If the objective is to automate 50 percent of the tests, will the right 50 percent be automated? The answer to this will depend on who is making the decisions and what criteria they apply. Ideally, the decision should be made through negotiation between the testers and the automators. This negotiation should weigh the cost of automating individual tests or sets of tests, and the potential costs of maintaining the tests, against the value of automating those tests. We’ve heard of one automated test taking two weeks to build when running the test manually took only thirty minutes—and it was only run once a month. It is difficult to see how the cost of automating this test will ever be repaid!
What percentage of tests could be automated? First, eliminate those tests that are actually impossible or totally impractical to automate. For example, a test that consists of assessing whether the screen colors work well together is not a good candidate for automation. Automating 2 percent of your most important and often-repeated tests may give more benefit than automating 50 percent of tests that don’t provide much value. Measuring the percentage of manual tests that have been automated also leaves out a potentially greater benefit of automation—there are tests that can be done automatically that are impossible or totally impractical to do manually. In figure 3 we see that the best automation includes tests that don’t make sense as manual tests and does not include tests that make sense only as manual tests. [Figure 3: the overlap of manual and automated tests, including tests not possible to do manually.] Automation provides tool support for testing; it should not simply automate tests. For example, a utility could be developed by the automators to make comparing results easier for the testers. This does not automate any tests but may be a great help to the testers, save them a lot of time, and make things much easier for them. This is good automation support.
Better automation objective: Automation should provide valuable support to testing. This could be measured by how often the testers used what the automators provided, including automated tests run, utilities, and other support. It could also be measured by how useful the testers rated the various types of support provided by the automation team.

Another objective could be: the number of additional verifications made that couldn't be checked manually. This could be related to the number of tests, as a ratio that should be increasing; a tiny sketch of this metric follows at the end of the article.

What are your objectives for test execution automation? Are they good ones? If not, this may seriously impact the success of your automation efforts. Don't confuse objectives for testing with objectives for automation. Choose more appropriate objectives and measure the extent to which you are achieving them, and you will be able to show how your automation efforts benefit your organization. {end}

Sticky Notes: For more on the following topics, go to www.StickyMinds.com/bettersoftware.
• Dorothy Graham's blog on DDP and test automation
• Software Test Automation
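Returning to the ratio objective above, here is a tiny Python sketch of how it might be tracked across releases; the release names and figures are invented sample data, not from the article.

```python
# Hypothetical sketch: per release, track automation-only verifications
# (checks impractical to perform manually) per automated test. An
# increasing ratio suggests automation is adding value beyond merely
# replaying manual tests.
releases = {
    # release: (automated tests, automation-only verifications)
    "1.0": (120, 30),
    "1.1": (150, 75),
    "1.2": (160, 120),
}

for release, (tests, extra_checks) in releases.items():
    print(f"{release}: {extra_checks / tests:.2f} automation-only checks per test")
```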
• 73. Technical versus non-technical skills in test automation
Dorothy Graham, Software Testing Consultant
info@DorothyGraham.co.uk

SUMMARY
In this paper, I discuss the roles of the tester and the test automator in test automation. Technical skills are needed by test automators, but testers who do not have technical skills should not be prevented from writing and running automated tests.

Keywords: tester, test automator, test automation, skills.

1. INTRODUCTION
Test automation is a popular topic in software testing, and an area where a number of organizations have had good success. Tests that may take days to run manually can be executed in hours, running overnight and at weekends, with greater accuracy and repeatability. Tests can be run more often, giving immediate feedback on new builds. Yet despite the obvious potential, many organizations are still struggling to achieve good benefits from automation. I believe that one reason for this is the role of the "test automator". There is a common misperception that testers should take on this role. This paper explains why this may not be the best solution.

It is popular for testers to be encouraged to develop programming skills; for example, at EuroSTAR 2012 a keynote speaker advised all testers to learn to code. I don't agree with this, and this paper, originally written for the CAST 2010 conference, explains why.

2. TERMS
I will start by defining the terms I use in this paper.
Test automation: the computer-assisted running of software tests, i.e. the automation of test execution.
Test automator: a person who builds and maintains the testware associated with automated tests. [4]
Tester: a person who identifies test conditions, designs test cases and verifies test results. A tester may also build and execute tests and compare test results. [4]
Testware: the artifacts required to plan, design and execute tests, such as documentation, scripts, inputs, expected outcomes, set-up and clear-up procedures, files, databases, environments, and any additional software or utilities used in testing. [4]
• 74. 3. TEST AUTOMATION SKILLS

3.1 Existing perceptions
The automation of test execution is a popular application of computer technology to itself. There are a number of books about test automation [1,2,3,4,7,8,10,11,12], but many of them do not appear to mention the skills needed (or at least it was not obvious if they did). There is a general perception that testers must be, or must become, technical (i.e. programmers) if they are to be involved in automation, although a few exceptions do draw a distinction between testers and automators.

Linda Hayes, in her useful booklet on automation [7], says: "… developing test scripts is essentially a form of programming; for this role, a more technical background is needed." She distinguishes between "Test Developers", i.e. testers, and "Script Developers", which is part of the role of a test automator.

Dustin et al. [3] say: "When people think of AST [Automated Software Testing], they often think the skill set required is one of a 'tester', and that any manual tester can pick up and learn how to use an automated testing tool. Although the skills [of a tester] … are still needed to implement AST, a complement of skills similar to the broad range of skill sets needed to develop the software product itself is needed." (p. 225)

A paper by Mosaic [13] mentions three roles: "Manual Test Engineer", "Automation Test Engineer" and "Lead Automator". In this model, the design of tests (the tester's role) is done by both test engineers; the automation work (the test automator's role) is done by the lead automator and the automation test engineer. The key distinction is who designs the tests, which in my view is best done by the tester, collaborating with the test automator for tests that are to be automated.

3.2 Is test automation a technical task?
The answer to this question depends on what you include as part of "test automation". If you view it as the direct use of a test execution tool, i.e. writing, editing and running scripts in the tool's scripting language, then it is a technical task, and programming (i.e. scripting) skills are needed.

Another technical aspect of test automation is the design of the testware architecture: the structure and relationship of all of the items of testware that comprise the artefacts required for automated tests to run successfully. The design of the testware architecture is critical for successful test automation, and the skills needed for it include technical expertise as well as knowledge of how the tests are to be used. The person who designs the testware architecture may be called a test automator, test architect, or lead automator.

3.3 Constructing automated tests is not entirely a technical process
The construction of the automation architecture, and of the scripts and other testware that will be used to run automated tests, is a technical task, but automated testing is not just the architecture and the scripts. The whole purpose of test automation is to make it possible to run tests with minimal human involvement in test execution (and comparison). Testers need to be able to use automated tests, both to write tests to be run automatically and to run those tests and view the results. The tests that are to be automated could be technical tests, such as those written by developers as part of Test-Driven Development or unit or integration testing,
• 75. but system and acceptance tests can also be automated, and the testers who write those tests are not always technical (i.e. software developers). The content of the test is determined by the tester; the implementation of the test is done by the automator.

4. TESTERS TO AUTOMATORS?

4.1 Testers become automators?
I have seen it work well to have a team of manual testers embark on an automation project, where all (or nearly all) of the testers effectively become programmers, i.e. test programmers or scripters. At a former colleague's company, five out of the team of six testers went on the tool vendor's training course and became familiar with the tool's scripting language. One tester decided he didn't want to become technical, so he concentrated on manual testing, but the others all became good test automators. There were two interesting side effects of the testers' newly acquired skills. First, they had a lot more sympathy for the developers, as they now understood first-hand the frustration of trying to get the computer to do what you want it to do. Second, they found that the developers treated them with a bit more respect, as they now also had some development skills. This led to a better relationship between the developers and the testers.

Another example where it worked very well to have all of the testers become automators is described in a chapter by Lisa Crispin [2] in our forthcoming book. An agile team of 9 to 12 people were all involved in doing manual regression testing, so they were highly motivated to automate that 20% of their work, and everyone became involved in the automation.

4.2 A separate team of test automators?
I have seen other organizations where a separate team is set up to automate tests, leaving the testers free to concentrate on designing tests and running manual tests. As the automation team gets going, they automate tests nominated by the testers, freeing the testers from having to run those tests manually. The automation team provides a service to the testers, designing the testware architecture and the structure of the tests, and assisting where needed when problems are encountered with the automated tests. For example, if an automated test fails, it could be because of a software fault (in which case the tester has found a bug), but it could also fail for a technical reason, such as a problem with the environment, a missing testware item (i.e. a bug in the automated testware), or a problem with the tool itself. The tester, not being technical, will need technical assistance to identify the source of the problem (a hypothetical triage helper along these lines is sketched below).

So we have the situation where test automation does require technical skills, but we have testers who do not have those skills. Can this really work? Yes it can, but it needs two key separations, or layers of abstraction.
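To illustrate the kind of first-line help the automation team can provide, here is a minimal Python sketch of such a triage helper; the file names, host, port, and classification rules are all invented for illustration and are not from the paper.

```python
# Hypothetical triage helper: when an automated test fails, run cheap
# checks for common technical causes first, so a non-technical tester
# gets a first-cut classification before asking for help.
import socket
from pathlib import Path

REQUIRED_TESTWARE = [Path("testdata/accounts.csv")]     # illustrative testware items
APP_HOST, APP_PORT = "test-server.example.com", 8080    # illustrative environment

def triage_failure() -> str:
    # 1. Missing testware item? That is a bug in the automation, not the product.
    for item in REQUIRED_TESTWARE:
        if not item.exists():
            return f"TESTWARE problem: missing {item}"
    # 2. Environment unreachable? Not a product bug either.
    try:
        with socket.create_connection((APP_HOST, APP_PORT), timeout=2):
            pass
    except OSError:
        return f"ENVIRONMENT problem: cannot reach {APP_HOST}:{APP_PORT}"
    # 3. Otherwise, treat it as a possible fault in the software under test.
    return "Possible PRODUCT bug: raise it with the tester and developers"

print(triage_failure())
```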
• 76. 5. AUTOMATION SUCCESS NEEDS LAYERS OF ABSTRACTION

5.1 Technical Layer
Technical aspects are very important for test automation. A good testware architecture will have two layers of abstraction [6]. The technical layer implements good software development practices for the testware, separating the tool itself and the direct scripting of the tool from the software or scriptware that calls and uses the lower-level scripts. Modularity and reuse are key factors in minimizing the maintenance of automated testware. If something changes in the software, the testware will need to reflect that change. With lower levels of scripting (a recorded test or linear script being the lowest), a small change to a screen can turn the automated tests into "magnetic trash" [9]. If possible, the testware should be designed so that it can cope with changes in the software under test without needing any changes itself. If this is not possible, the effects of any change to the software being tested should be confined to one testware artefact (or a minimum number if this is not practical). This layer gives good maintainability to the automated test regime.

5.2 Tester Layer
If all of the testers are technical, such as developers who are doing Test-Driven Development or unit testing, then this layer is not as critical. The tester layer of abstraction is needed when system testers or user acceptance testers want to use test automation but do not want to become technical, i.e. programmers. To achieve this, the non-technical testers must be able both to write tests (that can then be run automatically) and to run tests, i.e. to "kick off" a set of automated tests.

If the testware architecture uses a keyword-driven approach [1,4,5,6], the testers can write tests using keywords that are related to the business or domain knowledge they are familiar with. Yes, they do have to follow the correct syntax for the keywords, but tools can make this relatively easy, for example by providing a drop-down list of valid keywords and checking the syntax of parameters passed to the keywords. The keywords are implemented (i.e. programmed) by test automators, using the scripting language of the tool, or any other programming language they know that would be appropriate. The testers are not involved in the implementation of the keywords, but they are able to use them to write tests (a minimal sketch of this keyword layer follows below).

The testers also need to be able to select a set of tests to be run automatically. This can be implemented by the test automators so that it is easy for the testers to kick off a set of tests, for example by providing options in a user-friendly interface to the automation. The testers also need to receive and understand the results of the automated tests, and the way in which this information is communicated to them is also designed by the test automator. This separation of the tester from the automation is needed for the automation to grow within an organization and to give long-lasting benefits and widespread acceptance.
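As an illustration of the keyword-driven idea, here is a minimal Python sketch; the keywords, their implementations, and the sample test are invented for the example (a real regime would typically sit behind a tool's table format or a framework rather than raw code).

```python
# Minimal keyword-driven sketch: testers write tests as rows of
# business-level keywords plus parameters; automators implement each
# keyword once, behind this dispatch table.

def open_account(name: str, balance: str) -> None:
    print(f"open account {name} with balance {balance}")  # stand-in for real tool actions

def deposit(name: str, amount: str) -> None:
    print(f"deposit {amount} into {name}")

def check_balance(name: str, expected: str) -> None:
    print(f"check that {name} balance is {expected}")

KEYWORDS = {
    "open account":  open_account,
    "deposit":       deposit,
    "check balance": check_balance,
}

# A tester-written test: domain keywords, no programming required.
test = [
    ("open account",  ["Alice", "100"]),
    ("deposit",       ["Alice", "50"]),
    ("check balance", ["Alice", "150"]),
]

for keyword, args in test:
    KEYWORDS[keyword](*args)  # the interpreter dispatches each row
```

The division of labour is the point of the design: the rows stay readable to a non-technical tester, while any change in the software under test is absorbed inside the keyword implementations, which also supports the maintainability aim of the technical layer.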
• 77. 6. SUMMARY AND CONCLUSION
Test automation does need technical skill, at least for those who are closest to the tool itself. The skills of the tester and the skills of the test automator may be found in the same person, but it may work better to have different people performing the two roles. The test automator's role is critical in establishing a modular and well-structured testware architecture, separating the tool from the testware, and providing a tester-friendly interface to the testware for non-technical testers.

Not every tester can or should become a test automator. Many non-technical people are very good testers; they should be able to use test automation without needing technical skills. Getting to this point, however, does require good technical support, but that support does not have to be provided by the tester.

7. REFERENCES
[1] Buwalda, H., Janssen, D. and Pinkster, I. 2002. Integrated Test Design and Automation. Addison-Wesley/Pearson Education, London.
[2] Crispin, L. 2010. Zero to 100% Regression Test Automation in One Year: an Agile Approach to Automation. In Graham, D. and Fewster, M., Experiences of Test Automation. [Publisher not yet determined]
[3] Dustin, E., Garrett, T. and Gauf, B. 2009. Implementing Automated Software Testing. Addison-Wesley/Pearson Education, Boston, MA.
[4] Fewster, M. and Graham, D. 1999. Software Test Automation. Addison-Wesley/Pearson Education, ACM Press, NY.
[5] Gijsen, M. 2009. Effective Automated Testing with a DSTL (Domain-Specific Test Language). Paper available from the author.
[6] Graham, D. and Fewster, M. 2012. Experiences of Test Automation. Addison-Wesley/Pearson Education, Boston, MA.
[7] Hayes, L. G. 1995. The Automated Testing Handbook. Software Testing Institute.
[8] Hoffman, D. and Strooper, P. 1995. Software Design, Automated Testing, and Maintenance. International Thomson Computer Press, Boston, MA.
[9] Kaner, C., Falk, J. and Nguyen, H. Q. 1993. Testing Computer Software. Van Nostrand Reinhold, NY.
[10] Mosley, D. J. and Posey, B. A. 2002. Just Enough Software Test Automation. Yourdon Press/Pearson Education, Upper Saddle River, NJ.
[11] Siteur, M. M. 2005. Automate Your Testing! Sdu Uitgevers bv, Den Haag.
[12] Stottlemyer, D. 2001. Automated Web Testing Toolkit. Wiley, NY.
[13] [author unknown] 2002. Staffing Your Test Automation Team. Mosaic Inc., Chicago, IL. www.mosaicinc.com/mosaicinc/successful_test.htm