2. License
The content of these slides is made available
under the Creative Commons Attribution
Share-Alike 3.0 United States license.
3. Where this talk came from…
• F5 Networks has many product teams that have
their own automation tools.
• We were taking inventory of the tools and
trying to see where we could share and
eliminate duplication.
• We looked at 11 different test automation
tools.
4. We saw 6 common tasks…
• All the tools had ways of addressing common
tasks:
– Test distribution and run control
– Test case set up
– Test case execution
– Test case evaluation
– Test case tear down
– Results reporting
5. We discussed the tasks…
• Test distribution and run-time control
– Some tools had sophisticated controls, some were rudimentary.
– Some were automatic, some required manual effort.
• Set up / tear down
– Some tools took care of a lot of the work in setting up and
tearing down a test, others left it all to the test case.
• Test execution and verification
– Except for data-driven tests, all tools left this entirely to the test
case.
– We saw huge variations in complexity of verification.
• Reporting
– All tools sent results via email.
– Some had web GUIs, some didn’t.
6. We argued…
• Everyone agreed that the six tasks existed and
were important.
• We did not agree on the relative importance
of each task.
• We did not agree on what was needed to
meet requirements for each task.
7. The light bulb came on…
• We realized that we were approaching test
automation from different directions, with
different intentions.
• In short, we had different contexts.
8. We looked at the tools again…
• We tried to figure out how to group the tools.
• The context of the tool was the key:
• Who writes the tests?
• Who looks at the results?
• What decisions do the results influence?
9. We came up with 4 contexts in our setting…
Context              | Tests written by           | Results looked at by              | Decisions influenced by
Individual Developer | Developers                 | Developers                        | Code check-in
Development Team     | Developers and/or testers  | Testers, developers, PM's         | Branch merges, releases
Project              | Testers                    | Testers, PM's                     | Project milestones, releases
Product Line         | Testers                    | Testers, PM's, senior management  | Updates and maintenance releases
11. Individual Developer context…
• A common example is unit tests.
• These tests need to be very quick, with duration measured in seconds.
• They test very small pieces of functionality, e.g. a single procedure in an API.
• Writing them requires deep knowledge of the product code.
• They should be considered part of the product code deliverable, i.e. the code isn't finished if there are no unit tests.
• See xUnit Test Patterns, by Gerard Meszaros.
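To make the scale concrete, here is a minimal sketch of a test at this level, using Python's built-in unittest; the clamp function is an invented stand-in for "a single procedure in an API":

```python
import unittest

def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

class ClampTest(unittest.TestCase):
    # Each test checks one small behavior and finishes in milliseconds.
    def test_value_inside_range_is_unchanged(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_value_below_range_is_raised_to_low(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_value_above_range_is_lowered_to_high(self):
        self.assertEqual(clamp(42, 0, 10), 10)

if __name__ == "__main__":
    unittest.main()
```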
12. If we merge our feature
branch to main, will we
break anything?
13. Development Team context…
• These tests focus on a specific area of
functionality or a subsystem of the product.
• They still need to be fast, but speed is not as
critical.
• The tests may use an interface that is not directly
available to product users.
• Writing these kinds of tests requires significant
expertise in the specific protocol/feature.
• Once fully implemented, the tests can be
migrated to project/product-line testing.
15. Project context…
• Focus on user functionality of the system
• Speed is desirable, but not essential
• Requires a more complex infrastructure
– Hardware dependencies
– Variations in expected results from release to
release (see the sketch below)
– Other external dependencies
• Reporting is critical
• Can be migrated to Product Line easily
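One way to handle release-to-release variation, sketched here in Python with invented names (EXPECTED_MAX_CONNECTIONS and dut are hypothetical, not from the talk), is to key expected results by release so a single test body serves every supported release:

```python
# Hypothetical sketch: expected results keyed by release, so one test
# body can verify different behavior on different releases.
EXPECTED_MAX_CONNECTIONS = {
    "11.4": 10000,
    "11.5": 25000,  # limit raised in 11.5
    "11.6": 25000,
}

def test_max_connections(dut):
    # dut is an invented handle for the system under test.
    expected = EXPECTED_MAX_CONNECTIONS[dut.release]
    assert dut.query_max_connections() == expected
```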
16. Will this patch work for
customers running Basic, Pro,
and Premiere editions with
Service Packs 1, 2 or 3?
17. Product Line context…
• This automation is intended to run on releases
that are out in the field.
• The automation may take a very long time to run.
• Goals:
– Ensure that patches fix the problem they claim to fix.
– Ensure that they don’t break something else.
• Reliability is critical.
• These tests are challenging to maintain.
• Run-time control is a big deal.
18. Case Study: ITE…
• Summary:
– The ITE is STAF/STAX-based.
– It was developed by testers for use by other testers,
with developers as a secondary target.
• ITE Design Criteria:
– Allow hands-off execution of tests.
– Allow the test harness to automatically determine
which tests to run.
– Reduce the set up/tear down burden on test writers.
19. ITE…
• Distribution / Runtime control
– Tests and framework are distributed as a Linux chroot
that includes all dependencies.
– Both tests and framework are stored in source control.
– Tests are tagged with meta-data used to control runs
(see the sketch after this slide).
• Test Setup
– The ITE offers services to configure the DUT and various
test services.
• Execution
– This is largely left to the test writer. The ITE is beginning
to support data-driven tests.
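The talk does not show the ITE's meta-data format, but the idea can be sketched in Python; the test names and tags below are invented, not taken from the ITE:

```python
# Hypothetical sketch of meta-data-driven run control: each test
# carries tags, and the harness runs only the tests whose tags
# match the run request.
TESTS = [
    {"name": "test_http_basic",   "tags": {"http", "smoke"}},
    {"name": "test_ssl_renego",   "tags": {"ssl", "nightly"}},
    {"name": "test_dns_failover", "tags": {"dns", "smoke"}},
]

def select(tests, required_tags):
    """Return the tests whose tag set contains every required tag."""
    return [t for t in tests if required_tags <= t["tags"]]

for test in select(TESTS, {"smoke"}):
    print("would run:", test["name"])  # test_http_basic, test_dns_failover
```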
20. ITE…
• Verification
– This is largely left to the test writer.
– The ITE performs “health checks” on the DUT (see the
sketch below).
• Teardown
– The ITE performs more extensive cleanup after “subjobs”
complete.
• Results Reporting
– The ITE sends email after runs complete; it also stores
results in a database.
– Web pages are available for viewing results.
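A health check of this kind might look like the following Python sketch; the probes (responds_to_ping, found_core_files) are invented examples, not the ITE's actual checks:

```python
# Hypothetical sketch of a post-test health check: the harness probes
# the DUT between tests to catch damage the tests themselves miss.
def health_check(dut):
    """Return a list of problems found on the device under test."""
    problems = []
    if not dut.responds_to_ping():   # invented probe
        problems.append("management address unreachable")
    if dut.found_core_files():       # invented probe
        problems.append("core files present on disk")
    return problems
```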
21. Case Study: xBVT…
• Summary:
– The xBVT is a Perl-based system.
– It was developed by a developer for use by other
developers, with testers as a secondary target.
• xBVT Design Criteria:
– Tests should be able to run inside or outside the
tool.
– Impose little/no overhead on test writers and
runners.
22. xBVT…
• Distribution / Runtime control
– Tests and framework are stored in source control.
– Tests are stored with the product code.
– Runtime execution is determined by “test manifests.”
Manifests can be nested to arbitrary depths (see the
sketch after this slide).
• Setup
– The xBVT provides the test with login credentials and an
IP address. The test is responsible for configuring the system.
• Test execution
– Execution is left to the test writer.
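The manifest format itself is not shown in the talk, but nesting to arbitrary depth implies a simple recursive expansion, sketched here in Python with invented manifest and script names:

```python
# Hypothetical sketch of nested test manifests: a manifest lists test
# scripts and/or other manifests, and the runner expands recursively.
MANIFESTS = {
    "all.manifest":  ["net.manifest", "t/ui_login.t"],
    "net.manifest":  ["http.manifest", "t/dns_lookup.t"],
    "http.manifest": ["t/http_get.t", "t/http_post.t"],
}

def expand(manifest):
    """Yield test scripts in run order, recursing into sub-manifests."""
    for entry in MANIFESTS[manifest]:
        if entry.endswith(".manifest"):
            yield from expand(entry)
        else:
            yield entry

print(list(expand("all.manifest")))
# ['t/http_get.t', 't/http_post.t', 't/dns_lookup.t', 't/ui_login.t']
```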
23. xBVT…
• Results verification
– Verification is left to the test writer.
• Teardown
– Teardown is left to the test writer. The expectation is that
each test will clean up completely, leaving the system as it
was prior to the test.
• Reporting
– A text file is generated containing pass/fail results for
each test.
– The text files are emailed out when a run completes.
They are also stored on a web page.
24. What I Learned…
• If you have trouble agreeing, take a step back.
• There are many different approaches that will
work; the one that works best for you depends
on your test writers, framework writers, and
automation customers.
• Rather than build “one framework to test
them all”, consider building sharable
components.
25. How you can use this…
• Ask yourself:
– Who is going to write and maintain the
framework?
– Who will build and maintain the tests?
– How are the tests going to be used?
– How long will the tests live?
26. In conclusion…
Define your context:
– Who is going to write the tests?
– Who is going to look at the results?
– What decisions will the test results influence?
Determine how your automation will implement the six tasks:
– Test distribution and run control
– Test set up / tear down
– Test execution / Results evaluation
– Reporting
27. Acknowledgements…
Thanks to the members of F5’s cross-functional tools team
• Brian Sullivan, Chris Rouillard, Ephraim Dan, Patrick Walters, Sebastian Kamyshenko,
Bob Conard, Terry Swartz
Thanks to the members of F5’s automated test team
• Henry Su, Rex Stith, Randy Holte, Richard Jones, James Saryerwinnie
Special Thanks to
• John Hall, Brian DeGeeter, Ryan Allen, and Brian Branagan