Agile Acceptance testing with Fitnesse
1. Bridging the Gap
Creating better quality software
using Agile Acceptance Testing
and Fitnesse
Clare McLennan
clare.mclennan@gmail.com
http://crazy-it-adventures.blogspot.com
http://twitter.com/claremclennan
2. Contents
• The Challenge
• What is Testability?
• Fitnesse
• 0 to 90% in 9 months
• Our System Test Toolbox
• Success Factors
4. The Challenge
• In the beginning, time to market was
everything
• As our customer base grew it became
important that the system always worked
• Big culture change
• 99.99% uptime target
• Releases weekly
• System was hard to test
6. Testability
• Any working system is testable
• Aim for Easy to Test
• How much
– Setup
– Knowledge
is needed to test one aspect of the system?
• See podcast, Test Driven Development is
Design
7. Symptoms of Low Testability
• Frustrating and slow to test anything
• Testers needing continuous help from
developers
• Developers may believe testers are stupid
• Developers avoid system testing and stop after
unit testing succeeds
• Poor understanding of the System develops
• Can’t easily introduce new people to the project
8. Automated Testing
[Diagram: "System Test" at the centre, supported by
System Knowledge, Predictability, Debugging Methods,
Status Tools and an Installer]
9. System Test
[Diagram: System Knowledge, Predictability, Debugging
Methods, Status Tools and Installer, now joined by
Auto Test]
10. How We Improved Testability
• Created an installer so everyone runs the
system in the same way
• Created a means to query the system for
when processing is finished
• Added business time to the system so we
can instantly test functionality that spans
long periods of time
Example: Conversion tracking works over a
period of 30 days.
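The "query the system for when processing is finished" idea can be sketched as a small polling helper (a minimal sketch; `is_idle` stands in for whatever status query the real system exposes):

```python
import time

def wait_until_idle(is_idle, timeout=30.0, poll=0.5):
    """Poll a 'processing finished' query until the system reports idle,
    or the timeout expires. Returns True once idle was observed."""
    deadline = time.monotonic() + timeout
    while True:
        if is_idle():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll)
```

A test calls this before asserting on results, instead of guessing with fixed sleeps.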
12. Fitnesse in a Nutshell
• Means of capturing requirements as tests
• Tests turn green if passed, red if failed
• Requirements stay up to date
• Customer or testers write the tests
• Programmers write fixture code to make
the tests run
14. The Technical Side
• Fitnesse is a wiki
• We recommend storing tests with the code
• Use SLIM (which has replaced FIT)
• Java, C++, Ruby, Python and more
• Tests fail when testers first write them
• Testers can reuse fixtures to create more
tests
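A SLIM fixture is just a class whose methods the wiki table drives. A minimal sketch in Python (the `Division` table is the classic example from the FitNesse documentation; running Python fixtures assumes one of the Python SLIM server plugins):

```python
# SLIM decision-table fixture. FitNesse fills each input column via a
# setter, then reads every column ending in "?" by calling the method:
#
#   |Division                          |
#   |numerator|denominator|quotient?  |
#   |10       |2          |5.0        |
class Division:
    def __init__(self):
        self.numerator = 0.0
        self.denominator = 1.0

    def set_numerator(self, numerator):
        self.numerator = float(numerator)

    def set_denominator(self, denominator):
        self.denominator = float(denominator)

    def quotient(self):
        return self.numerator / self.denominator
```

Once a fixture like this exists, testers can add more table rows, and whole new tests, without any programmer help.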
22. Test Organisation
• Tests organised into a hierarchy of suites
• SetUp and TearDown run before each test
• Test History of success/failures
• Tests can have explanatory text
• Fixture toolbox documentation
23. How To Write Good Tests
• Use user language, not programmer
mumbo-jumbo
• Make each test specific
• Write cases not scripts - you should only
specify things relevant for this example
• Generally, if you can’t do it manually you
won’t be able to automate it.
• See http://www.concordion.org/Technique.html
24. Evolution of Our Tests (1)
[Screenshot: a test using magic numbers
from the database – you can't see what
the test is about]
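An illustrative reconstruction of the problem and the fix (these tables are hypothetical, not our actual tests): replacing a database id with an object the test creates and names itself makes the intent visible.

```
Before – magic number from the database:

|conversion report|
|campaign id|conversions?|
|10037      |9           |

After – the test names what it creates:

|conversion report|
|campaign        |conversions?|
|Spring Shoe Sale|9           |
```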
29. Creating Quality Processes
Preprocesses
• First Fitnesse tests were written to prove it
was possible
• First testers joined the project
But...
• Writing automated tests didn't catch on
30. Creating Quality Processes
Stage 1
• QA group was formed to
– Recruit and train testers
– Write and program the Fitnesse automated
tests
– Test new functionality
But...
• QA group struggled to keep up with
development effort
31. Creating Quality Processes
Stage 2
• QA group continued to write tests.
• QA group responsible for running tests
• New Fixture requests were handed over to
development team at start of sprint
But...
• QA group still needed system
programmers' knowledge to keep the tests
working
• Hard to specify upfront all fixtures required
• Programmers hated writing fixtures
32. Creating Quality Processes
Stage 3
• Testers joined the development teams
• Testers responsible for writing tests
• Dev teams responsible for getting tests
running
• Dev team given a test box to run tests on
• Weekly QA meeting for testers to share
changes and ideas
33. Creating Quality Processes
Stage 3
Highly successful!!
After initial teething problems in the first
sprint (3 weeks), everyone was positive
about the change
34. What Happened During Stage 3?
• DBA sped up tests
• We reduced the number of GUI
functionality tests required because of
good unit test coverage
• Many manual testing issues were resolved
• Finally testing and development occurred
at the same pace
• Programmers embraced writing tests as
part of their job to maintain quality
35. Change in Testers' Role
• More about ensuring good specifications
to prevent bugs
• More testing time spent on exploratory
testing
• Better relationships with programmers
• Less dull work
• More influence on how the system is
written to make testing easier
37. Our System Test Toolbox
• Ask User fixture
• Business time
• Staged Deployment
• Separate Functionality, GUI Functionality,
GUI Layout, Load, Full System and Sanity
tests
• Close communication between testers and
programmers to find optimal test strategies
39. Ask User
• Mix and match human and automated
processes
• Allows tests to be written and run before
all automation is ironed out
– Example: Gui testing will eventually be
automated with Selenium
• User created objects can be referred to in
Fitnesse tests
• Simple idea but really practical!
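A minimal sketch of the idea (names and signatures are illustrative, not our actual fixture): the check falls back to asking a human until an automated version is plugged in.

```python
class AskUser:
    """Sketch of an 'Ask User' fixture: a check is delegated to a
    human prompt until an automated check (e.g. Selenium) exists."""

    def __init__(self, automated_check=None, prompt=input):
        self.automated_check = automated_check  # plug in automation later
        self.prompt = prompt

    def confirm(self, question):
        if self.automated_check is not None:
            return bool(self.automated_check(question))
        answer = self.prompt(question + " [y/n] ")
        return answer.strip().lower().startswith("y")
```

The same test runs unchanged whether the step is still manual or already automated.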
40. Business Time
• Changes the current time in the system
• Allows testing of scenarios that take a long
time
• Only in testing mode
• Low risk, as if forgotten system still works
correctly in production environment
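A minimal sketch of such a clock (names are illustrative, not our actual implementation): production code asks the clock for "now", and only tests ever move it.

```python
import datetime

class BusinessTime:
    """Test-only clock override. If advance() is never called, the
    clock behaves exactly like real time - the low-risk property:
    forgetting it leaves production behaviour unchanged."""

    def __init__(self):
        self._offset = datetime.timedelta(0)

    def now(self):
        return datetime.datetime.now() + self._offset

    def advance(self, **delta):  # e.g. clock.advance(days=30)
        self._offset += datetime.timedelta(**delta)
```

A 30-day conversion-tracking scenario then runs in seconds: create the conversion, advance the clock 30 days, assert on the result.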
41. Staged Deployment
• Think Beta testing
• Test thoroughly, then do a partial release to
– Only some customers, or
– For a small proportion of the daily
impressions, or
– Run old and new system side by side
• Gives a more accurate test
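The "small proportion of the daily impressions" variant can be sketched as deterministic hash bucketing (illustrative only, not our actual deployment code):

```python
import hashlib

def in_rollout(customer_id, percent):
    """Hash the customer id into a stable bucket and route the lowest
    `percent` of buckets to the new system. The assignment is
    deterministic, so a customer stays on the same side for the
    whole staged deployment."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).digest()
    bucket = (digest[0] << 8) | digest[1]  # 0..65535
    return bucket < (percent / 100.0) * 65536
```

Raising `percent` gradually widens the beta group without moving anyone who is already on the new system back to the old one.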
42. Types of testing
Unit – Programmers tests. Pinpoint bugs
quickly.
Functionality – Pure, automatic testing of
the whole system. All processes triggered
GUI Functionality – Test the GUI one
page at a time. All other objects required
are generated as for functionality testing
43. Types of Testing
GUI Layout – Pure GUI test of layout and
updating.
Load – Use real database, or extra large
database
Full System – Main functionality, with
processes on timers, as in production
Sanity Check – Final check performed
before each release
45. Outcomes
• Automated testing of the system takes one
hour, plus some quick manual tests
• Programmers' attitude has changed from
expecting outsiders to validate the system
to sharing responsibility for this task
• System easier to test, deploy, monitor
and manage
• A self-running QA process which is
continually improved by development
teams
46. Success Factors
• Realistic time frame (12 months)
• Treated building the automated testing
system as a project of its own
• Influence over design of system to be
tested
• Started with high-ROI functionality that
was hard to test manually
• Got tests working first, then perfected them
• Gently challenged company culture
47. Future Improvements?
• Automate GUI functionality tests with
Selenium
• Start sprints with a Specification Workshop
• Improve Load tests - run them on a cloud
• Move system knowledge from Fitnesse,
back into the system
• Make tests more user-orientated
48. To learn more read
Bridging the
Communications
Gap
by Gojko Adzic
49. References
• Fitnesse http://www.fitnesse.org
• Bridging the Communications Gap by Gojko
Adzic
• Test Driven Development is Design – The Last
Word on TDD, Hanselminutes podcast with
Scott Bellware and Scott Hanselman
• Hints and Tips [for writing acceptance tests] by
David Peterson
http://www.concordion.org/Technique.html