Developing without automated testing is hard and risky, and making legacy code testable is hard. Try to improve your logging, add a data layer, and see if you can drive testing from user behavior.
2. What is the goal of automated tests?
• Automated tests make sure that your code acts as it did previously
• Unless you are doing TDD
• They make it possible to identify where change has occurred
• We look at the failure and decide if that change was desired
• bug or new feature?
• We then use these changed locations to target manual testing
3. Improving unit test coverage is hard
• Components are not testable
• Tests are often fragile
• How do you maximize the benefit of tests you write?
• Bad tests waste time with false positives
• Difficult to understand tests are difficult to use
• Tests take time to write
• Tests take time to maintain
• Data requirements
4. What should our tests be doing?
https://www.youtube.com/watch?v=URSWYvyc42M
"The Magic Tricks of Testing" by Sandi Metz
5. Inputs + Code + External State = Results
• Input is known
• We own our user entry points
• Code is static
• I hope, please use source control
• External state is always changing
• How much setup do you need to reproduce a given result?
• Results can vary
• If only we could control state
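The equation above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`discount`, `rates`): the same input run through the same code produces a different result once external state shifts underneath it.

```python
def discount(price, rates):
    """Code: apply the current discount rate (external state) to a price (input)."""
    return price * (1 - rates["standard"])

# External state, e.g. loaded from a database or config service.
rates = {"standard": 0.10}
print(discount(100, rates))   # 90.0

# External state changes out from under us: same input, same code, new result.
rates["standard"] = 0.25
print(discount(100, rates))   # 75.0
```

If we could freeze `rates` at a known value, the result would be fully determined by input and code alone.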
6. How do we identify our inputs?
• Watch/ask users
• Guess
• Documentation
• Check logs when something doesn’t work
7. How do we identify our external state?
• Snapshots
• Guesses (logical and black arts)
• Log inferences
8. Great logs make production support easier
• Quickly identify user actions to reproduce failures
• Proactively identify user issues
• Understand what your users are doing in your application
• Log anything needed to reproduce that user action (except for external state)
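A sketch of what such a log entry might look like, assuming a hypothetical `log_user_action` helper: one structured line per user action, carrying the user's inputs verbatim but no external state.

```python
import json
import logging
import uuid

logging.basicConfig(format="%(message)s")
log = logging.getLogger("user-actions")

def log_user_action(user_id, action, payload):
    """Record everything needed to replay this action -- but not external state."""
    record = {
        "event": "user_action",
        "request_id": str(uuid.uuid4()),  # correlate with downstream log lines
        "user": user_id,
        "action": action,
        "payload": payload,               # the user's inputs, verbatim
    }
    log.info(json.dumps(record))
    return record

log_user_action("u123", "update_order", {"order_id": 7, "quantity": 3})
```

With entries like this, "what did the user do?" becomes a log query instead of a guessing game.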
9. Can we recreate I + C + ES = R?
• Input
• yes, we have logs
• Code
• yes, we have source control
• External State
• sometimes?
• Results
• If we know External State
10. Re-creating external state
• Simplify the moving parts
• Identify inputs
• Simulate outputs
• Input = SELECT COUNT(*) FROM sometable
• Output = 42
• Log the inputs, log enough of the output to be able to rebuild relevant output
• I can log 42
• I can log 1 row
• I maybe can’t log 100 rows, but I can log enough metadata to reproduce them in a meaningful way
• External State becomes a simple function: inputs + result metadata = outputs
• Create a data layer and track everything coming through
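The data layer described above might look something like this. The `RecordingDataLayer` name and shape are assumptions, not a prescribed design: it wraps the real backend, passes queries through, and captures each input plus enough result metadata (row count, a sample row) to rebuild a meaningful output later.

```python
class RecordingDataLayer:
    """Wrap a real query backend and record inputs + result metadata."""

    def __init__(self, backend):
        self.backend = backend   # real function: sql string -> list of rows
        self.recorded = []       # (input, metadata) pairs for later simulation

    def query(self, sql):
        rows = self.backend(sql)
        self.recorded.append({
            "input": sql,
            "row_count": len(rows),
            "sample": rows[:1],  # one representative row, not all 100
        })
        return rows

# Stand-in backend for illustration; in a live system this would hit the database.
def fake_backend(sql):
    return [{"count": 42}]

layer = RecordingDataLayer(fake_backend)
layer.query("SELECT COUNT(*) FROM sometable")
print(layer.recorded[0])
```

Callers see the same results as before; the recordings accumulate as a side effect of normal use.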
11. How do I use this for testing?
• Log
• user actions
• External system inputs
• External system output metadata
• Curate
• Remove – actions probably have lots of duplication
• Replace – sometimes things should change
• Add – new functionality, test it manually and then add it to your test suite
• Simulate
• Swap out data layer
• Return simulated responses based on inputs and metadata collected in live systems
• Validate
• Compare simulated inputs vs logged inputs
• Compare simulation output against simulation output from a known-good source (i.e., production)
• Add a simulation API
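The "Simulate" step might be sketched as follows, again with hypothetical names. A `SimulatedDataLayer` replays responses captured from the live system, keyed by input, so tests run without the real external dependency; the inputs it sees can later be compared against the logged ones for the "Validate" step.

```python
class SimulatedDataLayer:
    """Replay recorded responses instead of hitting the real external system."""

    def __init__(self, recordings):
        # recordings: {query input -> recorded (or rebuilt) output}
        self.recordings = recordings
        self.seen_inputs = []    # for validating simulated vs. logged inputs

    def query(self, sql):
        self.seen_inputs.append(sql)
        if sql not in self.recordings:
            # An unrecorded input means the curated suite is missing a case.
            raise KeyError(f"No recording for input: {sql!r}")
        return self.recordings[sql]

recordings = {"SELECT COUNT(*) FROM sometable": [{"count": 42}]}
sim = SimulatedDataLayer(recordings)
print(sim.query("SELECT COUNT(*) FROM sometable"))
```

Because it exposes the same `query` interface as the real data layer, swapping it in under test requires no changes to the code being exercised.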
12. Where would this help?
• Legacy applications not written for testability
• Too many code paths to keep track of
• See #1
• Application usage is not well understood
• We don’t know what critical functionality is handled by users
• Application dependencies are not well understood
• Wide and shallow applications
• Very focused applications tend to have changes that affect a high percentage of tests, which invalidates too many tests every time
13. Concept references
• Use current production as a model for tests
• http://githubengineering.com/scientist/
• What should be tested?
• Magic Tricks of Testing by Sandi Metz -
https://www.youtube.com/watch?v=URSWYvyc42M
• Working Effectively with Legacy Code by Michael Feathers