This document provides an overview of test automation from the perspective of a test automation engineer. It covers key topics: the test automation pyramid, reporting, design considerations, and deployment. The test automation pyramid layers unit testing, integration testing, and end-to-end testing from bottom to top. Reporting and metrics are important for understanding test results and efficiency. Design focuses on aspects like data-driven testing, robustness, and repeatability. Deployment involves piloting automation, maintaining scripts, and supporting evolving environments. The goal is to improve testing in areas like coverage, speed, and cost while maintaining quality.
Agenda
1. What is test automation
2. Test automation pyramid
3. Test automation reporting
4. Test automation design
5. Test automation deployment / rollout
Test automation objectives
1. Improve efficiency
2. Expand test coverage
3. Reduce total cost
4. Execute tests a manual tester cannot
5. Improve test speed and increase test cycles (frequency)
6. Execute exactly the same behavior every time
※ Improving quality itself is not among the objectives
Efficiency & Total cost
[Chart: cost and time of manual testing vs. test automation, plotted against the number of test repetitions; automation starts with higher cost and time but drops below manual testing as repetitions increase]
Expand test coverage & execute tests a manual tester cannot
• Tests with many data variation patterns
• Tests combining many functions
• Repeating the same test many times
e.g. 10,000 test patterns
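As a sketch of how automation covers many data variations, the snippet below enumerates every input combination with `itertools.product`. The `shipping_fee` function and its input domains are purely illustrative, not from the slides:

```python
import itertools

# All names below (shipping_fee, the input domains) are illustrative.
regions = ["domestic", "international"]
weights = [0.5, 5.0, 30.0]                  # kg
memberships = ["guest", "member", "premium"]

def shipping_fee(region, weight, membership):
    """Toy system under test."""
    fee = 500 if region == "domestic" else 2000
    fee += int(weight * 100)
    if membership == "premium":
        fee = int(fee * 0.9)
    return fee

# Enumerate every combination: a machine runs all of them in moments,
# where a manual tester cannot realistically cover thousands of patterns.
cases = list(itertools.product(regions, weights, memberships))
results = {case: shipping_fee(*case) for case in cases}
print(f"executed {len(results)} combinations")
```

With larger input domains the same loop scales to the 10,000-pattern case mentioned above.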
Test automation merits & demerits

Merits:
1. Increases test cycles
2. Can execute tests that a manual tester cannot run, or that are difficult or complex
3. Executes tests faster
4. Reduces human mistakes during test execution
5. Uses test resources more efficiently
6. Gives quick feedback on test results
7. Improves consistency of tests

Demerits:
1. Initial cost and maintenance cost must be considered
2. Requires technical skill and tools
3. Tends to make teams forget the true testing objective
4. Testing becomes more complex
5. Automated test errors require additional investigation
6. Difficult to find new bugs
Execute tests faster & quick feedback of test results
[Diagram: tests A through E run in sequence; because automated execution is faster, the degradation in function D is found and fed back sooner]
Reduce human mistakes & improve consistency of tests
Manual test:
• Operations differ slightly every time
• Occasional operation mistakes
Test automation:
• Easy to reproduce
• Follows the test spec exactly
Additional investigation for automated test errors
Manual test:
1. Execute the test
2. A bug is found and the details are known immediately
Test automation:
1. An automated test fails
2. Check the test result
3. Re-run the test steps manually
4. Learn the details
Difficult to find new bugs
Test automation:
• Regression testing
• Bugs are found during scripting
• Repeats the same behavior
Manual test:
• Exploratory testing
• Ad-hoc testing
Test automation limitations
1. Not all manual tests can be automated
2. Can only check machine-readable results
3. Can only compare actual results against prepared expectations
4. Cannot do exploratory testing
Not all manual tests can be automated
Difficult to automate:
• Exploratory tests
• Checks for visually broken design/layout
• Operations the test automation tool does not support
Can only check machine-readable results & compare against prepared expectations
• The actual data to validate must be retrievable
• Expected results must be prepared in advance
Test automation pyramid: characteristics

End-to-end test:
• Tests the total system, front end and back end
• Black-box test
• Tested from the end user's perspective
• Test speed is slow; execution takes time
• With a huge number of test cases, the suite cannot finish
• Tends to increase test cost
• Test environment is fragile
• Few test cycles

Integration test:
• Example: API tests
• Covers points that are difficult to test at the unit level
• Tests with a real database (unit tests usually use stubs)
• Black-box test based on the interface specification

Unit test:
• Small component/function tests
• Easy to locate bugs
• Independent of other units
• Checks success or failure
• The test set tends to grow
• Test speed is required
• Usually run on every code change or build
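A minimal sketch of the unit vs. integration layers, using a toy `total_price` function (the function and table names are hypothetical): the unit test injects a stub, while the integration test exercises the same logic against a real in-memory SQLite database.

```python
import sqlite3

def total_price(get_price, items):
    """Business logic under test: the price lookup is injected."""
    return sum(get_price(name) for name in items)

# Unit test: a stub replaces the database (fast, isolated, run on every change).
stub_prices = {"apple": 100, "pear": 150}
assert total_price(stub_prices.get, ["apple", "pear"]) == 250

# Integration test: the same logic against a real (in-memory) database,
# covering the point the unit test stubs out.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE prices (name TEXT, price INTEGER)")
db.executemany("INSERT INTO prices VALUES (?, ?)",
               [("apple", 100), ("pear", 150)])

def db_price(name):
    return db.execute("SELECT price FROM prices WHERE name = ?",
                      (name,)).fetchone()[0]

assert total_price(db_price, ["apple", "pear"]) == 250
print("unit and integration checks passed")
```

The stub keeps the unit layer fast and independent; the database version covers the interface the stub hides, exactly the gap the integration layer exists for.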
Anti-patterns of the test automation pyramid
Cupcake, ice cream cone, hourglass, dual pyramid
What shape do you imagine?
Cupcake
• Each person in charge performs similar tests
• No collaboration
• Manual tests are run again after E2E tests
• Exploratory tests are added at the end out of worry
Ice cream cone
• The volume of automated tests increases toward the top as testing progresses
• Relies on black-box tests instead of unit tests
• Increases test cost
• Takes time to find the root cause of a bug
Test automation metrics
External metrics: impact on other activities
• ROI (cost reduction vs. installation and maintenance cost)
• Number of bugs found (bug-found ratio, compared with manual testing)
• Performance (execution speed)
• Accuracy (script failure ratio, false negatives, false positives)
Internal metrics: measure test automation's own effect and efficiency
• Scripting cost
• Script bug ratio
• Performance
Use these to plan the test automation strategy and to monitor its effect and efficiency.
Note: False positives & false negatives
● False positive: the report says there is a bug, but no bug exists → checking cost increases, but quality is kept
● False negative: the report says there is no bug, but a bug exists → the bug is missed
Test automation logging and reporting
• Report content should show which tests failed and the reasons for failure
• Publishing the reports shows whether test execution succeeded
● Log types:
• Current test automation status and execution results
• Detailed execution steps, screenshots, test data
• System logs (crash dumps, stack traces)
● Structure the logs so results are reported properly
● The report format depends on the receiver
• A dashboard summarizes all test automation at a glance
• Historical test results are stored
4. Robust and Sensitive
Robust:
• Validates a few key points
• Scripting cost is small
• Does not fail when the UI changes a little
• Takes time to investigate the reason when a test fails
• False negatives sometimes happen
• Test execution time is short
Sensitive:
• Validates many points
• Validates data formats in detail
• Scripting cost is large
• Even a small UI change breaks the automation
• Test execution time is long
Which direction to choose before scripting depends on the test automation objective.
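The trade-off can be sketched as follows; the page structure and the two check functions are hypothetical stand-ins for real validations:

```python
# Hypothetical page state returned by the application under test.
page = {"title": "My Page", "items": 3, "banner": "Sale!", "footer": "v2.1"}

def robust_check(p):
    # Validate only the essential point: cheap to write, survives small changes.
    assert p["title"] == "My Page"

def sensitive_check(p):
    # Validate many points: catches more regressions, but small changes break it.
    assert p["title"] == "My Page"
    assert p["items"] == 3
    assert p["banner"] == "Sale!"
    assert p["footer"] == "v2.1"

robust_check(page)
sensitive_check(page)

page["footer"] = "v2.2"          # a minor UI change
robust_check(page)               # still passes
try:
    sensitive_check(page)
    outcome = "passed"
except AssertionError:
    outcome = "failed"
print(f"sensitive check {outcome} after a minor change")
```

The robust check keeps running after the cosmetic change (at the risk of a false negative), while the sensitive check fails and demands maintenance; which is right depends on the objective stated above.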
5. Scenario independency
A test scenario is independent when it can be executed stand-alone, so users can run the automation without preconditions. However, making a scenario too big just to keep it independent is also bad; independence and scenario size must be balanced.
5. Scenario independency (example)
Scenario "Create ID":
1. Create a new ID
2. Log in to My Page to validate it
Scenario "Reserve hotel":
1. Reserve a hotel
2. Validate the reservation in My Page
3. Cancel the reservation
If the reservation scenario assumes the ID created by the first scenario, the two are dependent; to stay independent, each scenario must prepare its own preconditions.
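The independent style can be sketched like this; `create_user` and the scenario functions are illustrative stand-ins for "Create ID" and "Reserve hotel":

```python
import uuid

def create_user():
    """Each scenario creates its own user, so no scenario depends on another."""
    return {"id": f"user-{uuid.uuid4().hex[:8]}", "reservations": []}

def scenario_create_id():
    user = create_user()
    assert user["id"].startswith("user-")      # "login and validate" stand-in

def scenario_reserve_hotel():
    user = create_user()                       # own precondition, no shared state
    user["reservations"].append("hotel-101")
    assert "hotel-101" in user["reservations"]
    user["reservations"].remove("hotel-101")   # cancelling restores the initial state
    assert user["reservations"] == []

# The scenarios can run in any order, or each one alone:
scenario_reserve_hotel()
scenario_create_id()
print("scenarios ran independently")
```

Because each scenario sets up and cleans up its own data, either can be run stand-alone without the other having executed first.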
6. Scenario size
Test execution time depends on the test scenario size. A large scenario slows down investigation of a failed script:
1. Investigate the error reason with the logs
2. Fix the issue
3. Run the test scenario again
Step 3 takes a long time: to confirm the fix, the whole scenario must run again, so both the 1st and 2nd execution times grow with the scenario size.
7. Setup and Teardown
Tasks run before and after test execution, to keep tests consistent.
What tasks come to mind?
7. Setup and Teardown
Setup (tasks before test execution):
• Open the browser
• Open the native app
• Close unnecessary popups in the browser
• Clear cookies and cache
• Set default data
7. Setup and Teardown
Teardown (tasks after test execution). These must run even if the test fails midway:
• Initialize (reset) data
• Close the browser
• Close the native app
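A minimal sketch of guaranteed teardown, with the browser/data tasks simulated as dictionary updates: a `finally` block ensures teardown runs even when the test fails in the middle.

```python
env = {}

def setup():
    # Stand-in for: open the browser, clear cookies/cache, set default data.
    env.update(browser="open", data="default")

def teardown():
    # Stand-in for: initialize data, close the browser / native app.
    env.update(browser="closed", data="initialized")

def run_test(test):
    setup()
    try:
        test()
        return "SUCCESS"
    except AssertionError:
        return "FAILURE"
    finally:
        teardown()          # 'finally' guarantees teardown in both outcomes

def failing_test():
    assert env["data"] == "corrupted"   # deliberately fails mid-test

status = run_test(failing_test)
print(status, env["browser"])           # prints: FAILURE closed
```

Even though the test failed, the browser stand-in is closed and data is re-initialized, which is what keeps repeated runs consistent.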
8. Analyzable report
A test report must state SUCCESS or FAILURE exactly, and it must carry the information needed to investigate a failure.
What information is needed?
8. Analyzable report
A test report should include:
• The test steps
• The test data used
• Where and why the test failed
• A screenshot at the point of failure
• A video recording of the failure
• Traceable changing values
• Total and per-step execution time
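A sketch of a machine-analyzable report record that captures step name, status, failure reason, and per-step duration; the `run_step` helper and the step names are hypothetical:

```python
import json
import time

def run_step(report, name, action):
    """Execute one step and record its status, failure reason, and duration."""
    start = time.time()
    try:
        action()
        status = "SUCCESS"
    except AssertionError as exc:
        status = "FAILURE"
        report["failure"] = {"step": name, "reason": str(exc)}
    report["steps"].append({"step": name, "status": status,
                            "duration_s": round(time.time() - start, 3)})
    return status

def broken_step():
    assert False, "favorite button not found"   # simulated failure

report = {"steps": [], "failure": None}
run_step(report, "login", lambda: None)
run_step(report, "add favorite", broken_step)
print(json.dumps(report, indent=2))
```

The resulting JSON answers "where and why did it fail" directly, instead of forcing the investigator to re-run the whole scenario by hand.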
9. Repeatability
Repeatability means the same test scenario can be executed many times:
• to run tests on every deployment
• to re-run a test after fixing a failure
Which scenarios affect repeatability?
9. Repeatability
Example scenarios that affect repeatability:
1. One email address can register membership only once, but membership can be withdrawn
2. One email address can register membership only once and cannot withdraw it
3. Setting a favorite
4. Purchasing items with pooled money/points
5. Reservation systems that choose a date (golf, travel, etc.)
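Scenario 2 above (one email address can register only once and never withdraw) can be sketched like this, showing why fixed test data breaks repeatability and how generating fresh data restores it; the `register` function is a toy stand-in:

```python
import uuid

registered = set()

def register(email):
    """Scenario 2: one email address can register only once, no withdrawal."""
    if email in registered:
        raise ValueError("already registered")
    registered.add(email)

# Non-repeatable: a fixed email makes the second run of the scenario fail.
register("fixed@example.com")
try:
    register("fixed@example.com")
    second_run = "SUCCESS"
except ValueError:
    second_run = "FAILURE"

# Repeatable: generate fresh test data on every execution.
for _ in range(3):
    register(f"user-{uuid.uuid4().hex}@example.com")
print(second_run)    # prints: FAILURE
```

Generated data makes each run independent of previous runs, at the cost of accumulating records in the system under test.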
10. Break flaky
A flaky test both passes and fails periodically without any application change; it shows up in the test result history.
Root causes:
• Identifying objects by X,Y coordinates
• Unstable environment (performance, network, etc.)
• Unclear or changeable test preconditions
• Unmaintained test data
• Out-of-sync data or UI
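One common fix for the "out of sync" root cause is to replace fixed-length sleeps with a polling wait. The `wait_until` helper below is a hand-rolled sketch (real UI drivers ship equivalents), and the "element" is simulated with a timestamp:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll instead of sleeping a fixed time: tolerates slow environments
    without wasting time when the application is already ready."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulated asynchronous UI: the element "appears" after a short delay.
appears_at = time.monotonic() + 0.2
element_visible = lambda: time.monotonic() >= appears_at

assert wait_until(element_visible)
print("element found without a fixed-length sleep")
```

A fixed `sleep(0.1)` here would fail intermittently; the polling wait passes whenever the element appears within the timeout, removing one source of flakiness.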
11. Flexible trap
"Flexible scripting" means automation that adapts its behavior to the situation, so it never fails under any condition. But then it is unclear which test actually ran each time, and it can cause false negatives.
Flexible trap example
Base test scenario: book for 2 persons

Condition         | Expected result | Flexible scenario
Full              | Fail            | Skip the test
1 seat available  | Fail            | Book for 1 person
2 seats available | Success         | Book for 2 persons
No search result  | Fail            | Skip the test

Is it acceptable for the test to change its behavior like this without anyone observing?
12. Performance
One test automation objective is feedback speed, so performance matters:
• Optimize loop operations; remove duplicated steps
• Find objects quickly
• Remove unnecessary waits
Example:
12. Performance (example)
Before: for each X = 1, 2, 3 …, the script opens Chrome, logs in, adds favorite X, validates it, and closes Chrome.
After: the script opens Chrome and logs in once, loops only "add favorite X / validate X", and closes Chrome at the end.
13. Simple scripting
Test automation is itself an application:
• it should be tested
• it should be kept simple
Pilot project
●Choose a project
• Not too big, not too small
• Avoid important, schedule-critical projects
●Clarify the test automation objective and scope
• How will it be used, and why?
• Decide the test scope
●Resources
• Test designers, script engineers, operators, etc.
• Test automation tool and environment
Deployment
●Reporting
• Build a system/flow to gather metrics like coverage and performance automatically
• Build a system to collect test results and analyze them
●Process & documentation
• How test automation supports the project
• Guidelines (coding rules, testing, etc.)
• Training for new members
How to approach the project
Across the phases Requirement → Design → Coding → Test → Release, ask:
• Is there a class/id naming guideline?
• When will the environment be ready?
• When will the application become stable?
• From which point can scripting start?
Maintenance
●Update scripts
• Create new scripts or update current ones to match the latest specification
• Update scripts to improve performance and to adapt to a changing environment
●Support the environment
• Update the test environment (OS, browsers, devices)
• Update test tools (the tool itself, middleware like Java)
• Scale test resources up and out