A strong software testing process is key to project success. This presentation shows how to improve that process by automating routine API endpoint testing.
3. Why do we value software quality?
If we can’t control the quality of our applications, we can’t control the quality of our lives.
4. Have you heard about the Centrelink fault?
Centrelink – an Australian government agency that supports families and people on low incomes
November 2016 – a new automated debt-collection system was introduced, sending out 169,000 letters
Complaint rate – only 276 complaints from those 169,000 letters (as of 9 Jan)
$4 billion expected to be raised
What went wrong?
5. What is API?
API - a set of functions and procedures that allow the creation of
applications which access the features or data of an operating system,
application, or other service.
6. Why do we need to start with API automation?
● Manual testing takes far longer than automated testing
● UI automation is usually less stable
● Maintenance of UI tests takes a significant amount of time
● Test early and test often – APIs are usually implemented first
7. Why is automation a quick win?
This is what we get after the hard work is done
8. What are the drawbacks of API automation?
1. Automation can give us a false sense of security
2. Automation requires everyday support
3. Automation can delay releases
12. Frustration ----------> Hope: Frisby!
How to start?
Learn this:
● How does the HTTP protocol work?
● What is an API?
● What is JSON?
● Basic JavaScript
● How to install a testing framework?
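Since JSON is the data format you will verify in every test, it helps to recognize its building blocks. A minimal example (the field names and values here are invented for illustration):

```json
{
  "id": 42,
  "name": "Alice",
  "subscribed": true,
  "devices": ["Pixel", "iPhone 7"]
}
```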
13. What Frisby does well:
• Sending requests and adding request headers
• Checking the types and values of responses
• Authorization via tokens and cookies
• Support for custom methods
Frisby – a RESTful API testing framework
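A minimal sketch of those features in classic Frisby (the 0.x API used throughout this deck, installed via npm); the endpoint URL and token below are placeholders, not real values:

```javascript
var frisby = require('frisby');

frisby.create('Authorized request with custom headers')
  // Placeholder endpoint and token – replace with your own
  .addHeaders({
    'Authorization': 'Bearer <token>',
    'Accept': 'application/json'
  })
  .get('https://api.example.com/users/1')
  .expectStatus(200)
  .toss();
```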
14. Structure of a Frisby test case
Example: testing API calls to subscribe/unsubscribe a user
• Spec file ending
• Test description
• Request/Response
• Defining expectations
• Tossing the Frisby test
General workflow:
● Identify user
● Get access token
● Subscribe call
● Unsubscribe call
15. Structure of our test case: Part 1 – identify user and get access token
Define variables: var
Define a new test: frisby.create()
Perform an HTTP GET request: .get()
Check the response code: .expectStatus()
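The steps above can be sketched as follows; the endpoint path and user id are assumptions for illustration, not the deck's real API:

```javascript
var frisby = require('frisby');

// Part 1: identify the user and request an access token
var userId = 123; // placeholder user id

frisby.create('Get access token for user')
  .get('https://api.example.com/users/' + userId + '/token')
  .expectStatus(200) // check the response code
  .toss();
```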
16. Structure of our test case: Part 2 – verify response data
Verify responses:
Check the types in the response: .expectJSONTypes()
Check exact values: .afterJSON(function (body) {})
Matchers (from Jasmine):
expect(...).toEqual()
expect(...).toMatch()
What we see in the console:
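A sketch of Part 2 under the same assumptions (placeholder URL and fields); .expectJSONTypes() checks types, while .afterJSON() hands the parsed body to Jasmine matchers for exact-value checks:

```javascript
var frisby = require('frisby');

frisby.create('Verify response data')
  .get('https://api.example.com/users/123')
  .expectStatus(200)
  // Check the *types* of the response fields
  .expectJSONTypes({ id: Number, name: String })
  // Check exact values once the JSON body is parsed
  .afterJSON(function (body) {
    expect(body.id).toEqual(123);
    expect(body.name).toMatch(/^[A-Za-z ]+$/);
  })
  .toss();
```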
17. Structure of our test case: Part 3 – subscribe and unsubscribe the user
Nested test
Define variables: var
Define a new test: frisby.create()
Print results to the console: .inspectJSON()
Fire the request: .toss()
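The nested-test pattern from this slide might look like the sketch below (endpoint paths and the subscription payload are invented for illustration): the inner frisby.create() lives inside the outer test's .afterJSON(), so the unsubscribe call only fires after the subscribe call has completed.

```javascript
var frisby = require('frisby');

frisby.create('Subscribe user')
  .post('https://api.example.com/users/123/subscriptions',
        { channel: 'news' })
  .expectStatus(200)
  .inspectJSON() // print the response body to the console
  .afterJSON(function (body) {
    // Nested test: runs only once the subscribe response arrives
    frisby.create('Unsubscribe user')
      .delete('https://api.example.com/users/123/subscriptions/' + body.id)
      .expectStatus(200)
      .toss();
  })
  .toss(); // fire the outer request
```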
20. Case 2
Device Caps – a system responsible for mapping game/application
renditions from game suppliers to specific devices. It also contains the
logic to detect known compatibility and extend coverage on that basis.
The search module relies on the Device Caps functionality, since users
should only be able to find content their device supports.
21. Challenges
Over 2,000 mobile devices
At least 5 main browsers
At least 15 different Android OS versions to test
A time limit of 2 weeks
26. Code structure
it()
.post() / .get()
.set()
.send()
.end()
Verify your API
response status, type
and other data
Part II
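Putting that chain together with chai-http (the base URL, path, token and payload below are placeholders): the test body lives inside Mocha's it(), builds the request with .post()/.set()/.send(), and verifies the response in the .end() callback.

```javascript
var chai = require('chai');
var chaiHttp = require('chai-http');
chai.should();       // enable should-style assertions
chai.use(chaiHttp);  // enable chai.request()

describe('Devices API', function () {
  it('registers a device', function (done) {
    chai.request('https://api.example.com') // placeholder base URL
      .post('/devices')
      .set('Authorization', 'Bearer <token>')
      .send({ model: 'Pixel', os: 'Android 7.0' })
      .end(function (err, res) {
        res.should.have.status(200);
        done(); // tell Mocha the async test has finished
      });
  });
});
```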
27. Response verification with chai and should
res.should.have.status(200);
res.should.be.json;
res.body.should.be.a('array');
res.body.should.have.length(5);
res.body.should.be.a('object');
res.body.should.have.property('id');
res.body.id.should.be.a('number');
res.body.name.should.equal('3JHGFJHJHKJHK2211');
28. Traditional TDD interfaces in Mocha
suite - similar to describe
test - similar to it
setup - similar to beforeEach
teardown - similar to afterEach
suiteSetup - similar to before
suiteTeardown - similar to after
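Side by side, the TDD interface looks like this (run Mocha with --ui tdd; the suite and test names are placeholders):

```javascript
// TDD interface (mocha --ui tdd)
suite('Devices API', function () {
  suiteSetup(function () { /* runs once, like before() */ });
  setup(function () { /* runs before each test, like beforeEach() */ });

  test('responds with JSON', function () {
    // assertions go here, just as inside it()
  });

  teardown(function () { /* runs after each test, like afterEach() */ });
  suiteTeardown(function () { /* runs once, like after() */ });
});
```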