YouTube: https://www.youtube.com/watch?v=f-DyEiTN6nc&index=4&list=PLnKL6-WWWE_VtIMfNLW3N3RGuCUcQkDMl
OutSystems builds a complex software product. As the company and the product's complexity kept growing (and at a faster pace), we moved to a model where we needed to release more frequently. Challenges appeared in the way we were doing automated testing and continuous integration/delivery, which demanded significant changes and improvements in these processes, from the tools to the culture. I will share our journey towards Continuous Delivery at OutSystems R&D: where we were, where we are now (and how we are doing it), and where we want to go. It is a very interesting story of how we were able to change a lot in a relatively short period of time.
5. International company with its R&D based in Lisbon, Portugal
OutSystems provides a low-code rapid application development and
delivery platform (plus integration of custom code)
It consists of a complete application lifecycle system to develop, manage
and change enterprise web & native mobile applications
400+ Employees
100+ at R&D / Engineering
21. Test management and orchestration tool
● Solved the problem initially
● Continued to solve it, thanks to people with a high pain threshold
● Was the only thing that gave us the green "ship it" light
● Tested too much
● Evolved by everyone, without a vision
22. Engineering team size
July 2013: 18 Engineers
July 2014: 41 SW Engineers
● Fast-forward tip:
● Kept growing - 85 Engineers in 2015, and 130 Engineers in 2016
23. Branching
Teams working on separate environments
● Achieved through branching (per team and/or project)
● ~30 active (SVN) branches as of November 2014
REINTEGRATE HELL
25. Quality Assurance
How were we doing QA?
Full Build + Full Test Run
in Test Environments
Long Feedback Loop!
26. Quality Assurance
~26 hours to run ~10.000 tests over different stacks
No daily visibility
Slow Builds
Unreliable Environments
Flaky/Unstable Tests
Long Feedback Loop
(~100.000 test executions)
28. Testing Infrastructure
Developer: "I want a test environment to run tests"
Ops Guy: "Ok, let me get one machine and then in two weeks I will spend 3 days configuring it using our…"
…49-page-long manual
29. This model was not anything near CD!
But⊠For our release frequency (1 major per year) and support model
(only corrective maintenance) this worked well enough
Where were we then?
31. What made us change
● No more corrective maintenance only
● Features released in "maintenance releases"
● Amount of development going on increased a lot
● Full run not enough anymore (and not reliable)
So...
32. What made us change
⊠the need for faster feedback (and with quality!) started to grow
But our processes, tools and infrastructure were not in place for that
33. What made us change
We noticed the boiling water in time, before the frog started to die (our
frog was smarter)!
Which means…
We understood that the need for faster feedback and more frequent
releases would keep growing, so we started building our own journey
towards CD.
37. Letâs automate!
â Saved ~3 days of 1 person per each test environment
â Easier for development teams to keep infrastructure code updated (itâs
code after all!)
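The "configuration as code" idea above can be sketched roughly like this (all names, fields, and setup steps here are illustrative assumptions, not OutSystems' actual tooling): an environment spec lives in the codebase and expands into the ordered setup steps that used to live in the 49-page manual.

```python
# Hypothetical sketch: a test-environment spec kept as code, so development
# teams can update it the same way they update the product.
from dataclasses import dataclass, field

@dataclass
class EnvironmentSpec:
    name: str
    db_engine: str                      # e.g. "sqlserver", "oracle"
    app_server: str                     # e.g. "iis"
    packages: list = field(default_factory=list)

def provisioning_steps(spec: EnvironmentSpec) -> list:
    """Expand a spec into the ordered setup steps once described in the manual."""
    steps = [f"create machine for '{spec.name}'",
             f"install database engine: {spec.db_engine}",
             f"install application server: {spec.app_server}"]
    steps += [f"install package: {p}" for p in spec.packages]
    steps.append("run smoke tests")
    return steps

spec = EnvironmentSpec("team-a-tests", "sqlserver", "iis", ["platform-server"])
for step in provisioning_steps(spec):
    print(step)
```

Because the spec is plain code, a change to the environment is just a commit that everyone can review and rerun, instead of a manual edit nobody tracks.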
38. Still some problems...
Test environment configurations are
automated, but...
… still need to wait for someone to
create the machine first (1, 2, 3 weeks…).
39. "Nimbus" project - moving to the cloud
● "Nimbus" project - move our testing
infrastructure to the cloud (AWS)
40. âNimbusâ project - moving to the cloud
● Test environment provisioning much faster (1 hour by clicking a button
- includes all the environment configuration)
● Easy to recover
● Reliable and performant
● Scalable and elastic
42. Some developers (by their initiative) went to their managers...
Hey! We are really
excited about CD and
we have this idea…
Give us one week and
we'll save you 3 months
of wasted effort!
Impact on developers
44. And, in that week, they created…
CINTIA!
(Continuous INTegration and Intelligent Alert system)
The rise of CINTIA
45. The rise of CINTIA
● Automated incremental builds
● Automated installations
● Some automated tests (~1000 tests to start...)
● Automatic assignment to the right "culprits"!
● Developed using our own product (UI) + Python (Orchestration)
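The "culprit" assignment idea could look something like this minimal sketch (the heuristic, names, and data shapes are assumptions for illustration, not CINTIA's actual code): when a test in some component fails, notify the authors of the commits that landed since the last green build and touched that component.

```python
# Hedged sketch of CINTIA-style "culprit" assignment.
def assign_culprits(failing_component, commits_since_green):
    """commits_since_green: list of (author, [changed_paths]) tuples."""
    culprits = {author
                for author, paths in commits_since_green
                if any(path.startswith(failing_component) for path in paths)}
    # If no commit touched the failing component, fall back to notifying
    # everyone who committed since the last green build.
    return culprits or {author for author, _ in commits_since_green}

commits = [("alice", ["compiler/parser.py"]),
           ("bob",   ["ide/editor.py"])]
print(assign_culprits("compiler", commits))  # → {'alice'}
```

The value of even a crude heuristic like this is speed: a probable culprit gets pinged minutes after the commit, while the change is still fresh in their head.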
46.
47. What did CINTIA bring at this point?
● Build + Installation + ~1000 Tests in 19 minutes, automatically
triggered by commits
● Fast feedback!
● Automatic "culprit" assignment, and fast!
48. From CINTIA PoC to a real CI system
Challenge: how to achieve fast feedback with 100.000 test executions (taking
into account all the stack combinations)?
Do we really need to always run all the tests for all the stack combinations?
49. From CINTIA PoC to a real CI system
Letâs apply some risk management here
â Letâs run almost all the tests for 1/2 particular stacks on each commit
â Letâs run all the other tests weekly, on milestones and prior to releases
50. Still some problems with tests...
Even with this choice, there were still open challenges:
● Reinventing the wheel with custom Python orchestration
● Too many tests to execute (still not-so-fast feedback)
● Unclear test categorization (unit, integration, etc.)
● A "monolithic" test stage
● Flaky tests
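One common way to get a handle on the flaky-test problem (a generic technique, not necessarily what OutSystems adopted) is to rerun each failing test and treat a pass-on-retry as "flaky", so it can be quarantined and investigated instead of blocking the pipeline:

```python
# Sketch of retry-based flakiness classification (illustrative only).
def classify(test_fn, retries=2):
    results = []
    for _ in range(retries + 1):
        ok = test_fn()
        results.append(ok)
        if ok:
            break
    if results[0]:
        return "pass"          # passed on the first attempt
    return "flaky" if results[-1] else "fail"   # recovered on retry vs. hard failure

attempts = iter([False, True])           # fails once, then passes
print(classify(lambda: next(attempts)))  # → flaky
```

Retries only mask the symptom, of course; the quarantine list is what creates pressure to fix or delete the unstable tests themselves.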
56. More challenges...
Despite the fact that we had TVs giving visibility, developers needed more detail:
Why is the test failing?
How can they troubleshoot it?
62. The Continuous Delivery Journey
BEFORE (2 years ago)
~10.000 tests in ~26 hours
No daily visibility
Slow Builds
Unreliable Environments
Flaky/Unstable Tests
Long Feedback Loop (testing all stacks)
NOW
~8.000 tests in ~1h
Full daily visibility
Fast/incremental Builds
Reliable Test Environment
Focus on creating fast, well-designed tests
Fast Feedback Loop (testing 2 stacks)
65. The future - Open Challenges
What are we up to now? What challenges are we facing?
● Align validation process with product architecture
● Ownership
● Having the right tests / design for testability
● Refactor in a "moving train"
● Single branch (per major) development
● Culture and mindset (e.g., "you break it, you fix it, fast")
● Take developers out of the release decision
Achieve Continuous Delivery :-)