This document discusses the need for continuous delivery and integration of testing into software development pipelines. It notes that while pipelines focus on speed of delivery, testing is needed to ensure quality and avoid breaking changes. A central hub is needed to provide a single view of all test results from different sources to help determine if a release is ready. Intelligent test selection and optimization could help run relevant test subsets to improve feedback speed while maintaining coverage. An integrated test analysis tool can help address these challenges of continuous delivery.
Agenda
▪ The Need For Speed
▪ The Two Faces of CD
▪ Testing is Changing
▪ A Central Hub for Application Quality in Your Pipeline
▪ Beyond Test Automation: Active Test Optimization
About Me
▪ Product Manager of XL TestView at XebiaLabs
▪ Has worked across all phases of the software development lifecycle
▪ Has supported major organizations in setting up test strategies and test automation strategies
▪ Is eager to flip the way (most) organizations do testing
About XebiaLabs
We build tools to solve problems around DevOps and Continuous Delivery at scale
The Need For Speed
▪ Every business is an IT business
− Known as the “Software-defined Enterprise”: even traditionally brick-and-mortar businesses are becoming software based
▪ Customers demand that you deliver new features faster whilst maintaining high
levels of quality
▪ If you don’t, your competitor probably will
The Need For Speed
▪ What is so compelling about CD?
▪ Business initiative with cool technical implementation
▪ “CD eats DevOps for breakfast as the business eats IT”
The Two Faces of CD
▪ A lot of focus right now is on pipeline execution
▪ …but there’s no point delivering at light speed if everything starts breaking
▪ Testing (= quality/risk) needs to be a first-class citizen of your CD initiative!
The Two Faces of CD
▪ CD = Execution + Analysis
▪ = Speed + Quality
▪ = Pipeline orchestration + ..?
Testing is Changing
[Diagram: test effort plotted along the value chain from concept to cash, across the phases specify, design, build, test, integrate, user acceptance, regression, and release]
Testing is Changing
[Diagram: the same value-chain view, with test effort shifting left into design and build]
▪ Acceptance-driven testing: “I add value by sharpening the acceptance criteria of requested features”
▪ Automate ALL: “Test automation serves as the safety net for my new functionality: I focus on running the appropriate tests continuously during the iterations”
▪ Development = Test, Test = Development: “Testing is transforming into an automation mindset and skill instead of a separate activity”
Testing is Changing: Challenges
▪ Many test tools for each of the test levels, but no single place to answer “Good
enough to go live?”
▪ Requirements coverage is not available
− “Did we test enough?”
▪ Minimize the mean time to repair
− Support for failure analysis
JUnit, FitNesse, JMeter, YSlow, Vanity Check, WireShark, SOAP-UI, Jasmine, Karma, Speedtrace, Selenium, WebScarab, TTA, DynaTrace, HP Diagnostics, ALM stack, AppDynamics, Code Tester for Oracle, Arachnid, Fortify, Sonar, …
Testing is Changing: Challenges
▪ Thousands of tests make test sets hard to manage:
− “Where is my subset?”
− “What tests add most value, what tests are superfluous?”
− “When to run what tests?”
▪ Running all tests all the time takes too long, feedback is too late
▪ Quality control of the tests themselves and maintenance of testware
▪ Tooling overstretch
Testing is Changing: Best Practices
▪ Focus on functional coverage, not technical coverage
▪ Say you have 40 user stories and 400 tests:
− Do I have relatively more tests for the more important user stories?
− How do I link tests to user stories/features/fixes?
▪ Metrics to track (a sketch computing these follows this list)
− Number of tests
− Number of tests that have not passed in <time>
− Flaky tests
− Duration
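To make the metrics concrete, here is a minimal sketch that computes some of them from JUnit-style XML reports. The report directory layout is an assumption, and the flaky and “not passed in <time>” metrics additionally need results from multiple runs, which is exactly where historical context comes in.

```python
# Hypothetical sketch: compute basic suite metrics from JUnit-style XML
# reports. The "test-reports" directory name is an assumption.
import glob
import xml.etree.ElementTree as ET

def load_results(report_dir="test-reports"):
    """Yield (test_name, passed, duration_seconds) per <testcase> element."""
    for path in glob.glob(f"{report_dir}/**/*.xml", recursive=True):
        for case in ET.parse(path).getroot().iter("testcase"):
            name = f"{case.get('classname')}.{case.get('name')}"
            broken = case.find("failure") is not None or case.find("error") is not None
            yield name, not broken, float(case.get("time") or 0.0)

results = list(load_results())
failing = [name for name, passed, _ in results if not passed]
total_duration = sum(duration for _, _, duration in results)
print(f"tests: {len(results)}, failing: {len(failing)}, duration: {total_duration:.1f}s")
```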
Testing is Changing: Best Practices
▪ “Slice and dice” your test code along dimensions such as (a tagging sketch follows this list):
− Responsible team
− Topic
− Functional area
− Flaky
− Known issue
− etc.
▪ Radical parallelization
− Fail faster!
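As one possible way to slice and dice, here is a sketch using pytest markers as tags; the marker names mirror the dimensions above and are purely illustrative, and the parallelization is delegated to the pytest-xdist plugin.

```python
# Illustrative tagging with pytest markers (register them in pytest.ini to
# silence warnings). The marker names and both tests are assumptions.
import pytest

@pytest.mark.team_payments
@pytest.mark.functional_area("checkout")
def test_checkout_happy_path():
    assert 1 + 1 == 2  # placeholder for a real checkout test

@pytest.mark.flaky
@pytest.mark.known_issue("PAY-1234")
def test_checkout_under_load():
    assert True  # placeholder for a known-flaky scenario
```

A relevant subset can then be selected and fanned out, e.g. `pytest -m "team_payments and not flaky" -n auto` with pytest-xdist installed: fail faster by running the slices in parallel.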
Making Sense of Test Results
▪ Real go/no-go decisions are non-trivial; typical criteria include (a sketch follows this list):
− No failing tests
− At most 5% failing tests
− No regressions (tests that currently fail but passed previously)
− A list of tests-that-should-not-fail
▪ Need historical context
▪ One integrated view
▪ Data to guide improvement
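A hedged sketch of such a go/no-go evaluation: the inputs are plain name-to-passed mappings standing in for the current and previous runs from an aggregation tool (an assumption), and the 5% threshold is illustrative.

```python
# Sketch: evaluate go/no-go criteria against the current and previous run.
def go_no_go(current, previous, must_pass, max_fail_ratio=0.05):
    failing = {t for t, ok in current.items() if not ok}
    regressions = {t for t in failing if previous.get(t)}  # passed before, fails now
    blockers = failing & set(must_pass)                    # tests-that-should-not-fail
    fail_ratio = len(failing) / max(len(current), 1)
    go = not regressions and not blockers and fail_ratio <= max_fail_ratio
    return go, {"failing": failing, "regressions": regressions,
                "blockers": blockers, "fail_ratio": fail_ratio}

current = {"t_login": True, "t_checkout": False, "t_search": True}
previous = {"t_login": True, "t_checkout": True, "t_search": True}
go, detail = go_no_go(current, previous, must_pass=["t_login"])
print("GO" if go else "NO GO", detail)  # NO GO: t_checkout is a regression
```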
Example Job Distribution
[Diagram: two example job chains: Build, Deploy, Int. Tests, several Test jobs (fanned out in parallel in the second chain), then Perf. Tests]
Simple pipelines – scattered test results
Making Sense of Test Results
Executing tests from Jenkins is great, but…
▪ Different testing jobs use different plugins or scripts, each with different
visualization styles
▪ No consolidated historic view available across jobs
▪ Pass/Unstable/Fail is too coarse
− How do you express “Passed, but with known failures”? (a sketch follows)
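One way past the coarse statuses, sketched under the assumption that a list of known failures is maintained somewhere (for instance tagged in a results hub):

```python
# Sketch: derive a richer verdict than Jenkins' Pass/Unstable/Fail.
KNOWN_FAILURES = {"test_legacy_report_export"}  # hypothetical tracked failure

def verdict(results):
    failing = {t for t, ok in results.items() if not ok}
    unexpected = failing - KNOWN_FAILURES
    if unexpected:
        return "FAILED", unexpected
    if failing:
        return "PASSED_WITH_KNOWN_FAILURES", failing
    return "PASSED", set()

print(verdict({"test_login": True, "test_legacy_report_export": False}))
# -> ('PASSED_WITH_KNOWN_FAILURES', {'test_legacy_report_export'})
```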
Making Sense of Test Results
▪ Ultimate analysis question (“are we good to go live?”) is difficult to answer
▪ No obvious solution for now, unless all your tests run through one service
Test Analysis: Homebrew
[Screenshot: a homebrew test-analysis dashboard]
Test Analysis: Custom Reporting
[Screenshot: a custom reporting view]
A Central Hub for Application Quality
What is needed:
1. A single, integrated overview of all the test (= quality, risk) information related
to your current release
2. …irrespective of where or by whom the information was produced
3. The ability to analyze and “slice and dice” the test results for different
audiences and use cases
4. The ability to access historical context and other test attributes to make real-world “go/no-go” decisions (a sketch of one possible result record follows)
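As a sketch of what such a hub could store, here is one possible normalized record: every result, whatever tool produced it, mapped onto a single schema with the attributes the four points above call for. The field names are assumptions, not XL TestView's actual data model.

```python
# Hypothetical normalized test-result record for a central quality hub.
from dataclasses import dataclass, field

@dataclass
class TestResult:
    name: str                 # fully qualified test name
    tool: str                 # producer: "junit", "selenium", "jmeter", ...
    passed: bool
    duration_s: float
    release: str              # release / pipeline run the result belongs to
    tags: set = field(default_factory=set)        # team, feature, flaky, ...
    artifacts: list = field(default_factory=list) # logs, screenshots, ...

r = TestResult("checkout.test_pay", "selenium", False, 12.4,
               release="2014.3", tags={"team-payments", "feature:PAY-42"})
print(r)
```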
Beyond Test Automation
Can we go further? How about
5. The ability to use the aggregated test results, historical contexts and other
attributes to invoke tests more intelligently?
Beyond Test Automation
It’s a bit of an open question:
▪ Google: it’s too expensive and time-consuming to run all the tests all the time, so automatically select a subset of tests to run
▪ Dave Farley: if you can’t run all the tests all the time, you need to optimize
your tests or you have the wrong tests in the first place
Beyond Test Automation
Middle ground:
▪ Label your tests along all relevant dimensions to ensure that you can easily
select a relevant subset of your tests if needed
▪ Consider automatically annotating tests with the features they relate to (e.g. added/modified in the same commit), or introducing that as a practice (a selection sketch follows this list)
▪ Use data from your test aggregation tool to ignore flaky/”known failure” tests
(and then fix those flaky tests, of course ;-))
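A sketch of that middle ground: select only the tests whose annotated files intersect the files changed since the main branch. The test-to-files mapping is hand-written here as an assumption; in practice it could be derived from commit history (tests added or modified together with a feature).

```python
# Sketch: pick a relevant test subset from the files a change touches.
# Requires running inside a git repository; the mapping below is illustrative.
import subprocess

TEST_TO_PATHS = {
    "test_checkout": ["shop/cart.py", "shop/payment.py"],
    "test_search":   ["shop/search.py"],
}

def changed_files(base="origin/main"):
    out = subprocess.check_output(["git", "diff", "--name-only", base])
    return set(out.decode().split())

def select_tests(changes):
    return [test for test, paths in TEST_TO_PATHS.items()
            if any(path in changes for path in paths)]

print(select_tests(changed_files()))
```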
Summary
▪ Testing = Automation
− Testers are developers
▪ Structure and annotate tests
− Conway’s Law for Tests
− Link to functions/features/use cases
▪ Radical parallelization
− Throwaway environments
Summary
▪ CD = Speed + Quality = Execution + Analysis
▪ Making sense of scattered test results is still a challenge
▪ Need to figure out how to address real-world go/no-go decisions
Analyzing Test Results
[Screenshot: test-results analysis view]
Tagging Tests
[Screenshot: test tagging view]
Evaluating Go/No-go Criteria
[Screenshot: go/no-go criteria view]
Next steps
▪ Next-Generation Testing: The Key to Continuous Delivery
https://xebialabs.com/resources/whitepapers/next-generation-testing-the-key-to-continuous-delivery/
▪ An Introduction to XL TestView
https://www.youtube.com/watch?v=_17xKtB3iWU
▪ Download XL TestView
https://xebialabs.com/products/xl-testview/community
In this demo, we will first introduce the major challenges involved in testing and explain our vision of how (traditional) testing activities are bound to change. Next, given this inevitable change, we will focus on test automation and discuss the major challenges it poses and the functionality it calls for. We continue by positioning XL Test, our test automation framework. Finally, we conclude with a demo of the key functionality of XL Test, showing how these challenges and questions can be addressed.
More specifically, the following challenges typically occur in organizations that have matured in test automation. We will mention a few:
1. How to translate scattered insights from your test tools into a single answer to “Are we good enough to go live?”, or to “Can we promote to the acceptance environment?”.
2. As the number of tests grows, it becomes more and more important to label, tag, and select the appropriate tests: tests that you or your development team are interested in, tests that cover a designated part of the application’s functionality, and so on. Flexible test set management is key. It also becomes important to make sane selections of which tests actually need to run: tests that are ‘green’ all the time may be superfluous, certainly when tests overlap.
3. With the growing trend of releasing to production as often, as quickly, and as early as possible, we observe that more and more organizations would like to bring individual features to production and NOT wait until a sprint of a couple of weeks has ended. Again, this calls for flexibility: for that feature, select all the appropriate tests (functional tests, performance tests, pre-production tests, etc.) that need to be run, or whose run status needs to be examined, again across testing tools. Such a flexible test set can then be linked to an individual issue in your issue management software, such as JIRA or Rally. In this way, we can verify whether all tests for a given requirement or issue have passed, allowing that functionality to be brought to production (a sketch of this check follows below).
4. One often-heard challenge is flaky tests: all of a sudden, tests that have been green turn red. Typically, the development team then performs a series of activities to find out why the test failed: analyzing the appropriate log files, verifying test data, et cetera. Wouldn’t it be handy if all relevant information were automatically collected and stored alongside the test results?
Other issues also exist: how to optimize testing activities in a chain environment, where the functionality of a number of applications is verified jointly, and how to include the results of manual tests in the quality dashboard.
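For point 3, a minimal sketch of the per-issue check: tests carry an issue tag, and a feature is promotable once every linked test has passed. The flat results list stands in for an aggregated, cross-tool view (an assumption).

```python
# Sketch: is every test linked to an issue green, so the feature can ship?
results = [
    {"name": "test_pay_visa", "issue": "PAY-42", "passed": True},
    {"name": "test_pay_perf", "issue": "PAY-42", "passed": True},
    {"name": "test_search",   "issue": "SRCH-7", "passed": False},
]

def issue_ready(issue, results):
    linked = [r for r in results if r["issue"] == issue]
    return bool(linked) and all(r["passed"] for r in linked)

print("PAY-42 ready to promote:", issue_ready("PAY-42", results))  # True
print("SRCH-7 ready to promote:", issue_ready("SRCH-7", results))  # False
```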
Running tests in parallel quickly becomes necessary to get fast feedback.
So, how do you run browser tests effectively in parallel, as shown above? See the sketch below and the next slide.
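As a sketch of parallel browser testing, each scenario below gets its own WebDriver instance in its own worker thread. It assumes Selenium with chromedriver on the PATH; the URLs are placeholders, and a Selenium Grid would scale the same pattern across machines.

```python
# Sketch: run independent browser scenarios in parallel, one driver each.
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver

SCENARIOS = [  # placeholder scenario URLs
    "https://example.com/login",
    "https://example.com/checkout",
    "https://example.com/search",
]

def run_scenario(url):
    driver = webdriver.Chrome()  # isolated browser per scenario
    try:
        driver.get(url)
        return url, "PASS" if driver.title else "FAIL"
    finally:
        driver.quit()

with ThreadPoolExecutor(max_workers=3) as pool:
    for url, status in pool.map(run_scenario, SCENARIOS):
        print(status, url)
```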
As the amount of tests and testing effort increases, automation becomes key, especially to safeguard quality when existing functionality and quality levels must keep being verified while new functionality is being built and tested. Typically, you can expect both the number of test tools in use and the number of tests to grow, and different teams may select different tools for their functional tests. This poses the question of how to keep oversight across these test tools: how do we still know the quality level of what we are putting into production? Do we have a ‘single point of truth’ telling us what tests have run, what tests have not run, and what the resulting quality picture is? (A sketch of that question follows below.)
In our experience, as the number of test tools increases, test results become scattered and an integrated overview is lacking. We would also like to be more flexible and not run all the tests all the time, so making sane, quick selections of the tests to run, across the testing tools, becomes important. Those tests need to run as soon as possible to give the development team fast feedback. This boosts the team’s productivity and velocity, since the slack and delay involved in running superfluous tests is reduced to a minimum.
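A sketch of the single-point-of-truth question: given the expected test inventory and what was actually reported across tools, which tests ran, which did not, and what does the overall picture look like? The inventory and results are illustrative stand-ins.

```python
# Sketch: coverage of the test inventory by actually reported results.
expected = {"test_login", "test_checkout", "test_search", "test_perf_home"}
reported = {"test_login": True, "test_checkout": True, "test_search": False}

never_ran = expected - reported.keys()
failed = {name for name, passed in reported.items() if not passed}
print(f"ran {len(reported)}/{len(expected)}; failed: {sorted(failed)}; "
      f"never ran: {sorted(never_ran)}")
```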