Moving to Continuous Delivery Without Breaking Your Code
Viktor Clerc, 23 June 2015
Agenda
▪ The Need For Speed
▪ The Two Faces of CD
▪ Testing is Changing
▪ A Central Hub for Application Quality For Your Pipeline
▪ Beyond Test Automation: Active Test Optimization
About Me
▪ Product Manager XL TestView for XebiaLabs
▪ Traversed through all phases of the software development lifecycle
▪ Supported major organizations in setting up test strategies and test automation strategies
▪ Is eager to flip the way (most) organizations do testing
About XebiaLabs
We build tools to solve problems around DevOps and Continuous Delivery at scale
The Need For Speed
▪ Every business is an IT business
− Known as the “Software-defined Enterprise”: even traditionally brick-and-mortar businesses are becoming software-based
▪ Customers demand that you deliver new features faster whilst maintaining high
levels of quality
▪ If you don’t, your competitor probably will
The Need For Speed
▪ What is so compelling about CD?
▪ Business initiative with cool technical implementation
▪ “CD eats DevOps for breakfast as the business eats IT”
The Two Faces of CD
▪ A lot of focus right now is on pipeline execution
▪ …but there’s no point delivering at light speed if everything starts breaking
▪ Testing (= quality/risk) needs to be a first-class citizen of your CD initiative!
The Two Faces of CD
▪ CD = Execution + Analysis
▪ = Speed + Quality
▪ = Pipeline orchestration + ..?
Testing is Changing
[Diagram: test effort across the value chain from concept to cash (Design, Build, Test, Specify, Integrate, Regression, User Acceptance, Release).]
Testing is Changing
[Diagram: the same value chain, now with three callouts:]
▪ Acceptance-Driven Testing: “I add value by sharpening the acceptance criteria of requested features”
▪ Automate ALL: “Test automation serves as the safety net for my new functionality: I focus on running the appropriate tests continuously during the iterations”
▪ Development = Test, Test = Development: “Testing is transforming into an automation mindset and skill instead of a separate activity”
Testing is Changing: Challenges
▪ Many test tools for each of the test levels, but no single place to answer “Good
enough to go live?”
▪ Requirements coverage is not available
− “Did we test enough?” (see the sketch below)
▪ Minimize the mean time to repair
− Support for failure analysis
JUnit, FitNesse, JMeter, YSlow, Vanity Check, WireShark, SOAP-UI, Jasmine, Karma, Speedtrace, Selenium, WebScarab, TTA, DynaTrace, HP Diagnostics, ALM stack, AppDynamics, Code Tester for Oracle, Arachnid, Fortify, Sonar, …
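To make “Did we test enough?” answerable at the requirements level, results have to be linked to stories or features. A minimal sketch of that idea in Python, assuming results have already been exported from the various tools into a common structure (the field names and story IDs are hypothetical, not any specific tool's schema):

```python
from collections import defaultdict

# Hypothetical aggregated results: one record per executed test,
# tagged with the user story it verifies.
results = [
    {"test": "test_login_ok",      "story": "US-101", "outcome": "passed"},
    {"test": "test_login_lockout", "story": "US-101", "outcome": "failed"},
    {"test": "test_checkout_flow", "story": "US-204", "outcome": "passed"},
]

all_stories = {"US-101", "US-204", "US-305"}  # everything planned for the release

def requirements_coverage(results, all_stories):
    """Report, per story, how many tests ran, and flag stories with no tests at all."""
    per_story = defaultdict(list)
    for r in results:
        per_story[r["story"]].append(r["outcome"])

    untested = all_stories - set(per_story)
    report = {
        story: {"tests": len(outcomes), "passed": outcomes.count("passed")}
        for story, outcomes in per_story.items()
    }
    return report, untested

report, untested = requirements_coverage(results, all_stories)
print(report)    # per-story counts, e.g. {'US-101': {'tests': 2, 'passed': 1}, ...}
print(untested)  # stories with no tests linked to them, e.g. {'US-305'}
```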
Testing is Changing: Challenges
▪ Thousands of tests make test sets hard to manage:
− “Where is my subset?”
− “What tests add most value, what tests are superfluous?”
− “When to run what tests?”
▪ Running all tests all the time takes too long, feedback is too late
▪ Quality control of the tests themselves and maintenance of testware
▪ Tooling overstretch
Testing is Changing: Best Practices
▪ Focus on functional coverage, not technical coverage
▪ Say 40 user stories, 400 tests
− Do I have relatively more tests for the more important user stories?
− How do I link tests to user stories/features/fixes?
▪ Metrics
− Number of tests
− Number of tests that have not passed in <time>
− Flaky tests
− Duration
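A hedged sketch of how the metrics above could be computed once results from all runs land in one place; the record layout is an assumption, not a real tool's schema:

```python
from datetime import datetime, timedelta

# Hypothetical aggregated history: one record per test execution.
history = [
    {"test": "test_login", "outcome": "passed", "duration_s": 1.2,
     "finished": datetime(2015, 6, 20, 10, 0)},
    {"test": "test_login", "outcome": "failed", "duration_s": 1.3,
     "finished": datetime(2015, 6, 21, 10, 0)},
    {"test": "test_search", "outcome": "passed", "duration_s": 4.0,
     "finished": datetime(2015, 6, 21, 10, 0)},
]

def metrics(history, window=timedelta(days=7), now=datetime(2015, 6, 23)):
    recent = [r for r in history if now - r["finished"] <= window]
    names = {r["test"] for r in history}

    # Tests with no pass in the window ("have not passed in <time>").
    passed_recently = {r["test"] for r in recent if r["outcome"] == "passed"}
    not_passed = names - passed_recently

    # Flaky here means: both passed and failed within the window.
    failed_recently = {r["test"] for r in recent if r["outcome"] == "failed"}
    flaky = passed_recently & failed_recently

    return {
        "number_of_tests": len(names),
        "not_passed_recently": sorted(not_passed),
        "flaky": sorted(flaky),
        "total_duration_s": sum(r["duration_s"] for r in recent),
    }

print(metrics(history))
```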
Testing is Changing: Best Practices
▪ “Slice and dice” your test code
− Responsible team
− Topic
− Functional area
− Flaky
− Known issue
− etc.
▪ Radical parallelization
− Fail faster!
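One common way to implement the labels above is test markers; a sketch with pytest (the marker names and file name are examples, not a prescribed taxonomy, and the CLI lines in the comments assume pytest plus pytest-xdist):

```python
# test_payments.py (hypothetical) -- label tests along the dimensions you care about.
import pytest

@pytest.mark.payments          # functional area
@pytest.mark.team_checkout     # responsible team
def test_refund_is_booked():
    assert 2 + 2 == 4          # placeholder assertion

@pytest.mark.payments
@pytest.mark.flaky             # quarantined until stabilised
def test_refund_email_is_sent():
    assert True

# Register the markers once in pytest.ini so pytest doesn't warn about them:
#
#   [pytest]
#   markers =
#       payments: payments functional area
#       team_checkout: owned by the checkout team
#       flaky: known to fail intermittently
#
# Then select subsets per pipeline stage, for example:
#
#   pytest -m "payments and not flaky"   # fast, relevant feedback
#   pytest -m flaky                      # separate quarantine job
#   pytest -n auto                       # radical parallelisation via pytest-xdist
```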
Making Sense of Test Results
▪ Real go/no-go decisions are non-trivial (see the sketch below)
− No failing tests
− No more than 5% of tests failing
− No regression (tests that currently fail but passed previously)
− List of tests-that-should-not-fail
▪ Need historical context
▪ One integrated view
▪ Data to guide improvement
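The sketch referenced above: a minimal example of evaluating such go/no-go criteria against aggregated results. The criteria, threshold and data layout are illustrative assumptions:

```python
def go_no_go(current, previous, must_pass, max_fail_rate=0.05):
    """current/previous: dicts mapping test name -> 'passed' | 'failed'."""
    failed = {t for t, outcome in current.items() if outcome == "failed"}

    # Regression: fails now but passed in the previous run.
    regressions = {t for t in failed if previous.get(t) == "passed"}

    # Tests that are never allowed to fail.
    blocking = failed & set(must_pass)

    fail_rate = len(failed) / max(len(current), 1)

    verdict = not regressions and not blocking and fail_rate <= max_fail_rate
    return verdict, {"regressions": regressions,
                     "blocking_failures": blocking,
                     "fail_rate": round(fail_rate, 3)}

previous = {"test_login": "passed", "test_search": "passed"}
current  = {"test_login": "passed", "test_search": "failed", "test_signup": "failed"}

ok, details = go_no_go(current, previous, must_pass=["test_login"])
print(ok, details)  # False, because test_search regressed and the fail rate is too high
```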
Example Job Distribution
[Diagram: example pipelines with separate jobs for Build, Deploy, Int. Tests, several Test jobs and Perf. Tests.]
Simple pipelines – scattered test results
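One way to read the diagram is a build/deploy chain feeding several test jobs. A rough Python simulation of that shape, with the test jobs run in parallel to fail faster (the stage names and timings are placeholders, not a real project's build):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Placeholder stages -- in a real pipeline these would be CI jobs, not functions.
def run_stage(name, seconds):
    time.sleep(seconds)
    return f"{name}: passed"

def pipeline():
    # Sequential part of the pipeline.
    for stage in [("Build", 0.2), ("Deploy", 0.1), ("Int. Tests", 0.3)]:
        print(run_stage(*stage))

    # Fan out the functional test jobs so the slowest one sets the pace,
    # instead of their sum ("fail faster").
    test_jobs = [("Test shard 1", 0.3), ("Test shard 2", 0.3), ("Test shard 3", 0.3)]
    with ThreadPoolExecutor(max_workers=len(test_jobs)) as pool:
        for result in pool.map(lambda job: run_stage(*job), test_jobs):
            print(result)

    print(run_stage("Perf. Tests", 0.2))

pipeline()
```

Each job still reports its results in its own place, which is exactly the “scattered test results” problem named in the caption above.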
Making Sense of Test Results
Executing tests from Jenkins is great, but…
▪ Different testing jobs use different plugins or scripts, each with different
visualization styles
▪ No consolidated historic view available across jobs
▪ Pass/Unstable/Fail is too coarse
− How to do “Passed, but with known failures”?
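Many test tools can emit JUnit-style XML reports, so a pragmatic first step toward a consolidated view is to fold those files from every job into one data set. A hedged sketch; the glob pattern and attribute handling are simplified assumptions, and real reports differ per tool:

```python
import glob
import xml.etree.ElementTree as ET

def collect_junit_results(pattern="**/target/surefire-reports/*.xml"):
    """Fold JUnit-style XML files from many jobs into one list of results."""
    results = []
    for path in glob.glob(pattern, recursive=True):
        for case in ET.parse(path).getroot().iter("testcase"):
            if case.find("failure") is not None or case.find("error") is not None:
                outcome = "failed"
            elif case.find("skipped") is not None:
                outcome = "skipped"
            else:
                outcome = "passed"
            results.append({
                "job": path,
                "test": f'{case.get("classname")}.{case.get("name")}',
                "outcome": outcome,
                "duration_s": float(case.get("time") or 0.0),
            })
    return results

results = collect_junit_results()
print(len(results), "test results across",
      len({r["job"] for r in results}), "report files")
```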
Making Sense of Test Results
▪ Ultimate analysis question (“are we good to go live?”) is difficult to answer
▪ No obvious solution for now, unless all your tests are running through one
service
Test Analysis: Homebrew
Test Analysis: Custom Reporting
A Central Hub for Application Quality
What is needed:
1. A single, integrated overview of all the test (= quality, risk) information related
to your current release
2. …irrespective of where or by whom the information was produced
3. The ability to analyze and “slice and dice” the test results for different
audiences and use cases
4. The ability to access historical context and other test attributes to make real-world “go/no-go” decisions
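A small sketch of the kind of record such a hub could store so results can be sliced per audience and compared against history. The fields are illustrative assumptions, not XL TestView's actual data model:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TestResult:
    """One executed test, with enough context to slice, dice and compare over time."""
    name: str
    outcome: str                 # "passed" | "failed" | "skipped"
    duration_s: float
    release: str                 # which release candidate this run belongs to
    source: str                  # originating tool or CI job
    tags: set = field(default_factory=set)
    finished: datetime = field(default_factory=datetime.utcnow)

hub = [
    TestResult("test_checkout", "failed", 2.1, "1.4.0", "jenkins/functional",
               tags={"checkout", "known-issue"}),
    TestResult("test_login", "passed", 0.4, "1.4.0", "jenkins/smoke",
               tags={"auth"}),
]

# Different audiences, different slices of the same data.
for_release_manager = [r for r in hub if r.release == "1.4.0" and r.outcome == "failed"]
for_checkout_team   = [r for r in hub if "checkout" in r.tags]
print(len(for_release_manager), "failures in 1.4.0;",
      len(for_checkout_team), "results for the checkout team")
```

With records keyed by release and timestamp, a go/no-go check like the earlier sketch can compare against history instead of a single build's output.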
Beyond Test Automation
Can we go further? How about
5. The ability to use the aggregated test results, historical contexts and other
attributes to invoke tests more intelligently?
Beyond Test Automation
It’s a bit of an open question:
▪ Google: it’s too expensive and time-consuming to run all the tests all the time, so automatically select a subset of tests to run
▪ Dave Farley: if you can’t run all the tests all the time, you need to optimize
your tests or you have the wrong tests in the first place
Beyond Test Automation
Middle ground:
▪ Label your tests along all relevant dimensions to ensure that you can easily
select a relevant subset of your tests if needed
▪ Consider automatically annotating tests related to features (e.g.
added/modified in the same commit), or introducing that as a practice
▪ Use data from your test aggregation tool to ignore flaky/”known failure” tests
(and then fix those flaky tests, of course ;-))
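A sketch of this middle ground in practice: derive a candidate subset from what changed in a commit and drop quarantined flaky tests. The git command is standard, but treating "changed test files" as the relevant subset is a deliberately naive assumption:

```python
import subprocess

def changed_test_files(commit="HEAD"):
    """Test files touched in a commit -- a naive proxy for 'tests related to this feature'."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{commit}~1", commit],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return [p for p in out if p.startswith("tests/") and p.endswith(".py")]

def select_tests(changed, flaky_quarantine):
    """Run the changed tests, skipping quarantined ones (tracked separately)."""
    return [t for t in changed if t not in flaky_quarantine]

flaky_quarantine = {"tests/test_refund_email.py"}   # fed from the aggregation tool
selection = select_tests(changed_test_files(), flaky_quarantine)
print("Would run:", selection or "full suite (nothing matched)")
```

Feeding the quarantine set from the aggregation tool keeps the "ignore flaky" decision data-driven, and those quarantined tests still need fixing, as the slide says.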
Summary
▪ Testing = Automation
− Testers are developers
▪ Structure and annotate tests
− Conway’s Law for Tests
− Link to functions/features/use cases
▪ Radical parallelization
− Throwaway environments
Summary
▪ CD = Speed + Quality = Execution + Analysis
▪ Making sense of scattered test results is still a challenge
▪ Need to figure out how to address real-world go/no-go decisions
Analyzing Test Results
Tagging Tests
Evaluating Go/No-go Criteria
Next steps
▪ Next-Generation Testing: The Key to Continuous Delivery
https://xebialabs.com/resources/whitepapers/next-generation-testing-the-key-to-continuous-delivery/
▪ An Introduction to XL TestView
https://www.youtube.com/watch?v=_17xKtB3iWU
▪ Download XL TestView
https://xebialabs.com/products/xl-testview/community
Thank you!



Editor's notes

  1–3. In this demo, we will first give an introduction to the major challenges involved in testing and explain our vision of how (traditional) testing activities are bound to change. Next, given this inevitable change, we will focus on test automation and discuss the major test automation challenges and the functionality they call for. We continue by positioning XL Test, our test automation framework. Finally, we conclude with a demo of the key functionality of XL Test to show how these challenges and questions can be addressed.
  4–10. More specifically, the following challenges typically occur in organizations that have matured in test automation. To mention a few: 1. How to translate scattered insights from your test tools into a single answer to “are we good enough to go live?” or “promote to the acceptance environment”. 2. As the number of tests grows, it becomes more and more important to label, tag, or select the appropriate tests: tests that you or your development team are interested in, tests that cover a designated part of the application’s functionality, and so on. Flexible test set management is key. It also becomes important to make sane selections of which tests actually need to be run; tests that are “green” all the time may be superfluous, certainly when tests overlap. 3. With the growing trend of releasing to production as often, as quickly, and as early as possible, more and more organizations want to bring individual features or pieces of functionality to production rather than wait until a sprint of a couple of weeks has ended. Again, this calls for flexibility: for that feature, select all appropriate tests (functional, performance, pre-production, etc.) that need to be run, or whose run status needs to be examined – again, across testing tools. Such a flexible test set can then be linked to an individual issue in your issue management software, such as JIRA or Rally, so we can verify whether all tests for a given requirement have passed before bringing that functionality to production. 4. An often heard challenge is that tests are flaky (all of a sudden, tests that have been green turn red). Typically, the development team then performs a series of activities to find out why the test failed: analyzing the relevant log files, verifying test data, et cetera. Wouldn’t it be handy if all relevant information were automatically collected and stored alongside the test results? Other issues also exist: how to optimize testing activities in a chain environment where the functionality of a number of applications is verified jointly, and how to include the results of manual tests in the quality dashboard.
11. Running tests in parallel quickly becomes necessary to obtain fast feedback. So how do we run browser tests effectively in parallel, as shown above? See the next slide.
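As a rough sketch of one way to do this, the snippet below fans the same UI suite out over several browsers at once from a thread pool, so feedback arrives in roughly the time of the slowest run rather than the sum of all of them. The pytest invocation, the --browser option and the result-file paths are assumptions for illustration; the same pattern works with any command-line runner that can target one browser per invocation, for example against a Selenium Grid.

```python
# Minimal sketch: run the same browser-test suite against several browsers in parallel.
# The "--browser" option and the "tests/ui" path are illustrative assumptions.
import subprocess
from concurrent.futures import ThreadPoolExecutor

BROWSERS = ["chrome", "firefox", "edge"]

def run_suite(browser):
    """Run the suite against a single browser and return (browser, passed)."""
    result = subprocess.run(
        ["pytest", "tests/ui", f"--browser={browser}",
         f"--junitxml=test-results/ui-{browser}.xml"],
        capture_output=True, text=True,
    )
    return browser, result.returncode == 0

if __name__ == "__main__":
    # Each run is I/O-bound (waiting on the browser or grid), so threads are sufficient.
    with ThreadPoolExecutor(max_workers=len(BROWSERS)) as pool:
        for browser, passed in pool.map(run_suite, BROWSERS):
            print(f"{browser}: {'passed' if passed else 'failed'}")
```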
  13. Jeroen
14. When the amount of tests and testing effort increases, automation becomes key, especially to safeguard quality: existing functionality and quality levels need to be verified while new functionality is tested and implemented at the same time. Typically, both the number of test tools used and the number of tests will grow, and different teams may select different tools for performing functional tests. This raises the question of how to maintain oversight across these test tools. How can we still know the level of quality of what we are putting into production? Do we have a ‘single point of truth’ telling us which tests have run, which tests have not run, and what the resulting quality overview is? In our experience, as the number of test tools increases, test results become scattered and the integrated overview is lacking. We would also like to be more flexible and not run all the tests all the time, so making sane and quick selections of the tests to be run, across the testing tools, becomes important. Those tests need to run as soon as possible to give the development team feedback as quickly as possible; this boosts the team’s productivity and velocity, since the slack and delay involved in running superfluous tests is reduced to a minimum.
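To make that ‘sane and quick selection’ concrete, here is a minimal sketch of a tag-based test registry that runs only the suites relevant to a given change, across different tools. The tool commands, the tags and the ‘checkout’ example are illustrative assumptions, not a prescription for any particular toolchain.

```python
# Minimal sketch of flexible test sets: tag-based selection of which suites to run
# across different tools, so only the relevant subset runs for a given change.
import subprocess

# Each entry: which tool invocation to run and which functional areas it covers (assumed).
TEST_REGISTRY = [
    {"name": "unit",        "cmd": ["mvn", "test"],                 "tags": {"core"}},
    {"name": "ui-checkout", "cmd": ["pytest", "tests/ui/checkout"], "tags": {"checkout", "ui"}},
    {"name": "api-payment", "cmd": ["pytest", "tests/api/payment"], "tags": {"checkout", "payment"}},
    {"name": "perf-smoke",  "cmd": ["k6", "run", "perf/smoke.js"],  "tags": {"performance"}},
]

def run_test_set(selected_tags):
    """Run only the entries whose tags overlap the selection; return overall pass/fail."""
    all_passed = True
    for entry in TEST_REGISTRY:
        if entry["tags"] & selected_tags:
            result = subprocess.run(entry["cmd"])
            all_passed = all_passed and result.returncode == 0
    return all_passed

if __name__ == "__main__":
    # For a change that only touches checkout, skip performance and unrelated suites.
    ok = run_test_set({"checkout"})
    print("promote to acceptance" if ok else "block the pipeline")
```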
  20. Viktor