NEED FOR SPEED - “HOW TO
PERFORMANCE TEST THE RIGHT WAY”
Annie Bhaumik
Senior Performance Test Engineer,
OCLC
NEED FOR
SPEED
53% of users abandon a website if it takes more than 3 seconds to load.
79% of them do not come back to visit the website.
3 seconds - that's how long it takes users to get impatient and abandon your website.
DOES A SLOWER WEBSITE MEAN LOST DOLLARS?
Transaction conversion rate vs. page load speed (seconds):
Page load speed (s):  0      1      2      3      4      5      6      7      8      9
Conversion rate:      8.11%  6.32%  4.64%  2.93%  2.24%  2.20%  1.66%  2.11%  1.77%  1.73%
WHAT IS THE IDEAL PAGE LOAD TIME?
If 100 people visit a website for a $100 product, the page load time determines the potential earnings.
Ideal page load time: 0-2 seconds
REAL
WORLD
EXAMPLES
• Amazon - 1% sales loss for every 100 ms of extra load time
• Walmart - 2% increase in conversion for every 1-second improvement in load time
• Tagman - a study showed that a 1-second delay in load time resulted in a 7% loss of revenue
WHAT IS
PERFORMANCE
TESTING
[Diagram: virtual users driving load against the Application Under Test]
WHAT CAN
PERFORMANCE
TESTING DO
FOR YOU
• How many users can the application handle?
• What happens to user experience when users increase to 500%?
• Do people in different parts of the world get the same user experience?
• At what point does the application start slowing or crashing?
• Which way is performance trending over time?
• What are my benchmarks and baselines?
TYPES OF PERFORMANCE TEST
• Load Testing
• Endurance Testing
• Spike Testing
• Stress Testing
• Volume Testing
• Scalability Testing
LOAD TEST
• Load testing is performed to determine a system's behavior under both normal and anticipated peak load conditions.
Are the desired performance objectives specified in the SLA met:
• Response Times
• Throughput
• Resource Utilization
ENDURANCE TEST
• Testing a system with the expected amount of load over a long period of time to find the behaviour of the system.
• Will performance be consistent over time?
• Are there slowly growing problems that have not yet been detected?
SPIKE TEST
• Spike testing determines the behavior of the system under a sudden increase of load (a large number of users) on the system.
• What happens if the production load exceeds the anticipated peak load?
• What kinds of failures should we plan for?
• What indicators should we look for?
STRESS TEST
• Stress testing involves testing an application under extreme workloads to see how it handles high traffic or data processing. The objective is to identify the breaking point of the application.
• Reveals application bugs that surface only under high load conditions
• Identifies the application's weak points
VOLUME TEST
• Testing a software application with a large amount of data to be processed, to check the efficiency of the application.
• Identify problems related to high volumes of data
• Real-world usage readiness
SCALABILITY TEST
• Scalability testing determines the software application's effectiveness in "scaling up" to support an increase in user load. It helps plan capacity additions to your software system.
• Plan for future growth
• Scaling strategy
REALISTIC PERFORMANCE TESTING - IS IT POSSIBLE?
SHIFT LEFT AND RIGHT
Shift left:
• Test performance of new code
• Test performance of every build
Shift right:
• Test performance of every deployment
• Test performance in production
(A per-build performance gate is sketched below.)
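To make "test performance of every build" concrete, here is a minimal, tool-agnostic sketch of a CI gate; the results file name, the "elapsed" column, and the 90th-percentile budget are assumptions for illustration, not details from the talk.

```python
# Illustrative CI performance gate (hypothetical results file and budget).
# Fails the build when the 90th-percentile response time exceeds the budget.
import csv
import math
import sys

RESULTS_FILE = "results.csv"   # hypothetical: one row per sample, "elapsed" column in ms
P90_BUDGET_MS = 500            # illustrative budget, not a figure from the talk

def percentile_90(values):
    ordered = sorted(values)
    index = math.ceil(0.9 * len(ordered)) - 1   # value with 90% of samples at or below it
    return ordered[index]

with open(RESULTS_FILE, newline="") as f:
    elapsed = [float(row["elapsed"]) for row in csv.DictReader(f)]

p90 = percentile_90(elapsed)
print(f"p90 = {p90:.0f} ms (budget {P90_BUDGET_MS} ms)")
sys.exit(0 if p90 <= P90_BUDGET_MS else 1)   # a non-zero exit code fails the build
```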
USER BEHAVIOR -
ANALYSIS
• Real User Monitoring
WORKLOAD MODELLING
• The process of identifying one or more composite application usage profiles for use in performance testing is known as workload modeling.
• Adequate coverage
• 80-20 rule
Application in Production: New User 30% | Existing User 60% | Admin 10%
Application Under Test (Test Environment): Script 1 30% | Script 2 60% | Script 3 10%
(The allocation sketch below carries this mix into a test.)
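As a small illustration of carrying the production mix into the test environment, the sketch below distributes a virtual-user count across scripts using the 30/60/10 split from the slide; the script names and the total of 100 virtual users are assumed example values.

```python
# Minimal sketch: allocate virtual users to scripts according to the production mix.
# The mix mirrors the slide (30% new users, 60% existing users, 10% admin);
# the total of 100 virtual users is an assumed example value.
production_mix = {
    "script_1_new_user": 0.30,
    "script_2_existing_user": 0.60,
    "script_3_admin": 0.10,
}
total_virtual_users = 100

allocation = {
    script: round(share * total_virtual_users)
    for script, share in production_mix.items()
}
print(allocation)  # {'script_1_new_user': 30, 'script_2_existing_user': 60, 'script_3_admin': 10}
```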
LITTLE'S LAW
• The long-term average number of customers in a stable system, N, is equal to the long-term average effective arrival rate, λ, multiplied by the average time a customer spends in the system, W; or expressed algebraically: N = λW.
• N = Throughput * (Response Time + Think Time)
Can be used in Performance Testing for:
• Workload Modelling
LITTLE'S LAW - WORKLOAD MODELLING
Inputs: Throughput, Users, Think time
N = Throughput * (Response Time + Think Time)
• A user session lasts 555 seconds
• During a session, the user views 8.78 pages
• Time between two page views = 555 / 8.78 ≈ 63 seconds
• Response Time + Think Time = 63 seconds
• Assume throughput is 9.56 (pages per second), observed at a peak of 3904 users
N = 9.56 * 63 ≈ 602 users
602 virtual users are enough to simulate the peak load of 3904 users (worked through in the sketch below).
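A minimal sketch of the same arithmetic, using only the numbers quoted above and rounding the page interval to 63 seconds as the slide does; nothing here is tool-specific.

```python
# Little's Law worked example from the slide: N = throughput * (response time + think time).
session_length_s = 555         # a user session lasts 555 seconds
pages_per_session = 8.78       # pages viewed per session
throughput_pages_per_s = 9.56  # measured page throughput at the 3904-user peak

# Average time between two page views = response time + think time.
page_interval_s = round(session_length_s / pages_per_session)  # 555 / 8.78 ≈ 63 s

concurrent_virtual_users = throughput_pages_per_s * page_interval_s
print(round(concurrent_virtual_users))  # ≈ 602 virtual users simulate the 3904-user peak
```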
TEST DATA
• Use realistic test data
• Enough data volume in the database
• Unique users, different user roles
• Enough test data to avoid caching problems
• Randomness in test data
(A small data-generation sketch follows.)
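As a hedged example of "unique users" and "randomness in test data", the sketch below generates a randomized pool of test users across roles; the field names, roles, and search terms are assumptions for illustration only.

```python
# Illustrative generation of unique, randomized test users (assumed field names and roles).
import csv
import random
import uuid

ROLES = ["new_user", "existing_user", "admin"]  # roles echo the workload model above

def make_user(index):
    return {
        "username": f"perf_user_{index:05d}_{uuid.uuid4().hex[:8]}",  # unique per virtual user
        "role": random.choice(ROLES),
        "search_term": random.choice(["dog", "cat", "fish", "bird"]),  # randomness defeats caching
    }

users = [make_user(i) for i in range(1000)]

with open("test_users.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["username", "role", "search_term"])
    writer.writeheader()
    writer.writerows(users)
```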
TEST ENVIRONMENT SIMILAR TO PRODUCTION
• Infrastructure similar or comparable
• Any other applications running
SIMULATE DIFFERENT NETWORK SPEEDS - MOBILE NETWORK SPEEDS
• Network Throttling
• Growth of Mobile Usage
DISTRIBUTED LOAD
TESTING FROM
MULTIPLE
GEOGRAPHIC
LOCATIONS
• Virtual Users in different geographical locations
THE CODE
• FUNCTIONALLY STABLE CODE - NO MAJOR FUNCTIONAL BUGS PRESENT IN CODE
• TEST WITH MOST RECENT VERSION IF POSSIBLE
• IDENTIFY ANY OTHER PROCESSES/SYSTEMS USING THE ARCHITECTURE
EFFECTIVE TEST EXECUTION
• Conduct tests on an isolated network segment
• Execute the same tests twice to ensure accuracy of results
• Ensure the load generators are not themselves the bottleneck
PERFORMANCE TESTING RESULT ANALYSIS - KPI
Label  | # Samples | Average | Median | 90% Line | 95% | Min | Max   | Error% | Throughput | Std. Dev
Login1 | 19381     | 177     | 124    | 147      | 163 | 4   | 20810 | 0.08%  | 87.3/sec   | 719.2848
Login2 | 19365     | 68      | 63     | 81       | 101 | 14  | 738   | 0.03%  | 87.5/sec   | 23.37372
Login3 | 19360     | 69      | 63     | 90       | 112 | 7   | 688   | 0.05%  | 87.6/sec   | 24.4878
TOTAL  | 58166     | 105     | 68     | 131      | 142 | 4   | 20810 | 0.05%  | 261.1/sec  | 418.8086
MATHEMATICAL UNDERSTANDING OF METRICS
• Average - the arithmetic mean.
• 90th Percentile - to find the 90th percentile value for a data set consisting of 100 page-response-time measurements, sort the measurements from largest to smallest and count down eleven data points from the largest; that eleventh value is the 90th percentile, meaning 90 percent of the simulated users experienced a response time at or below it in this test scenario.
• Median - simply the middle value in a data set when sequenced from lowest to highest.
• Standard Deviation - one standard deviation is the amount of variance within a set of measurements that encompasses approximately 68 percent of all measurements (those closest to the mean); knowing the standard deviation of your data set tells you how densely the data points are clustered around the mean.
Data with a standard deviation greater than half of its mean should be treated as suspect (see the sketch below).
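A short, illustrative sketch of these definitions applied to a list of response times; the sample data is made up, and the percentile is computed exactly as described above (sorted, with 90% of samples at or below the reported value).

```python
# Illustrative computation of the KPI definitions above on made-up response times (ms).
import math
import statistics

response_times_ms = [120, 130, 125, 140, 510, 135, 128, 132, 138, 600]

average = statistics.mean(response_times_ms)
median = statistics.median(response_times_ms)
std_dev = statistics.stdev(response_times_ms)

# 90th percentile: the value with 90% of samples at or below it.
ordered = sorted(response_times_ms)
p90 = ordered[math.ceil(0.9 * len(ordered)) - 1]

print(f"average={average:.1f} median={median} p90={p90} stdev={std_dev:.1f}")
# Rule of thumb from the slide: treat the data as suspect if stdev > mean / 2.
print("suspect" if std_dev > average / 2 else "ok")
```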
PERFORMANCE
TESTING RESULT
ANALYSIS - KPI
• Resource
Utilizations
ANY
QUESTIONS?
THANK YOU