According to the online retail research firm comScore, this year’s Cyber Monday sales surpassed the $1.25 billion mark, breaking all previous records and setting the pace for the busiest online retail season in history. The key to much of this year’s success was deep insight into how web systems would handle realistic user traffic, coupled with knowing where issues would occur. Firms must adopt the correct application testing and performance monitoring methodologies to ensure revenues and avoid disaster. Join SOASTA and Correlsense for this engaging webinar, which will give you an executable plan for your IT operations team. You will learn:
Best Practices in planning, executing, and closing out your holiday performance testing campaign
How to test and monitor mission critical applications leading into and during the holiday season
Performance testing and monitoring techniques that will guarantee a satisfying end user experience
Technologies which will allow you to quickly determine the most important tests to run based on what real users are doing
Case studies and demonstrations showing how the combination of deep transactional data and cloud-based testing delivers confidence in holiday retail readiness
9. The IT Landscape Has Changed
What is “Peak Load”?
• 100%, 200%...500%+?
• How much mobile traffic should we prepare for…and how?
What Are The Most Important Transactions?
• What are the most profitable paths users follow?
• What is the effect of non-buying “browsing” on paying customers?
What System Do You Test In?
• Is the lab good enough?
• Production testing is taboo (isn’t it)?
How Will You Find The Issues?
• Complexity reigns
• How do we see where issues are at load?
10. The IT Landscape Has Changed
What is “Peak Load”?
• You must test to new limits with a mix of web and mobile traffic
What Are The Most Important Transactions?
• Those that you determine as profitable, complex and/or risky
What System Do You Test In?
• Testing is a continuous process from the lab to live production
How Will You Find The Issues?
• Monitor for the end-to-end view while tests run
12. Monitoring best practice 1
“Assume Nothing”
UAT Environment topology autodetected: oops, a production server!
13. Monitoring best practice 2
Visibility explains REAL phenomena
A specific transaction type is failing
A specific location is failing
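The slicing shown on this slide can be sketched in a few lines. This is a minimal illustration, not the SharePath product: the event shape (`txn`, `location`, `status` fields) and the `failure_rates` helper are assumptions for the example.

```python
from collections import defaultdict

def failure_rates(events, key):
    """Group monitoring events by `key` and compute the failure rate per group."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for e in events:
        totals[e[key]] += 1
        if e["status"] == "fail":
            failures[e[key]] += 1
    return {k: failures[k] / totals[k] for k in totals}

# Hypothetical monitoring events: transaction type, location, outcome.
events = [
    {"txn": "checkout", "location": "us-east", "status": "fail"},
    {"txn": "checkout", "location": "us-east", "status": "ok"},
    {"txn": "browse",   "location": "us-east", "status": "ok"},
    {"txn": "browse",   "location": "eu-west", "status": "ok"},
]

by_txn = failure_rates(events, "txn")       # is a specific transaction type failing?
by_loc = failure_rates(events, "location")  # is a specific location failing?
```

Slicing the same events by different keys is what turns “CPU is spiking, so what?” into “checkout is failing in us-east.”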
14. Monitoring best practice 2
Visibility explains REAL phenomena
It is pretty easy to see the load balance mismatch
You can see that the % Time spent between User and Data Center is the issue here, when compared to the % Time spent within the Data Center, or the % Time spent rendering on the user’s device
15. Monitoring best practice 3
Baseline and compare
Compare the application model at 100% load vs. 150% load
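The baseline-and-compare step can be sketched as a simple diff of per-transaction response times. This is a toy illustration under assumed data, not a product feature: the `compare_baselines` name, the 1.5x threshold, and the sample numbers are all invented for the example.

```python
def compare_baselines(base_ms, loaded_ms, threshold=1.5):
    """Flag transactions whose response time at higher load exceeds
    `threshold` times the baseline -- candidates for deeper analysis."""
    flagged = {}
    for txn, base in base_ms.items():
        loaded = loaded_ms.get(txn)
        if loaded is not None and loaded > base * threshold:
            flagged[txn] = loaded / base
    return flagged

# Hypothetical mean response times (ms) at 100% and 150% of expected load.
at_100 = {"login": 120, "search": 200, "checkout": 450}
at_150 = {"login": 130, "search": 620, "checkout": 480}

degraded = compare_baselines(at_100, at_150)  # search degrades ~3x, the others scale fine
```

The point of the comparison is that absolute numbers matter less than how the model changes between the two load levels.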
16. Testing best practice 1
• Start early & test progressively.
– Begin in development
– Run many iterative tests that address performance from code through infrastructure
– Finish in production (live production, to really be sure)
17. Testing best practice 2
• Test realistically.
– Model users acting like humans
– Stress & measure at a realistic pace
– Run at true scale
– From different locations and devices
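The bullets above can be sketched with nothing but the standard library: concurrent virtual users that pause for human-like think time between actions rather than hammering the server in a tight loop. Real tools such as CloudTest do this at far larger scale and from many locations; this toy models only the pacing idea, and the function names and journey are invented for the example.

```python
import random
import threading
import time

def virtual_user(user_id, actions, results, think_range=(0.05, 0.2)):
    """One simulated user: perform each action, record its duration,
    then pause for a 'think time' -- model a human, not a loop.
    (Think times here are shrunk so the sketch runs quickly; real
    tests would use seconds, not tens of milliseconds.)"""
    for action in actions:
        start = time.monotonic()
        action()  # in a real test this would be an HTTP request
        results.append((user_id, action.__name__, time.monotonic() - start))
        time.sleep(random.uniform(*think_range))

def run_load(n_users, actions):
    """Run `n_users` virtual users concurrently and collect timings."""
    results = []
    threads = [threading.Thread(target=virtual_user, args=(i, actions, results))
               for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Hypothetical user journey: browse, then check out.
def browse():
    time.sleep(0.01)

def checkout():
    time.sleep(0.02)

timings = run_load(n_users=5, actions=[browse, checkout])
```

Randomized think time keeps the request arrival pattern realistic; true scale and geographic spread are what a cloud-based tool adds on top.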
18. Testing best practice 3
• Seek a single source of performance truth.
– Get Dev & Ops on the same page. (Is 2 seconds okay?)
– Measure with the same tools in Dev, Test & Ops
– Correlate monitoring data with test data as tests run
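Correlating monitoring data with test data largely comes down to joining the two streams on time. A minimal sketch, assuming a load-test ramp recorded as `(timestamp, concurrent_users)` steps and monitoring samples as `(timestamp, metric, value)` tuples (both invented shapes for this example):

```python
import bisect

def correlate(test_timeline, monitor_samples):
    """Attach the concurrent-user count from the load-test timeline to each
    monitoring sample, so Dev and Ops read one merged view.
    test_timeline: sorted list of (timestamp, concurrent_users) steps.
    monitor_samples: list of (timestamp, metric_name, value)."""
    times = [t for t, _ in test_timeline]
    merged = []
    for ts, name, value in monitor_samples:
        i = bisect.bisect_right(times, ts) - 1  # step in effect at this sample
        users = test_timeline[i][1] if i >= 0 else 0
        merged.append({"time": ts, "metric": name, "value": value, "users": users})
    return merged

# Hypothetical data: the test ramped 100 -> 500 users; Ops sampled CPU.
timeline = [(0, 100), (60, 300), (120, 500)]
samples = [(30, "cpu_pct", 40), (90, "cpu_pct", 75), (150, "cpu_pct", 97)]
merged = correlate(timeline, samples)
```

With the merge done, “CPU hit 97%” becomes “CPU hit 97% at the 500-user step,” which is the single source of truth both teams can argue from.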
19. Thankfully, There is a Solution
• SharePath identifies critical transactions to test what matters most
• CloudTest enables testing to any level with web or mobile traffic, in lab or production environments
• Monitor critical metrics during and after tests to isolate and prevent production outages
21. Summary
• The IT Landscape has changed
• Start your testing early, test progressively, and
test realistically
• When monitoring critical applications, assume
nothing, look for root cause phenomena, use
baselines, and compare
• SharePath and CloudTest provide an integrated
solution
22. Questions
Contact SOASTA:
www.soasta.com/cloudtest/
info@soasta.com
866.344.8766
Follow us:
twitter.com/cloudtest
facebook.com/cloudtest
GET STARTED TODAY!
CloudTest Lite - FREE!
www.soasta.com/cloudtest/lite/

Contact Correlsense:
www.correlsense.com/demo
www.real-user-monitoring.com
info@correlsense.com
Follow us:
twitter.com/correlsense
facebook.com/correlsense
Get your free copy
Download SharePath RUM!
http://www.real-user-monitoring.com/
Editor’s notes
Testing with a production server: every day at the same time there was a production outage that could not be explained. It turned out this was the time they ran the tests. Part of the team had to stop the test because they got called into a production war room, not knowing the outage was directly related to their own test.
CPU is spiking. So what? Example about network monitoring complaining about high throughput. What does it affect? Is it burning the cable? It should be the other way around: look for phenomena affecting users and drill down to root cause.
Give an example of a chatty application: a transaction calling a web service that has changed. For each call, it now makes 540 calls instead of 20. Response time stayed the same, but the model has changed. This would have popped up in production. You must compare models.
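That call-model comparison from the note can be sketched directly. This is an illustrative toy, not SharePath’s mechanism: the `model_diff` helper, the 2x drift factor, and the transaction names are invented for the example; only the 20-vs-540 figures come from the note above.

```python
def model_diff(baseline_calls, current_calls, factor=2.0):
    """Compare per-transaction downstream call counts against a baseline.
    Response time alone can look fine while the call model explodes --
    e.g. a changed web service turning 20 calls into 540."""
    drift = {}
    for txn, base in baseline_calls.items():
        now = current_calls.get(txn, 0)
        if base and now / base >= factor:
            drift[txn] = (base, now)
    return drift

# Hypothetical downstream call counts per transaction, before and after.
baseline = {"place_order": 20, "view_cart": 5}
current  = {"place_order": 540, "view_cart": 5}
drifted = model_diff(baseline, current)  # place_order went from 20 to 540 calls
```

Comparing the model, not just the latency, is what catches this class of problem before it pops up in production.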