Practical Tips for Ops: End User Monitoring
Watch replay here: https://info.dynatrace.com/apm_wc_devops_journey_series_end_user_monitoring_na_registration.html
Companies that have adopted DevOps best practices have 2,555x faster lead times* in delivering new features to their end users. However, speed of delivery is not the only success metric! Success must also be measured by how end users react to the speed of innovation.
Getting insight into how your end users react to the changes you deploy allows you to share valuable feedback with the Dev and Biz teams. Those teams can then see clearly how their changes impacted end users and where fine-tuning can improve infrastructure performance.
In this webcast Andreas Grabner, Chief DevOps Activist, and Brian Chandler, Sales Engineer, share practical tips that IT groups can start implementing quickly. You'll learn:
• The best approach for monitoring end users on mobile versus desktop versus tablet, and on service endpoints
• How to evaluate network bandwidth requirements by app, service and feature to better understand and optimize resource consumption
• How to optimize your delivery chain in depth by understanding who is using your app, where, and on what device
• How to get a clear view of which features are used the most and the least, and which observed user behavior is useful in tuning performance
If you are stuck in analysis paralysis, get insights that you can apply today!
*In addition, companies using DevOps are two times more likely to exceed profitability, market share and productivity goals (from the 2016 State of DevOps Report by Puppet Labs)
1. Andreas Grabner
Chief DevOps Activist @ Dynatrace
Twitter: @grabnerandi
Brian Chandler
Sales Engineer @ Dynatrace
Twitter: @Channer531
Practical Tips for Ops: End User Monitoring
The DevOps Journey Series Part 3
2. State of DevOps Report Adoption Metrics
• 200x more frequent deployments than their peers
• 2,555x faster lead times than their peers
Dynatrace DevOps Adoption Metrics
• 12x more feature releases
• 170 deployments/day
• 93% of production bugs found before impacting end users
3. Interesting Ops Learnings from Adopters
New Tech Stack and Architectures
3rd Party / CDN
More Apps / Multi-Version
"Twitter-Driven" Load Models
4. DevOps Requirements and Engagement Options for Ops
Feedback through High Quality App & User Data
Ops as a Service: “Self-Service for Application Teams”
+ Promote YOUR Monitoring through Shift-Left
Bridge the Gap between Server Side and End User
Shift-Left: (No)Ops as “Part of Application Delivery”
Requirements | Engagement Options
5. Closing the Ops to Dev Feedback Loop: One Step at a Time!
(1) Basic App Monitoring: Are our applications up and running? What load patterns do we have per application? What is the resource consumption per application?
(2) App Dependencies: What are the dependencies between apps, services, DB and infra? How to monitor "non-custom app" tiers? Where are the dependency bottlenecks? Where is the weakest link?
(3) End User Monitoring: How to monitor mobile vs desktop vs tablet vs service endpoints? How much network bandwidth is required per app, service and feature? Where to start optimizing bandwidth: CDNs, caching, compression? Who is using our apps? Geo? Device? Which features are used? What's the behavior? Where to start optimizing? App flow? Page size? Conversion rates? Bounce rates?
(4) "Soft-Launch" Support: How to deploy and monitor multiple versions of the same app/service? What and how to baseline? Do we have a better or worse version of an app/service/feature? What are the usage patterns for A/B or Green/Blue? What is the difference between versions and features? Where are the performance/resource hotspots? When and where do applications break? Do we have bad dependencies through code or config? How does the system really behave in production? What to learn for future architectures?
(5) Virtualization Monitoring: How to automatically monitor virtual and container instances? What to monitor when deploying into public or private clouds? Does the architecture work in these dynamic environments? Does scale up/down work as expected?
(6) Ready for "Cloud Native" Application Teams: How to alert on real problems and not architectural patterns? How to consolidate monitoring between Cloud Native and Enterprise? Provide "Monitoring as a Service" for Cloud Native application teams.
Ops: We need answers to these questions! Closing the gap to App/Biz/Dev.
Today
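One recurring Ops question above, "What and how to baseline?", can be sketched as a simple rolling-baseline check: learn a mean and standard deviation from recent history and flag values that deviate too far. The metric values, window and the k=3 threshold below are illustrative assumptions, not any product's defaults:

```python
from statistics import mean, stdev

def baseline(samples):
    """Compute a (mean, stddev) baseline from historical samples."""
    return mean(samples), stdev(samples)

def is_anomalous(value, mu, sigma, k=3.0):
    """Flag a value more than k standard deviations from the baseline."""
    return abs(value - mu) > k * sigma

# Illustrative response times (ms) for one service over the last hour
history = [120, 130, 125, 118, 122, 128, 131, 124]
mu, sigma = baseline(history)
print(is_anomalous(125, mu, sigma))  # within baseline -> False
print(is_anomalous(450, mu, sigma))  # far outside baseline -> True
```

Real monitoring products use far more robust baselining (seasonality, percentiles), but the core idea of "learn normal, alert on deviation" is the same.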
7. Outside-In Perspective: See your app from your users' perspective
User Experience = Availability (Synthetic) + Performance, Errors & User Behavior (Real Users)
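The formula above can be made concrete with a minimal synthetic availability check plus an availability calculation; the `probe` helper and the simulated results below are a sketch, and real synthetic monitoring schedules such probes from many geographic locations:

```python
import time
import urllib.request
import urllib.error

def probe(url, timeout=5.0):
    """One synthetic availability check: returns (success, elapsed_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        ok = False
    return ok, time.monotonic() - start

def availability_pct(results):
    """Availability = successful probes / total probes, as a percentage."""
    return 100.0 * sum(1 for ok, _ in results if ok) / len(results)

# Simulated probe results (ok?, seconds); a real run would call probe(url)
results = [(True, 0.12), (True, 0.15), (False, 5.0), (True, 0.2)]
print(f"availability: {availability_pct(results):.1f}%")  # 75.0%
```

Synthetic probes answer "is it up?"; the performance, error and behavior half of the formula comes from real-user monitoring, which this sketch does not cover.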
12. Key User Experience Metrics Feedback
#1: Who are they?
#2: Bandwidth!
#3: Response Time Breakdown
#4: Conversions: Total & Rate
#5: Client-Side Errors!
#6: CPU / Memory
#7: Key User Action(s)
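Metric #4 (conversions, total and rate) reduces to a count over user sessions. The session structure below is a made-up example, not a product data model:

```python
def conversions(sessions, goal_action):
    """Return (total conversions, conversion rate in %) over user sessions.

    Each session is a list of user-action names; a session converts
    if it contains the goal action at least once.
    """
    converted = sum(1 for actions in sessions if goal_action in actions)
    rate = 100.0 * converted / len(sessions) if sessions else 0.0
    return converted, rate

sessions = [
    ["home", "search", "checkout"],   # converted
    ["home", "search"],               # left before checkout
    ["home", "product", "checkout"],  # converted
    ["home"],                         # bounced
]
total, rate = conversions(sessions, "checkout")
print(total, rate)  # 2 conversions, 50.0% rate
```

Bounce rate falls out of the same data: the share of sessions containing only a single action.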
13. Questions to answer!
Efficiency: How to optimize end user experience, infrastructure & costs?
Optimize Top vs Remove Flop Features!
Analyze and optimize page load, network traffic and costs!
Impact: Do we impact our end users' experience?
Is the issue in Content Delivery, Network or Server Side?
Can users use our services? Crashes? Bad or Slow Responses?
Mobile as a First-Class Citizen!
Usage feedback based on mobile versions & user experience
Analyze crashes and optimize server-side resource usage
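"Analyze and optimize page load, network traffic and costs" often starts with auditing response headers for missing compression and caching. This header-audit helper is a sketch: the finding texts are our own wording, and it assumes the caller normalizes header names to lower case:

```python
def audit_headers(headers):
    """Flag common bandwidth wasters from one response's headers.

    `headers` is a dict with lower-case header names (an assumption
    about how the caller normalizes them before the audit).
    """
    findings = []
    encoding = headers.get("content-encoding", "")
    if "gzip" not in encoding and "br" not in encoding:
        findings.append("no compression (enable gzip or brotli)")
    if "cache-control" not in headers:
        findings.append("no Cache-Control (CDN/browser cannot cache)")
    elif headers["cache-control"].startswith("no-store"):
        findings.append("no-store prevents all caching")
    return findings

print(audit_headers({"content-encoding": "gzip",
                     "cache-control": "max-age=3600"}))  # []
print(audit_headers({}))  # two findings: no compression, no Cache-Control
```

Running such an audit across the resources of one page quickly shows where CDN offload, caching or compression would cut transfer volume and cost.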
14. confidential
Impact: Do we impact our end users' experience?
Is the issue in Content Delivery, Network or Server Side?
Can users use our services? Crashes? Bad or Slow Responses?
15. 50,000-Foot View on User Experience
Bird's-eye view of holistic user experience
Green – Satisfied
Yellow – Tolerating
Red – Frustrated
• Line chart represents volume
• Market Open
• 60 User Actions per second
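The Green/Yellow/Red buckets above map onto the standard Apdex model: satisfied below a target time T, tolerating up to 4T, frustrated beyond that. A minimal classifier and score, with T = 3 seconds chosen arbitrarily here:

```python
def classify(response_time, t=3.0):
    """Apdex bucket for one user action, with target time t seconds."""
    if response_time <= t:
        return "satisfied"    # Green
    if response_time <= 4 * t:
        return "tolerating"   # Yellow
    return "frustrated"       # Red

def apdex(times, t=3.0):
    """Apdex score: (satisfied + tolerating/2) / total, in [0, 1]."""
    buckets = [classify(x, t) for x in times]
    sat = buckets.count("satisfied")
    tol = buckets.count("tolerating")
    return (sat + tol / 2) / len(times)

times = [1.2, 2.8, 5.0, 14.0]  # seconds; illustrative user actions
print(apdex(times))  # (2 + 1/2) / 4 = 0.625
```

Charting the three buckets per minute, with a line for action volume, reproduces the kind of view described on this slide.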
22. User Experience
Green – Satisfied
Yellow – Tolerating
Red – Frustrated
API Performance
Green – Fast
Yellow – Warning
Red – Slow
Purple – Error
• Problem with mainframe (HPNS)
• Major outage on proprietary web server
• Notification of the problem at 5:30am
Purple creeping death
24. Efficiency: How to optimize end user experience, infrastructure & costs?
Optimize Top vs Remove Flop Features!
Analyze and optimize page load, network traffic and costs!
26.–30. Daily Traffic Pattern – bucketizing usage (progressive build)
Client Center sees a peak of about 3,800 requests/min against its API.
60 unique calls/functions make up the Client Center API.
~20% of that traffic is ClientCenter/API/Holdings.
~20% of that traffic is ClientCenter/API/ClientDetails.
~20% of that traffic is ClientCenter/API/RecentSearch.
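The bucketizing above can be reproduced from a raw request log: group requests per minute for the daily pattern and peak rate, and compute each endpoint's share of total traffic. The log format below is invented for illustration:

```python
from collections import Counter

def bucketize(requests):
    """Bucket (minute, endpoint) request records.

    Returns (peak requests/min, endpoint -> share of total traffic in %).
    """
    per_minute = Counter(minute for minute, _ in requests)
    per_endpoint = Counter(endpoint for _, endpoint in requests)
    total = len(requests)
    shares = {ep: 100.0 * n / total for ep, n in per_endpoint.items()}
    return max(per_minute.values()), shares

# Invented log: (minute-of-day, endpoint) pairs
log = [
    (540, "ClientCenter/API/Holdings"),
    (540, "ClientCenter/API/Holdings"),
    (540, "ClientCenter/API/ClientDetails"),
    (541, "ClientCenter/API/RecentSearch"),
    (541, "ClientCenter/API/Holdings"),
]
peak, shares = bucketize(log)
print(peak)                                        # 3 requests in minute 540
print(round(shares["ClientCenter/API/Holdings"]))  # 60
```

On real data this is exactly the "three endpoints carry ~60% of traffic" insight the slides describe, which tells you where caching or payload trimming pays off first.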
38. Mobile as a First-Class Citizen!
Usage feedback based on mobile versions & user experience
Analyze crashes and optimize server-side resource usage
41. Questions to answer!
Efficiency: How to optimize end user experience, infrastructure & costs?
Optimize Top vs Remove Flop Features!
Analyze and optimize page load, network traffic and costs!
Impact: Do we impact our end users' experience?
Is the issue in Content Delivery, Network or Server Side?
Can users use our services? Crashes? Bad or Slow Responses?
Mobile as a First-Class Citizen!
Usage feedback based on mobile versions & user experience
Analyze crashes and optimize server-side resource usage
42. How Can You Scale in the New DevOps World?
New Tech Stack and Architectures
3rd Party / CDN
More Apps / Multi-Version
"Twitter-Driven" Load Models
43. Monitoring redefined
Every user, every app, everywhere. AI-powered, full stack, automated.
Full lifecycle – development, test, and production
44. Complete monitoring coverage for all applications
Digital experience analytics | Application performance | Cloud, container & infrastructure
Data sources: agents, wire data, synthetics, log data, real user monitoring
49. A better way
Self-service for all
Automated monitoring
User experience is everything
More time innovating, not monitoring
50. Closing the Ops to Dev Feedback Loop: One Step at a Time!
(1) Basic App Monitoring: Are our applications up and running? What load patterns do we have per application? What is the resource consumption per application?
(2) App Dependencies: What are the dependencies between apps, services, DB and infra? How to monitor "non-custom app" tiers? Where are the dependency bottlenecks? Where is the weakest link?
(3) End User Monitoring: How to monitor mobile vs desktop vs tablet vs service endpoints? How much network bandwidth is required per app, service and feature? Where to start optimizing bandwidth: CDNs, caching, compression? Who is using our apps? Geo? Device? Which features are used? What's the behavior? Where to start optimizing? App flow? Page size? Conversion rates? Bounce rates?
(4) "Soft-Launch" Support: How to deploy and monitor multiple versions of the same app/service? What and how to baseline? Do we have a better or worse version of an app/service/feature? What are the usage patterns for A/B or Green/Blue? What is the difference between versions and features? Where are the performance/resource hotspots? When and where do applications break? Do we have bad dependencies through code or config? How does the system really behave in production? What to learn for future architectures?
(5) Virtualization Monitoring: How to automatically monitor virtual and container instances? What to monitor when deploying into public or private clouds? Does the architecture work in these dynamic environments? Does scale up/down work as expected?
(6) Ready for "Cloud Native" Application Teams: How to alert on real problems and not architectural patterns? How to consolidate monitoring between Cloud Native and Enterprise? Provide "Monitoring as a Service" for Cloud Native application teams.
Ops: We need answers to these questions! Closing the gap to App/Biz/Dev.
Today
51. DXS DevOps Xcelerator will:
• Differentiate your sale
• Create value-based outcomes
• Accelerate growth opportunities
Watch the DXS Enablement Course on Dynatrace University!
Stop by the DXS networking table to learn more!
52. Q & A
Brian Chandler
Sales Engineer @ Dynatrace
@Channer531
Andreas Grabner
Chief DevOps Activist @ Dynatrace
@grabnerandi
Try Dynatrace: http://bit.ly/dtsaastrial
Listen to our podcast: http://bit.ly/pureperf
Read more on our blog: http://blog.dynatrace.com