Believe it or not: 85% of mobile apps are removed after their first use! In this presentation, given at the APM Meetup in Singapore in April 2015, I talked about the challenges, best practices and, especially, the metrics that help you avoid this situation.
Key Points of the Presentation
The two key trends, the "Internet of Things" and "DevOps", play a big role in our lives when we talk about user experience, and especially mobile user experience. In this presentation I tell you which metrics to use to deliver your ideas faster to your mobile end users while also ensuring the right quality and user experience, so that your users stay loyal and don't delete the mobile app after first use.
Mobile User Experience: Auto Drive through Performance Metrics
1. @Dynatrace
- More on http://blog.dynatrace.com
- Dynatrace Free Trial: http://bit.ly/dttrial
Mobile User Experience: Auto Drive through Performance Metrics
Hosted by: Andreas Grabner - @grabnerandi
12. 700 Deployments / Year
50-60 Deployments / Day
10+ Deployments / Day
Every 11.6 seconds
13. Inside the Amazon Numbers!
75% fewer outages since 2006
90% fewer outage minutes
~0.001% of deployments cause a problem
Instantaneous automatic rollback
Deploying every 11.6s
34. Distance Calculation Issues
480km biking in 1 hour!
Solution: Unit Test in Live App reports Geo Calc Problems
Finding: Only happens on certain Android versions
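The "unit test in the live app" idea from this slide can be sketched as a plausibility check on the app's own distance calculation: if the computed speed is physically impossible for a bike ride, report a geo-calc bug instead of trusting the number. The coordinates, threshold and function names below are illustrative assumptions, not code from the actual app:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

MAX_BIKING_KMH = 80.0  # sanity threshold (assumed); anything faster is flagged as a calc bug

def check_segment(lat1, lon1, lat2, lon2, hours):
    """Return (distance_km, ok); ok is False for implausible speeds."""
    dist = haversine_km(lat1, lon1, lat2, lon2)
    speed = dist / hours if hours > 0 else float("inf")
    return dist, speed <= MAX_BIKING_KMH

# Vienna -> Munich "in one hour" would be roughly 355 km/h: clearly a bug, not a ride.
dist, ok = check_segment(48.2082, 16.3738, 48.1351, 11.5820, hours=1.0)
```

In a live app, the failing check would be reported home together with the device's OS version, which is exactly how the "only on certain Android versions" finding could surface.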
40. Using Hibernate results in 4k+ SQL Statements to display 3 items!
Hibernate executes 4k+ statements
Individual execution VERY FAST
But total SUM takes 6s
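What the slide describes is the classic N+1 query pattern: each statement is fast on its own, but the sheer number of round trips adds up. A minimal sketch of the pattern and its fix, using Python's built-in sqlite3 as a stand-in for Hibernate (the schema and row counts are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE price (item_id INTEGER, amount REAL);
""")
conn.executemany("INSERT INTO item VALUES (?, ?)", [(i, f"item{i}") for i in range(3)])
conn.executemany("INSERT INTO price VALUES (?, ?)", [(i, 9.99) for i in range(3)])

executed = []
conn.set_trace_callback(lambda sql: executed.append(sql))  # count every SQL statement

# N+1 pattern: one query for the list, then one extra query per row for its details.
items = conn.execute("SELECT id, name FROM item").fetchall()
for item_id, _ in items:
    conn.execute("SELECT amount FROM price WHERE item_id = ?", (item_id,)).fetchone()
n_plus_one_count = len(executed)   # 1 list query + 3 detail queries = 4

# Fix: fetch everything in a single join; one statement regardless of item count.
executed.clear()
rows = conn.execute(
    "SELECT item.name, price.amount FROM item JOIN price ON price.item_id = item.id"
).fetchall()
joined_count = len(executed)       # 1
```

With 3 items the difference is trivial; with the slide's real-world data it is the difference between 1 statement and 4,000+, which is where the 6 seconds come from.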
45. Remember: Metrics-based decisions
• # Images
• # Redirects
• Size of Resources
• # SQL Executions
• # of SAME SQLs
• # Items per Page
• # AJAX per Page
• Time Spent in API
• # Calls into API
• # Functional Errors
• 3rd Party calls
• # of Domains
• Total Size
• …
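Most of the metrics on this slide can be computed mechanically from a captured request log and SQL log for one page load. A sketch with hypothetical captured data; the field layout and sample values are assumptions for illustration, not any particular tool's format:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical capture for one page load: (url, status, bytes) per request,
# plus the list of SQL statements the server executed for that page.
requests = [
    ("http://shop.example.com/", 200, 5200),
    ("http://shop.example.com/style.css", 200, 1100),
    ("http://cdn.example.com/hero.png", 200, 48000),
    ("http://cdn.example.com/logo.png", 301, 0),      # a redirect
    ("http://cdn2.example.com/logo.png", 200, 3100),
    ("http://ads.example.net/track.js", 200, 900),
]
sql_log = ["SELECT * FROM item WHERE id=?"] * 3 + ["SELECT * FROM user WHERE id=?"]

metrics = {
    "# Images": sum(1 for u, _, _ in requests if u.endswith((".png", ".jpg", ".gif"))),
    "# Redirects": sum(1 for _, status, _ in requests if 300 <= status < 400),
    "# of Domains": len({urlparse(u).netloc for u, _, _ in requests}),
    "Total Size": sum(size for _, _, size in requests),
    "# SQL Executions": len(sql_log),
    "# of SAME SQLs": max(Counter(sql_log).values()),  # catches the N+1 pattern
}
```

The point of the slide stands regardless of tooling: once these numbers are extracted per page or per test, decisions stop being gut feeling and become metrics-based.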
47. Putting it into Continuous Deployment

Test & Monitoring Framework Results + Architectural Data:

Build #    Test Case      Status    # SQL    # Excep    CPU
Build 17   testPurchase   OK        12       0          120ms
           testSearch     OK        3        1          68ms
Build 18   testPurchase   FAILED    12       5          60ms
           testSearch     OK        3        1          68ms
Build 19   testPurchase   OK        75       0          230ms
           testSearch     OK        3        1          68ms
Build 20   testPurchase   OK        12       0          120ms
           testSearch     OK        3        1          68ms

Build 18: We identified a regression. The exceptions are probably the reason for the failed test.
Build 19: Problem fixed, but now we have an architectural regression.
Build 20: Problem solved. Now we have both functional and architectural confidence.

Let's look behind the scenes
On one side we have a drastic change in how end users interact with services through all sorts of devices, whether it is their smartphone, tablet, watch, car, ….
There is rapid change in requirements and user expectations for us as service providers.
The other trend is the DevOps movement, which tries to help us here. It is heavily promoted and pushed by several people, organizations, books and conferences.
If you haven't read The Phoenix Project, please do so. Also make sure you get up to speed with concepts such as Continuous Delivery and how to do it efficiently, as this is what we need in order to keep up with rapidly changing requirements.
Cycle time is the most relevant metric in the software delivery process.
“How long would it take your organization to deploy a change that involves just one single line of code?” Mary Poppendieck
The key goal people want to achieve is to reduce lead time. An automated build pipeline plays a huge role in this, as it gets rid of many of the manual tasks that otherwise hold up the process.
When pushing out features faster it is important to also close the feedback loop, to constantly improve the process and the quality of the developed software.
Several companies have changed the way they develop and deploy software over the years. Here are some examples (numbers from 2011–2014):
Cars: from 2 deployments per year to 700
Flickr: 10+ per day
Etsy: lets every new employee, on their first day of employment, make a code change and push it through the pipeline into production. THAT'S the right approach to the required culture change.
Amazon: every 11.6s
Remember: these are very small changes, which is also a key goal of continuous delivery. The smaller the change, the easier it is to deploy, the less risk it carries, the easier it is to test, and the easier it is to take it out in case it has a problem.
So, our goal is to deploy new features faster, to get them in front of our paying end users or employees.
For many companies that tried this, it also meant that they fail faster.
It's also very important to keep the focus right: building and fixing the things that matter.
We need to listen to general trends, but we also need to put monitoring into our product to figure out what's used and what isn't.
http://fintalk.cdw.com/2015/01/08/financial-it-trends-banks-infographic-2015/
So, we have seen a lot of metrics. The goal now is to start with one. Pick a single metric and take it back to your engineering team (Dev, Test, Ops and Business). Sit down and agree on what this metric means for everyone, how to measure it, and how to report it.
Also remember that for most of the use cases discussed, and the metrics derived from them, we only need a single-user test. Even though we can identify performance, scalability and architectural issues, in most cases we don't need a load test; single-user tests or unit tests are good enough.
Once we have figured out how to get these measures, it is time to automate the capturing, and also to automate quality alerting in case these metrics show that we ran into one of these well-known use cases.
Here is how we do this: in addition to looking at functional and unit test results, which only tell us whether the functionality works, we also look at these backend metrics for every test. With that we can immediately identify whether code changes result in any performance, scalability or architectural regressions, and knowing this allows us to stop a bad build early.
Now that we know which metrics to look at, and how to automate capturing them and detecting regressions from build to build, we simply add them to the continuous delivery pipeline by letting these metrics act as quality gates. We do not let a build move forward if we already know that it has a well-known problem.
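A quality gate of this kind can be as simple as comparing each test's metrics against the previous build. The field names, thresholds and data shapes below are illustrative assumptions, not a real Dynatrace API; the sample numbers mirror the Build 18 to Build 19 step from the slide:

```python
# Hypothetical per-test metrics as they might come out of a test-monitoring framework.
def gate(previous, current, cpu_tolerance=1.5):
    """Return a list of regressions between two builds' per-test metrics."""
    problems = []
    for test, cur in current.items():
        prev = previous.get(test)
        if prev is None:
            continue  # new test, nothing to compare against yet
        if cur["sql"] > prev["sql"]:
            problems.append(f"{test}: # SQL rose {prev['sql']} -> {cur['sql']}")
        if cur["exceptions"] > prev["exceptions"]:
            problems.append(f"{test}: exceptions rose {prev['exceptions']} -> {cur['exceptions']}")
        if cur["cpu_ms"] > prev["cpu_ms"] * cpu_tolerance:
            problems.append(f"{test}: CPU rose {prev['cpu_ms']}ms -> {cur['cpu_ms']}ms")
    return problems

build18 = {"testPurchase": {"sql": 12, "exceptions": 5, "cpu_ms": 60}}
build19 = {"testPurchase": {"sql": 75, "exceptions": 0, "cpu_ms": 230}}
regressions = gate(build18, build19)  # flags the # SQL and CPU regressions
```

Wired into the pipeline, a non-empty result simply fails the build, so a functionally green but architecturally regressed build like Build 19 never moves forward.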
Here are all the benefits:
Only good code reaches production
We eliminate time spent in later stages by identifying problems earlier
We all level up our skills and become a better team
We produce better software faster -> we don’t crash the car