9. Test my new icon.
Tell me which video carousel works best.
Find out if mobile customers like our new menus!
https://flic.kr/p/6rKxaH
10. opticon2017
Program goals
1. Improve the customer experience
2. Optimize conversion events
3. Validate new features
4. Build an experimentation culture
12. Prioritize backlog
• Business strategy
• Customer impact
• Opportunity
• Level of effort to implement
https://flic.kr/p/7VFBK2
13. Prioritize backlog
Use an objective, consistent model to score experiment ideas from your teams.

Business strategy – Does this align to the core initiatives?
• Key business priority
• Committed engineering work

Customer impact – Will this impact customers broadly?
• Scope of update worldwide
• Potential to impact multiple categories and platforms
• Direct response to customer pain point

Opportunity – Is the impact to the business high?
• Estimated amount of impact to revenue or conversion activity
• Impact to overall category

Effort – How much time does it take to build or plan?
• Amount of estimated development time
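The point-based model above (score each category, total the points, map the total to a priority level) can be sketched in a few lines. The 0-5 scale, the inverse treatment of effort, and the priority cutoffs here are invented for illustration, not the actual rubric:

```python
# Minimal sketch of a point-based prioritization model.
# Scales, weights, and cutoffs are assumptions for the example.

def score_idea(business_strategy: int, customer_impact: int,
               opportunity: int, effort: int):
    """Score each category 0-5, total the points, and map the
    total to a priority level for the intake form."""
    # Effort counts inversely: low effort earns more points.
    total = business_strategy + customer_impact + opportunity + (5 - effort)
    if total >= 15:
        priority = "P1"
    elif total >= 10:
        priority = "P2"
    else:
        priority = "P3"
    return total, priority

print(score_idea(5, 4, 4, 1))  # high-value, low-effort idea -> (17, 'P1')
```

Publishing the score and priority level on the intake form is what creates the transparency the model is meant to provide.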
15. Measure progress
# of tests launched
% of traffic tested
% of tests deployed globally
Test win rate
Revenue impact
16. Measure progress
Track key program measurements to monitor month-over-month business progress.

Metric                       | Description                             | Monthly Target | Actual | YTD
# of tests launched          | Measures test velocity                  | 5              | 4      | 12
% of traffic tested          | Tracks testing expected traffic volumes | 1M MUV         | .5     | 4M
% of tests deployed globally | % of campaigns tested outside US        | 50%            | 50%    | 40%
% of INTL tests              | % of total volume                       | 60%            | 50%    | 55%
Test win rate                | Specific to Optimization tests          | 30%            | 40%    | 35%
Revenue impact               | Estimated annual impact to revenue      | 5M             | 6M     | 12M

Example scorecard
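A scorecard like the example above is easy to check programmatically each month. This sketch assumes every metric is higher-is-better, which will not hold for all metrics, and the numbers mirror a few rows of the example table:

```python
# Sketch of a monthly scorecard check; the pass/miss rule
# (actual >= target means on track) is an assumption.

def status(target, actual):
    """Flag a metric as on track when the actual meets the monthly target."""
    return "on track" if actual >= target else "behind"

scorecard = {
    # metric: (monthly target, actual)
    "# of tests launched": (5, 4),
    "Test win rate (%)": (30, 40),
    "Revenue impact ($M)": (5, 6),
}

for metric, (target, actual) in scorecard.items():
    print(f"{metric}: target {target}, actual {actual} -> {status(target, actual)}")
```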
18. Assess Opportunity
Page groups: Home, Explore, Marketing CLEs, Download, Details pages, Buy, PDP pages, How-to, Help pages
1. Identify conversion by page or page groups
2. Calculate estimated growth
3. Measure overall estimated impact
4. Set targets
Use this to define realistic targets that move a program from what's been done to what's possible.
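The four steps above can be sketched numerically. The page groups, traffic volumes, conversion rates, and the 10% assumed lift are all invented for the example:

```python
# Illustrative opportunity sizing; every number here is an assumption.

def added_conversions(visitors, conversion_rate, assumed_lift):
    """Steps 2-3: extra conversions from an assumed relative lift."""
    return visitors * conversion_rate * assumed_lift

pages = {
    # page group: (monthly visitors, conversion rate)  -- step 1
    "Home": (1_000_000, 0.020),
    "PDP pages": (400_000, 0.050),
}

total = sum(added_conversions(v, cvr, 0.10) for v, cvr in pages.values())
print(f"Overall estimated impact: ~{total:,.0f} extra conversions/month")
# Step 4: set targets against this estimate.
```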
25. PRIORITIZE BACKLOG
Capture Ideas
• Always start with a hypothesis – give teams examples of what a good one looks like.
• Give structure to the testing backlog by categorizing ideas by site / page.
• Naming conventions can help keep things organized and easy to report (Beth is an expert!)
26. PRIORITIZE BACKLOG
Scoring
• Potential of the experiment to positively impact your goal
• Impact to the business if the experiment does win
• Level of Effort to implement the experiment in Optimizely
• Love – strategic importance, executive support, biased opinions
27. PRIORITIZE BACKLOG
Selecting & Accepting Ideas
• Active status lets everyone know you've selected this for experimentation and are beginning work.
• Backlog status lets your team know it's still under review by the central group.
28. Democratize (good) ideation
• Train teams on how to write a hypothesis
• Create a process (like Microsoft's) for submission, scoring, and acceptance
• Give constructive feedback and let teammates share in successes
30. MEASURE PROGRAM
Velocity & Win Rate
• Testing Velocity lets you measure your team's operational performance.
• Win Rate helps teams measure the quality of their hypotheses and the impact to the business.
31. Using Program Reporting Right
Don't just look at these reports!
• Set monthly, quarterly, and annual goals for performance
• Measure and report on a monthly basis
• Reassess on an annual basis
• Integrate the learnings into the operations of your experimentation program
33. Insights on your program
Some things you may learn:
• Some teams are more effective than others in moving an idea to experiment quickly
• Win rates may be significantly lower on certain pages of your site
• Testing velocity may slow around holidays
• Certain experiment strategies work more effectively
36. ASSESS OPPORTUNITIES
Expected Value
$1 per roll of the die. Every time you roll a 3, I pay you $5.
If you want to play this game, come see me after this presentation!
Win Rate = 1/6 ≈ 16.7%
Value of Win = $5
Expected Value of Roll = 1/6 × $5 ≈ 83 cents
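The arithmetic behind the die game, as a quick sketch:

```python
# Expected value = win rate × value of win.
win_rate = 1 / 6          # chance of rolling a 3
value_of_win = 5.00       # payout in dollars
expected_value = win_rate * value_of_win
print(f"Expected value per roll: ${expected_value:.2f}")   # $0.83
# At $1 per roll the expected net is about -$0.17, so the house wins.
```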
38. ASSESS OPPORTUNITIES
Expected Value of an Experiment
Every time you win, you get a 5% increase on your revenue, which is $10,000,000, and you win 10% of the time.
Win Rate = 10%
Value of Win = 5% × $10,000,000 = $500,000
Expected Value of Test = 10% × $500,000 = $50,000
IF IT COSTS LESS THAN $50K TO RUN A TEST, RUN IT!
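The same expected-value formula applied to an experiment, as a sketch; the $30K cost figure at the end is an invented example:

```python
# Expected value of a test = probability of a win × value of that win.

def expected_value(win_rate, lift, revenue):
    """Value of a win is the lift applied to the revenue base."""
    value_of_win = lift * revenue
    return win_rate * value_of_win

ev = expected_value(win_rate=0.10, lift=0.05, revenue=10_000_000)
print(f"EV per test: ${ev:,.0f}")  # $50,000

cost_to_run = 30_000  # assumed cost; run the test when cost < EV
print("Run it!" if cost_to_run < ev else "Skip it")
```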
40. ASSESS OPPORTUNITIES
Annual Expected Value of Program

                                         | High Complexity, Low Velocity | Low Complexity, High Velocity
Type of Tests                            | Take longer, bigger changes   | Easier, smaller changes
Win Rate                                 | 20%                           | 10%
Avg Lift                                 | 30%                           | 10%
Expected Value of Test                   | $300,000                      | $100,000
Tests / Year                             | 25                            | 100
Annual Expected Value of Testing Program | $15M                          | $10M
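The annual figure is just tests per year times expected value per test. A minimal sketch using the low-complexity column's numbers:

```python
# Annual expected value = tests per year × EV per test.
# Inputs taken from the low-complexity / high-velocity column above.

def annual_program_value(tests_per_year, ev_per_test):
    return tests_per_year * ev_per_test

low_complexity = annual_program_value(tests_per_year=100, ev_per_test=100_000)
print(f"Low complexity / high velocity: ${low_complexity:,.0f}")  # $10,000,000
```

Comparing the two columns this way is the start of the "portfolio" view: a few big bets versus many small, fast ones.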
41. Assessing Opportunity
As you begin to understand your program:
• Track how win rate and velocity impact your overall program value
• Think about ROI and where you can maximize
• Take a "portfolio" approach to your experimentation program
Because I know why my family and friends want to travel, I know what to look for – I know how to measure success. So, let’s begin.
Customer experience improvements, Conversion Optimization, Validation of new features?
Measuring program success is not about counting how many tests you ran. You first need to start with why your program exists and what function it is serving.
At Microsoft, we have teams across the company who test. Why they test differs, but at the foundation, all of the teams are working to deliver customer insights and direction to different business groups across the company. So, before we begin, you first have to decide why your program exists.
https://www.flickr.com/photos/jsloss/3573190903/
And, if you aren't careful to figure out why you are testing, you can quickly find yourself at a music festival, where everyone wants something different from you!
All of the program goals tie back to delivering customer insights.
https://flic.kr/p/7VFBK2
A prioritization model creates transparency and focus within testing programs.
Ours is a point-based model: we score each category (each of which has a few questions/criteria) and then total the points. The points map to priority levels, which we add to our intake forms, creating transparency for the overall score.
When measuring, the advice is to keep it simple. Look at how you are tracking monthly, and set targets.
Some other ideas to track are: bust rate, implementation rate, time to launch, % of inconclusive tests.
Thank you – and here's to structuring, building, and reporting on your successful experimentation program.
"Our goals can only be reached through a vehicle of a plan." – Pablo Picasso