This webinar aims to show you how to avoid the common pitfalls that can affect your conversion rate optimization strategy. By the end of this webinar you will be able to:
1) Use A/B testing best practices to get the most out of your tests
2) Identify common mistakes preemptively
3) Appreciate how A/B testing fits into your overall conversion rate optimization strategy (based on real cases, such as Swedoffice and Spotify)
Hosted by Simon Dahla from Conversionista (Sweden) and Martijn Janssen, Partner Manager at Optimizely
7. What is A/B-testing?
➔ An optimization method that involves testing different versions of a web page (or app)
➔ The variations are identical except for a few things that might affect a user's behavior
➔ Calculations are made to check that the effect is not a coincidence
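The "calculations" mentioned above usually amount to a significance test on the two observed conversion rates. A minimal sketch using a two-proportion z-test (the function name and all visitor/conversion counts are made-up illustration values, not from the webinar):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.
    Returns the p-value: the probability of seeing a difference at least
    this large if A and B actually convert at the same underlying rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 2.0% vs 2.6% conversion on 10,000 visitors each
p = two_proportion_z_test(200, 10_000, 260, 10_000)
print(f"p-value: {p:.4f}")  # below 0.05: unlikely to be a coincidence
```

Real tools do more than this (multiple goals, sequential looks), but the core question is the same: could random chance alone explain the gap?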
9. An A/B-testing tool in a nutshell
Three primary things:
● Visitors are randomly selected to see different variations (cookies are stored so each visitor keeps seeing the same one)
● Keeping track of your KPIs
● Downloading content from the cloud or redirecting the visitor to a different URL
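The first point, random assignment that stays sticky per visitor, is often implemented by hashing the visitor ID rather than rolling dice on every visit. A small illustrative sketch (the function, experiment name, and visitor IDs are all made up; real tools persist the assignment in a cookie):

```python
import hashlib

def assign_variation(visitor_id: str, experiment: str,
                     variations=("A", "B")) -> str:
    """Deterministically assign a visitor to a variation.
    Hashing (experiment, visitor_id) means the same visitor always
    lands in the same bucket, and the split is roughly even overall."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variations)
    return variations[bucket]

# Same visitor, same answer every time
print(assign_variation("visitor-123", "signup-copy"))
```

Seeding the assignment with the experiment name keeps buckets independent across experiments, so being in "B" for one test says nothing about your bucket in another.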
10. Why should you test?
➔ To learn more about visitors' behaviour in order to formulate new hypotheses
➔ To achieve our online goals, e.g. increased sales or more leads
12. You test everything
Put your findings into buckets instead:
Testing area
If there is an obvious opportunity to shift behaviour, expose an insight or increase the number of conversions
Just Do It (JFDI)
Issues where a fix is easy to identify or the change is a no-brainer
Explore
You need more information to triangulate the problem. If an item is in this bucket, you need to do further digging and gather more data points
(red = not suitable for testing)
13. No (analytics) integration
● Troubleshooting tests
● (Segmenting results)
● Tests that "flip"
● Tests that don't make any sense
● Broken tests
● What drives the difference?
15. Your test will finish in 100 years!
★ Use a test duration calculator
★ https://www.optimizely.com/resources/sample-size-calculator
★ http://apps.conversionista.se/visual-test-duration-calculator/
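The calculators linked above do this properly; as a rough sketch of the math behind them, the common rule of thumb n ≈ 16·p(1−p)/δ² visitors per variation (for 80% power at 5% significance, with δ the absolute effect you want to detect) gives an order-of-magnitude duration. All inputs below are example numbers, not from the webinar:

```python
from math import ceil

def estimate_duration_days(baseline_rate, mde_relative, daily_visitors,
                           n_variations=2):
    """Rough test-duration estimate, not a substitute for the linked
    calculators. Uses n ~= 16 * p * (1 - p) / delta^2 per variation
    (80% power, alpha = 0.05, two variations by default)."""
    delta = baseline_rate * mde_relative          # absolute effect to detect
    n_per_variation = 16 * baseline_rate * (1 - baseline_rate) / delta ** 2
    total = n_per_variation * n_variations
    return ceil(total / daily_visitors)

# 3% baseline conversion, detect a 10% relative lift, 2,000 visitors/day
print(estimate_duration_days(0.03, 0.10, 2000), "days")
```

Note how fast the duration grows as the detectable effect shrinks: halving the minimum detectable effect quadruples the required sample, which is exactly how tests end up "finishing in 100 years".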
17. Optimizely’s Stats Engine
● A new way of measuring significance in a dynamic environment
● Make a decision as soon as you see significant results
● Test many goals and variations accurately at the same time
● No extra work for experimenters
Results:
                                                           Traditional statistics   Stats Engine
Percent of tests with winners or losers declared           36%                      22%
Percent of tests with a change in significance declaration 37%                      4%
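One way to see why so many traditionally-run tests flip their significance declaration: under classical fixed-horizon statistics, peeking at the results repeatedly and stopping at the first p < 0.05 inflates the false-positive rate well above the nominal 5%, even in an A/A test with no real difference. A small simulation (all parameters are made-up illustration values; this is not how Stats Engine itself works):

```python
import random
from math import sqrt, erf

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value from a two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:                       # no conversions yet: no evidence
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def peeking_false_positive_rate(n_tests=200, peeks=10, step=500, seed=1):
    """Run A/A tests (both sides truly convert at 5%), peeking after
    every batch of visitors and stopping at the first 'significant'
    result. Returns the share of tests wrongly declared significant;
    nominally it should be about 5%."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_tests):
        conv_a = conv_b = n_a = n_b = 0
        for _ in range(peeks):
            conv_a += sum(rng.random() < 0.05 for _ in range(step))
            conv_b += sum(rng.random() < 0.05 for _ in range(step))
            n_a += step
            n_b += step
            if p_value(conv_a, n_a, conv_b, n_b) < 0.05:
                false_positives += 1
                break
    return false_positives / n_tests

print(f"false positive rate with peeking: {peeking_false_positive_rate():.0%}")
```

Sequential methods like Stats Engine are designed so that looking at the results early does not cause this inflation, which is what makes "decide as soon as you see significance" safe.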
18. Your hypothesis is crap
Use input from:
● Web analysis
● Search analysis
● A/B-testing
● Segmenting
● Session replay
● Eyetracking
● User testing
● Form analytics
● Surveys
● Customer service
● Customer contacts
● Competitors
24. The analysis with Google Analytics
In the funnel visualization reports we found a bigger drop-off between the signup and thank-you pages than between the landing page and the signup page.
25. The Hypothesis
Since we have observed [a big drop-off between the signup and the thank-you page] by [analyzing the data in Google Analytics and Crazy Egg], we want to [move up the "Instructions"], which should lead to [more people signing up]. The effect will be measured by [the number of people signing up].
http://dah.la/hypothesis-creator
The hypothesis formula
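The bracketed formula above is essentially a fill-in-the-blank template, so it can be captured as a plain format string (the field names are my own, not from the hypothesis creator tool):

```python
# Template mirroring the hypothesis formula from the slide
HYPOTHESIS = (
    "Since we have observed {observation} by {method}, "
    "we want to {change}, which should lead to {expected_effect}. "
    "The effect will be measured by {metric}."
)

print(HYPOTHESIS.format(
    observation="a big drop-off between the signup and thank-you pages",
    method="analyzing the data in Google Analytics and Crazy Egg",
    change="move up the instructions",
    expected_effect="more people signing up",
    metric="the number of people signing up",
))
```

Forcing every test idea through the same blanks is the point: a test without an observation, an expected effect, and a metric is not yet a hypothesis.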
27. Variation 1 outperforms the Original
(chart: results for two micro-conversion goals and one macro-conversion goal)
29. Your tests are not prioritized
(matrix: Opportunity, low to high, plotted against Effort, low to high)
Opportunity factors to take into consideration:
➔ Potential
➔ Scale
➔ Goal
Effort factors to take into consideration:
➔ Complexity
➔ Resources
➔ Decisions
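The opportunity-versus-effort matrix can be reduced to a simple scoring exercise. A hypothetical sketch (the scoring scale, weighting, and backlog items are my own illustration; the factor names follow the slide):

```python
def prioritize(tests):
    """Rank test ideas by opportunity relative to effort.
    Each factor is a 1-10 judgment call; highest ratio goes first."""
    def score(t):
        opportunity = (t["potential"] + t["scale"] + t["goal"]) / 3
        effort = (t["complexity"] + t["resources"] + t["decisions"]) / 3
        return opportunity / effort
    return sorted(tests, key=score, reverse=True)

backlog = [
    {"name": "Move instructions up", "potential": 8, "scale": 7, "goal": 9,
     "complexity": 2, "resources": 2, "decisions": 1},
    {"name": "Redesign checkout",    "potential": 9, "scale": 9, "goal": 9,
     "complexity": 9, "resources": 8, "decisions": 7},
]
print([t["name"] for t in prioritize(backlog)])
```

The exact numbers matter less than the discipline: a big redesign may have the highest potential, yet still sit below a cheap, high-opportunity change in the queue.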
31. Original
● Premium Trial page (US)
● High drop-off
● Users are asked to provide credit card details to start the premium trial
32. Input
● User testing
● Short survey →
○ Data shows that the primary reason not to start the premium trial is:
■ Users do not want to give away their credit card details
33. Test Hypothesis
"Since, through DATA ANALYSIS, we have observed that a large share of those who abandon the premium flow (in the data) do so because they DO NOT WANT TO GIVE AWAY their payment details, we will explain WHY they have to provide them, which will lead to more of them doing so. We will measure this in the number of purchases."
http://dah.la/hypothesis-creator
The hypothesis formula
34. Hypothesis: "Give the user a reason..."
B. We only use this to verify your account, you won't be charged anything for your trial
C. We need this because our music deals only allow free trials for users that are credit card or PayPal holders
D. We need this just in case you decide to stay Premium after your free month
35. Test Results
(chart: conversion per variation on the credit card page and the thank-you page)
C. "Because of our music..."
B. "Verify your account…"
D. "If you want to continue..."
A. Original
53. Results
● 39% increase in the number of sales leads
● Bounce rate decreased
● Google Quality Score went up
● Cost per lead went down
55. Don't take risks: be aware of bugs
- Make sure not to direct all traffic to a "broken" or poorly performing variation
- Preview your variations in cross-browser tests
- Use phased rollouts to avoid dissatisfaction
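A phased rollout can reuse the same deterministic-hashing idea as visitor bucketing: expose only a percentage of visitors, then raise it as confidence grows. A hypothetical sketch (function and feature names are made up):

```python
import hashlib

def in_rollout(visitor_id: str, feature: str, percent: int) -> bool:
    """Phased rollout: expose only `percent`% of visitors to a change.
    Deterministic per visitor, so raising the percentage only adds new
    visitors -- nobody who saw the change flips back to the old page."""
    digest = hashlib.sha256(f"{feature}:{visitor_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Start at 10%, watch for bugs, then raise to 50% and 100%
exposed = sum(in_rollout(f"v{i}", "new-signup", 10) for i in range(10_000))
print(f"{exposed / 100:.1f}% of visitors exposed")
```

If a bug does slip through at 10%, it has hurt a tenth of the traffic instead of all of it, and dropping `percent` back to 0 is an instant kill switch.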
63. Summary: Common Testing Mistakes
➔ You test everything on your site
➔ No integrations
➔ Your test will finish in 100 years
➔ You draw conclusions based on an ongoing test
➔ You put too little effort into your hypothesis
➔ Your test isn't prioritized
➔ You don't learn anything
➔ You change everything at once
➔ You don't account for different traffic sources
➔ You aren't aware of bugs
64. Key takeaways
1. The only bad test is the one where you don't learn anything
2. Expect the unexpected
3. Only test where you can trigger a behaviour change, where decisions are made
4. Formulate your test hypothesis WELL !important
REMEMBER & DON'T FORGET