In our latest webinar, we join forces with Magento and Signature Hardware to discuss how to drive revenue with A/B testing. Learn more at www.blueacorn.com
5. Why A/B Test?
1. To learn more about your customers
2. To drive measured KPI improvements
#optimizeba
6. What do we A/B test?
User Interaction & Interface (the experience: the process of buying something)
• Workflows
• Features
• Visual hierarchy
• Usability
Psychological Factors (the offer: the reasons to buy something)
• Voice
• Promotions
• Anxiety
• Urgency
• Buying modality
7. Are you sure about that?
How do you know if the features you're changing actually help or hurt?
17. When changing anything on your site
DOs
• Start with a hypothesis about that change
• Use behavioral data as evidence for why the change is needed
• Gather qualitative data about the change you're proposing
• A/B test the change itself to validate your hypothesis
DON'Ts
• Come up with a list of ideas to change on your site based solely on opinions
• Expect all of your changes to improve the experience
• Invest your resources foolishly; take calculated bets instead
33. When improving your mobile experience
DOs
• Gather qualitative data about your mobile experience and identify areas for improvement
• Isolate "experience" A/B tests for mobile separately from desktop
• Analyze A/B test data down to the device type
DON'Ts
• Go solely by your designer's opinion of what the best "mobile-first" experience is
• Think that just having a responsive site means you've handled mobile
• Assume modal boxes work effectively in mobile experiences
41. When offering promotions
DOs
• Test the messaging of your promotions to find what resonates best
• Find opportunities to reinforce the promotion throughout the experience
• Strike a balance so the promotion is seen but not obtrusive
DON'Ts
• Assume that a homepage promotion or global banner is effective
• Think that you can't get more out of your promotions
51. When A/B Testing
DOs
• Commit to testing over time
• Use qualitative and quantitative data to determine what to test
• Find opportunities for compounding wins
• Fundamentally understand the value of testing: learning as well as wins
DON'Ts
• Expect huge lifts from your tests
• Take these specific test examples and change things on your site without testing them yourself
• Test random things just because you can
• Expect a majority of your tests to be "winners"
Thank you for that great introduction, Emily, and for partnering with us to bring this information to those who were unable to attend the session at Imagine. We've updated the content and are glad to have Sean join us as well.
What we’re going to share with you today is over a dozen of our most recent A/B tests that we’ve run with various clients.
Sean has been gracious enough to share his story as one such client of ours: a merchant's perspective on A/B testing.
All of the test results we're sharing with you today were run in one of our three recommended testing platforms (Monetate, Qubit, or Optimizely) and have all reached statistical significance, with tests typically running anywhere from two weeks to two months in duration.
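For anyone who wants to sanity-check significance on their own numbers, here is a minimal sketch of the kind of two-proportion z-test that underlies confidence figures like these. It is a simplification in TypeScript with made-up traffic numbers; Monetate, Qubit, and Optimizely each use their own, more sophisticated statistics engines.

```typescript
// Minimal two-proportion z-test for comparing conversion rates in an
// A/B test. Simplified illustration only; not the stats engine used by
// Monetate, Qubit, or Optimizely.

// Standard normal CDF via the Abramowitz & Stegun approximation.
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp(-(z * z) / 2);
  const tail =
    d * t * (0.3193815 + t * (-0.3565638 + t * (1.7814779 + t * (-1.8212560 + t * 1.3302744))));
  return z > 0 ? 1 - tail : tail;
}

// Returns the two-tailed confidence (e.g. 0.94 for "94% statistical
// confidence") that variation B truly differs from control A.
function abTestConfidence(
  visitorsA: number, conversionsA: number,
  visitorsB: number, conversionsB: number,
): number {
  const pA = conversionsA / visitorsA;
  const pB = conversionsB / visitorsB;
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  const z = (pB - pA) / se;
  return 1 - 2 * (1 - normalCdf(Math.abs(z)));
}

// Hypothetical numbers for illustration: 3.0% vs. 3.6% conversion.
console.log(abTestConfidence(10000, 300, 10000, 360)); // ~0.98
```

In this hypothetical example the variation clears the 90-95% confidence thresholds cited throughout the results below; with a smaller lift or less traffic, the same test can easily take the full two months to reach significance.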
SEAN: Can you tell us a little about why you test?
SEAN: Can you tell us how you determine what to test (or how you develop a testing roadmap)?
Intro then…
SEAN: How have you changed the approach at Signature Hardware when we look at your own roadmap of features and enhancements you make to the site?
This was actually two tests
12.3% lift in revenue, 96.3% statistical confidence
Ironically, B and C performed worse than A
Lots of product turnover meant there wasn't enough time to generate customer reviews before a product was replaced by another
This left the site with lots of products that had few or no reviews
The hypothesis was that this created negative social proof
10.5% increase in revenue per visitor
94% statistical confidence
Despite what some of the biggest review platforms tell you, test for yourself how effective reviews are
Need Supply new product page layout
SEAN: Can you walk us through this test and what you were trying to determine here, along with the result? (Resulted in a 10.5% increase in revenue per visitor.)
94% statistical confidence
Behavioral data could come from your own customers or from published research and studies
SEAN: Anything to add?
Kevin: intro test; SEAN: Can you walk us through this test?
11.1% increase in conversion rate
6.6% increase in product click-throughs
6.2% increase in conversion rate
8.5% increase in product page click-throughs
32.2% increase in RPV - 95% stat sig
>100% increase in homepage click engagement - 99% stat sig
21.8% increase in conversion rate - 90% stat sig
Here’s another example of a mobile test. SEAN: Can you walk us through this one?
10.5% increase in revenue per visitor
6.2% increase in revenue
AJAX add to cart
6.9% increase in mobile RPV
Interestingly, this same treatment on desktop performed worse: -9.1%
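For anyone unfamiliar with the treatment, the sketch below shows the general shape of an AJAX add-to-cart: the item is added in the background so the shopper never leaves the product page. The "/cart/add" endpoint, payload, and ".minicart-count" selector are hypothetical; the actual tested implementation was specific to the client's platform.

```typescript
// Sketch of an AJAX add-to-cart handler. Hypothetical endpoint and
// markup; illustrative only, not the client's production code.
async function addToCart(productId: string, qty: number): Promise<void> {
  const response = await fetch("/cart/add", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ productId, qty }),
  });
  if (!response.ok) {
    throw new Error(`Add to cart failed: ${response.status}`);
  }
  const cart = await response.json();
  // Update the mini-cart badge in place instead of reloading the page.
  const badge = document.querySelector(".minicart-count");
  if (badge) badge.textContent = String(cart.itemCount);
}
```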
11.8% increase in RPV - 79% stat sig
4.7% increase in add to cart rate - 99% stat sig
1.2% decrease in bounce rate - 97% stat sig
11.6% increase in conversion rate - 88% stat sig
15.1% increase in new visitor conversion rate - 81% stat sig
SEAN: Anything to add here?
Raise your hand when you see the promo code on this product page
8.9% lift in RPV
VIC promo code in product details
5.3% increase in conversion rate
SEAN: Can you talk about this test, where we added a free shipping notification in the header?
2.7% increase in RPV
SEAN: Anything to add?
BARCO cart page
38.4% increase in RPV
23.7% increase in conversion rate
VIC pricing removing the .00
12.2% increase in RPV
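As a minimal sketch of this kind of price treatment (names and details here are illustrative, not the client's actual code), whole-dollar prices drop the trailing ".00" while non-whole prices keep their cents:

```typescript
// Drop a trailing ".00" so whole-dollar prices read as "$249" rather
// than "$249.00". Illustrative only.
function formatPrice(amount: number): string {
  return Number.isInteger(amount)
    ? `$${amount}`               // "$249" instead of "$249.00"
    : `$${amount.toFixed(2)}`;   // "$249.95" stays unchanged
}

console.log(formatPrice(249));    // "$249"
console.log(formatPrice(249.95)); // "$249.95"
```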
Affirm payment options – for one particular client or across the board
Offering a payment option through Affirm increased AOV by 118%
CR increased 7%
SEAN: Additional thoughts?
Sean, I'm going to kick things off with the first question. When we looked at some of your results, we saw mostly single-digit gains. These don't seem like the typical flashy case studies where someone changes a button color and sees a 300% lift. Why do you consider your A/B testing to be successful?