Despite its flaws, the Net Promoter Score (NPS) is often chosen by management for measuring customer satisfaction. Learn ways to mitigate damage from a poorly implemented NPS survey, enriching it with data that really matters to users and your stakeholders—while staying in the good graces of those bewitched by the traditional NPS:
1. What’s the NPS and how is it calculated?
2. What are its strengths and weaknesses?
3. How can you make the NPS more trustworthy and interpretable?
4. What other overall performance measures could replace or complement the NPS?
• Traditional “Voice of the Customer” (VOC) research
• “Outcome Driven Innovation” (ODI)
• “Outcome Mapping,” a new model that addresses the weaknesses of the other approaches: identify and measure key outcomes and opportunities, then predict how changes will impact future performance.
This presentation will be equal parts survey design, data visualization, user needs research, and prioritization process.
2. Ted Boren
Ted has been doing UX research and design for over 20
years, helping make useful, usable, and enjoyable
experiences. He’s also passionate about amplifying the
voice of the customer in feature prioritization. This is his
fifth presentation at UXPA. Past topics have included
prompting during “think-aloud” studies (the subject of his
masters’ thesis and an influential article) and true intent
studies.
Ted has an MS from the University of Washington’s
Department of Human Centered Design and Engineering.
Past employers include Microsoft, the Church of Jesus
Christ of Latter-day Saints, and Instructure. Now at
Ancestry.com he enjoys connecting people to their past.
3. 1. What is the NPS?
2. Where does it struggle?
3. How can I make it work
better?
4. What else could I do instead?
[Chart: approximate time split across the four questions — 10% / 20% / 20% / 50%]
4. “The One Number You Need to Grow”...
is not Customer Satisfaction (CSAT).
5. “Most customer satisfaction surveys aren’t very
useful. They tend to be long and complicated,
yielding low response rates and ambiguous
implications that are difficult for operating
managers to act on.”
“Our research indicates that
satisfaction lacks a consistently
demonstrable connection to actual
customer behavior and growth.”
“[CSAT scores] are rarely challenged or
audited because most senior executives,
board members, and investors don’t take
them very seriously. That’s because their
results don’t correlate tightly with profits
or growth.”
“Surprisingly, the most
effective question wasn’t
about customer satisfaction or
even loyalty per se.”
December 2003
by Frederick F. Reichheld
7. Detractor, Passive, or Promoter?
“On a scale of 0 to 10,
how likely are you to recommend [Product/Brand]
to your friends and colleagues?”
Score = % Promoters − % Detractors (range: −100 ← → +100)
[Scale diagram: 0–6 Detractors, 7–8 Passives, 9–10 Promoters]
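The scoring rule above fits in a few lines. A minimal Python sketch (the `nps` helper and the sample ratings are illustrative, not from the deck):

```python
def nps(ratings):
    """Return NPS from 0-10 ratings: % Promoters (9-10) minus
    % Detractors (0-6), on a -100..+100 scale. Passives (7-8) are ignored."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(nps([10, 9, 8, 7, 6, 0]))  # 2 promoters, 2 detractors -> 0.0
```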
8. “a powerful way to
measure and manage
customer loyalty”
“By substituting a single question—blunt tool
though it may appear to be—for the complex
black box of the typical customer satisfaction
survey, companies can actually put consumer
survey results to use and focus employees on
the task of stimulating growth.”
“getting customers enthusiastic
enough to recommend a company
appears to be crucial to growth”
“the scale [is] so easy to understand that
even outsiders, such as investors,
regulators, and journalists, would grasp
the basic messages without needing a
handbook and a statistical abstract.”
“It [is] intuitive to customers when they
assign grades and to employees and
partners responsible for interpreting
the results and taking action.”
December 2003
by Frederick F. Reichheld
10. “Not surprisingly, ‘would
recommend’ didn’t predict relative
growth in industries dominated by
monopolies and near monopolies,
where consumers have little
choice.” *
“Asking users of [a system they were
forced to use] whether they would
recommend the system to a friend or
colleague seemed a little abstract, as
they had no choice in the matter.” *
“In certain cases, we found small
niche companies that were
growing faster than their
net-promoter percentages would
imply.”
“Although the ‘would recommend’
question generally proved to be the
most effective in determining
loyalty and predicting growth, that
wasn’t the case in every single
industry. … In a few situations, it
was simply irrelevant.”
December 2003
by Frederick F. Reichheld
11. NPS uses some “funny math”
As far as the NPS score is concerned:
● 0 = 1 = … = 6 (all count simply as Detractors)
● 7 = 8 (Passives)
● 9 = 10 (Promoters)
Consequently, wildly different data sets can give you the same NPS score.
[Scale diagram of sample responses] NPS = 0 (Avg = 5.3)
12. A second, very different data set yields the identical score:
[Scale diagram of sample responses] NPS = 0 (Avg = 8.0)
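One way to see the “funny math” for yourself: two hypothetical response sets (invented for illustration, not the deck’s data) produce the identical NPS of 0 while their means differ by two full points:

```python
from statistics import mean

def nps(ratings):
    """NPS: % Promoters (9-10) minus % Detractors (0-6), -100..+100."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

a = [0, 0, 0, 10, 10, 10]  # polarized: equal promoters and detractors
b = [3, 7, 7, 8, 8, 9]     # mostly passives, one promoter, one detractor
print(nps(a), mean(a))  # 0.0 5.0
print(nps(b), mean(b))  # 0.0 7.0
```

Tracking the mean alongside the NPS (as the deck later suggests) would distinguish these two situations.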
13. NPS doesn’t identify problems or strengths
It’s not diagnostic.
Where am I supposed
to focus?
14. Other Critiques
“How Harmful Is the Net Promoter Score?” Jeff Sauro*
“Net Promoter Score Considered Harmful (and What UX
Professionals Can Do About It)” Jared Spool
“A Longitudinal Examination of Net Promoter and Firm Revenue
Growth” Keiningham, Cooil, Andreassen, and Aksoy
… and many more.
15. So why is the NPS
so darned
attractive to
management?
Well...
“It’s the
one number
you need.”
28. ACME, how do I love you?
Let me count the ways…
● You make my workflow
a dream!
● I can always call you when I need!
● You anticipate what I want.
With Love, NPS 9-10
Dear ACME, I care about you
enough to tell you the truth.
● You’re great, until I get
under pressure… then you
don’t seem to really understand.
● What would I say to your next
significant other?
● Are we really meant for each
other? Sometimes I catch you
looking at that other audience…
● Tell me we can make it work...
Sincerely, NPS 6-8
To whom it may concern --
Recommend you? Well...
● You’re all I know... so I can’t
recommend anything else. I
don’t have a choice...
● I don’t get opportunities to
“recommend” you; everyone I know
already uses it.
So it’s not a 10. 8? 6?? 0??? Sorry I
couldn’t be more help.
Confusedly Yours,
“What’s NPS?”
[Detractor postcard (NPS 0–5): text illegible in this extraction]
“Postcards
from the
NPS…”
29. Code and count
those themes...
… and overlay NPS,
satisfaction, or
business metrics.
(At its root, still
qualitative, but may
satisfy stakeholder
craving for “numbers.”)
[Chart of coded themes: Ease of Use, Account Management, Content Creation, etc.]
31. OUTCOMES
should “cover the ground”:
More or Less IMPORTANT
CHEAP or EXPENSIVE
Things that are AGREED on or
ARGUED about
Things your product does:
● WELL
● POORLY
● NOT AT ALL
32. Sample Outcomes
● Minimize dust generated by a circular saw
● Reduce the amount of cleaner required
● Minimize time under anesthesia
● Decrease time to make an assignment
● Improve likelihood of choosing the best action
Make sure the outcome statement is straightforward; the format recommended by ODI is: “direction + noun + verb (+ context)”
35. Commonsense claim:
Ideally, the more
important an outcome,
the more essential that
users are satisfied with it.
But where to focus?
Outcome
Importance vs
Satisfaction
37. Outcome Driven Innovation (ODI)
Anthony Ulwick, Strategyn, HBR 2002
Opportunityᵃ = 10 × (% Importantᵇ + (% Importantᵇ − % Satisfiedᶜ))
Equivalently: Opportunityᵃ = (20 × % Importantᵇ) − (10 × % Satisfiedᶜ)
a. But if % Satisfied > % Important, then the gap (% Important − % Satisfied) is treated as 0. In other words, % Satisfied is capped at % Important, so Opportunity = 10 × % Important.*
b. % Important = “top 2 box” (% of 4s or 5s on a 5-point scale)
c. % Satisfied = “top 2 box” (% of 4s or 5s on a 5-point scale)
Score range: 0 ← → 20
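The scoring rule above, including the footnote-a floor, can be sketched as follows. Top-2-box percentages are expressed as fractions 0–1 so the score lands on the 0–20 scale; the function name and sample values are mine, not Strategyn’s:

```python
def opportunity(pct_important, pct_satisfied):
    """ODI opportunity score on a 0-20 scale.

    Inputs are top-2-box percentages as fractions (0-1). The
    importance-satisfaction gap is floored at zero, so a highly
    satisfied important outcome scores 10 * % Important rather
    than being penalized below that."""
    gap = max(pct_important - pct_satisfied, 0)
    return 10 * (pct_important + gap)

print(opportunity(0.8, 0.3))  # 10 * (0.8 + 0.5) = 13.0
print(opportunity(0.8, 0.9))  # gap floored at 0 -> 8.0
```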
38. “Opportunity
Landscape”
(ODI)
1. Loses data by
consolidating the “top 2
box” scores
2. Visualization hard to
connect to data
collected and derived
3. Hides opportunities for
cost reduction
Original chart by
Strategyn
39. “Opportunity
Landscape”
(ODI)
This rule “protects”
important outcomes
from being neglected if
they are highly
satisfied (upper right).
But it also masks
potential
opportunities for cost
reduction in the
upper- to mid- left.
if % Satisfied > % Important,
then for the Opportunity Score,
pretend % Satisfied = % Important.
40. Back to a
simpler chart
then. Focus?
“The best opportunities from
the customer perspective are in
the lower right.”
Quadrant labels (what VoC might say): Improve (underserved), Maintain, Harvest (overserved), Monitor.
But what about the middle?
Quadrants: axes are the median importance and median satisfaction (sensitive to the data set). See Katz for an example.
41. Where to
Focus?
Central axis: distance from the diagonal indicates the level of absolute misalignment between importance and satisfaction (Imp − Sat).
“The best opportunities from the customer perspective are the furthest away from the diagonal line, underneath.”
Regions: Must Improve (underserved), Could Improve, Could Harvest (overserved).
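Ranking outcomes by the signed Imp − Sat gap is simple to operationalize. A minimal sketch with made-up outcome names and ratings (a 1–7 scale is assumed here, not specified by the deck):

```python
def gap(importance, satisfaction):
    """Signed misalignment: positive = underserved, negative = overserved."""
    return importance - satisfaction

# Hypothetical (importance, satisfaction) ratings per outcome
outcomes = {
    "Reduce export time": (6.5, 3.0),
    "Simplify login": (5.0, 6.0),
    "Improve search": (6.0, 5.5),
}

# Largest positive gap first: "Must Improve" candidates rise to the top
for name, (imp, sat) in sorted(outcomes.items(),
                               key=lambda kv: gap(*kv[1]), reverse=True):
    print(f"{name}: gap {gap(imp, sat):+.1f}")
```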
45. What if it’s a
very large
change?
Based on changes in
satisfaction, can
something very
important become very
unimportant, and vice
versa?
(Assuming a 1:1 relationship
between Satisfaction and
Importance.)
46. Is importance
less changeable
(more durable)
than
satisfaction?
What if the change in
importance is half as big
as change in satisfaction
(1:2)?
48. ODI
revisualized?
This largely lines up
with ODI assumptions
about importance: the
slope would be 1:2, but
would always intersect
at the origin.
Does this static,
arbitrary ratio and
intercept make sense?
50. Use the data to
describe its
own axis...
Trendline: Distance from
the dataset’s trendline
indicates current level of
misalignment between
importance and
satisfaction in the product.
“For now, the best
opportunities are the furthest
away from the dataset’s
trendline, underneath.”
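Letting the data “describe its own axis” amounts to fitting a least-squares trendline through the importance/satisfaction points and ranking outcomes by their residuals. A sketch under that interpretation (the data points are invented):

```python
from statistics import mean

def trendline_residuals(points):
    """Fit satisfaction on importance by least squares; return
    (slope, intercept, residuals). A negative residual means the
    outcome sits below the trendline, i.e. relatively underserved."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept) for x, y in points]
    return slope, intercept, residuals

# Hypothetical (importance, satisfaction) pairs for four outcomes
points = [(1, 2), (2, 3), (3, 5), (4, 4)]
slope, intercept, residuals = trendline_residuals(points)
print(round(slope, 2), round(intercept, 2))  # 0.8 1.5
```

The most negative residual marks the best opportunity “for now”; refitting as the product matures moves the axis with the data.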
51. Less mature
● Lower average
satisfaction
● Less tightly clustered
along axis
53. “When the light turns
green, you go.
When the light turns red,
you stop.
But what do you do
when the light turns blue
with orange and
lavender spots?”
~Shel Silverstein
What if the data
is bonkers?
59. 20 Actual
Outcomes
● Only one is less than a 5 average
importance, despite trying to
choose some “less important”
items for comparison.
● Similar for Satisfaction.
● Most important outcome is the
best served.
● But the third most important is
among the worst served.
● Cartesian distance from the
trendline is by far the longest.
● No real candidates for harvesting.
65. What if you married data collection for
NPS and Desired Outcomes...
● Started tracking the mean rating
(0-10) in addition to the formal NPS
score?
● Analyzed the “why did you give us
that rating” data to identify important
customer intents or outcomes?*
● Started asking each respondent
about importance and satisfaction for
3-4 of those outcomes?
Over time, you would have an evolving
Battle Map:
● What to focus on now
● What to maintain
● Where to cut costs
● Where to focus for Audience A
versus Audience B
● Where the next line of attack is likely
to be