Joanne Mechling of Market Strategies International describes the results of an experiment she conducted to test the impact of interactive graphics and gamification on online surveys, with surprising results.
Evolution of Research by Joanne Mechling, Market Strategies
1. Another Day, Another Survey
The Continued Evolution of Online Research
MRA Northwest Chapter 2012 Educational Conference
May 8, 2012
Reg Baker & Joanne Mechling
2. Overview
1. The respondent engagement problem
2. The experiment
3. Implications of findings
4. A restrained approach to interactivity
5. Meeting the increasing demand for online respondents
6. Summary
7. Q&A
5. But online MR has a problem
Speeding
Straightlining
Demand
Random responding
Parsimonious verbatims
Participation
6. Engagement from a Survey Research Perspective
“Respondent motivation declines as the interview continues beyond an optimal point.”
-- Cannell & Kahn (1968)
“Respondent burden . . . (1) the length of the interview; (2) the amount of effort required of the respondent; (3) the amount of stress on the respondent; and (4) the frequency with which the respondent is interviewed.”
-- Bradburn (1977)
“Respondents answering items that are included in large sets toward the later parts of a long questionnaire are more likely to give identical answers to most or all of the items, compared to those responding to items in smaller sets or in shorter questionnaires.”
-- Herzog & Bachman (1981)
“Instead of seeking optimal solutions to problems, people usually seek solutions that are simply satisfactory or acceptable in order to minimize psychological costs.”
-- Krosnick & Alwin (1987)
7. Engagement from a Technology Perspective
“A quality of user experience that emphasizes the positive aspects of interaction, and in particular the phenomena associated with being captivated by technology (and so being motivated to use it). Successful technologies are not just used, they are engaged with; users invest time, attention, and emotion into the equation.”
-- Attfield, Kazai, Lalmas & Piwowarski (2011)
“The more engaged users are, the more features an application can sustain. But most users have low commitment -- especially to websites, which must focus on simplicity, rather than features.”
-- Nielsen (2007)
“Leverage knowledge in the head… Performance can be faster and more efficient.”
-- Norman (1988)
8. Current Schools of Thought
1. Use of interactive features such as slider bars, drag-and-drops, and other Flash-like objects increases respondent enjoyment, yields better-quality data, and improves survey participation (Reid, Morden & Reid, 2007).
2. Respondents prefer standard HTML formats (Miller, 2009), and extensive use of interactive features can have unpredictable impacts on response, degrading data quality (Malinoff, 2010).
3. Use of game-like features in online surveys increases engagement and encourages more thoughtful responding and better-quality data (Puleston and Sleep, 2011).
10. 4 Survey Types
Text only · Decoratively visual · Functionally visual · Gamified
[Screenshots of the same question rendered in each of the four treatments]
11. Method
Replicate Edison Electric Institute study.
US adults 18+ from the ResearchNow panel.
Random assignment to design treatments:
• Text only: n=251
• Decoratively visual: n=251
• Functionally visual: n=252
• Gamified: n=253
Fieldwork: June 28–July 5, 2011. Participation rate: 8%.
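The random assignment described above can be sketched in Python. This is a minimal illustration under assumptions of my own: the deck does not say how the panel actually allocated respondents, and the function and treatment names here are hypothetical.

```python
import random

TREATMENTS = ["Text only", "Decoratively visual", "Functionally visual", "Gamified"]

def assign_treatments(respondent_ids, seed=42):
    """Randomly assign each respondent to one of the four design treatments.

    A fixed seed keeps the assignment reproducible. Note that simple
    per-respondent randomization yields roughly (not exactly) equal groups;
    real fieldwork often uses quotas to balance cell sizes (e.g. 251/251/252/253).
    """
    rng = random.Random(seed)
    return {rid: rng.choice(TREATMENTS) for rid in respondent_ids}

# 1,007 respondents, matching the four cell sizes combined
assignments = assign_treatments(range(1007))
counts = {t: sum(1 for v in assignments.values() if v == t) for t in TREATMENTS}
```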
15. What’s a game?
• Back story
• Game-like aesthetic
• A challenge
• Rules for play
• Rules for advancement
• Rewards for accomplishment
[Screenshot of the gamified treatment]
17. Hypotheses
• Text only -- H0: Lowest satisfaction.
• Decoratively visual -- H1: No benefits vs. other treatments.
• Functionally visual -- H2: Satisfaction, engagement, and data quality equal to or greater than gamified.
• Gamified -- H3: Polarized appeal, risking self-selection. H4: Adds to survey costs (for us).
18. Productivity
• Completion rate: 80%
• Completion length: 15 mins.
• Labor vs. “text only”: 1.1x (decoratively visual), 1.5x (functionally visual), 2.0x (gamified)
25. Findings
• Text only -- H0: Lowest satisfaction.
• Decoratively visual -- H1: No benefits vs. other treatments.
• Functionally visual -- H2: Satisfaction, engagement, and data quality equal to or greater than gamified.
• Gamified -- H3: Polarized appeal, risking self-selection. H4: Adds to survey costs (for us).
26. Visually functional and gamified treatments provided...
• A more enjoyable respondent experience
• No increase in sampling error or changes to response distributions
• No decrease in satisficing
• An increase in production costs (for us)
28. What our experiment tells us
• The key to survey engagement is... the same as it ever was:
  – Survey length
  – Topic salience
  – Cognitive burden
  – Frequency of survey requests
• Creating a more enjoyable survey experience is still a worthy goal.
• Surveys will become more graphical (functionally visual).
• Challenge: develop and execute research focused on defining best practices for visually enhanced surveys to replace those that evolved (over decades) for text-only surveys.
[Diagram: “Evangelism” and “Rigorous & systematic evaluation”]
30. The impact of images
• Web surveys make it relatively easy to incorporate photographs, graphics, and other images.
• Use of images in web surveys can be:
  – decoration, providing a more attractive interface for respondents, or
  – an integral part of the question, helping respondents identify the particular object they are being asked about.
• Even when images are intended merely as embellishment, they are likely to be powerful contextual stimuli and can have effects on responses:
  – Best case, they distract respondents from the task of answering questions;
  – Worst case, they change the meaning of questions.
• Images incorporated into a survey need to be chosen very carefully and deliberately.
31. Images can move answers in the direction of the image
• Couper, Tourangeau, and Kenyon (2004) varied the content of photographs that accompanied each of 6 survey items asking respondents how often they’d done something.
• A photograph depicting some instance of the category of interest accompanied each item; images were chosen to represent low- or high-frequency exemplars of the target category.
[Example photographs: low-frequency vs. high-frequency exemplars]
• One group of respondents saw only the high-frequency exemplars and a second group saw only the low-frequency exemplars:
  – The images seen had statistically significant effects on answers to all 6 items.
  – Those who saw high-frequency exemplars reported higher frequencies than those who got the low-frequency exemplars.
32. Images can narrow the interpretation of the category of interest
• Tourangeau et al. (2011) compared responses to visual examples vs. verbal examples:
[Example: a photograph of fruit vs. the words “Fruit (including bananas, watermelon, apples, oranges, pineapple, etc.)”]
• Respondents reported eating more servings of foods in a target category when the categories were represented by words than by a picture,
  – even though verbal and pictorial examples at the same level of generality were chosen.
33. Images can serve as a standard of comparison affecting the judgments made
• Couper, Conrad, and Tourangeau (2007) displayed a photograph of either a woman in a hospital bed or a young woman jogging to web survey respondents.
• Respondents received one or the other picture; the image appeared near a question asking respondents to rate the quality of their own health.
• Respondents rated their own health as worse when they got the picture of the jogger and as better when they got the picture of the sick woman.
34. Background choices matter
• Color is not a neutral choice; Baker & Couper (2007) tested 3 colors as backgrounds:
  – Breakoff rates for the three backgrounds: 15.0%, 10.8%, 13.7%.
  – No effect on perceived/actual completion time or on subjective evaluation items asked at the end.
• Nielsen (2006) argues: “Use either plain-color backgrounds or extremely subtle background patterns. Background graphics interfere with the eye’s capability to resolve the lines in the characters and recognize word shapes.”
35. Respondents use color to assign meanings to scale points
• Tourangeau, Couper and Conrad (2004, 2007) argue that respondents apply 5 heuristics that help them interpret the response scales in visual surveys, one being: “Like in appearance means close in meaning.”
• Tourangeau, Couper, and Conrad (2007) compared two scales experimentally:
[Scale images: one scale rendered in two colors (top), one in a single color (bottom)]
36. Colors can move scale use to the extreme
Responses shifted toward the more positive end of the scale when the top scale was used as compared to the bottom scale.
[Bar chart: percent of respondents choosing each scale point (1–7) under the same-color and two-color scales]
37. Progress bars: Friend or foe?
• Assumptions about progress bars:
  – Respondents want to be informed about their position in the questionnaire.
  – Providing this information will increase the likelihood they will finish it.
• Callegaro, Villar, and Yang (2011) carried out a meta-analysis of studies on progress bars and break-off rates. Their conclusions:
  – Progress indicators by themselves do not appear to lower breakoffs; they may increase breakoffs when they offer discouraging news.
  – They only clearly reduce breakoffs when they offer unusually positive feedback.
39. Are online panels an anachronism?
• High demand
• High turnover
• Increased focus on low-incidence populations
• Concerns about panel biases, diversity, and representativeness
40. The immediate future is multisourcing
1. Extend the reach of online sampling beyond a single panel.
2. Find people who want to do a survey now.
3. Screen and match them to a waiting survey.
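The three multisourcing steps above can be sketched as a toy router: take a respondent who is willing to do a survey now, screen them, and send them to the first waiting survey they qualify for. This is a minimal illustration with hypothetical survey names and screening rules, not any vendor’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Survey:
    name: str
    needed: int                                          # completes still required
    screener: object = field(default=lambda r: True)     # qualification rule

def route(respondent, surveys):
    """Send a willing respondent to the first waiting survey they qualify for."""
    for survey in surveys:
        if survey.needed > 0 and survey.screener(respondent):
            survey.needed -= 1
            return survey.name
    return None  # screened out of every waiting survey

surveys = [
    Survey("Pet owners", needed=1, screener=lambda r: r.get("has_pet")),
    Survey("General population", needed=100),
]
print(route({"has_pet": True}, surveys))    # → "Pet owners"
print(route({"has_pet": False}, surveys))   # → "General population"
```

One routing choice is visible even in this sketch: because surveys are tried in order, earlier surveys absorb the scarcest respondents first, which is one source of the “router bias” listed on the next slide.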
41. Pros and cons of routers
Pro:
• Increases diversity
• Reduces reliance on professional respondents
• Supports blending
• Reduces screen-outs
• Standardizes online sample selection
Con:
• Black box
• Respondent validation is more difficult
• Respondent reuse may be problematic
• Router bias
• No standards