4. Related work
▪ Online Travel Agencies reshaped the ecosystem [1]
▪ …and eWOM strongly biases/influences online decision making [2,4,5]
▪ …and can be used to predict business performance [3]
▪ From a CS perspective, eWOM is a main ingredient for algorithmic decision support mechanisms like RS
▪ Experiments in the psychology literature [6] revealed users with different decision-making styles
[1] Xiang, Z. et al. Information technology and consumer behavior in travel and tourism: Insights from travel planning using the internet. JRCS 2015
[2] Ulrike Gretzel and Kyung Hyan Yoo. Use and impact of online travel reviews. Information and Communication Technologies in Tourism 2008
[3] Xie, K. et al. The business value of online consumer reviews and management response to hotel performance. IJHM 2014
[4] Xiang, Z. et al. A comparative analysis of major online review platforms: Implications for social media analytics in hospitality and tourism. TM 2017
[5] Xie, H. et al. Consumers' responses to ambivalent online hotel reviews: The role of perceived source credibility and predecisional disposition. IJHM 2011
[6] Schwartz, B. et al. Maximizing versus satisficing: Happiness is a matter of choice. JPSP 2002
5. Research Goals
The goal of this research is to determine how rating summary statistics guide users' choices in the online scenario … in order to develop more efficient algorithms.
6. Decomposing rating summaries
We consider them to be multi-attribute objects:
▪ Number of ratings
▪ Mean of the ratings
▪ Bimodality
▪ Variance
▪ Skewness
▪ Origin of ratings
7. Decision Making on Multi-attribute Items
▪ Non-Compensatory Strategies [1]:
▫ Compare items based on one attribute
▫ Perform intra-dimensional comparisons
▫ Perform fewer comparisons
▪ Compensatory Strategies [1]:
▫ All attributes must meet a minimum requirement
▫ Multiple inter-dimensional comparisons
▫ Spend more time on items
Eye movement is an indicator of the screening of the choices [2].
[1] John W. Payne. Task complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior and Human Performance, 1976
[2] Jacob L. Orquin and Simone Mueller Loose. Attention and choice: A review on eye movements in decision making. Acta Psychologica 2013
8. Decision making strategies
▪ Interpersonal differences
▪ Satisficers / Maximizers [1]
▪ Three subdimensions [2]:
▫ Decision Difficulty
▫ Alternative Search
▫ High Standards
[1] Herbert A. Simon. A behavioral model of rational choice. The Quarterly Journal of Economics, 1955
[2] Schwartz, B. et al. Maximizing versus satisficing: Happiness is a matter of choice. Journal of Personality and Social Psychology 2002
[Photo: Herbert Simon]
9. Earlier Work
▪ We ran a set of 3 experiments to understand trade-off mechanisms between decision strategies
▪ Decomposing rating summaries: different types of explanations, number of ratings, mean of the ratings, variance, skewness
▪ Respondents:
▫ Relied highly on the mean rating
▫ Non-linear influence of the overall number of ratings
▫ Variance and skewness remained largely unnoticed
▫ Maximizers vs. satisficers display different preferences
[1] Coba, L., Zanker, M., Rook, L., Symeonidis, P.: Exploring Users' Perception of Rating Summary Statistics. UMAP '18
[2] Coba, L., Zanker, M., Rook, L., Symeonidis, P.: Exploring Users' Perception of Collaborative Explanation Styles. CBI '18
[3] Coba, L., Zanker, M., Rook, L., Symeonidis, P.: Decision Making Strategies Differ in the Presence of Collaborative Explanations: Two Conjoint Studies. IUI '19
11. Conjoint experiment to quantify users' preferences
Ranking-based conjoint methodology:
▪ Used in product design/development
▪ Items can be seen as a bundle of attributes
▪ Goal: identify the utility contribution of each attribute of the rating summary statistics separately
12. Data
▪ Data-driven levels [1]
▪ J-shaped rating distributions [2]
▪ Bimodality coefficient [3]
[1] Markus Zanker and Martin Schoberegger. An empirical study on the persuasiveness of fact-based explanations for recommender systems. RecSys 2014
[2] Hu, N., Zhang, J., Pavlou, P.A. Overcoming the J-shaped distribution of product reviews. Commun. ACM 2009
[3] Pfister, R., Schwarz, K.A., Janczyk, M., Dale, R., Freeman, J.B. Good things peak in pairs: a note on the bimodality coefficient. Front. Psychol. 2013
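The bimodality coefficient from Pfister et al. [3] can be computed from sample skewness and excess kurtosis. A minimal sketch; the J-shaped rating sample and the helper name are illustrative, not the study's actual data:

```python
import math

def bimodality_coefficient(ratings):
    """Sarle's bimodality coefficient (see Pfister et al., 2013):
    BC = (g1^2 + 1) / (g2 + 3*(n-1)^2 / ((n-2)*(n-3))),
    where g1 is the bias-corrected sample skewness and g2 the
    bias-corrected excess kurtosis. BC > 5/9 (~0.555, the value for a
    uniform distribution) is taken as a hint of bimodality."""
    n = len(ratings)
    mean = sum(ratings) / n
    dev = [r - mean for r in ratings]
    m2 = sum(d ** 2 for d in dev) / n  # central moments
    m3 = sum(d ** 3 for d in dev) / n
    m4 = sum(d ** 4 for d in dev) / n
    g1 = (math.sqrt(n * (n - 1)) / (n - 2)) * m3 / m2 ** 1.5
    g2 = ((n - 1) / ((n - 2) * (n - 3))) * ((n + 1) * (m4 / m2 ** 2 - 3) + 6)
    return (g1 ** 2 + 1) / (g2 + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))

# Illustrative J-shaped rating distribution: many 5s, some 1s, few in between
j_shaped = [1] * 20 + [2] * 3 + [3] * 2 + [4] * 10 + [5] * 65
print(bimodality_coefficient(j_shaped))  # well above the 5/9 threshold
```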
13. Design
▪ Full-factorial design with:
▫ 2 levels of the number of ratings
▫ 3 levels of the mean
▫ 3 levels of bimodality
▪ 3 screens with 6 items to rank
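The 2 × 3 × 3 full-factorial design above can be sketched as follows; the concrete level values are hypothetical placeholders, since the actual levels were data-driven (slide 12):

```python
from itertools import product

# Hypothetical attribute levels; the real ones were derived from rating data.
levels = {
    "num_ratings": [25, 500],                  # 2 levels
    "mean_rating": [3.2, 3.9, 4.6],            # 3 levels
    "bimodality":  ["low", "medium", "high"],  # 3 levels
}

# Full factorial: every combination of levels is one item profile.
profiles = [dict(zip(levels, combo)) for combo in product(*levels.values())]
assert len(profiles) == 2 * 3 * 3  # 18 profiles

# Split into 3 ranking screens of 6 items each.
screens = [profiles[i:i + 6] for i in range(0, len(profiles), 6)]
```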
14. Additive utility model
Different attributes contribute independently to the overall utility.
The perceived utility of an item/profile is determined as:
u = x_i β + ε
where x_i is the vector characterizing profile i, β is the vector of (unknown) preferences for each attribute level, and ε is the residual error.
Respondents are assumed to select the alternative with, in their eyes, maximal utility u.
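A minimal sketch of how the part-worths β of such an additive model could be estimated from ranking data via least squares; the dummy coding and the preference scores below are illustrative, not the study's actual design matrix or responses:

```python
import numpy as np

# Dummy-coded profiles x_i. Assumed columns: [intercept, high num. of ratings,
# mean = mid, mean = high, bimodality = medium, bimodality = high]
X = np.array([
    [1, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [1, 1, 1, 0, 0, 1],
], dtype=float)

# Observed preference scores, e.g. reversed ranks from one ranking screen
# (most preferred item gets the highest score). Values are made up.
y = np.array([1.0, 3.0, 2.0, 6.0, 4.0, 5.0])

# Least-squares fit of u = x_i beta + epsilon: beta holds the part-worth
# utility of each attribute level relative to its baseline level.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Under the model, a respondent picks the profile with maximal utility.
predicted_choice = int(np.argmax(X @ beta))
```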
15. Eye-tracking Metrics
▪ Area of Interest (AOI) [1]
▪ Fixation times
▫ Geometric mean [2]
▪ Revisits [3]
[1] Kenneth Holmqvist, Marcus Nyström, Richard Andersson, Richard Dewhurst, Halszka Jarodzka, and Joost Van De Weijer. Eye Tracking: A Comprehensive Guide to Methods and Measures. Oxford University Press, 2011
[2] Jeff Sauro and James R. Lewis. Average task times in usability tests. CHI '10
[3] John W. Payne. Task complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior and Human Performance, 1976
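The geometric mean recommended for right-skewed duration data [2] can be sketched as follows; the fixation durations are illustrative:

```python
import math

def geometric_mean(times):
    """Geometric mean: exp of the arithmetic mean of the log durations.
    For right-skewed duration data (task or fixation times), the
    arithmetic mean is inflated by long outliers; the geometric mean
    is a more robust estimate of the typical duration."""
    return math.exp(sum(math.log(t) for t in times) / len(times))

fixations_ms = [180, 220, 250, 310, 900]  # made-up sample with one outlier
gm = geometric_mean(fixations_ms)
am = sum(fixations_ms) / len(fixations_ms)
# gm sits below am: the 900 ms outlier pulls the arithmetic mean up far more
```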
19. Non-compensatory strategy
▪ Compare items based on one attribute
▪ Perform intra-dimensional comparisons
▪ Perform fewer comparisons
20. Compensatory strategy
▪ All attributes must meet a minimum requirement
▪ Multiple inter-dimensional comparisons
▪ Spend more time on items
21. Max vs. Sat: Time spent on items
Geometric mean of the time spent on items (95% confidence level), median split on the decision difficulty sub-scale.
22. Max vs. Sat: Revisits
Mean number of revisits per item (95% confidence level), median split on the decision difficulty sub-scale.
25. Conclusions
▪ Maximizers and satisficers exhibit different decision-making behavior
▫ Choice is dominated by the mean and the number of ratings
▫ Bimodality showed no significant influence
▫ Compensatory vs. non-compensatory
▪ Rating summaries influence/bias users' choices
▫ Not considered when interpreting implicit user feedback
▪ Our results indicate that more aspects need to be considered to optimize recommendations based on explainability/persuasiveness