
Challenges and Solutions in Group Recommender Systems



Tutorial at ICDM 2017 (The 17th IEEE International Conference on Data Mining)



  1. 1. CHALLENGES AND SOLUTIONS IN GROUP RECOMMENDER SYSTEMS Ludovico Boratto (ludovicoboratto.com – ludovico.boratto@acm.org) Eurecat (Spain) ICDM 2017 – 17th IEEE International Conference on Data Mining
  2. 2. Plan of the talk 1. Recommender systems principles 2. Group recommendation introduction 3. Tasks and state of the art survey 4. Evaluation methods 5. Emerging aspects and techniques 6. Case study 7. Summary
  3. 3. [Ricci et al. 2015] Recommender systems principles
  4. 4. What book should I buy?
  5. 5. What news should I read?
  6. 6. The Problem
  7. 7. A Solution ???
  8. 8. Jeff Bezos ¨ “If I have 3 million customers on the Web, I should have 3 million stores on the Web” ¤ Jeff Bezos, CEO of Amazon.com
  9. 9. Recommender systems ¨ Suggest items that might interest a user
  10. 10. Recommender Systems ¨ In everyday life we rely on recommendations from other people either by word of mouth, recommendation letters, movie and book reviews printed in newspapers, ... ¨ In a typical recommender system people provide recommendations as inputs, which the system then aggregates and directs to appropriate recipients
  11. 11. Recommender Systems ¨ A recommender system helps to make choices without sufficient personal experience of the alternatives ¤ To suggest products to their customers ¤ To provide consumers with information to help them decide which products to purchase ¨ They are based on a number of technologies: information filtering, machine learning, adaptive and personalized system, user modeling, …
  12. 12. The recommendation problem ¨ We are given: ¤ a set of users U = {u1, u2, ..., un} ¤ a set of items I = {i1, i2, ..., im} ¤ a set of values V (e.g., V = [1,5] or V = {like, dislike}) ¨ Let R ⊆ U × I × V be a ternary relation that contains the preferences given by the users ¨ We denote as Iu the subset of items evaluated by a user u ¨ The objective is to define a function f : U × I → V (prediction of the unknown ratings) and to identify an item i* with the highest predicted rating: i* = argmax_{j ∈ I \ Iu} f(u, j)
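To make the formulation concrete, the following minimal Python sketch (illustrative only, not code from the tutorial) assumes a black-box `predict(u, i)` model standing in for f:

```python
# Minimal sketch of the recommendation problem.
# `ratings[u]` maps each item the user already evaluated (I_u) to its value;
# `predict(u, i)` is an assumed rating-prediction model standing in for f.

def recommend(user, items, ratings, predict):
    rated = set(ratings.get(user, {}))                  # I_u
    candidates = [i for i in items if i not in rated]   # I \ I_u
    # i* = argmax over the unrated items j of the predicted rating f(u, j)
    return max(candidates, key=lambda j: predict(user, j))
```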
  13. 13. Core Recommendation Techniques ¨ U is a set of users, I is a set of items/products ¤ Collaborative: background = ratings from U of items in I; input = ratings from u of items in I; process = identify users in U similar to u, and extrapolate from their ratings of i ¤ Content-based: background = features of items in I; input = u’s ratings of items in I; process = generate a classifier that fits u’s rating behavior and use it on i ¤ Demographic: background = demographic information about U and their ratings of items in I; input = demographic information about u; process = identify users that are demographically similar to u, and extrapolate from their ratings of i ¤ Utility-based: background = features of items in I; input = a utility function over items in I that describes u’s preferences; process = apply the function to the items and determine i’s rank ¤ Knowledge-based: background = features of items in I and knowledge of how these items meet a user’s needs; input = a description of u’s needs or interests; process = infer a match between i and u’s needs
  14. 14. Group recommendation introduction
  15. 15. Group Recommendation ¨ Designed for contexts in which more than one person is involved in the recommendation process I’m a vegetarian! I’m on a diet I love Asian food Where shall we dine?
  16. 16. Group Recommendation Application scenarios ¨ Any scenario that involves a decision making process and a group of users ¤ People dining together (“Where shall we dine?”) ¤ Friends going to the cinema (“Which movie shall we watch?”) ¤ Groups planning a trip (“Where shall we go?”) ¤ …
  17. 17. Group Recommendation Problem statement ¨ We are given: ¤ a set of users U = {u1, u2, ..., un} ¤ a set of items I = {i1, i2, ..., im} ¤ a set of values V (e.g., V = [1,5] or V = {like, dislike}) ¨ Let R ⊆ U × I × V be a ternary relation that contains the preferences given by the users
  18. 18. Group Recommendation Problem statement ¨ Let the set of users U be split into K groups gk ⊆ U, where each group respects the following properties: ¤ all the users in gk receive the same recommendations ¤ each user in U has to belong to a group in order to receive the recommendations: ∀u ∈ U ∃ k ∈ {1,...,K} s.t. u ∈ gk ¤ groups are formed by sets of users that do not intersect (each user receives just one set of recommendations): ∀k, q ∈ {1,...,K}, k ≠ q ⇒ gk ∩ gq = ∅
  19. 19. Group Recommendation Problem statement ¨ Given a group gk ⊆ U, the objective is to define a function f : gk × I → V and to identify an item i* with the highest predicted rating: i* = argmax_{j ∈ I} f(gk, j)
  20. 20. Group Recommendation Challenges 1. How should the different types of group be handled in the recommendation process? 2. Should the preferences be collected for each user or for the group? 3. How should the individual preferences for an item be merged into a group one? 4. Should the ratings be predicted for each user or for the group? 5. Who should choose the items to recommend to the group? 6. How can the recommendations be explained to the group?
  21. 21. Tasks and state of the art survey
  22. 22. Tasks and state of the art survey 1. Types of group 2. Preference acquisition 3. Group modeling 4. Rating prediction 5. Help the members to achieve consensus 6. Explanation of the recommendations
  23. 23. 1. Types of group Tasks and state of the art survey
  24. 24. Types of group ¨ Different types of groups lead to different ways in which the preferences can be modeled [Boratto and Carta 2011][Carvalho et al. 2013] ¨ A group recommender system can work with: ¤ an established group that shares the same long-term interests, like a group of fans of an artist ¤ an occasional group with a common specific aim, like visiting a museum ¤ a random group of people who do not have anything in common (e.g., the recommendation of background music in a room)
  25. 25. Types of group Established groups in the literature ¨ PolyLens [O’Connor et al. 2001] ¤ Movie recommendation, considering that people usually go to the cinema with the same group ¨ GRec_OC (Group Recommender for Online Communities) [Kim et al. 2010] ¤ Book recommender system for online communities (i.e., people with similar interests that share information)
  26. 26. Types of group Occasional groups in the literature ¨ MusicFX [McCarthy and Anagnost 1998] ¤ Music recommendation to people working out in a gym at a given time ¨ INTRIGUE [Ardissono et al. 2003] ¤ Suggest tourist attractions to groups of users traveling together ¤ The system can work with subgroups, to weight people with special needs (e.g., children or disabled people) differently
  27. 27. Types of group Occasional groups in the literature ¨ [Liu et al. 2012] defines event-based social networks, i.e., communities of people who attend social events, by considering both online and offline interactions
  28. 28. Types of group Random groups in the literature ¨ G.A.I.N. [Pizzutilo et al. 2005] ¤ Recommends news to a group of users that are in a public space at a specific time ¨ FIT (Family Interactive TV System) [Goren-Bar and Glinansky 2004] ¤ Looks at the probability that each family member watches TV in a time slot and predicts who might be watching
  29. 29. Types of group Random groups in the literature ¨ Flytrap [Crossen et al. 2002] and Jukola [O’Hara et al. 2004] ¤ Select music to be played in a public room ¤ Flytrap considers the preferences of the users present in the room at the moment of the song selection ¤ Jukola allows artists to upload their MP3s and those in the room can express their vote
  30. 30. 2. Preference acquisition Tasks and state of the art survey
  31. 31. Preference acquisition ¨ A system can acquire explicit or implicit preferences ¨ They can be collected considering that ¤ a user is a part of a group (group preferences), ¤ or not (individual preferences) ¨ Observational studies show that when individual users interact, their preferences evolve [Delic et al. 2016] ¨ The type of preference acquisition leads to completely different ways in which information is handled by the system
  32. 32. Preference acquisition Group preferences in the literature ¨ In CATS [McCarthy et al. 2006] members interact and express their preferences around a shared device called “DiamondTouch table-top”
  33. 33. Preference acquisition Group preferences in the literature ¨ In Travel Decision Forum [Jameson 2004] each member of the group can view and copy the preferences of the other members
  34. 34. Preference acquisition Group preferences in the literature ¨ In [Gartrell et al. 2010], the system allows both individual and groups to express preferences (e.g., a couple watching a movie together) ¨ In [Chen et al. 2008] it is assumed that both individuals and subgroups express preferences
  35. 35. Preference acquisition Individual preferences in the literature ¨ CoFeel [Chen and Pu 2013] allows users to express, through colors, the emotions evoked by a song chosen by the GroupFun music group recommender system
  36. 36. Preference acquisition Individual preferences in the literature ¨ MusicFX [McCarthy and Anagnost 1998] also lets users express negative ratings (range [-2,2]) ¨ Adaptive Radio [Chao et al. 2005] focuses only on negative preferences ¤ To avoid playing music that might be disliked by anyone
  37. 37. Preference acquisition Theoretical study ¨ [Xie and Lui 2015] consider the fact that recommender systems work with partial information ¤ Moreover, some users cheat (misbehavior) ¨ What is the minimum number of ratings a product needs so that one can make a reliable evaluation of its quality? ¨ They developed theoretical models, validated on Flixster and Netflix data in the group recommendation context
  38. 38. Preference acquisition Theoretical study ¨ n’: minimum number of ratings needed to tolerate the misbehaving users ¨ Pr[n’ ≥ n]: the fraction of movies with a minimum number of ratings larger than or equal to n
  39. 39. 3. Group modeling Tasks and state of the art survey
  40. 40. Group Modeling ¨ In order to derive a group preference for the items, group modeling strategies combine the individual user models ¨ “There is no strategy useful in every context independently from the environment” [Pizzutilo et al. 2005] ¤ The strategy that best models a group has to be evaluated in the context in which the group is modeled
  41. 41. Group Modeling ¨ This topic has been mainly studied by J. Masthoff ¤ For more than 10 years ¤ The most recent work covering all the strategies is [Masthoff 2015] ¨ 11 existing strategies
  42. 42. Group Modeling Strategies Survey
  43. 43. Group Modeling Strategies ¨ When presenting each strategy, we will use the following example: ¤ 3 users (u1, u2, u3) ¤ 10 items (i1,…,i10) ¤ Each element of the table represents a rating (1,…,10) i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10
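For the strategy sketches that follow, the example table can be held in a NumPy array; this encoding is illustrative, not code from the tutorial (rows are u1–u3, columns i1–i10):

```python
import numpy as np

# Ratings from the running example (scale 1-10).
R = np.array([
    [8, 10, 7, 10, 9,  8, 10, 6, 3,  6],   # u1
    [7, 10, 6,  9, 8, 10,  9, 4, 4,  7],   # u2
    [5,  1, 8,  6, 9, 10,  3, 5, 7, 10],   # u3
])
```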
  44. 44. 1. Additive Utilitarian ¨ Add the individual ratings for each item ¨ Also known as Average Strategy ¤ Dividing the sums by the group size rescales the scores but leaves the ordered ranking of the items for the group the same i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Group 20 21 21 25 26 28 22 15 14 23
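Reusing the array R defined above, the strategy is a single column-wise reduction; the values in the comments match the slide:

```python
group_scores = R.sum(axis=0)   # Additive: [20, 21, 21, 25, 26, 28, 22, 15, 14, 23]
group_avg = R.mean(axis=0)     # Average Strategy: rescaled scores, same ranking
```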
  45. 45. 1. Additive Utilitarian Uses in the literature ¨ Pocket RestaurantFinder [McCarthy 2002] recommends restaurants to a group of people, by averaging the individual preferences of the group members on different types of features (location, cost, cuisine, …) ¨ In [Amer-Yahia et al. 2009], the modeling strategy averages the individual preferences also taking into account the disagreement of the group members for an item ¨ [De Pessemier et al. 2013] illustrate that modeling users with an average is the best way to model individual preferences in different contexts
  46. 46. 2. Multiplicative Utilitarian ¨ Multiply the individual ratings for each item ¨ [Masthoff 2011] showed it is the strategy that works best when selecting a sequence of television items to suit a group of viewers i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Group 280 100 336 540 648 800 270 120 84 420
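On the same R, the multiplicative variant is analogous; note how a single low rating (u3's 1 for i2) heavily penalizes an otherwise popular item:

```python
group_scores = R.prod(axis=0)  # [280, 100, 336, 540, 648, 800, 270, 120, 84, 420]
```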
  47. 47. 3. Borda Count ¨ Each item gets a number of points, according to the position in the list of each user ¤ Least favorite item è 0 points ¤ A point is added for the following item ¤ Same rating for multiple items è the points are shared among them i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10
  48. 48. 3. Borda Count ¨ Each item gets a number of points, according to the position in the list of each user ¤ Least favorite item è 0 points ¤ A point is added for the following item ¤ Same rating for multiple items è the points are shared among them i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 i8 and i9 è Least favorite items for u2 Share the lowest points: (0+1)/2=0.5
  49. 49. 3. Borda Count ¨ Each item gets a number of points, according to the position in the list of each user ¤ Least favorite item è 0 points ¤ A point is added for the following item ¤ Same rating for multiple items è the points are shared among them i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 4.5 8 3 8 6 4.5 8 1.5 0 1.5 u2 3.5 7.5 2 6.5 5 7.5 6.5 0.5 0.5 3.5 u3 2.5 0 5 3 6 7.5 1 2.5 4 7.5 Group 10.5 15.5 10 17 17 19.5 15.5 4.5 4.5 12.5 i8 and i9 è Least favorite items for u2 Share the lowest points: (0+1)/2=0.5
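A common way to compute Borda points with shared ties is SciPy's rank function, as in the sketch below; note that conventions for distributing points among tied items vary, so a few entries of the worked example may follow a slightly different tie rule:

```python
from scipy.stats import rankdata

# Per user: rank items ascending, so the least favorite gets 0 points; the
# 'average' method splits the points of tied positions, e.g. u2's least
# favorites i8 and i9 each get (0 + 1) / 2 = 0.5. For u1 this reproduces
# the slide exactly: [4.5, 8, 3, 8, 6, 4.5, 8, 1.5, 0, 1.5].
points = np.vstack([rankdata(row, method="average") - 1 for row in R])
group_scores = points.sum(axis=0)
```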
  50. 50. 3. Borda Count Uses in the literature ¨ [Masthoff 2011] showed it is one of the strategies that generates most satisfaction when selecting a sequence of television items to suit a group of viewers ¨ TravelWithFriends [De Pessemier et al. 2015] uses it to rank the top-5 travel destinations to recommend to a group
  51. 51. 4. Copeland Rule ¨ Form of majority voting ¨ Sort the items according to their Copeland index ¤ number of times in which an alternative beats the others, minus the number of times it loses i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10
  52. 52. 4. Copeland Rule ¨ Form of majority voting ¨ Sort the items according to their Copeland index ¤ number of times in which an alternative beats the others, minus the number of times it loses i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Item i2 beats item i1, since both u1 and u2 gave a higher rating to it
  53. 53. 4. Copeland Rule i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 i1 0 + - + + + + - - 0 i2 - 0 - 0 - 0 0 - - - i3 + + 0 + + + + - - + i4 - 0 - 0 - + - - - - i5 - + - + 0 + + - - - i6 - 0 - - - 0 - - - - i7 - 0 - + - + 0 - - - i8 + + + + + + + 0 0 + i9 + + + + + + + 0 0 + i10 0 + + + + + + - - 0 Index -2 +6 -3 +6 +1 +8 +4 -8 -8 -2
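A direct sketch of the Copeland index on R counts, for each pair of items, which one the majority of the members rates higher:

```python
n_items = R.shape[1]
copeland = np.zeros(n_items, dtype=int)
for a in range(n_items):
    for b in range(n_items):
        if a == b:
            continue
        wins = np.sum(R[:, a] > R[:, b])     # members preferring a over b
        losses = np.sum(R[:, a] < R[:, b])   # members preferring b over a
        copeland[a] += int(wins > losses) - int(wins < losses)
# copeland == [-2, 6, -3, 6, 1, 8, 4, -8, -8, -2], matching the index row
```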
  54. 54. 4. Copeland Rule Uses in the literature ¨ The approach proposed in [Felfernig et al. 2012] proved that a form of majority voting is the most successful in a requirements negotiation context
  55. 55. 5. Plurality Voting i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 ¨ Each user votes for her/his favorite option ¨ If more than one alternative needs to be selected, the items that received the highest number of votes are selected
  56. 56. 5. Plurality Voting ¨ Each user votes for her/his favorite option ¨ If more than one alternative needs to be selected, the items that received the highest number of votes are selected i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 User u1 selects items i2, i4, i7
  57. 57. 5. Plurality Voting 1 2 3 4 5 6 u1 i2, i4, i7 i4, i7 i5 i1 i3 i8 u2 i2, i6 i4, i7 i5 i1 i3 i8, i9 u3 i6, i10 i10 i10 i10 i3 i9 Group i2, i6 i4, i7 i5 i1 i3 i8, i9 User u1 selects items i2, i4, i7 ¨ Each user votes for her/his favorite option ¨ If more than one alternative needs to be selected, the items that received the highest number of votes are selected
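One way to read the table above is as repeated plurality rounds, sketched below; this is an interpretation, and the worked example resolves some later-round ties by hand, so the round contents can differ slightly:

```python
# Each member votes for all of her/his top-rated remaining items;
# the item(s) with the most votes win the round and are removed.
remaining = set(range(R.shape[1]))
rounds = []
while remaining:
    votes = {i: 0 for i in remaining}
    for user_ratings in R:
        best = max(user_ratings[i] for i in remaining)
        for i in remaining:
            if user_ratings[i] == best:
                votes[i] += 1
    top = max(votes.values())
    winners = {i for i in remaining if votes[i] == top}
    rounds.append(sorted(winners))   # round 1 selects i2 and i6
    remaining -= winners
```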
  58. 58. 5. Plurality Voting Uses in the literature ¨ This strategy was implemented and tested by [Senot et al. 2010] in the TV domain
  59. 59. 6. Approval Voting i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 ¨ A point is assigned to all the items a user likes ¤ Suppose that each user votes for all the items with a rating above a certain threshold (let’s say 5)
  60. 60. 6. Approval Voting ¨ A point is assigned to all the items a user likes ¤ Suppose that each user votes for all the items with a rating above a certain threshold (let’s say 5) i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10
  61. 61. 6. Approval Voting ¨ A point is assigned to all the items a user likes ¤ Suppose that each user votes for all the items with a rating above a certain threshold (let’s say 5) ¨ Group rating for an item: sum of the individual votes i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 1 1 1 1 1 1 1 1 1 u2 1 1 1 1 1 1 1 1 u3 1 1 1 1 1 1 Group 2 2 3 3 3 3 2 1 1 3
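With the example threshold of 5 on R, approval voting is a comparison followed by a sum (sketch):

```python
approvals = (R > 5).astype(int)       # 1 point per item rated above the threshold
group_scores = approvals.sum(axis=0)  # [2, 2, 3, 3, 3, 3, 2, 1, 1, 3]
```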
  62. 62. 6. Approval Voting Uses in the literature ¨ To choose the Web pages to recommend to a group, Let’s Browse [Lieberman et al. 1999] evaluates if the page currently considered by the system matches with the user profile above a certain threshold and recommends the one with the highest score ¨ It also proved to be successful in contexts in which the similarity between the users in a group is high [Bourke et al. 2011]
  63. 63. 7. Least Misery ¨ Group rating: lowest rating expressed for an item by any of the members of the group ¤ usually adopted to model small groups, to make sure that every member is satisfied i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Group 5 1 6 6 8 8 3 4 3 6
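On R this is a column-wise minimum (sketch); note how u3's rating of 1 vetoes i2, the other members' favorite item:

```python
group_scores = R.min(axis=0)   # [5, 1, 6, 6, 8, 8, 3, 4, 3, 6]
```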
  64. 64. 7. Least Misery Uses in the literature ¨ This strategy is used by PolyLens [O’Connor et al. 2001], in order to produce movie recommendations that satisfy the small groups handled by the system. ¨ GroupLink [Wei et al. 2016] recommends a set of activities to a group of users. Each user has to be recommended a minimum number of activities s/he enjoys
  65. 65. 8. Most Pleasure ¨ Group rating: the highest rating expressed for an item by a member of the group ¨ This strategy is used by [Quijano-Sanchez et al. 2012] in a system that faces the cold start problem. i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Group 8 10 8 10 9 10 10 6 7 10
  66. 66. 9. Average without Misery ¨ Group rating: average of the ratings assigned by each user for that item ¨ The items with a rating under a certain threshold are not considered (in the example, 4) i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10
  67. 67. 9. Average without Misery i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Group 20 - 21 25 26 28 - 15 - 23 ¨ Group rating: average of the ratings assigned by each user for that item (the table shows the sums, which yield the same ranking) ¨ The items with a rating under a certain threshold are not considered (in the example, 4)
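A sketch over R with the example threshold of 4; following the slide, the surviving items are scored by their sum (dividing by the group size gives the same ranking):

```python
ok = (R >= 4).all(axis=0)      # drop items any member rated under the threshold
group_scores = np.where(ok, R.sum(axis=0), np.nan)
# [20, nan, 21, 25, 26, 28, nan, 15, nan, 23] -- i2, i7 and i9 are excluded
```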
  68. 68. 9. Average without Misery Uses in the literature ¨ In order to model the preferences of the group for each genre of music to play in a gym, MusicFX [McCarthy and Anagnost 1998] sums the individual ratings expressed by each user, discarding the ones under a minimum degree of satisfaction.
  69. 69. 10. Fairness i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Group ¨ Idea: users can be recommended something they do not like, as long as they also get recommended something they like ¨ Each user chooses her/his favorite item ¤ Two items with the same rating è choice is based on the other users
  70. 70. 10. Fairness i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Group i4 ¨ Idea: users can be recommended something they do not like, as long as they also get recommended something they like ¨ Each user chooses her/his favorite item ¤ Two items with the same rating è choice is based on the other users
  71. 71. 10. Fairness i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Group i4 i6 ¨ Idea: users can be recommended something they do not like, as long as they also get recommended something they like ¨ Each user chooses her/his favorite item ¤ Two items with the same rating è choice is based on the other users
  72. 72. 10. Fairness i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Group i4 i6 i10 ¨ Idea: users can be recommended something they do not like, as long as they also get recommended something they like ¨ Each user chooses her/his favorite item ¤ Two items with the same rating è choice is based on the other users
  73. 73. 10. Fairness i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Group i4 i6 i10 i5 ¨ Idea: users can be recommended something they do not like, as long as they also get recommended something they like ¨ Each user chooses her/his favorite item ¤ Two items with the same rating è choice is based on the other users
  74. 74. 10. Fairness i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Group i4 i6 i10 i5 i2 i7 i1 i3 i9 i8 ¨ Idea: users can be recommended something they do not like, as long as they also get recommended something they like ¨ Each user chooses her/his favorite item ¤ Two items with the same rating è choice is based on the other users
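The worked example can be reproduced under two assumptions that this sketch makes explicit: turns follow a 'snake' order (u1, u2, u3, u3, u2, u1, ...), and rating ties are broken in favor of the item the other members rate highest overall:

```python
n_users, n_items = R.shape
order = list(range(n_users))
sequence, remaining = [], set(range(n_items))
while remaining:
    for u in order:
        if not remaining:
            break
        # favorite remaining item; ties broken by the summed group rating
        pick = max(remaining, key=lambda i: (R[u, i], R[:, i].sum()))
        sequence.append(pick)
        remaining.remove(pick)
    order.reverse()   # snake order across rounds
# sequence: i4, i6, i10, i5, i2, i7, i1, i3, i9, i8 -- as in the example
```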
  75. 75. 10. Fairness Uses in the literature ¨ This strategy is adopted by [Christensen and Schiaffino 2011] in the music recommendation context
  76. 76. 11. Most Respected Person (Dictatorship) ¨ Select the items according to the preferences of the most respected person ¤ Using the preferences of the others just in case more than one item received the same evaluation i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 u1 8 10 7 10 9 8 10 6 3 6 u2 7 10 6 9 8 10 9 4 4 7 u3 5 1 8 6 9 10 3 5 7 10 Group 8 10 7 10 9 8 10 6 3 6 In the example, the most respected person is u1
  77. 77. 11. Most Respected Person (Dictatorship) Uses in the literature ¨ This strategy is used by INTRIGUE [Ardissono et al. 2003], which favors the preferences of a subset of users with particular needs ¨ G.A.I.N. [Pizzutilo et al. 2005] shows that when people interact, a user or a small portion of the group influences the choices of the whole group ¨ In [Jung 2012], long tail users are considered, i.e., an expert group on a certain attribute. Their ratings are considered to provide recommendations to the non-expert user group (short head group) ¨ When the group model of a family is built in [Berkovsky and Freyne 2010], the person who prepares the recipe has a higher weight w.r.t. the partner and the children
  78. 78. 4. Rating prediction Tasks and state of the art survey
  79. 79. Rating prediction ¨ Ratings can be predicted using one of the following 3 approaches [Jameson and Smyth 2007]: 1. based on a group model: combine individual preferences and use it to build predictions for the group 2. merging recommendations built for the users in a group 3. aggregating all the predictions built for the users in a group
  80. 80. Rating prediction Construction of group preference models ¨ Build a group model to combine individual preferences, then predict a rating for the items that do not have a score in the group model ¨ Two main steps: 1. Construct a model Mg for a group g (it contains its preferences) 2. For each item i not rated by the group g, use Mg to predict a rating pgi
  81. 81. Rating prediction Construction of group preference models ¨ MusicFX [McCarthy and Anagnost 1998] decides the genre of music to play by randomly selecting one of the top-m stations available in the group model that summed the individual preferences ¤ Random to avoid playing the top genre every day n The same people might work out at the same time and the same genre would be played every day ¨ INTRIGUE [Ardissono et al. 2003] models the preferences of subgroups of homogeneous people, then produces the recommendations giving a different importance to particular categories of people (e.g., disabled people)
  82. 82. Rating prediction Construction of group preference models ¨ [Berkovsky and Freyne 2010] showed that when recommending recipes to a family, a group model that combines the individual preferences should be used to make the predictions ¨ To recommend TV programs, TV4M [Yu et al. 2006] builds a model with the family members who logged in (i.e., who are in front of the TV)
  83. 83. Rating prediction Merging individual recommendations ¨ Present to a group a set of items, i.e., the merging of the items with the highest predicted ratings for each user in the group ¨ The approach works as follows: 1. For each user u in the group: n For each item i not rated, predict a rating pui n Select the set Cu of items with the highest predicted ratings pui 2. Model the preferences of the group by producing ∪u Cu, the union of the individual candidate sets
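A sketch of this pipeline, reusing the assumed black-box `predict(u, i)` model and `ratings` map from the earlier sketch:

```python
def merge_recommendations(group, items, ratings, predict, k=5):
    candidates = set()
    for u in group:
        unrated = [i for i in items if i not in ratings.get(u, {})]
        # C_u: the k items with the highest predicted ratings p_ui
        top_k = sorted(unrated, key=lambda i: predict(u, i), reverse=True)[:k]
        candidates.update(top_k)
    return candidates   # the union of the sets C_u
```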
  84. 84. Rating prediction Merging individual recommendations ¨ The approach is not widely used in the literature ¨ PolyLens [O’Connor et al. 2001] selects the items with the highest predicted ratings for each user ¤ Then employs a Least Misery strategy, scoring each merged item by the lowest predicted rating among the members
  85. 85. Rating prediction Merging individual predictions ¨ Predict individual preferences for all the items not rated by each user, then aggregate individual preferences for an item into a group model ¨ The approach works as follows: 1. For each item i: n For each user u who did not rate i, predict a rating pui n Calculate an aggregate rating rgi from the ratings of the users in the group
  86. 86. Rating prediction Merging individual predictions ¨ Pocket RestaurantFinder [McCarthy 2002] predicts a rating for each user and each restaurant and combines them with an average ¨ Travel Decision Forum [Jameson 2004] builds predictions for every user (users can copy the preferences of the others), then predicts a group score by considering the median of the individual predictions
  87. 87. Rating prediction Merging individual predictions ¨ E-Tourism [Garcia et al. 2009, Sebastia et al. 2009] builds three types of predictions for each user (demographic, content-based, and like-based), aggregates them, and selects the group recommendations from each list
  88. 88. 5. Help the members to achieve consensus Tasks and state of the art survey
  89. 89. Help the members to achieve a consensus ¨ Three strategies are usually employed to select the items to recommend to the group: 1. the system suggests the items with the highest predicted ratings, without consulting the group; 2. a member of the group is responsible for the final decision; 3. the users in the group have a conversation, in order to achieve consensus.
  90. 90. Help the members to achieve a consensus Member responsible for the final decision ¨ Travel Decision Forum [Jameson 2004] allows the tourist guide to make the final decision ¨ In [Ben-Arieh and Chen 2006], an expert in the group expresses opinions on an alternative through linguistic labels (e.g., perfect) and the system aggregates these labels to make a decision
  91. 91. Help the members to achieve a consensus Conversation between the users ¨ Travel Decision Forum [Jameson 2004] also allows users to have a conversation ¨ If they’re not in the same room, animated characters (agents) represent the likely response of the absent users
  92. 92. 6. Explanation of the recommendations Tasks and state of the art survey
  93. 93. Explanation of the recommendations ¨ The systems deal with preferences of multiple users ¨ Some explain why the proposed items have been selected for the group
  94. 94. Explanation of the recommendations ¨ PolyLens [O’Connor et al. 2001] presents the group recommendations by showing also the individual ones
  95. 95. Explanation of the recommendations ¨ Let’s Browse [Lieberman et al. 1999] shows the keywords that led to the recommendation
  96. 96. Explanation of the recommendations ¨ INTRIGUE [Ardissono et al. 2003] gives a long explanation of why a destination was recommended to a group
  97. 97. Evaluation methods
  98. 98. Evaluation methods ¨ Three approaches: 1. Offline methods on existing datasets 2. User surveys that test the effectiveness of a system by asking users to answer questionnaires 3. Live systems that work in real-world domains, such as social networks
  99. 99. Evaluation methods Offline methods ¨ Employ classic evaluation metrics: ¤ RMSE ¤ MAE ¤ Precision and Recall ¤ …
  100. 100. Evaluation methods Offline methods ¨ No public group recommendation dataset is available in the literature [Padmanabhan et al. 2011, Quijano-Sanchez et al. 2012] ¤ The partitioning of the users into groups is not available ¨ The vast majority of the approaches adds constraints on a dataset to infer the groups and build the recommendations
  101. 101. Evaluation methods User surveys ¨ Users are asked to fill in questionnaires to evaluate the system from several perspectives: ¤ The quality of the recommendations [De Pessemier et al. 2016] ¤ The usability of the system [Zapata et al. 2015]
  102. 102. Evaluation methods Live systems ¨ GroupLink [Wei et al. 2016] suggests events to promote group members’ face-to-face interactions in non-work settings ¨ It identifies and tracks personal preferences by analyzing individual digital traces (social media, email, and online streaming histories) ¨ A live system has been developed: https://bit.ly/group-link
  103. 103. Emerging aspects and techniques
  104. 104. Emerging aspects and techniques 1. Advanced recommendation techniques applied to group recommendation 2. Social group recommender systems 3. Fairness in group recommendations
  105. 105. Advanced recommendation techniques Emerging aspects and techniques
  106. 106. Advanced recommendation techniques ¨ Over the last few years, new recommendation techniques have been developed to address problems such as: ¤ sparsity ¤ limited coverage ¨ Two main research directions: ¤ dimensionality reduction n Compact representation of users and items (most significant features) ¤ graph-based techniques n Exploit the transitive relations in the data ¨ They have been recently adopted in group recommendation problems
  107. 107. Advanced recommendation techniques Dimensionality reduction ¨ [Christensen and Schiaffino 2013] employ matrix factorization and SNA (to analyze social influence)
  108. 108. Advanced recommendation techniques Graph-based techniques ¨ [Kim and El Saddik 2015] present a stochastic method ¤ Build a bipartite graph and perform random walks to quantify the influence of nodes (i.e., users and items) and rank items to recommend to groups
  109. 109. Advanced recommendation techniques Graph-based techniques ¨ COM (COnsensus Model) [Yuan et al. 2014] builds a generative model that incorporates users’ selection history and personal considerations of content factors ¨ Users in a group may have different influences (e.g., expert in a topic)
  110. 110. Social group recommender systems Emerging aspects and techniques
  111. 111. Social group recommender systems ¨ HappyMovie [Quijano Sanchez et al. 2014] is a Facebook application that recommends movies to groups ¨ It considers user preferences, social interactions, personality of the users, … ¨ 60 users (35 males and 25 females) tested and evaluated the application
  112. 112. Fairness in group recommendation Emerging aspects and techniques
  113. 113. Fairness in group recommendation ¨ User groups may be heterogeneous, consisting of people with potentially dissimilar preferences. ¨ If an item is overall good for the group, there could be one or more members that do not like it ¨ These users would be frustrated if the item is selected by the group! ¨ Measuring how fair the recommended items are for a group is central
  114. 114. Fairness in group recommendation ¨ [Qi et al. 2016] and [Serbos et al. 2017] study fairness in the package-to-group recommendation scenario. The two works introduce two metrics: 1. m-Proportionality: For a user u, and a package P, P is m-proportional for u, for m ≥ 1, if there exist at least m items in P that u likes. For a group of users G, and a package P, the m-proportionality of the package P for the group G is defined as: |GP|/|G| n where GP ⊆ G is the set of users in the group for which the package P is m-proportional.
  115. 115. Fairness in group recommendation 2. m-Envy-Freeness: a user u feels that a package is fair, if there are m items for which the user is in the favored top-∆% of the group. Otherwise, the user has envy against the other members of the group, who always get a better deal, and thus feels she is being treated unfairly. For a group of users G, and a package P, the m-envy-freeness of the package P for the group G is defined as: |Gef|/|G| n where Gef ⊆ G is the set of users in the group for which the package P is m-envy-free.
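Both metrics follow directly from these definitions, as in the sketch below; `likes(u, i)` and the predicted-rating map `pred[u][i]` are assumed placeholders, not APIs from the cited papers:

```python
import math

def m_proportionality(group, package, likes, m=1):
    # fraction of members for whom the package contains >= m liked items
    fair = [u for u in group if sum(likes(u, i) for i in package) >= m]
    return len(fair) / len(group)

def m_envy_freeness(group, package, pred, m=1, delta=0.3):
    top_n = max(1, math.ceil(delta * len(group)))   # size of the favored top slice
    def favored(u, i):
        better = sum(1 for v in group if pred[v][i] > pred[u][i])
        return better < top_n                       # u is in the top-delta% for i
    # fraction of members favored on >= m items of the package
    fair = [u for u in group if sum(favored(u, i) for i in package) >= m]
    return len(fair) / len(group)
```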
  116. 116. Fairness in group recommendation ¨ [Lin et al. 2017] recommend items to a group, ensuring fairness via Pareto efficiency ¨ A solution is called Pareto efficient if none of the objective functions can be improved without degrading some of the other objectives. ¨ Several greedy algorithms that optimize different fairness metrics are proposed; the most effective is the one based on the variance of the ratings of the users: FVar(g, I) = 1 − Var({U(u, I) : u ∈ g}) ¨ This last solution outperforms the two previous metrics in terms of accuracy
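The variance-based objective can be sketched in the same style, with `utility(u, I)` an assumed per-user utility of the recommended list:

```python
import statistics

def f_var(group, items, utility):
    utilities = [utility(u, items) for u in group]
    return 1 - statistics.pvariance(utilities)   # highest when members are equally satisfied
```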
  117. 117. Group recommendation with automatic detection of groups Case Study
  118. 118. Group recommendation with automatic detection of groups ¨ Example: recommending flyers ¨ Nielsen estimates that 1B Euros per year is spent to print 12M flyers ¨ 14.6B Euros are estimated to be spent by the customers thanks to these flyers http://www.nielsen.com/content/dam/corporate/Italy/reports/2012/Le nuove tendenze del largo consumo (R. de Camillis).pdf
  119. – 126. [Figure-only slides: Group recommendation with automatic detection of groups]
  127. 127. Group recommendation and automatic detection of groups ¨ Research questions: 1. How should we predict the ratings in this context? n individual predictions for each user? n group predictions? 2. How should we group the users for recommendation purposes? 3. How should we generate group models that contain the preferences for a group?
  128. 128. Group recommendation and automatic detection of groups ¨ [Boratto and Carta 2015] shows that: 1. Ratings should be predicted for individual users 2. Groups should be detected with a clustering algorithm (k-means) that also includes the predictions in the input 3. Groups should be modeled through an average of the individual ratings (Additive Utilitarian) n It represents the centroid of the cluster
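A sketch of that pipeline under the stated findings; `R_filled` is assumed to be the user-item matrix already completed with individual predictions (step 1), and scikit-learn's KMeans stands in for the clustering step:

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_groups_and_recommend(R_filled, n_groups, top_n=10):
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(R_filled)
    recommendations = {}
    for g in range(n_groups):
        members = R_filled[km.labels_ == g]
        group_model = members.mean(axis=0)   # Additive Utilitarian = centroid
        recommendations[g] = np.argsort(group_model)[::-1][:top_n]
    return recommendations
```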
  129. 129. Open issues and research challenges
  130. 130. Open issues and research challenges ¨ No public dataset available ¤ With both group structure and individual/group preferences ¨ Evaluation ¤ How effective are the group recommendations? Consider both individual satisfaction and that of the group as a whole ¨ Explanations with model-based algorithms ¤ Recommendations are based on latent features and explaining them is challenging ¨ Understanding and employing group dynamics ¤ Integrating the evolution of the individual preferences that happens because of the group dynamics is still an open issue ¨ Novelty, diversity, and serendipity ¤ Generating novel, diverse, and serendipitous recommendations for the whole group is challenging
  131. 131. References [Amer-Yahia et al. 2009] Sihem Amer-Yahia, Senjuti Basu Roy, Ashish Chawlat, Gautam Das, and Cong Yu. 2009. Group recommendation: semantics and efficiency. Proc. VLDB Endow. 2, 1 [Ardissono et al. 2003] Liliana Ardissono, Anna Goy, Giovanna Petrone, Marino Segnan, and Pietro Torasso. 2003. Intrigue: Personalized Recommendation of Tourist Attractions for Desktop and Hand Held Devices. Applied Artificial Intelligence [Baltrunas et al. 2010] Linas Baltrunas, Tadas Makcinskas, and Francesco Ricci. 2010. Group recommendations with rank aggregation and collaborative filtering. In Proceedings of the fourth ACM conference on Recommender systems (RecSys '10) [Ben-Arieh and Chen 2006] D. Ben-Arieh and Zhifeng Chen. 2006. Linguistic-labels aggregation and consensus measure for autocratic decision making using group recommendations. Trans. Sys. Man Cyber. Part A 36, 3 [Berkovsky and Freyne 2010] Shlomo Berkovsky and Jill Freyne. 2010. Group-based recipe recommendations: analysis of data aggregation strategies. In Proceedings of the fourth ACM conference on Recommender systems (RecSys '10)
  132. 132. References [Boratto and Carta 2011] Ludovico Boratto and Salvatore Carta. 2011. State-of-the-art in group recommendation and new approaches for automatic identification of groups. In: Information Retrieval and Mining in Distributed Environments, Studies in Computational Intelligence. [Boratto and Carta 2015] Ludovico Boratto and Salvatore Carta. 2015. ART: group recommendation approaches for automatically detected groups. In: International Journal of Machine Learning and Cybernetics. [Bourke et al. 2011] Steven Bourke, Kevin McCarthy, and Barry Smyth. 2011. Using Social Ties In Group Recommendation. In Proceedings of The 22nd Irish Conference on Artificial Intelligence and Cognitive Science [Carvalho et al. 2013] Lucas Augusto M.C. Carvalho and Hendrik T. Macedo. 2013. Generation of coalition structures to provide proper groups' formation in group recommender systems. In Proceedings of the 22nd International Conference on World Wide Web (WWW '13 Companion).
  133. 133. References [Chao et al. 2005] Dennis L. Chao, Justin Balthrop, and Stephanie Forrest. 2005. Adaptive radio: achieving consensus using negative preferences. In Proceedings of the 2005 International ACM SIGGROUP Conference on Supporting Group Work, GROUP 2005 [Chen and Pu 2013] Yu Chen and Pearl Pu. 2013. CoFeel: Using Emotions to Enhance Social Interaction in Group Recommender Systems. In Alpine Rendez-Vous (ARV) 2013 Workshop on Tools and Technology for Emotion-Awareness in Computer Mediated Collaboration and Learning. [Chen et al. 2008] Yen-Liang Chen, Li-Chen Cheng, and Ching-Nan Chuang. 2008. A group recommendation system with consideration of interactions among group members. Expert Syst. Appl. 34 [Crossen et al. 2002] Andrew Crossen, Jay Budzik, and Kristian J. Hammond. 2002. Flytrap: intelligent group music recommendation. In Proceedings of the 7th international conference on Intelligent user interfaces (IUI '02)
  134. 134. References [Christensen and Schiaffino 2011] Ingrid A. Christensen and Silvia N. Schiaffino. 2011. Entertainment recommender systems for group of users. Expert Systems with Applications [Christensen and Schiaffino 2013] Ingrid Alina Christensen and Silvia N. Schiaffino. 2013. Matrix Factorization in Social Group Recommender Systems. In 12th Mexican International Conference on Artificial Intelligence, MICAI 2013 [De Pessemier et al. 2013] Toon De Pessemier, Simon Dooms, and Luc Martens. 2013. Comparison of group recommendation algorithms. Multimedia Tools and Applications [De Pessemier et al. 2015] Toon De Pessemier, Jeroen Dhondt, Kris Vanhecke, and Luc Martens. 2015. TravelWithFriends: a Hybrid Group Recommender System for Travel Destinations. In Proceedings of the Workshop on Tourism Recommender Systems, in Conjunction with the 9th ACM Conference on Recommender Systems. [De Pessemier et al. 2016] Toon De Pessemier, Jeroen Dhondt, and Luc Martens. 2016. Hybrid group recommendations for a travel service. Multimedia Tools and Applications [Delic et al. 2016] Amra Delic, Julia Neidhardt, Thuy Ngoc Nguyen, Francesco Ricci, Laurens Rook, Hannes Werthner, and Markus Zanker. 2016. Observing group decision making processes. In Proceedings of RecSys ’16
  135. 135. References [Felfernig et al. 2012] Alexander Felfernig, Christoph Zehentner, Gerald Ninaus, Harald Grabner, Walid Maalej, Dennis Pagano, Leopold Weninger, and Florian Reinfrank. 2012. Group Decision Support for Requirements Negotiation. In Advances in User Modeling - UMAP 2011 Workshops [Garcia et al. 2009] Inma Garcia, Laura Sebastia, Eva Onaindia, and Cesar Guzman. 2009. A Group Recommender System for Tourist Activities. In Proceedings of the 10th International Conference on E-Commerce and Web Technologies (EC-Web 2009) [Gartrell et al. 2010] Mike Gartrell, Xinyu Xing, Qin Lv, Aaron Beach, Richard Han, Shivakant Mishra, and Karim Seada. 2010. Enhancing group recommendation by incorporating social relationship interactions. In Proceedings of the 16th ACM international conference on Supporting group work (GROUP '10) [Goren-Bar and Glinansky 2004] Dina Goren-Bar and Oded Glinansky. 2004. FIT: recommending TV programs to family members. Computers & Graphics 28(2) [Jameson 2004] Anthony Jameson. 2004. More than the sum of its members: challenges for group recommender systems. In Proceedings of the working conference on Advanced visual interfaces (AVI '04).
  136. 136. References [Jameson and Smyth 2007] Anthony Jameson and Barry Smyth. 2007. Recommendation to Groups. In The Adaptive Web, Methods and Strategies of Web Personalization. [Jung 2012] Jason J. Jung. 2012. Attribute selection-based recommendation framework for short-head user group: An empirical study by MovieLens and IMDB. [Kim and El Saddik 2015] Heung-Nam Kim and Abdulmotaleb El Saddik. 2015. A stochastic approach to group recommendations in social media systems. Inf. Syst. 50 [Kim et al. 2010] Jae Kyeong Kim, Hyea Kyeong Kim, Hee Young Oh, and Young U. Ryu. 2010. A group recommendation system for online communities. Int. J. Inf. Manag. 30 [Lieberman et al. 1999] Henry Lieberman, Neil W. Van Dyke, and Adriana Santarosa Vivacqua. 1999. Let’s Browse: A Collaborative Web Browsing Agent. In IUI [Lin et al. 2017] Xiao Lin, Min Zhang, Yongfeng Zhang, and Zhaoquan Gu, “Fairness- aware group recommendation with pareto efficiency,” in Proceedings of RecSys 2017 [Liu et al. 2012] Xingjie Liu, Qi He, Yuanyuan Tian, Wang-Chien Lee, John McPherson, and Jiawei Han. 2012. Event-based social networks: linking the online and offline social worlds. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD '12).
  137. 137. References [Masthoff 2011] Judith Masthoff. 2011. Group Recommender Systems: Combining Individual Models. In Recommender Systems Handbook [Masthoff 2015] Judith Masthoff. 2015. Group Recommender Systems: Aggregation, Satisfaction and Group Attributes. In Recommender Systems Handbook [McCarthy and Anagnost 1998] Joseph F. McCarthy and Theodore D. Anagnost. 1998. MusicFX: An Arbiter of Group Preferences for Computer Supported Collaborative Workouts. In CSCW ’98, Proceedings of the ACM 1998 Conference on Computer Supported Cooperative Work [McCarthy 2002] J.F. McCarthy. 2002. Pocket RestaurantFinder: A Situated Recommender System for Groups. In Workshop on Mobile Ad-Hoc Communication at the 2002 ACM Conference on Human Factors in Computer Systems. [McCarthy et al. 2006] Kevin McCarthy, Maria Salamo, Lorcan Coyle, Lorraine McGinty, Barry Smyth, and Paddy Nixon. 2006. CATS: A Synchronous Approach to Collaborative Group Recommendation. In Proceedings of the Nineteenth International Florida Artificial Intelligence Research Society Conference
  138. 138. References [O’Connor et al. 2001] Mark O’Connor, Dan Cosley, Joseph A. Konstan, and John Riedl. 2001. PolyLens: A recommender system for groups of users. In Proceedings of the Seventh European Conference on Computer Supported Cooperative Work [O’Hara et al. 2004] Kenton O'Hara, Matthew Lipson, Marcel Jansen, Axel Unger, Huw Jeffries, and Peter Macer. 2004. Jukola: democratic music choice in a public space. In Proceedings of the 5th conference on Designing interactive systems: processes, practices, methods, and techniques (DIS '04). [Padmanabhan et al. 2011] Vineet Padmanabhan, Siva Krishna Seemala, and Wilson Naik Bhukya. 2011. A rule based approach to group recommender systems. In Proceedings of the 5th international conference on Multi- Disciplinary Trends in Artificial Intelligence (MIWAI’11). [Pizzutilo et al. 2005] Sebastiano Pizzutilo, Berardina De Carolis, Giovanni Cozzolongo, and Francesco Ambruoso. 2005. Group modeling in a public space: Methods, techniques and experiences. In Proceedings of WSEAS AIC 05. [Qi et al. 2016] Shuyao Qi, Nikos Mamoulis, Evaggelia Pitoura, and Panayiotis Tsaparas, “Recommending packages to groups” in Proceedings of ICDM 2016.
  139. 139. References [Quijano-Sanchez et al. 2012] Lara Quijano-Sanchez, Derek G. Bridge, Belen Diaz-Agudo, and Juan A. Recio-Garcia. 2012. A Case-Based Solution to the Cold-Start Problem in Group Recommenders. In Case-Based Reasoning Research and Development - 20th International Conference, ICCBR 2012 [Quijano Sanchez et al. 2014] Lara Quijano Sanchez, Belen Diaz-Agudo, and Juan A. Recio-Garcia. 2014. Development of a group recommender application in a Social Network. Knowl.-Based Syst. [Ricci et al. 2015] Francesco Ricci, Lior Rokach, and Bracha Shapira. 2015. Recommender Systems: Introduction and Challenges. In Recommender Systems Handbook. [Sebastia et al. 2009] Laura Sebastia, Inma Garcia, Eva Onaindia, Cesar Guzman. 2009. E-Tourism: a Tourist Recommendation and Planning Application. International Journal on Artificial Intelligence Tools 18(5) [Senot et al. 2010] Christophe Senot, Dimitre Kostadinov, Makram Bouzid, Jerome Picault, Armen Aghasaryan, and Cedric Bernier. 2010. Analysis of Strategies for Building Group Profiles. In User Modeling, Adaptation, and Personalization, 18th International Conference, UMAP 2010 [Serbos et al. 2017] Dimitris Serbos, Shuyao Qi, Nikos Mamoulis, Evaggelia Pitoura, and Panayiotis Tsaparas. 2017. Fairness in package-to-group recommendations. In Proceedings of WWW ’17
  140. 140. References [Wei et al. 2016] Honghao Wei, Cheng-Kang Hsieh, Longqi Yang, and Deborah Estrin. 2016. GroupLink: Group Event Recommendations Using Personal Digital Traces. In Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion (CSCW '16 Companion) [Xie and Lui 2015] Hong Xie and John C. S. Lui. 2015. Mathematical Modeling and Analysis of Product Rating with Partial Information. ACM Trans. Knowl. Discov. Data 9, 4 [Yu et al. 2006] Zhiwen Yu, Xingshe Zhou, Yanbin Hao, and Jianhua Gu. 2006. TV Program Recommendation for Multiple Viewers Based on user Profile Merging. User Modeling and User-Adapted Interaction 16, 1 [Yuan et al. 2014] Quan Yuan, Gao Cong, and Chin-Yew Lin. 2014. COM: a generative model for group recommendation. In The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14 [Zapata et al. 2015] Alfredo Zapata, Victor H. Menendez, Manuel E. Prieto, and Cristobal Romero. 2015. Evaluation and selection of group recommendation strategies for collaborative searching of learning objects. Int. J. Hum.-Comput. Stud.
