Deviation-Based Contextual SLIM Recommenders
Yong Zheng, Bamshad Mobasher, Robin Burke
DePaul University, Chicago, IL, USA
@CIKM 2014, Shanghai, China, Nov 4, 2014
Outline of the Talk
• Context-aware Recommender Systems (CARS)
• Collaborative Filtering and SLIM Recommenders
• CSLIM: Contextualizing SLIM Recommenders
• Experimental Evaluations
• Conclusions and Future Work
Outline of the Talk
• Context-aware Recommender Systems (CARS)
• Collaborative Filtering and SLIM Recommenders
• CSLIM: Contextualizing SLIM Recommenders
• Experimental Evaluations
• Conclusions and Future Work
Traditional Recommender Systems (RS)
T1 T2 T3 T4 T5
U1 3 2
U2 3 3 4
U3 4 2 1
U4 2 5 5
U5 3 2 4 2
Example: User-Item 2D-Rating Matrix
Traditional Recommender: Users × Items → Ratings
Context-aware RS (CARS)
Motivation: recommendation cannot be done in isolation from contexts, because users'
preferences change from context to context (e.g., depending on the companion).
Context-aware RS (CARS)
Example: User-Item Contextual Rating Matrix
In CARS: Users × Items × Contexts → Ratings
Context-aware RS (CARS)
Example: User-Item Contextual Rating Matrix
Terminology:
Context dimension: time, location, companion
Context condition: values in a specific dimension, e.g.,
weekend and weekday are two conditions in the
context dimension “Time”
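To make the terminology concrete, here is a minimal sketch (plain Python; the example values are invented) of a single contextual rating, with one condition per context dimension:

```python
# One contextual rating: user, item, rating, plus one condition per dimension.
contextual_rating = {
    "user": "U1",
    "item": "T3",
    "rating": 4,
    "contexts": {            # context dimension -> context condition
        "Time": "weekend",
        "Location": "home",
        "Companion": "family",
    },
}
```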
Context-aware RS (CARS)
Representational CARS (R-CARS):
Assuming known influential contextual variables are available
(e.g., location, time, mood, etc.), how can we build CARS
algorithms that adapt to users' preferences in different
contextual situations?
Context-aware RS (CARS)
Most research in R-CARS focuses on the development of
context-aware collaborative filtering (CACF), i.e.,
incorporating contexts into CF to obtain CACF.
Outline of the Talk
• Context-aware Recommender Systems (CARS)
• Collaborative Filtering and SLIM Recommenders
• CSLIM: Contextualizing SLIM Recommenders
• Experimental Evaluations
• Conclusions and Future Work
Collaborative Filtering (CF)
CF is one of the most popular recommendation algorithms.
1). Memory-based CF
Such as user-based CF and item-based CF
Pros: good for explanation; Cons: sparsity problems
2). Model-based CF
Such as matrix factorization, etc
Pros: good performance; Cons: cold-start, explanation
3). Hybrid CF Recommendation Algorithms
Such as content-based hybrid CF, etc
Pros: further improvement; Cons: running costs
Item-based CF (ItemKNN, Sarwar, 2001)
T1 T2 T3 T4 T5
U1 3 2
U2 3 3 ??? 4
U3 4 2 1
U4 2 5 5
U5 3 2 4 2
Rating Prediction:
$P_{u,i} = \dfrac{\sum_{j \in N(i)} R_{u,j} \times \mathrm{sim}(i,j)}{\sum_{j \in N(i)} \mathrm{sim}(i,j)}$
Cons: item-item similarity calculations and
neighborhood selections rely on co-ratings.
What if the # of co-ratings is limited?
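As a reference point for the contextual models later, here is a minimal NumPy sketch of the ItemKNN prediction above; it assumes a dense user-item matrix R with 0 for missing ratings and a precomputed item-item similarity matrix sim (both hypothetical inputs).

```python
import numpy as np

def itemknn_predict(R, sim, u, i, k=20):
    """Predict R[u, i] as the similarity-weighted average of user u's ratings
    on the k items most similar to item i (0 in R means 'not rated')."""
    rated = np.where(R[u] > 0)[0]          # items already rated by user u
    rated = rated[rated != i]
    if rated.size == 0:
        return 0.0
    neighbors = rated[np.argsort(sim[i, rated])[::-1][:k]]  # top-k neighbors
    weights = sim[i, neighbors]
    if weights.sum() == 0:
        return 0.0
    return float(np.dot(R[u, neighbors], weights) / weights.sum())
```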
SLIM (Ning et al., 2011)
The Sparse Linear Method (SLIM) can be viewed as another
form of collaborative filtering.
Ranking Score Prediction:
Matrix R = User-Item rating matrix;
Matrix W = Item-Item coefficient matrix ≈ similarity matrix
We refer to this approach as SLIM-I, since W represents
item-item coefficients.
$S_{i,j} = R_{i,:} \cdot W_{:,j} = \sum_{h=1, h \neq j}^{N} R_{i,h} W_{h,j}$
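The ranking score above is a plain matrix product once the diagonal of W is excluded; a minimal NumPy sketch follows (how W is learned, e.g. via L1/L2-regularized regression as in the SLIM paper, is not shown here).

```python
import numpy as np

def slim_scores(R, W):
    """Ranking scores S = R . W, with the diagonal of W forced to zero so an
    item's own rating never contributes to its score (h != j in the sum)."""
    W = W.copy()
    np.fill_diagonal(W, 0.0)
    return R @ W            # S[i, j] = sum_{h != j} R[i, h] * W[h, j]
```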
Comparison Between ItemKNN & SLIM-I
Pros of SLIM-I:
Matrix W is learned by directly minimizing the prediction/ranking
error; in other words, the item-item coefficients (similarities) are no
longer calculated from co-ratings, which makes them more reliable
and allows them to be optimized directly for ranking.
SLIM-I has been demonstrated to outperform UserKNN,
ItemKNN, matrix factorization and other traditional RS
algorithms.
Rating Prediction in ItemKNN:
$P_{u,i} = \dfrac{\sum_{j \in N(i)} R_{u,j} \times \mathrm{sim}(i,j)}{\sum_{j \in N(i)} \mathrm{sim}(i,j)}$
Ranking Score Prediction in SLIM-I:
$S_{i,j} = R_{i,:} \cdot W_{:,j} = \sum_{h=1, h \neq j}^{N} R_{i,h} W_{h,j}$
SLIM-I and SLIM-U
SLIM-I is another form of ItemKNN; W = item-item coefficient matrix;
SLIM-U is another form of UserKNN; W = user-user coefficient matrix;
Outline of the Talk
• Context-aware Recommender Systems (CARS)
• Collaborative Filtering and SLIM Recommenders
• CSLIM: Contextualizing SLIM Recommenders
• Experimental Evaluations
• Conclusions and Future Work
CSLIM: Contextual SLIM Recommenders
We use SLIM-I as an example to introduce how to build
CSLIM-I approaches; contexts can also be incorporated
into SLIM-U to formulate CSLIM-U models accordingly.
Ranking Prediction in SLIM-I:
$S_{i,j} = R_{i,:} \cdot W_{:,j} = \sum_{h=1, h \neq j}^{N} R_{i,h} W_{h,j}$
Incorporating contexts, CSLIM has a uniform ranking prediction:
$S_{i,j,c} = \sum_{h=1, h \neq j}^{N} R_{i,h,c} W_{h,j}$
CSLIM aggregates contextual ratings with item-item coefficients.
There are two key points:
1). The ratings to be aggregated should be placed under the same context c;
2). Accordingly, W indicates coefficients under the same contexts.
CSLIM: Contextual SLIM Recommenders
The challenge is how to estimate $R_{i,h,c}$, since contextual
ratings are usually sparse – it is not guaranteed that the
same user has already rated other items in the same context c.
Ranking Prediction in CSLIM-I:
We used a deviation-based approach to estimate it.
Matrix R: user-item 2D rating matrix (non-contextual ratings)
Matrix W: item-item coefficient matrix
Matrix D: a matrix estimating rating deviations in contexts;
Here, D is a CI matrix (rows are items, cols are contexts)
This approach is named CSLIM-I-CI.
$S_{i,j,c} = \sum_{h=1, h \neq j}^{N} R_{i,h,c} W_{h,j}$
CSLIM: Contextual SLIM Recommenders
We used a deviation-based approach to estimate it.
Example: CSLIM-I-CI,
R = non-contextual Rating Matrix
D = Contextual Rating Deviation Matrix
W = Item-item Coefficient Matrix
C = a binary context vector, as below
$R_{i,j,c} = R_{i,j} + \sum_{l=1}^{L} D_{j,l}\, c_l$
Weekend Weekday At Home At Park
1 0 0 1
We use this estimation even if we already know a real contextual rating in
situation c, since we’d like to learn as many cells in D as possible.
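To make the two formulas concrete, here is a minimal NumPy sketch of a CSLIM-I-CI ranking score under stated assumptions: R is a dense user-item matrix of non-contextual ratings (0 = missing), D is an item-by-condition deviation matrix, W is an item-item coefficient matrix, and c is a binary context vector; restricting the sum to items the user has rated is an implementation choice of this sketch, not notation from the slides.

```python
def cslim_i_ci_score(R, D, W, u, j, c):
    """Ranking score for user u on item j under binary context vector c.
    Contextual ratings are estimated as R[u, h] + sum_l D[h, l] * c[l]
    (deviation-based estimate), then aggregated with the coefficients W.
    R, D, W, c are assumed to be NumPy arrays; 0 in R means 'not rated'."""
    rated = R[u] > 0                       # items user u has rated
    rated[j] = False                       # exclude the target item (h != j)
    r_hat_c = R[u, rated] + D[rated] @ c   # estimated contextual ratings
    return float(r_hat_c @ W[rated, j])    # S[u, j, c]
```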
CSLIM: Contextual SLIM Recommenders
There are three ways to model contextual rating deviation (CRD) in D:
1). D is a CI matrix – assuming there is CRD for each <item, context> pair
2). D is a CU matrix – assuming there is CRD for each <user, context> pair
3). D is a vector – assuming CRD is only dependent on the context
Incorporate contexts into SLIM-I: CSLIM-I-CI, CSLIM-I-CU, CSLIM-I-C;
Incorporate contexts into SLIM-U: CSLIM-U-CI, CSLIM-U-CU, CSLIM-U-C;
We have built six Deviation-based CSLIM models!!
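As a rough sketch of how the three shapes of D change only the deviation term (the shapes and the `mode` flag below are my own assumptions for illustration, not the paper's notation):

```python
def crd_term(c, mode, D, j=None, u=None):
    """Contextual rating deviation (CRD) for a binary context vector c.
    mode 'CI': D is an item-by-condition matrix, indexed by item j;
    mode 'CU': D is a user-by-condition matrix, indexed by user u;
    mode 'C' : D is a single vector over conditions shared by all."""
    if mode == "CI":
        return float(D[j] @ c)
    if mode == "CU":
        return float(D[u] @ c)
    return float(D @ c)  # mode "C"
```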
Further Step: General CSLIM Approaches
Cons: CSLIM requires users' non-contextual ratings on items; if
there are no such ratings, we proposed using the average of a user's
contextual ratings on the item as a representative, which our
experiments showed to be feasible.
However, we'd like to build more General CSLIM (GCSLIM) models
which do not require non-contextual ratings at all.
Simply put, we model matrix D as a CC matrix, where each cell in D
represents the CRD between a pair of contextual conditions.
GCSLIM-I-CC can then estimate rating deviations from one contextual
rating to another (same item but different contexts).
Further Step: General CSLIM Approaches
For example, we want to estimate R<u1, t1, {Weekday, At home}>
And we already know the rating R<u1, t1, {Weekend, At cinema}>
And Matrix D helps us to learn and estimate
CRD (Weekday, Weekend) & CRD (At home, At cinema)
Therefore, R<u1, t1, {Weekday, At home}> =
R<u1, t1, {Weekend, At cinema}> + CRD (Weekday, Weekend)
+ CRD (At home, At cinema)
Similarly, matrix D can be paired with users or items; e.g., we can
assume the CRD between contexts differs from user to user.
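A minimal sketch of the GCSLIM-style estimate in the example above, assuming D_cc is a condition-by-condition CRD matrix and both context situations are given as parallel lists of condition indices, one per context dimension (all names here are hypothetical):

```python
def gcslim_estimate(r_known, known_conds, target_conds, D_cc):
    """Estimate a rating in the target context from a known rating on the
    same (user, item) pair in another context by adding per-dimension CRDs,
    e.g. CRD(Weekday, Weekend) + CRD(At home, At cinema)."""
    return r_known + sum(D_cc[t][k] for t, k in zip(target_conds, known_conds))
```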
Further Step: General CSLIM Approaches
Two challenges in GCSLIM approaches:
1). For each <user, item> pair, there could be several ratings for
this pair but in different contexts. Which contextual rating should
be applied?
If we use all those ratings → increased computational costs;
If we select only one of them → there are three strategies: MostSimilar,
LeastSimilar and Random; our experiments showed we could simply
pick one at random. See our papers for more details.
Further Step: General CSLIM Approaches
Two challenges in GCSLIM approaches:
2). How to couple matrix D with the user or item dimension?
Assigning a separate D to each user/item → increased computational costs.
Solution: we can cluster users/items into small groups, and assume
the users/items in the same group share the same matrix D.
We will explore this in future work.
Outline of the Talk
• Context-aware Recommender Systems (CARS)
• Collaborative Filtering and SLIM Recommenders
• CSLIM: Contextualizing SLIM Recommenders
• Experimental Evaluations
• Conclusions and Future Work
Data Sets
The current situation in the CARS research domain:
1). The number of data sets is limited;
2). The data is either small or sparse;
3). There are no large data sets, or larger ones are not publicly
accessible. Most data were collected from surveys.
All the data sets used can be found here: http://tiny.cc/contextdata
Due to limited time, we only present results based on the
restaurant and music data in these slides. See more results in our
CIKM paper.
Baseline Approaches
We chose state-of-the-art CACF algorithms as baselines:
1). Differential Context Modeling (DCM): DCM incorporates contexts
into UserKNN/ItemKNN, but it suffers from the sparsity problem and
performs the worst in terms of precision, recall and MAP.
2). Context-aware Splitting Approaches (CASA): CASA is a contextual
transformation approach, where the contextual data are converted into
a 2D user-item rating matrix, and then a traditional approach (MF in
this case) is applied to the transformed data.
3). Context-aware Matrix Factorization (CAMF): CAMF incorporates
contexts into MF, where CRD is modeled in a similar way to CSLIM.
4). Tensor Factorization (TF): TF is an independent context-aware
algorithm, since contexts are assumed to be independent of the
user and item dimensions. TF's computational cost increases as
the number of contexts increases.
Evaluation Protocols
1). 5-fold Cross-validation
All algorithms were run on the same five folds of the data.
2). Top-N Recommendation Evaluations
Metrics: Precision, Recall and MAP (Mean Average Precision)
Precision and Recall are used to measure accuracy;
MAP is used to measure where relevant items land in the ranking.
Research Questions:
1). Does CSLIM outperform the state-of-the-art CARS algorithms?
2). How about GCSLIM? Is it better than CSLIM?
3). There are so many CSLIM algorithms; are there guidelines to
pre-select the appropriate one?
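For reference, a minimal sketch of the MAP metric mentioned above, assuming binary relevance judgments; this is the generic textbook definition, not the exact evaluation code used in the paper.

```python
def average_precision(ranked_items, relevant):
    """AP for one user: mean of precision@k over the ranks k that hold hits."""
    hits, precisions = 0, []
    for k, item in enumerate(ranked_items, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(all_rankings, all_relevant):
    """MAP: average AP over users (parallel lists of rankings / relevant sets)."""
    aps = [average_precision(r, rel) for r, rel in zip(all_rankings, all_relevant)]
    return sum(aps) / len(aps) if aps else 0.0
```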
Evaluation Results
Research Questions:
1). Does CSLIM outperform the state-of-the-art CARS algorithms?
2). How about GCSLIM? Is it better than CSLIM?
3). There are so many CSLIM algorithms; are there guidelines to
pre-select the appropriate one?
Evaluation Results
Research Questions:
1). Does CSLIM outperform the state-of-the-art CARS algorithms?
2). How about GCSLIM? Is it better than CSLIM?
3). There are so many CSLIM algorithms; are there guidelines to
pre-select the appropriate one?
Evaluation Results
Research Questions:
1). Does CSLIM outperform the state-of-the-art CARS algorithms?
2). How about GCSLIM? Is it better than CSLIM?
3). There are so many CSLIM algorithms; are there guidelines to
pre-select the appropriate one?
There are two pieces in each CSLIM algorithm's name; for example, CSLIM-I-CI:
1). CSLIM-I indicates we perform an ItemKNN-style CF approach;
2). -CI indicates we model CRD as a CI matrix;
Questions:
1). Should CSLIM-I/ItemKNN or CSLIM-U/UserKNN be used?
Answer: it depends on the average number of ratings per item versus
the average number of ratings per user.
2). Should -CI, -CU or -C be applied?
Answer: it depends on whether contexts are more dependent on users or on items.
For more details, see our CIKM paper.
Evaluation Results
What about running efficiency?
Typically, in CSLIM and GCSLIM, the matrices D and W must be
learned during training. There can be different challenges:
1). Large numbers of users/items/ratings
In this case, the non-contextual rating matrix R or the rating space
P will be very large, as will the matrix W.
Solution: adopt a KNN strategy. We do not use all the ratings, but
only select the top-N neighbors (items or users).
2). Large numbers of contexts
What if there are many contextual conditions? Usually, in the CARS
domain, the number of contextual dimensions is within 10, and the
number of contextual conditions is at most around 100.
Solution: there are many ways to pre-select influential contexts,
which helps reduce the number of contexts.
Outline of the Talk
• Context-aware Recommender Systems (CARS)
• Collaborative Filtering and SLIM Recommenders
• CSLIM: Contextualizing SLIM Recommenders
• Experimental Evaluations
• Conclusions and Future Work
Conclusions
1). CSLIM has been demonstrated to outperform the
state-of-the-art CARS algorithms;
2). GCSLIM sometimes contributes further improvements, but it is
not guaranteed that GCSLIM always beats CSLIM algorithms – it
depends on how sparse the contextual ratings are;
3). We identified some influential factors and latent rules for
selecting the appropriate CSLIM algorithm in advance.
Future Work
1). Examine CSLIM and GCSLIM on larger data sets;
2). Compare with more models, e.g., factorization machines;
3). Couple the CC matrix with users/items in the GCSLIM approach;
4). Incorporate contexts into matrix W instead of adding the matrix D.
Deviation-Based Contextual SLIM Recommenders
Yong Zheng, Bamshad Mobasher, Robin Burke
DePaul University, Chicago, IL, USA
@CIKM 2014, Shanghai, China, Nov 4, 2014
Thanks!
Questions?