1. New Directions in Mahout’s Recommenders
Sebastian Schelter, Apache Software Foundation
Recommender Systems Get-together Berlin
New Directions?
Mahout in Action is the prime source of
information for using Mahout in practice.
As it is more than two years old, it
is missing a lot of recent developments.
This talk describes what has been added to the recommenders
of Mahout since then.
New recommenders and factorizers
BiasedItemBasedRecommender, item-based kNN with
user-item-bias estimation
Koren: Factor in the Neighbors: Scalable and Accurate Collaborative Filtering, TKDD ’09
RatingSGDFactorizer, biased matrix factorization
Koren et al.: Matrix Factorization Techniques for Recommender Systems, IEEE Computer ’09
SVDPlusPlusFactorizer, SVD++
Koren: Factorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model, KDD ’08
ALSWRFactorizer, matrix factorization using Alternating
Least Squares
Zhou et al.: Large-Scale Parallel Collaborative Filtering for the Netflix Prize, AAIM ’08
Hu et al.: Collaborative Filtering for Implicit Feedback Datasets, ICDM ’08
Batch Item-Similarities on a single machine
Simple but powerful way to deploy Mahout: Use item-based
collaborative filtering with periodically precomputed item
similarities.
Mahout now supports multithreaded item similarity
computation on a single machine for data sizes that don’t
require a Hadoop-based solution.
DataModel dataModel = new FileDataModel(new File("movielens.csv"));
ItemSimilarity similarity = new LogLikelihoodSimilarity(dataModel);
ItemBasedRecommender recommender =
    new GenericItemBasedRecommender(dataModel, similarity);
BatchItemSimilarities batch =
    new MultithreadedBatchItemSimilarities(recommender, k);
batch.computeItemSimilarities(numThreads, maxDurationInHours,
    new FileSimilarItemsWriter(resultFile));
Collaborative Filtering
idea: infer recommendations from patterns found in the
historical user-item interactions
data can be explicit feedback (ratings) or implicit feedback
(clicks, pageviews), represented in the interaction matrix A
        item1  ···  item3  ···
user1     3    ···    4    ···
user2     −    ···    4    ···
user3     5    ···    1    ···
  ···    ···   ···   ···   ···
row aᵢ denotes the interaction history of user i
we target use cases with millions of users and hundreds of
millions of interactions
MapReduce
paradigm for data-intensive parallel processing
data is partitioned in a distributed file system
computation is moved to data
system handles distribution, execution, scheduling, failures
fixed processing pipeline where user specifies two
functions
map : (k1, v1) → list(k2, v2)
reduce : (k2, list(v2)) → list(v2)
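The two signatures above can be sketched as a toy in-memory word count in plain Java. This is only an illustration of the paradigm, not Hadoop code; all class and method names here are made up for the example.

```java
import java.util.*;
import java.util.stream.*;

// Toy in-memory illustration of the two user-supplied MapReduce functions:
// map turns one input record into (key, value) pairs, the framework groups
// the pairs by key (the shuffle), and reduce folds each key's values.
public class ToyMapReduce {

  // map : (k1, v1) -> list(k2, v2); here: (line number, line) -> (word, 1) pairs
  static List<Map.Entry<String, Integer>> map(int lineNo, String line) {
    return Arrays.stream(line.split("\\s+"))
        .map(w -> Map.entry(w, 1))
        .collect(Collectors.toList());
  }

  // reduce : (k2, list(v2)) -> list(v2); here: sum the counts for one word
  static int reduce(String word, List<Integer> counts) {
    return counts.stream().mapToInt(Integer::intValue).sum();
  }

  // stand-in for the framework: run map over all inputs, shuffle, reduce
  static Map<String, Integer> run(List<String> lines) {
    Map<String, List<Integer>> shuffled = new TreeMap<>();
    for (int i = 0; i < lines.size(); i++) {
      for (Map.Entry<String, Integer> kv : map(i, lines.get(i))) {
        shuffled.computeIfAbsent(kv.getKey(), k -> new ArrayList<>()).add(kv.getValue());
      }
    }
    Map<String, Integer> out = new TreeMap<>();
    shuffled.forEach((k, vs) -> out.put(k, reduce(k, vs)));
    return out;
  }

  public static void main(String[] args) {
    System.out.println(run(List.of("to be or not", "to be")));
  }
}
```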
(figure: input partitions in the DFS flow through map tasks, a shuffle, and reduce tasks into output partitions in the DFS)
Neighborhood Methods
Item-Based Collaborative Filtering is one of the most
deployed CF algorithms, because:
simple and intuitively understandable
additionally gives non-personalized, per-item
recommendations (people who like X might also like Y)
recommendations for new users without model retraining
comprehensible explanations (we recommend Y because
you liked X)
Cooccurrences
start with a simplified view: imagine the interaction matrix A were binary
→ we look at cooccurrences only
item similarity computation becomes a matrix multiplication:
rᵢ = (AᵀA) aᵢ
scaling out the item-based approach reduces to finding an efficient way to compute the item similarity matrix
S = AᵀA
Parallelizing S = AᵀA
standard approach of computing item cooccurrences requires
random access to both users and items
foreach item f do
  foreach user i who interacted with f do
    foreach item j that i also interacted with do
      S_fj = S_fj + 1
→ not efficiently parallelizable on partitioned data
row outer product formulation of matrix multiplication is
efficiently parallelizable on a row-partitioned A
S = AᵀA = Σ_{i∈A} aᵢᵀ aᵢ
mappers compute the outer products of rows of A, emit the
results row-wise, reducers sum these up to form S
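A minimal single-machine sketch of this formulation, assuming a binary A stored row-wise as per-user item-id arrays. The names are hypothetical and no actual MapReduce machinery is involved; the inner loops play the mapper's outer product, the increments play the reducer's summation.

```java
import java.util.*;

// Sketch of the row outer product formulation on a row-partitioned binary
// interaction matrix: each "mapper" emits the outer product a_i^T a_i of one
// user row, and the "reducer" sums these contributions into S = A^T A.
public class CooccurrenceSketch {

  // each int[] holds the item ids one user interacted with (one row of A)
  static int[][] cooccurrences(List<int[]> userHistories, int numItems) {
    int[][] s = new int[numItems][numItems];
    for (int[] items : userHistories) {   // one outer product per user row
      for (int f : items) {
        for (int j : items) {
          s[f][j]++;                      // reducer side: sum the contributions
        }
      }
    }
    return s;
  }

  public static void main(String[] args) {
    // three users with histories {0,2}, {1,2}, {0,1,2}
    List<int[]> a = List.of(new int[]{0, 2}, new int[]{1, 2}, new int[]{0, 1, 2});
    System.out.println(Arrays.deepToString(cooccurrences(a, 3)));
  }
}
```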
Parallel similarity computation
real datasets are not binary and we want to use a variety of
similarity measures, e.g. Pearson correlation
express similarity measures by 3 canonical functions, which
can be efficiently embedded into the computation (cf.,
VectorSimilarityMeasure)
preprocess adjusts an item rating vector:
f̂ = preprocess(f)    ĵ = preprocess(j)
norm computes a single number from the adjusted vector:
n_f = norm(f̂)    n_j = norm(ĵ)
similarity computes the similarity of two vectors from the norms and their dot product:
S_fj = similarity(dot_fj, n_f, n_j)
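As a sketch, here is one concrete measure, Pearson correlation, expressed through the three functions. This is illustrative code, not Mahout's VectorSimilarityMeasure API; it assumes dense rating vectors over the same set of users, and all names are made up for the example.

```java
import java.util.Arrays;

// Pearson correlation expressed through the three canonical functions:
// centering in preprocess, L2 norm in norm, normalized dot product in similarity.
public class PearsonAsThreeFunctions {

  // preprocess: subtract the vector's mean rating from each entry
  static double[] preprocess(double[] ratings) {
    double mean = Arrays.stream(ratings).average().orElse(0);
    return Arrays.stream(ratings).map(r -> r - mean).toArray();
  }

  // norm: a single number per item vector, here the L2 norm of the centered vector
  static double norm(double[] centered) {
    return Math.sqrt(Arrays.stream(centered).map(x -> x * x).sum());
  }

  // similarity: computed only from the two norms and the dot product
  static double similarity(double dot, double nf, double nj) {
    return dot / (nf * nj);
  }

  static double pearson(double[] f, double[] j) {
    double[] fHat = preprocess(f), jHat = preprocess(j);
    double dot = 0;
    for (int k = 0; k < fHat.length; k++) dot += fHat[k] * jHat[k];
    return similarity(dot, norm(fHat), norm(jHat));
  }

  public static void main(String[] args) {
    // perfectly correlated vectors, result is approximately 1.0
    System.out.println(pearson(new double[]{3, 4, 5}, new double[]{1, 2, 3}));
  }
}
```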
Example: Jaccard coefficient
preprocess binarizes the rating vectors:
f = (3, −, 5)ᵀ    j = (4, 4, 1)ᵀ
f̂ = bin(f) = (1, 0, 1)ᵀ    ĵ = bin(j) = (1, 1, 1)ᵀ
norm computes the number of users that rated each item:
n_f = ‖f̂‖₁ = 2    n_j = ‖ĵ‖₁ = 3
similarity finally computes the Jaccard coefficient from the norms and the dot product of the vectors:
jaccard(f, j) = |f̂ ∩ ĵ| / |f̂ ∪ ĵ| = dot_fj / (n_f + n_j − dot_fj) = 2 / (2 + 3 − 2) = 2/3
Cost of the algorithm
major cost in our algorithm is the communication in the
second MapReduce pass: for each user, we have to process the
square of the number of his interactions
S = Σ_{i∈A} aᵢᵀ aᵢ
→ cost is dominated by the densest rows of A
(the users with the highest number of interactions)
distribution of interactions per user is usually heavy-tailed
→ a small number of power users with a disproportionately
high number of interactions drastically increases the runtime
if a user has more than p interactions, only use a random
sample of size p of his interactions
we saw a negligible effect on prediction quality for moderate p
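A sketch of this interaction cut, assuming each user's interactions arrive as a plain item-id array (names are made up for the example); a partial Fisher-Yates shuffle yields a uniform sample of size p:

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of the interaction cut: if a user has more than p interactions,
// keep only a uniform random sample of size p of them.
public class InteractionCut {

  static int[] sampleInteractions(int[] itemIds, int p, Random random) {
    if (itemIds.length <= p) {
      return itemIds;                         // nothing to cut for ordinary users
    }
    int[] copy = itemIds.clone();
    for (int k = 0; k < p; k++) {             // partial Fisher-Yates shuffle:
      int swap = k + random.nextInt(copy.length - k); // move a random pick to slot k
      int tmp = copy[k]; copy[k] = copy[swap]; copy[swap] = tmp;
    }
    return Arrays.copyOf(copy, p);            // first p entries are a uniform sample
  }

  public static void main(String[] args) {
    int[] powerUser = new int[1000];
    for (int k = 0; k < powerUser.length; k++) powerUser[k] = k;
    System.out.println(sampleInteractions(powerUser, 500, new Random(42)).length); // prints 500
  }
}
```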
Scalable Neighborhood Methods: Experiments
Setup
26 machines running Java 7 and Hadoop 1.0.4
two 4-core Opteron CPUs, 32 GB memory and four 1 TB
disk drives per machine
Results
Yahoo Songs dataset (700M datapoints, 1.8M users, 136K
items), 26 machines, similarity computation takes less than 40
minutes
Latent factor models: idea
interactions are deeply influenced by a set of factors that are
very specific to the domain (e.g. amount of action or
complexity of characters in movies)
these factors are in general not obvious, we might be able to
think of some of them but it’s hard to estimate their impact
on the interactions
we need to infer these so-called latent factors from the
interaction data
low-rank matrix factorization
approximately factor A into the product of two rank r feature
matrices U and M such that A ≈ UM.
U models the latent features of the users, M models the latent
features of the items
the dot product uᵢᵀmⱼ in the latent feature space predicts the
strength of interaction between user i and item j
to obtain a factorization, minimize regularized squared error
over the observed interactions, e.g.:
min_{U,M} Σ_{(i,j)∈A} (a_ij − uᵢᵀ mⱼ)² + λ ( Σᵢ n_{uᵢ} ‖uᵢ‖² + Σⱼ n_{mⱼ} ‖mⱼ‖² )
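To make the objective concrete, here is a small sketch that evaluates this regularized squared error for a given factorization. The names and data layout (each observed interaction as a row {i, j, a_ij}) are made up for the example; n_ui and n_mj are the per-user and per-item interaction counts of the weighted-λ regularization.

```java
// Sketch: evaluate the regularized squared error of a factorization A ~ U M.
public class FactorizationLoss {

  static double dot(double[] x, double[] y) {
    double d = 0;
    for (int k = 0; k < x.length; k++) d += x[k] * y[k];
    return d;
  }

  static double loss(double[][] observed, double[][] u, double[][] m, double lambda) {
    double error = 0;
    int[] nu = new int[u.length], nm = new int[m.length]; // n_ui and n_mj counts
    for (double[] x : observed) {
      int i = (int) x[0], j = (int) x[1];
      double diff = x[2] - dot(u[i], m[j]);               // a_ij - u_i^T m_j
      error += diff * diff;
      nu[i]++;
      nm[j]++;
    }
    double reg = 0;
    for (int i = 0; i < u.length; i++) reg += nu[i] * dot(u[i], u[i]);
    for (int j = 0; j < m.length; j++) reg += nm[j] * dot(m[j], m[j]);
    return error + lambda * reg;
  }

  public static void main(String[] args) {
    // one user, one item, perfect prediction: only the regularizer contributes
    double[][] observed = {{0, 0, 5}};
    System.out.println(loss(observed, new double[][]{{1, 1}}, new double[][]{{2, 3}}, 0.1));
  }
}
```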
Alternating Least Squares
ALS rotates between fixing U and M. When U is fixed, the
system recomputes M by solving a least-squares problem per
item, and vice versa.
easy to parallelize, as all users (and vice versa, items) can be
recomputed independently
additionally, ALS is able to solve non-sparse models from
implicit data
A (u × i) ≈ U (u × k) × M (k × i)
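A minimal sketch of one half-step: with M fixed, a single user's feature vector is recomputed by solving the small system (MᵀM + λI) uᵢ = Mᵀaᵢ via Gaussian elimination. This is illustrative only, with made-up names; it assumes the user rated every item and uses a plain λI instead of the weighted-λ regularization, whereas a real solver would restrict M to the rows the user actually rated.

```java
import java.util.Arrays;

// Dense ALS half-step sketch: recompute one user's features with M fixed.
public class AlsSketch {

  // solve (M^T M + lambda * I) u_i = M^T a_i for one user
  static double[] recomputeUserFeatures(double[][] m, double[] a, double lambda) {
    int k = m[0].length;
    double[][] lhs = new double[k][k + 1];   // augmented system [M^T M + lambda I | M^T a]
    for (int r = 0; r < k; r++) {
      for (int c = 0; c < k; c++) {
        for (double[] mj : m) lhs[r][c] += mj[r] * mj[c];
      }
      lhs[r][r] += lambda;
      for (int j = 0; j < a.length; j++) lhs[r][k] += m[j][r] * a[j];
    }
    return gaussianElimination(lhs);
  }

  // solve a k x (k+1) augmented linear system (no pivoting, for brevity)
  static double[] gaussianElimination(double[][] aug) {
    int k = aug.length;
    for (int p = 0; p < k; p++) {
      for (int r = p + 1; r < k; r++) {
        double factor = aug[r][p] / aug[p][p];
        for (int c = p; c <= k; c++) aug[r][c] -= factor * aug[p][c];
      }
    }
    double[] x = new double[k];
    for (int r = k - 1; r >= 0; r--) {
      x[r] = aug[r][k];
      for (int c = r + 1; c < k; c++) x[r] -= aug[r][c] * x[c];
      x[r] /= aug[r][r];
    }
    return x;
  }

  public static void main(String[] args) {
    double[][] m = {{1, 0}, {0, 1}, {1, 1}}; // 3 items, k = 2
    double[] ratings = {3, 4, 7};            // this user rated all 3 items
    System.out.println(Arrays.toString(recomputeUserFeatures(m, ratings, 0.0)));
  }
}
```

Because each user's system depends only on M and that user's own interactions, all user rows can be recomputed independently, which is what makes ALS easy to parallelize.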
Implementation in Mahout
o.a.m.cf.taste.hadoop.als.ParallelALSFactorizationJob
computes a factorization using Alternating Least Squares, has
different solvers for explicit and implicit data
Zhou et al.: Large-Scale Parallel Collaborative Filtering for the Netflix Prize, AAIM ’08
Hu et al.: Collaborative Filtering for Implicit Feedback Datasets, ICDM ’08
o.a.m.cf.taste.hadoop.als.FactorizationEvaluator computes
the prediction error of a factorization on a test set
o.a.m.cf.taste.hadoop.als.RecommenderJob computes
recommendations from a factorization
Scalable Matrix Factorization: Implementation
Recompute user feature matrix U using a broadcast-join:
1. run a map-only job using multithreaded mappers
2. load the item-feature matrix M into memory from HDFS to
share it among the individual mappers
3. mappers read the interaction histories of the users
4. multithreaded: solve a least-squares problem per user to
recompute their feature vector
(figure: M is broadcast to every machine; each machine runs a map-side hash-join of its local partition of the user histories A with M and recomputes the corresponding partition of the user features U)
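A sketch of the multithreaded recomputation in step 4, with the per-user solve abstracted as a function. All names are hypothetical; in real code that function would capture the broadcast M, and here it is just passed in to show the wiring.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;

// Sketch: the broadcast item features are shared read-only among worker
// threads, and each user's feature vector is recomputed independently
// from that user's interaction history.
public class MultithreadedRecompute {

  static double[][] recomputeAllUsers(double[][] userHistories,
      Function<double[], double[]> solvePerUser, int numThreads) {
    ExecutorService pool = Executors.newFixedThreadPool(numThreads);
    try {
      List<Future<double[]>> futures = new ArrayList<>();
      for (double[] history : userHistories) {   // one independent task per user
        futures.add(pool.submit(() -> solvePerUser.apply(history)));
      }
      double[][] u = new double[userHistories.length][];
      for (int i = 0; i < futures.size(); i++) {
        u[i] = futures.get(i).get();             // collect the recomputed rows of U
      }
      return u;
    } catch (InterruptedException | ExecutionException e) {
      throw new RuntimeException(e);
    } finally {
      pool.shutdown();
    }
  }

  public static void main(String[] args) {
    // stand-in "solver" that just sums a history, to demonstrate the wiring
    double[][] u = recomputeAllUsers(new double[][]{{1, 2}, {3, 4}},
        history -> new double[]{history[0] + history[1]}, 2);
    System.out.println(Arrays.deepToString(u));
  }
}
```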
Scalable Matrix Factorization: Experiments
Setup
26 machines running Java 7 and Hadoop 1.0.4
two 4-core Opteron CPUs, 32 GB memory and four 1 TB
disk drives per machine
configured Hadoop to reuse JVMs, ran multithreaded
mappers
Results
Yahoo Songs dataset (700M datapoints), 26 machines, single
iteration (two map-only jobs) takes less than 2 minutes
Thanks for listening!
Follow me on twitter at http://twitter.com/sscdotopen
Join Mahout's mailing lists at http://s.apache.org/mahout-lists
picture on slide 3 by Tim Abott, http://www.flickr.com/photos/theabbott/
picture on slide 21 by Crimson Diabolics, http://crimsondiabolics.deviantart.com/