Challenges in Distributed Machine Learning
1. Challenges in Data Parallelism and Model Parallelism
Presenter: Jie Cao
Petuum: A New Platform for Distributed Machine Learning on Big Data (Eric Xing, CMU)
4. Current Solutions
1. Implementations of Specific ML Algorithms
YahooLDA
Vowpal Wabbit, a collection of fast learning algorithms, Yahoo -> Microsoft
Caffe, Deep Learning Framework, http://caffe.berkeleyvision.org/
2. Platforms for General-Purpose ML
Hadoop
Spark (Spark + parameter server, Spark + Caffe)
GraphLab
3. Other Systems
Parameter Server
Petuum
4. Specialized Hardware Acceleration
GPUs, FPGA-based accelerators, “DianNao” for large NNs.
Petuum
1. Systematically analyze ML models and tools
2. Find Common Properties
3. Build “workhorse” engines that solve entire classes of models
12. Parallel Stochastic Gradient Descent
Parallel SGD [Zinkevich et al., 2010]: partition the data across workers; every worker updates the full parameter vector.
PSGD runs SGD on a local copy of the parameters in each machine.
[Diagram: the input data is split across workers; each worker updates a local copy of ALL parameters on its split; the local copies are then aggregated into one update of ALL parameters.]
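A minimal sketch of this scheme, simulated sequentially with NumPy (the shard split, the local least-squares SGD loop, and the final averaging are generic PSGD assumptions, not Petuum's actual code):

import numpy as np

def local_sgd(X, y, w0, lr=0.01, epochs=1):
    # One worker: run SGD for least squares on its own data shard,
    # starting from a local copy of ALL parameters.
    w = w0.copy()
    for _ in range(epochs):
        for i in range(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5*(x.w - y)^2
            w -= lr * grad
    return w

def parallel_sgd(X, y, n_workers=4, lr=0.01, epochs=1):
    # Zinkevich-style PSGD: split the data, run SGD on full local
    # parameter copies, then aggregate by averaging.
    w0 = np.zeros(X.shape[1])
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    local_ws = [local_sgd(Xs, ys, w0, lr, epochs) for Xs, ys in shards]
    return np.mean(local_ws, axis=0)      # aggregate: update ALL params

Communication happens only at the final average, which is what makes the scheme cheap, but also what limits it: the local copies drift apart between aggregations.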
19. Challenges in Data Parallelism
Existing ways are either safe/slow (BSP), or fast/risky (Async)
Need “Partial” synchronicity: Bounded Async Parallelism (BAP)
— Spread network comms evenly (don’t sync unless needed)
— Threads usually shouldn’t wait – but mustn’t drift too far apart!
Need straggler tolerance
— Slow threads must somehow catch up
— What makes bounded asynchrony safe? The error tolerance of iterative ML algorithms.
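One concrete realization of this partial synchronicity is a bounded-staleness clock in the style of SSP. The sketch below is a minimal illustration (class and method names are invented, not Petuum's Bösen API): a worker blocks only when it is more than `staleness` clock ticks ahead of the slowest worker.

import threading

class StalenessClock:
    # Bounded Async Parallelism: a worker may run ahead of the slowest
    # worker by at most `staleness` clock ticks before it must wait.
    def __init__(self, n_workers, staleness):
        self.clocks = [0] * n_workers
        self.staleness = staleness
        self.cond = threading.Condition()

    def tick(self, worker_id):
        # Called by a worker at the end of each iteration.
        with self.cond:
            self.clocks[worker_id] += 1
            self.cond.notify_all()
            # Threads usually don't wait, but must not drift too far
            # apart; waiting here also lets stragglers catch up.
            while self.clocks[worker_id] > min(self.clocks) + self.staleness:
                self.cond.wait()

With staleness = 0 this degenerates to BSP (safe/slow); with unbounded staleness it becomes fully asynchronous (fast/risky).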
20. Challenge 1 in Model Parallelism: Model Dependence
• Parallel updates are only effective if the variables are independent (or weakly correlated)
• The parameters to update in parallel must be carefully chosen
• Hence: dependency-aware scheduling (see the sketch after the references below)
On Model Parallelism and Scheduling Strategies for Distributed Machine Learning (NIPS’2014)
Parallel Coordinate Descent for L1-Regularized Loss Minimization (ICML’2011)
Feature Clustering for Accelerating Parallel Coordinate Descent (NIPS’2012)
Distributed GraphLab: A Framework for Machine Learning and Data Mining in the Cloud (PVLDB’2012)
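As a concrete illustration of dependency-aware selection, here is a simplified sketch in the spirit of the parallel coordinate descent papers above (the greedy rule and all names are a simplification, not the cited algorithms): pick a batch of coordinates whose feature columns are only weakly correlated, so they can be updated in parallel with little interference.

import numpy as np

def pick_weakly_correlated(X, batch_size, max_corr=0.1):
    # Greedily select coordinates whose feature columns have pairwise
    # |correlation| below max_corr; strongly coupled coordinates would
    # conflict if updated in parallel.
    corr = np.abs(np.corrcoef(X, rowvar=False))   # feature-feature correlations
    chosen = []
    for j in np.random.permutation(X.shape[1]):
        if all(corr[j, k] <= max_corr for k in chosen):
            chosen.append(j)
        if len(chosen) == batch_size:
            break
    return chosen    # update these coordinates in parallel this round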
23. Parameter Server
memcached
YahooLDA
— Best-effort
Petuum:
— Bösen
Parameter Server (OSDI’2014):
— flexible consistency models; customizable, the developer decides.
Desirable Consistency Model
1) Correctness of the distributed algorithm can be theoretically proven
2) Computing power of the system is fully utilized
24. Consistency Models
• Classic consistency in databases
• BSP (Bulk Synchronous Parallel; Valiant, 1990)
• Correct, but slow
• Hadoop [MapReduce], Spark [RDD]
• GraphLab (careful graph coloring or locking)
• Best-effort
• fast, but no theoretical guarantee
• YahooLDA
• Async
• Hogwild! (NIPS 2011)
• A lock-free approach to parallel SGD.
• Condition: the optimization problem is sparse, i.e., most gradient updates modify only a small part of the decision variable; correctness cannot be guaranteed otherwise.
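A toy version of the Hogwild! idea (the helper names and synthetic data are illustrative inventions; the point is the unsynchronized shared updates): several threads apply sparse SGD updates to one shared weight vector with no locks at all.

import threading
import numpy as np

def hogwild_worker(w, samples, lr=0.01):
    # Apply sparse SGD updates to the SHARED vector w with no locking.
    # When gradients are sparse, colliding writes are rare enough that
    # convergence still holds (the condition above).
    for x_sparse, y in samples:           # x_sparse: dict {index: value}
        pred = sum(w[j] * v for j, v in x_sparse.items())
        err = pred - y
        for j, v in x_sparse.items():     # touch only the nonzero coordinates
            w[j] -= lr * err * v          # unsynchronized read-modify-write

# Synthetic sparse data: each sample touches ~3 of 1000 coordinates.
rng = np.random.default_rng(0)
def make_shard(n=500):
    return [({int(j): 1.0 for j in rng.choice(1000, 3)}, rng.normal())
            for _ in range(n)]

w = np.zeros(1000)
threads = [threading.Thread(target=hogwild_worker, args=(w, make_shard()))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

In CPython the GIL serializes these threads, so this only illustrates the lock-free update pattern; real Hogwild! implementations use native threads.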
28. Analysis of High-Performance Distributed ML at Scale through Parameter Server Consistency Models (AAAI’2015)
Distributed Delayed Stochastic Optimization (NIPS’2011)
Slow Learners are Fast (NIPS’2009)
33. 3 Conceptual Actions
• Schedule specifies the next subset of model variables to be updated in parallel
• e.g., a fixed sequence or a random order
• Improved:
• 1. prioritize the fastest-converging variables, avoiding already-converged ones
• 2. avoid scheduling inter-dependent variables together.
• Push specifies how individual workers compute partial results on those
variables
• Pull specifies how those partial results are aggregated to perform the full
variable update.
• Sync: built-in primitives (BSP, SSP, AP)
Dynamic Structural Dependency
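A runnable skeleton of how the three actions compose (the function names mirror the slide's vocabulary, but the toy model and driver loop are a sketch, not the actual Petuum/STRADS API):

import numpy as np

class ToyModel:
    # Placeholder model: the parameters should converge to `target`.
    def __init__(self, dim):
        self.params = np.zeros(dim)
        self.target = np.arange(dim, dtype=float)
    def partial_update(self, j, shard_weight):
        # A worker's partial result for coordinate j on its data shard.
        return shard_weight * 0.5 * (self.target[j] - self.params[j])

def schedule(model, rnd, block=10):
    # Schedule: pick the next subset of variables to update in parallel
    # (naive fixed round-robin; improved versions would prioritize
    # fast-converging variables and avoid inter-dependent ones).
    start = (rnd * block) % len(model.params)
    return range(start, min(start + block, len(model.params)))

def push(model, var_ids, shard_weight):
    # Push: each worker computes partial results on those variables.
    return {j: model.partial_update(j, shard_weight) for j in var_ids}

def pull(model, partials):
    # Pull: aggregate the partial results into the full variable update.
    for partial in partials:
        for j, delta in partial.items():
            model.params[j] += delta

model = ToyModel(100)
shard_weights = [0.25] * 4                 # four equal data shards
for rnd in range(200):
    var_ids = schedule(model, rnd)
    partials = [push(model, var_ids, w) for w in shard_weights]  # parallel in reality
    pull(model, partials)
    # a sync primitive (BSP / SSP / AP) would be invoked here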
37. Word Rotation Scheduling
• 1. Partition the V dictionary words into U disjoint subsets V1, . . . , VU (where U is the number of workers).
• 2. Subsequent invocations of schedule “rotate” the subsets amongst workers, so that every worker touches all U subsets every U invocations.
• For data partitioning, we divide the document tokens W evenly across workers, and denote worker p’s set of tokens by Wp.
• Every topic indicator zij will be sampled exactly once per U invocations of schedule.
38. [Diagram: Workers 1…N each keep their own documents (“data not moved”); dictionary word subsets V1, …, VU rotate among the workers between invocations of schedule.]
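The rotation itself is just index arithmetic. A minimal sketch (names are illustrative; the Gibbs sampler that consumes the assignment is omitted):

def rotation_schedule(num_workers, invocation):
    # Worker p is assigned word subset V[(p + t) mod U] at invocation t,
    # so subsets are always disjoint across workers, and every worker
    # touches all U subsets once per U invocations.
    return [(p, (p + invocation) % num_workers) for p in range(num_workers)]

for t in range(3):                      # with U = 3 workers
    print(t, rotation_schedule(3, t))
# 0 [(0, 0), (1, 1), (2, 2)]
# 1 [(0, 1), (1, 2), (2, 0)]
# 2 [(0, 2), (1, 0), (2, 1)]

Because the assignment at each invocation is a permutation, no two workers ever hold the same words, which is what keeps the parallel samplers from conflicting.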
43. Priority-Based Schedule
1. Rewrite the objective function by duplicating the original features with opposite sign; X then contains 2J features, and all coefficients are constrained to be non-negative.
2. Schedule coordinates for update with priority proportional to how much they changed recently, so fast-changing coordinates are updated more often.
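A sketch of the priority rule, assuming STRADS-style sampling in which a coordinate's scheduling probability is proportional to its recent squared change (a paraphrase of the NIPS'2014 scheduling paper cited earlier, not its exact code):

import numpy as np

def priority_schedule(recent_delta, batch_size, eps=1e-6):
    # Sample coordinates with probability proportional to their recent
    # squared change: fast-moving (far-from-converged) coordinates are
    # scheduled often, near-converged ones rarely but not never.
    prio = recent_delta ** 2 + eps      # eps keeps every coordinate reachable
    prob = prio / prio.sum()
    return np.random.choice(len(prio), size=batch_size, replace=False, p=prob)

# recent_delta[j] would hold |beta_j(new) - beta_j(old)| from the last
# update of coordinate j.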
44. This works for latent space models, but does not apply to all ML models:
1. Graphical models and deep networks can have arbitrary structure between parameters and variables;
2. Problems on time-series data have sequential or autoregressive dependencies between data points.