2. • How to learn?
• What to learn?
• Defining learning objectives
• How to scale learning?
• Gotchas
• VisageCloud
–Architecture
–Use Cases
Agenda
3. • What questions to ask before writing the code?
• How to look at the data before feeding it to the
machine?
• What is the state of the art regarding ML?
• What frameworks to use?
• What are the common traps to avoid?
• How to design for scale?
Objectives
5. Vision
• Convolutional Neural Networks
• Inception Paper
NLP
• Word2Vec
• GloVe: Global Vectors for Word Representation
Generic
• Classification
• Prediction
How to Learn?
8. • Learning is an optimization problem
–Find parameters of a system (neural network) that
minimize a fixed error function
–Not unlike planning orbital paths
• Defining the network architecture
• Defining the training algorithm
–Stochastic Gradient Descent
• With momentum
• With noise
Taking a Step Back: The Math
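As an illustration of the slide above, here is a minimal Python sketch (not from the talk) of gradient descent with momentum and optional noisy gradients on a toy error function; the learning rate, step count, and toy function are assumptions.

import numpy as np

def sgd_momentum(grad, w0, lr=0.1, momentum=0.9, noise_std=0.0, steps=200):
    # Find parameters w that minimize an error function, given its gradient.
    w = np.array(w0, dtype=float)
    velocity = np.zeros_like(w)
    rng = np.random.default_rng(0)
    for _ in range(steps):
        g = grad(w)
        if noise_std > 0:
            # Noisy gradients can help the optimizer escape shallow local optima.
            g = g + rng.normal(0.0, noise_std, size=w.shape)
        velocity = momentum * velocity - lr * g
        w = w + velocity
    return w

# Toy error function E(w) = ||w - 3||^2 with gradient 2 * (w - 3).
print(sgd_momentum(lambda w: 2 * (w - 3.0), w0=[0.0, 0.0]))  # converges near [3. 3.]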
9. • DeepLearning4j
– Independent company
– Java interface with C-bindings for performance
• TensorFlow
– Python & C++ API
– Developed by Google
– Compatible with TPU
• Torch
– Developed by Facebook
– Written in LuaJIT, with Python bindings
Frameworks
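For flavor, a minimal TensorFlow sketch of defining and compiling a small network; this assumes the Keras API bundled with TensorFlow 2.x, and the layer sizes are placeholders.

import tensorflow as tf

# A tiny dense classifier: architecture definition plus training algorithm (SGD).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(128,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10)  # x_train / y_train are placeholders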
11. • Public data sets
–Labeled Faces in the Wild (LFW)
–YouTube Faces
–Kaggle
• Private data sets
• Build your own
–Outsourcing: Mechanical Turk
–Crowdsourcing: the reCAPTCHA model
Data Sets
13. • Machine learning is not magic
• If you can’t understand the data, a machine probably
won’t either
• Preprocessing makes the difference between good and poor results
• Applying filters, normalization, and anomaly detection is
computationally inexpensive
Preparing Data
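A hypothetical preprocessing sketch in Python/numpy: normalization plus a crude anomaly filter; the z-score threshold is an arbitrary assumption, not something prescribed by the talk.

import numpy as np

def preprocess(samples, z_threshold=4.0):
    x = np.asarray(samples, dtype=float)
    mean, std = x.mean(axis=0), x.std(axis=0) + 1e-8
    z = (x - mean) / std                          # normalize: zero mean, unit variance
    keep = (np.abs(z) < z_threshold).all(axis=1)  # drop gross outliers (anomaly detection)
    return z[keep]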
16. • Projecting input onto a fixed set of classes
• “Don’t use a cannon to kill a fly”
–Support Vector Machines
• Linear
• Radial Basis Functions
Classification
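A short sketch of the two SVM flavors named above, using scikit-learn (an assumption; the slides do not name a library) on synthetic data.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):   # linear vs. radial basis function kernel
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    print(kernel, clf.score(X_test, y_test))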
17. • Embedding
–Projecting input (image) onto a vector space with a
known property
• Triplet Loss Function
Identification
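A minimal numpy sketch of the triplet loss idea: the anchor embedding should end up closer to a positive (same identity) than to a negative, by some margin; the margin value here is an assumption.

import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared Euclidean distances in the embedding space.
    pos_dist = np.sum((anchor - positive) ** 2)
    neg_dist = np.sum((anchor - negative) ** 2)
    # Zero loss once the positive is closer than the negative by at least `margin`.
    return max(pos_dist - neg_dist + margin, 0.0)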
18. • Splitting a set of items into non-overlapping subsets,
based on item attributes
• Counting people in video streams
• Algorithms:
–Fixed threshold
–K-means
–Rank-order clustering
Clustering
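As a sketch of the k-means option (scikit-learn assumed; the embeddings are random placeholders standing in for real face vectors):

import numpy as np
from sklearn.cluster import KMeans

embeddings = np.random.rand(200, 128)                 # placeholder face embeddings
labels = KMeans(n_clusters=5, random_state=0).fit_predict(embeddings)
print(np.bincount(labels))                            # cluster sizes, e.g. per person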
20. • Scaling training
– Requires shared memory space
– Vertical scaling
• GPU
• Soon-to-come: TPU (tensor processing unit)
• Scaling evaluation
– Shared nothing architecture
– Neural network/classifier rarely change
– Load balancing pattern
– Partitioning data if needed
How to scale learning?
21. • There is no “reduce” for neural networks
• Averaging weights/parameters
– Usually not a good idea
• Genetic algorithms
– Requires a lot of processing power
– Running independent iterations on different machines
– Crossover between weights/parameters of independently
trained neural networks after each epoch
Ideas for horizontal scaling
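A hypothetical numpy sketch of the crossover idea above: after each epoch, mix the flattened parameter vectors of two independently trained networks (plain averaging shown for contrast, since the slide notes it is usually a poor choice).

import numpy as np

def crossover(weights_a, weights_b, seed=0):
    # Uniform crossover: each parameter is inherited from one of the two parents.
    rng = np.random.default_rng(seed)
    mask = rng.random(weights_a.shape) < 0.5
    return np.where(mask, weights_a, weights_b)

def average(weights_a, weights_b):
    # Plain parameter averaging; usually not a good idea, per the slide above.
    return (weights_a + weights_b) / 2.0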
23. • Our 2D and 3D intuition often fails in high dimensions
• Distances tend to become relatively “the same” as the
number of dimensions increases
• Dimensionality reduction
– Embedding functions
– Principal component analysis
The Curse of Dimensionality
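A short sketch of principal component analysis with scikit-learn (the library and the dimensions are assumptions):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(1000, 512)                    # placeholder high-dimensional data
X_reduced = PCA(n_components=32).fit_transform(X)
print(X.shape, "->", X_reduced.shape)            # (1000, 512) -> (1000, 32)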
24. • “The bottom of a valley is not necessarily the lowest
point on Earth”
• Learning algorithms may get stuck in local optima
• Using momentum or some random noise reduces
this possibility
• Using genetic algorithms can be even more robust,
but it’s computationally expensive
Local Optima
26. “Based on state-of-the-art machine learning, our
weather forecast system can predict tomorrow’s
weather with 72% accuracy”
Evaluation of Learning
You get the same results by saying “it’s going to be the same as today”
27. • Don’t test on the data you train on
– Use different data set
– Split the data sets you have
• Beware of data biases
– Confirmation bias
– Survivorship bias
– Selection bias
• Compare against a benchmark, even a dummy one
– Coin flip
– Linear algorithms
– “Same-as-before”
Evaluation of Learning
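A sketch of the two rules above, with scikit-learn as an assumed stand-in: split the data before evaluating, and compare against a dummy benchmark.

from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
model = SVC().fit(X_train, y_train)
print("baseline:", baseline.score(X_test, y_test))   # the "same-as-before" style benchmark
print("model:   ", model.score(X_test, y_test))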
31. • The collection
–Slice of data used together
–10K-100K records
• The Cache-Inside Pattern
–Loading / preloading collection in one application server
–Content based routing/balancing to maximize cache hits
–No logic in the database layer
–Requires periodic polling for updates
• Weaker consistency
Partitioning Data: Application Level Logic
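A hypothetical sketch of content-based routing: hash the collection identifier so requests for the same collection always land on the same application server, maximizing cache hits (server names are placeholders).

import hashlib

SERVERS = ["app-1", "app-2", "app-3"]        # placeholder application servers

def route(collection_id: str) -> str:
    # A stable hash of the collection id picks the server that caches that collection.
    digest = hashlib.md5(collection_id.encode("utf-8")).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

print(route("collection-42"))                # always the same server for this collection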
33. • Perform comparison logic in database
–User Defined Aggregate Functions
• Removes the need to move data around between
application and database
• Harder to deploy/test
• Stronger consistency
Partitioning Data: Database Level Logic
34. • It’s math, not magic
• If you don’t understand the data, neither will the
machine
• Preprocessing makes the difference
• Test against a benchmark, any benchmark
• Evaluate first, scale later
Key Take-away