1. Deep Learning
Er. Shiva K. Shrestha, ME Computer, NCIT
January 5, 2017
2. Slide Credit
o Jeff Dean, Google, Large Scale Deep Learning
o Andrew Ng, Deep Learning
o Aditya Khosla & Joseph Lim, Visual Recognition through ML Competition
3. Structure
◦ General Questions of the World
◦ What is Deep Learning?
◦ Why Deep Learning?
◦ Deep Neural Network Architectures
◦ Deep Learning Applications
◦ Conclusions, Recommendations
4. How Can We Build More Intelligent Computer Systems?
According to Jeff Dean, Google:
o Need to perceive and understand the world
o Basic speech and vision capabilities
o Language understanding
o User behavior prediction
o …
5. How can we do this?
According to Jeff Dean, Google:
o Cannot write algorithms for each task we want to accomplish separately.
o Need to write general algorithms that learn from observations
o Can we build systems that:
o Generate understanding from raw data
o Solve difficult problems to improve products
o Minimize software engineering effort
6. Plenty of Data
o Text: trillions of words of English + other languages
o Visual: billions of images and videos
o Audio: thousands of hours of speech per day
o User Activity: queries, result page clicks, map requests, etc.
o Knowledge Graph: billions of labelled relation triples
o …
11. Textual Understanding
“This movie should have NEVER been made. From the poorly done animation, to the beyond bad acting. I am not sure at what point the people behind this movie said "Ok, looks good! Lets do it!" I was in awe of how truly horrid this movie was.”
(The goal: a system should read this review and recognize it as strongly negative.)
12. General Machine Learning Approaches
o Learning by labeled example: Supervised Learning
o e.g. an email spam detector (see the sketch after this list)
o amazingly effective if you have lots of examples
o Discovering patterns: Unsupervised Learning
o e.g. data clustering
o difficult in practice, but useful if you lack labeled examples
o Learning from right/wrong feedback: Reinforcement Learning
o e.g. learning to play chess by winning or losing
o works well in some domains, becoming more important
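To make the supervised case concrete, here is a minimal sketch of a spam detector trained on labeled examples using scikit-learn; the tiny dataset is invented purely for illustration:

```python
# Minimal supervised-learning sketch: a toy spam detector.
# The dataset below is made up for the example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "win a free prize now",        # spam
    "meeting rescheduled to 3pm",  # not spam
    "claim your free reward",      # spam
    "lunch tomorrow?",             # not spam
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# Turn raw text into bag-of-words feature vectors.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Fit a linear classifier on the labeled examples.
clf = LogisticRegression()
clf.fit(X, labels)

# Predict on a new, unseen email; likely classified as spam (1)
# given the training words it shares with the spam examples.
print(clf.predict(vectorizer.transform(["free prize waiting"])))
```

With many labeled examples, this same fit/predict pattern is what makes supervised learning "amazingly effective."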
13. Machine Learning
o For many of these problems, we have lots of data
o Machine learning gives computers the ability to learn without being explicitly programmed
Approaches:
o Decision tree learning
o Association rule learning
o Artificial neural networks
o Deep learning
o Inductive logic programming
o Support vector machines
o Clustering
o Bayesian networks
o Reinforcement learning
o Representation learning
o Similarity and metric learning
o Sparse dictionary learning
o Genetic algorithms
o Rule-based machine learning
o Learning classifier systems
14. Typical Goal of Machine Learning
[Diagram: input → ML → output]
o images/video → ML → label “Motorcycle”, suggest tags, image search, …
o audio → ML → speech recognition, music classification, speaker identification, …
o text → ML → web search, anti-spam, machine translation, …
15. Basic Idea of Deep Learning
Is there some way to extract meaningful features from data even without knowing the task to be performed?
Then, throw in some hierarchical ‘stuff’ to make it ‘deep’.
16. What is Deep Learning?
o The modern reincarnation of ANNs from the 1980s and 90s.
o A collection of simple trainable mathematical units, which collaborate to compute a complicated function (sketched below).
o Compatible with all three general ML approaches (supervised, unsupervised, and reinforcement learning).
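To make "simple trainable units that collaborate" concrete, here is a minimal numpy sketch; the layer sizes and the ReLU nonlinearity are illustrative choices, not prescribed by the slides:

```python
import numpy as np

def unit_layer(x, W, b):
    """A layer of simple units: each computes a weighted sum of its
    inputs plus a bias, then applies a nonlinearity (ReLU here)."""
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
# Three stacked layers; the weights W and biases b are the trainable parts.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(2, 8)), np.zeros(2)

x = rng.normal(size=4)  # a 4-dimensional input
y = unit_layer(unit_layer(unit_layer(x, W1, b1), W2, b2), W3, b3)
print(y)  # the composed "complicated function" of x
```

Training consists of adjusting the W and b parameters so that this composed function produces the desired outputs.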
17. What is Deep Learning? (2)
o Loosely inspired by what (little) we know about the biological brain.
o AKA:
o Deep Structure Learning
o Hierarchical Learning
o Deep Machine Learning
18. Deep Learning Definitions
Deep learning is characterized as a class of machine learning algorithms that
o use a cascade of many layers of nonlinear processing units for feature extraction and transformation.
o are based on the learning of multiple levels of features or representations of the data.
o are part of the broader machine learning field of learning representations of data.
o learn multiple levels of representations that correspond to different levels of abstraction.
19. DL - Why is this hard?
You see this: [photograph of an object]
But the camera sees this: [matrix of raw pixel intensity values]
25. Some Feature Representations (2)
Examples: SIFT, HoG, Textons, Spin image, RIFT, GLOH.
Coming up with features is often difficult, time-consuming, and requires expert knowledge.
26. The Brain: Potential Motivation for Deep Learning
Auditory cortex learns to see! [Roe et al., 1992]
27. The Brain adapts!
[BrainPort; Welsh & Blasch, 1997; Nagel et al., 2005; Constantine-Paton & Law, 2009]
o Seeing with your tongue
o Human echolocation (sonar)
o Haptic belt: direction sense
o Implanting a 3rd eye
28. Feature Learning Problem
Given a 14x14 image patch x, we can represent it using 196 real numbers: its raw pixel values, e.g. [255, 98, 93, 87, 89, 91, 48, …].
Problem: Can we learn a better feature vector to represent this? (See the sketch below.)
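A small numpy sketch of the raw representation described above; the pixel values here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 14x14 grayscale patch: raw pixel intensities in [0, 255] (random here).
patch = rng.integers(0, 256, size=(14, 14))

# The naive representation: flatten to a vector of 196 real numbers.
x = patch.reshape(-1).astype(np.float64)
print(x.shape)  # (196,)

# Feature learning asks for a mapping f such that f(x) is a better
# (e.g. more compact, more meaningful) representation than x itself.
```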
29. Why Deep Learning?
Task: Video activity recognition

Method                                                      Accuracy
Hessian + ESURF [Willems et al. 2008]                       38%
Harris3D + HOG/HOF [Laptev et al. 2003, 2004]               45%
Cuboids + HOG/HOF [Dollar et al. 2005, Laptev 2004]         46%
Hessian + HOG/HOF [Laptev 2004, Willems et al. 2008]        46%
Dense + HOG/HOF [Laptev 2004]                               47%
Cuboids + HOG3D [Klaser 2008, Dollar et al. 2005]           46%
Unsupervised Feature Learning (DL) [Le, Zhou & Ng, 2011]    52%
30. Deep Neural Network Architectures
o GMDH: the first deep learning network (1965)
o Convolutional NN
o Neural history compressor
o Recursive NN
o Long short-term memory (LSTM)
o Deep belief networks (DBN)
o Convolutional deep belief networks
o Large memory storage & retrieval NN
o Deep Boltzmann machines
o Stacked (de-noising) auto-encoders
o Deep stacking networks
o Tensor deep stacking networks
o Spike-and-slab RBMs
o Compound hierarchical-deep models
o Deep coding networks
o Deep Q-networks
o Networks with separate memory structures
32. Unsupervised Feature Learning with a NN
[Network diagram: inputs x1–x6 plus a bias unit (+1) feed hidden layer a1–a3 (+1), then b1–b3 (+1), then c1–c3.]
New representation for input: use [c1, c2, c3] as the representation to feed to a learning algorithm.
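A minimal numpy sketch of this forward pass, with random weights standing in for learned ones and sigmoid as an assumed activation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Random weights stand in for learned ones; the bias vectors play the
# role of the "+1" units in the diagram.
W1, b1 = rng.normal(size=(3, 6)), np.zeros(3)  # [x1..x6] -> [a1..a3]
W2, b2 = rng.normal(size=(3, 3)), np.zeros(3)  # [a1..a3] -> [b1..b3]
W3, b3 = rng.normal(size=(3, 3)), np.zeros(3)  # [b1..b3] -> [c1..c3]

x = rng.normal(size=6)
a = sigmoid(W1 @ x + b1)
b = sigmoid(W2 @ a + b2)
c = sigmoid(W3 @ b + b3)
print(c)  # [c1, c2, c3]: the representation fed to a learning algorithm
```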
33. Deep Belief Network
A DBN is an algorithm for learning a feature hierarchy.
Building block: a 2-layer graphical model (Restricted Boltzmann Machine).
Additional layers can then be learned one at a time.
[Figure: schematic overview of a deep belief net.]
34. Deep Belief Network (2)
Input: [x1, x2, x3, x4] → Layer 2: [a1, a2, a3] → Layer 3: [b1, b2, b3]
Similar to a sparse auto-encoder in many ways.
Stack RBMs on top of each other to get a DBN (sketched below).
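A hedged numpy sketch of greedy layer-wise DBN training, using CD-1 (one-step contrastive divergence) and omitting bias terms for brevity; all hyperparameters and data are arbitrary:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden, epochs=50, lr=0.1, seed=0):
    """Train one RBM with CD-1 (one Gibbs step); biases omitted for brevity."""
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.normal(size=(data.shape[1], n_hidden))
    for _ in range(epochs):
        v0 = data
        h0 = sigmoid(v0 @ W)                           # hidden probs given visible
        h_sample = (rng.random(h0.shape) < h0) * 1.0   # sample hidden states
        v1 = sigmoid(h_sample @ W.T)                   # reconstruct visible
        h1 = sigmoid(v1 @ W)                           # hidden probs given reconstruction
        W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)  # CD-1 update
    return W

# Greedy layer-wise stacking: train the first RBM on the data, push the
# data through it, then train the next RBM on those activations.
rng = np.random.default_rng(1)
X = (rng.random((100, 16)) < 0.5) * 1.0  # toy binary data, 100 samples
W1 = train_rbm(X, n_hidden=8)
H1 = sigmoid(X @ W1)                     # layer-1 features
W2 = train_rbm(H1, n_hidden=4)
H2 = sigmoid(H1 @ W2)                    # top-level DBN features
print(H2.shape)                          # (100, 4)
```

Each RBM is trained on the activations of the layer below it, which is what "stacking" means here.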
35. Convolutional DBN for Audio
[Figure: spectrogram input → detection units → max pooling units]
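A toy numpy sketch of the convolution and max-pooling stages the figure depicts (not a full convolutional DBN); the spectrogram size, filter width, and pooling window are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "spectrogram": 40 frequency bins x 100 time frames (sizes invented).
spec = rng.random((40, 100))

# One detection unit: a filter spanning all frequency bins, slid along
# the time axis (a valid 1-D convolution over time).
filt = rng.normal(size=(40, 8))
w = filt.shape[1]
n_out = spec.shape[1] - w + 1
detections = np.array([np.sum(spec[:, t:t + w] * filt) for t in range(n_out)])

# Max pooling: keep only the strongest response in each window of 4 time
# steps, giving invariance to small shifts in time.
pooled = detections[: n_out - n_out % 4].reshape(-1, 4).max(axis=1)
print(detections.shape, pooled.shape)  # (93,) (23,)
```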
39. Applications
o Computer Vision: Object Detection & Recognition
o Speech Recognition
o Speaker Identification
o Web Searches
o Text Classification - Sentiment Analysis
o Translations
o Miscellaneous
o Fine-grained Classification
o Generalization
o Generating Image Captions from Pixels
o …
44. Translation
o Google Translate:
o As Reuters noted for the first time in July, the seating configuration is exactly what fuels the battle between the latest devices.
o Neural LSTM Model:
o As Reuters reported for the first time in July, the configuration of seats is exactly what drives the battle between the latest aircraft.
o Human Translation:
o As Reuters first reported in July, seat layout is exactly what drives the battle between the latest jets.
52. Conclusion
Deep Neural Networks are very effective for a wide range of tasks:
o By using parallelism, we can quickly train very large and effective deep neural models on very large datasets
o Automatically build high-level representations to solve desired tasks
o By using embeddings, they can work with sparse data
o Effective in many domains: speech, vision, language modeling, user prediction, language understanding, translation, advertising, …
An important tool in building Intelligent Systems!
54. Recommendations
o Le, Ranzato, Monga, Devin, Chen, Corrado, Dean & Ng. Building High-Level Features Using Large Scale Unsupervised Learning, ICML 2012.
o Dean, Corrado, et al. Large Scale Distributed Deep Networks, NIPS 2012.
o Mikolov, Chen, Corrado & Dean. Efficient Estimation of Word Representations in Vector Space, http://arxiv.org/abs/1301.3781.
o Le & Mikolov. Distributed Representations of Sentences and Documents, ICML 2014, http://arxiv.org/abs/1405.4053.
o Vanhoucke, Devin & Heigold. Deep Neural Networks for Acoustic Modeling, ICASSP 2013.
o Sutskever, Vinyals & Le. Sequence to Sequence Learning with Neural Networks, NIPS 2014, http://arxiv.org/abs/1409.3215.
o http://research.google.com/papers
o http://research.google.com/people/jeff