2. Contents
1. Review of Deep learning (Convolutional Neural Network)
2. Residual network (ResNet)
He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
3. Densely connected convolutional network (DenseNet)
Huang, Gao, et al. "Densely connected convolutional networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
3. Structure of Neural Networks
A simple model to emulate a single neuron
This model produces a binary output
output = 0 if Σⱼ ωⱼ xⱼ ≤ T
output = 1 if Σⱼ ωⱼ xⱼ > T
[Figure: inputs xⱼ, weighted by ω₁, ω₂, ω₃, feed the sum Σⱼ ωⱼ xⱼ, which is compared against the threshold T; the perceptron (1950s) alongside a biological neuron]
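The threshold rule above can be sketched in a few lines; the weights and threshold below are made-up values chosen only to illustrate the rule:

```python
import numpy as np

def perceptron(x, w, T):
    # Fires (outputs 1) only when the weighted sum of inputs exceeds T.
    return 1 if np.dot(w, x) > T else 0

# Hypothetical weights and threshold, for illustration only.
w = np.array([0.5, 0.5, 0.5])
print(perceptron(np.array([1, 1, 0]), w, T=0.9))  # weighted sum 1.0 > 0.9 -> 1
print(perceptron(np.array([1, 0, 0]), w, T=0.9))  # weighted sum 0.5 <= 0.9 -> 0
```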
4. Review of Deep learning
Multilayer Perceptron (MLP)
A network model consisting of layers of perceptrons
This model produces vectorized outputs
5. Multilayer Perceptron (MLP)
Handwritten digit: a 28 by 28 pixel image
Binary input (intensity of a pixel): 28 × 28 = 784 input units
Desired output for “5”:
y(x) = (0, 0, 0, 0, 0, 1, 0, 0, 0, 0)ᵀ
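The desired output vector above is a one-hot encoding of the digit; a minimal sketch (the helper name `one_hot` is my own):

```python
import numpy as np

def one_hot(digit, n_classes=10):
    # Desired output vector: 1 at the digit's index, 0 everywhere else.
    y = np.zeros(n_classes)
    y[digit] = 1.0
    return y

print(one_hot(5))  # [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
```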
6. Convolutional Neural Network
Convolution layer
Subsampling (Pooling) layer
Rectified Linear Unit (ReLU)
[Figure: the convolution/pooling stages act as the feature extractor; the final layers act as the classifier]
7. Review of Deep learning
Local receptive field (connectivity)
2D convolution with a 5 by 5 kernel (window) maps the 28 by 28 input to a 24 by 24 feature map
1. Detect local information (features), e.g., edges, shapes
2. Reduce connections between layers
• Fully connected network → 28 ∗ 28 ∗ 24 ∗ 24 connections
• Locally connected network → 5 ∗ 5 ∗ 24 ∗ 24 connections
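The two connection counts above can be checked directly (assuming a 5 by 5 kernel on a 28 by 28 input producing a 24 by 24 feature map):

```python
# Every input pixel feeds every output unit in a fully connected layer;
# in a locally connected layer each output unit sees only a 5x5 patch.
fully_connected = 28 * 28 * 24 * 24
local = 5 * 5 * 24 * 24
print(fully_connected, local)  # 451584 14400
```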
8. Review of Deep learning
Shared weights
1. Detect the same feature at other positions
2. Reduce the total number of weights and biases
3. Construct multiple feature maps (kernels)
output = σ(b + Σ_{l=0..4} Σ_{m=0..4} ω_{l,m} · a_{j+l, k+m})
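The shared-weight formula above can be sketched as a plain loop; this is a minimal numpy illustration (ReLU is my own choice for σ here), not an efficient implementation:

```python
import numpy as np

def conv_feature_map(a, w, b=0.0):
    # Valid 2D convolution with one shared kernel, following the slide's
    # formula: output[j, k] = sigma(b + sum_{l,m} w[l, m] * a[j+l, k+m]).
    kh, kw = w.shape
    H, W = a.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for j in range(out.shape[0]):
        for k in range(out.shape[1]):
            out[j, k] = b + np.sum(w * a[j:j + kh, k:k + kw])
    return np.maximum(out, 0.0)  # sigma = ReLU, as an illustrative activation

fmap = conv_feature_map(np.random.rand(28, 28), np.random.rand(5, 5))
print(fmap.shape)  # (24, 24)
```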
9. Review of Deep learning
Pooling layer
1. Simplify (condense) information in the feature map
2. Reduce connections (weights and biases)
Max-pooling:
Output only the maximum activation in each window
[Figure: a convolution layer followed by a pooling layer]
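Max-pooling as described above can be sketched in one reshape (a minimal version that assumes the input dimensions are divisible by the window size):

```python
import numpy as np

def max_pool(x, size=2):
    # Keep only the largest activation in each size x size window.
    H, W = x.shape
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

print(max_pool(np.arange(16).reshape(4, 4)))  # [[ 5  7], [13 15]]
print(max_pool(np.zeros((24, 24))).shape)     # (12, 12)
```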
18. Residual network (ResNet)
Shortcut options:
A: zero-padding shortcuts
B: projection shortcuts for increasing dimensions; all other shortcuts are identity
C: all shortcuts are projections
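A minimal sketch of the shortcut idea (not the paper's trained network): a residual block computes y = F(x) + shortcut(x). Option A zero-pads the identity when the dimension grows; options B and C would use a learned projection instead. The function names and shapes here are my own illustrative choices:

```python
import numpy as np

def zero_pad_shortcut(x, out_dim):
    # Option A: identity shortcut, zero-padded up to the larger dimension.
    return np.concatenate([x, np.zeros(out_dim - x.shape[0])])

def residual_block(x, W, out_dim):
    fx = np.maximum(W @ x, 0.0)  # F(x): one weight layer + ReLU, for brevity
    return fx + zero_pad_shortcut(x, out_dim)

x = np.ones(4)
W = np.zeros((6, 4))  # degenerate F(x) = 0, so only the shortcut remains
print(residual_block(x, W, 6))  # [1. 1. 1. 1. 0. 0.]
```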
19. Dense block
Densely connected convolutional network (DenseNet)
Short paths from early layers to later layers
Connect all layers (with matching feature-map size) directly
Combine features by concatenating them
∴ An L-layer block has L(L + 1)/2 connections
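Dense connectivity can be sketched as follows: each layer receives the concatenation of every earlier feature map. The per-layer transform below is a hypothetical toy function (it just sums its input) to keep the example self-contained:

```python
import numpy as np

def dense_block(x, n_layers):
    features = [x]
    for _ in range(n_layers):
        inputs = np.concatenate(features)          # all previous outputs
        features.append(np.array([inputs.sum()]))  # stand-in for a conv layer
    return np.concatenate(features)                # combine by concatenation

print(dense_block(np.array([1.0]), 3))  # [1. 1. 2. 4.]

# An L-layer block has L(L + 1)/2 direct connections:
L = 4
print(L * (L + 1) // 2)  # 10
```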
24. References
Image source: https://deeplearning4j.org/convolutionalnets
Zeiler, Matthew D., and Rob Fergus. "Visualizing and understanding convolutional networks." European Conference on Computer Vision, Springer International Publishing, 2014.
Jia-Bin Huang, “Lecture 29 Convolutional Neural Networks”,
Computer Vision Spring 2015
He, Kaiming, et al. "Deep residual learning for image
recognition." Proceedings of the IEEE conference on computer vision
and pattern recognition. 2016.
Huang, Gao, et al. "Densely connected convolutional
networks." Proceedings of the IEEE conference on computer vision
and pattern recognition. 2017.