Convolutional Neural Networks

Ashray Bhandare (Data Science and Machine Learning Enthusiast with interest in Convolutional Neural Networks and Bio-Inspired Algorithms, The University of Toledo)
Anil Sehgal
Page  2
Agenda
 Introduction
 Architecture Overview
– How ConvNet Works
 ConvNet Layers
– Convolutional Layer
– Pooling Layer
– Normalization Layer (ReLU)
– Fully-Connected Layer
 Demos & Parameters
– Hyperparameters
– MNIST dataset on AWS
– CIFAR-10 ConvNetJS
 Case Studies
EECS6980:006 Social Network Analysis
Page  3
Agenda
 Introduction
 Architecture Overview
– How ConvNet Works
 ConvNet Layers
– Convolutional Layer
– Pooling Layer
– Normalization Layer (ReLU)
– Fully-Connected Layer
 Demos & Parameters
– Hyperparameters
– MNIST dataset on AWS
– CIFAR-10 ConvNetJS
 Case Studies
EECS6980:006 Social Network Analysis
Page  4
Introduction
 A convolutional neural network (or ConvNet) is a type of feed-forward artificial neural network.
 The architecture of a ConvNet is designed to take advantage of the 2D structure of an input image.
 A ConvNet is composed of one or more convolutional layers (often with a pooling step), followed by
one or more fully connected layers as in a standard multilayer neural network.
EECS6980:006 Social Network Analysis
VS
Page  5
Agenda
 Introduction
 Architecture Overview
– How ConvNet Works
 ConvNet Layers
– Convolutional Layer
– Pooling Layer
– Normalization Layer (ReLU)
– Fully-Connected Layer
 Demos & Parameters
– Hyperparameters
– MNIST dataset on AWS
– CIFAR-10 ConvNetJS
 Case Studies
EECS6980:006 Social Network Analysis
Page  6
Motivation behind ConvNets
 Consider an image of size 200x200x3 (200 wide, 200 high, 3 color channels)
– a single fully-connected neuron in a first hidden layer of a regular Neural Network would have 200*200*3
= 120,000 weights.
– Due to the presence of several such neurons, this full connectivity is wasteful and the huge number of
parameters would quickly lead to overfitting
 However, in a ConvNet, the neurons in a layer will only be connected to a small region of the layer
before it, instead of all of the neurons in a fully-connected manner.
– the final output layer would have dimensions 1x1xN, because by the end of the ConvNet architecture we will
reduce the full image into a single vector of class scores (for N classes), arranged along the depth
dimension
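To make the parameter comparison above concrete, here is a tiny Python sketch (an illustration added for this write-up, not part of the original slides) counting the weights of a single fully-connected neuron on a 200x200x3 image versus a single 3x3 convolutional filter spanning all 3 channels, whose weights are shared across every spatial position:

```python
# Weights of one fully-connected neuron looking at a whole 200x200x3 image
image_w, image_h, channels = 200, 200, 3
fc_weights_per_neuron = image_w * image_h * channels
print(fc_weights_per_neuron)  # 120000 weights for a single neuron

# Weights of one 3x3 convolutional filter that spans all 3 channels.
# The same 27 weights are reused at every spatial position (weight sharing),
# so the count does not grow with the image size.
filter_w, filter_h = 3, 3
conv_weights_per_filter = filter_w * filter_h * channels
print(conv_weights_per_filter)  # 27 weights, regardless of image size
```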
EECS6980:006 Social Network Analysis
Page  7
MLP VS ConvNet
EECS6980:006 Social Network Analysis
Input
Hidden
Output
Input
Hidden
Output
Multilayered
Perceptron:
All Fully
Connected
Layers
Convolutional
Neural Network
With Partially
Connected
Convolution Layer
Page  8
MLP vs ConvNet
A regular 3-layer Neural
Network.
A ConvNet arranges its
neurons in three dimensions
(width, height, depth), as
visualized in one of the
layers.
EECS6980:006 Social Network Analysis
Page  9
Agenda
 Introduction
 Architecture Overview
– How ConvNet Works
 ConvNet Layers
– Convolutional Layer
– Pooling Layer
– Normalization Layer (ReLU)
– Fully-Connected Layer
 Demos & Parameters
– Hyperparameters
– MNIST dataset on AWS
– CIFAR-10 ConvNetJS
 Case Studies
EECS6980:006 Social Network Analysis
Page  10
How ConvNet Works
 For example, a ConvNet can take an image as input and classify it as ‘X’ or ‘O’
 In a simple case, ‘X’ would look like:
[Figure: a two-dimensional array of pixels is fed into the CNN, which classifies it as ‘X’ or ‘O’]
Page  11
How ConvNet Works
 What about trickier cases?
[Figure: the CNN classifying trickier versions of the ‘X’ and ‘O’ images]
Page  12
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 1 -1 -1 -1
-1 -1 1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 1 -1 -1
-1 -1 -1 1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
=
?
How ConvNet Works – What Computer Sees
Page  13
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 1 -1 -1 -1
-1 -1 1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 1 -1 -1
-1 -1 -1 1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
=x
How ConvNet Works
Page  14
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 X -1 -1 -1 -1 X X -1
-1 X X -1 -1 X X -1 -1
-1 -1 X 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 X -1 -1
-1 -1 X X -1 -1 X X -1
-1 X X -1 -1 -1 -1 X -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
How ConvNet Works – What Computer Sees
 Since the pattern does not match exactly, the computer will not be able to classify this as ‘X’
Page  15
Agenda
 Introduction
 Architecture Overview
– How ConvNet Works
 ConvNet Layers
– Convolutional Layer
– Pooling Layer
– Normalization Layer (ReLU)
– Fully-Connected Layer
 Demos & Parameters
– Hyperparameters
– MNIST dataset on AWS
– CIFAR-10 ConvNetJS
 Case Studies
EECS6980:006 Social Network Analysis
Page  16
ConvNet Layers (At a Glance)
 CONV layer will compute the output of neurons that are connected to local regions in the input,
each computing a dot product between their weights and a small region they are connected to in
the input volume.
 RELU layer will apply an elementwise activation function, such as the max(0,x) thresholding at
zero. This leaves the size of the volume unchanged.
 POOL layer will perform a downsampling operation along the spatial dimensions (width, height).
 FC (i.e. fully-connected) layer will compute the class scores, resulting in a volume of size [1x1xN],
where each of the N numbers corresponds to the score for one of the N categories.
EECS6980:006 Social Network Analysis
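As a rough illustration of how these four layer types are usually stacked in code, here is a minimal sketch using the Keras Sequential API (assuming TensorFlow/Keras is installed; the layer counts and sizes are arbitrary illustrative choices, not values taken from these slides):

```python
# Minimal CONV -> RELU -> POOL -> FC stack, sketched with Keras.
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 10  # N categories

model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),                   # input volume (width, height, depth)
    layers.Conv2D(16, 3, activation="relu"),          # CONV + RELU
    layers.MaxPooling2D(pool_size=2),                 # POOL (downsampling)
    layers.Conv2D(32, 3, activation="relu"),          # CONV + RELU
    layers.MaxPooling2D(pool_size=2),                 # POOL
    layers.Flatten(),                                 # rearrange into a 1-D array
    layers.Dense(num_classes, activation="softmax"),  # FC layer -> class scores (1x1xN)
])
model.summary()
```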
Page  17
Agenda
 Introduction
 Architecture Overview
– How ConvNet Works
 ConvNet Layers
– Convolutional Layer
– Pooling Layer
– Normalization Layer (ReLU)
– Fully-Connected Layer
 Demos & Parameters
– Hyperparameters
– MNIST dataset on AWS
– CIFAR-10 ConvNetJS
 Case Studies
EECS6980:006 Social Network Analysis
Page  18
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 X -1 -1 -1 -1 X X -1
-1 X X -1 -1 X X -1 -1
-1 -1 X 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 X -1 -1
-1 -1 X X -1 -1 X X -1
-1 X X -1 -1 -1 -1 X -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Recall – What Computer Sees
 Since the pattern does not match exactly, the computer will not be able to classify this as ‘X’
 What got changed?
Page  19
=
=
=
 The convolution layer works to identify patterns (features) rather than individual pixels
Convolutional Layer
Page  20
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 1
-1 1 -1
1 -1 -1
1 -1 1
-1 1 -1
1 -1 1
Convolutional Layer - Filters
 The CONV layer’s parameters consist of a set of learnable filters.
 Every filter is small spatially (along width and height), but extends through the full depth of the input
volume.
 During the forward pass, we slide (more precisely, convolve) each filter across the width and height
of the input volume and compute dot products between the entries of the filter and the input at any
position.
Page  21
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 1
-1 1 -1
1 -1 -1
1 -1 1
-1 1 -1
1 -1 1
Convolutional Layer - Filters
 Sliding the filter over the width and height of the input gives a 2-dimensional activation map that
responds to that filter at every spatial position.
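The sketch below (NumPy; it follows the pixel convention used in these slides, where matching pixels are 1 and the rest are -1) slides a 3x3 filter over the 9x9 image and records the dot product at every position, normalized by the number of filter entries so that a perfect match gives 1.00. This reproduces the 7x7 activation maps shown on the following slides.

```python
import numpy as np

def activation_map(image, filt, stride=1):
    """Slide `filt` over `image` (no padding) and return the map of
    normalized dot products, as in the X/O example in these slides."""
    fh, fw = filt.shape
    ih, iw = image.shape
    out = np.zeros(((ih - fh) // stride + 1, (iw - fw) // stride + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i * stride:i * stride + fh, j * stride:j * stride + fw]
            out[i, j] = np.sum(patch * filt) / filt.size  # 1.00 means a perfect match
    return out

# Diagonal filter and 9x9 'X' image from the slides (1 = lit pixel, -1 = background)
diag = np.array([[ 1, -1, -1],
                 [-1,  1, -1],
                 [-1, -1,  1]])
X = -np.ones((9, 9), dtype=int)
for k in range(1, 8):
    X[k, k] = 1          # main diagonal of the 'X'
    X[k, 8 - k] = 1      # anti-diagonal of the 'X'

print(np.round(activation_map(X, diag), 2))  # entry [0,0] is 0.77, entry [1,1] is 1.00
```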
Page  22
Convolutional Layer - Strides
Input
Hidden
Output
Input
Hidden
Output
1-D convolution
Filters: 1
Filter Size: 2
Stride: 2
1-D convolution
Filters: 1
Filter Size: 2
Stride: 1
• The distance the filter moves across the input from the previous layer between successive
applications is referred to as the stride.
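A minimal 1-D sketch of the stride idea (NumPy; the input values are made up for illustration): with a filter of size 2, stride 1 yields five outputs from six inputs, while stride 2 yields only three.

```python
import numpy as np

def conv1d(x, w, stride):
    """1-D convolution with no padding: slide filter w over x with the given stride."""
    out = []
    for start in range(0, len(x) - len(w) + 1, stride):
        out.append(np.dot(x[start:start + len(w)], w))
    return np.array(out)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # hypothetical 1-D input
w = np.array([0.5, 0.5])                      # filter size 2

print(conv1d(x, w, stride=1))  # 5 outputs: [1.5 2.5 3.5 4.5 5.5]
print(conv1d(x, w, stride=2))  # 3 outputs: [1.5 3.5 5.5]
```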
Page  23
Convolutional Layer - Padding
 Sometimes it is convenient to pad the input volume with zeros around the border.
 Zero padding allows us to preserve the spatial size of the output volume (see the sketch at the end of this slide)
EECS6980:006 Social Network Analysis
Input
Hidden
Output
1 D convolution with:
Filters: 1
Filter Size: 2
Stride: 2
Padding: 1
0
0
Input
Hidden
Output
1 D convolution with:
Filters: 2
Filter Size: 2
Stride: 2
Padding: 1
0
0
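A short NumPy padding sketch (illustrative numbers, and a 3-wide filter rather than the 2-wide one drawn above): padding the input with a single zero on each border keeps the stride-1 output the same length as the input, consistent with the output-size formula used later in these slides.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # hypothetical 1-D input, W = 6
w = np.array([1.0, 1.0, 1.0]) / 3.0           # filter size F = 3 (a simple average)

x_pad = np.pad(x, pad_width=1)                # one zero on each border -> length 8
print(x_pad)                                  # [0. 1. 2. 3. 4. 5. 6. 0.]

# Output length = 1 + (W - F + 2P)/S = 1 + (6 - 3 + 2*1)/1 = 6,
# i.e. zero padding preserves the spatial size of the output here.
out = [float(np.dot(x_pad[i:i + 3], w)) for i in range(len(x_pad) - 2)]
print(np.round(out, 2))                       # 6 outputs, same length as x
```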
Page  24
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Navigation Example
 Strides = 1, Filter Size = 3 X 3 X 1, Padding = 0
Page  25
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Navigation Example
Page  26
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Navigation Example
Page  27
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Navigation Example
Page  28
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Navigation Example
Page  29
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Navigation Example
Page  30
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Navigation Example
Page  31
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Navigation Example
Page  32
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Navigation Example
Page  33
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  34
1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  35
1 1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  36
1 1 1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  37
1 1 1
1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  38
1 1 1
1 1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  39
1 1 1
1 1 1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  40
1 1 1
1 1 1
1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  41
1 1 1
1 1 1
1 1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  42
1 1 1
1 1 1
1 1 1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  43
1
1 1 1
1 1 1
1 1 1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  44
1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  45
1 1 -1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  46
1 1 -1
1 1 1
-1 1 1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Computation Example
Page  47
1
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
1 1 -1
1 1 1
-1 1 1
0.55
1 1 -1
1 1 1
-1 1 1
Convolutional Layer – Filters – Computation Example
Page  48
1 -1 -1
-1 1 -1
-1 -1 1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
=
0.77 -0.11 0.11 0.33 0.55 -0.11 0.33
-0.11 1.00 -0.11 0.33 -0.11 0.11 -0.11
0.11 -0.11 1.00 -0.33 0.11 -0.11 0.55
0.33 0.33 -0.33 0.55 -0.33 0.33 0.33
0.55 -0.11 0.11 -0.33 1.00 -0.11 0.11
-0.11 0.11 -0.11 0.33 -0.11 1.00 -0.11
0.33 -0.11 0.55 0.33 0.11 -0.11 0.77
Convolutional Layer – Filters – Computation Example
Input Size (W): 9
Filter Size (F): 3 X 3
Stride (S): 1
Filters: 1
Padding (P): 0
9 X 9 7 X 7
Feature Map Size = 1+ (W – F + 2P)/S
= 1+ (9 – 3 + 2 X 0)/1 = 7
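The same formula as a small Python helper (a direct transcription of the expression on this slide):

```python
def feature_map_size(W, F, P, S):
    """Spatial size of a convolution output: 1 + (W - F + 2*P) / S."""
    assert (W - F + 2 * P) % S == 0, "filter does not fit the input evenly"
    return 1 + (W - F + 2 * P) // S

print(feature_map_size(W=9, F=3, P=0, S=1))  # 7, i.e. the 9x9 input gives a 7x7 feature map
```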
Page  49
1 -1 -1
-1 1 -1
-1 -1 1
0.33 -0.11 0.55 0.33 0.11 -0.11 0.77
-0.11 0.11 -0.11 0.33 -0.11 1.00 -0.11
0.55 -0.11 0.11 -0.33 1.00 -0.11 0.11
0.33 0.33 -0.33 0.55 -0.33 0.33 0.33
0.11 -0.11 1.00 -0.33 0.11 -0.11 0.55
-0.11 1.00 -0.11 0.33 -0.11 0.11 -0.11
0.77 -0.11 0.11 0.33 0.55 -0.11 0.33
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
=
0.77 -0.11 0.11 0.33 0.55 -0.11 0.33
-0.11 1.00 -0.11 0.33 -0.11 0.11 -0.11
0.11 -0.11 1.00 -0.33 0.11 -0.11 0.55
0.33 0.33 -0.33 0.55 -0.33 0.33 0.33
0.55 -0.11 0.11 -0.33 1.00 -0.11 0.11
-0.11 0.11 -0.11 0.33 -0.11 1.00 -0.11
0.33 -0.11 0.55 0.33 0.11 -0.11 0.77
-1 -1 1
-1 1 -1
1 -1 -1
1 -1 1
-1 1 -1
1 -1 1
0.33 -0.55 0.11 -0.11 0.11 -0.55 0.33
-0.55 0.55 -0.55 0.33 -0.55 0.55 -0.55
0.11 -0.55 0.55 -0.77 0.55 -0.55 0.11
-0.11 0.33 -0.77 1.00 -0.77 0.33 -0.11
0.11 -0.55 0.55 -0.77 0.55 -0.55 0.11
-0.55 0.55 -0.55 0.33 -0.55 0.55 -0.55
0.33 -0.55 0.11 -0.11 0.11 -0.55 0.33
=
=
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Filters – Output Feature Map
 Output Feature
Map of One
complete
convolution:
– Filters: 3
– Filter Size: 3 X 3
– Stride: 1
 Conclusion:
– Input Image:
9 X 9
– Output of
Convolution:
7 X 7 X 3
Page  50
0.33 -0.11 0.55 0.33 0.11 -0.11 0.77
-0.11 0.11 -0.11 0.33 -0.11 1.00 -0.11
0.55 -0.11 0.11 -0.33 1.00 -0.11 0.11
0.33 0.33 -0.33 0.55 -0.33 0.33 0.33
0.11 -0.11 1.00 -0.33 0.11 -0.11 0.55
-0.11 1.00 -0.11 0.33 -0.11 0.11 -0.11
0.77 -0.11 0.11 0.33 0.55 -0.11 0.33
0.77 -0.11 0.11 0.33 0.55 -0.11 0.33
-0.11 1.00 -0.11 0.33 -0.11 0.11 -0.11
0.11 -0.11 1.00 -0.33 0.11 -0.11 0.55
0.33 0.33 -0.33 0.55 -0.33 0.33 0.33
0.55 -0.11 0.11 -0.33 1.00 -0.11 0.11
-0.11 0.11 -0.11 0.33 -0.11 1.00 -0.11
0.33 -0.11 0.55 0.33 0.11 -0.11 0.77
0.33 -0.55 0.11 -0.11 0.11 -0.55 0.33
-0.55 0.55 -0.55 0.33 -0.55 0.55 -0.55
0.11 -0.55 0.55 -0.77 0.55 -0.55 0.11
-0.11 0.33 -0.77 1.00 -0.77 0.33 -0.11
0.11 -0.55 0.55 -0.77 0.55 -0.55 0.11
-0.55 0.55 -0.55 0.33 -0.55 0.55 -0.55
0.33 -0.55 0.11 -0.11 0.11 -0.55 0.33
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
Convolutional Layer – Output
Page  51
Agenda
 Introduction
 Architecture Overview
– How ConvNet Works
 ConvNet Layers
– Convolutional Layer
– Pooling Layer
– Normalization Layer (ReLU)
– Fully-Connected Layer
 Demos & Parameters
– Hyperparameters
– MNIST dataset on AWS
– CIFAR-10 ConvNetJS
 Case Studies
EECS6980:006 Social Network Analysis
Page  52
Rectified Linear Units (ReLUs)
0.77 -0.11 0.11 0.33 0.55 -0.11 0.33
-0.11 1.00 -0.11 0.33 -0.11 0.11 -0.11
0.11 -0.11 1.00 -0.33 0.11 -0.11 0.55
0.33 0.33 -0.33 0.55 -0.33 0.33 0.33
0.55 -0.11 0.11 -0.33 1.00 -0.11 0.11
-0.11 0.11 -0.11 0.33 -0.11 1.00 -0.11
0.33 -0.11 0.55 0.33 0.11 -0.11 0.77
0.77
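The ReLU step walked through on the next few slides is simply an element-wise max(0, x); a one-line NumPy sketch, using the first row of this feature map, is:

```python
import numpy as np

row = np.array([0.77, -0.11, 0.11, 0.33, 0.55, -0.11, 0.33])  # first row of the feature map
print(np.maximum(0.0, row))  # [0.77 0.   0.11 0.33 0.55 0.   0.33]
```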
Page  53
0.77 0
Rectified Linear Units (ReLUs)
0.77 -0.11 0.11 0.33 0.55 -0.11 0.33
-0.11 1.00 -0.11 0.33 -0.11 0.11 -0.11
0.11 -0.11 1.00 -0.33 0.11 -0.11 0.55
0.33 0.33 -0.33 0.55 -0.33 0.33 0.33
0.55 -0.11 0.11 -0.33 1.00 -0.11 0.11
-0.11 0.11 -0.11 0.33 -0.11 1.00 -0.11
0.33 -0.11 0.55 0.33 0.11 -0.11 0.77
Page  54
0.77 0 0.11 0.33 0.55 0 0.33
Rectified Linear Units (ReLUs)
0.77 -0.11 0.11 0.33 0.55 -0.11 0.33
-0.11 1.00 -0.11 0.33 -0.11 0.11 -0.11
0.11 -0.11 1.00 -0.33 0.11 -0.11 0.55
0.33 0.33 -0.33 0.55 -0.33 0.33 0.33
0.55 -0.11 0.11 -0.33 1.00 -0.11 0.11
-0.11 0.11 -0.11 0.33 -0.11 1.00 -0.11
0.33 -0.11 0.55 0.33 0.11 -0.11 0.77
Page  55
0.77 0 0.11 0.33 0.55 0 0.33
0 1.00 0 0.33 0 0.11 0
0.11 0 1.00 0 0.11 0 0.55
0.33 0.33 0 0.55 0 0.33 0.33
0.55 0 0.11 0 1.00 0 0.11
0 0.11 0 0.33 0 1.00 0
0.33 0 0.55 0.33 0.11 0 0.77
Rectified Linear Units (ReLUs)
0.77 -0.11 0.11 0.33 0.55 -0.11 0.33
-0.11 1.00 -0.11 0.33 -0.11 0.11 -0.11
0.11 -0.11 1.00 -0.33 0.11 -0.11 0.55
0.33 0.33 -0.33 0.55 -0.33 0.33 0.33
0.55 -0.11 0.11 -0.33 1.00 -0.11 0.11
-0.11 0.11 -0.11 0.33 -0.11 1.00 -0.11
0.33 -0.11 0.55 0.33 0.11 -0.11 0.77
Page  56
ReLU layer
0.77 0 0.11 0.33 0.55 0 0.33
0 1.00 0 0.33 0 0.11 0
0.11 0 1.00 0 0.11 0 0.55
0.33 0.33 0 0.55 0 0.33 0.33
0.55 0 0.11 0 1.00 0 0.11
0 0.11 0 0.33 0 1.00 0
0.33 0 0.55 0.33 0.11 0 0.77
0.33 0 0.11 0 0.11 0 0.33
0 0.55 0 0.33 0 0.55 0
0.11 0 0.55 0 0.55 0 0.11
0 0.33 0 1.00 0 0.33 0
0.11 0 0.55 0 0.55 0 0.11
0 0.55 0 0.33 0 0.55 0
0.33 0 0.11 0 0.11 0 0.33
0.33 0 0.55 0.33 0.11 0 0.77
0 0.11 0 0.33 0 1.00 0
0.55 0 0.11 0 1.00 0 0.11
0.33 0.33 0 0.55 0 0.33 0.33
0.11 0 1.00 0 0.11 0 0.55
0 1.00 0 0.33 0 0.11 0
0.77 0 0.11 0.33 0.55 0 0.33
0.33 -0.11 0.55 0.33 0.11 -0.11 0.77
-0.11 0.11 -0.11 0.33 -0.11 1.00 -0.11
0.55 -0.11 0.11 -0.33 1.00 -0.11 0.11
0.33 0.33 -0.33 0.55 -0.33 0.33 0.33
0.11 -0.11 1.00 -0.33 0.11 -0.11 0.55
-0.11 1.00 -0.11 0.33 -0.11 0.11 -0.11
0.77 -0.11 0.11 0.33 0.55 -0.11 0.33
0.77 -0.11 0.11 0.33 0.55 -0.11 0.33
-0.11 1.00 -0.11 0.33 -0.11 0.11 -0.11
0.11 -0.11 1.00 -0.33 0.11 -0.11 0.55
0.33 0.33 -0.33 0.55 -0.33 0.33 0.33
0.55 -0.11 0.11 -0.33 1.00 -0.11 0.11
-0.11 0.11 -0.11 0.33 -0.11 1.00 -0.11
0.33 -0.11 0.55 0.33 0.11 -0.11 0.77
0.33 -0.55 0.11 -0.11 0.11 -0.55 0.33
-0.55 0.55 -0.55 0.33 -0.55 0.55 -0.55
0.11 -0.55 0.55 -0.77 0.55 -0.55 0.11
-0.11 0.33 -0.77 1.00 -0.77 0.33 -0.11
0.11 -0.55 0.55 -0.77 0.55 -0.55 0.11
-0.55 0.55 -0.55 0.33 -0.55 0.55 -0.55
0.33 -0.55 0.11 -0.11 0.11 -0.55 0.33
Page  57
Agenda
 Introduction
 Architecture Overview
– How ConvNet Works
 ConvNet Layers
– Convolutional Layer
– Pooling Layer
– Normalization Layer (ReLU)
– Fully-Connected Layer
 Demos & Parameters
– Hyperparameters
– MNIST dataset on AWS
– CIFAR-10 ConvNetJS
 Case Studies
EECS6980:006 Social Network Analysis
Page  58
Pooling Layer
 The pooling layers down-sample the previous layer’s feature map.
 Its function is to progressively reduce the spatial size of the representation, reducing the number of
parameters and computation in the network.
 The pooling layer often uses the Max operation to perform the downsampling process.
EECS6980:006 Social Network Analysis
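A NumPy sketch of the 2x2, stride-2 max pooling used on the following slides. Note that those slides pool a 7x7 map into a 4x4 map, so the windows at the right and bottom edges are allowed to be partial; the loop below handles that by simply clipping the window at the border.

```python
import numpy as np

def max_pool(fmap, size=2, stride=2):
    """Max pooling; edge windows may be smaller than `size`, matching the 7x7 -> 4x4 example."""
    h, w = fmap.shape
    out_h = -(-h // stride)  # ceiling division, so partial edge windows still produce an output
    out_w = -(-w // stride)
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = fmap[i * stride:i * stride + size, j * stride:j * stride + size]
            out[i, j] = window.max()
    return out

# Applied to the 7x7 rectified feature map from the previous slides,
# max_pool(...) yields the 4x4 map whose first row is [1.00, 0.33, 0.55, 0.33].
```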
Page  59
1.00
Pooling
 Pooling Filter Size = 2 X 2, Stride = 2
Page  60
1.00 0.33
Pooling
 Pooling Filter Size = 2 X 2, Stride = 2
Page  61
1.00 0.33 0.55
Pooling
 Pooling Filter Size = 2 X 2, Stride = 2
Page  62
1.00 0.33 0.55 0.33
Pooling
 Pooling Filter Size = 2 X 2, Stride = 2
Page  63
1.00 0.33 0.55 0.33
0.33
Pooling
 Pooling Filter Size = 2 X 2, Stride = 2
Page  64
1.00 0.33 0.55 0.33
0.33 1.00 0.33 0.55
0.55 0.33 1.00 0.11
0.33 0.55 0.11 0.77
Pooling
 Pooling Filter Size = 2 X 2, Stride = 2
Page  65
1.00 0.33 0.55 0.33
0.33 1.00 0.33 0.55
0.55 0.33 1.00 0.11
0.33 0.55 0.11 0.77
0.33 0.55 1.00 0.77
0.55 0.55 1.00 0.33
1.00 1.00 0.11 0.55
0.77 0.33 0.55 0.33
0.55 0.33 0.55 0.33
0.33 1.00 0.55 0.11
0.55 0.55 0.55 0.11
0.33 0.11 0.11 0.33
Pooling
Page  66
Layers get stacked
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
1.00 0.33 0.55 0.33
0.33 1.00 0.33 0.55
0.55 0.33 1.00 0.11
0.33 0.55 0.11 0.77
0.33 0.55 1.00 0.77
0.55 0.55 1.00 0.33
1.00 1.00 0.11 0.55
0.77 0.33 0.55 0.33
0.55 0.33 0.55 0.33
0.33 1.00 0.55 0.11
0.55 0.55 0.55 0.11
0.33 0.11 0.11 0.33
Page  67
Layers Get Stacked - Example
224 X 224 X 3 → CONVOLUTION WITH 64 FILTERS → 224 X 224 X 64 → POOLING (DOWNSAMPLING) → 112 X 112 X 64
Page  68
Deep stacking
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
1.00 0.55
0.55 1.00
0.55 1.00
1.00 0.55
1.00 0.55
0.55 0.55
Page  69
Agenda
 Introduction
 Architecture Overview
– How ConvNet Works
 ConvNet Layers
– Convolutional Layer
– Pooling Layer
– Normalization Layer (ReLU)
– Fully-Connected Layer
 Demos & Parameters
– Hyperparameters
– MNIST dataset on AWS
– CIFAR-10 ConvNetJS
 Case Studies
EECS6980:006 Social Network Analysis
Page  70
Fully connected layer
 Fully connected layers are the normal flat
feed-forward neural network layers.
 These layers may have a non-linear
activation function or a softmax activation
in order to predict classes.
 To compute the output, we simply rearrange the output matrices into a 1-D array.
1.00 0.55
0.55 1.00
0.55 1.00
1.00 0.55
1.00 0.55
0.55 0.55
1.00
0.55
0.55
1.00
1.00
0.55
0.55
0.55
0.55
1.00
1.00
0.55
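A NumPy sketch of the fully connected step. The twelve flattened pooled values below are the ones listed on this slide; the weight matrix is hypothetical (random), since the real weights would be learned during training.

```python
import numpy as np

# Flattened pooled features from the slide (three 2x2 maps rearranged into a 1-D array)
features = np.array([1.00, 0.55, 0.55, 1.00, 1.00, 0.55,
                     0.55, 0.55, 0.55, 1.00, 1.00, 0.55])

# Hypothetical learned weights: one column of 12 weights per class ('X' and 'O')
rng = np.random.default_rng(0)
W = rng.normal(size=(12, 2))
b = np.zeros(2)

scores = features @ W + b                      # weighted sum at each output node
probs = np.exp(scores) / np.exp(scores).sum()  # softmax over the class scores
print(dict(zip(["X", "O"], np.round(probs, 2))))
```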
Page  71
Fully connected layer
 A weighted sum of the inputs (the sum of products of inputs and weights) at each output node determines the final prediction
X
O
0.55
1.00
1.00
0.55
0.55
0.55
0.55
0.55
1.00
0.55
0.55
1.00
Page  72
Putting it all together
-1 -1 -1 -1 -1 -1 -1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 -1 -1 1 -1 -1 -1 -1
-1 -1 -1 1 -1 1 -1 -1 -1
-1 -1 1 -1 -1 -1 1 -1 -1
-1 1 -1 -1 -1 -1 -1 1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1
X
O
Page  73
Agenda
 Introduction
 Architecture Overview
– How ConvNet Works
 ConvNet Layers
– Convolutional Layer
– Pooling Layer
– Normalization Layer (ReLU)
– Fully-Connected Layer
 Parameters & Demos
– Hyperparameters
– MNIST dataset on AWS
– CIFAR-10 ConvNetJS
 Case Studies
EECS6980:006 Social Network Analysis
Page  74
Hyperparameters (knobs)
 Convolution
– Filter Size
– Number of Filters
– Padding
– Stride
 Pooling
– Window Size
– Stride
 Fully Connected
– Number of neurons
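As a rough illustration of how these knobs interact, the sketch below collects them in a plain Python dict and traces the spatial size of the 9x9 example image through one convolution and one pooling step (the values mirror the example used throughout this deck; the dict layout itself is just an illustrative choice):

```python
# Hypothetical hyperparameter settings for the 9x9 'X'/'O' example
hparams = {
    "conv": {"filter_size": 3, "num_filters": 3, "padding": 0, "stride": 1},
    "pool": {"window_size": 2, "stride": 2},
    "fc":   {"num_neurons": 2},  # one output neuron per class ('X', 'O')
}

def out_size(w, f, p, s):
    return 1 + (w - f + 2 * p) // s

w = 9  # input width/height
w = out_size(w, hparams["conv"]["filter_size"], hparams["conv"]["padding"], hparams["conv"]["stride"])
print(w)  # 7 after the 3x3, stride-1, no-padding convolution
w = -(-w // hparams["pool"]["stride"])  # ceiling division: partial edge windows are allowed
print(w)  # 4 after 2x2 pooling with stride 2
```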
Page  75
Agenda
 Introduction
 Architecture Overview
– How ConvNet Works
 ConvNet Layers
– Convolutional Layer
– Pooling Layer
– Normalization Layer (ReLU)
– Fully-Connected Layer
 Parameters & Demos
– Hyperparameters
– MNIST dataset on AWS
– CIFAR-10 ConvNetJS
 Case Studies
EECS6980:006 Social Network Analysis
Page  76
Agenda
 Introduction
 Architecture Overview
– How ConvNet Works
 ConvNet Layers
– Convolutional Layer
– Pooling Layer
– Normalization Layer (ReLU)
– Fully-Connected Layer
 Parameters & Demos
– Hyperparameters
– MNIST dataset on AWS
– CIFAR-10 ConvNetJS
 Case Studies
EECS6980:006 Social Network Analysis
Page  77
Case Studies
 LeNet – 1998
 AlexNet – 2012
 ZFNet – 2013
 VGG – 2014
 GoogLeNet – 2014
 ResNet – 2015
EECS6980:006 Social Network Analysis
Page  78
References
 Karpathy, A. (n.d.). CS231n Convolutional Neural Networks for Visual Recognition. Retrieved from
http://cs231n.github.io/convolutional-networks/#overview
 Rohrer, B. (n.d.). How do Convolutional Neural Networks work?. Retrieved from
http://brohrer.github.io/how_convolutional_neural_networks_work.html
 Brownlee, J. (n.d.). Crash Course in Convolutional Neural Networks for Machine Learning. Retrieved from
http://machinelearningmastery.com/crash-course-convolutional-neural-networks/
 Lidinwise (n.d.). The revolution of depth. Retrieved from https://medium.com/@Lidinwise/the-revolution-of-
depth-facf174924f5#.8or5c77ss
 Nervana. (n.d.). Tutorial: Convolutional neural networks. Retrieved from
https://www.nervanasys.com/convolutional-neural-networks/
EECS6980:006 Social Network Analysis
Page  79
Questions
EECS6980:006 Social Network Analysis
Page  80
Thank you!!
EECS6980:006 Social Network Analysis
1 von 80

Recomendados

Cnn von
CnnCnn
CnnNirthika Rajendran
14K views31 Folien
Convolutional neural network von
Convolutional neural networkConvolutional neural network
Convolutional neural networkMojammilHusain
1.1K views11 Folien
Convolutional Neural Networks (CNN) von
Convolutional Neural Networks (CNN)Convolutional Neural Networks (CNN)
Convolutional Neural Networks (CNN)Gaurav Mittal
58.5K views70 Folien
Convolutional neural network von
Convolutional neural network Convolutional neural network
Convolutional neural network Yan Xu
5.3K views68 Folien
Introduction to CNN von
Introduction to CNNIntroduction to CNN
Introduction to CNNShuai Zhang
8.8K views18 Folien
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S... von
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...Simplilearn
13.6K views63 Folien

Más contenido relacionado

Was ist angesagt?

Introduction to Neural Networks von
Introduction to Neural NetworksIntroduction to Neural Networks
Introduction to Neural NetworksDatabricks
21.6K views62 Folien
Convolution Neural Network (CNN) von
Convolution Neural Network (CNN)Convolution Neural Network (CNN)
Convolution Neural Network (CNN)Suraj Aavula
13.7K views22 Folien
Resnet von
ResnetResnet
Resnetashwinjoseph95
1.6K views16 Folien
Convolutional Neural Network (CNN) von
Convolutional Neural Network (CNN)Convolutional Neural Network (CNN)
Convolutional Neural Network (CNN)Muhammad Haroon
829 views16 Folien
Artificial Neural Network von
Artificial Neural NetworkArtificial Neural Network
Artificial Neural NetworkPrakash K
2K views58 Folien
Convolutional neural network von
Convolutional neural networkConvolutional neural network
Convolutional neural networkItachi SK
223 views22 Folien

Was ist angesagt?(20)

Introduction to Neural Networks von Databricks
Introduction to Neural NetworksIntroduction to Neural Networks
Introduction to Neural Networks
Databricks21.6K views
Convolution Neural Network (CNN) von Suraj Aavula
Convolution Neural Network (CNN)Convolution Neural Network (CNN)
Convolution Neural Network (CNN)
Suraj Aavula13.7K views
Convolutional Neural Network (CNN) von Muhammad Haroon
Convolutional Neural Network (CNN)Convolutional Neural Network (CNN)
Convolutional Neural Network (CNN)
Muhammad Haroon829 views
Artificial Neural Network von Prakash K
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
Prakash K2K views
Convolutional neural network von Itachi SK
Convolutional neural networkConvolutional neural network
Convolutional neural network
Itachi SK223 views
Convolutional Neural Network Models - Deep Learning von Benha University
Convolutional Neural Network Models - Deep LearningConvolutional Neural Network Models - Deep Learning
Convolutional Neural Network Models - Deep Learning
Benha University12.6K views
CNN and its applications by ketaki von Ketaki Patwari
CNN and its applications by ketakiCNN and its applications by ketaki
CNN and its applications by ketaki
Ketaki Patwari1.7K views
Back propagation von Nagarajan
Back propagationBack propagation
Back propagation
Nagarajan66.6K views
Introduction Of Artificial neural network von Nagarajan
Introduction Of Artificial neural networkIntroduction Of Artificial neural network
Introduction Of Artificial neural network
Nagarajan18.4K views
Convolutional Neural Networks : Popular Architectures von ananth
Convolutional Neural Networks : Popular ArchitecturesConvolutional Neural Networks : Popular Architectures
Convolutional Neural Networks : Popular Architectures
ananth1.6K views
Convolutional Neural Network (CNN) - image recognition von YUNG-KUEI CHEN
Convolutional Neural Network (CNN)  - image recognitionConvolutional Neural Network (CNN)  - image recognition
Convolutional Neural Network (CNN) - image recognition
YUNG-KUEI CHEN4.3K views
Deep Learning - Convolutional Neural Networks von Christian Perone
Deep Learning - Convolutional Neural NetworksDeep Learning - Convolutional Neural Networks
Deep Learning - Convolutional Neural Networks
Christian Perone71.4K views
Machine Learning: Introduction to Neural Networks von Francesco Collova'
Machine Learning: Introduction to Neural NetworksMachine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural Networks
Francesco Collova'16.1K views
Introduction to Recurrent Neural Network von Knoldus Inc.
Introduction to Recurrent Neural NetworkIntroduction to Recurrent Neural Network
Introduction to Recurrent Neural Network
Knoldus Inc.1.9K views
Introduction to Deep learning von leopauly
Introduction to Deep learningIntroduction to Deep learning
Introduction to Deep learning
leopauly1.7K views

Similar a Convolutional Neural Networks

Bio-inspired Algorithms for Evolving the Architecture of Convolutional Neural... von
Bio-inspired Algorithms for Evolving the Architecture of Convolutional Neural...Bio-inspired Algorithms for Evolving the Architecture of Convolutional Neural...
Bio-inspired Algorithms for Evolving the Architecture of Convolutional Neural...Ashray Bhandare
591 views146 Folien
Deep Learning - CNN and RNN von
Deep Learning - CNN and RNNDeep Learning - CNN and RNN
Deep Learning - CNN and RNNAshray Bhandare
5.9K views135 Folien
Introduction to machinel learning and deep learning von
Introduction to machinel learning and deep learningIntroduction to machinel learning and deep learning
Introduction to machinel learning and deep learningIbrahim Amer
687 views100 Folien
Practical Deep Learning Using Tensor Flow - Sandeep Kath von
Practical Deep Learning Using Tensor Flow - Sandeep KathPractical Deep Learning Using Tensor Flow - Sandeep Kath
Practical Deep Learning Using Tensor Flow - Sandeep KathSandeep Kath
84 views38 Folien
CNN von
CNNCNN
CNNgeorgejustymirobi1
23 views49 Folien
CNN (v2).pptx von
CNN (v2).pptxCNN (v2).pptx
CNN (v2).pptxFKKBWITTAINAN
26 views45 Folien

Similar a Convolutional Neural Networks(20)

Bio-inspired Algorithms for Evolving the Architecture of Convolutional Neural... von Ashray Bhandare
Bio-inspired Algorithms for Evolving the Architecture of Convolutional Neural...Bio-inspired Algorithms for Evolving the Architecture of Convolutional Neural...
Bio-inspired Algorithms for Evolving the Architecture of Convolutional Neural...
Ashray Bhandare591 views
Introduction to machinel learning and deep learning von Ibrahim Amer
Introduction to machinel learning and deep learningIntroduction to machinel learning and deep learning
Introduction to machinel learning and deep learning
Ibrahim Amer687 views
Practical Deep Learning Using Tensor Flow - Sandeep Kath von Sandeep Kath
Practical Deep Learning Using Tensor Flow - Sandeep KathPractical Deep Learning Using Tensor Flow - Sandeep Kath
Practical Deep Learning Using Tensor Flow - Sandeep Kath
Sandeep Kath84 views
Loop parallelization & pipelining von jagrat123
Loop parallelization & pipeliningLoop parallelization & pipelining
Loop parallelization & pipelining
jagrat1234.8K views
IRJET-Multiple Object Detection using Deep Neural Networks von IRJET Journal
IRJET-Multiple Object Detection using Deep Neural NetworksIRJET-Multiple Object Detection using Deep Neural Networks
IRJET-Multiple Object Detection using Deep Neural Networks
IRJET Journal43 views
Deep-Learning-2017-Lecture5CNN.ppt von AminHa5
Deep-Learning-2017-Lecture5CNN.pptDeep-Learning-2017-Lecture5CNN.ppt
Deep-Learning-2017-Lecture5CNN.ppt
AminHa519 views
Deep-Learning von Amnaalia
Deep-LearningDeep-Learning
Deep-Learning
Amnaalia13 views
Deep-Learning-2017-Lecture5CNN.ppt von kundurti
Deep-Learning-2017-Lecture5CNN.pptDeep-Learning-2017-Lecture5CNN.ppt
Deep-Learning-2017-Lecture5CNN.ppt
kundurti6 views
Deep-Learning-Convolutional Neural Networks and Sequence Modeling.ppt von PraveenVundrajavarap
Deep-Learning-Convolutional Neural Networks and Sequence Modeling.pptDeep-Learning-Convolutional Neural Networks and Sequence Modeling.ppt
Deep-Learning-Convolutional Neural Networks and Sequence Modeling.ppt
Deep learning-2017-lecture5 cnn von AnandShinde47
Deep learning-2017-lecture5 cnnDeep learning-2017-lecture5 cnn
Deep learning-2017-lecture5 cnn
AnandShinde4786 views
Deep-Learning-2017-Lecture5CNN.ppt von sghorai
Deep-Learning-2017-Lecture5CNN.pptDeep-Learning-2017-Lecture5CNN.ppt
Deep-Learning-2017-Lecture5CNN.ppt
sghorai4 views
Deep-Learning-2017-Lecture5CNN.ppt von archn4
Deep-Learning-2017-Lecture5CNN.pptDeep-Learning-2017-Lecture5CNN.ppt
Deep-Learning-2017-Lecture5CNN.ppt
archn42 views

Último

Short Story Assignment by Kelly Nguyen von
Short Story Assignment by Kelly NguyenShort Story Assignment by Kelly Nguyen
Short Story Assignment by Kelly Nguyenkellynguyen01
19 views17 Folien
[DSC Europe 23][Cryptica] Martin_Summer_Digital_central_bank_money_Ideas_init... von
[DSC Europe 23][Cryptica] Martin_Summer_Digital_central_bank_money_Ideas_init...[DSC Europe 23][Cryptica] Martin_Summer_Digital_central_bank_money_Ideas_init...
[DSC Europe 23][Cryptica] Martin_Summer_Digital_central_bank_money_Ideas_init...DataScienceConferenc1
5 views18 Folien
Advanced_Recommendation_Systems_Presentation.pptx von
Advanced_Recommendation_Systems_Presentation.pptxAdvanced_Recommendation_Systems_Presentation.pptx
Advanced_Recommendation_Systems_Presentation.pptxneeharikasingh29
5 views9 Folien
[DSC Europe 23] Spela Poklukar & Tea Brasanac - Retrieval Augmented Generation von
[DSC Europe 23] Spela Poklukar & Tea Brasanac - Retrieval Augmented Generation[DSC Europe 23] Spela Poklukar & Tea Brasanac - Retrieval Augmented Generation
[DSC Europe 23] Spela Poklukar & Tea Brasanac - Retrieval Augmented GenerationDataScienceConferenc1
15 views29 Folien
CRM stick or twist.pptx von
CRM stick or twist.pptxCRM stick or twist.pptx
CRM stick or twist.pptxinfo828217
11 views16 Folien
[DSC Europe 23] Ivana Sesic - Use of AI in Public Health.pptx von
[DSC Europe 23] Ivana Sesic - Use of AI in Public Health.pptx[DSC Europe 23] Ivana Sesic - Use of AI in Public Health.pptx
[DSC Europe 23] Ivana Sesic - Use of AI in Public Health.pptxDataScienceConferenc1
5 views15 Folien

Último(20)

Short Story Assignment by Kelly Nguyen von kellynguyen01
Short Story Assignment by Kelly NguyenShort Story Assignment by Kelly Nguyen
Short Story Assignment by Kelly Nguyen
kellynguyen0119 views
[DSC Europe 23][Cryptica] Martin_Summer_Digital_central_bank_money_Ideas_init... von DataScienceConferenc1
[DSC Europe 23][Cryptica] Martin_Summer_Digital_central_bank_money_Ideas_init...[DSC Europe 23][Cryptica] Martin_Summer_Digital_central_bank_money_Ideas_init...
[DSC Europe 23][Cryptica] Martin_Summer_Digital_central_bank_money_Ideas_init...
Advanced_Recommendation_Systems_Presentation.pptx von neeharikasingh29
Advanced_Recommendation_Systems_Presentation.pptxAdvanced_Recommendation_Systems_Presentation.pptx
Advanced_Recommendation_Systems_Presentation.pptx
[DSC Europe 23] Spela Poklukar & Tea Brasanac - Retrieval Augmented Generation von DataScienceConferenc1
[DSC Europe 23] Spela Poklukar & Tea Brasanac - Retrieval Augmented Generation[DSC Europe 23] Spela Poklukar & Tea Brasanac - Retrieval Augmented Generation
[DSC Europe 23] Spela Poklukar & Tea Brasanac - Retrieval Augmented Generation
CRM stick or twist.pptx von info828217
CRM stick or twist.pptxCRM stick or twist.pptx
CRM stick or twist.pptx
info82821711 views
PRIVACY AWRE PERSONAL DATA STORAGE von antony420421
PRIVACY AWRE PERSONAL DATA STORAGEPRIVACY AWRE PERSONAL DATA STORAGE
PRIVACY AWRE PERSONAL DATA STORAGE
antony4204215 views
[DSC Europe 23] Zsolt Feleki - Machine Translation should we trust it.pptx von DataScienceConferenc1
[DSC Europe 23] Zsolt Feleki - Machine Translation should we trust it.pptx[DSC Europe 23] Zsolt Feleki - Machine Translation should we trust it.pptx
[DSC Europe 23] Zsolt Feleki - Machine Translation should we trust it.pptx
Survey on Factuality in LLM's.pptx von NeethaSherra1
Survey on Factuality in LLM's.pptxSurvey on Factuality in LLM's.pptx
Survey on Factuality in LLM's.pptx
NeethaSherra17 views
SUPER STORE SQL PROJECT.pptx von khan888620
SUPER STORE SQL PROJECT.pptxSUPER STORE SQL PROJECT.pptx
SUPER STORE SQL PROJECT.pptx
khan88862013 views
Organic Shopping in Google Analytics 4.pdf von GA4 Tutorials
Organic Shopping in Google Analytics 4.pdfOrganic Shopping in Google Analytics 4.pdf
Organic Shopping in Google Analytics 4.pdf
GA4 Tutorials16 views
Data Journeys Hard Talk workshop final.pptx von info828217
Data Journeys Hard Talk workshop final.pptxData Journeys Hard Talk workshop final.pptx
Data Journeys Hard Talk workshop final.pptx
info82821710 views
CRIJ4385_Death Penalty_F23.pptx von yvettemm100
CRIJ4385_Death Penalty_F23.pptxCRIJ4385_Death Penalty_F23.pptx
CRIJ4385_Death Penalty_F23.pptx
yvettemm1006 views
[DSC Europe 23][AI:CSI] Dragan Pleskonjic - AI Impact on Cybersecurity and P... von DataScienceConferenc1
[DSC Europe 23][AI:CSI]  Dragan Pleskonjic - AI Impact on Cybersecurity and P...[DSC Europe 23][AI:CSI]  Dragan Pleskonjic - AI Impact on Cybersecurity and P...
[DSC Europe 23][AI:CSI] Dragan Pleskonjic - AI Impact on Cybersecurity and P...
Ukraine Infographic_22NOV2023_v2.pdf von AnastosiyaGurin
Ukraine Infographic_22NOV2023_v2.pdfUkraine Infographic_22NOV2023_v2.pdf
Ukraine Infographic_22NOV2023_v2.pdf
AnastosiyaGurin1.4K views

Convolutional Neural Networks

  • 2. Page  2 Agenda  Introduction  Archetecture Overview – How ConvNet Works  ConvNet Layers – Convolutional Layer – Pooling Layer – Normalization Layer (ReLU) – Fully-Connected Layer  Demos & Parameters – Hyper Parameters – Mnist dataset on AWS – CIFAR-10 ConvNetJS  Case Studies EECS6980:006 Social Network Analysis
  • 3. Page  3 Agenda  Introduction  Archetecture Overview – How ConvNet Works  ConvNet Layers – Convolutional Layer – Pooling Layer – Normalization Layer (ReLU) – Fully-Connected Layer  Demos & Parameters – Hyper Parameters – Mnist dataset on AWS – CIFAR-10 ConvNetJS  Case Studies EECS6980:006 Social Network Analysis
  • 4. Page  4 Introduction  A convolutional neural network (or ConvNet) is a type of feed-forward artificial neural network  The architecture of a ConvNet is designed to take advantage of the 2D structure of an input image.    A ConvNet is comprised of one or more convolutional layers (often with a pooling step) and then followed by one or more fully connected layers as in a standard multilayer neural network. EECS6980:006 Social Network Analysis VS
  • 5. Page  5 Agenda  Introduction  Archetecture Overview – How ConvNet Works  ConvNet Layers – Convolutional Layer – Pooling Layer – Normalization Layer (ReLU) – Fully-Connected Layer  Demos & Parameters – Hyper Parameters – Mnist dataset on AWS – CIFAR-10 ConvNetJS  Case Studies EECS6980:006 Social Network Analysis
  • 6. Page  6 Motivation behind ConvNets  Consider an image of size 200x200x3 (200 wide, 200 high, 3 color channels) – a single fully-connected neuron in a first hidden layer of a regular Neural Network would have 200*200*3 = 120,000 weights. – Due to the presence of several such neurons, this full connectivity is wasteful and the huge number of parameters would quickly lead to overfitting  However, in a ConvNet, the neurons in a layer will only be connected to a small region of the layer before it, instead of all of the neurons in a fully-connected manner. – the final output layer would have dimensions 1x1xN, because by the end of the ConvNet architecture we will reduce the full image into a single vector of class scores (for N classes), arranged along the depth dimension EECS6980:006 Social Network Analysis
  • 7. Page  7 MLP VS ConvNet EECS6980:006 Social Network Analysis Input Hidden Output Input Hidden Output Multilayered Perceptron: All Fully Connected Layers Convolutional Neural Network With Partially Connected Convolution Layer
  • 8. Page  8 MLP vs ConvNet A regular 3-layer Neural Network. A ConvNet arranges its neurons in three dimensions (width, height, depth), as visualized in one of the layers. EECS6980:006 Social Network Analysis
  • 9. Page  9 Agenda  Introduction  Archetecture Overview – How ConvNet Works  ConvNet Layers – Convolutional Layer – Pooling Layer – Normalization Layer (ReLU) – Fully-Connected Layer  Demos & Parameters – Hyper Parameters – Mnist dataset on AWS – CIFAR-10 ConvNetJS  Case Studies EECS6980:006 Social Network Analysis
  • 10. Page  10 How ConvNet Works  For example, a ConvNet takes the input as an image which can be classified as ‘X’ or ‘O’  In a simple case, ‘X’ would look like: X or OCNN A two-dimensional array of pixels
  • 11. Page  11 How ConvNet Works  What about trickier cases? CNN X CNN O
  • 12. Page  12 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 = ? How ConvNet Works – What Computer Sees
  • 13. Page  13 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 =x How ConvNet Works
  • 14. Page  14 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 X -1 -1 -1 -1 X X -1 -1 X X -1 -1 X X -1 -1 -1 -1 X 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 X -1 -1 -1 -1 X X -1 -1 X X -1 -1 X X -1 -1 -1 -1 X -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 How ConvNet Works – What Computer Sees  Since the pattern doesnot match exactly, the computer will not be able to classify this as ‘X’
  • 15. Page  15 Agenda  Introduction  Archetecture Overview – How ConvNet Works  ConvNet Layers – Convolutional Layer – Pooling Layer – Normalization Layer (ReLU) – Fully-Connected Layer  Demos & Parameters – Hyper Parameters – Mnist dataset on AWS – CIFAR-10 ConvNetJS  Case Studies EECS6980:006 Social Network Analysis
  • 16. Page  16 ConvNet Layers (At a Glance)  CONV layer will compute the output of neurons that are connected to local regions in the input, each computing a dot product between their weights and a small region they are connected to in the input volume.  RELU layer will apply an elementwise activation function, such as the max(0,x) thresholding at zero. This leaves the size of the volume unchanged.  POOL layer will perform a downsampling operation along the spatial dimensions (width, height).  FC (i.e. fully-connected) layer will compute the class scores, resulting in volume of size [1x1xN], where each of the N numbers correspond to a class score, such as among the N categories. EECS6980:006 Social Network Analysis
  • 17. Page  17 Agenda  Introduction  Archetecture Overview – How ConvNet Works  ConvNet Layers – Convolutional Layer – Pooling Layer – Normalization Layer (ReLU) – Fully-Connected Layer  Demos & Parameters – Hyper Parameters – Mnist dataset on AWS – CIFAR-10 ConvNetJS  Case Studies EECS6980:006 Social Network Analysis
  • 18. Page  18 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 X -1 -1 -1 -1 X X -1 -1 X X -1 -1 X X -1 -1 -1 -1 X 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 X -1 -1 -1 -1 X X -1 -1 X X -1 -1 X X -1 -1 -1 -1 X -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 Recall – What Computer Sees  Since the pattern doesnot match exactly, the computer will not be able to classify this as ‘X’  What got changed?
  • 19. Page  19 = = =  Convolution layer will work to identify patterns (features) instead of individual pixels Convolutional Layer
  • 20. Page  20 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 1 -1 1 -1 1 -1 -1 1 -1 1 -1 1 -1 1 -1 1 Convolutional Layer - Filters  The CONV layer’s parameters consist of a set of learnable filters.  Every filter is small spatially (along width and height), but extends through the full depth of the input volume.  During the forward pass, we slide (more precisely, convolve) each filter across the width and height of the input volume and compute dot products between the entries of the filter and the input at any position.
  • 21. Page  21 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 1 -1 1 -1 1 -1 -1 1 -1 1 -1 1 -1 1 -1 1 Convolutional Layer - Filters  Sliding the filter over the width and height of the input gives 2-dimensional activation map that responds to that filter at every spatial position.
  • 22. Page  22 Convolutional Layer - Strides Input Hidden Output Input Hidden Output 1-D convolution Filters: 1 Filter Size: 2 Stride: 2 1-D convolution Filters: 1 Filter Size: 2 Stride: 1 • The distance that filter is moved across the input from the previous layer each activation is referred to as the stride.
  • 23. Page  23 Convolutional Layer - Padding  Sometimes it is convenient to pad the input volume with zeros around the border.  Zero padding is allows us to preserve the spatial size of the output volumes EECS6980:006 Social Network Analysis Input Hidden Output 1 D convolution with: Filters: 1 Filter Size: 2 Stride: 2 Padding: 1 0 0 Input Hidden Output 1 D convolution with: Filters: 2 Filter Size: 2 Stride: 2 Padding: 1 0 0
  • 24. Page  24 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 Convolutional Layer – Filters – Navigation Example  Strides = 1, Filter Size = 3 X 3 X 1, Padding = 0
  • 25. Page  25 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 Convolutional Layer – Filters – Navigation Example
  • 26. Page  26 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 Convolutional Layer – Filters – Navigation Example
  • 27. Page  27 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 Convolutional Layer – Filters – Navigation Example
  • 28. Page  28 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 Convolutional Layer – Filters – Navigation Example
  • 29. Page  29 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 Convolutional Layer – Filters – Navigation Example
  • 30. Page  30 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 Convolutional Layer – Filters – Navigation Example
  • 31. Page  31 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 Convolutional Layer – Filters – Navigation Example
  • 32. Page  32 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 Convolutional Layer – Filters – Navigation Example
  • 33. Page  33 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 1 -1 -1 -1 -1 -1 1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 Convolutional Layer – Filters – Computation Example
  • 34-47. Pages 34-47 Convolutional Layer – Filters – Computation Example (step-by-step figures: the 3 X 3 filter slides over the 9 X 9 image; at each position the filter and the underlying 3 X 3 patch are multiplied element-wise and the nine products are averaged, giving one value of the feature map: 1.00 for a perfect match, smaller values such as 0.55 for partial matches)
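The per-patch step that pages 34-47 animate can be written out in a few lines. Below is a minimal NumPy sketch (the deck itself contains no code; the filter and patch values follow the slides, the variable and function names are my own): multiply the 3 X 3 filter with one 3 X 3 image patch element-wise and average the nine products.

```python
import numpy as np

# The "\" diagonal filter and a matching 3x3 patch of the 9x9 image,
# using the deck's convention of +1 for "on" pixels and -1 for "off" pixels.
filt = np.array([[ 1, -1, -1],
                 [-1,  1, -1],
                 [-1, -1,  1]])
patch = np.array([[ 1, -1, -1],
                  [-1,  1, -1],
                  [-1, -1,  1]])

# Multiply corresponding entries and average the nine products,
# exactly the step the slides animate one cell at a time.
products = filt * patch
value = products.sum() / products.size
print(value)   # 1.0: a perfect match
```

A patch that agrees with the filter in only seven of the nine positions would give (7 - 2) / 9, roughly 0.55, which is where the 0.55 entries in the feature map on the next slide come from.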
  • 48. Page  48 Convolutional Layer – Filters – Computation Example  Convolving the 9 X 9 image with one 3 X 3 filter produces a 7 X 7 feature map (figure: the full map, with values such as 0.77, 1.00 and 0.55 where the filter matches well and negative values where it does not)  Input Size (W): 9, Filter Size (F): 3 X 3, Stride (S): 1, Filters: 1, Padding (P): 0  Feature Map Size = 1 + (W – F + 2P)/S = 1 + (9 – 3 + 2 X 0)/1 = 7
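The feature-map-size formula on this slide is easy to sanity-check in code. A small helper (a sketch; the function name is my own):

```python
def feature_map_size(W, F, P=0, S=1):
    """Spatial size of the conv output: 1 + (W - F + 2P) / S.
    Assumes (W - F + 2P) is divisible by S."""
    return 1 + (W - F + 2 * P) // S

print(feature_map_size(W=9, F=3, P=0, S=1))   # 7, matching the 7 x 7 map on the slide
```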
  • 49. Page  49 Convolutional Layer – Filters – Output Feature Map  Output feature maps of one complete convolution (figure: one 7 X 7 map per filter): – Filters: 3 – Filter Size: 3 X 3 – Stride: 1  Conclusion: – Input Image: 9 X 9 – Output of Convolution: 7 X 7 X 3
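A NumPy sketch of one complete convolution as the slides define it (average of element-wise products, "valid" placement, stride 1). The 9 X 9 "X" image and the three filters are reconstructed from the figures; the function and variable names are my own:

```python
import numpy as np

def convolve(image, filt, stride=1):
    """Slide the filter over the image; at each position record the
    average of the element-wise products (the deck's convolution)."""
    W, F = image.shape[0], filt.shape[0]
    out = 1 + (W - F) // stride
    fmap = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            patch = image[i*stride:i*stride+F, j*stride:j*stride+F]
            fmap[i, j] = (patch * filt).mean()
    return fmap

# 9x9 "X" image: +1 on the two diagonals, -1 everywhere else.
image = -np.ones((9, 9))
for k in range(1, 8):
    image[k, k] = image[k, 8 - k] = 1

filters = [
    np.array([[ 1, -1, -1], [-1,  1, -1], [-1, -1,  1]]),   # "\" diagonal
    np.array([[ 1, -1,  1], [-1,  1, -1], [ 1, -1,  1]]),   # small "x"
    np.array([[-1, -1,  1], [-1,  1, -1], [ 1, -1, -1]]),   # "/" diagonal
]

maps = np.stack([convolve(image, f) for f in filters], axis=-1)
print(maps.shape)   # (7, 7, 3): one 7x7 feature map per filter
```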
  • 50. Page  50 Convolutional Layer – Output (figure: the 9 X 9 input image and the three 7 X 7 feature maps produced by the convolutional layer)
  • 51. Page  51 Agenda  Introduction  Architecture Overview – How ConvNet Works  ConvNet Layers – Convolutional Layer – Pooling Layer – Normalization Layer (ReLU) – Fully-Connected Layer  Demos & Parameters – Hyper Parameters – Mnist dataset on AWS – CIFAR-10 ConvNetJS  Case Studies EECS6980:006 Social Network Analysis
  • 52-55. Pages 52-55 Rectified Linear Units (ReLUs) (step-by-step figures: ReLU is applied element-wise to the 7 X 7 feature map, replacing every negative value with 0 and passing positive values through unchanged, e.g. -0.11 becomes 0 while 0.77 stays 0.77)
  • 56. Page  56 ReLU layer (figure: all three 7 X 7 feature maps after ReLU; every negative entry has become 0)
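ReLU itself is a one-liner. A minimal NumPy sketch applied to the top-left corner of the 7 X 7 feature map shown on the previous slides (values copied from the figure):

```python
import numpy as np

def relu(x):
    # Element-wise: negative values become 0, positive values pass through.
    return np.maximum(0, x)

corner = np.array([[ 0.77, -0.11,  0.11],
                   [-0.11,  1.00, -0.11],
                   [ 0.11, -0.11,  1.00]])
print(relu(corner))
# [[0.77 0.   0.11]
#  [0.   1.   0.  ]
#  [0.11 0.   1.  ]]
```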
  • 57. Page  57 Agenda  Introduction  Architecture Overview – How ConvNet Works  ConvNet Layers – Convolutional Layer – Pooling Layer – Normalization Layer (ReLU) – Fully-Connected Layer  Demos & Parameters – Hyper Parameters – Mnist dataset on AWS – CIFAR-10 ConvNetJS  Case Studies EECS6980:006 Social Network Analysis
  • 58. Page  58 Pooling Layer  The pooling layer down-samples the previous layer's feature map.  Its function is to progressively reduce the spatial size of the representation, which reduces the number of parameters and the amount of computation in the network.  The pooling layer most often uses the max operation to perform this downsampling. EECS6980:006 Social Network Analysis
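A minimal NumPy sketch of this max-pooling step (names are my own; the 2 X 2 window and stride 2 match the settings used on the following slides). Windows that run past the border of a 7 X 7 map simply use whatever values remain, which is why the 7 X 7 rectified map shrinks to 4 X 4 rather than 3 X 3:

```python
import numpy as np

def max_pool(fmap, size=2, stride=2):
    """Keep only the largest value inside each (possibly truncated) window."""
    H, W = fmap.shape
    out_h = -(-H // stride)   # ceiling division: 7 -> 4
    out_w = -(-W // stride)
    pooled = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = fmap[i*stride:i*stride+size, j*stride:j*stride+size]
            pooled[i, j] = window.max()
    return pooled

# Quick check on a random 7x7 map standing in for a rectified feature map.
demo = np.random.default_rng(0).uniform(0, 1, (7, 7))
print(max_pool(demo).shape)   # (4, 4)
```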
  • 59-64. Pages 59-64 Pooling  Pooling Filter Size = 2 X 2, Stride = 2 (step-by-step figures: the 2 X 2 max-pooling window moves across the 7 X 7 rectified feature map in steps of 2, keeping only the largest value in each window and producing a 4 X 4 pooled map)
  • 65. Page  65 Pooling (figure: each of the three 7 X 7 rectified feature maps is reduced to a 4 X 4 pooled map)
  • 66. Page  66 Layers get stacked (figure: the 9 X 9 input image passes through the convolution, ReLU, and pooling layers, ending in the three 4 X 4 pooled feature maps)
  • 67. Page  67 Layers Get Stacked - Example  A 224 X 224 X 3 input image becomes 224 X 224 X 64 after convolution with 64 filters, and 112 X 112 X 64 after pooling (downsampling)
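The shape arithmetic on this slide can be reproduced with any deep-learning framework. A minimal PyTorch sketch (PyTorch is not used in the deck; the layer settings are illustrative, with padding=1 chosen so the convolution keeps the 224 X 224 spatial size):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
pool = nn.MaxPool2d(kernel_size=2, stride=2)

x = torch.randn(1, 3, 224, 224)   # one 224x224 RGB image (NCHW layout)
x = torch.relu(conv(x))           # -> (1, 64, 224, 224): 64 feature maps
x = pool(x)                       # -> (1, 64, 112, 112): spatially halved
print(x.shape)
```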
  • 68. Page  68 Deep stacking (figure: the convolution, ReLU, and pooling layers are repeated, shrinking the three feature maps further, down to 2 X 2 each)
  • 69. Page  69 Agenda  Introduction  Architecture Overview – How ConvNet Works  ConvNet Layers – Convolutional Layer – Pooling Layer – Normalization Layer (ReLU) – Fully-Connected Layer  Demos & Parameters – Hyper Parameters – Mnist dataset on AWS – CIFAR-10 ConvNetJS  Case Studies EECS6980:006 Social Network Analysis
  • 70. Page  70 Fully connected layer  Fully connected layers are the normal flat feed-forward neural network layers.  These layers may have a non-linear activation function or a softmax activation in order to predict classes.  To compute the output, we simply re-arrange (flatten) the pooled output matrices into a single 1-D array. (Figure: the 2 X 2 pooled maps flattened into a 12-element vector.)
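A sketch of that re-arrangement step (NumPy; the 2 X 2 pooled maps are read off the figure, so treat the exact values as illustrative):

```python
import numpy as np

# Three 2x2 pooled feature maps, as on the slide.
maps = [np.array([[1.00, 0.55], [0.55, 1.00]]),
        np.array([[0.55, 1.00], [1.00, 0.55]]),
        np.array([[1.00, 0.55], [0.55, 0.55]])]

# Flatten each map and concatenate into one 1-D feature vector.
features = np.concatenate([m.ravel() for m in maps])
print(features.shape)   # (12,): the input to the fully connected layer
```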
  • 71. Page  71 Fully connected layer  A weighted sum of the inputs (the sum of input times weight) at each output node determines the final prediction for each class, X or O. (Figure: the flattened feature vector connected to the two output nodes.)
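The prediction step is just a matrix product. A minimal sketch with made-up weights (in a trained network these weights are learned; here they are random purely for illustration), followed by a softmax over the two classes X and O:

```python
import numpy as np

features = np.array([1.00, 0.55, 0.55, 1.00, 0.55, 1.00,
                     1.00, 0.55, 1.00, 0.55, 0.55, 0.55])   # flattened maps

rng = np.random.default_rng(0)
weights = rng.uniform(-1, 1, size=(12, 2))   # one column of weights per class

scores = features @ weights                  # weighted sum at each output node
probs = np.exp(scores) / np.exp(scores).sum()
print(dict(zip(["X", "O"], probs)))          # class probabilities
```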
  • 72. Page  72 Putting it all together (figure: the 9 X 9 input image flows through the stacked convolution, ReLU, pooling, and fully-connected layers to produce the two class scores, X and O)
  • 73. Page  73 Agenda  Introduction  Architecture Overview – How ConvNet Works  ConvNet Layers – Convolutional Layer – Pooling Layer – Normalization Layer (ReLU) – Fully-Connected Layer  Parameters & Demos – Hyper Parameters – Mnist dataset on AWS – CIFAR-10 ConvNetJS  Case Studies EECS6980:006 Social Network Analysis
  • 74. Page  74 Hyperparameters (knobs)  Convolution – Filter Size – Number of Filters – Padding – Stride  Pooling – Window Size – Stride  Fully Connected – Number of neurons
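To get a feel for how these knobs interact, the page-48 size formula can be evaluated for a few settings. A small sketch (the specific combinations below are illustrative, not from the deck):

```python
def out_size(W, F, P, S):
    # 1 + (W - F + 2P) / S, the formula from page 48
    # (assumes (W - F + 2P) divides evenly by S)
    return 1 + (W - F + 2 * P) // S

for F, P, S in [(3, 0, 1), (3, 1, 1), (3, 0, 2), (5, 0, 2)]:
    n = out_size(9, F, P, S)
    print(f"9x9 input, filter {F}x{F}, padding {P}, stride {S} -> {n}x{n}")
# 7x7, 9x9 (padding preserves size), 4x4, 3x3
```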
  • 75. Page  75 Agenda  Introduction  Architecture Overview – How ConvNet Works  ConvNet Layers – Convolutional Layer – Pooling Layer – Normalization Layer (ReLU) – Fully-Connected Layer  Parameters & Demos – Hyper Parameters – Mnist dataset on AWS – CIFAR-10 ConvNetJS  Case Studies EECS6980:006 Social Network Analysis
  • 76. Page  76 Agenda  Introduction  Architecture Overview – How ConvNet Works  ConvNet Layers – Convolutional Layer – Pooling Layer – Normalization Layer (ReLU) – Fully-Connected Layer  Parameters & Demos – Hyper Parameters – Mnist dataset on AWS – CIFAR-10 ConvNetJS  Case Studies EECS6980:006 Social Network Analysis
  • 77. Page  77 Case Studies  LeNet – 1998  AlexNet – 2012  ZFNet – 2013  VGG – 2014  GoogLeNet – 2014  ResNet – 2015 EECS6980:006 Social Network Analysis
  • 78. Page  78 References  Karpathy, A. (n.d.). CS231n Convolutional Neural Networks for Visual Recognition. Retrieved from http://cs231n.github.io/convolutional-networks/#overview  Rohrer, B. (n.d.). How do Convolutional Neural Networks work? Retrieved from http://brohrer.github.io/how_convolutional_neural_networks_work.html  Brownlee, J. (n.d.). Crash Course in Convolutional Neural Networks for Machine Learning. Retrieved from http://machinelearningmastery.com/crash-course-convolutional-neural-networks/  Lidinwise (n.d.). The revolution of depth. Retrieved from https://medium.com/@Lidinwise/the-revolution-of-depth-facf174924f5#.8or5c77ss  Nervana. (n.d.). Tutorial: Convolutional neural networks. Retrieved from https://www.nervanasys.com/convolutional-neural-networks/ EECS6980:006 Social Network Analysis
  • 79. Page  79 Questions EECS6980:006 Social Network Analysis
  • 80. Page  80 Thank you!! EECS6980:006 Social Network Analysis