Convolution Neural Network
By Amit Kushwaha
Presentation Covers
• The basic learning entity in the network - Perceptron
• Important Aspects of Deep Learning

Who am I
Machine Learning Engineer @
Artificial Neural Net (ANN) - a Biological Inspiration
Perceptron
The elementary entity of a neural network - the most basic form of neural network which can also learn.
Perceptron
A Basic Linear discriminant Classifier
How a Perceptron Works
The inputs (4, 2, -2) are multiplied by their weights (0.3, -0.5, 0.15) and summed before being passed to the activation function:
ϴ = (4 * 0.3) + (2 * (-0.5)) + ((-2) * 0.15) = -0.1
The activation function then maps ϴ to the perceptron's output (0 in this example).
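A minimal sketch of this computation in Python (the numbers are the ones from the slide; the step threshold at 0 is an assumption, since the slide does not spell out the activation):

```python
inputs  = [4, 2, -2]          # input values from the slide
weights = [0.3, -0.5, 0.15]   # corresponding weights

# Weighted sum fed into the activation function
theta = sum(x * w for x, w in zip(inputs, weights))   # -> -0.1

# A simple step activation (threshold at 0 is an assumption)
output = 1 if theta > 0 else 0
print(theta, output)          # -0.1 0
```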
Activation Functions
Training a Perceptron
Let's train a Perceptron to mimic this pattern.

Training Set
x  y  z | T(out)
0  0  1 |   0
1  1  1 |   1
1  0  1 |   1
0  1  1 |   0
Training a Perceptron
Perceptron Model: inputs x, y, z and a constant bias input 1 are weighted by W1, W2, W3, W4, summed into ϴ, and passed through the activation function to produce the output.
Training a Perceptron
Training Rules:
Wi = Wi + ∆Wi
∆Wi = -η * (perceptron output - target output) * xi
η is the learning rate for the perceptron.
(The model is the same as above: inputs x, y, z and bias 1, weights W1-W4, activation ϴ, output.)
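As a hedged sketch, the update rule can be written directly in Python (η = 1 here, matching the worked example later in the deck; the function and variable names are illustrative, not from the slides):

```python
def perceptron_update(weights, x, target, predicted, lr=1.0):
    """Apply Wi = Wi + dWi with dWi = -lr * (predicted - target) * xi."""
    return [w - lr * (predicted - target) * xi for w, xi in zip(weights, x)]

# Example: all-zero weights, input (1, 1, 1, bias 1), target 1 but predicted 0
print(perceptron_update([0, 0, 0, 0], [1, 1, 1, 1], target=1, predicted=0))
# -> [1.0, 1.0, 1.0, 1.0]
```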
Training a Perceptron
Let's learn wi such that it minimizes the squared error. On simplifying, this yields the weight-update rule introduced above (see the derivation sketch below).
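A hedged reconstruction of that simplification, assuming the slide followed the standard delta-rule derivation (t is the target output, o the perceptron output, and x_i the i-th input of training example d):

$$
E(\vec{w}) = \frac{1}{2}\sum_{d}(t_d - o_d)^2,\qquad
\frac{\partial E}{\partial w_i} = -\sum_{d}(t_d - o_d)\,x_{i,d},\qquad
\Delta w_i = -\eta\,\frac{\partial E}{\partial w_i} = \eta\sum_{d}(t_d - o_d)\,x_{i,d}
$$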
Training a Perceptron
Assign random values to the weight vector [W1, W2, W3, W4]. The inputs are x, y, z plus a constant bias input of 1.

Training set (the four patterns are presented twice, once in epoch 1 and once in epoch 2):

x  y  z  bias | T(out)
0  0  1  1    |   0
1  1  1  1    |   1
1  0  1  1    |   1
0  1  1  1    |   0

Update rule:
Wi = Wi + ∆Wi
∆Wi = -η * [P(out) - T(out)] * xi
η is the learning rate for the perceptron.
Training a Perceptron - epochs 1 and 2, pattern by pattern
(η = 1; weights start at [0, 0, 0, 0]; each row applies Wi = Wi + ∆Wi with ∆Wi = -η * [P(out) - T(out)] * xi)

          Training Set     Weights          P(out)  T(out)   ∆Weights
          x  y  z  bias    w1  w2  w3  w4                    ∆w1 ∆w2 ∆w3 ∆w4
epoch 1   0  0  1  1        0   0   0   0     0       0       0   0   0   0
          1  1  1  1        0   0   0   0     0       1       1   1   1   1
          1  0  1  1        1   1   1   1     1       1       0   0   0   0
          0  1  1  1        1   1   1   1     1       0       0  -1  -1  -1
epoch 2   0  0  1  1        1   0   0   0     0       0       0   0   0   0
          1  1  1  1        1   0   0   0     1       1       0   0   0   0
          1  0  1  1        1   0   0   0     1       1       0   0   0   0
          0  1  1  1        1   0   0   0     0       0       0   0   0   0

The only corrections occur at the second and fourth patterns of epoch 1; after them the weight vector is [1, 0, 0, 0], and no further corrections are needed during epoch 2.
Training a Perceptron - epochs 3 and 4
The walk-through continues for two more passes over the same training set (epoch 3 and epoch 4), applying the same rule Wi = Wi + ∆Wi with ∆Wi = -η * [P(out) - T(out)] * xi, after which the weight vector settles at [w1, w2, w3, w4] = [1, 0, 0, 0] and P(out) matches T(out) for every pattern.
Training a Perceptron - result
Final weights: [w1, w2, w3, w4] = [1, 0, 0, 0]. In the trained perceptron only x carries weight 1 (y, z and the bias carry weight 0), so the output simply follows x.

x  y  z  bias | P(out) | T(out)
0  0  1  1    |   0    |   0
1  1  1  1    |   1    |   1
1  0  1  1    |   1    |   1
0  1  1  1    |   0    |   0

In epoch 3 the 'loss' is 0, so we can assume that the Perceptron has learnt the pattern.
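A minimal end-to-end sketch of this training loop in Python (η = 1, zero-initialized weights, and a step activation that fires when the weighted sum exceeds 0 are assumptions; with them the weights converge to [1, 0, 0, 0] and the loss drops to 0, as in the walk-through):

```python
# Hedged sketch of the perceptron training walk-through above.
training_set = [
    # (x, y, z, bias) -> target
    ([0, 0, 1, 1], 0),
    ([1, 1, 1, 1], 1),
    ([1, 0, 1, 1], 1),
    ([0, 1, 1, 1], 0),
]

weights = [0, 0, 0, 0]
lr = 1  # learning rate (eta)

def predict(w, x):
    theta = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if theta > 0 else 0   # step activation (assumed threshold)

for epoch in range(1, 5):          # epochs 1-4, as in the slides
    loss = 0
    for x, target in training_set:
        p = predict(weights, x)
        delta = [-lr * (p - target) * xi for xi in x]   # dWi = -eta*[P(out)-T(out)]*xi
        weights = [w + d for w, d in zip(weights, delta)]
        loss += (target - p) ** 2
    print(f"epoch {epoch}: weights={weights} loss={loss}")
```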
We trained our Perceptron Model
on a linearly separable distribution.
Limited Functionality of a Linear Hyperplane
It cannot generalize on data distributed like this (data that is not linearly separable).
Neural Networks
It's as if perceptrons are put together in a defined fashion.
Two-Layered Neural Network
These arrays of perceptrons form the neural nets: Input 1 and Input 2 feed the hidden units h1 and h2, whose activations feed the Output.
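A minimal sketch of such a two-layer network's forward pass (the weights and the sigmoid activation are illustrative assumptions, not values from the slide):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Illustrative weights: 2 inputs -> 2 hidden units (h1, h2) -> 1 output
W_hidden = np.array([[0.5, -0.4],
                     [0.3,  0.8]])   # shape (2 hidden, 2 inputs)
W_out = np.array([0.7, -0.2])        # shape (2 hidden,)

x = np.array([1.0, 0.0])             # Input 1, Input 2
h = sigmoid(W_hidden @ x)            # hidden activations h1, h2
y = sigmoid(W_out @ h)               # network output
print(h, y)
```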
Convolution Neural Network
• CNNs are Neural Networks that are sparsely (not fully)
connected on the input layer.
• This makes each neuron in the following layer responsible
for only part of the input, which is valuable for recognizing
parts of images, for example.
Convolutional Neural Network
•Convolutional Neural Networks are very similar to ordinary Neural
Networks
•The architecture of a CNN is designed to take advantage of the 2D
structure of an input image
What is an Image?
• An image is a matrix of pixel values
• Believe me, an image is nothing more than this.
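As a small illustration, a made-up 4x4 grayscale image is just such a matrix (a real RGB image would simply be a height x width x 3 matrix):

```python
import numpy as np

# A tiny 4x4 grayscale "image": each entry is a pixel intensity in 0-255
image = np.array([
    [  0,  50, 100, 150],
    [ 50, 100, 150, 200],
    [100, 150, 200, 250],
    [150, 200, 250, 255],
], dtype=np.uint8)

print(image.shape)   # (4, 4) - nothing more than a matrix of pixel values
```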
CNN aka Deep Neural Network
1.Convolution
2.Activation Functions (ReLU)
3.Pooling or Sub Sampling
4.Classification (Fully Connected Layer)
1. What is Convolution
A filter is slid across the image; at each position the overlapping values are multiplied and summed to produce one value of the feature map.
This is Convolution. Yeah! That's it.
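A minimal sketch of that sliding-window operation in NumPy (strictly speaking this computes cross-correlation, which is what most CNN libraries implement under the name "convolution"; the image and filter values below are made up for illustration):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1), as used in CNNs."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # multiply the overlapping patch by the filter and sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]])
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])
print(conv2d(image, kernel))   # a 3x3 feature map
```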
2. Activation Functions (ReLU)
ReLU
In a CNN, the ReLU activation is preferred over other activation functions because of its inherent simplicity.
ReLU advantages
ReLU enables models to attain non-linearity in the decision boundary.
• Greatly accelerates the convergence of SGD; it is argued that this is due to its linear, non-saturating form.
• Avoids expensive computations.
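ReLU itself is just a threshold at zero, which is why it is so cheap to compute (a one-line sketch):

```python
import numpy as np

def relu(x):
    # max(0, x), applied element-wise: negative activations are clipped to zero
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))   # [0.  0.  0.  1.5 3. ]
```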
3. Max-Pooling
Pooling in Action
Spatial Pooling (also called subsampling or downsampling) reduces the
dimensionality of each feature map but retains the most important information.
Spatial Pooling can be of different types: Max, Average, Sum etc.
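A minimal sketch of 2x2 max pooling with stride 2 in NumPy (the window size and stride are common defaults assumed here, not values from the slide):

```python
import numpy as np

def max_pool2d(feature_map, size=2, stride=2):
    """Keep only the maximum value in each size x size window."""
    h, w = feature_map.shape
    oh, ow = (h - size) // stride + 1, (w - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = feature_map[i*stride:i*stride+size, j*stride:j*stride+size]
            out[i, j] = window.max()
    return out

fmap = np.array([[1, 3, 2, 1],
                 [4, 6, 5, 2],
                 [7, 8, 9, 4],
                 [3, 2, 1, 0]])
print(max_pool2d(fmap))   # [[6. 5.]
                          #  [8. 9.]]
```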
4. Fully Connected Layer
The purpose of the Fully Connected layer is to use the high-level features for classifying the input image into various classes based on the training dataset.
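A hedged sketch of such a layer: the pooled feature maps are flattened into a vector, multiplied by a weight matrix, and pushed through a softmax to give class probabilities (the sizes and random weights are purely illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
features = rng.random(8)            # flattened high-level features (illustrative size)
W = rng.standard_normal((4, 8))     # 4 classes x 8 features
b = np.zeros(4)

probabilities = softmax(W @ features + b)
print(probabilities, probabilities.sum())   # four class probabilities summing to 1
```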
Connecting dots…
1. Initialize the layers with random values.
2. The network takes a training image as input, goes through the forward propagation step (convolution, ReLU and pooling operations, along with forward propagation in the Fully Connected layer) and finds the output probabilities for each class.
3. Let's say the output probabilities for a boat image are [0.2, 0.4, 0.1, 0.3], while the target probabilities are [0, 0, 1, 0].
4. Since the weights are randomly assigned for the first training example, the output probabilities are also random.
5. Calculate the total error at the output layer (summed over all 4 classes; a numeric sketch follows this list):
   Total Error = ∑ ½ (target probability – output probability)²
6. Use back-propagation to calculate the gradients of the error with respect to all weights in the network, and use gradient descent to update all filter values / weights and parameter values to minimize the output error.
7. Repeat steps 2-6 with all images in the training set.
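A tiny sketch of step 5's total error for the example probabilities in step 3 (the target vector [0, 0, 1, 0] and outputs [0.2, 0.4, 0.1, 0.3] come from the list above):

```python
# Total Error = sum over classes of 1/2 * (target probability - output probability)^2
target = [0.0, 0.0, 1.0, 0.0]
output = [0.2, 0.4, 0.1, 0.3]

total_error = sum(0.5 * (t - o) ** 2 for t, o in zip(target, output))
print(total_error)   # 0.5 * (0.04 + 0.16 + 0.81 + 0.09) = 0.55
```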
Give me the code !
Find the code at
https://github.com/yardstick17/ConnectingDots
Perceptron Training Snapshot CNN training Snapshot
Thanks
References
@yardstick17
Find me on web
