1. CONVOLUTION OPERATION
A convolution operation acts on all the pixel
values within its kernel's receptive field,
producing a single value by multiplying the
kernel weights with the pixel values
elementwise, summing the products, and adding
a bias term to the result. Without padding,
this also reduces the spatial dimensions of
the input matrix.
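The operation described above can be sketched in NumPy (an illustrative example, not from the slides; the function name `conv2d` and the input sizes are my own choices):

```python
import numpy as np

def conv2d(image, kernel, bias=0.0):
    """Valid cross-correlation: slide the kernel over the image and, at each
    position, multiply elementwise, sum the products, and add a bias."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel) + bias
    return out

image = np.arange(16, dtype=float).reshape(4, 4)   # 4x4 input
kernel = np.ones((3, 3)) / 9.0                     # 3x3 averaging kernel
result = conv2d(image, kernel)
print(result.shape)  # (2, 2): the 4x4 input shrinks to 2x2
```

Note how each output value condenses an entire 3x3 receptive field into a single number, and how the output is smaller than the input.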
3. PARAMETER SHARING
• Convolutional Neural Networks use two related
techniques known as parameter sharing and
parameter tying. Parameter sharing means that
all neurons in a particular feature map share
the same weights. This greatly reduces the
number of parameters in the whole network,
making it computationally cheap.
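A quick back-of-the-envelope comparison shows the savings (the sizes below are illustrative assumptions, not from the slides):

```python
in_h, in_w = 32, 32                         # assumed input feature map size
k = 3                                       # assumed 3x3 kernel
out_h, out_w = in_h - k + 1, in_w - k + 1   # valid convolution output: 30x30

# With parameter sharing: one kernel + one bias serve the whole feature map.
shared_params = k * k + 1

# Without sharing: a separate kernel + bias at every output position.
unshared_params = out_h * out_w * (k * k + 1)

print(shared_params)    # 10
print(unshared_params)  # 9000
```

Sharing a single 3x3 kernel across the map needs 10 parameters instead of 9,000, and the gap grows with the input size.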
5. EQUIVARIANT REPRESENTATION
• CNNs are famously equivariant with respect to
translation: translating the input to a
convolutional layer translates its output by
the same amount.
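This property can be checked numerically. The sketch below (my own illustrative example, with a minimal valid cross-correlation written inline) shifts a single bright pixel and verifies that the convolution output shifts by the same amount:

```python
import numpy as np

def conv_valid(x, k):
    # Minimal valid cross-correlation (no padding, stride 1).
    kh, kw = k.shape
    return np.array([[np.sum(x[i:i + kh, j:j + kw] * k)
                      for j in range(x.shape[1] - kw + 1)]
                     for i in range(x.shape[0] - kh + 1)])

kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
x = np.zeros((6, 6))
x[1, 1] = 1.0                            # a single bright pixel
x_shifted = np.roll(x, shift=2, axis=1)  # translate the input 2 px right

out = conv_valid(x, kernel)
out_shifted = conv_valid(x_shifted, kernel)

# Convolving the shifted input gives the shifted output
# (exactly, as long as the pixel stays away from the borders).
print(np.allclose(np.roll(out, 2, axis=1), out_shifted))  # True
```

Equivariance (the output moves with the input) is distinct from invariance (the output does not change); pooling layers are what push CNNs toward the latter.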
7. CONVOLUTION OPERATION PADDING
• Padding is a technique used to preserve the
spatial dimensions of a feature map after a
convolution operation. It involves adding
extra pixels (typically zeros) around the
border of the input feature map before
convolving.
9. STRIDE
• The number of pixels by which the filter
shifts over the input matrix is known as the
stride. When the stride is 1, we move the
filter 1 pixel at a time; when the stride is
2, we move the filter 2 pixels at a time, and
so on. Larger strides produce smaller outputs.
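The effect of stride on output size follows the standard formula floor((n + 2p - k) / s) + 1; a small sketch (function name and sizes are my own choices for illustration):

```python
def conv_output_size(n, k, p, s):
    """Output size of a convolution: floor((n + 2p - k) / s) + 1,
    where n = input size, k = kernel size, p = padding, s = stride."""
    return (n + 2 * p - k) // s + 1

print(conv_output_size(7, 3, 0, 1))  # 5: stride 1 moves the filter 1 px at a time
print(conv_output_size(7, 3, 0, 2))  # 3: stride 2 skips every other position
```

Doubling the stride roughly halves each spatial dimension, which is why strided convolutions are sometimes used in place of pooling for downsampling.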
10. RELU
• The convolution operation itself is linear,
but the patterns in real images are highly
non-linear, which makes them difficult for a
purely linear model to predict correctly. The
ReLU activation function is applied after
convolution to introduce non-linearity into
the network: it replaces negative activations
with zero, making it possible to learn
complex mappings.
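ReLU can be sketched in one line (an illustrative example; the sample feature map values are my own):

```python
import numpy as np

def relu(x):
    # ReLU: keep positive activations, zero out negative ones.
    return np.maximum(0, x)

feature_map = np.array([[-2.0, 1.5],
                        [0.0, -0.5]])
print(relu(feature_map))
# [[0.  1.5]
#  [0.  0. ]]
```

Because it is cheap to compute and does not saturate for positive inputs, ReLU is the default activation in most CNN architectures.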