Introduction to Neural Networks (undergraduate course), Lecture 9 of 9

Randa Elanwar, Researcher at the Electronics Research Institute
Neural Networks
Dr. Randa Elanwar
Lecture 9
Lecture Content
• Mapping networks:
– Back-propagation neural network
– Self-organizing map
– Counter propagation network
• Spatiotemporal Network
• Stochastic Networks
– Boltzmann machine
• Neurocognition network
Neural Networks, Dr. Randa Elanwar
Mapping networks
• When the problem is nonlinear and no straight line can
ever separate the samples in feature space, we need
multilayer perceptrons (with hidden layer(s)) to achieve
nonlinearity.
• The idea is that we map/transform/translate our data into
another feature space where it is linearly separable; hence
we call them mapping networks.
• We will discuss three types of mapping networks: the
back-propagation neural network, the self-organizing map, and
the counter-propagation network.
Mapping networks
• Networks without hidden units are very limited in the input-output
mappings they can model.
– More layers of linear units do not help: the result is still linear.
– Fixed output non-linearities are not enough.
• We need multiple layers of adaptive non-linear hidden units.
• But how can we train such nets?
– We need an efficient way of adapting all the weights, not just those
of the last layer, i.e., learning the weights going into the hidden
units. This is hard.
– Why?
– Because nobody tells us directly what the hidden units should do.
– Solution: this can be achieved using 'backpropagation' learning.
Learning with hidden layers
• Mathematically, the learning process is an optimization problem. We
initialize the NN system with some parameters (weights) and use
known examples to find the optimal values of those weights.
• Generally, the solution of an optimization problem is the
parameter value that leads to the minimum value of an optimization
function.
[Figure: an optimization function G(t) plotted against t, with its minimum marked.]

In our case, the optimization function that we need to minimize
to get the final weights is the error function:

E = y_des - y_act
E = y_des - f(W.X)

To find the minimum mathematically, we differentiate the error
function with respect to the parameter we need to find, which we
call W, i.e., we compute dE/dW.
Learning with hidden layers
• We define the weight change from the gradient: Δw = η·δ·X, where η is
the learning rate, δ is the error term, and X is the input.
• If δ is positive, the current values of W make the derivative
positive, which is wrong. We want the derivative to be 0 (a minimum
point), so we must move in the opposite direction of the gradient
(subtract). The opposite is also true.
• If δ = 0, the current values of W make the derivative 0, which is
right. These weights are the optimal values (the solution), Δw = 0, and
the algorithm stops. The network is now trained and ready for use.
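As a concrete illustration, here is a minimal sketch in Python (not from the lecture; the function name, data, and learning-rate value are illustrative). It applies the update Δw = η·δ·X sample by sample for a single linear neuron until the weights settle:

```python
import numpy as np

def train_neuron(X, y_des, eta=0.1, epochs=500):
    """Gradient-descent sketch for a single linear neuron.

    X: (n_samples, n_features) inputs; y_des: desired outputs.
    Applies the delta rule w += eta * delta * x per sample.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, yd in zip(X, y_des):
            y_act = w @ x            # actual output f(W.X), linear here
            delta = yd - y_act       # error term
            w += eta * delta * x     # move opposite to the error gradient
    return w

# Noiseless samples of y = 2*x1 + 3*x2; the rule recovers the weights.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = X @ np.array([2.0, 3.0])
w = train_neuron(X, y)
```

When δ shrinks toward 0 for every sample, the updates vanish and the weights stop changing, which is exactly the stopping condition described above.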
The back propagation algorithm
• The backpropagation learning algorithm can be divided into two
phases: propagation and weight update.
Phase 1: Propagation
1. Forward propagation of a training pattern's input through the
neural network in order to generate the propagation's output
activations (y_act).
2. Backward propagation of the output activations through the
neural network, using the training pattern's target (y_des), in
order to generate the deltas (δ) of all output and hidden neurons.
Phase 2: Weight update
For each weight, follow these steps:
1. Multiply its output delta (δ), its input activation (x), and the
learning rate (η) to get the gradient of the weight (Δw).
2. Move the weight in the opposite direction of the gradient by
subtracting it from the weight.
- The sign of the gradient of a weight indicates the direction in
which the error increases; this is why the weight must be updated in
the opposite direction.
- Repeat phases 1 and 2 until the performance of the network is
satisfactory.
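The two phases can be sketched in a few lines of Python (an illustrative toy, not the lecture's code: a 2-2-1 sigmoid network on XOR, with bias terms omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative 2-2-1 network (bias terms omitted for brevity).
W1 = rng.normal(scale=0.5, size=(2, 2))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(1, 2))   # hidden -> output weights
eta = 0.3

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

def total_error():
    H = sigmoid(X @ W1.T)
    return float(((Y - sigmoid(H @ W2.T)) ** 2).sum())

loss_before = total_error()
for _ in range(2000):
    for x, yd in zip(X, Y):
        # Phase 1a: forward propagation -> y_act
        h = sigmoid(W1 @ x)
        y = sigmoid(W2 @ h)
        # Phase 1b: backward propagation of the deltas
        d_out = (yd - y) * y * (1 - y)
        d_hid = (W2.T @ d_out) * h * (1 - h)
        # Phase 2: update each weight opposite to its gradient
        W2 += eta * np.outer(d_out, h)
        W1 += eta * np.outer(d_hid, x)
loss_after = total_error()
```

Note that adding η·δ·x is the same as subtracting the gradient, because δ already carries the error's sign.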
Backpropagation Networks
• They are the nonlinear (mapping) neural networks using the
backpropagation supervised learning technique.
• Modes of learning of nonlinear nets:
• There are three modes of learning to choose from: on-line
(pattern), batch, and stochastic.
• In on-line and stochastic learning, each propagation is followed
immediately by a weight update.
• In batch learning, many propagations occur before updating the
weights.
• Batch learning requires more memory capacity, but on-line and
stochastic learning require more updates.
Backpropagation Networks
• On-line learning is used for dynamic environments that
provide a continuous stream of new patterns.
• Stochastic learning and batch learning both make use
of a training set of static patterns. Stochastic goes
through the data set in a random order in order to
reduce its chances of getting stuck in local minima.
• Stochastic learning is also much faster than batch
learning since weights are updated immediately after
each propagation. Yet batch learning yields a much
more stable descent to a local minimum, since each
update is based on all patterns.
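The difference between the modes can be shown with a hypothetical linear unit in Python (illustrative names and data; stochastic mode is simply the on-line loop run over a randomly shuffled order):

```python
import numpy as np

def online_epoch(w, X, Y, eta):
    """On-line (pattern) mode: a weight update after every propagation."""
    for x, yd in zip(X, Y):
        w = w + eta * (yd - w @ x) * x
    return w

def batch_epoch(w, X, Y, eta):
    """Batch mode: accumulate the gradient over all patterns, then
    perform a single update (more memory, fewer updates)."""
    grad = np.zeros_like(w)
    for x, yd in zip(X, Y):
        grad += (yd - w @ x) * x
    return w + eta * grad

# Stochastic mode would call online_epoch on a shuffled pattern order.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = X @ np.array([1.0, -2.0])
w = np.zeros(2)
for _ in range(200):
    w = batch_epoch(w, X, Y, 0.1)
```

Because the batch gradient averages over all patterns, each step is smoother; the on-line version takes noisier but more frequent steps.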
Backpropagation Networks
• Applications of supervised learning (Backpropagation NN)
include
• Pattern recognition
• Credit approval
• Target marketing
• Medical diagnosis
• Defective parts identification in manufacturing
• Crime zoning
• Treatment effectiveness analysis
• etc.
Self-organizing map
• We can also train networks where there is no teacher. This is called
unsupervised learning. The network learns a prototype based on the
distribution of patterns in the training data. Such networks allow us
to:
– Discover underlying structure of the data
– Encode or compress the data
– Transform the data
• Self-organizing maps (SOMs) are a data visualization technique
invented by Professor Teuvo Kohonen
– Also called Kohonen Networks, Competitive Learning, Winner-Take-All
Learning
– Generally reduces the dimensionality of the data through the use of
self-organizing neural networks
– Useful for data visualization; humans cannot visualize high dimensional
data so this is often a useful technique to make sense of large data sets
Self-organizing map
• SOM structure:
1. The weights of each neuron represent a class of
patterns: we have one neuron for each class.
2. The input pattern is presented to all neurons, and each
produces an output: a measure of the match between the
input pattern and the pattern stored by that neuron.
3. A competitive learning strategy selects the neuron with
the largest response.
4. A method of reinforcing the largest response.
Self-organizing map
• Unsupervised classification learning is based on clustering of the
input data. No a priori knowledge is assumed about an input's
membership in a particular class.
• Instead, gradually detected characteristics and the history of
training are used to assist the network in defining classes and
possible boundaries between them.
• Clustering is understood to be the grouping of similar objects
and separating of dissimilar ones.
• We discuss Kohonen's network, which classifies input vectors
into one of a specified number m of categories, according to
the clusters detected in the training set.
Kohonen’s Network
•The Kohonen network is a self-organising
network with the following
characteristics:
1. Neurons are arranged on a 2D grid
2. Inputs are sent to all neurons
3. There are no connections between
neurons
4. The output of a neuron j is the weighted sum (dot
product) of the x and w vectors, where x is the
input vector and w is the neuron's weight vector
5. There is no threshold or bias
6. Input values and weights are
normalized
Self-organizing map
Learning in Kohonen networks:
• Initially the weights in each neuron are random
• Input values are sent to all the neurons
• The outputs of each neuron are compared
• The “winner” is the neuron with the largest output value
• Having found the winner, the weights of the winning neuron are
adjusted
• Weights of neurons in a surrounding neighbourhood are also
adjusted
• As training progresses the neighbourhood gets smaller
• Weights are adjusted so that the winning neuron's weight vector (and those of its neighbours) move toward the input pattern
Self-organizing map
• The learning coefficient (alpha) starts with a value of 1 and
gradually reduces to 0
• This has the effect of making big changes to the weights initially,
but no changes at the end
• The weights are adjusted so that they more closely resemble
the input patterns
Applications of unsupervised learning (Kohonen’s NN) include
• Clustering
• Vector quantization
• Data compression
• Feature extraction
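The steps above can be sketched as follows (an illustrative Python toy, not the lecture's code; the winner update is commonly written w_new = w_old + alpha * (x - w_old), and neighbourhood updates are omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

def som_train(data, n_units=4, epochs=50):
    """Minimal Kohonen sketch: winner-take-all with alpha decaying
    from 1 to 0. Neighbourhood shrinking is omitted for brevity."""
    w = rng.random((n_units, data.shape[1]))      # random initial weights
    for epoch in range(epochs):
        alpha = 1.0 - epoch / epochs              # learning coefficient
        for x in data:
            # Winner = unit whose weight vector best matches the input
            winner = np.argmin(((w - x) ** 2).sum(axis=1))
            w[winner] += alpha * (x - w[winner])  # move winner toward x
    return w

# Two well-separated clusters; some units settle near each centre.
data = np.vstack([rng.normal(0.0, 0.05, size=(20, 2)),
                  rng.normal(1.0, 0.05, size=(20, 2))])
w = som_train(data)
```

The early, large alpha makes big changes to the winning weights; as alpha decays toward 0, the weights freeze near the cluster prototypes, exactly as described above.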
Counter propagation network
• The counterpropagation network (CPN) is a fast-learning
combination of unsupervised and supervised learning.
• Although this network uses linear neurons, it can learn nonlinear
functions by means of a hidden layer of competitive units.
• Moreover, the network is able to learn a function and its inverse
at the same time.
• However, to simplify things, we will only consider the
feedforward mechanism of the CPN.
Counter propagation network
• Training:
1. Randomly select a vector pair (x, y) from the training set.
2. Measure the similarity between the input vector and the
weight vectors of the hidden-layer units.
3. In the hidden (competitive) layer, determine the unit with the
largest activation (the winner), i.e., the neuron whose weight
vector is most similar to the current input vector.
4. Adjust the winner's connection weights.
5. Repeat until each input pattern is consistently associated with
the same competitive unit.
Counter propagation network
• After the first phase of the training, each hidden-layer neuron is
associated with a subset of input vectors (class of patterns).
• In the second phase of the training, we adjust the weights in the
network’s output layer in such a way that, for any winning hidden-
layer unit, the network’s output is as close as possible to the desired
output for the winning unit’s associated input vectors.
• The idea is that when we later use the network to compute
functions, the output of the winning hidden-layer unit is 1, and the
output of all other hidden-layer units is 0.
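Both training phases can be sketched in Python (illustrative only; hidden weights are initialized to training samples, a common simplification, and the learning rates a and b are arbitrary choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)

def cpn_train(X, Y, n_hidden=2, epochs=100, a=0.1, b=0.1):
    """Feedforward counterpropagation sketch.

    The hidden (competitive) layer clusters the inputs; the output
    layer learns the desired output for each winning hidden unit.
    Both phases are run in the same loop here for brevity.
    """
    # Initialize hidden weights to distinct training samples
    # (requires len(X) >= n_hidden); a common simplification.
    Wh = X[rng.choice(len(X), n_hidden, replace=False)].astype(float)
    Wo = np.zeros((n_hidden, Y.shape[1]))
    for _ in range(epochs):
        for x, yd in zip(X, Y):
            win = np.argmin(((Wh - x) ** 2).sum(axis=1))  # winner
            Wh[win] += a * (x - Wh[win])   # move winner toward input
            Wo[win] += b * (yd - Wo[win])  # move output toward target
    return Wh, Wo

def cpn_predict(Wh, Wo, x):
    """The winning hidden unit outputs 1 and all others 0, so the
    network output is simply the winner's output weights."""
    return Wo[np.argmin(((Wh - x) ** 2).sum(axis=1))]

X = np.array([[0.0, 0.0], [1.0, 1.0]])
Y = np.array([[0.0], [1.0]])
Wh, Wo = cpn_train(X, Y)
```

The prediction step reflects the idea above: only the winning hidden unit contributes, so the output layer acts as a lookup of the learned target for that unit's input class.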
Spatiotemporal Networks
• A spatio-temporal neural net differs from other neural networks in two ways:
1. Neurons have recurrent links with different propagation delays
2. The state of the network depends not only on which nodes are firing, but
also on the relative firing times of nodes, i.e., the significance of a node
varies with time and depends on the firing state of other nodes.
•The use of recurrence and multiple links with variable propagation delays
provides a rich mechanism for feature extraction and pattern recognition:
1. Recurrent links enable nodes to integrate and differentiate inputs. I.e.,
detect features
2. multiple links with variable propagation delays between nodes serve as a
short-term memory.
Spatiotemporal Networks
• Applications:
• Problems such as speech recognition and time series prediction where the
input signal has an explicit temporal aspect.
• Tasks like image recognition do not have an explicit temporal aspect, but
can also be handled by converting static patterns into time-varying
(spatio-temporal) signals by scanning the image. This leads to a number of
significant advantages:
– The recognition system becomes 'shift invariant'
– The spatio-temporal approach captures the image geometry, since the local
spatial relationships in the image are expressed as local temporal variations in
the scanned input.
– Reduction of complexity (from 2D to 1D)
– The scanning approach allows a visual pattern recognition system to deal with
inputs of arbitrary extent (not only static fixed 2D pattern)
Stochastic neural networks
• Stochastic neural networks are a type of artificial neural
network, a tool of artificial intelligence. They are
built by introducing random variations into the network,
either by giving the network's neurons stochastic transfer
functions, or by giving them stochastic weights. This makes
them useful tools for optimization problems, since the
random fluctuations help the network escape from local minima.
• Stochastic neural networks that are built by using stochastic
transfer functions are often called Boltzmann machines.
• Stochastic neural networks have found applications in risk
management, oncology, bioinformatics, and other similar
fields
Stochastic Networks: Boltzmann machine
• The neurons are stochastic: at any time there is a probability
attached to whether the neuron fires.
• Used for solving constrained optimization problems.
• Typical Boltzmann Machine:
– Weights are fixed to represent the constraints of the problem and the
function to be optimized.
– The net seeks the solution by changing the activations of the units (0 or
1) based on a probability distribution and the effect that the change
would have on the energy function or consensus function for the net.
• May use either supervised or unsupervised learning.
• Learning in the Boltzmann Machine is accomplished using a
Simulated Annealing technique, which is stochastic in nature. This
reduces the probability of the net becoming trapped in a local
minimum that is not the global minimum.
Stochastic Networks: Boltzmann machine
• Learning characteristics:
– Each neuron fires with bipolar values.
– All connections are symmetric.
– In activation passing, the next neuron whose state we
wish to update is selected randomly.
– There is no self-feedback (no connections from a neuron
to itself)
Stochastic Networks: Boltzmann machine
• There are three phases in operation of the network:
– The clamped phase in which the input and output of visible
neurons are held fixed, while the hidden neurons are allowed
to vary.
– The free running phase in which only the inputs are held fixed
and other neurons are allowed to vary.
– The learning phase.
• These phases iterate till learning has created a
Boltzmann Machine which can be said to have learned
the input patterns and will converge to the learned
patterns when a noisy or incomplete pattern is
presented.
Stochastic Networks: Boltzmann machine
• For unsupervised learning, the initial weights of the net are
generally set to random values in a small range, e.g. -0.5 to +0.5.
• Then an input pattern is presented to the net and clamped to the
visible neurons.
• A hidden neuron is chosen at random and its state is flipped from
sj to -sj according to a certain probability distribution.
• Activation passing can continue until the hidden neurons
reach equilibrium.
• During the free running phase, after presentation of the input
patterns, all neurons can update their states.
• In the learning phase, weight changes depend on the difference
between the "real" distribution (neuron states) in the clamped phase
and the one that will (eventually) be produced by the machine in
free-running mode.
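The stochastic state update with annealing can be sketched as follows (illustrative Python; the weight matrix, annealing schedule, and flip probability 1/(1 + exp(dE/T)) follow the standard Boltzmann machine formulation rather than anything specific in the slides):

```python
import numpy as np

rng = np.random.default_rng(3)

def anneal(W, steps=2000, T0=5.0):
    """Stochastic unit updates with simulated annealing.

    W must be symmetric with a zero diagonal (no self-feedback).
    A randomly chosen bipolar unit flips with probability
    1 / (1 + exp(dE / T)), where dE is the energy change the flip
    would cause and the temperature T decreases over time.
    """
    n = W.shape[0]
    s = rng.choice([-1, 1], size=n)            # random bipolar start state
    for t in range(steps):
        T = T0 * (1.0 - t / steps) + 1e-3      # annealing schedule
        i = rng.integers(n)                    # pick one unit at random
        dE = 2.0 * s[i] * (W[i] @ s)           # energy change if s[i] flips
        if rng.random() < 1.0 / (1.0 + np.exp(np.clip(dE / T, -50, 50))):
            s[i] = -s[i]
    return s

# Two units whose connection "wants" them to agree: the minimum-energy
# states are (+1, +1) and (-1, -1), which annealing should reach.
W = np.array([[0.0, 1.0], [1.0, 0.0]])
s = anneal(W)
```

At high temperature the unit flips almost at random, letting the net escape poor states; as T falls, energy-increasing flips become rare and the state settles into a low-energy configuration.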
Stochastic Networks: Boltzmann machine
• For supervised learning the set of visible neurons is split into
input and output neurons, and the machine will be used to
associate an input pattern with an output pattern.
• During the clamped phase, the input and output patterns are
clamped to the appropriate units.
• The hidden neurons' activations can settle at various values.
• During the free running phase, only the input neurons are
clamped; both the output neurons and the hidden neurons
can pass activation around until the activations in the network
settle.
• The learning rule here is the same as before, but must be
modulated (multiplied) by the probability of the input
patterns.
Neurocognition network
• Neurocognitive networks are large-scale systems of
distributed and interconnected neuronal populations in
the central nervous system organized to perform
cognitive functions.
• Many computer scientists try to simulate human
cognition with computers. This line of research can be
roughly split into two types: research seeking to create
machines as adept as humans (or more so), and
research attempting to figure out the computational
basis of human cognition — that is, how the brain
actually carries out its computations. This latter branch
of research can be called computational modeling
(while the former is often called artificial intelligence or
AI).
تعريف بمدونة علماء مصر ومحاور التدريب على الكتابة للمدونين von Randa Elanwar
تعريف بمدونة علماء مصر ومحاور التدريب على الكتابة للمدونينتعريف بمدونة علماء مصر ومحاور التدريب على الكتابة للمدونين
تعريف بمدونة علماء مصر ومحاور التدريب على الكتابة للمدونين
Randa Elanwar269 views
Entrepreneurship_who_is_your_customer_(arabic)_7of7 von Randa Elanwar
Entrepreneurship_who_is_your_customer_(arabic)_7of7Entrepreneurship_who_is_your_customer_(arabic)_7of7
Entrepreneurship_who_is_your_customer_(arabic)_7of7
Randa Elanwar189 views
Entrepreneurship_who_is_your_customer_(arabic)_5of7 von Randa Elanwar
Entrepreneurship_who_is_your_customer_(arabic)_5of7Entrepreneurship_who_is_your_customer_(arabic)_5of7
Entrepreneurship_who_is_your_customer_(arabic)_5of7
Randa Elanwar148 views
Entrepreneurship_who_is_your_customer_(arabic)_4of7 von Randa Elanwar
Entrepreneurship_who_is_your_customer_(arabic)_4of7Entrepreneurship_who_is_your_customer_(arabic)_4of7
Entrepreneurship_who_is_your_customer_(arabic)_4of7
Randa Elanwar192 views
Entrepreneurship_who_is_your_customer_(arabic)_2of7 von Randa Elanwar
Entrepreneurship_who_is_your_customer_(arabic)_2of7Entrepreneurship_who_is_your_customer_(arabic)_2of7
Entrepreneurship_who_is_your_customer_(arabic)_2of7
Randa Elanwar173 views
يوميات طالب بدرجة مشرف (Part 19 of 20) von Randa Elanwar
يوميات طالب بدرجة مشرف (Part 19 of 20)يوميات طالب بدرجة مشرف (Part 19 of 20)
يوميات طالب بدرجة مشرف (Part 19 of 20)
Randa Elanwar251 views
يوميات طالب بدرجة مشرف (Part 18 of 20) von Randa Elanwar
يوميات طالب بدرجة مشرف (Part 18 of 20)يوميات طالب بدرجة مشرف (Part 18 of 20)
يوميات طالب بدرجة مشرف (Part 18 of 20)
Randa Elanwar138 views
يوميات طالب بدرجة مشرف (Part 17 of 20) von Randa Elanwar
يوميات طالب بدرجة مشرف (Part 17 of 20)يوميات طالب بدرجة مشرف (Part 17 of 20)
يوميات طالب بدرجة مشرف (Part 17 of 20)
Randa Elanwar128 views
يوميات طالب بدرجة مشرف (Part 16 of 20) von Randa Elanwar
يوميات طالب بدرجة مشرف (Part 16 of 20)يوميات طالب بدرجة مشرف (Part 16 of 20)
يوميات طالب بدرجة مشرف (Part 16 of 20)
Randa Elanwar179 views

Último

Drama KS5 Breakdown von
Drama KS5 BreakdownDrama KS5 Breakdown
Drama KS5 BreakdownWestHatch
71 views2 Folien
The Open Access Community Framework (OACF) 2023 (1).pptx von
The Open Access Community Framework (OACF) 2023 (1).pptxThe Open Access Community Framework (OACF) 2023 (1).pptx
The Open Access Community Framework (OACF) 2023 (1).pptxJisc
85 views7 Folien
discussion post.pdf von
discussion post.pdfdiscussion post.pdf
discussion post.pdfjessemercerail
120 views1 Folie
American Psychological Association 7th Edition.pptx von
American Psychological Association  7th Edition.pptxAmerican Psychological Association  7th Edition.pptx
American Psychological Association 7th Edition.pptxSamiullahAfridi4
82 views8 Folien
Material del tarjetero LEES Travesías.docx von
Material del tarjetero LEES Travesías.docxMaterial del tarjetero LEES Travesías.docx
Material del tarjetero LEES Travesías.docxNorberto Millán Muñoz
68 views9 Folien
Psychology KS5 von
Psychology KS5Psychology KS5
Psychology KS5WestHatch
77 views5 Folien

Último(20)

Drama KS5 Breakdown von WestHatch
Drama KS5 BreakdownDrama KS5 Breakdown
Drama KS5 Breakdown
WestHatch71 views
The Open Access Community Framework (OACF) 2023 (1).pptx von Jisc
The Open Access Community Framework (OACF) 2023 (1).pptxThe Open Access Community Framework (OACF) 2023 (1).pptx
The Open Access Community Framework (OACF) 2023 (1).pptx
Jisc85 views
American Psychological Association 7th Edition.pptx von SamiullahAfridi4
American Psychological Association  7th Edition.pptxAmerican Psychological Association  7th Edition.pptx
American Psychological Association 7th Edition.pptx
SamiullahAfridi482 views
Psychology KS5 von WestHatch
Psychology KS5Psychology KS5
Psychology KS5
WestHatch77 views
AI Tools for Business and Startups von Svetlin Nakov
AI Tools for Business and StartupsAI Tools for Business and Startups
AI Tools for Business and Startups
Svetlin Nakov101 views
Narration lesson plan.docx von TARIQ KHAN
Narration lesson plan.docxNarration lesson plan.docx
Narration lesson plan.docx
TARIQ KHAN104 views
Narration ppt.pptx von TARIQ KHAN
Narration  ppt.pptxNarration  ppt.pptx
Narration ppt.pptx
TARIQ KHAN119 views
Structure and Functions of Cell.pdf von Nithya Murugan
Structure and Functions of Cell.pdfStructure and Functions of Cell.pdf
Structure and Functions of Cell.pdf
Nithya Murugan368 views
Are we onboard yet University of Sussex.pptx von Jisc
Are we onboard yet University of Sussex.pptxAre we onboard yet University of Sussex.pptx
Are we onboard yet University of Sussex.pptx
Jisc77 views
The Accursed House by Émile Gaboriau von DivyaSheta
The Accursed House  by Émile GaboriauThe Accursed House  by Émile Gaboriau
The Accursed House by Émile Gaboriau
DivyaSheta158 views
The basics - information, data, technology and systems.pdf von JonathanCovena1
The basics - information, data, technology and systems.pdfThe basics - information, data, technology and systems.pdf
The basics - information, data, technology and systems.pdf
JonathanCovena188 views
UWP OA Week Presentation (1).pptx von Jisc
UWP OA Week Presentation (1).pptxUWP OA Week Presentation (1).pptx
UWP OA Week Presentation (1).pptx
Jisc74 views

Introduction to Neural Networks (undergraduate course), Lecture 9 of 9

  • 1. Neural Networks Dr. Randa Elanwar Lecture 9
  • 2. Lecture Content
    • Mapping networks:
      – Back-propagation neural network
      – Self-organizing map
      – Counter propagation network
    • Spatiotemporal networks
    • Stochastic networks:
      – Boltzmann machine
    • Neurocognition network
  • 3. Mapping networks
    • When the problem is nonlinear and no straight line could ever separate the samples in the feature space, we need multilayer perceptrons (with one or more hidden layers) to achieve nonlinearity.
    • The idea is that we map/transform/translate our data to another feature space that is linearly separable; hence we call them mapping networks.
    • We will discuss three types of mapping networks: the back-propagation neural network, the self-organizing map, and the counter propagation network.
  • 4. Mapping networks
    • Networks without hidden units are very limited in the input-output mappings they can model:
      – More layers of linear units do not help; the result is still linear.
      – Fixed output non-linearities are not enough.
    • We need multiple layers of adaptive non-linear hidden units. But how can we train such nets?
      – We need an efficient way of adapting all the weights, not just those of the last layer, i.e., learning the weights going into the hidden units. This is hard, because nobody tells us directly what the hidden units should do.
      – Solution: this can be achieved using 'backpropagation' learning.
  • 5. Learning with hidden layers
    • Mathematically, the learning process is an optimization problem. We initialize the NN system with some parameters (weights) and use known examples to find the optimal values of those weights.
    • Generally, solving an optimization problem means finding the parameter values that lead to the minimum value of an optimization function.
    • [Figure: an optimization function G(t) plotted against t, showing a minimum.]
    • In our case, the optimization function we need to minimize to get the final weights is the error function:
      E = y_des − y_act = y_des − f(W·X)
    • To find the minimum mathematically, we differentiate the error function with respect to the parameter we want to obtain, here W: ∂E/∂W.
  • 6. Learning with hidden layers
    • We define the weight change from the "gradient": Δw = η·δ·X, where η is the learning rate, δ is the error term, and X is the input.
    • If δ is positive, the current values of W make the derivative positive, which is wrong. We want the derivative to be zero (a minimum point), so we must move in the opposite direction of the gradient (subtract). The opposite case is also true.
    • If δ is zero, the current values of W make the derivative zero, which is right. These weights are the optimal values (the solution), Δw = 0, and the algorithm should stop. The network is now trained and ready for use.
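The update rule above can be made concrete with a minimal sketch (not from the lecture): a single linear neuron trained with Δw = η·δ·x on an illustrative toy data set; the function name, data, and learning rate are assumptions for demonstration only.

```python
import numpy as np

def train_neuron(X, y_des, eta=0.1, epochs=200):
    """Delta-rule training: dw = eta * delta * x, with delta = y_des - y_act."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, yd in zip(X, y_des):
            y_act = w @ x            # neuron output (identity activation)
            delta = yd - y_act       # error term
            w = w + eta * delta * x  # step opposite to the gradient
    return w

# Toy problem: targets generated by the weights [2, -1]
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y_des = np.array([2.0, -1.0, 1.0])
w = train_neuron(X, y_des)           # converges toward [2, -1]
```

As δ shrinks toward zero for every pattern, the updates vanish and the weights settle at the minimum of the error function, exactly as described on the slide.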
  • 7. The back propagation algorithm
    • The backpropagation learning algorithm can be divided into two phases: propagation and weight update.
    • Phase 1: Propagation
      1. Forward propagation of a training pattern's input through the neural network to generate the output activations (y_act).
      2. Backward propagation of the output activations through the neural network, using the training pattern's target (y_des), to generate the deltas (δ) of all output and hidden neurons.
    • Phase 2: Weight update. For each weight:
      1. Multiply its output delta (δ), its input activation (x), and the learning rate (η) to get the weight's gradient step (Δw).
      2. Move the weight in the opposite direction of the gradient by subtracting the step from it.
    • The sign of a weight's gradient indicates where the error is increasing; this is why the weight must be updated in the opposite direction.
    • Repeat phases 1 and 2 until the performance of the network is satisfactory.
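The two phases can be sketched for a tiny one-hidden-layer network; this is an illustrative implementation with assumed sigmoid activations, an assumed bias-as-extra-input convention, and the classic XOR task (which no network without hidden units can solve), not the lecture's own code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(W1, W2, x, y_des, eta):
    # Phase 1a: forward propagation of the input
    h = sigmoid(W1 @ x)
    y_act = sigmoid(W2 @ h)
    # Phase 1b: backward propagation of the deltas
    delta_out = (y_des - y_act) * y_act * (1.0 - y_act)
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    # Phase 2: update each weight opposite to its gradient
    W2 += eta * np.outer(delta_out, h)
    W1 += eta * np.outer(delta_hid, x)
    return y_act

rng = np.random.default_rng(1)
W1 = rng.uniform(-0.5, 0.5, (3, 3))   # 3 hidden units; inputs = 2 features + bias
W2 = rng.uniform(-0.5, 0.5, (1, 3))
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]   # XOR

def total_error():
    return sum((t - sigmoid(W2 @ sigmoid(W1 @ np.array(xi + [1.0])))[0]) ** 2
               for xi, t in data)

err_before = total_error()
for _ in range(3000):                 # repeat phases 1 and 2
    for xi, t in data:
        backprop_step(W1, W2, np.array(xi + [1.0]), np.array([t]), eta=0.5)
err_after = total_error()
```

Each pass repeats the propagation and weight-update phases over all patterns, and the total squared error decreases as training proceeds.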
  • 8. Backpropagation Networks
    • These are the nonlinear (mapping) neural networks that use the backpropagation supervised learning technique.
    • Modes of learning of nonlinear nets: there are three modes of learning to choose from: on-line (pattern), batch, and stochastic.
      – In on-line and stochastic learning, each propagation is followed immediately by a weight update.
      – In batch learning, many propagations occur before the weights are updated.
      – Batch learning requires more memory capacity, but on-line and stochastic learning require more updates.
  • 9. Backpropagation Networks
    • On-line learning is used for dynamic environments that provide a continuous stream of new patterns.
    • Stochastic learning and batch learning both use a training set of static patterns. Stochastic learning goes through the data set in a random order to reduce its chances of getting stuck in a local minimum.
    • Stochastic learning is also much faster than batch learning, since weights are updated immediately after each propagation. Yet batch learning yields a much more stable descent to a local minimum, since each update is based on all patterns.
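The contrast between the modes can be sketched on a single linear neuron (an assumed toy setup, not the lecture's example): on-line learning updates after every pattern, while batch learning accumulates the gradient over all patterns before one update.

```python
import numpy as np

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([2.0, -1.0, 1.0])     # exactly realizable by the weights [2, -1]
eta = 0.1

# On-line mode: a weight update immediately after each propagation
w_online = np.zeros(2)
for _ in range(200):
    for xi, ti in zip(X, y):
        w_online += eta * (ti - w_online @ xi) * xi

# Batch mode: many propagations, then one update per pass over all patterns
w_batch = np.zeros(2)
for _ in range(200):
    grad = sum((ti - w_batch @ xi) * xi for xi, ti in zip(X, y))
    w_batch += eta * grad
```

Both modes reach the same solution here; shuffling the pattern order inside the on-line loop would turn it into the stochastic mode described above.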
  • 10. Backpropagation Networks
    • Applications of supervised learning (backpropagation NNs) include:
      – Pattern recognition
      – Credit approval
      – Target marketing
      – Medical diagnosis
      – Defective parts identification in manufacturing
      – Crime zoning
      – Treatment effectiveness analysis
      – etc.
  • 11. Self-organizing map
    • We can also train networks where there is no teacher. This is called unsupervised learning. The network learns a prototype based on the distribution of patterns in the training data. Such networks allow us to:
      – Discover the underlying structure of the data
      – Encode or compress the data
      – Transform the data
    • Self-organizing maps (SOMs) are a data visualization technique invented by Professor Teuvo Kohonen.
      – Also called Kohonen networks, competitive learning, or winner-take-all learning.
      – They generally reduce the dimensionality of data through the use of self-organizing neural networks.
      – Useful for data visualization: humans cannot visualize high-dimensional data, so this is often a helpful technique for making sense of large data sets.
  • 12. Self-organizing map
    • SOM structure:
      1. The weights in a neuron must represent a class of pattern; we have one neuron for each class.
      2. The input pattern is presented to all neurons, and each produces an output: a measure of the match between the input pattern and the pattern stored by the neuron.
      3. A competitive learning strategy selects the neuron with the largest response.
      4. A method of reinforcing the largest response.
  • 13. Self-organizing map
    • Unsupervised classification learning is based on clustering of the input data. No a priori knowledge is available about an input's membership in a particular class.
    • Instead, gradually detected characteristics and the history of training are used to assist the network in defining classes and possible boundaries between them.
    • Clustering is understood to be the grouping of similar objects and the separating of dissimilar ones.
    • We discuss Kohonen's network, which classifies input vectors into one of m specified categories, according to the clusters detected in the training set.
  • 14. Kohonen's Network
    • [Figure: Kohonen network, input vector X feeding a 2D grid of neurons.]
    • The Kohonen network is a self-organizing network with the following characteristics:
      1. Neurons are arranged on a 2D grid.
      2. Inputs are sent to all neurons.
      3. There are no connections between neurons.
      4. The output of a neuron j is the weighted sum (dot product) of the input vector x and its weight vector w.
      5. There is no threshold or bias.
      6. Input values and weights are normalized.
  • 15. Self-organizing map
    • Learning in Kohonen networks:
      – Initially the weights in each neuron are random.
      – Input values are sent to all the neurons.
      – The outputs of the neurons are compared; the "winner" is the neuron with the largest output value.
      – Having found the winner, the weights of the winning neuron are adjusted.
      – Weights of neurons in a surrounding neighbourhood are also adjusted.
      – As training progresses, the neighbourhood gets smaller.
      – Weights are adjusted according to the standard Kohonen update rule: w_new = w_old + α·(x − w_old).
  • 16. Self-organizing map
    • The learning coefficient (alpha) starts with a value of 1 and gradually reduces to 0.
    • This has the effect of making big changes to the weights initially, but no changes at the end.
    • The weights are adjusted so that they more closely resemble the input patterns.
    • Applications of unsupervised learning (Kohonen NNs) include:
      – Clustering
      – Vector quantization
      – Data compression
      – Feature extraction
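The competitive learning loop above can be sketched as follows. This is a stripped-down winner-take-all illustration (no grid neighbourhood), with assumed Euclidean distance as the inverse match measure and assumed initial weights and data; it is not the lecture's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two clusters of 2-D points around (0, 0) and (1, 1)
data = np.vstack([rng.normal([0.0, 0.0], 0.1, (50, 2)),
                  rng.normal([1.0, 1.0], 0.1, (50, 2))])

weights = np.array([[0.2, 0.2], [0.8, 0.8]])   # one neuron per expected class
alpha = 1.0
for epoch in range(20):
    for x in data:
        # winner = best match (smallest distance, i.e. largest response)
        win = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Kohonen rule: w_new = w_old + alpha * (x - w_old)
        weights[win] += alpha * (x - weights[win])
    alpha *= 0.8                               # learning coefficient decays toward 0
```

By the end of training each neuron's weight vector closely resembles the inputs of its cluster, exactly the "weights come to resemble the patterns" behaviour described on the slide.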
  • 17. Counter propagation network
    • The counterpropagation network (CPN) is a fast-learning combination of unsupervised and supervised learning.
    • Although this network uses linear neurons, it can learn nonlinear functions by means of a hidden layer of competitive units.
    • Moreover, the network is able to learn a function and its inverse at the same time.
    • However, to simplify things, we will only consider the feedforward mechanism of the CPN.
  • 18. Counter propagation network
    • Training:
      1. Randomly select a vector pair (x, y) from the training set.
      2. Measure the similarity between the input vector and the weight vectors of the hidden-layer units.
      3. In the hidden (competitive) layer, determine the unit with the largest activation (the winner), i.e., the neuron whose weight vector is most similar to the current input vector.
      4. Adjust the winner's connection weights.
      5. Repeat until each input pattern is consistently associated with the same competitive unit.
  • 19. Counter propagation network
    • After the first phase of training, each hidden-layer neuron is associated with a subset of the input vectors (a class of patterns).
    • In the second phase of training, we adjust the weights of the network's output layer so that, for any winning hidden-layer unit, the network's output is as close as possible to the desired output for that unit's associated input vectors.
    • The idea is that when we later use the network to compute functions, the output of the winning hidden-layer unit is 1 and the output of all other hidden-layer units is 0.
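The two training phases can be sketched on a toy 1-D function. This assumes the standard Kohonen-style update for the competitive layer and a Grossberg-style outstar update for the output layer; the data, rates, and names are illustrative, not the lecture's.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, 200)
ys = xs ** 2                                # the function to be learned

n_hidden = 10
w_in = np.linspace(0.0, 1.0, n_hidden)      # competitive (hidden) weights, 1-D inputs
w_out = np.zeros(n_hidden)                  # output-layer weights

for _ in range(3):                          # a few passes over the training pairs
    for x, y in zip(xs, ys):
        win = np.argmin(np.abs(w_in - x))        # phase 1: most similar unit wins
        w_in[win] += 0.1 * (x - w_in[win])       # pull winner toward the input
        w_out[win] += 0.1 * (y - w_out[win])     # phase 2: pull its output toward y

def cpn_predict(x):
    # At recall time the winner outputs 1 and all other hidden units output 0,
    # so the network output is simply the winner's output weight.
    return w_out[np.argmin(np.abs(w_in - x))]
```

Each hidden unit ends up owning a small interval of inputs, and its output weight approximates the desired output over that interval, so the linear output layer realizes a nonlinear function piecewise.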
  • 20. Spatiotemporal Networks
    • A spatio-temporal neural net differs from other neural networks in two ways:
      1. Neurons have recurrent links with different propagation delays.
      2. The state of the network depends not only on which nodes are firing, but also on the relative firing times of the nodes; i.e., the significance of a node varies with time and depends on the firing state of other nodes.
    • The use of recurrence and of multiple links with variable propagation delays provides a rich mechanism for feature extraction and pattern recognition:
      1. Recurrent links enable nodes to integrate and differentiate inputs, i.e., to detect features.
      2. Multiple links with variable propagation delays between nodes serve as a short-term memory.
  • 21. Spatiotemporal Networks
    • Applications: problems such as speech recognition and time-series prediction, where the input signal has an explicit temporal aspect.
    • Tasks like image recognition do not have an explicit temporal aspect, but they can also be handled by converting static patterns into time-varying (spatio-temporal) signals by scanning the image. This leads to a number of significant advantages:
      – The recognition system becomes shift invariant.
      – The spatio-temporal approach captures the image geometry, since local spatial relationships in the image are expressed as local temporal variations in the scanned input.
      – Reduction of complexity (from 2D to 1D).
      – The scanning approach allows a visual pattern recognition system to deal with inputs of arbitrary extent (not only static, fixed 2D patterns).
  • 22. Stochastic neural networks
    • Stochastic neural networks are a type of artificial neural network built by introducing random variations into the network, either by giving the network's neurons stochastic transfer functions or by giving them stochastic weights. This makes them useful tools for optimization problems, since the random fluctuations help the network escape from local minima.
    • Stochastic neural networks built using stochastic transfer functions are often called Boltzmann machines.
    • Stochastic neural networks have found applications in risk management, oncology, bioinformatics, and other similar fields.
  • 23. Stochastic Networks: Boltzmann machine
    • The neurons are stochastic: at any time there is a probability attached to whether a neuron fires.
    • Used for solving constrained optimization problems.
    • In a typical Boltzmann machine:
      – The weights are fixed to represent the constraints of the problem and the function to be optimized.
      – The net seeks the solution by changing the activations of the units (0 or 1) based on a probability distribution and on the effect the change would have on the energy function or consensus function of the net.
    • It may use either supervised or unsupervised learning.
    • Learning in a Boltzmann machine is accomplished using a simulated annealing technique, which is stochastic in nature. This reduces the probability of the net becoming trapped in a local minimum that is not a global minimum.
  • 24. Stochastic Networks: Boltzmann machine
    • Learning characteristics:
      – Each neuron fires with bipolar values.
      – All connections are symmetric.
      – In activation passing, the next neuron whose state we wish to update is selected randomly.
      – There is no self-feedback (no connection from a neuron to itself).
  • 25. Stochastic Networks: Boltzmann machine
    • There are three phases in the operation of the network:
      – The clamped phase, in which the inputs and outputs of the visible neurons are held fixed while the hidden neurons are allowed to vary.
      – The free-running phase, in which only the inputs are held fixed and the other neurons are allowed to vary.
      – The learning phase.
    • These phases iterate until learning has created a Boltzmann machine that can be said to have learned the input patterns; it will then converge to the learned patterns when a noisy or incomplete pattern is presented.
  • 26. Stochastic Networks: Boltzmann machine
    • For unsupervised learning, the initial weights of the net are generally set randomly to values in a small range, e.g. -0.5 to +0.5.
    • Then an input pattern is presented to the net and clamped to the visible neurons.
    • A hidden neuron is chosen at random and its state is flipped from s_j to -s_j according to a certain probability distribution.
    • Activation passing continues until the hidden neurons reach equilibrium.
    • During the free-running phase, after presentation of the input patterns, all neurons can update their states.
    • In the learning phase, the weight changes depend on the difference between the "real" distribution of neuron states in the clamped phase and the one (eventually) produced by the machine in free-running mode.
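The stochastic flip rule with an annealing schedule can be sketched on a tiny network. This assumes the standard Hopfield/Boltzmann energy E = -0.5·sᵀWs with a symmetric weight matrix, zero diagonal (no self-feedback), and bipolar states, as in slide 24; the specific weights, temperatures, and schedule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[ 0.0,  1.0, -1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])      # symmetric weights, no self-feedback
s = rng.choice([-1, 1], size=3)         # bipolar neuron states

T = 5.0
for step in range(500):
    i = rng.integers(3)                 # pick the next neuron at random
    dE = 2 * s[i] * (W[i] @ s)          # energy change if s[i] flips
    # accept downhill flips always; uphill flips with Boltzmann probability
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]
    T = max(0.05, T * 0.99)             # simulated-annealing cooling schedule

# finish with a deterministic greedy sweep: flip any unit that lowers the energy
changed = True
while changed:
    changed = False
    for i in range(3):
        if 2 * s[i] * (W[i] @ s) < 0:
            s[i] = -s[i]
            changed = True

energy = -0.5 * s @ W @ s
```

At high temperature the net accepts many uphill flips and explores freely; as T cools, it settles into a minimum-energy state (here, the two neurons joined by the positive weight agree, and the third opposes them).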
  • 27. Stochastic Networks: Boltzmann machine
    • For supervised learning, the set of visible neurons is split into input and output neurons, and the machine is used to associate an input pattern with an output pattern.
    • During the clamped phase, the input and output patterns are clamped to the appropriate units.
    • The hidden neurons' activations can settle at their various values.
    • During the free-running phase, only the input neurons are clamped; both the output neurons and the hidden neurons pass activation around until the activations in the network settle.
    • The learning rule here is the same as before, but it must be modulated (multiplied) by the probability of the input patterns.
  • 28. Neurocognition network
    • Neurocognitive networks are large-scale systems of distributed and interconnected neuronal populations in the central nervous system, organized to perform cognitive functions.
    • Many computer scientists try to simulate human cognition with computers. This line of research can be roughly split into two types: research seeking to create machines as adept as humans (or more so), and research attempting to figure out the computational basis of human cognition, that is, how the brain actually carries out its computations. The latter branch of research can be called computational modeling (while the former is often called artificial intelligence, or AI).