This document provides an overview of neural networks and their components. It discusses:
1. The basic structure and functioning of artificial neural networks, including neurons, connections, weights, and activation functions.
2. Different types of neural network architectures like single layer feedforward, multilayer feedforward, and recurrent networks.
3. Neural network learning methods, including supervised learning, unsupervised learning, and reinforcement learning.
4. Key concepts in neural networks like weights, bias, thresholds, learning rates, and momentum factors.
• An artificial neural network (ANN) is an efficient information-processing
system that resembles a biological neural network in its characteristics.
• An ANN has highly interconnected processing elements, called
nodes, units, neurons, or artificial neurons.
• Neurons work in parallel.
• Each neuron is connected to the others by connection links.
• Each connection link is associated with weights that contain
information about the input signal.
• This information is used by the neural network to solve a problem.
• Internal state of a neuron: its activation or activity level (a function of
the inputs the neuron receives).
• The activation signal is transmitted to other neurons.
• x1 and x2: activations (input signals)
• X1 and X2: input neurons
• y: output signal
• Y: output neuron
• Activation function: the function applied over the net input
The end of the axon splits into fine strands.
Each strand terminates in a bulb-like organ called a synapse.
A single neuron has on the order of 10^4 synapses.
• 1. Soma (cell body): where the cell nucleus is located
• Dendrites: nerve fibres connected to the cell body
• Axon: carries the impulses of the neuron
Basic models of ANN
• An ANN is characterized by three basic entities:
• 1. the model's synaptic interconnections
• 2. the training or learning rule adopted
• 3. the activation function
• 1. single-layer feed-forward network
• 2. multilayer feed-forward network
• 3. single node with its own feedback
• 4. single-layer recurrent network
• 5. multilayer recurrent network
Architecture with lateral feedback
• Also called an on-center-off-surround or lateral-inhibition structure.
• Two classes of inputs:
• Excitatory: input from nearby processing elements (shown as open circles)
• Inhibitory: input from more distantly located processing elements
(links shown with solid connections)
Learning
• Learning or training is the process by which a neural network adapts itself
to a stimulus by making the proper parameter adjustments, resulting in the
desired response.
1. Parameter learning: updates the connecting weights in the NN
2. Structure learning: focuses on changes in the network structure
Three categories of learning:
• 1. supervised learning
• 2. unsupervised learning
• 3. reinforcement learning
1. Supervised Learning
• Each input vector requires a corresponding target vector, which
represents the desired output.
• Input vector + target vector = training pair.
• The network knows what the desired output should be.
Supervised Learning contd.
• During training, the input vector is presented to the network, which
produces an output vector.
• This output vector is the actual output.
• The actual output vector is compared with the desired (target) output
vector.
• If a difference exists between the two, the network generates an error
signal.
• This error signal is used to adjust the weights until the actual output
matches the desired (target) output.
• A supervisor or teacher is required for error minimization.
• The correct target output values are known for each input pattern.
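The error-driven weight adjustment described above can be sketched for a single linear neuron using the delta rule; the training pair, learning rate, and iteration count below are illustrative assumptions, not values from the text.

```python
# Minimal sketch of supervised learning on one linear neuron using the
# delta rule: w <- w + alpha * (target - actual) * x.

def train_step(weights, x, target, alpha=0.1):
    """One supervised update: compare actual vs. target output, adjust weights."""
    actual = sum(w * xi for w, xi in zip(weights, x))  # actual output (net input)
    error = target - actual                            # error signal
    return [w + alpha * error * xi for w, xi in zip(weights, x)]

# Training pair: input vector [1.0, 2.0] with (scalar) target 1.0.
weights = [0.0, 0.0]
for _ in range(100):
    weights = train_step(weights, x=[1.0, 2.0], target=1.0)
# After repeated adjustment the actual output approaches the target.
```

With this training pair the error halves on every step, so the actual output converges to the target quickly.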
Unsupervised learning
• The learning process is independent and is not supervised by a teacher.
• Input vectors of similar type are grouped together without the use of
target outputs.
• In the training process, the network receives input patterns and
organizes them into clusters.
• When a new input pattern is applied, the network gives an output
response indicating the class to which the input pattern belongs.
• If no matching class is found for an input pattern, a new class is
generated.
• Self-organizing: while discovering new features, the network
undergoes changes in its parameters.
• Exact clusters are formed by discovering similarities and
dissimilarities among the input patterns.
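The clustering behaviour above, where a pattern joins a similar cluster or founds a new class, can be sketched as follows; the distance threshold and example patterns are illustrative assumptions.

```python
# Sketch of unsupervised grouping: assign each pattern to the nearest
# existing cluster; if none is similar enough, generate a new class.
import math

def assign(pattern, clusters, threshold=1.0):
    """Return the index of a matching cluster, creating a new one if none is close."""
    for i, centre in enumerate(clusters):
        if math.dist(pattern, centre) <= threshold:
            return i
    clusters.append(list(pattern))  # no match found: new class generated
    return len(clusters) - 1

clusters = []
print(assign([0.0, 0.0], clusters))  # first pattern starts cluster 0
print(assign([0.1, 0.1], clusters))  # similar pattern joins cluster 0
print(assign([5.0, 5.0], clusters))  # dissimilar pattern creates cluster 1
```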
Reinforcement learning (learning with a critic)
• Similar to supervised learning: the network receives some kind of
feedback from its environment.
• However, less information about the target values is known; only critic
(evaluative) information is available.
• Learning based on this critic information is called reinforcement
learning, and the feedback sent is called the reinforcement signal.
• The feedback is only evaluative, not instructive.
• The external reinforcement signals are processed by a critic signal
generator.
• The critic signals obtained are sent to the ANN for adjustment of the
weights.
Activation functions (AF)
• To make the network work efficiently and obtain the exact output, some
force or activation is required.
• The AF is applied over the net input to calculate the output of the ANN.
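A few common activation functions applied over the net input can be sketched as below; these are standard definitions, not tied to any particular network in the text.

```python
# Common activation functions applied over the net input.
import math

def binary_step(net, theta=0.0):
    """Fires (1) when the net input reaches the threshold theta, else 0."""
    return 1 if net >= theta else 0

def sigmoid(net):
    """Squashes the net input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def tanh_af(net):
    """Squashes the net input into the range (-1, 1)."""
    return math.tanh(net)
```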
• 1. Weights: contain information about the input signal, which is used by the
network to solve a problem.
• Weights are represented in the form of a matrix called the connection matrix.
• Assume there are "n" processing elements in an ANN and each has "m" adaptive
weights; the connection matrix is then n x m.
• Weights encode long-term memory, while the activation state encodes
short-term memory.
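The connection matrix can be sketched as follows: each entry W[i][j] is the weight on the link from input i to processing element j, and each unit's net input is the weighted sum of the inputs. The 3 x 2 matrix below is an illustrative assumption.

```python
# Sketch of a connection (weight) matrix: net input of unit j is
# sum over i of x_i * W[i][j].

def net_inputs(x, W):
    """W[i][j] connects input i to unit j; returns the net input of each unit."""
    m = len(W[0])
    return [sum(x[i] * W[i][j] for i in range(len(x))) for j in range(m)]

W = [[0.2, 0.5],
     [0.4, 0.1],
     [0.6, 0.3]]  # 3 inputs x 2 processing elements
nets = net_inputs([1.0, 1.0, 1.0], W)  # column sums: roughly [1.2, 0.9]
```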
Bias
• What is bias in a neural network?
Neural network bias can be defined as a constant that is added to the product
of features and weights. It is used to offset the result, helping the model
shift the activation function towards the positive or negative side.
• Bias is included by adding a component x0 = 1 to the input vector X.
• The input vector becomes X = (1, X1, X2, ..., Xn).
• Bias is treated as another weight: w0j = bj.
• Two types of bias:
Positive bias: increases the net input of the network
Negative bias: decreases the net input of the network
Using bias, the output of the network can be varied.
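Treating the bias as an extra weight on a fixed input x0 = 1, as described above, can be sketched as follows; the input and weight values are illustrative assumptions.

```python
# Sketch of bias as an extra weight: prepend x0 = 1 to the input vector,
# treat the bias b as weight w0, and compute the net input as usual.

def net_with_bias(x, weights, bias):
    xs = [1.0] + list(x)           # input vector becomes (1, x1, ..., xn)
    ws = [bias] + list(weights)    # bias acts as weight w0
    return sum(xi * wi for xi, wi in zip(xs, ws))

# Positive bias raises the net input; negative bias lowers it.
print(net_with_bias([1.0, 2.0], [0.5, 0.5], bias=+0.5))  # 2.0
print(net_with_bias([1.0, 2.0], [0.5, 0.5], bias=-0.5))  # 1.0
```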
Threshold
• The threshold is a set value based on which the final output of the network
is calculated.
• The threshold value is used in the activation function.
• The calculated net input is compared with the threshold to obtain the
network output.
• Every application has a threshold limit.
• In a neural network, the activation function is defined based on the
threshold value, and the output is calculated accordingly.
• An activation function using a threshold θ can be defined, for example, as:
f(net) = 1 if net ≥ θ, and f(net) = -1 if net < θ.
Learning rate
• Used to control the amount of weight adjustment at each step of training.
• Denoted by "alpha" (α).
• Ranges from 0 to 1.
Momentum factor
• If a momentum factor is added to the weight-updating process,
convergence can be made faster.
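One common form of the momentum update adds a fraction (mu) of the previous weight change to the current one; the alpha, mu, and gradient values below are illustrative assumptions.

```python
# Sketch of weight updating with a momentum factor:
# delta(t) = -alpha * grad + mu * delta(t-1).

def momentum_update(w, grad, prev_delta, alpha=0.1, mu=0.9):
    """Returns the new weight and the weight change used for it."""
    delta = -alpha * grad + mu * prev_delta
    return w + delta, delta

w, delta = 1.0, 0.0
for grad in [0.5, 0.5, 0.5]:   # repeated gradients build up momentum
    w, delta = momentum_update(w, grad, delta)
# Successive changes grow (-0.05, -0.095, -0.1355), speeding movement
# in a consistent gradient direction.
```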
Vigilance parameter
• Denoted by ρ (rho).
• Used in adaptive resonance theory (ART) networks.
• Used to control the degree of similarity required for patterns to be
assigned to the same cluster unit.
• Typically ranges from 0.7 to 1 to control the number of clusters.
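An ART-style vigilance test can be sketched as below: a pattern is accepted into a cluster only if the match ratio meets the vigilance parameter ρ. The binary patterns and the specific match-ratio formula are illustrative assumptions, not the full ART algorithm.

```python
# Sketch of a vigilance test: match ratio = |pattern AND prototype| / |pattern|,
# compared against the vigilance parameter rho.

def passes_vigilance(pattern, prototype, rho=0.8):
    """True if the pattern is similar enough to join the prototype's cluster."""
    ones = sum(pattern)
    overlap = sum(p & q for p, q in zip(pattern, prototype))
    return ones > 0 and overlap / ones >= rho

print(passes_vigilance([1, 1, 1, 1], [1, 1, 1, 0]))  # 0.75 < 0.8 -> False
print(passes_vigilance([1, 1, 1, 1], [1, 1, 1, 1]))  # 1.00 >= 0.8 -> True
```

A higher ρ demands closer matches, so more clusters are created; a lower ρ merges more patterns into the same cluster.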
McCulloch-Pitts Neuron
• The first computational model of a neuron was proposed by Warren
McCulloch (neuroscientist) and Walter Pitts (logician) in 1943.
• M-P neurons are connected in a directed graph.
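The McCulloch-Pitts model can be sketched as a neuron with binary inputs, fixed weights, and a firing threshold; realizing the AND function with weights (1, 1) and threshold 2 is a standard textbook illustration.

```python
# Sketch of a McCulloch-Pitts neuron: fires (outputs 1) when the
# weighted sum of its binary inputs reaches the threshold theta.

def mp_neuron(inputs, weights, theta):
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= theta else 0

# AND function: both inputs must fire (weights 1, 1; threshold 2).
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", mp_neuron([x1, x2], [1, 1], theta=2))
```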