Improving Performance of Back propagation Learning Algorithm
Harikrishna B Jethva (1), Dr. V. M. Chavda (2)
(1) Ph.D. Scholar, Department of Computer Science and Engineering, Bhagwant University, Sikar Road, Ajmer, Rajasthan
(2) SVICS, Kadi, Gujarat
IJSRD - International Journal for Scientific Research & Development | Vol. 1, Issue 6, 2013 | ISSN (online): 2321-0613
Abstract— The standard back-propagation algorithm is one of the most widely used algorithms for training feed-forward neural networks. Major drawbacks of this algorithm are that it may fall into local minima and that its convergence rate is slow. Natural gradient descent, a principled method for nonlinear optimization, is presented and combined with a modified back-propagation algorithm, yielding a new fast training algorithm for multilayer networks. This paper describes a new approach to natural gradient learning in which the number of parameters required is much smaller than in the standard natural gradient algorithm. The new method exploits the algebraic structure of the parameter space to reduce the space and time complexity of the algorithm and improve its performance.
I. INTRODUCTION
The back-propagation (BP) training algorithm is a supervised learning method for multi-layered feed-forward neural networks. It is essentially a gradient-descent local optimization technique which involves backward error correction of the network weights. Despite the general success of the back-propagation method in the learning process, several major deficiencies still need to be addressed. The convergence rate of back-propagation is very slow, and hence it becomes unsuitable for large problems. Furthermore, the convergence behavior of the back-propagation algorithm depends on the choice of the initial values of the connection weights and of other parameters used in the algorithm, such as the learning rate and the momentum term.
Amari has developed natural gradient learning for multilayer perceptrons [18], which uses a Quasi-Newton-type method [6] instead of the steepest descent direction. The Fisher information matrix quantifies how much information the observed random variables carry about the hidden parameters, and it fits naturally into the Quasi-Newton optimization framework.
This paper suggests a simple modification to the initial search direction of the above algorithm, i.e. changing the gradient of the error with respect to the weights, to improve training efficiency. It has been found that if the gradient-based search direction is locally modified by a gain value used in the activation function of the corresponding node, significant improvements in the convergence rate can be achieved [24].
II. BACKPROPAGATION LEARNING ALGORITHM
An artificial neural network takes an input vector x and produces an output y. When the network has m hidden units, the output of the hidden layer is φ(w_α · x), α = 1, . . . , m, where w_α is an n-dimensional connection weight vector from the input to the α-th hidden unit, and φ is a sigmoidal output function. Let v_α be the connection weight from the α-th hidden unit to the linear output unit and let ζ be a bias. Then the output of the neural network is written as

$y = \sum_{\alpha=1}^{m} v_\alpha\,\varphi(w_\alpha \cdot x) + \zeta$ (1)
Any such perceptron is specified by the parameters {w_1, . . . , w_m; v}. We summarize them into a single m(n+1)-dimensional vector θ. We call S the space consisting of all such multilayer networks; the parameter θ plays the role of a coordinate system of S, and each m(n+1)-dimensional vector θ represents a single network.
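To make this parameterization concrete, the following minimal sketch (our own illustrative code, not from the paper; the helper names `unpack` and `forward`, the tanh choice for φ, and the default ζ = 0 are assumptions) packs {w_1, . . . , w_m; v} into a single m(n+1)-dimensional vector and evaluates the network output of Eq. (1):

```python
import numpy as np

def unpack(theta, m, n):
    """Split the m(n+1)-dimensional parameter vector theta into the
    hidden weight matrix W (m x n, rows w_alpha) and output weights v (length m)."""
    W = theta[:m * n].reshape(m, n)
    v = theta[m * n:]
    return W, v

def forward(theta, x, m, n, zeta=0.0, phi=np.tanh):
    """Network output of Eq. (1): f(x, theta) = sum_alpha v_alpha * phi(w_alpha . x) + zeta."""
    W, v = unpack(theta, m, n)
    return float(v @ phi(W @ x) + zeta)

# Example: m = 2 hidden units, n = 2 inputs, random parameters.
m, n = 2, 2
rng = np.random.default_rng(0)
theta = rng.normal(scale=0.5, size=m * (n + 1))
print(forward(theta, np.array([1.0, -1.0]), m, n))
```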
The output of the network is a random variable that depends on the input x. Hence the input-output relation of the network with parameter θ is described by the conditional probability of the output y given the input x,

$p(y \mid x, \theta) = \frac{1}{\sqrt{2\pi}} \exp\!\left[-\frac{1}{2}\{y - f(x,\theta)\}^2\right]$ (2)

where

$f(x,\theta) = \sum_{\alpha=1}^{m} v_\alpha\,\varphi(w_\alpha \cdot x) + \zeta$ (3)

is the mean value of y given the input x. Its logarithm is

$\log p(y \mid x, \theta) = -\frac{1}{2}\{y - f(x,\theta)\}^2 - \log\sqrt{2\pi}$ (4)

Up to a constant, this can be regarded as the negative of the squared error when y is a target value and f(x, θ) is the output of the network.
Hence, the maximization of the likelihood is equivalent to the minimization of the squared error

$l(x, y; \theta) = \frac{1}{2}\{y - f(x,\theta)\}^2$ (5)

The conventional on-line learning method modifies the current parameter θ_t by using the gradient ∇l of the loss function, such that

$\theta_{t+1} = \theta_t - \eta_t\,\nabla l(x_t, y^*_t; \theta_t)$ (6)

where η_t is a learning rate and

$\nabla l(x, y^*; \theta) = \frac{\partial}{\partial \theta}\,\frac{1}{2}\{y^* - f(x,\theta)\}^2$ (7)

is the gradient of the loss function, and y^*_t is the desired output signal given by the teacher.
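As a concrete illustration of the on-line update of Eqs. (5)-(7), the sketch below (illustrative only; it assumes φ = tanh and reuses the `unpack` helper from the previous sketch) differentiates the squared-error loss by hand and applies one gradient step:

```python
import numpy as np  # reuses unpack() from the previous sketch

def grad_loss(theta, x, y_star, m, n, zeta=0.0):
    """Gradient (Eq. 7) of l = 0.5 * {y* - f(x, theta)}^2 for phi = tanh,
    returned together with the network output f."""
    W, v = unpack(theta, m, n)
    h = np.tanh(W @ x)                                           # hidden activations phi(w_alpha . x)
    f = float(v @ h + zeta)                                      # network output, Eq. (1)
    err = y_star - f
    grad_v = -err * h                                            # d l / d v_alpha
    grad_W = -err * (v * (1.0 - h ** 2))[:, None] * x[None, :]   # d l / d w_alpha
    return np.concatenate([grad_W.ravel(), grad_v]), f

def online_bp_step(theta, x, y_star, eta, m, n):
    """One on-line back-propagation step, Eq. (6): theta <- theta - eta * grad l."""
    g, _ = grad_loss(theta, x, y_star, m, n)
    return theta - eta * g
```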
The steepest descent direction of the loss function l*(θ) in a Riemannian space is given [18] by

$\tilde{\nabla} l^*(\theta) = G^{-1}(\theta)\,\nabla l^*(\theta)$ (8)

where G^{-1} is the inverse of the matrix G = (g_ij) called the Riemannian metric tensor. This gradient is called the natural gradient of the loss function l*(θ) in the Riemannian space.
III. NATURAL GRADIENT LEARNING ALGORITHM
In the multilayer neural network, the Riemannian metric tensor G(θ) = (g_ij(θ)) is given by the Fisher information matrix [18],

$g_{ij}(\theta) = E\!\left[\frac{\partial \log p(y \mid x, \theta)}{\partial \theta_i}\,\frac{\partial \log p(y \mid x, \theta)}{\partial \theta_j}\right]$ (9)

where E denotes expectation with respect to the input-output pair (x, y) given by Eq. (2). The natural gradient learning algorithm updates the current θ_t by
$\theta_{t+1} = \theta_t - \eta_t\,G^{-1}(\theta_t)\,\nabla l(x_t, y_t; \theta_t)$ (10)
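For the Gaussian model of Eq. (2), the Fisher information reduces to an expectation of outer products of ∂f/∂θ (this is made explicit in Eq. (11) of Section IV), so it can be approximated from a sample of inputs. The following minimal sketch of the update (10) is our own illustration, not the authors' implementation; it reuses the helpers from the earlier sketches, and the small ridge term added before inversion is an assumption made only to keep the example numerically safe:

```python
import numpy as np  # reuses unpack() from the earlier sketches

def grad_f(theta, x, m, n):
    """Gradient of the network output f(x, theta) with respect to theta, for phi = tanh."""
    W, v = unpack(theta, m, n)
    h = np.tanh(W @ x)
    dW = (v * (1.0 - h ** 2))[:, None] * x[None, :]   # d f / d w_alpha
    return np.concatenate([dW.ravel(), h])            # d f / d v_alpha = h

def ngl_step(theta, X, Y, eta, m, n, ridge=1e-6):
    """One natural-gradient step (Eq. 10) with an empirical Fisher matrix."""
    d = theta.size
    G = np.zeros((d, d))
    g = np.zeros(d)
    for x, y in zip(X, Y):
        W, v = unpack(theta, m, n)
        f = float(v @ np.tanh(W @ x))
        df = grad_f(theta, x, m, n)
        G += np.outer(df, df) / len(X)                # empirical Fisher information
        g += -(y - f) * df / len(X)                   # gradient of the squared-error loss
    G += ridge * np.eye(d)                            # small ridge term keeps G invertible
    return theta - eta * np.linalg.solve(G, g)        # theta - eta * G^{-1} grad l
```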
IV. ADAPTIVE IMPLEMENTATION OF NATURAL GRADIENT LEARNING
The Fisher information matrix G(θ) depends on the probability distribution of x, which is usually unknown. Hence, it is difficult to obtain G(θ); moreover, its inversion is costly. Here, we show an adaptive method of directly estimating G^{-1}(θ) [5].
Since the Fisher information matrix of Eq. (9) can be rewritten, by using Eq. (4), as

$G_t = E\!\left[\frac{\partial \log p(y \mid x, \theta_t)}{\partial \theta}\,\frac{\partial \log p(y \mid x, \theta_t)}{\partial \theta}'\right] = E\!\left[\{y - f(x,\theta_t)\}^2\,\frac{\partial f(x,\theta_t)}{\partial \theta}\,\frac{\partial f(x,\theta_t)}{\partial \theta}'\right] = E\!\left[\frac{\partial f(x,\theta_t)}{\partial \theta}\,\frac{\partial f(x,\theta_t)}{\partial \theta}'\right]$ (11)

where ' denotes transposition of a vector or matrix, we have the following recursive estimation of G^{-1} [23]:

$\hat{G}_{t+1}^{-1} = (1 + \varepsilon_t)\,\hat{G}_t^{-1} - \varepsilon_t\,(\hat{G}_t^{-1}\nabla f_t)(\hat{G}_t^{-1}\nabla f_t)'$ (12)

where ε_t is a small learning rate, ∇f_t = ∂f(x_t, θ_t)/∂θ, and f_t = f(x_t, θ_t). Together with

$\theta_{t+1} = \theta_t - \eta_t\,\hat{G}_{t+1}^{-1}\,\nabla l(x_t, y_t; \theta_t)$ (13)
this gives the adaptive method of natural gradient learning.
This differs from the Newton method, but it can be regarded as an adaptive version of the Gauss-Newton method. Moreover, information geometry reveals important geometric properties of hierarchical statistical models in general.
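A minimal sketch of the adaptive scheme of Eqs. (12)-(13) follows (our own illustrative code under the same assumptions as the earlier sketches; ε_t and η_t are held constant, and initializing Ĝ^{-1} to the identity matrix is an assumption):

```python
import numpy as np  # reuses unpack() and grad_f() from the earlier sketches

def angl_step(theta, G_inv, x, y, eta, eps, m, n):
    """One step of adaptive natural gradient learning, Eqs. (12)-(13)."""
    df = grad_f(theta, x, m, n)                          # grad f_t = d f(x_t, theta_t) / d theta
    u = G_inv @ df
    G_inv = (1.0 + eps) * G_inv - eps * np.outer(u, u)   # recursive estimate of G^{-1}, Eq. (12)
    W, v = unpack(theta, m, n)
    f = float(v @ np.tanh(W @ x))
    grad_l = -(y - f) * df                               # gradient of the squared-error loss
    theta = theta - eta * (G_inv @ grad_l)               # update of Eq. (13)
    return theta, G_inv

# Assumed initialization: G_inv = np.eye(theta.size) before the first step.
```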
V. EXPERIMENTAL RESULTS
We conducted an experiment comparing the convergence speeds of the conventional Natural Gradient Learning (NGL) algorithm and the Adaptive Natural Gradient Learning (ANGL) algorithm. We take the XOR problem because it is not linearly separable. We use a neural network architecture with two hidden units and a hyperbolic tangent transfer function for both the hidden units and the output unit.
The inputs X_0, X_1, X_2, X_3 are the four XOR input patterns, with targets Y_0 = -1, Y_1 = 1, Y_2 = 1, Y_3 = -1, respectively. Thus the squared error for each pattern is

$\epsilon_n = \{y_n - \tanh(W_2 \tanh(W_1 x_n + b_1) + b_2)\}^2$ (14)
There are two hidden units and each layer has a bias. Hence W_1 is a 2-by-2 matrix and W_2 is a 1-by-2 matrix.
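The experimental setup can be written down in outline as follows (our own sketch; the bipolar input encoding and the random initialization are assumptions, since those details are not legible in the source):

```python
import numpy as np

# Four XOR patterns with targets in {-1, +1}. The bipolar input encoding used here
# is an assumption; the paper's exact input values were not legible in the source.
X = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
Y = np.array([-1.0, 1.0, 1.0, -1.0])

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)   # 2-by-2 hidden weights plus bias
W2 = rng.normal(scale=0.5, size=(1, 2)); b2 = np.zeros(1)   # 1-by-2 output weights plus bias

def net(x):
    """Network output tanh(W2 tanh(W1 x + b1) + b2), matching Eq. (14)."""
    return np.tanh(W2 @ np.tanh(W1 @ x + b1) + b2)

sse = float(sum((y - net(x)) ** 2 for x, y in zip(X, Y)))   # sum squared error over all four patterns
print("initial SSE:", sse)
```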
Performance is compared using the sum squared error (SSE) metric. Neural network training algorithms are very sensitive to the learning rate, so for the NGL algorithm the step size is controlled by the learning rate η. An interesting point of comparison is the relative step size of each algorithm: for ANGL, the effective learning rate is the product of the learning rate η and the largest eigenvalue of Ĝ^{-1}.
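This relative step size can be computed directly: the effective ANGL learning rate is η times the largest eigenvalue of the current Ĝ^{-1} estimate. A minimal sketch (the example matrix is arbitrary, purely for illustration):

```python
import numpy as np

def effective_learning_rate(eta, G_inv):
    """Effective ANGL step size: eta times the largest eigenvalue of the current G^{-1} estimate."""
    sym = (G_inv + G_inv.T) / 2.0          # symmetrize before the eigen-decomposition
    return eta * float(np.linalg.eigvalsh(sym).max())

# Example with an arbitrary positive-definite matrix standing in for the G^{-1} estimate.
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
print(effective_learning_rate(0.25, A @ A.T + np.eye(6)))
```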
Figures 1 and 2 show the sum squared error at each learning epoch for NGL and ANGL. Table 1 shows the parameters used in the two learning algorithms and some of the results of the experiment.
Parameter                      | NGL    | ANGL
Hidden units                   | 2      | 2
Learning rate                  | 0.25   | 0.25
Adaption rate                  | N.A.   | 0.1
Learning epoch when SSE < 0.02 | 10000  | 320
Final SSE                      | 0.0817 | 3.55e-4
Final learning rate            | 1e-4   | 0.144
Table 1: The results of the XOR experiment and the parameters used
VI. CONCLUSION
Natural gradient descent learning works well for many problems. Amari [18] developed an algorithm that avoids local minima by following the curvature of a manifold in the parameter space of the network. By using a recursive estimate of the inverse of the Fisher information matrix of the parameters, the algorithm is able to accelerate learning in the direction of descent. The experiment has shown that the performance of the natural gradient algorithm is improved by using the adaptive natural gradient learning method. This approach can be applied in many areas of research, such as speech recognition.
REFERENCES
[1] D. E. Rumelhart and J. L. McClelland, Parallel
Distributed Processing. Cambridge, MA: MIT Press,
1986.
[2] D. O. Hebb, The Organization of Behavior. New
York: John Wiley & Sons, 1949.
[3] D. J. C. MacKay, Information Theory, Inference, and
Learning Algorithms. New York: Cambridge
University Press, 2003.
[4] F. Rosenblatt, Principles of Neurodynamics:
Perceptrons and the Theory of Brain Mechanisms.
Washington DC: Spartan Books, 1962.
[5] H. Park, S. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.
[6] James A. Freeman and David M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley Publishing Company, 1991.
[7] Jinwook Go, Gunhee Han, and Hagbae Kim, "Multigradient: A new neural network learning algorithm for pattern classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 5, May 2001.
[8] Kenji Fukumizu and Shun-ichi Amari, "Local minima and plateaus in hierarchical structures of multilayer perceptrons," Brain Science Institute, The Institute of Physical and Chemical Research (RIKEN), E-mail: {fuku,amari}@brain.riken.go.jp, Oct. 22, 1999.
[9] Kavita Burse, Manish Manoria, and Vishnu P. S. Kirar, "Improved back propagation algorithm to avoid local minima in multiplicative neuron model," World Academy of Science, Engineering and Technology, vol. 72, 2010.
[10] M. Abramowitz and I. A. Stegun, Eds., Handbook of
Mathematical Functions with Formulas, Graphs, and
Mathematical Tables. Washington, DC: US
Government Printing Office, 1972.
[11] M. Biehl and H. Schwarze, "Learning by online gradient descent," Journal of Physics A, vol. 28, pp. 643–656, 1995.
[12] N. Murata, "A statistical study of on-line learning," in On-line Learning in Neural Networks, D. Saad, Ed., pp. 63–92. New York: Cambridge University Press, 1999.
[13] N. M. Nawi, M. R. Ransing, and R. S. Ransing, "An improved learning algorithm based on the conjugate gradient method for back propagation neural networks," International Journal of Applied Science, Engineering and Technology, www.waset.org, Spring 2006.
[14] R. Rojas, Neural Networks, ch. 7. New York:
Springer-Verlag, 1996.
[15] RIKEN Brain Science Institute (RIKEN BSI), Japan, http://www.brain.riken.jp/
[16] R. A. Fisher, "On the mathematical foundations of theoretical statistics," Philosophical Transactions of the Royal Society of London, vol. 222, pp. 309–368, 1922.
[17] S. Amari, "Neural learning in structured parameter spaces — natural Riemannian gradient," in Advances in Neural Information Processing Systems, M. C. Mozer, M. I. Jordan, and T. Petsche, Eds., vol. 9, p. 127. Cambridge, MA: The MIT Press, 1997.
[18] S. Amari, "Natural gradient works efficiently in learning," Neural Computation, vol. 10, no. 2, pp. 251–276, 1998.
[19] S. Amari, H. Park, and T. Ozeki, "Geometrical singularities in the neuromanifold of multilayer perceptrons," in Advances in Neural Information Processing Systems, vol. 14. Cambridge, MA: MIT Press, 2002.
[20] S. Amari and H. Nagaoka, Methods of Information
Geometry, Translations of Mathematical
Monographs, vol. 191. New York: Oxford University
Press, 2000.
[21] Simon Haykin, Neural Networks: A Comprehensive Foundation, Pearson Education, seventh edition, 2007.
[22] T. Heskes and B. Kappen, "On-line learning processes in artificial neural networks," in Mathematical Foundations of Neural Networks, J. Taylor, Ed., pp. 199–233. Amsterdam, Netherlands: Elsevier, 1993.
[23] Todd K. Moon and Wynn Stirling, Mathematical Methods and Algorithms for Signal Processing, Prentice Hall, 1999.
[24] Weixing Bi, Xugang Wang, and Zheng Tang, "Avoiding the local minima problem in backpropagation algorithm with modified error function," IEICE Transactions on Fundamentals, vol. E88-A, no. 12, December 2005.
Fig. 1: The sum squared error of NGL.
Fig. 2: The sum squared error of ANGL.

Weitere ähnliche Inhalte

Was ist angesagt?

An Automatic Medical Image Segmentation using Teaching Learning Based Optimiz...
An Automatic Medical Image Segmentation using Teaching Learning Based Optimiz...An Automatic Medical Image Segmentation using Teaching Learning Based Optimiz...
An Automatic Medical Image Segmentation using Teaching Learning Based Optimiz...idescitation
 
Artificial bee colony with fcm for data clustering
Artificial bee colony with fcm for data clusteringArtificial bee colony with fcm for data clustering
Artificial bee colony with fcm for data clusteringAlie Banyuripan
 
Parallel processing technique for high speed image segmentation using color
Parallel processing technique for high speed image segmentation using colorParallel processing technique for high speed image segmentation using color
Parallel processing technique for high speed image segmentation using colorIAEME Publication
 
OTSU Thresholding Method for Flower Image Segmentation
OTSU Thresholding Method for Flower Image SegmentationOTSU Thresholding Method for Flower Image Segmentation
OTSU Thresholding Method for Flower Image Segmentationijceronline
 
Expert system design for elastic scattering neutrons optical model using bpnn
Expert system design for elastic scattering neutrons optical model using bpnnExpert system design for elastic scattering neutrons optical model using bpnn
Expert system design for elastic scattering neutrons optical model using bpnnijcsa
 
Performance Evaluation of Object Tracking Technique Based on Position Vectors
Performance Evaluation of Object Tracking Technique Based on Position VectorsPerformance Evaluation of Object Tracking Technique Based on Position Vectors
Performance Evaluation of Object Tracking Technique Based on Position VectorsCSCJournals
 
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...IJECEIAES
 
Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM
Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM
Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM ijcisjournal
 
Matching networks for one shot learning
Matching networks for one shot learningMatching networks for one shot learning
Matching networks for one shot learningKazuki Fujikawa
 
Fuzzy entropy based optimal
Fuzzy entropy based optimalFuzzy entropy based optimal
Fuzzy entropy based optimalijsc
 
Convolutional Neural Network (CNN) presentation from theory to code in Theano
Convolutional Neural Network (CNN) presentation from theory to code in TheanoConvolutional Neural Network (CNN) presentation from theory to code in Theano
Convolutional Neural Network (CNN) presentation from theory to code in TheanoSeongwon Hwang
 
Kernels in convolution
Kernels in convolutionKernels in convolution
Kernels in convolutionRevanth Kumar
 
Learning to Reconstruct
Learning to ReconstructLearning to Reconstruct
Learning to ReconstructJonas Adler
 
Feed forward neural network for sine
Feed forward neural network for sineFeed forward neural network for sine
Feed forward neural network for sineijcsa
 

Was ist angesagt? (20)

An Automatic Medical Image Segmentation using Teaching Learning Based Optimiz...
An Automatic Medical Image Segmentation using Teaching Learning Based Optimiz...An Automatic Medical Image Segmentation using Teaching Learning Based Optimiz...
An Automatic Medical Image Segmentation using Teaching Learning Based Optimiz...
 
Artificial bee colony with fcm for data clustering
Artificial bee colony with fcm for data clusteringArtificial bee colony with fcm for data clustering
Artificial bee colony with fcm for data clustering
 
K010218188
K010218188K010218188
K010218188
 
Parallel processing technique for high speed image segmentation using color
Parallel processing technique for high speed image segmentation using colorParallel processing technique for high speed image segmentation using color
Parallel processing technique for high speed image segmentation using color
 
OTSU Thresholding Method for Flower Image Segmentation
OTSU Thresholding Method for Flower Image SegmentationOTSU Thresholding Method for Flower Image Segmentation
OTSU Thresholding Method for Flower Image Segmentation
 
Fuzzy logic member functions
Fuzzy logic member functionsFuzzy logic member functions
Fuzzy logic member functions
 
Expert system design for elastic scattering neutrons optical model using bpnn
Expert system design for elastic scattering neutrons optical model using bpnnExpert system design for elastic scattering neutrons optical model using bpnn
Expert system design for elastic scattering neutrons optical model using bpnn
 
Performance Evaluation of Object Tracking Technique Based on Position Vectors
Performance Evaluation of Object Tracking Technique Based on Position VectorsPerformance Evaluation of Object Tracking Technique Based on Position Vectors
Performance Evaluation of Object Tracking Technique Based on Position Vectors
 
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
 
Neural networks
Neural networksNeural networks
Neural networks
 
Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM
Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM
Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM
 
Matching networks for one shot learning
Matching networks for one shot learningMatching networks for one shot learning
Matching networks for one shot learning
 
Fuzzy entropy based optimal
Fuzzy entropy based optimalFuzzy entropy based optimal
Fuzzy entropy based optimal
 
Convolutional Neural Network (CNN) presentation from theory to code in Theano
Convolutional Neural Network (CNN) presentation from theory to code in TheanoConvolutional Neural Network (CNN) presentation from theory to code in Theano
Convolutional Neural Network (CNN) presentation from theory to code in Theano
 
Kernels in convolution
Kernels in convolutionKernels in convolution
Kernels in convolution
 
Learning to Reconstruct
Learning to ReconstructLearning to Reconstruct
Learning to Reconstruct
 
neural networksNnf
neural networksNnfneural networksNnf
neural networksNnf
 
Bistablecamnets
BistablecamnetsBistablecamnets
Bistablecamnets
 
Feed forward neural network for sine
Feed forward neural network for sineFeed forward neural network for sine
Feed forward neural network for sine
 
Lesson 38
Lesson 38Lesson 38
Lesson 38
 

Andere mochten auch

Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Randa Elanwar
 
Introduction to Neural networks (under graduate course) Lecture 1 of 9
Introduction to Neural networks (under graduate course) Lecture 1 of 9Introduction to Neural networks (under graduate course) Lecture 1 of 9
Introduction to Neural networks (under graduate course) Lecture 1 of 9Randa Elanwar
 
The Back Propagation Learning Algorithm
The Back Propagation Learning AlgorithmThe Back Propagation Learning Algorithm
The Back Propagation Learning AlgorithmESCOM
 
Setting Artificial Neural Networks parameters
Setting Artificial Neural Networks parametersSetting Artificial Neural Networks parameters
Setting Artificial Neural Networks parametersMadhumita Tamhane
 
2.3.1 properties of functions
2.3.1 properties of functions2.3.1 properties of functions
2.3.1 properties of functionsNorthside ISD
 
Introduction to Neural networks (under graduate course) Lecture 6 of 9
Introduction to Neural networks (under graduate course) Lecture 6 of 9Introduction to Neural networks (under graduate course) Lecture 6 of 9
Introduction to Neural networks (under graduate course) Lecture 6 of 9Randa Elanwar
 
Back propagation network
Back propagation networkBack propagation network
Back propagation networkHIRA Zaidi
 
Artificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesArtificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesMohammed Bennamoun
 
backpropagation in neural networks
backpropagation in neural networksbackpropagation in neural networks
backpropagation in neural networksAkash Goel
 
Back propagation
Back propagationBack propagation
Back propagationNagarajan
 
Neurophysiological and evolutionary
Neurophysiological and evolutionaryNeurophysiological and evolutionary
Neurophysiological and evolutionarySnowPea Guh
 
Neural network & its applications
Neural network & its applications Neural network & its applications
Neural network & its applications Ahmed_hashmi
 
Hebbian Learning
Hebbian LearningHebbian Learning
Hebbian LearningESCOM
 

Andere mochten auch (17)

Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9
 
Introduction to Neural networks (under graduate course) Lecture 1 of 9
Introduction to Neural networks (under graduate course) Lecture 1 of 9Introduction to Neural networks (under graduate course) Lecture 1 of 9
Introduction to Neural networks (under graduate course) Lecture 1 of 9
 
The Back Propagation Learning Algorithm
The Back Propagation Learning AlgorithmThe Back Propagation Learning Algorithm
The Back Propagation Learning Algorithm
 
AI Lesson 39
AI Lesson 39AI Lesson 39
AI Lesson 39
 
Setting Artificial Neural Networks parameters
Setting Artificial Neural Networks parametersSetting Artificial Neural Networks parameters
Setting Artificial Neural Networks parameters
 
2.3.1 properties of functions
2.3.1 properties of functions2.3.1 properties of functions
2.3.1 properties of functions
 
Introduction to Neural networks (under graduate course) Lecture 6 of 9
Introduction to Neural networks (under graduate course) Lecture 6 of 9Introduction to Neural networks (under graduate course) Lecture 6 of 9
Introduction to Neural networks (under graduate course) Lecture 6 of 9
 
hopfield neural network
hopfield neural networkhopfield neural network
hopfield neural network
 
Back propagation network
Back propagation networkBack propagation network
Back propagation network
 
Artificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesArtificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rules
 
Hopfield Networks
Hopfield NetworksHopfield Networks
Hopfield Networks
 
backpropagation in neural networks
backpropagation in neural networksbackpropagation in neural networks
backpropagation in neural networks
 
HOPFIELD NETWORK
HOPFIELD NETWORKHOPFIELD NETWORK
HOPFIELD NETWORK
 
Back propagation
Back propagationBack propagation
Back propagation
 
Neurophysiological and evolutionary
Neurophysiological and evolutionaryNeurophysiological and evolutionary
Neurophysiological and evolutionary
 
Neural network & its applications
Neural network & its applications Neural network & its applications
Neural network & its applications
 
Hebbian Learning
Hebbian LearningHebbian Learning
Hebbian Learning
 

Ähnlich wie Improving Performance of Back propagation Learning Algorithm

A comparison-of-first-and-second-order-training-algorithms-for-artificial-neu...
A comparison-of-first-and-second-order-training-algorithms-for-artificial-neu...A comparison-of-first-and-second-order-training-algorithms-for-artificial-neu...
A comparison-of-first-and-second-order-training-algorithms-for-artificial-neu...Cemal Ardil
 
Adaptive modified backpropagation algorithm based on differential errors
Adaptive modified backpropagation algorithm based on differential errorsAdaptive modified backpropagation algorithm based on differential errors
Adaptive modified backpropagation algorithm based on differential errorsIJCSEA Journal
 
USING LEARNING AUTOMATA AND GENETIC ALGORITHMS TO IMPROVE THE QUALITY OF SERV...
USING LEARNING AUTOMATA AND GENETIC ALGORITHMS TO IMPROVE THE QUALITY OF SERV...USING LEARNING AUTOMATA AND GENETIC ALGORITHMS TO IMPROVE THE QUALITY OF SERV...
USING LEARNING AUTOMATA AND GENETIC ALGORITHMS TO IMPROVE THE QUALITY OF SERV...IJCSEA Journal
 
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...Cemal Ardil
 
New Approach of Preprocessing For Numeral Recognition
New Approach of Preprocessing For Numeral RecognitionNew Approach of Preprocessing For Numeral Recognition
New Approach of Preprocessing For Numeral RecognitionIJERA Editor
 
Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...
Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...
Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...Cemal Ardil
 
Web Spam Classification Using Supervised Artificial Neural Network Algorithms
Web Spam Classification Using Supervised Artificial Neural Network AlgorithmsWeb Spam Classification Using Supervised Artificial Neural Network Algorithms
Web Spam Classification Using Supervised Artificial Neural Network Algorithmsaciijournal
 
RESEARCH ON FUZZY C- CLUSTERING RECURSIVE GENETIC ALGORITHM BASED ON CLOUD CO...
RESEARCH ON FUZZY C- CLUSTERING RECURSIVE GENETIC ALGORITHM BASED ON CLOUD CO...RESEARCH ON FUZZY C- CLUSTERING RECURSIVE GENETIC ALGORITHM BASED ON CLOUD CO...
RESEARCH ON FUZZY C- CLUSTERING RECURSIVE GENETIC ALGORITHM BASED ON CLOUD CO...ijaia
 
Ijarcet vol-2-issue-4-1579-1582
Ijarcet vol-2-issue-4-1579-1582Ijarcet vol-2-issue-4-1579-1582
Ijarcet vol-2-issue-4-1579-1582Editor IJARCET
 
An introduction to deep learning
An introduction to deep learningAn introduction to deep learning
An introduction to deep learningVan Thanh
 
Design of airfoil using backpropagation training with mixed approach
Design of airfoil using backpropagation training with mixed approachDesign of airfoil using backpropagation training with mixed approach
Design of airfoil using backpropagation training with mixed approachEditor Jacotech
 
Design of airfoil using backpropagation training with mixed approach
Design of airfoil using backpropagation training with mixed approachDesign of airfoil using backpropagation training with mixed approach
Design of airfoil using backpropagation training with mixed approachEditor Jacotech
 
Web spam classification using supervised artificial neural network algorithms
Web spam classification using supervised artificial neural network algorithmsWeb spam classification using supervised artificial neural network algorithms
Web spam classification using supervised artificial neural network algorithmsaciijournal
 
Adaptive Training of Radial Basis Function Networks Based on Cooperative
Adaptive Training of Radial Basis Function Networks Based on CooperativeAdaptive Training of Radial Basis Function Networks Based on Cooperative
Adaptive Training of Radial Basis Function Networks Based on CooperativeESCOM
 

Ähnlich wie Improving Performance of Back propagation Learning Algorithm (20)

A comparison-of-first-and-second-order-training-algorithms-for-artificial-neu...
A comparison-of-first-and-second-order-training-algorithms-for-artificial-neu...A comparison-of-first-and-second-order-training-algorithms-for-artificial-neu...
A comparison-of-first-and-second-order-training-algorithms-for-artificial-neu...
 
How to Decide the Best Fuzzy Model in ANFIS
How to Decide the Best Fuzzy Model in ANFIS How to Decide the Best Fuzzy Model in ANFIS
How to Decide the Best Fuzzy Model in ANFIS
 
Adaptive modified backpropagation algorithm based on differential errors
Adaptive modified backpropagation algorithm based on differential errorsAdaptive modified backpropagation algorithm based on differential errors
Adaptive modified backpropagation algorithm based on differential errors
 
40120130406008
4012013040600840120130406008
40120130406008
 
USING LEARNING AUTOMATA AND GENETIC ALGORITHMS TO IMPROVE THE QUALITY OF SERV...
USING LEARNING AUTOMATA AND GENETIC ALGORITHMS TO IMPROVE THE QUALITY OF SERV...USING LEARNING AUTOMATA AND GENETIC ALGORITHMS TO IMPROVE THE QUALITY OF SERV...
USING LEARNING AUTOMATA AND GENETIC ALGORITHMS TO IMPROVE THE QUALITY OF SERV...
 
Analysis_molf
Analysis_molfAnalysis_molf
Analysis_molf
 
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
 
W33123127
W33123127W33123127
W33123127
 
New Approach of Preprocessing For Numeral Recognition
New Approach of Preprocessing For Numeral RecognitionNew Approach of Preprocessing For Numeral Recognition
New Approach of Preprocessing For Numeral Recognition
 
Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...
Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...
Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...
 
Web Spam Classification Using Supervised Artificial Neural Network Algorithms
Web Spam Classification Using Supervised Artificial Neural Network AlgorithmsWeb Spam Classification Using Supervised Artificial Neural Network Algorithms
Web Spam Classification Using Supervised Artificial Neural Network Algorithms
 
RESEARCH ON FUZZY C- CLUSTERING RECURSIVE GENETIC ALGORITHM BASED ON CLOUD CO...
RESEARCH ON FUZZY C- CLUSTERING RECURSIVE GENETIC ALGORITHM BASED ON CLOUD CO...RESEARCH ON FUZZY C- CLUSTERING RECURSIVE GENETIC ALGORITHM BASED ON CLOUD CO...
RESEARCH ON FUZZY C- CLUSTERING RECURSIVE GENETIC ALGORITHM BASED ON CLOUD CO...
 
Ijarcet vol-2-issue-4-1579-1582
Ijarcet vol-2-issue-4-1579-1582Ijarcet vol-2-issue-4-1579-1582
Ijarcet vol-2-issue-4-1579-1582
 
An introduction to deep learning
An introduction to deep learningAn introduction to deep learning
An introduction to deep learning
 
Design of airfoil using backpropagation training with mixed approach
Design of airfoil using backpropagation training with mixed approachDesign of airfoil using backpropagation training with mixed approach
Design of airfoil using backpropagation training with mixed approach
 
Design of airfoil using backpropagation training with mixed approach
Design of airfoil using backpropagation training with mixed approachDesign of airfoil using backpropagation training with mixed approach
Design of airfoil using backpropagation training with mixed approach
 
Path loss prediction
Path loss predictionPath loss prediction
Path loss prediction
 
Web spam classification using supervised artificial neural network algorithms
Web spam classification using supervised artificial neural network algorithmsWeb spam classification using supervised artificial neural network algorithms
Web spam classification using supervised artificial neural network algorithms
 
Adaptive Training of Radial Basis Function Networks Based on Cooperative
Adaptive Training of Radial Basis Function Networks Based on CooperativeAdaptive Training of Radial Basis Function Networks Based on Cooperative
Adaptive Training of Radial Basis Function Networks Based on Cooperative
 
Neural networks
Neural networksNeural networks
Neural networks
 

Mehr von ijsrd.com

IoT Enabled Smart Grid
IoT Enabled Smart GridIoT Enabled Smart Grid
IoT Enabled Smart Gridijsrd.com
 
A Survey Report on : Security & Challenges in Internet of Things
A Survey Report on : Security & Challenges in Internet of ThingsA Survey Report on : Security & Challenges in Internet of Things
A Survey Report on : Security & Challenges in Internet of Thingsijsrd.com
 
IoT for Everyday Life
IoT for Everyday LifeIoT for Everyday Life
IoT for Everyday Lifeijsrd.com
 
Study on Issues in Managing and Protecting Data of IOT
Study on Issues in Managing and Protecting Data of IOTStudy on Issues in Managing and Protecting Data of IOT
Study on Issues in Managing and Protecting Data of IOTijsrd.com
 
Interactive Technologies for Improving Quality of Education to Build Collabor...
Interactive Technologies for Improving Quality of Education to Build Collabor...Interactive Technologies for Improving Quality of Education to Build Collabor...
Interactive Technologies for Improving Quality of Education to Build Collabor...ijsrd.com
 
Internet of Things - Paradigm Shift of Future Internet Application for Specia...
Internet of Things - Paradigm Shift of Future Internet Application for Specia...Internet of Things - Paradigm Shift of Future Internet Application for Specia...
Internet of Things - Paradigm Shift of Future Internet Application for Specia...ijsrd.com
 
A Study of the Adverse Effects of IoT on Student's Life
A Study of the Adverse Effects of IoT on Student's LifeA Study of the Adverse Effects of IoT on Student's Life
A Study of the Adverse Effects of IoT on Student's Lifeijsrd.com
 
Pedagogy for Effective use of ICT in English Language Learning
Pedagogy for Effective use of ICT in English Language LearningPedagogy for Effective use of ICT in English Language Learning
Pedagogy for Effective use of ICT in English Language Learningijsrd.com
 
Virtual Eye - Smart Traffic Navigation System
Virtual Eye - Smart Traffic Navigation SystemVirtual Eye - Smart Traffic Navigation System
Virtual Eye - Smart Traffic Navigation Systemijsrd.com
 
Ontological Model of Educational Programs in Computer Science (Bachelor and M...
Ontological Model of Educational Programs in Computer Science (Bachelor and M...Ontological Model of Educational Programs in Computer Science (Bachelor and M...
Ontological Model of Educational Programs in Computer Science (Bachelor and M...ijsrd.com
 
Understanding IoT Management for Smart Refrigerator
Understanding IoT Management for Smart RefrigeratorUnderstanding IoT Management for Smart Refrigerator
Understanding IoT Management for Smart Refrigeratorijsrd.com
 
DESIGN AND ANALYSIS OF DOUBLE WISHBONE SUSPENSION SYSTEM USING FINITE ELEMENT...
DESIGN AND ANALYSIS OF DOUBLE WISHBONE SUSPENSION SYSTEM USING FINITE ELEMENT...DESIGN AND ANALYSIS OF DOUBLE WISHBONE SUSPENSION SYSTEM USING FINITE ELEMENT...
DESIGN AND ANALYSIS OF DOUBLE WISHBONE SUSPENSION SYSTEM USING FINITE ELEMENT...ijsrd.com
 
A Review: Microwave Energy for materials processing
A Review: Microwave Energy for materials processingA Review: Microwave Energy for materials processing
A Review: Microwave Energy for materials processingijsrd.com
 
Web Usage Mining: A Survey on User's Navigation Pattern from Web Logs
Web Usage Mining: A Survey on User's Navigation Pattern from Web LogsWeb Usage Mining: A Survey on User's Navigation Pattern from Web Logs
Web Usage Mining: A Survey on User's Navigation Pattern from Web Logsijsrd.com
 
APPLICATION OF STATCOM to IMPROVED DYNAMIC PERFORMANCE OF POWER SYSTEM
APPLICATION OF STATCOM to IMPROVED DYNAMIC PERFORMANCE OF POWER SYSTEMAPPLICATION OF STATCOM to IMPROVED DYNAMIC PERFORMANCE OF POWER SYSTEM
APPLICATION OF STATCOM to IMPROVED DYNAMIC PERFORMANCE OF POWER SYSTEMijsrd.com
 
Making model of dual axis solar tracking with Maximum Power Point Tracking
Making model of dual axis solar tracking with Maximum Power Point TrackingMaking model of dual axis solar tracking with Maximum Power Point Tracking
Making model of dual axis solar tracking with Maximum Power Point Trackingijsrd.com
 
A REVIEW PAPER ON PERFORMANCE AND EMISSION TEST OF 4 STROKE DIESEL ENGINE USI...
A REVIEW PAPER ON PERFORMANCE AND EMISSION TEST OF 4 STROKE DIESEL ENGINE USI...A REVIEW PAPER ON PERFORMANCE AND EMISSION TEST OF 4 STROKE DIESEL ENGINE USI...
A REVIEW PAPER ON PERFORMANCE AND EMISSION TEST OF 4 STROKE DIESEL ENGINE USI...ijsrd.com
 
Study and Review on Various Current Comparators
Study and Review on Various Current ComparatorsStudy and Review on Various Current Comparators
Study and Review on Various Current Comparatorsijsrd.com
 
Reducing Silicon Real Estate and Switching Activity Using Low Power Test Patt...
Reducing Silicon Real Estate and Switching Activity Using Low Power Test Patt...Reducing Silicon Real Estate and Switching Activity Using Low Power Test Patt...
Reducing Silicon Real Estate and Switching Activity Using Low Power Test Patt...ijsrd.com
 
Defending Reactive Jammers in WSN using a Trigger Identification Service.
Defending Reactive Jammers in WSN using a Trigger Identification Service.Defending Reactive Jammers in WSN using a Trigger Identification Service.
Defending Reactive Jammers in WSN using a Trigger Identification Service.ijsrd.com
 

Mehr von ijsrd.com (20)

IoT Enabled Smart Grid
IoT Enabled Smart GridIoT Enabled Smart Grid
IoT Enabled Smart Grid
 
A Survey Report on : Security & Challenges in Internet of Things
A Survey Report on : Security & Challenges in Internet of ThingsA Survey Report on : Security & Challenges in Internet of Things
A Survey Report on : Security & Challenges in Internet of Things
 
IoT for Everyday Life
IoT for Everyday LifeIoT for Everyday Life
IoT for Everyday Life
 
Study on Issues in Managing and Protecting Data of IOT
Study on Issues in Managing and Protecting Data of IOTStudy on Issues in Managing and Protecting Data of IOT
Study on Issues in Managing and Protecting Data of IOT
 
Interactive Technologies for Improving Quality of Education to Build Collabor...
Interactive Technologies for Improving Quality of Education to Build Collabor...Interactive Technologies for Improving Quality of Education to Build Collabor...
Interactive Technologies for Improving Quality of Education to Build Collabor...
 
Internet of Things - Paradigm Shift of Future Internet Application for Specia...
Internet of Things - Paradigm Shift of Future Internet Application for Specia...Internet of Things - Paradigm Shift of Future Internet Application for Specia...
Internet of Things - Paradigm Shift of Future Internet Application for Specia...
 
A Study of the Adverse Effects of IoT on Student's Life
A Study of the Adverse Effects of IoT on Student's LifeA Study of the Adverse Effects of IoT on Student's Life
A Study of the Adverse Effects of IoT on Student's Life
 
Pedagogy for Effective use of ICT in English Language Learning
Pedagogy for Effective use of ICT in English Language LearningPedagogy for Effective use of ICT in English Language Learning
Pedagogy for Effective use of ICT in English Language Learning
 
Virtual Eye - Smart Traffic Navigation System
Virtual Eye - Smart Traffic Navigation SystemVirtual Eye - Smart Traffic Navigation System
Virtual Eye - Smart Traffic Navigation System
 
Ontological Model of Educational Programs in Computer Science (Bachelor and M...
Ontological Model of Educational Programs in Computer Science (Bachelor and M...Ontological Model of Educational Programs in Computer Science (Bachelor and M...
Ontological Model of Educational Programs in Computer Science (Bachelor and M...
 
Understanding IoT Management for Smart Refrigerator
Understanding IoT Management for Smart RefrigeratorUnderstanding IoT Management for Smart Refrigerator
Understanding IoT Management for Smart Refrigerator
 
DESIGN AND ANALYSIS OF DOUBLE WISHBONE SUSPENSION SYSTEM USING FINITE ELEMENT...
DESIGN AND ANALYSIS OF DOUBLE WISHBONE SUSPENSION SYSTEM USING FINITE ELEMENT...DESIGN AND ANALYSIS OF DOUBLE WISHBONE SUSPENSION SYSTEM USING FINITE ELEMENT...
DESIGN AND ANALYSIS OF DOUBLE WISHBONE SUSPENSION SYSTEM USING FINITE ELEMENT...
 
A Review: Microwave Energy for materials processing
A Review: Microwave Energy for materials processingA Review: Microwave Energy for materials processing
A Review: Microwave Energy for materials processing
 
Web Usage Mining: A Survey on User's Navigation Pattern from Web Logs
Web Usage Mining: A Survey on User's Navigation Pattern from Web LogsWeb Usage Mining: A Survey on User's Navigation Pattern from Web Logs
Web Usage Mining: A Survey on User's Navigation Pattern from Web Logs
 
APPLICATION OF STATCOM to IMPROVED DYNAMIC PERFORMANCE OF POWER SYSTEM
APPLICATION OF STATCOM to IMPROVED DYNAMIC PERFORMANCE OF POWER SYSTEMAPPLICATION OF STATCOM to IMPROVED DYNAMIC PERFORMANCE OF POWER SYSTEM
APPLICATION OF STATCOM to IMPROVED DYNAMIC PERFORMANCE OF POWER SYSTEM
 
Making model of dual axis solar tracking with Maximum Power Point Tracking
Making model of dual axis solar tracking with Maximum Power Point TrackingMaking model of dual axis solar tracking with Maximum Power Point Tracking
Making model of dual axis solar tracking with Maximum Power Point Tracking
 
A REVIEW PAPER ON PERFORMANCE AND EMISSION TEST OF 4 STROKE DIESEL ENGINE USI...
A REVIEW PAPER ON PERFORMANCE AND EMISSION TEST OF 4 STROKE DIESEL ENGINE USI...A REVIEW PAPER ON PERFORMANCE AND EMISSION TEST OF 4 STROKE DIESEL ENGINE USI...
A REVIEW PAPER ON PERFORMANCE AND EMISSION TEST OF 4 STROKE DIESEL ENGINE USI...
 
Study and Review on Various Current Comparators
Study and Review on Various Current ComparatorsStudy and Review on Various Current Comparators
Study and Review on Various Current Comparators
 
Reducing Silicon Real Estate and Switching Activity Using Low Power Test Patt...
Reducing Silicon Real Estate and Switching Activity Using Low Power Test Patt...Reducing Silicon Real Estate and Switching Activity Using Low Power Test Patt...
Reducing Silicon Real Estate and Switching Activity Using Low Power Test Patt...
 
Defending Reactive Jammers in WSN using a Trigger Identification Service.
Defending Reactive Jammers in WSN using a Trigger Identification Service.Defending Reactive Jammers in WSN using a Trigger Identification Service.
Defending Reactive Jammers in WSN using a Trigger Identification Service.
 

Kürzlich hochgeladen

priority interrupt computer organization
priority interrupt computer organizationpriority interrupt computer organization
priority interrupt computer organizationchnrketan
 
Gravity concentration_MI20612MI_________
Gravity concentration_MI20612MI_________Gravity concentration_MI20612MI_________
Gravity concentration_MI20612MI_________Romil Mishra
 
Cost estimation approach: FP to COCOMO scenario based question
Cost estimation approach: FP to COCOMO scenario based questionCost estimation approach: FP to COCOMO scenario based question
Cost estimation approach: FP to COCOMO scenario based questionSneha Padhiar
 
tourism-management-srs_compress-software-engineering.pdf
tourism-management-srs_compress-software-engineering.pdftourism-management-srs_compress-software-engineering.pdf
tourism-management-srs_compress-software-engineering.pdfchess188chess188
 
22CYT12 & Chemistry for Computer Systems_Unit-II-Corrosion & its Control Meth...
22CYT12 & Chemistry for Computer Systems_Unit-II-Corrosion & its Control Meth...22CYT12 & Chemistry for Computer Systems_Unit-II-Corrosion & its Control Meth...
22CYT12 & Chemistry for Computer Systems_Unit-II-Corrosion & its Control Meth...KrishnaveniKrishnara1
 
Module-1-Building Acoustics(Introduction)(Unit-1).pdf
Module-1-Building Acoustics(Introduction)(Unit-1).pdfModule-1-Building Acoustics(Introduction)(Unit-1).pdf
Module-1-Building Acoustics(Introduction)(Unit-1).pdfManish Kumar
 
Turn leadership mistakes into a better future.pptx
Turn leadership mistakes into a better future.pptxTurn leadership mistakes into a better future.pptx
Turn leadership mistakes into a better future.pptxStephen Sitton
 
Curve setting (Basic Mine Surveying)_MI10412MI.pptx
Curve setting (Basic Mine Surveying)_MI10412MI.pptxCurve setting (Basic Mine Surveying)_MI10412MI.pptx
Curve setting (Basic Mine Surveying)_MI10412MI.pptxRomil Mishra
 
Indian Tradition, Culture & Societies.pdf
Indian Tradition, Culture & Societies.pdfIndian Tradition, Culture & Societies.pdf
Indian Tradition, Culture & Societies.pdfalokitpathak01
 
Immutable Image-Based Operating Systems - EW2024.pdf
Immutable Image-Based Operating Systems - EW2024.pdfImmutable Image-Based Operating Systems - EW2024.pdf
Immutable Image-Based Operating Systems - EW2024.pdfDrew Moseley
 
70 POWER PLANT IAE V2500 technical training
70 POWER PLANT IAE V2500 technical training70 POWER PLANT IAE V2500 technical training
70 POWER PLANT IAE V2500 technical trainingGladiatorsKasper
 
FUNCTIONAL AND NON FUNCTIONAL REQUIREMENT
FUNCTIONAL AND NON FUNCTIONAL REQUIREMENTFUNCTIONAL AND NON FUNCTIONAL REQUIREMENT
FUNCTIONAL AND NON FUNCTIONAL REQUIREMENTSneha Padhiar
 
Novel 3D-Printed Soft Linear and Bending Actuators
Novel 3D-Printed Soft Linear and Bending ActuatorsNovel 3D-Printed Soft Linear and Bending Actuators
Novel 3D-Printed Soft Linear and Bending ActuatorsResearcher Researcher
 
Introduction to Artificial Intelligence: Intelligent Agents, State Space Sear...
Introduction to Artificial Intelligence: Intelligent Agents, State Space Sear...Introduction to Artificial Intelligence: Intelligent Agents, State Space Sear...
Introduction to Artificial Intelligence: Intelligent Agents, State Space Sear...shreenathji26
 
Uk-NO1 Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Exp...
Uk-NO1 Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Exp...Uk-NO1 Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Exp...
Uk-NO1 Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Exp...Amil baba
 
Javier_Fernandez_CARS_workshop_presentation.pptx
Javier_Fernandez_CARS_workshop_presentation.pptxJavier_Fernandez_CARS_workshop_presentation.pptx
Javier_Fernandez_CARS_workshop_presentation.pptxJavier Fernández Muñoz
 
The Satellite applications in telecommunication
The Satellite applications in telecommunicationThe Satellite applications in telecommunication
The Satellite applications in telecommunicationnovrain7111
 
Detection&Tracking - Thermal imaging object detection and tracking
Detection&Tracking - Thermal imaging object detection and trackingDetection&Tracking - Thermal imaging object detection and tracking
Detection&Tracking - Thermal imaging object detection and trackinghadarpinhas1
 

Kürzlich hochgeladen (20)

priority interrupt computer organization
priority interrupt computer organizationpriority interrupt computer organization
priority interrupt computer organization
 
Gravity concentration_MI20612MI_________
Gravity concentration_MI20612MI_________Gravity concentration_MI20612MI_________
Gravity concentration_MI20612MI_________
 
Cost estimation approach: FP to COCOMO scenario based question
Cost estimation approach: FP to COCOMO scenario based questionCost estimation approach: FP to COCOMO scenario based question
Cost estimation approach: FP to COCOMO scenario based question
 
tourism-management-srs_compress-software-engineering.pdf
tourism-management-srs_compress-software-engineering.pdftourism-management-srs_compress-software-engineering.pdf
tourism-management-srs_compress-software-engineering.pdf
 
22CYT12 & Chemistry for Computer Systems_Unit-II-Corrosion & its Control Meth...
22CYT12 & Chemistry for Computer Systems_Unit-II-Corrosion & its Control Meth...22CYT12 & Chemistry for Computer Systems_Unit-II-Corrosion & its Control Meth...
22CYT12 & Chemistry for Computer Systems_Unit-II-Corrosion & its Control Meth...
 
Module-1-Building Acoustics(Introduction)(Unit-1).pdf
Module-1-Building Acoustics(Introduction)(Unit-1).pdfModule-1-Building Acoustics(Introduction)(Unit-1).pdf
Module-1-Building Acoustics(Introduction)(Unit-1).pdf
 
Turn leadership mistakes into a better future.pptx
Turn leadership mistakes into a better future.pptxTurn leadership mistakes into a better future.pptx
Turn leadership mistakes into a better future.pptx
 
Curve setting (Basic Mine Surveying)_MI10412MI.pptx
Curve setting (Basic Mine Surveying)_MI10412MI.pptxCurve setting (Basic Mine Surveying)_MI10412MI.pptx
Curve setting (Basic Mine Surveying)_MI10412MI.pptx
 
Indian Tradition, Culture & Societies.pdf
Indian Tradition, Culture & Societies.pdfIndian Tradition, Culture & Societies.pdf
Indian Tradition, Culture & Societies.pdf
 
Versatile Engineering Construction Firms
Versatile Engineering Construction FirmsVersatile Engineering Construction Firms
Versatile Engineering Construction Firms
 
Immutable Image-Based Operating Systems - EW2024.pdf
Immutable Image-Based Operating Systems - EW2024.pdfImmutable Image-Based Operating Systems - EW2024.pdf
Immutable Image-Based Operating Systems - EW2024.pdf
 
70 POWER PLANT IAE V2500 technical training
70 POWER PLANT IAE V2500 technical training70 POWER PLANT IAE V2500 technical training
70 POWER PLANT IAE V2500 technical training
 
FUNCTIONAL AND NON FUNCTIONAL REQUIREMENT
FUNCTIONAL AND NON FUNCTIONAL REQUIREMENTFUNCTIONAL AND NON FUNCTIONAL REQUIREMENT
FUNCTIONAL AND NON FUNCTIONAL REQUIREMENT
 
Novel 3D-Printed Soft Linear and Bending Actuators
Novel 3D-Printed Soft Linear and Bending ActuatorsNovel 3D-Printed Soft Linear and Bending Actuators
Novel 3D-Printed Soft Linear and Bending Actuators
 
Introduction to Artificial Intelligence: Intelligent Agents, State Space Sear...
Introduction to Artificial Intelligence: Intelligent Agents, State Space Sear...Introduction to Artificial Intelligence: Intelligent Agents, State Space Sear...
Introduction to Artificial Intelligence: Intelligent Agents, State Space Sear...
 
Uk-NO1 Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Exp...
Uk-NO1 Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Exp...Uk-NO1 Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Exp...
Uk-NO1 Black Magic Specialist In Lahore Black magic In Pakistan Kala Ilam Exp...
 
Designing pile caps according to ACI 318-19.pptx
Designing pile caps according to ACI 318-19.pptxDesigning pile caps according to ACI 318-19.pptx
Designing pile caps according to ACI 318-19.pptx
 
Javier_Fernandez_CARS_workshop_presentation.pptx
Javier_Fernandez_CARS_workshop_presentation.pptxJavier_Fernandez_CARS_workshop_presentation.pptx
Javier_Fernandez_CARS_workshop_presentation.pptx
 
The Satellite applications in telecommunication
The Satellite applications in telecommunicationThe Satellite applications in telecommunication
The Satellite applications in telecommunication
 
Detection&Tracking - Thermal imaging object detection and tracking
Detection&Tracking - Thermal imaging object detection and trackingDetection&Tracking - Thermal imaging object detection and tracking
Detection&Tracking - Thermal imaging object detection and tracking
 

Improving Performance of Back propagation Learning Algorithm

  • 1. IJSRD - International Journal for Scientific Research & Development| Vol. 1, Issue 6, 2013 | ISSN (online): 2321-0613 All rights reserved by www.ijsrd.com 1300 Abstract— The standard back-propagation algorithm is one of the most widely used algorithm for training feed- forward neural networks. One major drawback of this algorithm is it might fall into local minima and slow convergence rate. Natural gradient descent is principal method for solving nonlinear function is presented and is combined with the modified back-propagation algorithm yielding a new fast training multilayer algorithm. This paper describes new approach to natural gradient learning in which the number of parameters necessary is much smaller than the natural gradient algorithm. This new method exploits the algebraic structure of the parameter space to reduce the space and time complexity of algorithm and improve its performance. I. INTRODUCTION The back-propagation (BP) training algorithm is a supervised learning method for multi-layered feed-forward neural networks. It is essentially a gradient descent local optimization technique which involves backward error correction of network weights. Despite the general success of back-propagation method in the learning process, several major deficiencies are still needed to be solved. The convergence rate of back-propagation is very low and hence it becomes unsuitable for large problems. Furthermore, the convergence behavior of the back-propagation algorithm depends on the choice of initial values of connection weights and other parameters used in the algorithm such as the learning rate and the momentum term. Amari has developed natural gradient learning for multilayer perceptrons [18], which uses Quasi-Newton method [6] instead of the steepest descent direction. The Fisher information matrix is a technique used to estimate hidden parameters in terms observed random variables. It fits very nicely into Quasi-Newton optimization framework. This paper suggests that a simple modification to the initial search direction, in the above algorithm i.e. changing the gradient of error with respect to weights, to improve the training efficiency. It was discovered that if the gradient based search direction is locally modified by a gain value used in the activation function of the corresponding node, significant improvements in the convergence rates can be achieved [24]. II. BACKPROPAGATION LEARNING ALGORITHM An artificial neural network consist of input vector x and gives output y. when network has m hidden units, the output of hidden layer is φ( wα · x), α = 1,. . .,m where wα is an n dimensional connection weight vector from input to the α-th hidden unit, and φ is a sigmoidal output function. Let vα be a connection weight from the α-th hidden unit to the linear output unit and let ζ be a bias. Then the output of the neural network is written as ∑ ( ) (1) Any perceptron is specified by the parameter { w1, . . . ,wα; v}. We summarize them into a single m(n+1) dimensional vector θ. We call the space S consisting of all multilayer neurons. The parameter θ plays the role of a coordinate system of S. The vector θ of dimension m(n+1) can represent a single neuron. The output of a neuron is a random variable depends on input x. Hence the input output relation of the neuron having parameter θ is described by the conditional probability of output y on input x, ( | ) √ [ * ( )+ ] (2) Where ( ) ∑ ( ) (3) is the mean value of y given input x. 
Its logarithm is ( | ) * ( )+ (√ ) (4) This can be regarded as the negative of the square of an error when y is a target value and f(x,θ) is the output of the network. Hence, the maximization of the likelihood is equivalent to the minimization of the square error ( ) * ( )+ (5) The conventional on-line learning method modifies the current parameter θt by using the gradient ( ) of the loss function such that θt+1 = θt - ηt (xt ,y*t ;θt ) (6) here ηt is a learning rate, and ( ) { ( )} (7) is the gradient of the loss function l* and y*t is the desired output signal given from teacher. The steepest descent direction of the loss function l*(θ) in a Riemannian space is given [18] by ( ) ( ) ( ) (8) Where G-1 is the inverse of a matrix G = (gij) called the Riemannian metric tensor. This gradient is called natural gradient of the loss function l*(θ) in the Riemannian space. III. NATURAL GRADIENT LEARNING ALGORITHM In the multilayer neural network, the Riemannian metric tensor G(θ)= (gij(θ)) is given by the Fisher information matrix[18], gij(θ)= E[ ( | ) ( | ) ] (9) where E denotes expectation with respect to the input output pair (x,y) given in Eq.(2). The natural gradient learning algorithm updates the current θt by Improving Performance of Back propagation Learning Algorithm Harikrishna B Jethva1 Dr. V. M. Chavda2 1 Ph. D. Scholar,Department of Computer Science and Engineering 1 Bhagwant University, Sikar Road Ajmer, Rajasthan 2 SVICS, Kadi, Gujarat
The natural gradient learning algorithm updates the current θ_t by

θ_{t+1} = θ_t − η_t G⁻¹(θ_t) ∇l(x_t, y_t; θ_t)    (10)

IV. ADAPTIVE IMPLEMENTATION OF NATURAL GRADIENT LEARNING

The Fisher information G(θ) depends on the probability distribution of x, which is usually unknown; hence it is difficult to obtain G(θ). Moreover, its inversion is costly. Here we show an adaptive method of directly estimating G⁻¹(θ) [5]. The Fisher information of Eq. (9) can be rewritten, by using Eq. (4), as

G_t = E[ ∇ log p(y | x, θ_t) ∇ log p(y | x, θ_t)′ ] = E[ ∇f(x, θ_t) ∇f(x, θ_t)′ ]    (11)

where ′ denotes transposition of a vector or matrix. We have the following recursive estimation of G⁻¹ [23]:

Ĝ⁻¹_{t+1} = (1 + ε_t) Ĝ⁻¹_t − ε_t (Ĝ⁻¹_t ∇f_t)(Ĝ⁻¹_t ∇f_t)′    (12)

where ε_t is a small learning rate, ∇f_t = ∂f(x_t, θ_t)/∂θ and f_t = f(x_t, θ_t). Together with

θ_{t+1} = θ_t − η_t Ĝ⁻¹_t ∇l(x_t, y_t; θ_t)    (13)

this gives the adaptive method of natural gradient learning. This is different from the Newton method, but it can be regarded as an adaptive version of the Gauss–Newton method. Moreover, information geometry suggests important geometric properties of hierarchical statistical models in general.

V. EXPERIMENTAL RESULTS

We conducted an experiment comparing the convergence speeds of the conventional Natural Gradient Learning (NGL) algorithm and the Adaptive Natural Gradient Learning (ANGL) algorithm. We take the XOR problem because it is not linearly separable. We use a network with two hidden units and a hyperbolic tangent transfer function in both the hidden and output units. The inputs are the four XOR patterns x_0, x_1, x_2 and x_3, and the corresponding target outputs are y_0 = −1, y_1 = 1, y_2 = 1 and y_3 = −1, respectively. Thus the error for each pattern is

ε_n = {y_n − tanh(W_2 tanh(W_1 x_n + b_1) + b_2)}²    (14)

There are two hidden units and each layer has a bias; hence W_1 is a 2-by-2 matrix and W_2 is a 1-by-2 matrix. Performance is compared using the sum-squared-error (SSE) metric. Neural network training algorithms are very sensitive to the learning rate, so for the NGL algorithm we use a step size normalized by the gradient norm, η/‖∇l‖. An interesting point of comparison is the relative step size of the two algorithms: for ANGL, the effective learning rate is the product of the learning rate η and the largest eigenvalue of G⁻¹. Figures 1 and 2 show the sum squared error at each learning epoch for NGL and ANGL. Table 1 shows the parameters used in the two learning algorithms and some results of the experiment; a sketch of this training setup is given after the table.

Parameter                          NGL      ANGL
Hidden units                       2        2
Learning rate                      0.25     0.25
Adaptation rate                    N.A.     0.1
Learning epochs when SSE < 0.02    10000    320
Final SSE                          0.0817   3.55e-4
Final learning rate                1e-4     0.144
Table 1: Result of the XOR experiment and parameters used
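As a companion to Table 1, here is a minimal NumPy sketch of the ANGL setup of this section: the 2-2-1 tanh network of Eq. (14) trained with the recursive G⁻¹ estimate of Eq. (12) and the update of Eq. (13). It is a sketch under stated assumptions, not the authors' implementation: the ±1 input encoding, the random initialization, the identity initialization of the G⁻¹ estimate, and the stopping test are our choices, and the step-size normalization and eigenvalue tracking discussed above are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR task of Section V: 2-2-1 network, tanh in both layers, biases in both layers
# (W1 is 2-by-2, W2 is 1-by-2). The ±1 input encoding is assumed, matching the ±1 targets.
X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
Y = np.array([-1., 1., 1., -1.])

W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(1, 2)); b2 = np.zeros(1)

def grad_output(x):
    """Network output and its gradient with respect to all nine parameters."""
    h = np.tanh(W1 @ x + b1)
    o = np.tanh(W2 @ h + b2)[0]
    do = 1.0 - o ** 2                      # derivative through the output tanh
    dW2 = do * h
    db2 = np.array([do])
    dh = do * W2[0] * (1.0 - h ** 2)       # back-propagated through the hidden tanh
    dW1 = np.outer(dh, x)
    db1 = dh
    return o, np.concatenate([dW1.ravel(), db1, dW2, db2])

eta, eps = 0.25, 0.1                       # learning and adaptation rates from Table 1
G_inv = np.eye(9)                          # running estimate of G⁻¹ (assumed initialization)

for epoch in range(1000):
    sse = 0.0
    for x, y in zip(X, Y):
        o, g = grad_output(x)
        Gg = G_inv @ g
        G_inv = (1.0 + eps) * G_inv - eps * np.outer(Gg, Gg)   # recursion of Eq. (12)
        step = -eta * (G_inv @ ((o - y) * g))                  # update of Eq. (13)
        W1 += step[0:4].reshape(2, 2); b1 += step[4:6]
        W2 += step[6:8].reshape(1, 2); b2 += step[8:9]
        sse += (y - o) ** 2                                    # per-pattern error of Eq. (14)
    if sse < 0.02:                         # same stopping level reported in Table 1
        break
```

The inner loop mirrors the on-line updates of Eqs. (12) and (13); whether a particular run reaches SSE < 0.02 within a given number of epochs depends on the random initialization.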
VI. CONCLUSION

Natural gradient descent learning works well for many problems. Amari [18] developed an algorithm that avoids local minima by following the curvature of a manifold in the parameter space of the neuron. By using a recursive estimate of the inverse of the Fisher information matrix of the parameters, the algorithm is able to accelerate learning in the direction of descent. The experiment has shown that the performance of the natural gradient algorithm is improved by using the adaptive gradient method of learning. There are many areas of research to which this work can be applied, such as speech recognition.

REFERENCES

[1] D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing. Cambridge, MA: MIT Press, 1986.
[2] D. O. Hebb, The Organization of Behavior. New York: John Wiley & Sons, 1949.
[3] D. J. C. MacKay, Information Theory, Inference, and Learning Algorithms. New York: Cambridge University Press, 2003.
[4] F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, DC: Spartan Books, 1962.
[5] H. Park, S. Amari, and K. Fukumizu, "Adaptive natural gradient learning algorithms for various stochastic models," Neural Networks, vol. 13, no. 7, pp. 755–764, 2000.
[6] J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques. Addison-Wesley Publishing Company, 1991.
[7] J. Go, G. Han, and H. Kim, "Multigradient: a new neural network learning algorithm for pattern classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 5, May 2001.
[8] K. Fukumizu and S. Amari, "Local minima and plateaus in hierarchical structures of multilayer perceptrons," Brain Science Institute, The Institute of Physical and Chemical Research (RIKEN), Oct. 22, 1999.
[9] K. Burse, M. Manoria, and V. P. S. Kirar, "Improved back propagation algorithm to avoid local minima in multiplicative neuron model," World Academy of Science, Engineering and Technology, 72, 2010.
[10] M. Abramowitz and I. A. Stegun, Eds., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Washington, DC: US Government Printing Office, 1972.
[11] M. Biehl and H. Schwarze, "Learning by on-line gradient descent," Journal of Physics A, vol. 28, pp. 643–656, 1995.
[12] N. Murata, "A statistical study of on-line learning," in On-line Learning in Neural Networks, D. Saad, Ed., pp. 63–92. New York: Cambridge University Press, 1999.
[13] N. M. Nawi, M. R. Ransing, and R. S. Ransing, "An improved learning algorithm based on the conjugate gradient method for back propagation neural networks," International Journal of Applied Science, Engineering and Technology (www.waset.org), Spring 2006.
[14] R. Rojas, Neural Networks, ch. 7. New York: Springer-Verlag, 1996.
[15] RIKEN Brain Science Institute (RIKEN BSI), Japan, http://www.brain.riken.jp/
[16] R. A. Fisher, "On the mathematical foundations of theoretical statistics," Philosophical Transactions of the Royal Society of London, vol. 222, pp. 309–368, 1922.
[17] S. Amari, "Neural learning in structured parameter spaces — natural Riemannian gradient," in Advances in Neural Information Processing Systems, M. C. Mozer, M. I. Jordan, and T. Petsche, Eds., vol. 9, p. 127. Cambridge, MA: The MIT Press, 1997.
[18] S. Amari, "Natural gradient works efficiently in learning," Neural Computation, vol. 10, no. 2, pp. 251–276, 1998.
[19] S. Amari, H. Park, and T. Ozeki, "Geometrical singularities in the neuromanifold of multilayer perceptrons," in Advances in Neural Information Processing Systems, vol. 14. Cambridge, MA: MIT Press, 2002.
[20] S. Amari and H. Nagaoka, Methods of Information Geometry, Translations of Mathematical Monographs, vol. 191. New York: Oxford University Press, 2000.
[21] S. Haykin, Neural Networks: A Comprehensive Foundation, 7th ed. Pearson Education, 2007.
[22] T. Heskes and B. Kappen, "On-line learning processes in artificial neural networks," in Mathematical Foundations of Neural Networks, J. Taylor, Ed., pp. 199–233. Amsterdam, Netherlands: Elsevier, 1993.
[23] T. K. Moon and W. Stirling, Mathematical Methods and Algorithms for Signal Processing. Prentice Hall, 1999.
[24] Weixing, Xugang, and Zheng Tang, "Avoiding the local minima problem in backpropagation algorithm with modified error function," IEICE Transactions on Fundamentals, vol. E88-A, no. 12, Dec. 2005.

Fig. 1: The sum squared error of NGL.
Fig. 2: The sum squared error of ANGL.