IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 13, Issue 4 (Jul. - Aug. 2013), PP 34-38
www.iosrjournals.org
Using Multi-layered Feed-forward Neural Network (MLFNN)
Architecture as Bidirectional Associative Memory (BAM) for
Function Approximation
Manisha Singh¹, Somesh Kumar²
¹ M.Tech. (CSE) Student
² Professor, Noida Institute of Engineering & Technology, Greater Noida, India
Abstract: Function approximation is the task of finding the underlying relationship in a given finite set of input-output data. It has numerous applications such as prediction, pattern recognition, data mining and classification. Multi-layered feed-forward neural networks (MLFNNs) trained with the back propagation algorithm have been used extensively for function approximation. Another class of neural networks, the BAM, has also been applied to the same problem with many variations. In the present paper we propose an application of the back propagation algorithm to an MLFNN in such a way that it works like a BAM, and the results presented show faster and more accurate approximation for the example function.
Keywords: Neural networks, Multilayered feed-forward neural network (MLFNN), Bidirectional Associative
Memory (BAM), Function approximation
I. INTRODUCTION
Capturing the implicit relationship between input and output patterns, such that when a test input pattern is given the corresponding output of the generalized system is retrieved, is called the problem of pattern mapping. The objective of the pattern mapping problem is therefore to capture the implied function and mimic its behavior. Alternatively, function approximation is the task of learning or constructing a function that, based on the available training data, generates approximately the same outputs from input vectors as the process being modeled. The approximate mapping system should give an output that is fairly close to the values of the real function for inputs close to those used during learning. Multiple methods have been established as function approximation tools, among which artificial neural networks (ANNs) have recently been very popular. A trained neural network is expected to capture the system characteristics in its weights. Such a network is said to have generalized from the training data if, for a new input, it produces the same output that the system would have produced.
The two main categories of networks used for the pattern mapping (and thus function approximation) task are multilayered feed-forward neural networks trained using the back-propagation algorithm [1-2], and the bidirectional associative memory (BAM) [3] trained using the Hebb rule. By design, an MLFNN with hidden layers is intended for classification purposes and hence, when used for a pattern mapping task, suffers from the problem of generalization. Generalization is possible only if the mapping function satisfies certain constraints.
Otherwise, the problem of capturing the mapping function from the training data set becomes an ill-posed problem [Tikhonov and Arsenin, 1977]. To overcome this, radial basis function networks (RBFNs) and counter propagation neural networks (CPNs) have been used; the weight-modification algorithms they employ distinguish such networks from back propagation. In back-propagation, the error and the corresponding weight modifications are propagated backwards from the output layer to the input layer. Different training samples may exert opposite forces in increasing or decreasing the same weight, and the net effect is that weight changes are often of extremely small magnitude in networks with learning rates small enough to give some assurance of converging to a local minimum of the mean squared error.
II. RELATED WORK
Function approximation using feed-forward neural networks (FNNs) has been studied and discussed in detail by many authors in the literature (Cybenko, 1989; Hecht-Nielsen, 1989; Carroll and Dickinson, 1989; Hornik, 1990, 1993; Park and Sandberg, 1991, 1993; Barron, 1993). FNNs are established structures capable of approximating generic classes of functions, including continuous and bounded ones. The structures studied varied in the number of hidden layers for a particular function, and different classes of FNNs were defined for approximating different classes of functions according to a set of approximation criteria. It is well known that a two-layered FNN, i.e. one that does not have any hidden layers, is not capable of approximating generic nonlinear
continuous functions (Widrow, 1990). On the other hand, FNNs with four or more layers are rarely used in practice. Hence, almost all of the work deals with the most challenging issue: the approximation capability of three-layered FNNs.
The dynamics of multilayer feed-forward neural networks are simpler than those of recurrent neural networks [4-6] and BAMs. Since the Hebbian learning rule is used in the BAM for storage and recall, it has limited storage and mapping capability. Substantial work ([7]-[24]) has been reported in the literature to overcome this limitation. In the high-order BAM [7], high-order nonlinearity is applied to forward and
to overcome this limitation. In the high-order BAM [7], high-order nonlinearity is applied to forward and
backward information flows to increase the memory capacity and improve error correction capability. The
exponential BAM [8] employs an exponential scheme of information flow to exponentially enhance the
similarity between an input pattern and its nearest stored pattern. It improves the storage capacity and error
correcting capability of the BAM, and has a good convergence property. In [9], a high capacity fuzzy
associative memory (FAM) for multiple rule storage is described. A weighted-pattern learning algorithm for
BAM is described in [10] by means of global minimization. As inspired by the perceptron-learning algorithm,
an optimal learning scheme for a class of BAM is advanced [11], which has superior convergence and stability
properties. In [12], the synthesis problem of bidirectional associative memories is formulated as a set of linear
inequalities that can be solved using the perceptron training algorithm. A multilayer recursive neural network
with symmetrical interconnections is introduced in [13] for improved recognition performance under noisy
conditions and for increased storage capacity. In [15], a new bidirectional hetero-associative memory is defined
which encompasses correlational, competitive and topological properties and is capable of increasing its clustering capability. Other related work can be found in [16], [17], [18] and [19], in which the issue of improving the performance of the BAM is addressed either by adding dummy neurons, increasing the number of layers, or manipulating the interconnections among the neurons in each layer. New learning algorithms have also been introduced to improve the performance of the original BAM; these can be found in detail in [20]-[24].
III. PROPOSED MODEL
All the methods discussed in section II use algorithms which work only in one direction, i.e. the forward direction; the error thus calculated is propagated backward and the weights are adjusted accordingly. We propose a method based on the back propagation algorithm which works in two phases: Phase-I and Phase-II. Since in both phases the error is calculated and the weights are adjusted accordingly, once in the forward direction and then in the backward direction, there is a significant improvement in convergence. In this section we outline the theory behind the proposed approach.
3.1 Bidirectional Associative Memory (BAM)
The bidirectional associative memory is a hetero-associative, content-addressable memory. In this type of memory the neurons in one layer are fully connected to the neurons of the second layer. In fact, if we take a feed-forward network with one hidden layer in which the layers are fully interconnected (Fig. 1) and perform error minimization using the back propagation algorithm, this corresponds to a BAM. The weights are taken to be symmetric.
Figure 1: A typical 1-n-1 feed-forward architecture as Bidirectional Associative Memory
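For orientation, the classical (Kosko-style) BAM stores bipolar pattern pairs with Hebbian outer products and recalls them by iterating between the two layers. The NumPy sketch below is a generic illustration of that recall scheme using assumed example patterns; it is not the architecture of Fig. 1:

import numpy as np

# Classical BAM (illustrative): bipolar pairs stored by Hebbian outer products
# and recalled by passing activations back and forth between the two layers.
X = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])   # layer-A patterns (example data)
Y = np.array([[1, -1], [-1, 1]])                 # associated layer-B patterns
W = sum(np.outer(x, y) for x, y in zip(X, Y))    # Hebbian (outer-product) weights

def threshold(net, prev):
    # Standard BAM update: keep the previous bit when the net input is zero
    return np.where(net > 0, 1, np.where(net < 0, -1, prev))

def recall(x, steps=5):
    y = threshold(x @ W, np.ones(W.shape[1], dtype=int))
    for _ in range(steps):                       # iterate until the pair stabilises
        x = threshold(W @ y, x)
        y = threshold(x @ W, y)
    return x, y

print(recall(np.array([1, -1, 1, -1])))          # recovers the stored pair (X[0], Y[0])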
3.2 The standard BP Algorithm for 1-n-1 feed-forward architecture
• Create a feed-forward network with one input, n hidden units and one output.
• Initialize all the weights to small random values.
• Repeat until (Eavg <= τ), where τ is the tolerance level of error:
1. Input pattern Xk from the training set and compute the corresponding output Sy
2. For the corresponding output, calculate the error as
δo = Sy (D - Sy) (1 - Sy)
3. For each hidden unit h, calculate
δh = Sh (1 - Sh) Whj δo
4. Update each network weight between hidden and output layer Whj as follows:
Whj = Whj + η ∆Whj + α ∆Whjold
where ∆Whj = Sh * δo and ∆Whjold = ∆Whj
5. Update each network weight between input and hidden layer Wih as follows:
Wih = Wih + η ∆Wih + α ∆Wihold
where ∆Wih = δh * Xk and ∆Wihold = ∆Wih
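A minimal NumPy sketch of this procedure, assuming logistic (sigmoid) units and per-pattern updates with momentum, is given below; the function and variable names are illustrative and not taken from the paper:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_standard_bp(X, D, n_hidden, eta=0.9, alpha=0.6, tol=0.005,
                      max_epochs=10000, seed=0):
    # Standard BP for a 1-n-1 network with logistic units and momentum.
    rng = np.random.default_rng(seed)
    W_ih = rng.uniform(-0.5, 0.5, n_hidden)        # input -> hidden weights
    W_hj = rng.uniform(-0.5, 0.5, n_hidden)        # hidden -> output weights
    dW_ih_old = np.zeros(n_hidden)
    dW_hj_old = np.zeros(n_hidden)
    for epoch in range(1, max_epochs + 1):
        errors = []
        for x, d in zip(X, D):
            S_h = sigmoid(W_ih * x)                # hidden activations
            S_y = sigmoid(np.dot(W_hj, S_h))       # network output
            delta_o = S_y * (d - S_y) * (1.0 - S_y)          # output error term
            delta_h = S_h * (1.0 - S_h) * W_hj * delta_o     # hidden error terms
            dW_hj = S_h * delta_o
            dW_ih = delta_h * x
            W_hj += eta * dW_hj + alpha * dW_hj_old          # update with momentum
            W_ih += eta * dW_ih + alpha * dW_ih_old
            dW_hj_old, dW_ih_old = dW_hj, dW_ih
            errors.append(0.5 * (d - S_y) ** 2)
        if np.mean(errors) <= tol:                 # stop once Eavg <= tau
            return epoch, W_ih, W_hj
    return max_epochs, W_ih, W_hj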
3.3 The proposed Two-Phase BP Algorithm
Phase-I
The standard BP algorithm
Phase-II
• Repeat until (Eavg <= τ), where τ is the tolerance level of error:
1. Whi = W′ih and Wjh = W′hj
2. Input pattern Dk for the corresponding Xk and compute the corresponding output Su
3. For the corresponding output Su, calculate the error as
δi = Su (X - Su) (1 - Su)
4. For each hidden unit h, calculate
δh = Sh (1 - Sh) Whi δi
5. Update each network weight between hidden and input layer Whi as follows:
Whi = Whi + η ∆Whi + α ∆Whiold
where ∆Whi = Sh * δi and ∆Whiold = ∆Whi
6. Update each network weight between output and hidden layer Wjh as follows:
Wjh = Wjh + η ∆Wjh + α ∆Wjhold
where ∆Wjh = δi * Dk and ∆Wjhold = ∆Wjh
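Read together with section IV (where one epoch comprises both a forward and a backward pass over all patterns), one plausible NumPy rendering of the two-phase scheme is sketched below, reusing the sigmoid helper from the previous sketch. The interleaving of the two phases, the omission of the phase-II momentum terms, and all names are assumptions made for illustration:

def train_two_phase_bp(X, D, n_hidden, eta=0.9, alpha=0.6, tol=0.005,
                       max_epochs=10000, seed=0):
    # Two-phase BP: a forward pass/update as in section 3.2, followed by a
    # backward pass in which the target is presented at the output side and
    # the input is reconstructed, updating the shared weights a second time.
    rng = np.random.default_rng(seed)
    W_ih = rng.uniform(-0.5, 0.5, n_hidden)    # shared input <-> hidden weights
    W_hj = rng.uniform(-0.5, 0.5, n_hidden)    # shared hidden <-> output weights
    dW_ih_old = np.zeros(n_hidden)
    dW_hj_old = np.zeros(n_hidden)
    for epoch in range(1, max_epochs + 1):
        errors = []
        for x, d in zip(X, D):
            # Phase-I: forward direction (same step as the standard BP algorithm)
            S_h = sigmoid(W_ih * x)
            S_y = sigmoid(np.dot(W_hj, S_h))
            delta_o = S_y * (d - S_y) * (1.0 - S_y)
            delta_h = S_h * (1.0 - S_h) * W_hj * delta_o
            dW_hj, dW_ih = S_h * delta_o, delta_h * x
            W_hj += eta * dW_hj + alpha * dW_hj_old
            W_ih += eta * dW_ih + alpha * dW_ih_old
            dW_hj_old, dW_ih_old = dW_hj, dW_ih
            errors.append(0.5 * (d - S_y) ** 2)
            # Phase-II: backward direction, target d presented at the output side
            S_hb = sigmoid(W_hj * d)                 # hidden activations from d
            S_u = sigmoid(np.dot(W_ih, S_hb))        # reconstructed input
            delta_i = S_u * (x - S_u) * (1.0 - S_u)
            delta_hb = S_hb * (1.0 - S_hb) * W_ih * delta_i
            W_ih += eta * S_hb * delta_i             # hidden <-> input update
            W_hj += eta * delta_hb * d               # output <-> hidden update
        if np.mean(errors) <= tol:
            return epoch, W_ih, W_hj
    return max_epochs, W_ih, W_hj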
IV. SIMULATION DESIGN AND EXPERIMENTS
A three-layered feed-forward neural network has been created for the experimentation. Only one neuron has been considered at each of the input and output layers, since we are approximating the single-variable function sin(x). The hidden layer consists of a variable number of neurons, ranging from 2 to 10. A summary of the various parameters used is shown in Table 1.
At different learning rates and a fixed momentum value (0.6), the convergence of the function has been studied with tolerance value 0.05 using both algorithms discussed in sections 3.2 and 3.3 - standard back propagation and the proposed two-phase back propagation algorithm (which lets the MLFNN work as a BAM). The results have been tabulated in Table 2. One epoch in the standard BP algorithm means that all the training patterns have been presented once, while one epoch in the proposed two-phase BP algorithm means that both the forward and backward phases have run once for all patterns.
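As an illustration only, the experiments could be driven by a parameter sweep of the kind below. It assumes the train_standard_bp and train_two_phase_bp sketches given earlier; the particular 5 sample points and the rescaling of sin(x) into the (0,1) range of the logistic output are assumptions, since the paper does not specify them:

X = np.linspace(0.1, 0.9, 5)                 # 5 training inputs (assumed values)
D = 0.5 * (np.sin(2 * np.pi * X) + 1.0)      # sin(x) rescaled into (0,1) (assumed)

for n_hidden in range(2, 11):                # hidden layer sizes 2..10 (Table 1)
    for eta in (0.7, 0.8, 0.9, 1.0):         # learning rates (Table 1)
        e_ffn, _, _ = train_standard_bp(X, D, n_hidden, eta=eta, alpha=0.6, tol=0.005)
        e_bam, _, _ = train_two_phase_bp(X, D, n_hidden, eta=eta, alpha=0.6, tol=0.005)
        print(f"1-{n_hidden}-1  eta={eta}  FFN epochs: {e_ffn}  BAM epochs: {e_bam}")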
Table 1: The parameters used in simulation
Parameter Number/Values
Input Neurons 1
Hidden Neurons 2/3/4/5/6/7/8/9/10
Number of Patterns 5
Learning rate, η 0.7/0.8/0.9/1.0
Momentum, α 0.6
Tolerance, τ 0.005
Table 2: Number of epochs required for convergence for approximating the function y=sin(x).
Architecture              η=0.7         η=0.8         η=0.9         η=1.0
(input-hidden-output)    FFN   BAM     FFN   BAM     FFN   BAM     FFN   BAM
1-2-1                   4480  1268    4126  1184    3759  1095    3232  1041
1-3-1                   2562   532    2640   461    2398   424    2061   370
1-4-1                   2201   487    2252   387    2071   346    1597   353
1-5-1                   2194   423    1837   399    1445   381    1524   342
1-6-1                   1354   419    1267   336    1229   348    1264   330
1-7-1                   1459   406    1368   377    1363   322    1407   329
1-8-1                   1742   395    1785   329    1380   349    1444   320
1-9-1                   1309   347    1102   312    1025   300     839   276
1-10-1                  1226   343    1022   308     950   494     801   272
1-11-1                  1270   376    1133   327     998   572     915   312
Figure 2: Two typical plots of the instantaneous squared error on an epoch by epoch basis as training proceeds
V. RESULTS AND DISCUSSION
The results shown in Table 2 clearly indicate that the proposed two-phase back propagation algorithm for the BAM architecture outperforms the MLFNN architecture with the standard back propagation algorithm. The number of epochs taken for convergence on the chosen function is substantially lower in the case of the proposed two-phase BP algorithm. Moreover, as the number of neurons in the hidden layer is increased from 2 to 10, convergence is achieved earlier, i.e. in fewer epochs, in both cases. Another observation is that increasing the learning rate from 0.7 to 1.0 also leads to convergence in fewer epochs. The results show that the best suited BAM architecture for the chosen function sin(x) is 1-10-1 with learning rate 1.0 and momentum 0.6.
VI. CONCLUSION AND FUTURE SCOPE
A new two-phase BP algorithm has been presented which, when applied to a multi-layered feed-forward neural network (MLFNN), lets it behave like a bidirectional associative memory (BAM). As discussed above, the proposed algorithm is better suited for function approximation and the results obtained are quite encouraging. The work is still at an early stage, and more experiments with the various parameters, such as the number of hidden layers with a variable number of neurons in each hidden layer, the number of inputs, and the value of momentum, are to be undertaken, and their effect on achieving convergence is to be studied.
References
[1] Rumelhart D.E., Hinton G.E., and Williams R.J., Learning Internal Representations by Error Propagation, in Parallel Distributed Processing, Vol. I: Foundations, MIT Press, Cambridge, MA, pages 318-364, 1986.
[2] Tang C.Z. and Kwan H.K., Parameter effects on convergence speed and generalization capability of back-propagation algorithm, International Journal of Electronics, 74(1): 35-46, January 1993.
[3] Kosko B., Neural Networks and Fuzzy Systems (Prentice-Hall, Inc., New Jersey, 1992)
[4] Pearlmutter B.A., Gradient calculations for dynamic recurrent neural networks: a survey, IEEE Trans. on Neural Networks, 6(5):1212-1228, Sept. 1995
[5] Prokhorov D.V., Feldkamp L.A., and Tyukin I.Y., Adaptive behavior with fixed weights in RNN: an overview, Proceedings of International Joint Conference on Neural Networks, 2002, pages 2018-2022
[6] Kwan H.K. and Yan J., Second-order recurrent neural network for word sequence learning, Proceedings of 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, May 2-4, 2001, pages 405-408.
[7] Tai H.-M., Wu C.-H., and Jong T.-L., "High-order bidirectional associative memory", Electronics Letters, 25(21):1424-1425, 12th Oct. 1989.
[8] Jeng Y.-J., Yeh C.-C., and Chiueh T.D., "Exponential bidirectional associative memories", Electronics Letters, 26(11):717-718, 24th May 1990.
[9] Chung F.L. and Lee Tong, "On fuzzy associative memory with multiple-rule storage capacity", IEEE Trans. on Fuzzy Systems, 4(3):375-384, August 1996.
[10] Wang T., Zhuang X., and Xing X., "Weight learning of bidirectional associative memories by global minimization", IEEE Trans. on Neural Networks, 3(6):1010-1018, Nov. 1992.
[11] Shanmukh K. and Venkatesh Y.V., "Generalized scheme for optimal learning in recurrent neural networks", IEE Proc.: Vision, Image, and Signal Processing, 142(2):71-77, April 1995.
[12] Salih I., Smith S.H., and Liu D., "Design of bidirectional associative memories based on the perceptron training technique", Proceedings of IEEE International Symposium on Circuits and Systems, 1999, Vol. 5, pages 355-358.
[13] Kwan H.K. and Tsang P.C., "Multi-layer recursive neural network", Proceedings of Canadian Conference on Electrical and Computer Engineering, Montreal, Canada, September 17-20, 1989, Vol. 1, pages 428-431.
[14] Widrow B. and Winter R., "Neural Nets for Adaptive Filtering and Adaptive Pattern Recognition", IEEE Computer Magazine, pages 25-39, March 1988.
[15] Chartier S., Giguere G. and Langlois D., “A new bidirectional heteroassociative memory encompassing correlational, competitive
and topological properties”, Neural Networks 22 (2009), pp 568-578, 2009
[16] Wang Y.F., Cruz J.B. and Mulligan J.H., “Two coding strategies for bidirectional associative memory”, IEEE Trans. Neural
Networks, vol. 1, no. 1, pp 81-92, 1990
[17] Wang Y.F., Cruz J.B. and Mulligan J.H., “Guaranteed recall of all training pairs for bidirectional associative memory”, IEEE Trans.
Neural Networks, vol. 2, no. 6, pp 559-567, 1991
[18] Kang H., "Multilayer associative neural networks (MANNs): Storage capacity versus perfect recall", IEEE Trans. Neural Networks, vol. 5, pp 812-822, 1994
[19] Wang Z., "A bidirectional associative memory based on optimal linear associative memory", IEEE Trans. Neural Networks, vol. 45, pp 1171-1179, Oct. 1996
[20] Hassoun M.H. and Youssef A.M., “A high performance recording algorithm for Hopfield model associative memories”, Opt. Eng.,
vol. 27, no. , pp 46-54, 1989
[21] Simpson P.K., “Higher-ordered and interconnected bidirectional associative memories”, IEEE Trans. Syst. Man. Cybern., vol. 20,
no. 3, pp 637-652, 1990
[22] Zhuang X., Huang Y. and Chen S.S., "Better learning for bidirectional associative memory", Neural Networks, vol. 6, no. 8, pp 1131-1146, 1993
[23] Oh H. and Kothari S.C., “Adaptation of the relaxation method for learning bidirectional associative memory”, IEEE Trans. Neural
Networks, vol.5, pp 573-583, July 1994
[24] Wang C.C. and Don H.S., “An analysis of high-capacity discrete exponential BAM”, IEEE Trans. Neural Networks, vol. 6, no. 2,
pp 492-496, 1995