World Academy of Science, Engineering and Technology
International Journal of Computer, Information Science and Engineering Vol:1 No:3, 2007

Levenberg-Marquardt Algorithm for Karachi
Stock Exchange Share Rates Forecasting
Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, Cemal Ardil

Abstract— Financial forecasting is an example of a signal
processing problem. A number of ways to train the network are
available; we have used the Levenberg-Marquardt algorithm for
error back-propagation weight adjustment. Pre-processing of the data
reduced much of the large-scale variation to a smaller scale,
reducing the variation in the training data.

International Science Index 3, 2007 waset.org/publications/8650

Keywords— Gradient descent method, Jacobian matrix,
Levenberg-Marquardt algorithm, quadratic error surfaces.

I. INTRODUCTION

Forecasting is a highly noisy application because we are
typically dealing with systems that receive thousands of
inputs, which interact in a complex nonlinear fashion. The
forecasting of future events from noisy time series data is
commonly done using various forms of statistical models [10].
Usually, with only a very small subset of inputs or measurements
available from the system, the system behavior is to be
estimated and extrapolated into the future. Typically, neural
network models are closely related to statistical models [8],
where the inputs are weighted and processed to produce scalar
outputs, thus providing a method of functional mapping. Learning
by example often operates in a vector space, i.e. a functional
mapping from R^n to R^m is approximated [20], where for a
positive integer n, R^n is the set of all ordered n-tuples of
the form (x_1, x_2, ..., x_n).

Manuscript received December 28, 2003.
Dr. S. M. Aqil Burney is Professor in the Department of Computer
Science, University of Karachi, Pakistan. Phone: 0092-21-9243131
ext. 2447, fax: 0092-21-9243203, Burney@computer.org,
aqil_burney@yahoo.com.
Tahseen Ahmed Jilani is a lecturer in the Department of Computer
Science, University of Karachi, and a Ph.D. research associate in the
Department of Statistics, University of Karachi, Pakistan.
tahseenjilani@yahoo.com
Cemal Ardil is with the Azerbaijan National Academy of Aviation,
Baku, Azerbaijan. cemalardil@gmail.com

II. INTRODUCTION TO NEURAL NETWORKS

An Artificial Neural Network (ANN) is an information-processing
paradigm inspired by the way biological nervous systems, such as
the brain, process information [1]. The key element of this
paradigm is the novel structure of the information processing
elements (neurons) working in unison to solve specific problems.
ANNs, like people, learn by example. An ANN is configured for a
specific application, such as forecasting and prediction or
reconstruction/recognition, through a learning process that
involves adjustments to the synaptic connections that exist
between neurons. Introductory ANN concepts are given in [3], [5]
and [11]. A network is composed of a number of interconnected
units; each unit has input/output (I/O) characteristics and
implements a local computation. The output of any unit is
determined by its I/O characteristics, its interconnections to
other units, and external inputs.

Most of the present work in ANNs concerns the development,
characterization and extension of mathematical neural network
models. A mathematical neural network model refers to the set of
(generally nonlinear) n-dimensional equations that characterize
the overall network operation, as well as the structure and
dynamics of the ANN (and its training).

Fig. 1. Components of a neuron (cell body, nucleus, dendrites, axon)

Fig. 2. An Artificial Neuron (weighted inputs arriving via the
dendrites, passed through a threshold unit)
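The artificial neuron of Fig. 2 can be sketched as a weighted sum of inputs plus a bias (the "net" input), squashed by a sigmoidal activation. This is a minimal illustration; the weights and inputs below are made-up values, not taken from the paper.

```python
import math

def neuron(inputs, weights, bias):
    """Artificial neuron: weighted sum of inputs plus bias ("net"),
    squashed by a sigmoidal activation function."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# Example: two inputs arriving at the "dendrites"
out = neuron([0.5, -1.0], [0.8, 0.2], bias=0.1)
print(0.0 < out < 1.0)  # sigmoid output always lies in (0, 1)
```

The sigmoid plays the role of the threshold element: large positive net drives the output toward 1, large negative net toward 0.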
III. FEEDFORWARD NEURAL NETWORK ARCHITECTURE AND MAPPING

Feedforward neural networks are composed of neurons in which
the input layer of neurons is connected to the output layer
through one or more layers of intermediate neurons. The
training process of a neural network involves adjusting the
weights till a desired input/output relationship is obtained
[7], [15]. The majority of adaptive learning algorithms are
based on the Widrow-Hoff algorithm. The mathematical
characterization of a multilayer feedforward network is that of
a composite application of functions [21]. Each of these
functions represents a particular layer and may be specific to
individual units in the layer, e.g. all the units in a layer
may be required to have the same activation function. The
overall mapping is thus characterized by a composite function
relating feedforward network inputs to outputs [12]. That is

    O = f_composite(i)

Using p mapping layers in a (p+1)-layer feedforward net yields

    O = f^{Lp}( f^{Lp-1}( ... f^{L1}(i) ... ) )

Thus the interconnection weight from unit k in layer L1 to unit
i in layer L2 is w_ik^{L1L2}. If the hidden units have a
sigmoidal activation function, denoted f^{sig}, with H1 hidden
units and I inputs,

    O_i^{L2} = f_i^{sig}( sum_{k=1..H1} w_ik^{L1L2}
               f_k^{sig}( sum_{j=1..I} w_kj^{L0L1} i_j ) )    (1)

The above equation illustrates a neural network with
supervision as a composition of non-linear functions.

Neural network forecasting can be described as

    Z_{k+1} = NN( Z_k, Z_{k-1}, ..., Z_{k-d},
                  e_k, e_{k-1}, ..., e_{k-d} )

where the Z_i are either original observations or pre-processed
data, and the e_i are residuals. The processing of the input
data and the number of e's and Z's determine the performance of
the forecast [9].

IV. BACKPROPAGATION ALGORITHM

Backpropagation was created by generalizing the Widrow-Hoff
learning rule to multiple-layer networks and non-linear
differentiable transfer functions. Input vectors and the
corresponding target vectors are used to train a network until
it can approximate a function, associate input vectors with
specific output vectors, or classify input vectors in an
appropriate way as defined by the analyst [1]. Networks with
biases, a sigmoid layer, and a linear output layer are capable
of approximating any function with a finite number of
discontinuities; see [13] and [23]. Standard backpropagation is
a gradient descent algorithm, in which the network weights are
moved along the negative of the gradient of the performance
function. Thus for each movement we have

    X_{k+1} = X_k - a_k g_k    (2)

where X_k is the vector of current weights, g_k is the current
gradient, and a_k is the learning rate. There are a number of
variations on the basic algorithm that are based on other
standard optimization techniques, such as conjugate gradient
[6]. Properly trained backpropagation networks tend to give
reasonable answers when presented with inputs they have never
seen. Typically, a new input leads to an output similar to the
correct output for the training input vectors that are similar
to the input being presented [1].

Fig. 3. Error backpropagation method for weight adjustment (a
recursive operation)

This generalization property makes it possible to train a
network on a representative set of input/target pairs and get
good results without training the network on all possible
input/output pairs. Instances are represented by many
attribute-value pairs. The target function to be learned is
defined over instances that can be described by a vector of
predefined features. These input attributes may be highly
correlated or independent of one another. Input values can be
any real values. The target function output may be
discrete-valued, real-valued, or a vector of several real- or
discrete-valued attributes [7], [16].

The training examples may contain errors; ANN learning methods
are quite robust to noise in the training data. Long training
times are acceptable: network training algorithms typically
require longer training times than, say, decision tree
algorithms. Training times can range from a few seconds to many
hours, depending on factors such as the number of weights in
the network, the number of training examples considered, and
the settings of the various learning algorithm parameters [20].
Fast evaluation of the learned target function may be required:
although ANN learning times are relatively long, evaluating the
learned network in order to apply it to a subsequent instance
is typically very fast [18], [23].

V. GRADIENT DESCENT TRAINING USING SIGMOIDAL AND
LINEAR ACTIVATION FUNCTIONS

For networks having differentiable activation functions there
exists a powerful and computationally efficient method, called
error backpropagation, for finding the derivatives of an error
function with respect to the weights and biases in the network.
Gradient descent is the most commonly used error
backpropagation method; see [12], [13] and [22] for a more
detailed discussion. If net is the scalar product of the
connections (weights) and the stimulus, we define net = w^T i.
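As a concrete illustration of gradient descent on such a unit, the sketch below trains a single sigmoidal unit with the per-pattern update w_k <- w_k + rate * (t - O) * f'(net) * i_k. The data (an OR-like mapping), learning rate and epoch count are invented for illustration, not taken from the paper.

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def train_unit(patterns, w, rate, epochs):
    """Per-pattern gradient descent for one sigmoidal unit:
    w_k <- w_k + rate * (t - O) * f'(net) * i_k, with f'(net) = O(1-O)."""
    for _ in range(epochs):
        for inputs, target in patterns:
            net = sum(wk * xk for wk, xk in zip(w, inputs))
            out = sigmoid(net)
            grad_f = out * (1.0 - out)          # derivative of the sigmoid
            w = [wk + rate * (target - out) * grad_f * xk
                 for wk, xk in zip(w, inputs)]
    return w

# Learn an OR-like mapping on two inputs plus a bias input fixed at 1
patterns = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]
w = train_unit(patterns, [0.0, 0.0, 0.0], rate=0.5, epochs=2000)
outs = [sigmoid(sum(wk * xk for wk, xk in zip(w, i))) for i, _ in patterns]
print([round(o) for o in outs])  # [0, 1, 1, 1]
```

After training, the unit's rounded outputs reproduce the target pattern; the sigmoid never reaches 0 or 1 exactly, so the squared error only approaches zero.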

Assume each unit has a differentiable activation function.
Given a training set of input-target pairs of the form

    (i^1, t^1), (i^2, t^2), ..., (i^n, t^n)

consider the sigmoidal activation function with outputs

    O_i = f(net_i) = 1 / (1 + e^(-net_i))    (3)

Now, defining a single-pair error measure based on t^p - O^p,
where O^p is the output obtained from input i^p, we have

    e^p = (1/2) sum_j (t_j^p - O_j^p)^2

The basic idea is to compute d e^p / d w and use this quantity
to adjust w. For the k-th element of w, i.e. the weight w_k, we
have by the chain rule

    d e^p / d w_k = (d e^p / d O^p) (d O^p / d net) (d net / d w_k)    (4)
                  = -(t^p - O^p) (d f(net^p) / d net^p) i_k^p

which is the gradient of the pattern error with respect to
weight w_k and forms the basis of the gradient descent training
algorithm. Specifically, with learning rate epsilon, we assign
the weight correction

    delta w_k = epsilon (t^p - O^p) f'(net^p) i_k^p    (5)

and, for a linear activation function (f' = 1),

    delta w_k = epsilon (t^p - O^p) i_k^p    (6)

VI. LEVENBERG-MARQUARDT ALGORITHM

Training a neural network is essentially a non-linear least
squares problem, and thus can be solved by a class of
non-linear least squares algorithms. Among them,
Levenberg-Marquardt is a trust-region based method with a
hyper-spherical trust region. This method works extremely well
in practice, and is considered the most efficient algorithm for
training medium-sized artificial neural networks.

Like quasi-Newton methods, the Levenberg-Marquardt algorithm
was designed to approach second-order training speed without
having to compute the Hessian matrix. Newton's method for
minimizing an objective function F(W) is generated using the
well-known recurrence formula

    W_{i+1} = W_i - H^{-1} grad F(W_i)

where H is the Hessian matrix and grad F(W) is the gradient of
F(W). When the performance function has the form of a sum of
squares, that is

    F(W) = (1/2) e^T e

the Hessian matrix can be approximated as

    H = J^T J

and the gradient can be computed as

    g = J^T e

where the Jacobian matrix J contains the first derivatives of
the network errors with respect to the weights and biases, and
e is a vector of network errors. The Gauss-Newton update
formula can then be written as

    W_{i+1} = W_i - (J_i^T J_i)^{-1} J_i^T e_i    (7)

Here W = (w_1, w_2, ..., w_N) consists of all the weights of
the network, the sum-of-squared-errors function is defined as

    F(W) = (1/2) e(W)^T e(W)

and the Jacobian is

        | de_11/dw_1   de_11/dw_2   ...   de_11/dw_N |
        | de_21/dw_1   de_21/dw_2   ...   de_21/dw_N |
        |     .             .                  .     |
    J = | de_k1/dw_1   de_k1/dw_2   ...   de_k1/dw_N |    (8)
        |     .             .                  .     |
        | de_1p/dw_1   de_1p/dw_2   ...   de_1p/dw_N |
        |     .             .                  .     |
        | de_kp/dw_1   de_kp/dw_2   ...   de_kp/dw_N |

The update (7) requires J^T J to be positive definite; when it
is not, we introduce a perturbation that controls the
possibility of it being non-positive-definite. Thus

    H = J^T J + mu I

and

    W_{i+1} = W_i - (J_i^T J_i + mu I)^{-1} J_i^T e_i    (9)
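The damped update of Eq. (9) can be sketched numerically. The toy least-squares problem below (fitting a straight line, with made-up data) stands in for the network errors, and the Jacobian is supplied analytically; this is a minimal sketch of the step, not the paper's implementation.

```python
import numpy as np

def lm_step(W, residuals, jacobian, mu):
    """One Levenberg-Marquardt update, Eq. (9):
    W_new = W - (J^T J + mu*I)^{-1} J^T e."""
    e = residuals(W)
    J = jacobian(W)
    N = len(W)
    step = np.linalg.solve(J.T @ J + mu * np.eye(N), J.T @ e)
    return W - step

# Toy problem: fit y = a*x + b to data (stand-in for the network errors)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.1, 4.9, 7.2])
residuals = lambda W: W[0] * x + W[1] - y                    # e(W)
jacobian = lambda W: np.stack([x, np.ones_like(x)], axis=1)  # de/dW

W = np.zeros(2)
for _ in range(20):
    W = lm_step(W, residuals, jacobian, mu=0.01)
print(W)  # approaches the least-squares fit: slope ~2, intercept ~1
```

With a fixed small mu the step is close to Gauss-Newton; a full implementation would grow mu after a failed step and shrink it near the minimum, as described above.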

where, in the neural computing context, the quantity mu is
called the learning parameter; it ensures that J^T J + mu I is
positive definite. It is always possible to choose mu
sufficiently large to ensure a descent step, and the learning
parameter is decreased as the iterative process approaches a
minimum. Thus, Levenberg-Marquardt optimization involves
inversion of the square matrix J^T J + mu I, which requires
large memory space to store the Jacobian matrix and the
approximated Hessian matrix, along with inversion of the
approximate H matrix of order N x N in each iteration; for
large networks, therefore, the Levenberg-Marquardt algorithm is
not suitable.

Fig. 4. (a) Typical error function for linear networks, (b) Gradient
descent used to search the space of possible weight vectors

VII. KARACHI STOCK EXCHANGE SHARES

Stock exchange share prices exhibit very high noise and
significant non-stationarity. Here we consider the prediction
of the direction of change in the share price for the next
business day. A number of researchers have applied neural
network technology to exchange rate prediction and trading;
this is the very first time that the ANN forecasting technique
has been applied at the Karachi Stock Exchange (KSE). The data
set used has been taken from the Karachi Stock Exchange Library
and from KSE brokers, and we have taken 400 data points for
training the neural networks. For prior work on the Karachi
Stock Exchange see [2].

VIII. NETWORK TRAINING AND RESULTS

We presented the input data in concurrent patterns; this
computation can be performed using [12] or [17]. For
programming the ANN in C/C++ we also used the approach of [3]
or [25]. The network performance value along each epoch is
presented in Fig. 5(a); we set the network performance goal at
1E-06. Secondly, in Fig. 5(b) the Hinton plot [10], [12] of the
input layer weights is presented: the white boxes show positive
weights and the grey boxes negative weights.

Fig. 5. (a) Plot of network training: performance function versus
epoch, (b) Input weights Hinton plot

We trained the network many times and the performance goal was
achieved at different epochs; the gradient of the error surface
is also shown in Table 1. BP nets initialized with small
weights can develop large weights only after the number of
updates is large; we have used only 100 iterations for training
the network. Early stopping combined with BP is effective
because large nets can be trained, and nets with large weights
have more representation power [20]. For further discussion see
[7], [15], [18] and [24]. Finally, the results of forecasting
for 10 days are shown in Table 2. It is best to use only the
next-day forecast because of the high variability at the
Karachi Stock Exchange.
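Training patterns of this kind are built from lagged observations in the spirit of the forecasting equation of Section III, predicting Z_{k+1} from Z_k, ..., Z_{k-d}. The sketch below illustrates the windowing only; it uses the first few target values from Table 2 as a stand-in series, and the lag depth d is an arbitrary choice, not the one used in the paper.

```python
def make_patterns(series, d):
    """Build (input, target) pairs: predict Z_{k+1} from Z_k, ..., Z_{k-d}."""
    patterns = []
    for k in range(d, len(series) - 1):
        inputs = series[k - d:k + 1]   # Z_{k-d}, ..., Z_k
        target = series[k + 1]         # Z_{k+1}
        patterns.append((inputs, target))
    return patterns

prices = [17.8, 17.1, 17.15, 17.15, 16.9, 16.9, 17.6]
pairs = make_patterns(prices, d=2)
print(len(pairs))   # 4 training patterns from 7 observations
print(pairs[0])     # ([17.8, 17.1, 17.15], 17.15)
```

Each pair then serves as one input/target pattern for the network; residual terms e_k, when used, are appended to the input vector the same way.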


TABLE 1
NETWORK PERFORMANCE GOAL = 1E-06, ALONG WITH GRADIENT
OF ERROR FUNCTION 1E-12

Training   Goal achieved at epoch   M.S.E      Gradient
 1         50                       3.80E-10   0.03409
 2         10                       7.67E-07   2.01321
 3         12                       3.08E-08   0.767221
 4         50                       3.77E-10   0.110359
 5         17                       3.64E-08   0.110359
 6         34                       1.51E-10   0.034858
 7         50                       1.18E-09   0.038043
 8         33                       5.08E-09   0.807517
 9         50                       3.80E-10   0.03409
10         39                       1.29E-16   5.93E-06

TABLE 2
A COMPARISON BETWEEN TARGETS AND NETWORK OUTPUTS

Date        Targets   ANN Output   Difference
7/16/2001   17.8      17.7991      0.0009
7/17/2001   17.1      17.0992      0.0008
7/18/2001   17.15     17.1492      0.0008
7/19/2001   17.15     17.1491      0.0009
7/20/2001   16.9      16.8992      0.0008
7/23/2001   16.9      16.8996      0.0004
7/24/2001   17.6      17.5992      0.0008
7/25/2001   16.95     16.9492      0.0008
7/26/2001   16.45     16.4492      0.0008
7/27/2001   16.45     16.4492      0.0008

IX. CONCLUSIONS

We have seen that neural networks produce fairly accurate
forecasts when compared with weighted exponential methods and
ARIMA methods. Generalization performance is reasonable,
indicating confidence in the training results [1].

There are a number of available training algorithms that can be
used with the backpropagation method, but Levenberg-Marquardt
has proven results, specifically with a sum-of-squares error
function, for financial forecasting of share rates. The
comparative ability of the different approaches is usually
problem dependent.

It would be interesting to find out how other architectures,
such as regression neural networks, Bayesian neural networks
and SOFM, would behave. New ways of preprocessing and input
selection will be our future work. We hope to extend our work
to recurrent neural networks, Bayesian neural networks and
iterative principal component neural networks.

REFERENCES

[1]  S. M. Aqil Burney and Tahseen A. Jilani, (2002). "Time series
     forecasting using artificial neural network methods for Karachi
     Stock Exchange". A project at the Department of Computer Science,
     University of Karachi.
[2]  S. M. Aqil Burney, "Measures of central tendency in News-MCB
     stock exchange price index", pp. 12-31. Developed by Vital
     Information Services (Pvt) Limited, Karachi, c/o Department of
     Computer Science, University of Karachi.
[3]  Adam Blum, (1992). "Neural Networks in C++". John Wiley and
     Sons, New York.
[4]  C. Brooks, (1997). "Linear and non-linear (non-) forecastability
     of high-frequency exchange rates". Journal of Forecasting, 16,
     pp. 125-145.
[5]  M. Caudill and C. Butler, (1994). "Understanding Neural
     Networks". MIT Press.
[6]  Y. Chauvin and D. Rumelhart, (1995). "Backpropagation: Theory,
     Architecture, and Applications". Lawrence Erlbaum Associates.
[7]  Chung-Ming Kuan and Tung Liu, (1995). "Forecasting exchange
     rates using feedforward and recurrent neural networks". Journal
     of Applied Econometrics, pp. 347-364.
[8]  C. Lee Giles, Steve Lawrence and A. C. Tsoi, (2001). "Time
     series prediction using a recurrent neural network and
     grammatical inference". Machine Learning, Vol. 44, No. 1/2,
     July/August, pp. 161-183.
[9]  C. Lee Giles, Steve Lawrence and A. C. Tsoi, (1997). "Rule
     inference for financial prediction using recurrent neural
     networks". Proceedings of the IEEE/IAFE Conference on
     Computational Intelligence for Financial Engineering (CIFEr),
     IEEE, Piscataway, NJ, pp. 253-259.
[10] C. M. Bishop, (1995). "Neural Networks for Pattern Recognition".
     Clarendon Press, Oxford.
[11] R. Dixon and F. Bevan, (1994). "Neural network tutor". Advance
     Technology Transfer Group.
[12] T. M. Hagan and H. B. Demuth, (1996). "Neural Network Design".
     Brooks/Cole Publishing Company.
[13] R. Hecht-Nielsen, (1989). "Theory of the backpropagation neural
     network". International Joint Conference on Neural Networks,
     IEEE, New York, pp. 593-605.
[14] K. Hornik, M. Stinchcombe and H. White, (1990). "Using
     multilayer feedforward networks for universal approximation".
     Neural Networks, 3, pp. 551-560.
[15] Iebeling Kaastra and Milton S. Boyd, (1995). "Forecasting
     futures trading volume using neural networks". Journal of
     Futures Markets, pp. 953-970.
[16] C. M. Kuan and T. Liu, (1995). "Forecasting exchange rates using
     feedforward and recurrent neural networks". Journal of Applied
     Econometrics, 10, pp. 347-364.
[17] Lars Liden, (2000). "Neural network simulation package".
     Department of Cognitive and Natural Systems, Boston University.
[18] Amir F. Atiya, Suzan M. El-Shoura and Samir I. Shaheen, (1999).
     "A comparison between neural network forecasting techniques -
     case study: river flow forecasting". IEEE Transactions on Neural
     Networks, Vol. 10, No. 2, March 1999.
[19] J. Moody, (1994). "Prediction risk and architecture selection
     for neural networks". In From Statistics to Neural Networks:
     Theory and Pattern Recognition Applications, NATO ASI Series,
     Springer-Verlag.
[20] Rich Caruana, Steve Lawrence and C. Lee Giles, (2000).
     "Overfitting in neural nets: backpropagation, conjugate
     gradient, and early stopping". Neural Information Processing
     Systems, Denver, Colorado, November 28-30.
[21] B. D. Ripley, (1994). "Neural networks and related methods for
     classification". Journal of the Royal Statistical Society,
     B 56(3), pp. 409-456.
[22] R. J. Schalkoff, (1997). "Artificial Neural Networks".
     McGraw-Hill International Edition, Computer Science Series.
[23] N. Schraudolph and T. J. Sejnowski, (1996). "Tempering
     backpropagation networks: not all weights are created equal". In
     Advances in Neural Information Processing Systems 8, MIT Press,
     Cambridge.
[24] Chao-Hsien Chu and Djohan Widjaja, (1994). "Neural network
     system for forecasting method selection". Decision Support
     Systems (August), pp. 13-24.
[25] T. Stephen Welstead, (1994). "Neural Network and Fuzzy Logic
     Applications in C/C++". John Wiley and Sons, Inc., New York.

S. M. Aqil Burney received the B.Sc. (Physics, Mathematics,
Statistics) from D. J. College, affiliated with the University of
Karachi, in 1970; a first-class-first M.Sc. (Statistics) from Karachi
University in 1972; and an M.Phil. in 1983 with specialization in
computing, simulation and stochastic modeling in risk management.
Dr. Burney received the Ph.D. degree in Mathematics from Strathclyde
University, Glasgow, with specialization in estimation, modeling and
simulation of multivariate time series models using an algorithmic
approach with software development.
He is currently a professor and an approved supervisor in Computer
Science and Statistics of the Higher Education Commission, Government
of Pakistan. He is also the project director of the Umair Basha
Institute of Information Technology (UBIT) and a member of various
higher academic boards of different universities of Pakistan. His
research interests include AI, soft computing, neural networks, fuzzy
logic, data mining, statistics, simulation and stochastic modeling of
mobile communication systems and networks, and network security. He
is the author of three books and various technical reports, has
supervised more than 100 software/information technology projects at
the Masters level, and is project director of various projects funded
by the Government of Pakistan.
He is a member of the IEEE (USA) and the ACM (USA), a fellow of the
Royal Statistical Society, United Kingdom, and a member of the
Islamic Society of Statistical Sciences. He has been teaching since
1973 in various universities in the fields of econometrics,
bio-statistics, statistics, mathematics and computer science, and has
vast education management experience at the university level. Dr.
Burney has received appreciation and awards for his research and as
an educationist.

Tahseen A. Jilani received the B.Sc. (Computer Science, Mathematics,
Statistics) from the University of Karachi in 1998 and a
first-class-second M.Sc. (Statistics) from Karachi University in
2001. Since 2003 he has been a Ph.D. research fellow in the
Department of Computer Science, University of Karachi.
His research interests include AI, neural networks, soft computing,
fuzzy logic, statistical data mining and simulation. He has been
teaching since 2002 in the fields of Statistics, Mathematics and
Computer Science.


Optimal neural network models for wind speed predictionOptimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionIAEME Publication
 
Optimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionOptimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionIAEME Publication
 
Web spam classification using supervised artificial neural network algorithms
Web spam classification using supervised artificial neural network algorithmsWeb spam classification using supervised artificial neural network algorithms
Web spam classification using supervised artificial neural network algorithmsaciijournal
 
Web Spam Classification Using Supervised Artificial Neural Network Algorithms
Web Spam Classification Using Supervised Artificial Neural Network AlgorithmsWeb Spam Classification Using Supervised Artificial Neural Network Algorithms
Web Spam Classification Using Supervised Artificial Neural Network Algorithmsaciijournal
 
Live to learn: learning rules-based artificial neural network
Live to learn: learning rules-based artificial neural networkLive to learn: learning rules-based artificial neural network
Live to learn: learning rules-based artificial neural networknooriasukmaningtyas
 
Neural network based numerical digits recognization using nnt in matlab
Neural network based numerical digits recognization using nnt in matlabNeural network based numerical digits recognization using nnt in matlab
Neural network based numerical digits recognization using nnt in matlabijcses
 
Feed forward neural network for sine
Feed forward neural network for sineFeed forward neural network for sine
Feed forward neural network for sineijcsa
 
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...IAEME Publication
 
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched...
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched...Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched...
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched...IJECEIAES
 
Short Term Electrical Load Forecasting by Artificial Neural Network
Short Term Electrical Load Forecasting by Artificial Neural NetworkShort Term Electrical Load Forecasting by Artificial Neural Network
Short Term Electrical Load Forecasting by Artificial Neural NetworkIJERA Editor
 
Approximate bounded-knowledge-extractionusing-type-i-fuzzy-logic
Approximate bounded-knowledge-extractionusing-type-i-fuzzy-logicApproximate bounded-knowledge-extractionusing-type-i-fuzzy-logic
Approximate bounded-knowledge-extractionusing-type-i-fuzzy-logicCemal Ardil
 
Digital Implementation of Artificial Neural Network for Function Approximatio...
Digital Implementation of Artificial Neural Network for Function Approximatio...Digital Implementation of Artificial Neural Network for Function Approximatio...
Digital Implementation of Artificial Neural Network for Function Approximatio...IOSR Journals
 
Digital Implementation of Artificial Neural Network for Function Approximatio...
Digital Implementation of Artificial Neural Network for Function Approximatio...Digital Implementation of Artificial Neural Network for Function Approximatio...
Digital Implementation of Artificial Neural Network for Function Approximatio...IOSR Journals
 
Modeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologyModeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologytheijes
 
Implementation of recurrent neural network for the forecasting of USD buy ra...
Implementation of recurrent neural network for the forecasting  of USD buy ra...Implementation of recurrent neural network for the forecasting  of USD buy ra...
Implementation of recurrent neural network for the forecasting of USD buy ra...IJECEIAES
 
Short Term Load Forecasting: One Week (With & Without Weekend) Using Artifici...
Short Term Load Forecasting: One Week (With & Without Weekend) Using Artifici...Short Term Load Forecasting: One Week (With & Without Weekend) Using Artifici...
Short Term Load Forecasting: One Week (With & Without Weekend) Using Artifici...IJLT EMAS
 
Artificial Neural Network Implementation On FPGA Chip
Artificial Neural Network Implementation On FPGA ChipArtificial Neural Network Implementation On FPGA Chip
Artificial Neural Network Implementation On FPGA ChipMaria Perkins
 
Implementation of Feed Forward Neural Network for Classification by Education...
Implementation of Feed Forward Neural Network for Classification by Education...Implementation of Feed Forward Neural Network for Classification by Education...
Implementation of Feed Forward Neural Network for Classification by Education...ijsrd.com
 

Ähnlich wie Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecasting- (20)

Optimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionOptimal neural network models for wind speed prediction
Optimal neural network models for wind speed prediction
 
Optimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionOptimal neural network models for wind speed prediction
Optimal neural network models for wind speed prediction
 
Web spam classification using supervised artificial neural network algorithms
Web spam classification using supervised artificial neural network algorithmsWeb spam classification using supervised artificial neural network algorithms
Web spam classification using supervised artificial neural network algorithms
 
Web Spam Classification Using Supervised Artificial Neural Network Algorithms
Web Spam Classification Using Supervised Artificial Neural Network AlgorithmsWeb Spam Classification Using Supervised Artificial Neural Network Algorithms
Web Spam Classification Using Supervised Artificial Neural Network Algorithms
 
Live to learn: learning rules-based artificial neural network
Live to learn: learning rules-based artificial neural networkLive to learn: learning rules-based artificial neural network
Live to learn: learning rules-based artificial neural network
 
Neural network based numerical digits recognization using nnt in matlab
Neural network based numerical digits recognization using nnt in matlabNeural network based numerical digits recognization using nnt in matlab
Neural network based numerical digits recognization using nnt in matlab
 
N ns 1
N ns 1N ns 1
N ns 1
 
Feed forward neural network for sine
Feed forward neural network for sineFeed forward neural network for sine
Feed forward neural network for sine
 
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
 
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched...
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched...Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched...
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched...
 
Short Term Electrical Load Forecasting by Artificial Neural Network
Short Term Electrical Load Forecasting by Artificial Neural NetworkShort Term Electrical Load Forecasting by Artificial Neural Network
Short Term Electrical Load Forecasting by Artificial Neural Network
 
Approximate bounded-knowledge-extractionusing-type-i-fuzzy-logic
Approximate bounded-knowledge-extractionusing-type-i-fuzzy-logicApproximate bounded-knowledge-extractionusing-type-i-fuzzy-logic
Approximate bounded-knowledge-extractionusing-type-i-fuzzy-logic
 
Digital Implementation of Artificial Neural Network for Function Approximatio...
Digital Implementation of Artificial Neural Network for Function Approximatio...Digital Implementation of Artificial Neural Network for Function Approximatio...
Digital Implementation of Artificial Neural Network for Function Approximatio...
 
Digital Implementation of Artificial Neural Network for Function Approximatio...
Digital Implementation of Artificial Neural Network for Function Approximatio...Digital Implementation of Artificial Neural Network for Function Approximatio...
Digital Implementation of Artificial Neural Network for Function Approximatio...
 
Modeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologyModeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technology
 
Implementation of recurrent neural network for the forecasting of USD buy ra...
Implementation of recurrent neural network for the forecasting  of USD buy ra...Implementation of recurrent neural network for the forecasting  of USD buy ra...
Implementation of recurrent neural network for the forecasting of USD buy ra...
 
Short Term Load Forecasting: One Week (With & Without Weekend) Using Artifici...
Short Term Load Forecasting: One Week (With & Without Weekend) Using Artifici...Short Term Load Forecasting: One Week (With & Without Weekend) Using Artifici...
Short Term Load Forecasting: One Week (With & Without Weekend) Using Artifici...
 
Artificial Neural Network Implementation On FPGA Chip
Artificial Neural Network Implementation On FPGA ChipArtificial Neural Network Implementation On FPGA Chip
Artificial Neural Network Implementation On FPGA Chip
 
Implementation of Feed Forward Neural Network for Classification by Education...
Implementation of Feed Forward Neural Network for Classification by Education...Implementation of Feed Forward Neural Network for Classification by Education...
Implementation of Feed Forward Neural Network for Classification by Education...
 
Y4502158163
Y4502158163Y4502158163
Y4502158163
 

Mehr von Cemal Ardil

Upfc supplementary-controller-design-using-real-coded-genetic-algorithm-for-d...
Upfc supplementary-controller-design-using-real-coded-genetic-algorithm-for-d...Upfc supplementary-controller-design-using-real-coded-genetic-algorithm-for-d...
Upfc supplementary-controller-design-using-real-coded-genetic-algorithm-for-d...Cemal Ardil
 
The main-principles-of-text-to-speech-synthesis-system
The main-principles-of-text-to-speech-synthesis-systemThe main-principles-of-text-to-speech-synthesis-system
The main-principles-of-text-to-speech-synthesis-systemCemal Ardil
 
The feedback-control-for-distributed-systems
The feedback-control-for-distributed-systemsThe feedback-control-for-distributed-systems
The feedback-control-for-distributed-systemsCemal Ardil
 
System overflow blocking-transients-for-queues-with-batch-arrivals-using-a-fa...
System overflow blocking-transients-for-queues-with-batch-arrivals-using-a-fa...System overflow blocking-transients-for-queues-with-batch-arrivals-using-a-fa...
System overflow blocking-transients-for-queues-with-batch-arrivals-using-a-fa...Cemal Ardil
 
Sonic localization-cues-for-classrooms-a-structural-model-proposal
Sonic localization-cues-for-classrooms-a-structural-model-proposalSonic localization-cues-for-classrooms-a-structural-model-proposal
Sonic localization-cues-for-classrooms-a-structural-model-proposalCemal Ardil
 
Robust fuzzy-observer-design-for-nonlinear-systems
Robust fuzzy-observer-design-for-nonlinear-systemsRobust fuzzy-observer-design-for-nonlinear-systems
Robust fuzzy-observer-design-for-nonlinear-systemsCemal Ardil
 
Response quality-evaluation-in-heterogeneous-question-answering-system-a-blac...
Response quality-evaluation-in-heterogeneous-question-answering-system-a-blac...Response quality-evaluation-in-heterogeneous-question-answering-system-a-blac...
Response quality-evaluation-in-heterogeneous-question-answering-system-a-blac...Cemal Ardil
 
Reduction of-linear-time-invariant-systems-using-routh-approximation-and-pso
Reduction of-linear-time-invariant-systems-using-routh-approximation-and-psoReduction of-linear-time-invariant-systems-using-routh-approximation-and-pso
Reduction of-linear-time-invariant-systems-using-routh-approximation-and-psoCemal Ardil
 
Real coded-genetic-algorithm-for-robust-power-system-stabilizer-design
Real coded-genetic-algorithm-for-robust-power-system-stabilizer-designReal coded-genetic-algorithm-for-robust-power-system-stabilizer-design
Real coded-genetic-algorithm-for-robust-power-system-stabilizer-designCemal Ardil
 
Performance of-block-codes-using-the-eigenstructure-of-the-code-correlation-m...
Performance of-block-codes-using-the-eigenstructure-of-the-code-correlation-m...Performance of-block-codes-using-the-eigenstructure-of-the-code-correlation-m...
Performance of-block-codes-using-the-eigenstructure-of-the-code-correlation-m...Cemal Ardil
 
Optimal supplementary-damping-controller-design-for-tcsc-employing-rcga
Optimal supplementary-damping-controller-design-for-tcsc-employing-rcgaOptimal supplementary-damping-controller-design-for-tcsc-employing-rcga
Optimal supplementary-damping-controller-design-for-tcsc-employing-rcgaCemal Ardil
 
Optimal straight-line-trajectory-generation-in-3 d-space-using-deviation-algo...
Optimal straight-line-trajectory-generation-in-3 d-space-using-deviation-algo...Optimal straight-line-trajectory-generation-in-3 d-space-using-deviation-algo...
Optimal straight-line-trajectory-generation-in-3 d-space-using-deviation-algo...Cemal Ardil
 
On the-optimal-number-of-smart-dust-particles
On the-optimal-number-of-smart-dust-particlesOn the-optimal-number-of-smart-dust-particles
On the-optimal-number-of-smart-dust-particlesCemal Ardil
 
On the-joint-optimization-of-performance-and-power-consumption-in-data-centers
On the-joint-optimization-of-performance-and-power-consumption-in-data-centersOn the-joint-optimization-of-performance-and-power-consumption-in-data-centers
On the-joint-optimization-of-performance-and-power-consumption-in-data-centersCemal Ardil
 
On the-approximate-solution-of-a-nonlinear-singular-integral-equation
On the-approximate-solution-of-a-nonlinear-singular-integral-equationOn the-approximate-solution-of-a-nonlinear-singular-integral-equation
On the-approximate-solution-of-a-nonlinear-singular-integral-equationCemal Ardil
 
On problem-of-parameters-identification-of-dynamic-object
On problem-of-parameters-identification-of-dynamic-objectOn problem-of-parameters-identification-of-dynamic-object
On problem-of-parameters-identification-of-dynamic-objectCemal Ardil
 
Numerical modeling-of-gas-turbine-engines
Numerical modeling-of-gas-turbine-enginesNumerical modeling-of-gas-turbine-engines
Numerical modeling-of-gas-turbine-enginesCemal Ardil
 
New technologies-for-modeling-of-gas-turbine-cooled-blades
New technologies-for-modeling-of-gas-turbine-cooled-bladesNew technologies-for-modeling-of-gas-turbine-cooled-blades
New technologies-for-modeling-of-gas-turbine-cooled-bladesCemal Ardil
 
Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...
Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...
Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...Cemal Ardil
 
Multivariate high-order-fuzzy-time-series-forecasting-for-car-road-accidents
Multivariate high-order-fuzzy-time-series-forecasting-for-car-road-accidentsMultivariate high-order-fuzzy-time-series-forecasting-for-car-road-accidents
Multivariate high-order-fuzzy-time-series-forecasting-for-car-road-accidentsCemal Ardil
 

Mehr von Cemal Ardil (20)

Upfc supplementary-controller-design-using-real-coded-genetic-algorithm-for-d...
Upfc supplementary-controller-design-using-real-coded-genetic-algorithm-for-d...Upfc supplementary-controller-design-using-real-coded-genetic-algorithm-for-d...
Upfc supplementary-controller-design-using-real-coded-genetic-algorithm-for-d...
 
The main-principles-of-text-to-speech-synthesis-system
The main-principles-of-text-to-speech-synthesis-systemThe main-principles-of-text-to-speech-synthesis-system
The main-principles-of-text-to-speech-synthesis-system
 
The feedback-control-for-distributed-systems
The feedback-control-for-distributed-systemsThe feedback-control-for-distributed-systems
The feedback-control-for-distributed-systems
 
System overflow blocking-transients-for-queues-with-batch-arrivals-using-a-fa...
System overflow blocking-transients-for-queues-with-batch-arrivals-using-a-fa...System overflow blocking-transients-for-queues-with-batch-arrivals-using-a-fa...
System overflow blocking-transients-for-queues-with-batch-arrivals-using-a-fa...
 
Sonic localization-cues-for-classrooms-a-structural-model-proposal
Sonic localization-cues-for-classrooms-a-structural-model-proposalSonic localization-cues-for-classrooms-a-structural-model-proposal
Sonic localization-cues-for-classrooms-a-structural-model-proposal
 
Robust fuzzy-observer-design-for-nonlinear-systems
Robust fuzzy-observer-design-for-nonlinear-systemsRobust fuzzy-observer-design-for-nonlinear-systems
Robust fuzzy-observer-design-for-nonlinear-systems
 
Response quality-evaluation-in-heterogeneous-question-answering-system-a-blac...
Response quality-evaluation-in-heterogeneous-question-answering-system-a-blac...Response quality-evaluation-in-heterogeneous-question-answering-system-a-blac...
Response quality-evaluation-in-heterogeneous-question-answering-system-a-blac...
 
Reduction of-linear-time-invariant-systems-using-routh-approximation-and-pso
Reduction of-linear-time-invariant-systems-using-routh-approximation-and-psoReduction of-linear-time-invariant-systems-using-routh-approximation-and-pso
Reduction of-linear-time-invariant-systems-using-routh-approximation-and-pso
 
Real coded-genetic-algorithm-for-robust-power-system-stabilizer-design
Real coded-genetic-algorithm-for-robust-power-system-stabilizer-designReal coded-genetic-algorithm-for-robust-power-system-stabilizer-design
Real coded-genetic-algorithm-for-robust-power-system-stabilizer-design
 
Performance of-block-codes-using-the-eigenstructure-of-the-code-correlation-m...
Performance of-block-codes-using-the-eigenstructure-of-the-code-correlation-m...Performance of-block-codes-using-the-eigenstructure-of-the-code-correlation-m...
Performance of-block-codes-using-the-eigenstructure-of-the-code-correlation-m...
 
Optimal supplementary-damping-controller-design-for-tcsc-employing-rcga
Optimal supplementary-damping-controller-design-for-tcsc-employing-rcgaOptimal supplementary-damping-controller-design-for-tcsc-employing-rcga
Optimal supplementary-damping-controller-design-for-tcsc-employing-rcga
 
Optimal straight-line-trajectory-generation-in-3 d-space-using-deviation-algo...
Optimal straight-line-trajectory-generation-in-3 d-space-using-deviation-algo...Optimal straight-line-trajectory-generation-in-3 d-space-using-deviation-algo...
Optimal straight-line-trajectory-generation-in-3 d-space-using-deviation-algo...
 
On the-optimal-number-of-smart-dust-particles
On the-optimal-number-of-smart-dust-particlesOn the-optimal-number-of-smart-dust-particles
On the-optimal-number-of-smart-dust-particles
 
On the-joint-optimization-of-performance-and-power-consumption-in-data-centers
On the-joint-optimization-of-performance-and-power-consumption-in-data-centersOn the-joint-optimization-of-performance-and-power-consumption-in-data-centers
On the-joint-optimization-of-performance-and-power-consumption-in-data-centers
 
On the-approximate-solution-of-a-nonlinear-singular-integral-equation
On the-approximate-solution-of-a-nonlinear-singular-integral-equationOn the-approximate-solution-of-a-nonlinear-singular-integral-equation
On the-approximate-solution-of-a-nonlinear-singular-integral-equation
 
On problem-of-parameters-identification-of-dynamic-object
On problem-of-parameters-identification-of-dynamic-objectOn problem-of-parameters-identification-of-dynamic-object
On problem-of-parameters-identification-of-dynamic-object
 
Numerical modeling-of-gas-turbine-engines
Numerical modeling-of-gas-turbine-enginesNumerical modeling-of-gas-turbine-engines
Numerical modeling-of-gas-turbine-engines
 
New technologies-for-modeling-of-gas-turbine-cooled-blades
New technologies-for-modeling-of-gas-turbine-cooled-bladesNew technologies-for-modeling-of-gas-turbine-cooled-blades
New technologies-for-modeling-of-gas-turbine-cooled-blades
 
Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...
Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...
Neuro -fuzzy-networks-for-identification-of-mathematical-model-parameters-of-...
 
Multivariate high-order-fuzzy-time-series-forecasting-for-car-road-accidents
Multivariate high-order-fuzzy-time-series-forecasting-for-car-road-accidentsMultivariate high-order-fuzzy-time-series-forecasting-for-car-road-accidents
Multivariate high-order-fuzzy-time-series-forecasting-for-car-road-accidents
 

Kürzlich hochgeladen

Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)Zilliz
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc
 
Platformless Horizons for Digital Adaptability
Platformless Horizons for Digital AdaptabilityPlatformless Horizons for Digital Adaptability
Platformless Horizons for Digital AdaptabilityWSO2
 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamUiPathCommunity
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FMESafe Software
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...apidays
 
WSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering DevelopersWSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering DevelopersWSO2
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...Zilliz
 
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot ModelMcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot ModelDeepika Singh
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsNanddeep Nachan
 
CNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In PakistanCNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In Pakistandanishmna97
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MIND CTI
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native ApplicationsWSO2
 
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfRising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfOrbitshub
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoffsammart93
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...apidays
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingEdi Saputra
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDropbox
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...apidays
 

Kürzlich hochgeladen (20)

Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
Platformless Horizons for Digital Adaptability
Platformless Horizons for Digital AdaptabilityPlatformless Horizons for Digital Adaptability
Platformless Horizons for Digital Adaptability
 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
[11].
A network is composed of a number of interconnected units; each unit has input/output (I/O) characteristics and implements a local computation. The output of any unit is determined by its I/O characteristics, its interconnections to other units, and external inputs. Most of the present work in ANNs concerns the development, characterization and extension of mathematical neural network models.

Fig. 1. Components of a neuron (cell body with nucleus, axon, dendrites).

A mathematical neural network model refers to the set of (generally nonlinear, n-dimensional) equations that characterize the overall network operation, as well as the structure and dynamics of the ANN and its training.

An Artificial Neural Network (ANN) is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information [1]. The key element of this paradigm is the novel structure of the information-processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example, and an ANN is configured for a specific application through such a learning process.

Fig. 2. An artificial neuron (dendrite inputs, threshold, output).

Manuscript received December 28, 2003. Dr. S. M. Aqil Burney is Professor in the Department of Computer Science, University of Karachi, Pakistan; phone: 0092-21-9243131 ext. 2447, fax: 0092-21-9243203, Burney@computer.Org, aqil_burney@yahoo.com. Tahseen Ahmed Jilani is Lecturer in the Department of Computer Science, University of Karachi, and a Ph.D. research associate in the Department of Statistics, University of Karachi, Pakistan; tahseenjilani@yahoo.com. Cemal Ardil is with the Azerbaijan National Academy of Aviation, Baku, Azerbaijan; cemalardil@gmail.com.

III. FEEDFORWARD NEURAL NETWORK ARCHITECTURE AND MAPPING

Feedforward neural networks are composed of neurons in which the input layer of neurons is connected to the output layer through one or more layers of intermediate neurons.
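The artificial neuron of Fig. 2 computes a weighted sum of its dendrite inputs and passes it through a threshold. As a minimal sketch (not from the paper; the AND-gate weights below are our own illustration):

```python
def neuron(inputs, weights, bias, activation=lambda net: 1.0 if net >= 0 else 0.0):
    """One artificial neuron: weighted sum of inputs plus bias, passed through an activation."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(net)

# A neuron with a hard threshold acting as a logical AND gate.
and_weights, and_bias = [1.0, 1.0], -1.5
print(neuron([1, 1], and_weights, and_bias))  # fires: 1.0
print(neuron([1, 0], and_weights, and_bias))  # does not fire: 0.0
```

Swapping in a differentiable activation in place of the hard threshold gives the trainable units used in the rest of the paper.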
The training process of neural networks involves adjusting the weights until a desired input/output relationship is obtained [7], [15]. The majority of adaptation learning algorithms are based on the Widrow-Hoff back-propagation algorithm. The mathematical characterization of a multilayer feedforward network is that of a composite application of functions [21]. Each of these functions represents a particular layer and may be specific to individual units in the layer, e.g. all the units in the layer may be required to have the same activation function. The overall mapping is thus characterized by a composite function relating feedforward network inputs to output [12], that is,

O = f_{composite}(i).

Using p mapping layers in a (p+1)-layer feedforward net yields

O = f^{L_p}( f^{L_{p-1}}( ... f^{L_1}(i) ... ) ).

Denote the interconnection weight from unit k in L_1 to unit i in L_2 by w_{ik}^{L_1 L_2}. If the hidden units have a sigmoidal activation function f^{sig}, the output of unit i in L_2 is

O_i^{L_2} = f_i^{sig}( Σ_{k=1}^{n_1} w_{ik}^{L_1 L_2} f_k^{sig}( Σ_{j=1}^{n_0} w_{kj}^{L_0 L_1} i_j ) ).    (1)

The above equation illustrates a neural network with supervision as a composition of non-linear functions. Neural network forecasting can then be described as

Z_{k+1} = NN( Z_k, Z_{k-1}, ..., Z_{k-d}, e_k, e_{k-1}, ..., e_{k-d} ),

where the Z_i are either original observations or pre-processed data, and the e_i are residuals. The processing of the input data and the number of e's and Z's used determine the performance of the forecast [9].
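The composite layer-to-layer mapping above can be sketched as a plain forward pass. The 2-3-1 architecture, the arbitrary weights and the function names below are our own illustration, not the network used in the paper:

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def layer(inputs, weights, activation):
    """One layer: each row of `weights` holds one unit's incoming weights."""
    return [activation(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def feedforward(i, layers):
    """Composite mapping O = f_Lp(... f_L1(i) ...): apply each layer in turn."""
    out = i
    for weights, activation in layers:
        out = layer(out, weights, activation)
    return out

# A 2-3-1 net: sigmoidal hidden layer L1, linear output layer L2 (weights arbitrary).
layers_spec = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], sigmoid),
    ([[0.7, -0.5, 0.2]], lambda net: net),
]
print(feedforward([1.0, 2.0], layers_spec))
```

Each `(weights, activation)` pair plays the role of one f^{L} in the composition; stacking more pairs gives the p-layer case.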
IV. BACKPROPAGATION ALGORITHM

Backpropagation was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks with non-linear, differentiable transfer functions. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate way as defined by the analyst [1]. Networks with biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities; see [13] and [23].

Standard backpropagation is a gradient descent algorithm, in which the network weights are moved along the negative of the gradient of the performance function. For each step we have

X_{k+1} = X_k - a_k g_k,    (2)

where X_k is the current vector of weights and biases, g_k is the current gradient and a_k is the learning rate. There are a number of variations on the basic algorithm that are based on other standard optimization techniques, such as conjugate gradient [6].

Fig. 3. Error backpropagation method for weight adjustment (a recursive operation).

Properly trained backpropagation networks tend to give reasonable answers when presented with inputs that they have never seen: typically, a new input leads to an output similar to the correct output for training inputs that are similar to the inputs being presented [1]. This generalization property makes it possible to train a network on a representative set of input/target pairs and get good results without training the network on all possible input/output pairs.

Instances are represented by many attribute-value pairs; the target function to be learned is defined over instances that can be described by a vector of predefined features. These input attributes may be highly correlated or independent of one another, and input values can be any real values. The target function output may be discrete-valued, real-valued, or a vector of several real- or discrete-valued attributes [7], [16]. The training examples may contain errors, but ANN learning methods are quite robust to noise in the training data. Long training times are acceptable: network training algorithms typically require longer training times than, say, decision tree algorithms.
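The weight update of Eq. (2) can be illustrated on a toy quadratic performance function (our own example; `lr` plays the role of the learning rate a_k):

```python
def gradient_descent_step(x, grad, lr):
    """One step of Eq. (2): move the weights along the negative gradient."""
    return [xi - lr * gi for xi, gi in zip(x, grad)]

# Minimise a toy quadratic performance function F(x) = (x0 - 3)^2 + (x1 + 1)^2,
# whose gradient is (2(x0 - 3), 2(x1 + 1)); the minimum is at (3, -1).
x = [0.0, 0.0]
for _ in range(100):
    g = [2 * (x[0] - 3), 2 * (x[1] + 1)]
    x = gradient_descent_step(x, g, lr=0.1)
print(x)  # converges towards [3, -1]
```

In a real network the gradient g_k is supplied by the backward pass rather than written in closed form.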
Training times can range from a few seconds to many hours, depending on factors such as the number of weights in the network, the number of training examples considered and the settings of the various learning algorithm parameters [20]. Fast evaluation of the learned target function may be required: although ANN learning times are relatively long, evaluating the learned network in order to apply it to a subsequent instance is typically very fast [18], [23].

V. GRADIENT DESCENT TRAINING USING SIGMOIDAL AND LINEAR ACTIVATION FUNCTIONS

For networks having differentiable activation functions there exists a powerful and computationally efficient method, called error backpropagation, for finding the derivatives of an error function with respect to the weights and biases of the network. The gradient descent algorithm is the most commonly used error backpropagation method; see [12], [13] and [22] for a more detailed discussion. If net is the quantity formed by taking the scalar product of the connections (weights) w and the stimulus i, then we define

net = w^T i,

where the units of the network are assumed to have a
differentiable activation function. Given a training set of input-target pairs of the form

{ (i^1, t^1), (i^2, t^2), ..., (i^n, t^n) },

consider the sigmoidal activation function with outputs

O_i = f(net_i) = 1 / (1 + e^{-net_i}).

For the p-th pair, the sum of squared errors over the k output units is

e_p = (1/2) Σ_{j=1}^{k} (t_j^p - O_j^p)^2,    (3)

where O^p is the output obtained from input i^p. The basic idea is to compute d e_p / d w_k and use this quantity to adjust w, where w = (w_1, w_2, ..., w_N) consists of all the weights of the network. By the chain rule, for the k-th element of w,

d e_p / d w_k = (d e_p / d O^p) (d O^p / d net^p) (d net^p / d w_k) = -(t^p - O^p) f'(net^p) i_k^p,    (4)

which is the gradient of the pattern error with respect to weight w_k and forms the basis of the gradient descent training algorithm. Specifically, we assign the weight correction

Δw_k = -η (d e_p / d w_k) = η (t^p - O^p) f'(net^p) i_k^p,    (5)

and for a linear activation function f'(net) = 1, yielding the correction

Δw_k = η (t^p - O^p) i_k^p.    (6)

VI. LEVENBERG-MARQUARDT ALGORITHM

Training a neural network is essentially a non-linear least squares problem, and thus can be solved by a class of non-linear least squares algorithms. Among them, Levenberg-Marquardt is a trust-region based method with a hyper-spherical trust region. The method works extremely well in practice, and is considered the most efficient algorithm for training medium-sized artificial neural networks. Like quasi-Newton methods, the Levenberg-Marquardt algorithm was designed to approach second-order training speed without having to compute the Hessian matrix. When the performance function has the form of a sum of squares, that is,

F(W) = (1/2) e^T e,

then the Hessian matrix can be approximated as

H ≈ J^T J,

and the gradient can be computed as

g = J^T e,

where the Jacobian matrix J contains the first derivatives of the network errors with respect to the weights and biases, and e is a vector of network errors. Newton's method for minimizing the objective function is generated using the well-known recurrence formula

W_{i+1} = W_i - H^{-1} ∇F(W).    (7)

The Gauss-Newton update formula can then be written as

W_{i+1} = W_i - (J_i^T J_i)^{-1} J_i^T e_i,

where, for a network with N weights and errors e_{jp} (output error j on pattern p), the Jacobian is

        [ ∂e_{11}/∂w_1   ∂e_{11}/∂w_2   ...   ∂e_{11}/∂w_N ]
    J = [ ∂e_{21}/∂w_1   ∂e_{21}/∂w_2   ...   ∂e_{21}/∂w_N ]    (8)
        [      ...             ...      ...        ...     ]
        [ ∂e_{kp}/∂w_1   ∂e_{kp}/∂w_2   ...   ∂e_{kp}/∂w_N ]

J^T J should be positive definite; if it is not, we introduce a perturbation that controls the possibility of it being non-positive-definite. Thus

H = J^T J + μI,  and  W_{i+1} = W_i - (J_i^T J_i + μI)^{-1} J_i^T e_i.    (9)
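The gradient descent correction for a single sigmoidal unit derived in Section V, Δw_k = η (t - O) f'(net) i_k, can be sketched as follows. The OR-gate training data, the learning rate and all names below are our own illustration:

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def train_sigmoid_unit(patterns, n_weights, lr=0.5, epochs=2000):
    """Delta-rule training of one sigmoidal unit: w_k += lr * (t - O) * O(1-O) * i_k."""
    w = [0.0] * n_weights
    for _ in range(epochs):
        for i, t in patterns:
            net = sum(wk * ik for wk, ik in zip(w, i))
            O = sigmoid(net)
            # For the logistic sigmoid, f'(net) = O * (1 - O).
            for k in range(n_weights):
                w[k] += lr * (t - O) * O * (1 - O) * i[k]
    return w

# Learn logical OR; the last input component is a constant bias input.
patterns = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]
w = train_sigmoid_unit(patterns, 3)
outputs = [sigmoid(sum(wk * ik for wk, ik in zip(w, i))) for i, _ in patterns]
print([round(o) for o in outputs])  # [0, 1, 1, 1]
```

This is the per-pattern (online) form of Eq. (5); summing the corrections over all patterns before updating gives the batch form.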
In the neural computing context the quantity μ is called the learning parameter; it ensures that J^T J + μI is positive definite, and it is always possible to choose μ sufficiently large to ensure a descent step. The learning parameter is decreased as the iterative process approaches a minimum. Thus Levenberg-Marquardt optimization involves the inversion of the square matrix J^T J + μI, which requires a large memory space to store the Jacobian matrix and the approximated Hessian matrix of order N x N in each iteration; for large networks the Levenberg-Marquardt algorithm is therefore not suitable.

Fig. 4. (a) Typical error function for linear networks. (b) Gradient descent used to search the space of possible weight vectors (current weight W, optimal weight W0, global minimum).

VII. KARACHI STOCK EXCHANGE SHARES

Stock exchange share prices exhibit very high noise and significant non-stationarity. Here we consider the prediction of the direction of change in the share price for the next business day. A number of researchers have applied neural network technology to exchange rate prediction and trading; this is the very first time that the ANN forecasting technique has been applied at the K.S.E.

VIII. NETWORK TRAINING AND RESULTS

We presented the input data in concurrent patterns; this computation can be performed using [12] or [17]. For programming the ANN in C/C++ we also used the approach of [3] or [25]. The network performance value at each epoch is presented in Fig. 5(a); we set the network performance measurement goal at 1E-06. Fig. 5(b) shows the Hinton plot [10], [12] of the input layer weights: white boxes are positive weights and grey boxes are negative weights.
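The Levenberg-Marquardt update of Eq. (9) can be illustrated on a deliberately tiny problem: fitting a straight line y = a*x + b by least squares, so that J^T J is 2 x 2 and can be inverted by hand. This is our own dependency-free sketch, not the paper's network code:

```python
def lm_step(params, xs, ys, mu):
    """One Levenberg-Marquardt step for fitting y = a*x + b by least squares.

    Errors are e_i = a*x_i + b - y_i, so the Jacobian rows are
    (de_i/da, de_i/db) = (x_i, 1), and the update is
    w <- w - (J^T J + mu*I)^-1 J^T e, as in Eq. (9).
    """
    a, b = params
    e = [a * x + b - y for x, y in zip(xs, ys)]
    # Accumulate J^T J and J^T e explicitly (J has rows [x_i, 1]).
    jtj = [[sum(x * x for x in xs), sum(xs)],
           [sum(xs), float(len(xs))]]
    jte = [sum(x * ei for x, ei in zip(xs, e)), sum(e)]
    # Damped matrix A = J^T J + mu*I, inverted as a 2x2.
    a00, a01 = jtj[0][0] + mu, jtj[0][1]
    a10, a11 = jtj[1][0], jtj[1][1] + mu
    det = a00 * a11 - a01 * a10
    da = ( a11 * jte[0] - a01 * jte[1]) / det
    db = (-a10 * jte[0] + a00 * jte[1]) / det
    return a - da, b - db

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # exactly y = 2x + 1
params, mu = (0.0, 0.0), 1e-3
for _ in range(20):
    params = lm_step(params, xs, ys, mu)
print(params)  # approaches (2.0, 1.0)
```

A full implementation would also adapt μ between steps, decreasing it when a step reduces the error and increasing it otherwise, as described above; the fixed small μ here is enough for this linear toy problem.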
The data set used has been taken from the Karachi Stock Exchange Library and from KSE brokers; we have taken 400 data points for training of the neural networks. For prior work on the Karachi Stock Exchange see [2].

Fig. 5. (a) Plot of network training: performance function versus epoch. (b) Input weights Hinton plot.

We trained the network many times, and the performance goal was achieved at different epochs; the gradient of the error surface is also shown in Table 1. BP nets initialized with small weights can develop large weights only after the number of updates is large; we have used only 100 iterations for training of the network. Early stopping combined with BP is effective enough that large nets can be trained, and nets with large weights have more representation power [20]. For further discussion see [7], [15], [18] and [24]. Finally, the results of forecasting for 10 days are shown below. It is best to use only the next-day forecast because of the high variability at the Karachi Stock Exchange.
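The paper does not spell out its pre-processing step; one common choice, assumed here purely for illustration, is min-max scaling of the share-price series into a small range before training, with the inverse transform applied to the network's forecasts:

```python
def minmax_scale(series, lo=0.1, hi=0.9):
    """Map a price series into [lo, hi]; also return the (min, max) pair
    needed to invert the transform for the network's forecasts."""
    s_min, s_max = min(series), max(series)
    span = s_max - s_min
    scaled = [lo + (hi - lo) * (s - s_min) / span for s in series]
    return scaled, (s_min, s_max)

def minmax_invert(scaled_value, bounds, lo=0.1, hi=0.9):
    """Undo minmax_scale: map a network output back to the price scale."""
    s_min, s_max = bounds
    return s_min + (scaled_value - lo) * (s_max - s_min) / (hi - lo)

# A few of the share prices from Table 2, scaled and round-tripped.
prices = [17.8, 17.1, 17.15, 16.9, 17.6, 16.45]
scaled, bounds = minmax_scale(prices)
print(round(minmax_invert(scaled[0], bounds), 2))  # 17.8
```

Keeping the targets inside (0.1, 0.9) rather than (0, 1) avoids driving sigmoidal outputs into their saturated regions during training.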
TABLE 1
NETWORK PERFORMANCE GOAL = 1E-06, ALONG WITH GRADIENT OF ERROR FUNCTION 1E-12

Training   Goal achieved at epoch   M.S.E      Gradient
1          50                       3.80E-10   0.03409
2          10                       7.67E-07   2.01321
3          12                       3.08E-08   0.767221
4          50                       3.77E-10   0.110359
5          17                       3.64E-08   0.110359
6          34                       1.51E-10   0.034858
7          50                       1.18E-09   0.038043
8          33                       5.08E-09   0.807517
9          50                       3.80E-10   0.03409
10         39                       1.29E-16   5.93E-06

TABLE 2
A COMPARISON BETWEEN TARGETS AND NETWORK OUTPUTS

Date        Targets   ANN Output   Difference
7/16/2001   17.80     17.7991      0.0009
7/17/2001   17.10     17.0992      0.0008
7/18/2001   17.15     17.1492      0.0008
7/19/2001   17.15     17.1491      0.0009
7/20/2001   16.90     16.8992      0.0008
7/23/2001   16.90     16.8996      0.0004
7/24/2001   17.60     17.5992      0.0008
7/25/2001   16.95     16.9492      0.0008
7/26/2001   16.45     16.4492      0.0008
7/27/2001   16.45     16.4492      0.0008

IX. CONCLUSIONS

We have seen that neural networks produce fairly accurate forecasts when compared with weighted exponential smoothing and ARIMA methods. Generalization performance is reasonable, indicating confidence in the training results [1]. There are a number of training algorithms that can be used with the backpropagation method, but Levenberg-Marquardt has proven results, specifically for a sum-of-squares error function, in financial share rate forecasting. The comparative ability of the different approaches is usually problem dependent. It would be interesting to find out how other architectures, such as regression neural networks, Bayesian neural networks and SOFM, would behave. New ways of preprocessing and input selection will be part of our future work, and we hope to extend our work to recurrent neural networks, Bayesian neural networks and iterative principal component neural networks.

REFERENCES

[1] S. M. Aqil Burney and Tahseen A. Jilani (2002). "Time series forecasting using artificial neural network methods for Karachi Stock Exchange". A project at the Department of Computer Science, University of Karachi.
[2] S. M. Aqil Burney. "Measures of central tendency in News-MCB stock exchange price index", pp. 12-31. Vital Information Services (Pvt) Ltd., Karachi, c/o Department of Computer Science, University of Karachi.
[3] A. Blum (1992). "Neural Networks in C++". John Wiley and Sons, New York.
[4] C. Brooks (1997). "Linear and non-linear (non-)forecastability of high-frequency exchange rates". Journal of Forecasting, 16:125-145.
[5] M. Caudill and C. Butler (1994). "Understanding Neural Networks". MIT Press.
[6] Y. Chauvin and D. Rumelhart (1995). "Backpropagation: Theory, Architecture, and Applications". Lawrence Erlbaum Associates.
[7] C.-M. Kuan and T. Liu (1995). "Forecasting exchange rates using feedforward and recurrent neural networks". Journal of Applied Econometrics, pp. 347-364.
[8] C. Lee Giles, S. Lawrence and A. C. Tsoi (2001). "Time series prediction using a recurrent neural network and grammatical inference". Machine Learning, Vol. 44, No. 1/2, July/August, pp. 161-183.
[9] C. Lee Giles, S. Lawrence and A. C. Tsoi (1997). "Rule inference for financial prediction using recurrent neural networks". Proceedings of the IEEE/IAFE Conference on Computational Intelligence for Financial Engineering (CIFEr), IEEE, Piscataway, NJ, pp. 253-259.
[10] C. M. Bishop (1995). "Neural Networks for Pattern Recognition". Clarendon Press, Oxford.
[11] R. Dixon and F. Bevan (1994). "Neural Network Tutor". Advance Technology Transfer Group.
[12] M. T. Hagan and H. B. Demuth (1996). "Neural Network Design". Brooks/Cole Publishing Company.
[13] R. Hecht-Nielsen (1989). "Theory of the backpropagation neural network". International Joint Conference on Neural Networks, IEEE, New York, pp. 593-605.
[14] K. Hornik, M. Stinchcombe and H. White (1990). "Using multilayer feedforward networks for universal approximation". Neural Networks 3, pp. 551-560.
[15] I. Kaastra and M. S. Boyd (1995). "Forecasting futures trading volume using neural networks". Journal of Futures Markets, pp. 953-970.
[16] C. M. Kuan and T. Liu (1995). "Forecasting exchange rates using feedforward and recurrent neural networks". Journal of Applied Econometrics, 10:347-364.
[17] L. Liden (2000). "Neural network simulation package". Department of Cognitive and Neural Systems, Boston University.
[18] A. F. Atiya, S. M. El-Shoura and S. I. Shaheen (1999). "A comparison between neural network forecasting techniques, case study: river flow forecasting". IEEE Transactions on Neural Networks, Vol. 10, No. 2, March.
[19] J. Moody (1994). "Prediction risk and architecture selection for neural networks". In: From Statistics to Neural Networks: Theory and Pattern Recognition Applications, NATO ASI Series, Springer-Verlag.
[20] R. Caruana, S. Lawrence and C. Lee Giles (2000). "Overfitting in neural nets: backpropagation, conjugate gradient, and early stopping". Neural Information Processing Systems, Denver, Colorado, November 28-30.
[21] B. D. Ripley (1994). "Neural networks and related methods for classification". Journal of the Royal Statistical Society, B 56(3), 409-456.
[22] R. J. Schalkoff (1997). "Artificial Neural Networks". McGraw-Hill International Edition, Computer Science Series.
[23] N. Schraudolph and T. J. Sejnowski (1996). "Tempering backpropagation networks: not all weights are created equal". In Advances in Neural Information Processing Systems 8, MIT Press, Cambridge.
[24] C.-H. Chu and D. Widjaja (1994). "Neural network system for forecasting method selection". Decision Support Systems (August), pp. 13-24.
[25] T. S. Welstead (1994). "Neural Network and Fuzzy Logic Applications in C/C++". John Wiley and Sons, Inc., New York.
S. M. Aqil Burney received the B.Sc. (Physics, Mathematics, Statistics) from D. J. College, affiliated with the University of Karachi, in 1970, and a first-class-first M.Sc. (Statistics) from Karachi University in 1972, followed by an M.Phil. in 1983 with specialization in computing, simulation and stochastic modeling for risk management. Dr. Burney received the Ph.D. degree in Mathematics from Strathclyde University, Glasgow, with specialization in estimation, modeling and simulation of multivariate time series models using an algorithmic approach with software development. He is currently Professor and an approved supervisor in Computer Science and Statistics by the Higher Education Commission, Government of Pakistan. He is also the project director of the Umair Basha Institute of Information Technology (UBIT) and a member of various higher academic boards of different universities of Pakistan. His research interests include AI, soft computing, neural networks, fuzzy logic, data mining, statistics, simulation and stochastic modeling of mobile communication systems and networks, and network security. He is the author of three books and various technical reports, has supervised more than 100 software/information technology projects at the Masters level, and has been project director of various projects funded by the Government of Pakistan. He is a member of IEEE (USA) and ACM (USA), a fellow of the Royal Statistical Society, United Kingdom, and a member of the Islamic Society of Statistical Sciences. He has been teaching since 1973 in various universities in the fields of econometrics, bio-statistics, statistics, mathematics and computer science, and has vast education management experience at the university level. Dr. Burney has received appreciations and awards for his research and as an educationist.

Tahseen A. Jilani received the B.Sc. (Computer Science, Mathematics, Statistics) from the University of Karachi in 1998 and a first-class-second M.Sc. (Statistics) from Karachi University in 2001. Since 2003 he has been a Ph.D. research fellow in the Department of Computer Science, University of Karachi. His research interests include AI, neural networks, soft computing, fuzzy logic, statistical data mining and simulation. He has been teaching since 2002 in the fields of Statistics, Mathematics and Computer Science.