Adaptive equalization
1. ADAPTIVE CHANNEL EQUALIZATION
Kamal Bhatt
M.Tech, Electronics & Communication Engineering, ID-44036
College of Technology, Pantnagar
G.B. Pant University of Agriculture and Technology, Pantnagar
2. NEURAL NETWORK
• Neural networks are simplified models of biological neuron systems.
• Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes', each of which contains an 'activation function'.
• Patterns are presented to the network via the 'input layer', which communicates to one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'.
• The hidden layers then link to an 'output layer' where the answer is output.
3. MODEL OF ARTIFICIAL NEURON
• An appropriate model/simulation of the nervous system should be able to produce similar responses and behaviours in artificial systems.
• The nervous system is built from relatively simple units, the neurons, so copying their behaviour and functionality should be the solution.
4. LEARNING IN A SIMPLE NEURON
Perceptron Learning Algorithm (a short code sketch follows below):
1. Initialize weights.
2. Present a pattern and target output.
3. Compute output: $y = f\left[\sum_{i=0}^{n} w_i x_i\right]$
4. Update weights: $w_i(t+1) = w_i(t) + \Delta w_i$
Repeat from step 2 until the error reaches an acceptable level.
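A minimal sketch of this procedure in Python (NumPy); the sign activation, learning rate and epoch limit are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def train_perceptron(patterns, targets, lr=0.1, epochs=100):
    """Perceptron learning: y = f[sum_i w_i * x_i], with f = sign.
    patterns: (N, d) array; targets: (N,) array of +/-1 labels."""
    # Augment inputs with a constant 1 so w[0] acts as the bias (x_0 = 1).
    X = np.hstack([np.ones((patterns.shape[0], 1)), patterns])
    w = np.zeros(X.shape[1])          # step 1: initialize weights
    for _ in range(epochs):
        errors = 0
        for x, d in zip(X, targets):  # step 2: present pattern and target
            y = np.sign(w @ x)        # step 3: compute output
            w += lr * (d - y) * x     # step 4: update weights (delta rule)
            errors += int(y != d)
        if errors == 0:               # stop once the error is acceptable
            break
    return w
```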
5. NEURAL NETWORK ARCHITECTURE
• An artificial neural network is defined as a data processing system consisting of a large number of interconnected processing elements or artificial neurons.
• There are three fundamentally different classes of neural networks:
• Single-layer feedforward networks.
• Multilayer feedforward networks.
• Recurrent networks.
6. Application
The tasks to which artificial neural networks are applied tend to fall within the following broad categories:
• Function approximation, or regression analysis, including time series prediction and modeling.
• Classification, including pattern and sequence recognition, novelty detection and sequential decision making.
• Data processing, including filtering, clustering, blind signal separation and compression.
7. Equalization History
• The LMS algorithm by Widrow and Hoff in 1960 paved the way for the development of adaptive filters used for equalisation.
• Lucky used this algorithm in 1965 to design adaptive channel equalisers. The Maximum Likelihood Sequence Estimator (MLSE) equaliser and its Viterbi implementation followed in the 1970s.
• Multilayer perceptron (MLP) based symbol-by-symbol equalisers were developed in 1990.
8. • During 1989 to 1995, several efficient nonlinear artificial neural network equalizer structures for channel equalization were proposed, including the Chebyshev Neural Network and the Functional Link ANN.
• In 2002 Kevin M. Passino described Optimization Foraging Theory in the article "Biomimicry of Bacterial Foraging".
• More recently, in 2008, a rank-based statistics approach known as the Wilcoxon learning method was proposed for signal processing applications to mitigate linear and nonlinear learning problems.
10. Equalizers
• Adaptive channel equalizers have played an important role in digital communication systems.
• An equalizer works like an inverse filter placed at the front end of the receiver. Its transfer function is the inverse of the transfer function of the associated channel, which enables it to reduce the error between the desired and estimated signals.
• This is achieved through a process of training, during which the transmitter transmits a fixed data sequence and the receiver holds a copy of the same.
11. • We use equalizers to compensate received signals that are corrupted by the noise, interference and signal power attenuation introduced by communication channels during transmission.
• Linear transversal filters (LTF) are commonly used in the design of channel equalizers. Linear equalizers fail to work well when the transmitted signals have encountered severe nonlinear distortion.
• A neural network (NN) is capable of complicated mappings from input to output signals, which makes NN-based equalizers a potentially suitable solution for dealing with nonlinear channel distortion. A toy simulation of such a training scenario is sketched below.
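To make the training scenario concrete, here is a hypothetical channel simulation in Python: a linear ISI channel followed by a memoryless nonlinearity and additive noise, plus the fixed training sequence shared by transmitter and receiver. The tap values, the polynomial nonlinearity and the SNR are illustrative assumptions, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def nonlinear_channel(symbols, h=(0.26, 0.93, 0.26), snr_db=20):
    """Toy channel model: linear ISI (FIR taps h), then a memoryless
    nonlinearity, then additive white Gaussian noise."""
    z = np.convolve(symbols, h, mode="same")      # linear distortion (ISI)
    z = z + 0.2 * z**2 - 0.1 * z**3               # illustrative nonlinearity
    noise_power = np.var(z) / 10**(snr_db / 10)
    return z + rng.normal(scale=np.sqrt(noise_power), size=z.shape)

# Training: transmitter and receiver share the same fixed data sequence.
tx = rng.choice([-1.0, 1.0], size=1000)           # known BPSK training data
rx = nonlinear_channel(tx)                        # what the equalizer sees
```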
13. • The problem of equalization may be treated as a problem of signal classification, so neural networks (NN) are quite promising candidates because they can produce arbitrarily complex decision regions.
• Studies performed during the last decade have established the superiority of neural equalizers over traditional equalizers under high nonlinear distortion and rapidly varying signals.
• Several different neural equalizer architectures have been developed, mostly combinations of a conventional linear transversal equalizer (LTE) and a neural network.
• The LTE eliminates linear distortions such as ISI, so the NN can focus on compensating the nonlinearities. The following structures have been studied: an LTE and a multilayer perceptron (MLP); an LTE and a radial basis function (RBF) network; an LTE and a recurrent neural network.
14. • MLP networks are sometimes plagued by long training times and may be trapped at bad local minima.
• RBF networks often provide a faster and more robust solution to the equalization problem. In addition, the RBF neural network has a structure similar to the optimal Bayesian symbol decision, so the RBF is an ideal processing structure for implementing the optimal Bayesian equalizer.
• RBF performance is better than that of the LTE and MLP equalizers. Several learning algorithms have been proposed to update the RBF parameters. The most popular consists of an unsupervised learning rule for the centers of the hidden neurons and a supervised learning rule for the weights of the output neurons.
15. • The centers are generally updated using the k-means clustering algorithm, which consists of computing the squared distance between the input vector and the centers, choosing the minimum squared distance, and moving the corresponding center closer to the input vector.
• The k-means algorithm has some potential problems: the classification depends on the initial values of the centers, on the type of distance chosen, and on the number of classes. If a center is inappropriately chosen it may never be updated, and so may never represent a class.
• Here a new competitive method is proposed to update the RBF centers, one which rewards the winning neuron and penalizes the second winner, named the rival. A sketch of this update follows below.
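A minimal sketch of such a rival-penalized update in Python, assuming the winner is moved toward the input and the second-best center (the rival) is pushed slightly away; the learning rates alpha and beta are illustrative, with the rival's de-learning rate kept much smaller than the winner's:

```python
import numpy as np

def update_centers_rpcl(centers, x, alpha=0.05, beta=0.005):
    """One rival-penalized update: reward the winning centre by moving it
    toward the input x, and penalize the second winner (the rival) by
    moving it away from x."""
    d2 = np.sum((centers - x)**2, axis=1)             # squared distances to x
    winner, rival = np.argsort(d2)[:2]                # best and second-best
    centers[winner] += alpha * (x - centers[winner])  # reward the winner
    centers[rival]  -= beta  * (x - centers[rival])   # penalize the rival
    return centers
```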
16. Gradient Based Adaptive Algorithm
An adaptive algorithm is a procedure for adjusting the
parameters of an adaptive filter to minimize a cost function
chosen for the task at hand.
17. In this case, the parameters in $\mathbf{w}(t)$ correspond to the impulse response values of the filter at time t. We can write the output signal $y(t)$ as
$$y(t) = \mathbf{w}^{T}(t)\,\mathbf{s}(t)$$
The general form of an adaptive FIR filtering algorithm is
$$\mathbf{w}(t+1) = \mathbf{w}(t) + \mu(t)\,G\big(e(t), \mathbf{s}(t), \Phi(t)\big)$$
where $G(\cdot)$ is a particular vector-valued nonlinear function (which depends on the cost function chosen), $\mu(t)$ is a step size parameter, $e(t)$ and $\mathbf{s}(t)$ are the error signal and input signal vector, respectively, and $\Phi(t)$ is a vector of states that stores pertinent information about the characteristics of the input and error signals.
18. The Mean-Squared Error (MSE) cost function can be defined as
$$J_{\mathrm{MSE}}(t) = \tfrac{1}{2}\,E\big[e^{2}(t)\big]$$
$\mathbf{w}_{\mathrm{MSE}}(t)$ can be found from the solution to the system of equations
$$\frac{\partial J_{\mathrm{MSE}}(t)}{\partial \mathbf{w}(t)} = 0$$
The method of steepest descent is an optimization procedure for minimizing the cost function $J(t)$ with respect to a set of adjustable parameters $\mathbf{w}(t)$. This procedure adjusts each parameter of the system according to the relationship
$$\mathbf{w}(t+1) = \mathbf{w}(t) - \mu(t)\,\frac{\partial J(t)}{\partial \mathbf{w}(t)}$$
20. LMS ALGORITHM
• In the family of stochastic gradient algorithms.
• An approximation of the steepest-descent method.
• Based on the Minimum Mean Square Error (MMSE) criterion.
• An adaptive process containing two input signals:
  1.) A filtering process, producing the output signal.
  2.) The desired signal (training sequence).
• Adaptive process: recursive adjustment of the filter tap weights.
21. LMS ALGORITHM STEPS
• Filter output: $y(n) = \sum_{k=0}^{M-1} w_k^{*}(n)\, u(n-k)$
• Estimation error: $e(n) = d(n) - y(n)$
• Tap-weight adaptation: $w_k(n+1) = w_k(n) + \mu\, u(n-k)\, e^{*}(n)$
In words: updated tap-weight vector = old tap-weight vector + learning-rate parameter × tap-input vector × error signal. A code sketch follows below.
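A minimal Python sketch of these three steps for real-valued signals (so the conjugates drop out); the filter length M and step size mu are illustrative choices, and the decision delay is omitted for simplicity:

```python
import numpy as np

def lms_equalizer(u, d, M=11, mu=0.01):
    """LMS adaptive equalizer for real-valued signals.
    u: received (channel-distorted) samples; d: desired training symbols.
    Implements y(n) = w^T u(n), e(n) = d(n) - y(n),
    w(n+1) = w(n) + mu * u(n) * e(n)."""
    w = np.zeros(M)
    y = np.zeros(len(u))
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]   # tap inputs u(n), ..., u(n-M+1)
        y[n] = w @ u_n                   # filter output
        e[n] = d[n] - y[n]               # estimation error
        w += mu * u_n * e[n]             # tap-weight adaptation
    return w, y, e
```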
22. Recursive Least Square Algorithm
The recursive least squares (RLS) algorithm is another algorithm for determining the coefficients of an adaptive filter. In contrast to the LMS algorithm, the RLS algorithm uses information from all past input samples (and not only from the current tap-input samples) to estimate the (inverse of the) autocorrelation matrix of the input vector.
To decrease the influence of input samples from the far past, a weighting factor for the influence of each sample is used. The cost function can be represented as
$$C(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\, e^{2}(i), \qquad 0 < \lambda \le 1$$
where $\lambda$ is the forgetting factor.
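A sketch of the RLS update in Python, where P(n) tracks the inverse of the exponentially weighted input autocorrelation matrix; the forgetting factor lam, filter length M and initialization constant delta are illustrative assumptions:

```python
import numpy as np

def rls_equalizer(u, d, M=11, lam=0.99, delta=100.0):
    """RLS adaptive equalizer (real-valued). lam is the forgetting factor
    that discounts samples from the far past; P is initialized to
    delta * I as a rough inverse-correlation estimate."""
    w = np.zeros(M)
    P = delta * np.eye(M)
    for n in range(M - 1, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]          # tap-input vector
        k = P @ u_n / (lam + u_n @ P @ u_n)     # gain vector
        e = d[n] - w @ u_n                      # a priori estimation error
        w += k * e                              # coefficient update
        P = (P - np.outer(k, u_n @ P)) / lam    # update inverse correlation
    return w
```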
25. Multilayer Perceptron Network
In 1958, Rosenblatt demonstrated some practical applications using the perceptron. The perceptron is a single-level connection of McCulloch-Pitts neurons, called a single-layer feedforward network. The network is capable of linearly separating the input vectors into pattern classes by a hyperplane.
Similarly, many perceptrons can be connected in layers to provide an MLP network; the input signal propagates through the network in a forward direction, on a layer-by-layer basis. This network has been applied successfully to solve diverse problems.
27. Generally the MLP is trained using the popular error back-propagation algorithm. $s_1, s_2, \ldots, s_n$ represent the inputs to the network, and $y_k$ represents the output of the final layer of the neural network. The connecting weights between the input and the first hidden layer, the first and second hidden layers, and the second hidden layer and the output layer are represented by the respective weight matrices. The final output of the MLP may then be expressed as a nested composition of these weighted sums, each passed through its layer's activation function.
28. The final output $y_k(t)$ at the output of neuron k is compared with the desired output $d(t)$, and the resulting error signal $e(t)$ is obtained as
$$e(t) = d(t) - y_k(t)$$
The instantaneous value of the total error energy is obtained by summing all error signals over all neurons in the output layer, that is
$$\xi(t) = \tfrac{1}{2} \sum_{k} e_k^{2}(t)$$
This error signal is used to update the weights and thresholds of the hidden layers as well as the output layer; the resulting updates are sketched below.
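A minimal sketch of such an MLP equalizer trained by back-propagation, in Python; the single hidden layer, tanh activations, layer sizes and learning rate are illustrative assumptions, not the slides' exact network:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp_equalizer(u, d, M=5, hidden=9, eta=0.05, epochs=20):
    """One-hidden-layer MLP equalizer trained by error back-propagation,
    minimizing the instantaneous error energy 0.5 * e(t)^2.
    Input: the last M received samples; output: estimate of d(n)."""
    W1 = rng.normal(scale=0.5, size=(hidden, M))   # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=hidden)        # hidden -> output weights
    for _ in range(epochs):
        for n in range(M - 1, len(u)):
            x = u[n - M + 1:n + 1][::-1]
            h = np.tanh(W1 @ x)                    # hidden-layer activations
            y = np.tanh(W2 @ h)                    # network output y_k(t)
            e = d[n] - y                           # e(t) = d(t) - y_k(t)
            # Back-propagate: local gradients (deltas) for each layer,
            # using tanh'(v) = 1 - tanh(v)^2.
            delta_out = e * (1.0 - y**2)
            delta_hid = (1.0 - h**2) * (W2 * delta_out)
            W2 += eta * delta_out * h              # output-layer update
            W1 += eta * np.outer(delta_hid, x)     # hidden-layer update
    return W1, W2
```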
30. Functional Link Artificial Neural Network
FLANN is a single-layer ANN in which the original input pattern is expanded to a higher-dimensional space using nonlinear functions, which provides arbitrarily complex decision regions by generating nonlinear decision boundaries.
The functional expansion block enhances the input pattern so that it can be used for the channel equalization process.
Each element undergoes nonlinear expansion to form M elements, such that the resultant matrix has dimension N×M. The functional expansion of the element $x_k$ by power series expansion is carried out as
$$x_k \rightarrow \{\,x_k,\; x_k^{2},\; \ldots,\; x_k^{M}\,\}$$
32. At the t-th iteration the error signal $e(t)$ can be computed as
$$e(t) = d(t) - y(t)$$
and the weight vector can be updated by the least mean square (LMS) algorithm as
$$\mathbf{w}(t+1) = \mathbf{w}(t) + \mu\, e(t)\, \boldsymbol{\varphi}(t)$$
where $\boldsymbol{\varphi}(t)$ is the functionally expanded input vector. A combined FLANN sketch follows below.
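Putting the last two slides together, a minimal FLANN sketch in Python: a power-series expansion block feeding a linear combiner adapted with the LMS rule. The input window N, expansion order M and step size mu are illustrative choices:

```python
import numpy as np

def power_expansion(x, M=3):
    """Expand each input element x_k into [x_k, x_k^2, ..., x_k^M]
    (power-series functional expansion; N inputs -> N*M features)."""
    return np.concatenate([x**p for p in range(1, M + 1)])

def train_flann(u, d, N=5, M=3, mu=0.01):
    """Single-layer FLANN equalizer: functional expansion followed by a
    linear combiner adapted with w(t+1) = w(t) + mu * e(t) * phi(t)."""
    w = np.zeros(N * M)
    for n in range(N - 1, len(u)):
        phi = power_expansion(u[n - N + 1:n + 1][::-1], M)  # expanded input
        e = d[n] - w @ phi                                  # e(t) = d(t) - y(t)
        w += mu * e * phi                                   # LMS weight update
    return w
```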
33. BER performance of the FLANN equalizer compared with LMS and RLS based equalizers
34. Chebyshev Artificial Neural Network
The Chebyshev artificial neural network (ChNN) is similar to the FLANN. The difference is that in a FLANN the input signal is expanded to a higher dimension using functional expansion, while in the ChNN the input is expanded using Chebyshev polynomials. As in the FLANN, the ChNN weights are updated by the LMS algorithm. The Chebyshev polynomials are generated using the recursive formula
$$T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x), \qquad T_0(x) = 1, \quad T_1(x) = x$$
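A small Python sketch of this expansion using the recursion above; the expansion order M is an illustrative choice, and the resulting feature vector can feed the same LMS combiner as in the FLANN sketch:

```python
import numpy as np

def chebyshev_expansion(x, M=4):
    """Expand each input element with Chebyshev polynomials via the
    recursion T_0(x) = 1, T_1(x) = x, T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)."""
    T = [np.ones_like(x), x]
    for _ in range(2, M + 1):
        T.append(2.0 * x * T[-1] - T[-2])   # recursive generation
    return np.concatenate(T[1:])            # drop the constant T_0 term
```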
36. BER performance of the ChNN equalizer compared with FLANN, LMS and RLS based equalizers
38. The centres of the RBF network are updated using the k-means clustering algorithm. This RBF structure can be extended to multidimensional outputs as well. The Gaussian kernel is the most popular form of kernel function for equalization applications; it can be represented as
$$\varphi_i(\mathbf{x}) = \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{c}_i \rVert^{2}}{2\sigma_r^{2}}\right)$$
This network can implement a mapping $F_{\mathrm{rbf}} : \mathbb{R}^m \rightarrow \mathbb{R}$ by the function
$$F_{\mathrm{rbf}}(\mathbf{x}) = \sum_{i} w_i\, \varphi_i(\mathbf{x})$$
Training of the RBF network involves setting the parameters for the centres $\mathbf{c}_i$, the spread $\sigma_r$ and the linear weights $w_i$. The RBF spread parameter $\sigma_r^2$ is set to the channel noise variance $\sigma_n^2$; this provides the optimum RBF network as an equaliser.
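A minimal sketch of this RBF mapping in Python; the centers and weights are assumed to have been trained already (e.g. centers by k-means and weights by a supervised rule, per the slides):

```python
import numpy as np

def rbf_output(x, centers, weights, sigma_r):
    """RBF mapping F_rbf: R^m -> R with Gaussian kernels:
    F(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * sigma_r^2)).
    Following the slides, sigma_r^2 would be set to the channel
    noise variance sigma_n^2."""
    d2 = np.sum((centers - x)**2, axis=1)     # squared distances ||x - c_i||^2
    phi = np.exp(-d2 / (2.0 * sigma_r**2))    # Gaussian kernel responses
    return weights @ phi                       # linear output layer
```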
40. Conclusion
• We observed that RLS provides a faster convergence rate than the LMS equalizer.
• We observed that the MLP equalizer is a feed-forward network trained using the BP algorithm; it performed better than the linear equalizer, but it has the drawback of a slow convergence rate, depending upon the number of nodes and layers.
• An optimal equalizer based on the maximum a-posteriori probability (MAP) criterion can be implemented using a radial basis function (RBF) network.
• The RBF equalizer mitigates all the ISI, CCI and BN interference and provides the minimum BER plot. But it has one drawback: if the input is increased, the number of centres of the network increases and makes the network more complicated.
41. REFERENCES
• Haykin, S., "Adaptive Filter Theory", Prentice Hall, 2005.
• Haykin, S., "Neural Networks", PHI, 2003.
• Kavita Burse, R. N. Yadav, and S. C. Shrivastava, "Channel Equalization Using Neural Networks: A Review", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 40, No. 3, May 2010.
• Jagdish C. Patra, Ranendra N. Pal, Rameswar Baliarsingh, and Ganapati Panda, "Nonlinear Channel Equalization for QAM Constellation Using Artificial Neural Network", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 29, No. 2, April 1999.
• Amalendu Patnaik, Dimitrios E. Anagnostou, Rabindra K. Mishra, Christos G. Christodoulou, and J. C. Lyke, "Applications of Neural Networks in Wireless Communications", IEEE Antennas and Propagation Magazine, Vol. 46, No. 3, June 2004.
• R. Rojas, "Neural Networks", Springer-Verlag, Berlin, 1996.
• http://www.geocities.com/SiliconValley/Lakes/6007/Neural.htm