ADAPTIVE CHANNEL EQUALIZATION

College of Technology, Pantnagar
G.B. Pant University of Agriculture and Technology, Pantnagar

Kamal Bhatt
M.Tech - Electronics & Communication Engg.
ID-44036
NEURAL NETWORK
Neural networks are simplified models of biological neuron systems.

Neural networks are typically organized in layers. Layers are
made up of a number of interconnected 'nodes', each of which
contains an 'activation function'.

Patterns are presented to the network via the 'input layer', which
communicates with one or more 'hidden layers' where the actual
processing is done via a system of weighted 'connections'.

The hidden layers then link to an 'output layer' where the answer
is output.
MODEL OF ARTIFICIAL NEURON
‱ An appropriate model/simulation of the nervous system should be
able to produce similar responses and behaviours in artificial
systems.
‱ The nervous system is built from relatively simple units, the
neurons, so copying their behaviour and functionality should be the
solution.
LEARNING IN A SIMPLE NEURON

‱ Perceptron Learning Algorithm:

1. Initialize weights
2. Present a pattern and target output
3. Compute output:   y = f[ ÎŁ_{i=0}^{2} w_i x_i ]
4. Update weights:   w_i(t+1) = w_i(t) + Δw_i

Repeat starting at 2 until an acceptable level of error is reached
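The steps above can be sketched in a short NumPy example (a minimal sketch: the bipolar AND task, learning rate and epoch count are illustrative assumptions, not from the slides):

```python
import numpy as np

def perceptron_train(patterns, targets, lr=0.1, epochs=100):
    """Steps 1-4 above: train a single hard-limiter perceptron."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=patterns.shape[1] + 1)   # step 1: initialize weights (w_0 is the bias)
    X = np.hstack([np.ones((len(patterns), 1)), patterns])  # prepend constant input x_0 = 1
    for _ in range(epochs):
        errors = 0
        for x, d in zip(X, targets):          # step 2: present a pattern and target output
            y = 1 if w @ x >= 0 else -1       # step 3: output y = f[sum_i w_i x_i], f = sign
            if y != d:
                w += lr * (d - y) * x         # step 4: w_i(t+1) = w_i(t) + delta w_i
                errors += 1
        if errors == 0:                       # repeat from step 2 until no errors remain
            break
    return w

# illustrative linearly separable task (bipolar AND)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
d = np.array([-1, -1, -1, 1])
w = perceptron_train(X, d)
```

Because the task is linearly separable, the perceptron convergence theorem guarantees the loop terminates with zero classification errors.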
NEURAL NETWORK ARCHITECTURE
An artificial neural network is defined as a data processing system
consisting of a large number of interconnected processing elements
or artificial neurons.

There are three fundamentally different classes of neural networks.
These are:

‱ Single-layer feedforward networks
‱ Multilayer feedforward networks
‱ Recurrent networks
Applications
The tasks to which artificial neural networks are applied
tend to fall within the following broad categories:

‱Function approximation, or regression analysis,
including time series prediction and modeling.

‱Classification, including pattern and sequence
recognition, novelty detection and sequential
decision making.

‱Data processing, including filtering, clustering,
blind signal separation and compression.
Equalization History
The LMS algorithm by Widrow and Hoff in 1960 paved the way for the
development of adaptive filters used for equalisation.

Lucky used this algorithm in 1965 to design adaptive channel
equalisers. The Maximum Likelihood Sequence Estimator (MLSE)
equaliser and its Viterbi implementation followed in the 1970s.

The multilayer perceptron (MLP) based symbol-by-symbol equalisers
were developed in 1990. During 1989 to 1995 several efficient
nonlinear artificial neural network equalizer structures for channel
equalization were proposed; these include the Chebyshev Neural
Network and the Functional Link ANN.

In 2002 Kevin M. Passino described optimization based on foraging
theory in the article "Biomimicry of Bacterial Foraging".

More recently, in 2008, a rank-based statistics approach known as
the Wilcoxon learning method was proposed for signal processing
applications, to mitigate linear and nonlinear learning problems.
Digital Communication Systems

Equalizers

Adaptive channel equalizers have played an important role in
digital communication systems.

An equalizer works like an inverse filter placed at the front end of
the receiver. Its transfer function is the inverse of the transfer
function of the associated channel, so it is able to reduce the
error between the desired and estimated signals.

This is achieved through a process of training. During this
period the transmitter transmits a fixed data sequence and the
receiver has a copy of the same.

This is achieved through a process of training. During this
period the transmitter transmits a fixed data sequence and the
receiver has a copy of the same.
We use equalizers to compensate received signals that are corrupted
by the noise, interference and signal-power attenuation introduced
by communication channels during transmission.

Linear transversal filters (LTF) are commonly used in the design of
channel equalizers. Linear equalizers fail to work well when the
transmitted signals have encountered severe nonlinear distortion.

A neural network (NN) can map inputs to outputs in arbitrarily
complex ways, which makes NN-based equalizers a potentially
suitable solution for dealing with nonlinear channel distortion.
The problem of equalization may be treated as a problem of signal
classification, so neural networks (NN) are quite promising
candidates because they can produce arbitrarily complex decision
regions.

Studies performed during the last decade have established the
superiority of neural equalizers over traditional equalizers under
conditions of high nonlinear distortion and rapidly varying signals.

Several different neural equalizer architectures have been
developed, mostly combinations of a conventional linear transversal
equalizer (LTE) and a neural network.

The LTE eliminates the linear distortions, such as ISI, so the NN
can focus on compensating the nonlinearities. There have been
studies on the following structures: an LTE and a multilayer
perceptron (MLP), an LTE and a radial basis function network (RBF),
and an LTE and a recurrent neural network.
MLP networks are sometimes plagued by long training times and may
be trapped at bad local minima.

RBF networks often provide a faster and more robust solution to the
equalization problem. In addition, the RBF neural network has a
structure similar to the optimal Bayesian symbol decision;
therefore, the RBF is an ideal processing structure with which to
implement the optimal Bayesian equalizer.

The RBF performances are better than those of the LTE and MLP
equalizers. Several learning algorithms have been proposed to
update the RBF parameters. However, the most popular algorithm
consists of an unsupervised learning rule for the centers of the
hidden neurons and a supervised learning rule for the weights of
the output neurons.
The centers are generally updated using the k-means clustering
algorithm, which consists of computing the squared distance between
the input vector and the centers, choosing the minimum squared
distance, and moving the corresponding center closer to the input
vector.
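A minimal sketch of this center update (the cluster locations, learning rate and sample count are illustrative assumptions):

```python
import numpy as np

def kmeans_update(centers, x, lr=0.05):
    """Move the center with minimum squared distance to x a step closer to x."""
    d2 = np.sum((centers - x) ** 2, axis=1)        # squared distance to every center
    winner = int(np.argmin(d2))                    # choose the minimum squared distance
    centers[winner] += lr * (x - centers[winner])  # move the winning center toward x
    return winner

# two 1-D clusters around -2 and +2; centers start badly placed at -1 and +1
rng = np.random.default_rng(1)
centers = np.array([[-1.0], [1.0]])
for _ in range(500):
    x = rng.choice([-2.0, 2.0]) + 0.1 * rng.normal()
    kmeans_update(centers, np.array([x]))
```

After a few hundred updates each center drifts toward the mean of the cluster it keeps winning, which is exactly the failure mode noted below: a center that never wins is never updated.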

The k-means algorithm has some potential problems: the
classification depends on the initial values of the centers of the
RBF, on the type of distance chosen, and on the number of classes.
If a center is inappropriately chosen it may never be updated, so
it may never represent a class.

Here a new competitive method to update the RBF centers is
proposed, which rewards the winning neuron and penalizes the
second winner, named the rival.
Gradient Based Adaptive Algorithm
An adaptive algorithm is a procedure for adjusting the
parameters of an adaptive filter to minimize a cost function
chosen for the task at hand.
In this case, the parameters in W(t) correspond to the impulse
response values of the filter at time t. We can write the output
signal y(t) as

    y(t) = W^T(t) s(t)

The general form of an adaptive FIR filtering algorithm is

    W(t+1) = W(t) + ÎŒ(t) G( e(t), s(t), Ω(t) )

where G(·) is a particular vector-valued nonlinear function (which
depends on the cost function chosen), ÎŒ(t) is a step-size
parameter, e(t) and s(t) are the error signal and input signal
vector, respectively, and Ω(t) is a vector of states that stores
pertinent information about the characteristics of the input and
error signals.
The Mean-Squared Error (MSE) cost function can be defined as

    J_MSE(t) = (1/2) E[ eÂČ(t) ]

W_MSE(t) can be found from the solution to the system of equations

    ∂J_MSE(t) / ∂W(t) = 0

The method of steepest descent is an optimization procedure for
minimizing the cost function J(t) with respect to a set of
adjustable parameters W(t). This procedure adjusts each parameter
of the system according to the relationship

    W(t+1) = W(t) − (ÎŒ(t)/2) ∂J(t)/∂W(t)
Linear Equalization Algorithms

LMS ALGORITHM

‱ In the family of stochastic gradient algorithms
‱ An approximation of the steepest-descent method
‱ Based on the MMSE (Minimum Mean Square Error) criterion
‱ Adaptive process containing two input signals:
       1.) Filtering process, producing the output signal
       2.) Desired signal (training sequence)
‱ Adaptive process: recursive adjustment of the filter tap weights
LMS ALGORITHM STEPS

‱ Filter output:            y(n) = ÎŁ_{k=0}^{M−1} w_k*(n) u(n−k)

‱ Estimation error:         e(n) = d(n) − y(n)

‱ Tap-weight adaptation:    w_k(n+1) = w_k(n) + ÎŒ u(n−k) e*(n)

(update value of tap-weight vector = old value of tap-weight vector
 + learning-rate parameter × tap-input vector × error signal)
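The three steps can be put together in a short simulation (real-valued signals, so the conjugates drop out; the channel taps, noise level, step size, filter length and decision delay below are illustrative assumptions):

```python
import numpy as np

def lms_equalizer(u, d, M=11, mu=0.01):
    """Adapt an M-tap equalizer: w_k(n+1) = w_k(n) + mu * u(n-k) * e(n)."""
    w = np.zeros(M)
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        x = u[n - M + 1:n + 1][::-1]  # tap-input vector u(n), ..., u(n-M+1)
        y = w @ x                     # filter output
        e[n] = d[n] - y               # estimation error
        w += mu * e[n] * x            # tap-weight adaptation
    return w, e

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=5000)             # bipolar training sequence
h = np.array([0.3, 1.0, 0.3])                      # assumed dispersive channel
u = np.convolve(s, h)[:len(s)] + 0.01 * rng.normal(size=len(s))
delay = 5                                          # decision delay for the equalizer
d = np.concatenate([np.zeros(delay), s])[:len(s)]  # delayed training symbols
w, e = lms_equalizer(u, d)
```

The squared error e(n)ÂČ decays toward a small steady-state value as the taps converge; this is the training period described earlier, where the receiver knows the transmitted sequence.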
Recursive Least Squares Algorithm

The recursive least squares (RLS) algorithm is another algorithm
for determining the coefficients of an adaptive filter. In contrast
to the LMS algorithm, the RLS algorithm uses information from all
past input samples (and not only from the current tap-input
samples) to estimate the (inverse of the) autocorrelation matrix of
the input vector.

To decrease the influence of input samples from the far past, a
weighting (forgetting) factor λ for the influence of each sample is
used. The cost function can be represented as

    J(n) = ÎŁ_{i=1}^{n} λ^{n−i} |e(i)|ÂČ
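A sketch of the standard RLS recursion under the same illustrative channel setup used for LMS (the forgetting factor, initialization constant and filter length are assumptions):

```python
import numpy as np

def rls_equalizer(u, d, M=11, lam=0.99, delta=100.0):
    """RLS adaptation; P(n) tracks the inverse autocorrelation matrix of the input."""
    w = np.zeros(M)
    P = delta * np.eye(M)                    # initial inverse-correlation estimate
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        x = u[n - M + 1:n + 1][::-1]         # tap-input vector
        k = P @ x / (lam + x @ P @ x)        # gain vector
        e[n] = d[n] - w @ x                  # a-priori estimation error
        w += k * e[n]                        # coefficient update
        P = (P - np.outer(k, x) @ P) / lam   # inverse-autocorrelation update
    return w, e

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=2000)
u = np.convolve(s, [0.3, 1.0, 0.3])[:len(s)] + 0.01 * rng.normal(size=len(s))
d = np.concatenate([np.zeros(5), s])[:len(s)]
w, e = rls_equalizer(u, d)
```

Because P(n) approximates the inverse autocorrelation matrix directly, the error typically drops within a couple of filter lengths, which is the faster convergence noted in the conclusion.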
Non Linear Equalizers
Multilayer Perceptron Network

In 1958, Rosenblatt demonstrated some practical applications using
the perceptron. The perceptron, a single-level connection of
McCulloch-Pitts neurons, is called a single-layer feedforward
network.

The network is capable of linearly separating the input vectors
into pattern classes by a hyperplane. Many perceptrons can be
connected in layers to form an MLP network; the input signal
propagates through the network in a forward direction, on a
layer-by-layer basis. This network has been applied successfully to
solve diverse problems.
MLP Neural Network Using BP Algorithm
Generally the MLP is trained using the popular error
back-propagation algorithm. s1, s2, ..., sn represent the inputs to
the network, and yk represents the output of the final layer of the
neural network. The connecting weights between the input and the
first hidden layer, the first and second hidden layers, and the
second hidden layer and the output layer are represented by their
respective weight matrices.

The final output yk(t) at the output of neuron k is compared with
the desired output d(t), and the resulting error signal e(t) is
obtained as

    e(t) = d(t) − yk(t)

The instantaneous value of the total error energy is obtained by
summing the error signals over all neurons in the output layer,
that is

    Ο(t) = (1/2) ÎŁ_k e_kÂČ(t)

This error signal is used to update the weights and thresholds of
the hidden layers as well as the output layer.
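One BP update can be sketched for a tiny one-hidden-layer MLP (the layer sizes, learning rate and the tanh target mapping are illustrative assumptions; the slides' two-hidden-layer network follows the same chain-rule pattern):

```python
import numpy as np

def mlp_bp_step(W1, b1, W2, b2, x, d, lr=0.1):
    """One forward/backward pass: e = d - y, then gradient-descent weight updates."""
    h = np.tanh(W1 @ x + b1)        # hidden-layer activations
    y = W2 @ h + b2                 # linear output neuron y_k
    e = d - y                       # error signal e(t) = d(t) - y(t)
    W2 += lr * e * h                # output-layer weight update
    b2 += lr * e
    dh = (e * W2) * (1 - h ** 2)    # deltas backpropagated through tanh
    W1 += lr * np.outer(dh, x)      # hidden-layer weight update
    b1 += lr * dh
    return 0.5 * float(e[0]) ** 2   # instantaneous error energy

rng = np.random.default_rng(0)
W1 = 0.5 * rng.normal(size=(4, 1)); b1 = np.zeros(4)
W2 = 0.5 * rng.normal(size=4);      b2 = np.zeros(1)
losses = []
for _ in range(2000):               # learn an illustrative nonlinear mapping d = tanh(s)
    s = rng.uniform(-2.0, 2.0)
    losses.append(mlp_bp_step(W1, b1, W2, b2, np.array([s]), np.tanh(s)))
```

The instantaneous error energy falls steadily but slowly, which reflects the slow-convergence drawback of BP-trained MLP equalizers mentioned in the conclusion.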
Functional Link Artificial Neural Network
FLANN is a novel single-layer ANN in which the original input
pattern is expanded to a higher-dimensional space using nonlinear
functions, which provides arbitrarily complex decision regions by
generating nonlinear decision boundaries.

The purpose of the functional expansion block is to enhance the
input representation used for the channel equalization process.

Each element undergoes nonlinear expansion to form M elements such
that the resultant matrix has the dimension N×M. The functional
expansion of the element xk is carried out using a power series
expansion.
At the t-th iteration the error signal e(t) can be computed as

    e(t) = d(t) − y(t)

The weight vector can be updated by the least mean square (LMS)
algorithm, as

    w(t+1) = w(t) + ÎŒ e(t) x(t)

where x(t) is the functionally expanded input vector.
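The expansion plus the LMS update can be sketched as follows (the cubic target mapping, expansion order M, step size and epoch count are illustrative assumptions):

```python
import numpy as np

def power_expand(x, M=3):
    """Expand each element of pattern x into M power-series terms x, x^2, ..., x^M."""
    return np.concatenate([x ** p for p in range(1, M + 1)])

def flann_lms(inputs, targets, M=3, mu=0.3, epochs=200):
    """Single-layer FLANN trained with LMS on the functionally expanded patterns."""
    w = np.zeros(len(power_expand(inputs[0], M)))
    for _ in range(epochs):
        for x, d in zip(inputs, targets):
            phi = power_expand(x, M)
            e = d - w @ phi          # e(t) = d(t) - y(t)
            w += mu * e * phi        # LMS update on the expanded input
    return w

# learn an illustrative nonlinear mapping d = x - 0.9 x^3
xs = np.linspace(-1.0, 1.0, 50)
inputs = [np.array([v]) for v in xs]
targets = [v - 0.9 * v ** 3 for v in xs]
w = flann_lms(inputs, targets)
```

The target here is exactly representable in the expanded space (w = [1, 0, −0.9]), so the single-layer LMS update recovers a nonlinear mapping that no purely linear filter on x could.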
BER performance of the FLANN equalizer compared with LMS and RLS
based equalizers
Chebyshev Artificial Neural Network
The Chebyshev artificial neural network is similar to the FLANN.
The difference is that in a FLANN the input signal is expanded to a
higher dimension using a functional expansion, while in the ChNN
the input is expanded using Chebyshev polynomials. As in the FLANN
network, the ChNN weights are updated by the LMS algorithm. The
Chebyshev polynomials are generated using the recursive formula

    T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x),   with T_0(x) = 1, T_1(x) = x
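The recursion can be sketched directly (dropping the constant term T_0 from the expansion is an assumption made here, for symmetry with the power-series case):

```python
def chebyshev_expand(x, M=4):
    """Generate T_1(x), ..., T_M(x) via T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x)."""
    T = [1.0, x]                       # T_0(x) = 1, T_1(x) = x
    for _ in range(2, M + 1):
        T.append(2 * x * T[-1] - T[-2])
    return T[1:]                       # expanded features fed to the LMS-trained weights
```

These terms simply replace the power-series terms in the FLANN expansion block; the LMS weight update is unchanged.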
BER performance of the ChNN equalizer compared with FLANN, LMS and
RLS based equalizers
Radial Basis Function Equalizer
The centres of the RBF network are updated using the k-means
clustering algorithm. This RBF structure can be extended to
multidimensional outputs as well. The Gaussian kernel is the most
popular kernel function for equalization applications; it can be
represented as

    φ_i(x) = exp( −||x − C_i||ÂČ / (2 σ_rÂČ) )

This network can implement a mapping Frbf : R^m → R by the function

    Frbf(x) = ÎŁ_i ω_i φ_i(x)

Training of the RBF network involves setting the parameters for the
centres C_i, the spread σ_r and the linear weights ω_i. The RBF
spread parameter σ_rÂČ is set to the channel noise variance σ_nÂČ.
This provides the optimum RBF network as an equaliser.
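A toy sketch of the Gaussian-kernel mapping with LMS-trained output weights (the two channel states, noise level, step size and iteration count are illustrative assumptions; the centres would normally come from k-means clustering):

```python
import numpy as np

def rbf_output(x, centers, sigma2, w):
    """F_rbf(x) = sum_i w_i * exp(-||x - C_i||^2 / (2 sigma_r^2))."""
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * sigma2))
    return w @ phi, phi

centers = np.array([[-1.0], [1.0]])   # assumed channel states for symbols -1 and +1
sigma2 = 0.1                          # spread set to the assumed channel noise variance
w = np.zeros(2)
rng = np.random.default_rng(0)
mu = 0.1
for _ in range(2000):                 # supervised LMS rule for the output weights
    s = rng.choice([-1.0, 1.0])
    x = np.array([s]) + np.sqrt(sigma2) * rng.normal(size=1)
    y, phi = rbf_output(x, centers, sigma2, w)
    w += mu * (s - y) * phi
```

With the centres at the channel states and the spread matched to the noise variance, the sign of the output reproduces the Bayesian-style symbol decision the slides describe.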
BER performance of the RBF equalizer compared with ChNN, FLANN, LMS
and RLS equalizers
Conclusion
We observed that RLS provides a faster convergence rate than the
LMS equalizer.
We observed that the MLP equalizer is a feed-forward network
trained using the BP algorithm; it performed better than the linear
equalizers, but it has the drawback of a slow convergence rate,
which depends upon the number of nodes and layers.
An optimal equalizer based on the maximum a-posteriori probability
(MAP) criterion can be implemented using a radial basis function
(RBF) network.
The RBF equalizer mitigates all the ISI, CCI and BN interference
and provides the minimum BER plot. But it has one drawback: if the
input order is increased, the number of centres of the network
increases and makes the network more complicated.
REFERENCES
‱ Haykin, S., "Adaptive Filter Theory", Prentice Hall, 2005.
‱ Haykin, S., "Neural Networks", PHI, 2003.
‱ Kavita Burse, R. N. Yadav, and S. C. Shrivastava, "Channel
Equalization Using Neural Networks: A Review", IEEE Transactions on
Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 40, No. 3,
May 2010.
‱ Jagdish C. Patra, Ranendra N. Pal, Rameswar Baliarsingh, and
Ganapati Panda, "Nonlinear Channel Equalization for QAM
Constellation Using Artificial Neural Network", IEEE Transactions
on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 29,
No. 2, April 1999.
‱ Amalendu Patnaik, Dimitrios E. Anagnostou, Rabindra K. Mishra,
Christos G. Christodoulou, and J. C. Lyke, "Applications of Neural
Networks in Wireless Communications", IEEE Antennas and Propagation
Magazine, Vol. 46, No. 3, June 2004.
‱ R. Rojas, "Neural Networks", Springer-Verlag, Berlin, 1996.
‱ http://www.geocities.com/SiliconValley/Lakes/6007/Neural.htm

Disha NEET Physics Guide for classes 11 and 12.pdf
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 

Adaptive equalization

  • 1. ADAPTIVE CHANNEL EQUALIZATION College of Technology, Pantnagar G.B.Pant University of Agriculture and Technology, Pantnagar Kamal Bhatt M.Tech-Electronics & Communication Engg. ID-44036
  • 2. NEURAL NETWORK Neural networks are simplified models of biological neuron systems. Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes', each of which contains an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates to one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'. The hidden layers then link to an 'output layer' where the answer is output.
  • 3. MODEL OF ARTIFICIAL NEURON An appropriate model/simulation of the nervous system should be able to produce similar responses and behaviours in artificial systems. The nervous system is built from relatively simple units, the neurons, so copying their behaviour and functionality should be the solution.
  • 4. LEARNING IN A SIMPLE NEURON Perceptron Learning Algorithm: 1. Initialize weights 2. Present a pattern and target output 3. Compute output: y = f[ Σi=0..2 wi xi ] 4. Update weights: wi(t+1) = wi(t) + Δwi. Repeat starting at 2 until an acceptable level of error is reached.
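The four steps above can be sketched in code. This is a minimal illustration, not the slides' own implementation: the hard-limiter activation, learning rate, and the AND task are our own assumptions.

```python
def step(x):
    # hard-limiter activation f[.]
    return 1 if x >= 0 else 0

def train_perceptron(patterns, targets, lr=0.1, epochs=100):
    n = len(patterns[0])
    w = [0.0] * (n + 1)                    # w[0] is the bias weight
    for _ in range(epochs):
        total_error = 0
        for x, d in zip(patterns, targets):
            xi = [1.0] + list(x)           # prepend constant bias input
            y = step(sum(wi * v for wi, v in zip(w, xi)))
            e = d - y
            total_error += abs(e)
            # update rule: wi(t+1) = wi(t) + lr * e * xi
            w = [wi + lr * e * v for wi, v in zip(w, xi)]
        if total_error == 0:               # acceptable level of error
            break
    return w

patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                     # linearly separable AND task
w = train_perceptron(patterns, targets)
```

Because AND is linearly separable, the loop terminates once a separating hyperplane is found, matching the "repeat until acceptable error" step.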
  • 5. NEURAL NETWORK ARCHITECTURE An artificial neural network is defined as a data processing system consisting of a large number of interconnected processing elements, or artificial neurons. There are three fundamentally different classes of neural networks: single-layer feedforward networks, multilayer feedforward networks, and recurrent networks.
  • 6. Application The tasks to which artificial neural networks are applied tend to fall within the following broad categories: function approximation, or regression analysis, including time series prediction and modeling; classification, including pattern and sequence recognition, novelty detection and sequential decision making; data processing, including filtering, clustering, blind signal separation and compression.
  • 7. Equalization History The LMS algorithm by Widrow and Hoff in 1960 paved the way for the development of adaptive filters used for equalisation. Lucky used this algorithm in 1965 to design adaptive channel equalisers. The Maximum Likelihood Sequence Estimator (MLSE) equaliser and its Viterbi implementation followed in the 1970s. The multilayer perceptron (MLP) based symbol-by-symbol equaliser was developed in 1990.
  • 8. During 1989 to 1995 some efficient nonlinear artificial neural network equalizer structures for channel equalization were proposed; these include the Chebyshev Neural Network and the Functional Link ANN. In 2002 Kevin M. Passino described optimization based on foraging theory in the article "Biomimicry of Bacterial Foraging". More recently, in 2008, a rank-based statistics approach known as the Wilcoxon learning method was proposed for signal processing applications to mitigate linear and nonlinear learning problems.
  • 10. Equalizers Adaptive channel equalizers have played an important role in digital communication systems. An equalizer works like an inverse filter placed at the front end of the receiver. Because its transfer function is the inverse of the transfer function of the associated channel, it is able to reduce the error between the desired and estimated signals. This is achieved through a process of training, during which the transmitter transmits a fixed data sequence and the receiver has a copy of the same.
  • 11. We use equalizers to compensate received signals that are corrupted by the noise, interference and signal power attenuation introduced by communication channels during transmission. Linear transversal filters (LTF) are commonly used in the design of channel equalizers. Linear equalizers fail to work well when transmitted signals have encountered severe nonlinear distortion. A neural network (NN) can form complex mappings between its input and output signals, which makes NN-based equalizers a potentially suitable solution for nonlinear channel distortion.
  • 12.
  • 13. The problem of equalization may be treated as a problem of signal classification, so neural networks (NN) are quite promising candidates because they can produce arbitrarily complex decision regions. Studies performed during the last decade have established the superiority of neural equalizers over traditional equalizers under high nonlinear distortion and rapidly varying signals. Several different neural equalizer architectures have been developed, mostly combinations of a conventional linear transversal equalizer (LTE) and a neural network. The LTE eliminates linear distortions, such as ISI, so the NN can be focused on compensating the nonlinearities. There have been studies on the following structures: an LTE and a multilayer perceptron (MLP); an LTE and a radial basis function network (RBF); an LTE and a recurrent neural network.
  • 14. MLP networks are sometimes plagued by long training times and may be trapped at bad local minima. RBF networks often provide a faster and more robust solution to the equalization problem. In addition, the RBF neural network has a structure similar to the optimal Bayesian symbol decision, so the RBF is an ideal processing structure with which to implement the optimal Bayesian equalizer. The RBF performances are better than those of the LTE and MLP equalizers. Several learning algorithms have been proposed to update the RBF parameters. However, the most popular algorithm consists of an unsupervised learning rule for the centers of the hidden neurons and a supervised learning rule for the weights of the output neurons.
  • 15. The centers are generally updated using the k-means clustering algorithm, which consists of computing the squared distance between the input vector and the centers, choosing the minimum squared distance, and moving the corresponding center closer to the input vector. The k-means algorithm has some potential problems: the classification depends on the initial values of the RBF centers, on the type of distance chosen, and on the number of classes. If a center is inappropriately chosen it may never be updated, so it may never represent a class. Here a new competitive method to update the RBF centers is proposed, which recompenses the winning neuron and penalizes the second winner, named the rival.
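The winner/rival update described above can be sketched as follows. This is a minimal illustration in the spirit of rival penalized competitive learning; the learning rates and toy centres are our own assumptions, not the slides' values.

```python
def update_centers(centers, x, lr_win=0.05, lr_rival=0.01):
    # squared Euclidean distance of input x to every centre
    d = [sum((ci - xi) ** 2 for ci, xi in zip(c, x)) for c in centers]
    order = sorted(range(len(centers)), key=lambda i: d[i])
    win, rival = order[0], order[1]
    # winner is recompensed: moved toward the input vector
    centers[win] = [c + lr_win * (xi - c) for c, xi in zip(centers[win], x)]
    # rival (second winner) is penalised: pushed away from the input
    centers[rival] = [c - lr_rival * (xi - c)
                      for c, xi in zip(centers[rival], x)]
    return centers

centers = [[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]]
centers = update_centers(centers, [0.2, 0.2])
```

Penalizing the rival helps de-allocate centres that were initialized badly, addressing the "never updated" problem of plain k-means.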
  • 16. Gradient-Based Adaptive Algorithm An adaptive algorithm is a procedure for adjusting the parameters of an adaptive filter to minimize a cost function chosen for the task at hand.
  • 17. In this case, the parameters in w(t) correspond to the impulse response values of the filter at time t. We can write the output signal y(t) as y(t) = wT(t) s(t). The general form of an adaptive FIR filtering algorithm is w(t+1) = w(t) + μ(t) G(e(t), s(t), ω(t)), where G(·) is a particular vector-valued nonlinear function (which depends on the cost function chosen), μ(t) is a step-size parameter, e(t) and s(t) are the error signal and input signal vector, respectively, and ω(t) is a vector of states that stores pertinent information about the characteristics of the input and error signals.
  • 18. The Mean-Squared Error (MSE) cost function can be defined as JMSE(t) = (1/2) E[e²(t)] = (1/2) E[(d(t) − y(t))²]. WMSE(t) can be found from the solution to the system of equations ∂J(t)/∂W(t) = 0. The method of steepest descent is an optimization procedure for minimizing the cost function J(t) with respect to a set of adjustable parameters W(t). This procedure adjusts each parameter of the system according to the relationship W(t+1) = W(t) − μ ∂J(t)/∂W(t).
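The steepest-descent recursion above can be sketched for a 2-tap filter when the input statistics are known, in which case the MSE gradient is ∂J/∂w = Rw − p. The autocorrelation matrix R, cross-correlation vector p and step size below are our own toy numbers, chosen only so the recursion converges to the Wiener solution w* = R⁻¹p.

```python
# Steepest descent on the MSE surface of a 2-tap filter.
# Update: w(t+1) = w(t) - mu * grad J = w(t) + mu * (p - R w(t))
R = [[1.0, 0.5], [0.5, 1.0]]   # input autocorrelation matrix (toy)
p = [0.7, 0.3]                 # cross-correlation with desired signal
mu = 0.1                       # step-size parameter
w = [0.0, 0.0]

for _ in range(500):
    Rw = [sum(R[i][j] * w[j] for j in range(2)) for i in range(2)]
    w = [w[i] + mu * (p[i] - Rw[i]) for i in range(2)]
```

With these numbers the iterates converge to w* = (11/15, −1/15); stability requires mu < 2/λmax(R) = 4/3, comfortably satisfied here.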
  • 19. Linear Equalization Algorithms
  • 20. LMS ALGORITHM • In the family of stochastic gradient algorithms • Approximation of the steepest-descent method • Based on the MMSE (Minimum Mean Square Error) criterion • Adaptive process containing two input signals: 1.) filtering process, producing the output signal; 2.) desired signal (training sequence) • Adaptive process: recursive adjustment of filter tap weights
  • 21. LMS ALGORITHM STEPS Filter output: y(n) = Σk=0..M−1 wk*(n) u(n−k). Estimation error: e(n) = d(n) − y(n). Tap-weight adaptation: wk(n+1) = wk(n) + μ u(n−k) e*(n), i.e. the updated tap-weight vector equals the old tap-weight vector plus the learning-rate parameter times the tap-input signal vector times the error.
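The three LMS steps above can be sketched directly in code. This is an illustrative system-identification run, not the slides' experiment: the toy channel h, tap count, step size and noise-free setup are our own assumptions (real-valued signals, so the conjugates drop out).

```python
import random

random.seed(0)
M = 4                      # number of taps
mu = 0.05                  # step-size parameter
h = [1.0, 0.5]             # unknown FIR system the filter must learn

u = [random.uniform(-1, 1) for _ in range(2000)]   # input signal
# desired signal: output of the unknown system (noise-free)
d = [sum(h[k] * u[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(u))]

w = [0.0] * M
for n in range(M, len(u)):
    x = [u[n - k] for k in range(M)]               # tap-input vector
    y = sum(wk * xk for wk, xk in zip(w, x))       # filter output
    e = d[n] - y                                   # estimation error
    w = [wk + mu * e * xk for wk, xk in zip(w, x)] # tap-weight update
```

In this noise-free case the tap weights converge toward h padded with zeros, i.e. w ≈ [1.0, 0.5, 0, 0].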
  • 22. Recursive Least Squares Algorithm The recursive least squares (RLS) algorithm is another algorithm for determining the coefficients of an adaptive filter. In contrast to the LMS algorithm, the RLS algorithm uses information from all past input samples (and not only from the current tap-input samples) to estimate the (inverse of the) autocorrelation matrix of the input vector. To decrease the influence of input samples from the far past, a weighting factor for the influence of each sample is used. This cost function can be represented as J(n) = Σi=1..n λ^(n−i) e²(i), where λ is the forgetting factor.
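A standard RLS recursion matching the description above can be sketched as follows. The forgetting factor, initialization constant delta and the toy channel are our own illustrative choices; P(n) tracks the inverse of the exponentially weighted input autocorrelation matrix.

```python
import random

random.seed(1)
M = 3                      # number of taps
lam = 0.99                 # forgetting (weighting) factor
delta = 100.0              # initialize P = delta * I
h = [0.8, -0.4, 0.2]       # unknown system to identify (toy)

u = [random.uniform(-1, 1) for _ in range(500)]
d = [sum(h[k] * u[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(u))]

w = [0.0] * M
P = [[delta if i == j else 0.0 for j in range(M)] for i in range(M)]

for n in range(M, len(u)):
    x = [u[n - k] for k in range(M)]
    Px = [sum(P[i][j] * x[j] for j in range(M)) for i in range(M)]
    den = lam + sum(x[i] * Px[i] for i in range(M))
    k = [Px[i] / den for i in range(M)]            # gain vector
    e = d[n] - sum(w[i] * x[i] for i in range(M))  # a priori error
    w = [w[i] + k[i] * e for i in range(M)]        # weight update
    # inverse-autocorrelation update: P = (P - k x^T P) / lam
    xP = [sum(x[i] * P[i][j] for i in range(M)) for j in range(M)]
    P = [[(P[i][j] - k[i] * xP[j]) / lam for j in range(M)]
         for i in range(M)]
```

Because each update uses the running estimate of the inverse autocorrelation matrix, RLS converges in far fewer samples than LMS, at the cost of O(M²) work per sample.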
  • 23.
  • 25. Multilayer Perceptron Network In 1958, Rosenblatt demonstrated some practical applications using the perceptron. The perceptron, a single-level connection of McCulloch-Pitts neurons, is called a single-layer feedforward network. The network is capable of linearly separating the input vectors into pattern classes by a hyperplane. Similarly, many perceptrons can be connected in layers to give an MLP network; the input signal propagates through the network in a forward direction, on a layer-by-layer basis. This network has been applied successfully to solve diverse problems.
  • 26. MLP Neural Network Using BP Algorithm
  • 27. Generally the MLP is trained using the popular error back-propagation algorithm. s1, s2, …, sn represent the inputs to the network, and yk represents the output of the final layer of the neural network. The connecting weights between the input and the first hidden layer, between the first and the second hidden layer, and between the second hidden layer and the output layer are represented by separate weight matrices. The final output of the MLP may be expressed in terms of these weights and the layer activations.
  • 28. The final output yk(t) at the output of neuron k is compared with the desired output d(t), and the resulting error signal e(t) is obtained as e(t) = d(t) − yk(t). The instantaneous value of the total error energy is obtained by summing the squared error signals over all neurons in the output layer, that is, ξ(t) = (1/2) Σk ek²(t). This error signal is used to update the weights and thresholds of the hidden layers as well as the output layer.
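The forward pass, error energy and weight updates described above can be sketched for a tiny one-hidden-layer MLP. Everything concrete here is our own assumption for illustration: the sigmoid activation, the 2-2-1 layout, the learning rate, the random seed and the XOR training task.

```python
import math, random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2 inputs -> 2 hidden -> 1 output; last weight in each row is a bias
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(3)]
lr = 0.5

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    xb = x + [1.0]
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W1]
    y = sigmoid(sum(w * v for w, v in zip(W2, h + [1.0])))
    return h, y

def total_error():
    # instantaneous error energy: (1/2) * sum of squared errors
    return 0.5 * sum((d - forward(x)[1]) ** 2 for x, d in data)

err_before = total_error()
for _ in range(5000):
    for x, d in data:
        h, y = forward(x)
        e = d - y
        delta_o = e * y * (1 - y)              # output-layer delta
        # hidden-layer deltas, back-propagated through W2
        delta_h = [delta_o * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        hb, xb = h + [1.0], x + [1.0]
        for j in range(3):
            W2[j] += lr * delta_o * hb[j]      # output-layer update
        for i in range(2):
            for j in range(3):
                W1[i][j] += lr * delta_h[i] * xb[j]  # hidden update
err_after = total_error()
```

The key mechanism is visible in the loop: the output error e(t) is scaled by the activation derivative, then propagated backwards through W2 to produce the hidden-layer corrections.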
  • 29.
  • 30. Functional Link Artificial Neural Network FLANN is a novel single-layer ANN in which the original input pattern is expanded to a higher-dimensional space using nonlinear functions, which provides arbitrarily complex decision regions by generating nonlinear decision boundaries. The functional expansion block enhances the input pattern used for the channel equalization process. Each element undergoes nonlinear expansion to form M elements such that the resultant matrix has the dimension N×M. The functional expansion of the element xk is carried out by power-series expansion.
  • 31.
  • 32. At the tth iteration the error signal e(t) can be computed as e(t) = d(t) − y(t). The weight vector can then be updated by the least mean square (LMS) algorithm as w(t+1) = w(t) + μ e(t) x(t), where x(t) is the functionally expanded input vector.
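The expansion-plus-LMS loop can be sketched as follows. This is a minimal illustration, not the slides' equalizer: the power-series order, step size and the toy nonlinear target (chosen to lie inside the model class) are our own assumptions.

```python
import random

random.seed(3)
ORDER = 3
mu = 0.1

def expand(x):
    # power-series functional expansion of one element: [x, x^2, x^3]
    return [x ** p for p in range(1, ORDER + 1)]

# toy nonlinear target: d = 0.5*x + 0.3*x^2
xs = [random.uniform(-1, 1) for _ in range(5000)]
ds = [0.5 * x + 0.3 * x * x for x in xs]

w = [0.0] * ORDER
for x, d in zip(xs, ds):
    phi = expand(x)                             # expanded input vector
    y = sum(wi * pi for wi, pi in zip(w, phi))  # single-layer output
    e = d - y                                   # error at iteration t
    w = [wi + mu * e * pi for wi, pi in zip(w, phi)]  # LMS update
```

Because the expanded features make the nonlinearity linear-in-the-weights, the plain LMS rule suffices: w converges toward [0.5, 0.3, 0].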
  • 33. BER performance of the FLANN equalizer compared with LMS- and RLS-based equalizers
  • 34. Chebyshev Artificial Neural Network The Chebyshev artificial neural network is similar to the FLANN. The difference is that in the FLANN the input signal is expanded to a higher dimension using functional expansion, while in the ChNN the input is expanded using Chebyshev polynomials. As in the FLANN, the ChNN weights are updated by the LMS algorithm. The Chebyshev polynomials are generated using the recursive formula Tn+1(x) = 2x Tn(x) − Tn−1(x), with T0(x) = 1 and T1(x) = x.
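The recursion above translates directly into a small expansion routine (illustrative; the function name and evaluation point are our own):

```python
def chebyshev_expand(x, order):
    # T_0(x) = 1, T_1(x) = x, then T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x)
    T = [1.0, x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])
    return T[:order + 1]

terms = chebyshev_expand(0.5, 3)   # [T0, T1, T2, T3] at x = 0.5
```

The recursion needs only multiplications and subtractions, which is why the ChNN expansion is cheaper to compute than a general power-series block of the same order.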
  • 35.
  • 36. BER performance of the ChNN equalizer compared with FLANN, LMS- and RLS-based equalizers
  • 38. The centres of the RBF networks are updated using the k-means clustering algorithm. This RBF structure can be extended to multidimensional output as well. The Gaussian kernel is the most popular form of kernel function for equalization applications; it can be represented as φi(x) = exp(−‖x − Ci‖²/(2σr²)). This network can implement a mapping Frbf : Rm → R by the function Frbf(x) = Σi ωi φi(x). Training of the RBF network involves setting the parameters for the centres Ci, the spread σr and the linear weights ωi. The RBF spread parameter σr² is set to the channel noise variance σn². This provides the optimum RBF network as an equaliser.
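The mapping Frbf above can be sketched directly; the centres, weights and spread below are our own toy values, not a trained equalizer.

```python
import math

def rbf_output(x, centers, weights, s2):
    # F(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * s2))
    out = 0.0
    for c, w in zip(centers, weights):
        dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += w * math.exp(-dist2 / (2.0 * s2))
    return out

centers = [[-1.0], [1.0]]   # centres C_i (toy)
weights = [1.0, -1.0]       # linear output weights omega_i (toy)
y = rbf_output([-1.0], centers, weights, s2=0.5)
```

In an equalizer the centres would come from k-means (or the rival-penalized variant above), s2 from the channel noise variance, and only the linear weights would need supervised training.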
  • 39. BER performance of the RBF equalizer compared with ChNN, FLANN, LMS and RLS equalizers
  • 40. Conclusion We observed that RLS provides a faster convergence rate than the LMS equalizer. We observed that the MLP equalizer is a feed-forward network trained using the BP algorithm; it performed better than the linear equalizer, but it has the drawback of a slow convergence rate, depending upon the number of nodes and layers. An optimal equalizer based on the maximum a-posteriori probability (MAP) criterion can be implemented using a radial basis function (RBF) network. The RBF equalizer mitigates all the ISI, CCI and BN interference and provides the minimum BER plot. But it has one drawback: if the input dimension is increased, the number of centres of the network increases and makes the network more complicated.
  • 41. REFERENCES • Haykin, S., "Adaptive Filter Theory", Prentice Hall, 2005. • Haykin, S., "Neural Networks", PHI, 2003. • Kavita Burse, R. N. Yadav, and S. C. Shrivastava, "Channel Equalization Using Neural Networks: A Review", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 40, No. 3, May 2010. • Jagdish C. Patra, Ranendra N. Pal, Rameswar Baliarsingh, and Ganapati Panda, "Nonlinear Channel Equalization for QAM Constellation Using Artificial Neural Network", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 29, No. 2, April 1999. • Amalendu Patnaik, Dimitrios E. Anagnostou, Rabindra K. Mishra, Christos G. Christodoulou, and J. C. Lyke, "Applications of Neural Networks in Wireless Communications", IEEE Antennas and Propagation Magazine, Vol. 46, No. 3, June 2004. • R. Rojas, Neural Networks, Springer-Verlag, Berlin, 1996. • http://www.geocities.com/SiliconValley/Lakes/6007/Neural.htm