Implementation of Reed-Solomon Code in VHDL
Ramya.C.B, Postgraduate Student, RNSIT, and Usha.B.S, Assistant Professor, RNSIT
Abstract—Reed-Solomon codes are very effective at correcting random symbol errors and random burst errors, and they are widely used for error control in communication and data storage systems, ranging from deep-space telecommunications to compact discs. Concatenating these codes as outer codes with simple binary inner codes provides high data reliability with reduced decoding complexity. RS codes are non-binary cyclic error-correcting block codes. Redundant symbols are generated in the encoder using a generator polynomial and appended to the end of the message symbols. The RS decoder then determines the locations and magnitudes of errors in the received polynomial. This paper covers the development of a program that produces synthesizable VHDL code for an arbitrary RS encoder and decoder.
Keywords—Error correcting codes, Generator Polynomial, Reed-Solomon Encoder/Decoder, VHDL.
I. INTRODUCTION
Many digital signaling systems use forward error correction for broadcasting purposes. It is a technique in which redundant information is added to the signal so that the receiver can detect and correct errors that may have occurred during transmission. Many different types of codes have been devised for this purpose, but Reed-Solomon codes [1] have proved to be a good compromise between efficiency and complexity. Reed-Solomon codes are used in many applications such as satellite communication, digital audio tape and CD-ROMs. Such applications call for the use of many different RS codes, so the modern digital systems designer is faced with the problem of implementing these different error-correction codes. The difficulty of this task has been greatly reduced through the use of FPGAs or gate arrays and through the use of a hardware description language such as VHDL. The approach is to describe the functionality of the system top-down in VHDL, map the code into hardware elements through synthesis, and finally simulate the resulting netlist using a suitable test bench.
A. Classification of Reed-Solomon code:
There are many categories of error-correcting codes, the main ones being block codes and convolutional codes. A Reed-Solomon code is a block code, meaning that the message to be transmitted is divided into separate blocks of data. Each block then has parity protection information added to it to form a self-contained code word. It is also, as generally used, a systematic code, which means that the encoding process does not alter the message symbols and the protection symbols are added as a separate part of the block.
Fig 1. Reed-Solomon Codeword
A Reed-Solomon code is also a linear code (adding two code words produces another code word) and it is cyclic (cyclically shifting the symbols of a code word produces another code word). It belongs to the family of Bose-Chaudhuri-Hocquenghem (BCH) codes [3,4], but is distinguished by having multi-bit symbols. This makes the code particularly good at dealing with bursts of errors because, although a symbol may have all of its bits in error, this counts as only one symbol error in terms of the correction capability of the code.
II. REED-SOLOMON CODE
BCH codes are binary codes; Reed-Solomon codes are q-ary BCH codes. The parameters of these codes are:
Code-word length: n = 2^m − 1 symbols
Symbol length: m bits
Error-correcting capability: t symbols
Number of parity symbols: n − k = 2t symbols
For example, m = 8 and t = 16 give the widely used RS(255, 223) code.
Note that for every additional error that can be corrected, only two additional parity symbols are needed. This is an improvement over binary BCH codes. Let α be a primitive element of GF(2^m). Then the generator polynomial g(x) is the lowest-degree polynomial over GF(2^m) that has 2t consecutive powers of α as its roots:
g(x) = (x + α)(x + α^2) ⋯ (x + α^(2t))    (1)
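To make the arithmetic behind Eq. (1) concrete, the following short Python sketch (an illustration only, not the VHDL described in this paper) builds g(x) for an assumed small field, GF(2^4) with primitive polynomial x^4 + x + 1 and α = 2; the helper names gf_mul, poly_mul and rs_generator are ours, not part of any standard library.

# Illustrative sketch: constructing g(x) = (x + α)(x + α^2)...(x + α^(2t))
# over GF(2^m), per Eq. (1). The field choice (GF(2^4), x^4 + x + 1) is assumed.

M, PRIM = 4, 0b10011              # assumed field degree and primitive polynomial

def gf_mul(a, b):
    # Multiply two GF(2^m) elements: carry-less multiplication with reduction.
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= PRIM
    return r

def poly_mul(p, q):
    # Multiply polynomials with GF(2^m) coefficients (lowest degree first);
    # addition in GF(2^m) is XOR.
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] ^= gf_mul(pi, qj)
    return out

def rs_generator(t):
    # g(x) with roots α, α^2, ..., α^(2t); the result has degree 2t.
    g, alpha_i = [1], 1
    for _ in range(2 * t):
        alpha_i = gf_mul(alpha_i, 2)      # next consecutive power of α (α = 2)
        g = poly_mul(g, [alpha_i, 1])     # multiply by (x + α^i)
    return g

print(rs_generator(t=2))   # degree-4 generator for a 2-error-correcting code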
The syndrome polynomial can be computed as before; however, we are now dealing with operations over GF(2^m) instead of GF(2). Efficient shift-register circuit configurations have been found to calculate the coefficients of the syndrome polynomial.
Once the syndrome polynomial has been found, the error-locator polynomial, Λ(x), and the error-evaluator polynomial, Ω(x) (also called the error-magnitude polynomial), must be computed. The error-evaluator polynomial is needed because knowing the error location is not sufficient to correct an error: each symbol is m bits wide, so one of the 2^m − 1 possible non-zero error patterns must be added to the symbol at each error location. Several algorithms for finding the error-locator polynomial have been devised, such as the Peterson-Gorenstein-Zierler algorithm, Euclid's algorithm, and the non-binary Massey-Berlekamp algorithm. The error-evaluator polynomial can be computed as follows:

Λ(x)S(x) ≡ Ω(x) mod x^(2t)    (2)
Improvements have been made to the Massey-Berlekamp algorithm so that both the error-locator polynomial and the error-evaluator polynomial are computed at the same time; this version, however, requires Galois field inversion. More recently, an inversionless Massey-Berlekamp algorithm was discovered, but it computes only the error-locator polynomial. An extension to this algorithm, which calculates both polynomials at the same time without the use of inversion, was subsequently developed. Once these two polynomials have been found, the roots of the error-locator polynomial are located; an efficient method of performing this function is the Chien search. The error value that must be added to the symbol at the error location is given by Forney's algorithm.
III. REED-SOLOMON ENCODING
A Reed-Solomon code is a q^m-ary BCH code of length q^m − 1. Code symbols are elements of the corresponding Galois field GF(q^m). In this paper we deal with extensions of binary codes, i.e., q = 2. The field is generated by a primitive irreducible polynomial of degree m. Let α be a primitive element of GF(2^m). The generator polynomial for a code capable of correcting t errors has degree 2t and has as its roots 2t consecutive powers of α, that is:

g(x) = ∏_{i=0}^{2t−1} (x + α^(m0+i))    (3)
where m0 is the log of the initial root. The message of k·m bits is divided into k symbols, each symbol being an m-bit element of GF(2^m). Each symbol is regarded as a coefficient of a polynomial m(x) of degree k − 1. A total of 2t parity symbols are appended to the message, giving a systematic encoding format. Such an RS code is denoted by the two parameters n and k and is written RS(n, k). The distance of an RS code is:

Distance = n − k + 1 = 2t + 1    (4)

Here there are k message symbols and n − k = 2t parity symbols, for a total of n symbols. RS codes may be shortened to RS(n′, k′), where n′ = n − l and k′ = k − l; in this case the distance property d = 2t + 1 still holds.
The encoded message, c(x), is formed as follows:
1. Multiply the message m(x) by x^(2t).
2. Form the parity symbols, b(x), by dividing the above result by the generator polynomial.

c(x) = m(x) · x^(2t) + b(x)    (5)
Thus, in order to specify an RS code completely, the following items must be described: the degree m of the field generator polynomial and its coefficients; the error-correcting capability t of the code; the number of message symbols, k; and the log of the initial root of the code generator polynomial, m0. Note that there is a restriction placed on these parameters: the total number of symbols, n = k + 2t, must be less than or equal to the maximum number of allowable symbols, namely 2^m − 1. The higher-order symbols are usually transmitted first.
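The encoding procedure of Eq. (5) reduces to a polynomial long division by the monic g(x). The following Python sketch illustrates it for an assumed RS(15, 11) code over GF(2^4) with t = 2; the generator coefficients below were obtained as in the earlier sketch, and the function name rs_encode is ours.

# Systematic RS encoding per Eq. (5): c(x) = m(x)·x^(2t) + b(x), where b(x) is
# the remainder of m(x)·x^(2t) divided by g(x). Assumed field: GF(2^4), x^4 + x + 1.

M, PRIM = 4, 0b10011

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= PRIM
    return r

def rs_encode(msg, gen):
    # msg: k message symbols, highest-order symbol first.
    # gen: monic generator polynomial, highest-degree coefficient first.
    # Returns the n = k + 2t code-word symbols (message followed by parity).
    n_parity = len(gen) - 1
    work = list(msg) + [0] * n_parity        # m(x) · x^(2t)
    for i in range(len(msg)):                # long division by the monic g(x)
        coef = work[i]
        if coef:
            for j in range(1, len(gen)):
                work[i + j] ^= gf_mul(coef, gen[j])
    return list(msg) + work[len(msg):]       # the tail of 'work' is b(x)

# g(x) for t = 2 with roots α..α^4 (α = 2), written highest degree first.
GEN = [1, 13, 12, 8, 7]
print(rs_encode([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], GEN))   # RS(15, 11) code word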
2. Syndrome calculation:
The syndrome calculation is the first step in the decoding process. For a t-error-correcting RS code, there are 2t syndromes that must be calculated, as follows:

S(j) = r(α^(j+m0)), where j ∈ {0, ..., 2t−1}    (6)

In this equation, r(x) is the received code-word polynomial and m0 is the log of the initial root of the code generator polynomial. Since the received code-word polynomial is the sum of the transmitted code-word polynomial and the error polynomial, and the code word vanishes at the roots of g(x), this reduces to

S(j) = c(α^(j+m0)) + e(α^(j+m0)) = e(α^(j+m0)) = Σ_i Y_i · X_i^(j+m0), where j ∈ {0, ..., 2t−1}    (7)

where the sum runs over the error positions, Y_i is the error value and X_i the location of the i-th error.
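A direct software rendering of Eq. (6), under the same assumed GF(2^4) field and with m0 = 1, is sketched below; in the hardware design this computation is performed by shift-register circuits rather than a loop.

# Syndrome computation per Eq. (6): S_j = r(α^(j + m0)), j = 0 .. 2t−1.
# Assumed field: GF(2^4) with x^4 + x + 1, α = 2, m0 = 1.

M, PRIM = 4, 0b10011

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= PRIM
    return r

def syndromes(received, t, m0=1):
    # received: n symbols, highest-order symbol first. Returns [S_0, ..., S_(2t-1)].
    out = []
    for j in range(2 * t):
        x = 1
        for _ in range(j + m0):
            x = gf_mul(x, 2)                 # x = α^(j + m0)
        s = 0
        for sym in received:                 # Horner evaluation of r(x) at x
            s = gf_mul(s, x) ^ sym
        out.append(s)
    return out

# An error-free code word gives all-zero syndromes; any nonzero S_j flags an error.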
3. Chien Search
Another crucial step in the decoding process is to find the locations of the errors in the received code word. An iterative process called the Chien search is the most efficient means of doing this. Consider the error-locator polynomial, σ(x), a degree-t polynomial whose roots are the error locations X_i:

σ(x) = ∏_{i=1}^{t} (x + X_i) = x^t + σ_1·x^(t−1) + ⋯ + σ_{t−1}·x + σ_t    (8)

It can be shown that the coefficients of the error-locator polynomial are related to the syndrome values S(j) through a set of relations called Newton's identities:

S(t + j) + σ_1·S(t + j − 1) + ⋯ + σ_t·S(j) = 0,  ∀j    (9)

These identities are what the equations for the error values of 1-error and 2-error correcting codes are derived from. Any root of σ(x) will also satisfy the equation

σ_1·x^(−1) + σ_2·x^(−2) + ⋯ + σ_t·x^(−t) = 1    (10)
The circuit shown in Fig. 2 implements such a search, testing one location per clock cycle. Initially, the registers are loaded with the coefficients of the error-locator polynomial. A sum of the registers is then formed, the Chien sum, and if it is 0, an error has been located.
Fig 2. Chien Search
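The behaviour of this circuit can be mimicked in a few lines of Python (an illustrative model under the same assumed GF(2^4) field, not the synthesized design): every simulated clock cycle multiplies the register holding σ_k by α^k and tests whether the Chien sum is zero.

# Chien search sketch: the registers start with the locator coefficients; each
# cycle, register k is multiplied by α^k and all registers are XOR-summed. A zero
# Chien sum at cycle j means σ(α^j) = 0, i.e. an error has been located.

M, PRIM = 4, 0b10011
N = (1 << M) - 1                             # code-word length (15 here)

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= PRIM
    return r

def chien_search(sigma):
    # sigma: locator coefficients [σ_0, ..., σ_t], lowest degree first.
    # Returns the exponents j for which σ(α^j) == 0.
    alpha_pow = [1]
    for _ in range(len(sigma) - 1):
        alpha_pow.append(gf_mul(alpha_pow[-1], 2))       # α^k for register k
    regs, roots = list(sigma), []
    for j in range(N):                                    # one location per cycle
        chien_sum = 0
        for r in regs:
            chien_sum ^= r
        if chien_sum == 0:
            roots.append(j)
        regs = [gf_mul(regs[k], alpha_pow[k]) for k in range(len(regs))]
    return roots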
IV. DECODING FOR 3 OR MORE ERRORS
This section is devoted to RS decoding for 3 or more errors. For 3 or more errors, the resulting t × t sets of simultaneous equations become cumbersome and inefficient to solve directly. A more elegant way to solve the key equation is the Massey-Berlekamp algorithm, in which the error-locator and the error-evaluator polynomials can be computed simultaneously. Once this is done, the Chien search is performed, and the error values can be calculated using Forney's algorithm. One drawback of the Massey-Berlekamp algorithm is the need to perform a Galois field inversion in every iteration. There are two approaches to mitigating this. First, one may pipeline the algorithm, so that in each clock cycle either a (multiplication + addition) operation or an inversion is performed. This reduces the maximum time needed between clock cycles, but increases the total number of clock cycles, thus increasing latency. The other approach is to put the inversion in the same clock cycle as the other operations, resulting in a slower clock speed. An improvement on the Massey-Berlekamp algorithm is the inversionless Massey-Berlekamp algorithm, in which the need for Galois field inversion is eliminated by a clever modification of the equations used in every iteration. This results in an algorithm that requires the same number of iterations as the original yet, since it performs no inversion, can be clocked faster. Unfortunately, this algorithm produces only one of the two required polynomials, namely the error-locator polynomial.
The error-evaluator polynomial must then be calculated afterwards from the following equation:

Λ(x)S(x) ≡ Ω(x) mod x^(2t)    (11)

Extra clock cycles are needed to perform this polynomial multiplication, once again resulting in a longer latency.
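For reference, the classic Massey-Berlekamp recursion with inversion, whose per-iteration Galois field inversion is the drawback discussed above, can be sketched in Python as follows. This is the textbook algorithm, not the extended inversionless variant introduced later in this paper; the field GF(2^4) and the function names are assumptions made for the example.

# Textbook Massey-Berlekamp sketch over GF(2^m) (with inversion): given the 2t
# syndromes S_0..S_(2t-1), return the error-locator polynomial, lowest degree first.
# Assumed field: GF(2^4) with x^4 + x + 1.

M, PRIM = 4, 0b10011

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= PRIM
    return r

def gf_inv(a):
    # a^(2^m − 2) is the multiplicative inverse of a nonzero element.
    r = 1
    for _ in range((1 << M) - 2):
        r = gf_mul(r, a)
    return r

def berlekamp_massey(S, t):
    C = [1] + [0] * (2 * t)                  # current locator estimate
    B = [1] + [0] * (2 * t)                  # previous locator copy
    L, m, b = 0, 1, 1
    for n in range(2 * t):
        d = S[n]                             # discrepancy for this iteration
        for i in range(1, L + 1):
            d ^= gf_mul(C[i], S[n - i])
        if d == 0:
            m += 1
            continue
        coef = gf_mul(d, gf_inv(b))          # the inversion discussed above
        T = list(C)
        for i in range(2 * t + 1 - m):
            C[i + m] ^= gf_mul(coef, B[i])   # C(x) <- C(x) + (d/b)·x^m·B(x)
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1]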
A. Extended Inversionless Massey-Berlekamp Algorithm
The inversionless Massey-Berlekamp algorithm can be extended to yield both the error-locator and the error-evaluator polynomial at the same time, also without the use of Galois field inverters. This eliminates the polynomial multiplication otherwise required to determine the error-evaluator polynomial. The extension to the inversionless Massey-Berlekamp algorithm consists of adding the following steps to the inversionless algorithm:

Ω(x)^(k+1) = γ^(k) · Ω(x)^(k) − δ^(k+1) · a(x)^(k) · x    (12)

If δ^(k+1) = 0 or 2l^(k) > k, then

a(x)^(k+1) = x · a(x)^(k)    (13)

otherwise

a(x)^(k+1) = Ω(x)^(k)    (14)

The initial conditions are the same as those in the original Massey-Berlekamp algorithm, that is:

Ω(x)^(0) = 0,   a(x)^(0) = x^(−1)    (15)

In these equations, Ω(x) is the error-evaluator polynomial.
At this point, both the error-locator and the error-evaluator polynomials have been computed. The Chien search will find the zeros of the error-locator polynomial, i.e., the error locations. The next step is to correct the errors. This is done using Forney's algorithm, which, for extensions of binary fields, is:

e_{i_m} = α^(i_m·(1−m0)) · Ω(α^(i_m)) / Λ′(α^(−i_m))    (16)

In this equation, α^(i_m) is the present location of the Chien search, X_i. The entire algorithm for a multi-error (t > 2) correcting code is then stated as follows:
1. Calculate the 2t syndromes.
2. If all of the syndromes are 0, then there is no error, and we stop.
3. Calculate the error-locator and the error-evaluator polynomials using the extended inversionless Massey-Berlekamp algorithm.
4. Perform a Chien search on the entire received message. If the element at position X_i is found to be the location of an error, that is, when the Chien sum is 0, then calculate the error value Y_i using Forney's algorithm (see the sketch below) and correct the error: corrected_element = received_element + Y_i.
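Step 4 relies on evaluating Ω(x) and the formal derivative of the error-locator polynomial at the location flagged by the Chien search, as in Eq. (16). A minimal Python sketch of just those two primitives, under the same assumed GF(2^4) field, is given below; the scaling factor and argument convention of Eq. (16) are not repeated here.

# Building blocks for the Forney correction of Eq. (16): Horner evaluation of a
# polynomial over GF(2^m), and the formal derivative, which in characteristic 2
# keeps only the odd-degree terms. Assumed field: GF(2^4) with x^4 + x + 1.

M, PRIM = 4, 0b10011

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= PRIM
    return r

def poly_eval(p, x):
    # Evaluate p (lowest degree first) at x using Horner's rule.
    acc = 0
    for coef in reversed(p):
        acc = gf_mul(acc, x) ^ coef
    return acc

def formal_derivative(p):
    # d/dx of sum p_k x^k over GF(2^m): only odd k survive, each giving p_k x^(k-1).
    return [p[k] if k % 2 == 1 else 0 for k in range(1, len(p))]

# The error value at a located position is a ratio of poly_eval applied to the
# error-evaluator polynomial and to formal_derivative of the error-locator
# polynomial, scaled as in Eq. (16).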
A block diagram of a pipelined multi-error (t > 2) correcting RS decoder is shown in the figure below. As before, an input strobe, RS_data_in_start, is used to signify the beginning of a new input code word. The code word enters the decoder in bit-parallel format, one symbol at a time. It first enters the Calculate_Syndrome block, where the 2t syndromes are calculated.
The control signal Error_Present is also generated. Finally, a strobe indicating the completion of the syndrome calculation is sent to the Key_Equation_Solver block, which implements the extended inversionless Massey-Berlekamp algorithm. A delay block for the incoming RS symbols is necessary to compensate for the delays in the Calculate_Syndrome and Key_Equation_Solver blocks, and any startup delays in the Chien_Search block. The Key_Equation_Solver block calculates the error-locator and the error-evaluator polynomials. The Chien_Search block performs the function of finding error locations and correcting the data, using Forney's algorithm, when the Chien sum is 0. The evaluation of the polynomials needed for error correction can be pipelined. The corrected data and an output data strobe are the outputs of the RS decoder.
V. RESULTS
A. Implementation of RS Encoder in VHDL
Consider now the implementation of the RS encoder in VHDL. The timing diagram showing the input/output interface of the RS encoder is shown in the figure below. A signal called input_strobe is used to start the encoding process; it is a single-clock-cycle pulse that precedes the message data, data_in, by one clock cycle. The data consists of the k coefficients of the message polynomial. Another requirement for the encoder is the size of the message: this encoder can accommodate different-sized messages, and the message size is conveyed through the signal data_size. The timing diagram also shows two control signals that are used internally, namely do_calc and count. Finally, there are two output signals, output_strobe and data_out, whose names clearly describe their function.
Fig. 3. Reed-Solomon Encoder (179, 163)
Fig. 4. Device Utilization summary for RS Encoder (179, 163)
B. Implementation of RS Decoder in VHDL
The RS decoder is implemented using all the entities described above. A structural style of modeling is used to map the various components of the decoder to this entity. The complete simulation results after port-mapping the components to the ports of the decoder entity are shown below.
1. Syndrome Register: This block contains the VHDL code for the syndrome registers and the logic that determines whether an error is present. The intermediate syndrome values are stored in the IntS register array. On reset, they are set to zero. When the input strobe rs_data_in_start arrives, these registers are set to the input data. Thereafter, while the control signal DoCalc is 1, the syndromes are calculated as discussed previously. When the control signal XferSyndrome is 1, i.e., at the end of the input data, the output syndrome registers and the errors_present control bit are set to their correct values.
2. Massey-Berlekamp Algorithm: The VHDL implementation of the extended inversionless Massey-Berlekamp algorithm is described in this section; the figure below shows the timing diagram. The block starts when the input strobe XferSyndrome arrives. The outputs of this block are the coefficients of the error-locator polynomial (lamda_poly) and the error-evaluator polynomial (omega_poly). The latency is (2t + 1), where t is the error-correcting capability of the RS code.
Fig. 5. Massey-Berlekamp Simulation
3. Chien Search and Error Correction: When the error-correcting capability of an RS code is greater than two, the calculation of the correction factor requires evaluation of the derivative of the error-locator polynomial and of the error-evaluator polynomial, while the Chien search requires evaluation of the error-locator polynomial itself. These evaluations are best handled in polynomial-evaluation pipelines. The pipelines for the error-locator derivative and the error-evaluator polynomial are called the eval_lambda_prime and Eval_omega pipelines in the VHDL code, and the Chien search pipeline is called the Eval_Chiensum pipeline.
Fig. 6. Chien Search Simulation
Fig. 7. Reed-Solomon Decoder (179, 163)
VI. CONCLUSION
This paper has presented the extended inversionless Massey-Berlekamp algorithm, a modest improvement over the inversionless Massey-Berlekamp algorithm for solving the key equation in a Reed-Solomon decoder. The error location and correction equations for an arbitrary RS code with an error-correcting capability of three or more errors were also presented. Using this information, VHDL-based designs for specific RS codes were implemented for both the encoder and the decoder. Finally, a program was written to automatically generate the VHDL code for either an RS encoder or an RS decoder for an arbitrary RS code based on user inputs.
VII. REFERENCES
[1] E. Prange, "Cyclic Error-Correcting Codes in Two Symbols," Air Force Cambridge Research Center-TN-57-103, Cambridge, Mass., September 1957.
[2] E. Prange, "Some Cyclic Error-Correcting Codes with Simple Decoding Algorithms," Air Force Cambridge Research Center-TN-58-156, Cambridge, Mass., April 1958.
[3] E. Prange, "The Use of Coset Equivalence in the Analysis and Decoding of Group Codes," Air Force Cambridge Research Center-TR-59-164, Cambridge, Mass., 1959.
[4] R. C. Bose and D. K. Ray-Chaudhuri, "On a Class of Error Correcting Binary Group Codes," Information and Control, Volume 3, pp. 68-79, March 1960.
[5] R. C. Bose and D. K. Ray-Chaudhuri, "Further Results on Error Correcting Binary Group Codes," Information and Control, Volume 3, pp. 279-290, September 1960.
[6] A. Hocquenghem, "Codes correcteurs d'erreurs," Chiffres, Volume 2, pp. 147-156, 1959.
[7] I. S. Reed and G. Solomon, "Polynomial Codes over Certain Finite Fields," SIAM Journal of Applied Mathematics, Volume 8, pp. 300-304, 1960.
[8] R. J. McEliece, Finite Fields for Computer Scientists and Engineers, Boston: Kluwer Academic, 1987.
[9] S. B. Wicker, Error Control Systems for Digital Communication and Storage, N.J.: Prentice-Hall, 1994.
[10] R. C. Singleton, "Maximum Distance Q-nary Codes," IEEE Transactions on Information Theory, Volume IT-10, pp. 116-118, 1964.
[11] D. Gorenstein and N. Zierler, "A Class of Error Correcting Codes in p^m Symbols," Journal of the Society for Industrial and Applied Mathematics, Volume 9, pp. 207-214, June 1961.