2. Topics to be discussed :
What is channel coding
Where it is used
How to detect and correct error
Hamming distance
Linear block coding
Syndrome coding
3. Definition of channel coding
Error control coding is used to detect, and often correct, symbols which are
received in error.
The channel encoder separates or segments the incoming bit stream into
equal-length blocks of L binary digits and maps each L-bit message
block into an N-bit code word, where N > L.
There are M = 2^L messages and therefore 2^L code words of length N bits.
The channel decoder has the task of detecting that there has been a bit error
and (if possible) correcting it.
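As an illustration, this block-encoding step can be sketched as a lookup table in Python. The codebook below is the (5,2) code listed in the performance-prediction example later in these slides; the message-to-codeword assignment and function names are illustrative, not part of the original.

```python
# (5,2) block code: L = 2 message bits map to N = 5 codeword bits.
# The four codewords are those of the (5,2) example later in the slides;
# which message maps to which codeword is an illustrative assumption.
CODEBOOK = {
    (0, 0): (0, 0, 0, 0, 0),
    (0, 1): (0, 0, 1, 1, 1),
    (1, 0): (1, 1, 1, 0, 0),
    (1, 1): (1, 1, 0, 1, 1),
}

def channel_encode(bits, L=2):
    """Segment the bit stream into L-bit blocks and map each block
    to its N-bit codeword."""
    assert len(bits) % L == 0, "bit stream must divide into L-bit blocks"
    out = []
    for i in range(0, len(bits), L):
        out.extend(CODEBOOK[tuple(bits[i:i + L])])
    return out

encoded = channel_encode([0, 1, 1, 0])  # two 2-bit messages -> two 5-bit codewords
```

Note that M = 2^L = 4 messages produce 4 codewords of N = 5 bits each, exactly as the definition above states.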
4. ARQ (Automatic Repeat Request)
If the channel decoder performs error detection, then errors can be detected
and a feedback channel from the channel decoder to the channel encoder can be
used to control retransmission of the code word until it is received without
detectable errors.
There are two major ARQ techniques: stop and wait, and continuous ARQ.
FEC (Forward Error Correction)
If the channel decoder performs error correction, then errors are not only
detected but the bits in error can be identified and corrected (by bit
inversion).
5. There are two major ARQ techniques:
• stop and wait, in which each block of data is positively, or
negatively, acknowledged by the receiving terminal as being error
free before the next data block is transmitted;
• continuous ARQ, in which blocks of data continue to be transmitted
without waiting for each previous block to be acknowledged.
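The stop-and-wait technique can be sketched as a simple retransmission loop. This is a minimal Python simulation, assuming a perfect error detector and an abstract `channel` function; the names and the random channel model are illustrative.

```python
import random

def send_stop_and_wait(block, channel, max_tries=10):
    """Stop-and-wait ARQ sketch: transmit one block, wait for the
    acknowledgement, and retransmit until the receiver accepts it.
    Error detection is assumed perfect here (a simplification)."""
    for attempt in range(1, max_tries + 1):
        received = channel(block)
        if received == block:       # receiver detects no errors -> positive ACK
            return received, attempt
        # negative acknowledgement: loop around and retransmit the same block
    raise RuntimeError("block not delivered within max_tries")

random.seed(1)
# Illustrative noisy channel: half the time it flips the last bit.
noisy = lambda b: b if random.random() > 0.5 else b[:-1] + [1 - b[-1]]
block = [1, 0, 1, 1]
delivered, tries = send_stop_and_wait(block, noisy)
```

Continuous ARQ differs only in that the sender keeps transmitting subsequent blocks while acknowledgements are outstanding, which needs buffering on both sides.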
6. Where is it used?
• Error control coding, generally, is applied
widely in control and communications
systems for aerospace applications, in
mobile (GSM) cellular telephony and for
enhancing security in banking and
barcode readers.
7. Error Control Coding (Channel Coding)
Particular error control methods: linear group codes,
cyclic codes, the Golay code, BCH codes, Reed–Solomon codes
and Hamming codes.
8. Block coding vs. convolutional coding
Block coding
An (n, k) block code converts k bits of the
message signal into an n-bit codeword.
It is called a block code because it takes
a number of bits from the message
(information digits), adds redundant bits
(parity digits) to them, and does the same
for the rest of the bits.
Convolutional coding
Encodes a stream of data rather than blocks of data.
The sequence of bits in a convolutional code depends
not only on the current bits of data
but also on previous bits of data.
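The "depends on previous bits" property can be made concrete with a toy convolutional encoder. This is a minimal sketch of the classic rate-1/2, constraint-length-3 code with generators 111 and 101; the specific code is my choice for illustration, not taken from the slides.

```python
def conv_encode(bits, state=(0, 0)):
    """Toy rate-1/2 convolutional encoder (generators 111 and 101).
    Each input bit yields TWO output bits, and each output depends on
    the current bit AND the two previous bits held in the shift register."""
    out = []
    s1, s2 = state                 # the encoder's memory of past bits
    for b in bits:
        out.append(b ^ s1 ^ s2)    # generator 111: current + both past bits
        out.append(b ^ s2)         # generator 101: current + older past bit
        s1, s2 = b, s1             # shift the register
    return out

stream = conv_encode([1, 0, 1])    # 3 input bits -> 6 output bits
```

Contrast this with a block code: feeding the same bit into `conv_encode` produces different outputs depending on what came before it, whereas a block encoder maps each k-bit block independently.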
9. Error rate control concept
How do we measure error performance?
The answer is the BER: the average rate at which errors occur,
given by the product Pb·Rb, where
Pb: probability of bit error
Rb: bit transmission rate in the channel
BUT
if the BER is too large, what can we do to make it smaller?
• Increase transmitter power (not efficient).
• Diversity: frequency diversity employs two different frequencies to
transmit the same information; in time diversity systems the same
message is transmitted more than once at different times.
• Introduce full duplex transmission, implying simultaneous two-way
transmission.
• ARQ and FEC.
10. Hamming Distance
The Hamming distance between two code words is defined as the number of
places, bits or digits, in which they differ.
• The distance is an important factor since it indicates how easy it is to
change one valid code word into another.
• The weight of a codeword is defined as the number of ones in the codeword.
• Example: calculate the Hamming distance and the weights of the following
codewords: 11100, 11011.
Hamming distance = 3 bits
(the codeword 11100 can be changed into 11011 by inverting 3 bits).
The weight of codeword 1 = 3
The weight of codeword 2 = 4
The minimum codeword weight = 3
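Both definitions are one-liners in code. A minimal Python sketch, checked against the example above (function names are illustrative):

```python
def hamming_distance(a, b):
    """Number of positions in which two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def weight(c):
    """Number of ones in the codeword."""
    return sum(c)

c1 = [1, 1, 1, 0, 0]   # 11100
c2 = [1, 1, 0, 1, 1]   # 11011
d = hamming_distance(c1, c2)   # positions 3, 4 and 5 differ
```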
11. (n, k) block codes:
k information digits go into the coder,
and n digits come out, after
(n − k) redundant parity check digits are added.
The rate, or efficiency, of this code is
R = k/n.
The rate is normally in the range 1/2 to unity.
12. Linear group codes
Group codes contain the all-zeros codeword and have the property referred to
as closure: taking any two codewords Ci and Cj, then Ci ⊕ Cj = Ck, another
codeword of the code.
Advantage: closure makes performance calculations with linear group codes
particularly easy.
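Closure is easy to verify exhaustively for a small code. A minimal Python check, using the (5,2) codewords from the performance-prediction example on the next slide:

```python
# The four codewords of the (5,2) code used in the slides' example.
codewords = {(0, 0, 0, 0, 0), (0, 0, 1, 1, 1), (1, 1, 1, 0, 0), (1, 1, 0, 1, 1)}

def xor(ci, cj):
    """Bitwise modulo-2 sum of two codewords."""
    return tuple(a ^ b for a, b in zip(ci, cj))

# Closure: the XOR of ANY two codewords must itself be a codeword.
closed = all(xor(ci, cj) in codewords for ci in codewords for cj in codewords)
```

For instance 00111 ⊕ 11100 = 11011, which is indeed in the code; and any codeword XORed with itself gives the all-zeros codeword, which group codes always contain.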
14. Performance prediction
Hamming distances are measured to determine the overall performance of a
block code; considering each of the codewords against the all-zeros
codeword is sufficient.
Example
00000
00111
11100
11011
Dmin = 3 for this (5,2) code.
Consider the four codewords: their weights are 0, 3, 3 and 4.
The minimum non-zero weight in the weight structure (3) is equal to Dmin,
the minimum Hamming distance for the code.
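A short Python check confirms the shortcut: the minimum non-zero codeword weight equals the minimum pairwise Hamming distance for this linear code.

```python
from itertools import combinations

# The (5,2) code from the example above.
codewords = [(0, 0, 0, 0, 0), (0, 0, 1, 1, 1), (1, 1, 1, 0, 0), (1, 1, 0, 1, 1)]

dist = lambda a, b: sum(x != y for x, y in zip(a, b))

# Dmin the long way: minimum distance over all pairs of codewords.
dmin_pairwise = min(dist(a, b) for a, b in combinations(codewords, 2))

# Dmin the short way: minimum weight of the non-zero codewords.
min_weight = min(sum(c) for c in codewords if any(c))
```

Both give 3, which is why comparing every codeword against the all-zeros codeword is sufficient for a linear code.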
15. Error detection and correction capability
t: the maximum number of errors the code can correct.
Dmin: the minimum Hamming distance.
e: the number of errors the code can detect.
These satisfy t + e < Dmin, with t ≤ e.
16. Error detection and correction capability
For Dmin = 3: e = 1, t = 1.
Example: 11001 and 11000 differ in a single bit.
If any single error occurs in one of the codewords it can
therefore be corrected. Alternatively, Dmin − 1 errors can be detected
if there is no error correction.
Longer codes with larger Hamming distances
offer greater detection and correction capability,
by selecting different t and e.
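The trade-off between t and e can be enumerated in a few lines of Python, assuming the standard bound t + e < Dmin with t ≤ e; this reproduces both the e = 1, t = 1 choice for Dmin = 3 above and the POCSAG figures on the next slide.

```python
def capabilities(dmin):
    """All (t, e) pairs a code with minimum distance dmin can support
    simultaneously, assuming the bound t + e < dmin with t <= e."""
    return [(t, e)
            for t in range(dmin)
            for e in range(t, dmin)
            if t + e < dmin]

dmin3 = capabilities(3)   # includes (1, 1): correct 1 error, detect 1
dmin6 = capabilities(6)   # includes (2, 3): correct 2 errors, detect 3
```

Choosing t = 0 gives pure detection of up to Dmin − 1 errors; pushing t up to ⌊(Dmin − 1)/2⌋ trades detection capability for correction.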
17. Standards
The UK Post Office Code Standards Advisory Group (POCSAG) code has
k = 21 and n = 32, so
R ≈ 2/3 and
Dmin = 6.
This gives 3-bit detection and 2-bit correction capability.
19. Syndrome decoding
d is a message vector of k digits.
G is the k × n generator matrix.
c is the n-digit codeword corresponding to the message d:
dG = c
Furthermore:
Hc = 0
where H is the (even) parity check matrix corresponding to G.
20. Syndrome decoding
r = c ⊕ e
r is the sequence received after transmitting c.
e is an error vector representing the locations of the errors
which occur in the received sequence r.
The syndrome vector s is:
s = H r = H (c ⊕ e) = H c ⊕ H e = 0 ⊕ H e = H e
s is easily calculated.
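The derivation above can be checked directly in Python. This sketch uses the standard (7,4) Hamming parity-check matrix whose columns are the binary representations of 1 to 7, so the syndrome of a single error directly names the errored position (the matrix is the textbook one, assumed here for illustration):

```python
# Parity-check matrix of the standard (7,4) Hamming code.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(r):
    """s = H r (arithmetic modulo 2)."""
    return [sum(h * x for h, x in zip(row, r)) % 2 for row in H]

c = [0] * 7                       # the all-zeros codeword (H c = 0)
e = [0, 0, 0, 0, 0, 1, 0]         # single error in position 6
r = [ci ^ ei for ci, ei in zip(c, e)]   # r = c XOR e
s = syndrome(r)                   # equals H e: column 6 of H, i.e. [1, 1, 0]
```

Since s depends only on e and not on which codeword was sent, a table indexed by s (a syndrome decoding table) recovers the error pattern, which is exactly what the MATLAB `syndtable` example later in the slides builds.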
21. The generator matrix (G): the generator matrix G for
an (n, k) block code can be used to generate the appropriate
n-digit codeword from any given k-digit data sequence.
Parity check matrix (H): does not contain any codewords.
(7,4) block code H matrix.
The right side of G is the transpose of the left-hand
portion of H.
The parity check section must contain at
least two ones, and its rows cannot be identical.
22. G is the k × n generator matrix. The right side of G is the
transpose of the left-hand portion of H.
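The G-from-H relationship can be verified numerically. A minimal Python sketch for a systematic (7,4) code: take H = [A | I3], form G = [I4 | Aᵀ], and confirm that every codeword c = dG satisfies Hc = 0. The particular matrix A is an illustrative choice, not taken from the slides.

```python
# Left-hand (parity) portion of H for a (7,4) code: 3 x 4, illustrative.
A = [
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
]
I3 = [[int(i == j) for j in range(3)] for i in range(3)]
I4 = [[int(i == j) for j in range(4)] for i in range(4)]

H = [A[i] + I3[i] for i in range(3)]                  # H = [A | I3], 3 x 7
At = [[A[i][j] for i in range(3)] for j in range(4)]  # A transposed, 4 x 3
G = [I4[i] + At[i] for i in range(4)]                 # G = [I4 | A^T], 4 x 7

def encode(d):
    """c = dG (modulo 2)."""
    return [sum(di * gi for di, gi in zip(d, col)) % 2 for col in zip(*G)]

def check(c):
    """H c (modulo 2); all zeros iff c is a codeword."""
    return [sum(h * x for h, x in zip(row, c)) % 2 for row in H]

# Every one of the 2^4 messages must encode to a word with zero syndrome.
all_zero = all(check(encode([(m >> b) & 1 for b in range(4)])) == [0, 0, 0]
               for m in range(16))
```

Note each row of A has at least two ones and no two rows are identical, as slide 21 requires.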
23. Use this syntax to produce a syndrome decoding table:
t = syndtable(h)
returns a decoding table for an error-correcting binary
code having codeword length n and message length k.
http://www.mathworks.com/help/comm/ref/syndtable.html
24. % Use a [7,4] Hamming code.
m = 3; n = 2^m-1; k = n-m;
parmat = hammgen(m); % Produce parity-check matrix.
trt = syndtable(parmat); % Produce decoding table.
recd = [1 0 0 1 1 1 1] % Suppose this is the received vector.
syndrome = rem(recd * parmat',2);
syndrome_de = bi2de(syndrome,'left-msb'); % Convert to decimal.
disp(['Syndrome = ',num2str(syndrome_de),...
' (decimal), ',num2str(syndrome),' (binary)'])
corrvect = trt(1+syndrome_de,:) % Correction vector
% Now compute the corrected codeword.
correctedcode = rem(corrvect+recd,2)
26. Matlab
n = 6; k = 4;  % Set codeword length and message length for a [6,4] code.
msg = [1 0 0 1 1 0 1 0 1 0 1 1]'; % Message is a binary column.
code = encode(msg,n,k,'cyclic');  % Code will be a binary column.
msg'
code'
27. msg consists of 12 entries, which are interpreted as three 4-digit
(because k = 4) messages. The resulting vector code comprises three 6-digit
(because n = 6) codewords, which are concatenated to form a vector of
length 18. The parity bits are at the beginning of each codeword.