3F4 Error Control Coding
Dr. I. J. Wassell
Introduction
• Error Control Coding (ECC)
– Extra bits are added to the data at the
transmitter (redundancy) to permit error
detection or correction at the receiver
– Done to prevent the output of erroneous bits
despite noise and other imperfections in the
channel
– The positions of the error control coding and
decoding are shown in the transmission model
Transmission Model
• Transmitter chain: Digital Source → Source Encoder → Error Control Coding → Line Coding → Modulator (Transmit Filter, etc) → Channel
• In the channel, the transmitted signal X(w) is filtered by the channel response Hc(w) and noise N(w) is added, giving the received signal Y(w)
• Receiver chain: Demod (Receive Filter, etc) → Line Decoding → Error Control Decoding → Source Decoder → Digital Sink
Error Models
• Binary Symmetric Memoryless Channel
– Assumes transmitted symbols are binary
– Errors affect ‘0’s and ‘1’s with equal
probability (i.e., symmetric)
– Errors occur randomly and are independent
from bit to bit (memoryless)
Channel diagram:
  IN 0 → OUT 0 with probability 1-p;  IN 0 → OUT 1 with probability p
  IN 1 → OUT 1 with probability 1-p;  IN 1 → OUT 0 with probability p
p is the probability of bit error, i.e., the Bit Error Rate (BER) of the channel
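As an illustrative aside (a Python sketch, not part of the original slides), the BSC is easy to simulate; the function name bsc and the trial count are arbitrary choices:

```python
# A minimal sketch of the binary symmetric channel: each bit is
# flipped independently with probability p (the channel BER).
import random

def bsc(bits, p):
    """Pass a list of 0/1 bits through a BSC with bit error rate p."""
    return [b ^ (random.random() < p) for b in bits]

random.seed(0)
tx = [random.randint(0, 1) for _ in range(100_000)]
rx = bsc(tx, p=0.01)
errors = sum(a != b for a, b in zip(tx, rx))
print(errors / len(tx))  # measured BER, close to 0.01
```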
Error Models
• Many other types
• Burst errors, i.e., contiguous bursts of bit
errors
– output from DFE (error propagation)
– common in radio channels
– Insertion, deletion and transposition errors
• We will consider mainly random errors
Error Control Techniques
• Error detection in a block of data
– Can then request a retransmission, known as
automatic repeat request (ARQ) for sensitive
data
– Appropriate for
• Low delay channels
• Channels with a return path
– Not appropriate for delay sensitive data, e.g.,
real time speech and data
Error Control Techniques
• Forward Error Correction (FEC)
– Coding designed so that errors can be corrected
at the receiver
– Appropriate for delay sensitive and one-way
transmission (e.g., broadcast TV) of data
– Two main types, namely block codes and
convolutional codes. We will only look at block
codes
Block Codes
• We will consider only binary data
• Data is grouped into blocks of length k bits
(dataword)
• Each dataword is coded into blocks of
length n bits (codeword), where in general
n>k
• This is known as an (n,k) block code
Block Codes
• A vector notation is used for the datawords
and codewords,
– Dataword d = (d1 d2 … dk)
– Codeword c = (c1 c2 … cn)
• The redundancy introduced by the code is
quantified by the code rate,
– Code rate = k/n
– i.e., the higher the redundancy, the lower the
code rate
Block Code - Example
• Dataword length k = 4
• Codeword length n = 7
• This is a (7,4) block code with code rate = 4/7
• For example, d = (1101), c = (1101001)
Error Control Process
• Source data is chopped into datawords (k bits)
• Channel coder: dataword (k bits) → codeword (n bits)
• Channel: codeword + possible errors (n bits)
• Channel decoder: → dataword (k bits) + error flags
Error Control Process
• Decoder gives corrected data
• May also give error flags to
– Indicate reliability of decoded data
– Helps with schemes employing multiple layers
of error correction
Parity Codes
• Example of a simple block code – Single
Parity Check Code
– In this case, n = k+1, i.e., the codeword is the
dataword with one additional bit
– For ‘even’ parity the additional bit is,
  q = Σ(i=1..k) di (mod 2)
– For ‘odd’ parity the additional bit is 1-q
– That is, the additional bit ensures that there are
an ‘even’ or ‘odd’ number of ‘1’s in the
codeword
Parity Codes – Example 1
• Even parity
(i) d=(10110) so,
c=(101101)
(ii) d=(11011) so,
c=(110110)
Parity Codes – Example 2
• Coding table for (4,3) even parity code
Dataword   Codeword
0 0 0      0 0 0 0
0 0 1      0 0 1 1
0 1 0      0 1 0 1
0 1 1      0 1 1 0
1 0 0      1 0 0 1
1 0 1      1 0 1 0
1 1 0      1 1 0 0
1 1 1      1 1 1 1
Parity Codes
• To decode
– Calculate sum of received bits in block (mod 2)
– If sum is 0 (1) for even (odd) parity then the dataword is the
first k bits of the received codeword
– Otherwise error
• Code can detect single errors
• But cannot correct error since the error could be in
any bit
• For example, if the received codeword is (100000) the
transmitted codeword could have been (000000) or
(110000) with the error being in the first or second
place respectively
• Note error could also lie in other positions including
the parity bit
Parity Codes
• Known as a single error detecting code
(SED). Only useful if probability of getting
2 errors is small since parity will become
correct again
• Used in serial communications
• Low overhead but not very powerful
• Decoder can be implemented efficiently
using a tree of XOR gates
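A minimal Python sketch of single parity check encoding and decoding (an illustrative addition to the slides; the names spc_encode/spc_decode are arbitrary):

```python
# A sketch of the single parity check code (even parity): encode
# appends q = sum(d) mod 2; decode flags an error when the mod-2
# sum of the received block is non-zero.
def spc_encode(d):
    return d + [sum(d) % 2]

def spc_decode(c):
    error = sum(c) % 2          # XOR of all received bits
    return c[:-1], error        # dataword, error flag

print(spc_encode([1, 0, 1, 1, 0]))     # [1, 0, 1, 1, 0, 1]
print(spc_decode([1, 0, 1, 1, 0, 1]))  # ([1, 0, 1, 1, 0], 0)
print(spc_decode([1, 1, 1, 1, 0, 1]))  # single error -> flag = 1
```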
Hamming Distance
• Error control capability is determined by the
Hamming distance
• The Hamming distance between two
codewords is equal to the number of
differences between them, e.g.,
10011011
11010010 have a Hamming distance = 3
• Alternatively, can compute by adding
codewords (mod 2)
=01001001 (now count up the ones)
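A one-line Python illustration (added here, not in the original slides) of computing the Hamming distance by mod-2 addition and counting the ones:

```python
# Hamming distance as described above: XOR (mod-2 add) the two
# codewords bit by bit, then count the ones.
def hamming_distance(a, b):
    return sum(x ^ y for x, y in zip(a, b))

c1 = [1, 0, 0, 1, 1, 0, 1, 1]
c2 = [1, 1, 0, 1, 0, 0, 1, 0]
print(hamming_distance(c1, c2))  # 3
```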
Hamming Distance
• The Hamming distance of a code is equal to
the minimum Hamming distance between
two codewords
• If Hamming distance is:
1 – no error control capability; i.e., a single error
in a received codeword yields another valid
codeword
XXXXXXX X is a valid codeword
Note that this representation is diagrammatic
only.
In reality each codeword is surrounded by n
codewords. That is, one for every bit that
could be changed
Hamming Distance
• If Hamming distance is:
2 – can detect single errors (SED); i.e., a single
error will yield an invalid codeword
XOXOXO X is a valid codeword
O is not a valid codeword
See that 2 errors will yield a valid (but
incorrect) codeword
Hamming Distance
• If Hamming distance is:
3 – can correct single errors (SEC) or can detect
double errors (DED)
XOOXOOX X is a valid codeword
O is not a valid codeword
See that 3 errors will yield a valid but
incorrect codeword
Hamming Distance - Example
• Hamming distance 3 code, i.e., SEC/DED
– Or can perform single error correction (SEC)
10011011 X
11011011 O – corrected to 10011011
11010011 O – corrected to 11010010
11010010 X
X is a valid codeword
O is an invalid codeword
Hamming Distance
• The maximum number of detectable errors is
  dmin – 1
• The maximum number of correctable errors is given by,
  t = ⌊(dmin – 1)/2⌋
where dmin is the minimum Hamming distance
between 2 codewords and ⌊ ⌋ denotes rounding
down to the nearest integer (floor)
Linear Block Codes
• As seen from the second Parity Code
example, it is possible to use a table to hold
all the codewords for a code and to look-up
the appropriate codeword based on the
supplied dataword
• Alternatively, it is possible to create
codewords by addition of other codewords.
This has the advantage that there is now no
longer the need to hold every possible
codeword in the table.
Linear Block Codes
• If there are k data bits, all that is required is to
hold k linearly independent codewords, i.e., a set
of k codewords none of which can be produced by
linear combinations of 2 or more codewords in the
set.
• The easiest way to find k linearly independent
codewords is to choose those which have ‘1’ in
just one of the first k positions and ‘0’ in the other
k-1 of the first k positions.
Linear Block Codes
• For example for a (7,4) code, only four
codewords are required, e.g.,
  1 0 0 0 0 1 1
  0 1 0 0 1 0 1
  0 0 1 0 1 1 0
  0 0 0 1 1 1 1
• So, to obtain the codeword for dataword 1011,
the first, third and fourth codewords in the list are
added together, giving 1011010
• This process will now be described in more detail
Linear Block Codes
• An (n,k) block code has code vectors
d = (d1 d2 … dk) and
c = (c1 c2 … cn)
• The block coding process can be written as
c=dG
where G is the Generator Matrix
  G = [ a11 a12 … a1n ]   [ a1 ]
      [ a21 a22 … a2n ] = [ a2 ]
      [  .   .     .  ]   [ .  ]
      [ ak1 ak2 … akn ]   [ ak ]
Linear Block Codes
• Thus,
  c = Σ(i=1..k) di ai
• The ai must be linearly independent: since
codewords are given by summations of the ai
vectors, to avoid 2 datawords having the same
codeword the ai vectors must be linearly independent
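An illustrative Python sketch (an addition to the slides) of the encoding c = dG with mod-2 arithmetic; the helper name encode is arbitrary:

```python
# A sketch of block encoding c = dG over GF(2): codeword bit j is
# the mod-2 sum of the rows a_i of G selected by the data bits d_i.
def encode(d, G):
    n = len(G[0])
    return [sum(d[i] * G[i][j] for i in range(len(d))) % 2
            for j in range(n)]
```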
Linear Block Codes
• Sum (mod 2) of any 2 codewords is
also a codeword, i.e.,
Since for datawords d1 and d2 we have,
  d3 = d1 + d2
So,
  c3 = Σ(i=1..k) d3i ai = Σ(i=1..k) (d1i + d2i) ai
     = Σ(i=1..k) d1i ai + Σ(i=1..k) d2i ai
So,
  c3 = c1 + c2
Linear Block Codes
• 0 is always a codeword, i.e.,
Since all zeros is a dataword then,
  c = Σ(i=1..k) 0 · ai = 0
Error Correcting Power of LBC
• The Hamming distance of a linear block code (LBC)
is simply the minimum Hamming weight (number of
1’s or equivalently the distance from the all 0
codeword) of the non-zero codewords
• Note d(c1,c2) = w(c1+ c2) as shown previously
• For an LBC, c1+ c2=c3
• So min (d(c1,c2)) = min (w(c1+ c2)) = min (w(c3))
• Therefore to find the min Hamming distance we just
need to search among the 2^k codewords to find the
min Hamming weight – far simpler than doing a
pairwise check for all possible codewords.
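A Python sketch (an illustrative addition) of this search, assuming the (7,4) generator whose rows were listed earlier:

```python
# Finding dmin of a linear block code by searching the 2^k codewords
# for the minimum Hamming weight.
from itertools import product

def encode(d, G):
    n = len(G[0])
    return [sum(d[i] * G[i][j] for i in range(len(d))) % 2
            for j in range(n)]

G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

# minimum weight over all non-zero datawords
d_min = min(sum(encode(list(d), G))
            for d in product([0, 1], repeat=4) if any(d))
print(d_min)  # 3
```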
Linear Block Codes – example 1
• For example a (4,2) code, suppose;
  G = [ 1 0 1 1 ]
      [ 0 1 0 1 ]

  a1 = [1 0 1 1]
  a2 = [0 1 0 1]

• For d = [1 1], then;

  c = 1 · [1 0 1 1] + 1 · [0 1 0 1] = [1 1 1 0]
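Running the encode sketch from above on this example reproduces c = [1 1 1 0]:

```python
# Reusing the mod-2 encode sketch on this (4,2) example.
def encode(d, G):
    n = len(G[0])
    return [sum(d[i] * G[i][j] for i in range(len(d))) % 2
            for j in range(n)]

G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
print(encode([1, 1], G))  # [1, 1, 1, 0]
```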
Linear Block Codes – example 2
• A (6,5) code with
  G = [ 1 0 0 0 0 1 ]
      [ 0 1 0 0 0 1 ]
      [ 0 0 1 0 0 1 ]
      [ 0 0 0 1 0 1 ]
      [ 0 0 0 0 1 1 ]
• Is an even single parity code
Systematic Codes
• For a systematic block code the dataword
appears unaltered in the codeword – usually
at the start
• The generator matrix has the structure,
  G = [ 1 0 … 0 p11 p12 … p1R ]
      [ 0 1 … 0 p21 p22 … p2R ] = [ I | P ]
      [ . .   .  .   .     .  ]
      [ 0 0 … 1 pk1 pk2 … pkR ]

  (first k columns: identity I; last R = n - k columns: P)
• P is often referred to as parity bits
Systematic Codes
• I is the k×k identity matrix. Ensures the dataword
appears at the beginning of the codeword
• P is a k×R matrix.
Decoding Linear Codes
• One possibility is a ROM look-up table
• In this case received codeword is used as an address
• Example – Even single parity check code;
Address Data
000000 0
000001 1
000010 1
000011 0
……… .
• Data output is the error flag, i.e., 0 – codeword ok, 1 – error detected
• If no error, dataword is first k bits of codeword
• For an error correcting code the ROM can also store
datawords
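A small Python model (illustrative, not from the slides) of the ROM table for the even single parity check code, with the received word as the address:

```python
# A sketch of ROM look-up decoding for the (6,5) even parity code:
# the received codeword addresses a table whose data is the error flag.
rom = {addr: bin(addr).count('1') % 2 for addr in range(2**6)}

received = 0b000011      # received codeword used as the ROM address
flag = rom[received]     # 0 -> codeword ok, 1 -> error detected
print(flag)              # 0 (even number of ones)
```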
Decoding Linear Codes
• Another possibility is algebraic decoding,
i.e., the error flag is computed from the
received codeword (as in the case of simple
parity codes)
• How can this method be extended to more
complex error detection and correction
codes?
Parity Check Matrix
• A linear block code is a linear subspace Ssub of all
length n vectors (Space S)
• Consider the subset Snull of all length n vectors in
space S that are orthogonal to all length n vectors
in Ssub
• It can be shown that the dimensionality of Snull is
n-k, where n is the dimensionality of S and k is the
dimensionality of Ssub
• It can also be shown that Snull is a valid subspace
of S and consequently Ssub is also the null space
of Snull
Parity Check Matrix
• Snull can be represented by its basis vectors. These
basis vectors form the generator matrix H for Snull,
which has dimension n-k = R
• This matrix is called the parity check matrix of the
code defined by G, where G is obviously the
generator matrix for Ssub- of dimension k
• Note that the number of vectors in the basis
defines the dimension of the subspace
Parity Check Matrix
• So the dimension of H is n-k (= R) and all vectors
in the null space are orthogonal to all the vectors
of the code
• Since the rows of H, namely the vectors bi are
members of the null space they are orthogonal to
any code vector
• So a vector y is a codeword only if yHT=0
• Note that a linear block code can be specified by
either G or H
Parity Check Matrix
• So H is used to check if a codeword is valid,
  H = [ b11 b12 … b1n ]   [ b1 ]
      [ b21 b22 … b2n ] = [ b2 ]
      [  .   .     .  ]   [ .  ]
      [ bR1 bR2 … bRn ]   [ bR ]

  R = n - k
• The rows of H, namely, bi, are chosen to be
orthogonal to rows of G, namely ai
• Consequently the dot product of any valid
codeword with any bi is zero
Parity Check Matrix
• This is so since,
  c = Σ(i=1..k) di ai
and so,
  bj · c = bj · Σ(i=1..k) di ai = Σ(i=1..k) di (ai · bj) = 0
• This means that a codeword is valid (but not
necessarily correct) only if cHT = 0. To ensure this
it is required that the rows of H are independent
and are orthogonal to the rows of G
• That is the bi span the remaining R (= n - k)
dimensions of the codespace
Parity Check Matrix
• For example consider a (3,2) code. In this case G has 2
rows, a1 and a2
• Consequently all valid codewords sit in the subspace (in
this case a plane) spanned by a1 and a2
• In this example the H matrix has only one row, namely
b1. This vector is orthogonal to the plane containing the
rows of the G matrix, i.e., a1 and a2
• Any received codeword which is not in the plane
containing a1 and a2 (i.e., an invalid codeword) will thus
have a component in the direction of b1 yielding a non-
zero dot product between itself and b1
Parity Check Matrix
• Similarly, any received codeword which is
in the plane containing a1 and a2 (i.e., a
valid codeword) will not have a component
in the direction of b1 yielding a zero dot
product between itself and b1
(Diagram: axes c1, c2, c3; the plane spanned by a1 and a2
contains all valid codewords, and b1 is normal to that plane)
Error Syndrome
• For error correcting codes we need a method to
compute the required correction
• To do this we use the Error Syndrome, s of a
received codeword, cr
s = crHT
• If cr is corrupted by the addition of an error vector,
e, then
cr = c + e
and
s = (c + e) HT = cHT + eHT
s = 0 + eHT
Syndrome depends only on the error
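A short Python check (an illustrative addition, assuming the (7,4) H matrix constructed later in this section) that the syndrome depends only on the error pattern:

```python
# A sketch showing s = c*H^T + e*H^T = e*H^T over GF(2): the same
# error pattern added to different codewords gives the same syndrome.
def mat_vec(v, M):
    # v (1 x n) times M (n x R), mod 2
    return [sum(v[i] * M[i][j] for i in range(len(v))) % 2
            for j in range(len(M[0]))]

HT = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1],
      [1, 0, 0], [0, 1, 0], [0, 0, 1]]   # (7,4) H^T, derived below

e = [1, 0, 0, 0, 0, 0, 0]                # error in the first bit
for c in ([0] * 7, [1, 1, 0, 1, 0, 0, 1]):
    cr = [(a + b) % 2 for a, b in zip(c, e)]
    print(mat_vec(cr, HT))               # [0, 1, 1] both times
```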
Error Syndrome
• That is, we can add the same error pattern to
different codewords and get the same syndrome.
– There are 2^(n-k) syndromes but 2^n error patterns
– For example for a (3,2) code there are 2 syndromes and
8 error patterns
– Clearly no error correction possible in this case
– Another example. A (7,4) code has 8 syndromes and
128 error patterns.
– With 8 syndromes we can provide a different value to
indicate single errors in any of the 7 bit positions as
well as the zero value to indicate no errors
• Now need to determine which error pattern caused
the syndrome
Error Syndrome
• For systematic linear block codes, H is
constructed as follows,
G = [ I | P] and so H = [-PT | I]
where I is the k*k identity for G and the
R*R identity for H
• Example, (7,4) code, dmin= 3
  G = [ I | P ] = [ 1 0 0 0 0 1 1 ]
                  [ 0 1 0 0 1 0 1 ]
                  [ 0 0 1 0 1 1 0 ]
                  [ 0 0 0 1 1 1 1 ]

  H^T = [ -P ] = [ 0 1 1 ]
        [  I ]   [ 1 0 1 ]
                 [ 1 1 0 ]
                 [ 1 1 1 ]
                 [ 1 0 0 ]
                 [ 0 1 0 ]
                 [ 0 0 1 ]

  (in GF(2), -P = P)
Error Syndrome - Example
• For a correct received codeword cr = [1101001]
In this case,
  s = cr H^T = [1 1 0 1 0 0 1] × [ 0 1 1 ]
                                 [ 1 0 1 ]
                                 [ 1 1 0 ]
                                 [ 1 1 1 ]  = [0 0 0]
                                 [ 1 0 0 ]
                                 [ 0 1 0 ]
                                 [ 0 0 1 ]
Error Syndrome - Example
• For the same codeword, this time with an
error in the first bit position, i.e.,
cr = [1101000]
  s = cr H^T = [1 1 0 1 0 0 0] × [ 0 1 1 ]
                                 [ 1 0 1 ]
                                 [ 1 1 0 ]
                                 [ 1 1 1 ]  = [0 0 1]
                                 [ 1 0 0 ]
                                 [ 0 1 0 ]
                                 [ 0 0 1 ]
• In this case a syndrome 001 indicates an
error in bit 1 of the codeword
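The two worked examples can be reproduced with a short Python check (an illustrative addition):

```python
# Checking the two worked examples: the correct codeword gives
# syndrome 000, the corrupted one gives 001.
HT = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1],
      [1, 0, 0], [0, 1, 0], [0, 0, 1]]

def syndrome(cr):
    return [sum(cr[i] * HT[i][j] for i in range(7)) % 2
            for j in range(3)]

print(syndrome([1, 1, 0, 1, 0, 0, 1]))  # [0, 0, 0]
print(syndrome([1, 1, 0, 1, 0, 0, 0]))  # [0, 0, 1]
```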
Comments about H
• The minimum distance of the code is equal
to the minimum number of columns (non-
zero) of H which sum to zero
• We can express
  cr H^T = [cr1, cr2, …, crn] × [ d0^T   ]
                                [ d1^T   ]
                                [  .     ]
                                [ dn-1^T ]
         = cr1 d0 + cr2 d1 + … + crn dn-1

where d0, d1, …, dn-1 are the column vectors of H
• Clearly crHT is a linear combination of the
columns of H
Comments about H
• For a codeword with weight w (i.e., w
ones), then crHT is a linear combination of w
columns of H.
• Thus we have a one-to-one mapping
between weight w codewords and linear
combinations of w columns of H
• Thus the min value of w is that which
results in crHT=0, i.e., codeword cr will have
a weight w (w ones) and so dmin = w
Comments about H
• For the example code, a codeword with min
weight (dmin = 3) is given by the first row of
G, i.e., [1000011]
• Now form linear combination of first and
last 2 cols in H, i.e., [011]+[010]+[001] = 0
• So need min of 3 columns (= dmin) to get a
zero value of cHT in this example
Standard Array
• From the standard array we can find the
most likely transmitted codeword given a
particular received codeword without
having to have a look-up table at the
decoder containing all possible codewords
in the standard array
• Not surprisingly it makes use of syndromes
Standard Array
• The Standard Array is constructed as follows,
  c1 (all zero)   c2       ……   cM       s0
  e1              c2+e1    ……   cM+e1    s1
  e2              c2+e2    ……   cM+e2    s2
  e3              c2+e3    ……   cM+e3    s3
  …               ……       ……   ……       …
  eN              c2+eN    ……   cM+eN    sN

  All patterns in a row have the same syndrome;
  different rows have distinct syndromes
• The array has 2^k columns (i.e., equal to the
number of valid codewords) and 2^R rows
(i.e., the number of syndromes)
Standard Array
• The standard array is formed by initially
choosing ei to be,
– All 1 bit error patterns
– All 2 bit error patterns
– ……
• Ensure that each error pattern not already in
the array has a new syndrome. Stop when
all syndromes are used
Standard Array
• Imagine that the received codeword (cr) is c2 + e3
(shown in bold in the standard array)
• The most likely codeword is the one at the head of
the column containing c2 + e3
• The corresponding error pattern is the one at the
beginning of the row containing c2 + e3
• So in theory we could implement a look-up table
(in a ROM) which could map all codewords in the
array to the most likely codeword (i.e., the one at
the head of the column containing the received
codeword)
• This could be quite a large table so a more simple
way is to use syndromes
Standard Array
• This block diagram shows the proposed
implementation
  cr → [Compute syndrome] → s → [Look-up table] → e
  c = cr + e
Standard Array
• For the same received codeword c2 + e3, note that the
unique syndrome is s3
• This syndrome identifies e3 as the corresponding error
pattern
• So if we calculate the syndrome as described
previously, i.e., s = crHT
• All we need to do now is to have a relatively small
table which associates s with their respective error
patterns. In the example s3 will yield e3
• Finally we subtract (or equivalently add in modulo 2
arithmetic) e3 from the received codeword (c2 + e3) to
yield the most likely codeword, c2
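A Python sketch (an illustrative addition) of this syndrome decoder for the (7,4) code, with the single-bit error patterns as the coset leaders:

```python
# A sketch of the proposed decoder: build a small table mapping each
# syndrome to its coset-leader error pattern (here: all single-bit
# errors of the (7,4) code), then correct cr by adding e back on.
HT = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1],
      [1, 0, 0], [0, 1, 0], [0, 0, 1]]

def syndrome(v):
    return tuple(sum(v[i] * HT[i][j] for i in range(7)) % 2
                 for j in range(3))

# syndrome -> error pattern, one entry per single-bit error
table = {}
for pos in range(7):
    e = [0] * 7
    e[pos] = 1
    table[syndrome(e)] = e

def decode(cr):
    s = syndrome(cr)
    e = table.get(s, [0] * 7)         # zero syndrome -> leave as-is
    return [(a + b) % 2 for a, b in zip(cr, e)]

print(decode([1, 1, 0, 1, 0, 0, 0]))  # [1, 1, 0, 1, 0, 0, 1]
```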
Hamming Codes
• We will consider a special class of SEC codes
(i.e., Hamming distance = 3) where,
– Number of parity bits R = n – k and n = 2^R – 1
– Syndrome has R bits
– 0 value implies zero errors
– 2^R – 1 other syndrome values, i.e., one for each
bit that might need to be corrected
– This is achieved if each column of H is a different
binary word – remember s = eHT
Hamming Codes
• Systematic form of (7,4) Hamming code is,
  G = [ I | P ] = [ 1 0 0 0 0 1 1 ]
                  [ 0 1 0 0 1 0 1 ]
                  [ 0 0 1 0 1 1 0 ]
                  [ 0 0 0 1 1 1 1 ]

  H^T = [ -P ] = [ 0 1 1 ]
        [  I ]   [ 1 0 1 ]
                 [ 1 1 0 ]
                 [ 1 1 1 ]
                 [ 1 0 0 ]
                 [ 0 1 0 ]
                 [ 0 0 1 ]

• The original form is non-systematic,

  G = [ 1 1 1 0 0 0 0 ]
      [ 1 0 0 1 1 0 0 ]
      [ 0 1 0 1 0 1 0 ]
      [ 1 1 0 1 0 0 1 ]

  H = [ 0 0 0 1 1 1 1 ]
      [ 0 1 1 0 0 1 1 ]
      [ 1 0 1 0 1 0 1 ]
• Compared with the systematic code, the
column orders of both G and H are swapped
so that the columns of H are a binary count
Hamming Codes
• The column order is now 7, 6, 1, 5, 2, 3, 4,
i.e., col. 1 in the non-systematic H is col. 7
in the systematic H.
Hamming Codes - Example
• For a non-systematic (7,4) code
d = 1011
c = 1110000
+ 0101010
+ 1101001
= 0110011
e = 0010000
cr= 0100011
s = crHT = eHT = 011
• Note the error syndrome is the binary address of
the bit to be corrected
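A Python check (an illustrative addition; bit positions numbered from the left as in the example) of the "syndrome = binary address" property:

```python
# Verifying the example: with the non-systematic H (columns are the
# binary count 1..7), the syndrome is the address of the bit in error.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

cr = [0, 1, 0, 0, 0, 1, 1]                 # c = 0110011 plus e = 0010000
s = [sum(cr[i] * H[j][i] for i in range(7)) % 2 for j in range(3)]
print(s)                                   # [0, 1, 1] -> binary 3
cr[int(''.join(map(str, s)), 2) - 1] ^= 1  # correct bit 3
print(cr)                                  # [0, 1, 1, 0, 0, 1, 1]
```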
Hamming Codes
• Double errors will always result in wrong
bit being corrected, since
– A double error is the sum of 2 single errors
– The resulting syndrome will be the sum of the
corresponding 2 single error syndromes
– This syndrome will correspond with a third
single bit error
– Consequently the ‘corrected’ codeword will
now contain 3 bit errors, i.e., the original
double bit error plus the incorrectly corrected
bit!
Bit Error Rates after Decoding
• For a given channel bit error rate (BER),
what is the BER after correction (assuming
a memoryless channel, i.e., no burst errors)?
• To do this we will compute the probability
of receiving 0, 1, 2, 3, …. errors
• And then compute their effect
Bit Error Rates after Decoding
• Example – A (7,4) Hamming code with a channel
BER of 1%, i.e., p = 0.01
P(0 errors received) = (1 – p)^7 = 0.9321
P(1 error received) = 7p(1 – p)^6 = 0.0659
P(2 errors received) = (7×6/2) p^2 (1 – p)^5 = 0.002
P(3 or more errors) = 1 – P(0) – P(1) – P(2) = 0.000034
Bit Error Rates after Decoding
• Single errors are corrected, so,
0.9321+ 0.0659 = 0.998 codewords are
correctly detected
• Double errors cause 3 bit errors in a 7 bit
codeword, i.e., (3/7)*4 bit errors per 4 bit
dataword, that is 3/7 bit errors per bit.
Therefore the double error contribution is
0.002*3/7 = 0.000856
Bit Error Rates after Decoding
• The contribution of triple or more errors
will be less than 0.000034 (since the worst
that can happen is that every databit
becomes corrupted)
• So the BER after decoding is approximately
0.000856 + 0.000034 = 0.0009 = 0.09%
• This is an improvement over the channel
BER by a factor of about 11
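The calculation can be reproduced with a few lines of Python (an illustrative addition):

```python
# Reproducing the BER-after-decoding estimate for the (7,4) Hamming
# code at channel BER p = 0.01.
from math import comb

p = 0.01
P = [comb(7, k) * p**k * (1 - p)**(7 - k) for k in range(8)]
print(P[0], P[1], P[2])            # approx. 0.9321, 0.0659, 0.0020
p3_plus = 1 - P[0] - P[1] - P[2]   # approx. 0.000034

ber = P[2] * 3 / 7 + p3_plus       # double errors -> 3/7 errors per bit;
print(ber)                         # triple+ bounded by every bit corrupt
# about 0.0009, an 11x improvement on the channel BER
```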
Perfect Codes
• If a codeword has n bits and we wish to
correct up to t errors, how many parity bits
(R) are needed?
• Clearly we need sufficient error syndromes
(2^R of them) to identify all error patterns up
to t errors
– Need 1 syndrome to represent 0 errors
– Need n syndromes to represent all 1 bit errors
– Need n(n-1)/2 syndromes to represent all 2
bit errors
– Need nCe = n!/((n-e)! e!) syndromes to represent
all e bit errors
Perfect Codes
• So,
  2^R ≥ 1 + n                                to correct up to 1 error
  2^R ≥ 1 + n + n(n-1)/2                     to correct up to 2 errors
  2^R ≥ 1 + n + n(n-1)/2 + n(n-1)(n-2)/6     to correct up to 3 errors

If equality then code is Perfect
• Only known perfect codes are SEC Hamming
codes and the TEC Golay (23,12) code (dmin = 7).
Using the previous equation yields

  1 + 23 + 23(23-1)/2 + 23(23-1)(23-2)/6 = 2048 = 2^11 = 2^(23-12)
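A quick Python check (an illustrative addition) of the Hamming bound with equality for these perfect codes:

```python
# Checking the Hamming bound with equality for the perfect codes
# named above: the (7,4) SEC Hamming code and the (23,12) Golay code.
from math import comb

def is_perfect(n, k, t):
    return 2**(n - k) == sum(comb(n, e) for e in range(t + 1))

print(is_perfect(7, 4, 1))    # True: 2^3 = 1 + 7
print(is_perfect(23, 12, 3))  # True: 2^11 = 1 + 23 + 253 + 1771
```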
Summary
• In this section we have
– Used block codes to add redundancy to
messages to control the effects of transmission
errors
– Encoded and decoded messages using
Hamming codes
– Determined overall bit error rates as a function
of the error control strategy