ITW2003, Paris, France, March 31 – April 4, 2003




                  Performance estimation for concatenated coding schemes

                           Simon Huettinger and Johannes Huber
          Chair of Information Transmission, University Erlangen-Nürnberg, Germany
                         e-mail: {huettinger, huber}@LNT.de

   Abstract — Asymptotical analysis of concatenated codes with EXIT charts [tB99] or the AMCA [HH02b] has proven to be a powerful tool for the design of power–efficient communication systems. Usually, however, the result of the asymptotical analysis is only a binary decision: whether or not convergence of iterative decoding is possible at the chosen signal–to–noise ratio.
   In this paper it is shown how to obtain the Information Processing Characteristic (IPC) introduced in [HHJF01] for concatenated coding schemes. If the asymptotical analysis is performed under the assumption of infinite interleaving and infinitely many iterations, this IPC is a lower bound. Furthermore, it is also possible to estimate the performance of realistic coding schemes by restricting the number of iterations.
   Finally, the IPC can be used to estimate the resulting bit error ratio of the concatenated coding scheme. As an upper and a lower bound on the bit error ratio for a given IPC exist, we are able to lower bound the performance of any concatenated coding scheme and to give an achievability bound, i.e. it is possible to determine a performance that can surely be achieved if sufficiently many iterations are performed and a large interleaver is used.

                               I. SYSTEM MODEL

   In the following we analyze the properties of a digital communications system consisting of a binary Bernoulli source, a channel coder, a channel, a decoder and a sink. Without loss of generality we assume that the source emits a block of K binary information symbols U[i], i ∈ {1, 2, ..., K}. The encoder maps the information vector U to a codeword X which consists of N symbols X[n], n ∈ {1, 2, ..., N}. The rate of the code, which is supposed to be time–invariant, is R = K/N, measured in bit per channel symbol. The codeword X is transmitted over a memoryless channel that corrupts the message by substitution errors, e.g., the binary symmetric channel (BSC) or the additive white Gaussian noise (AWGN) channel. Modulator and demodulator are considered as being part of the channel.
   Additionally we introduce a (theoretically infinite) interleaver π∞ before encoding that converts the end–to–end channel between U and V into a memoryless channel.

                           Figure 1: System model.

   The corrupted received sequence Y is processed by the decoder. The decoder output is the soft–output w.r.t. symbol U[i], which is the a–posteriori probability taking the received vector Y and the code constraints into account, i.e.,

        V[i] := Pr(U[i] = 0 | Y).                                        (1)

Estimated symbols Û[i] can be obtained from the vector of soft–output values V.

                II. INFORMATION PROCESSING CHARACTERISTICS

   The Information Processing Characteristic [HHJF01] for symbol–by–symbol decoding and interleaving,

        IPC_I(C) := Ī(U; V) := (1/K) · Σ_{i=1}^{K} I(U[i]; V[i]),        (2)

characterizes a coding scheme w.r.t. soft–output, i.e. IPC_I is the capacity of the memoryless end–to–end channel from U to the soft–output V.
   In [HHFJ02] we proved by information theoretic bounding that the IPC of any coding scheme can be upper bounded by

        IPC_I(C) ≤ min(C/R, 1).                                          (3)

A coding scheme fulfilling (3) with equality is called an ideal coding scheme.
   The IPC_I is important for two reasons. Firstly, the characterization w.r.t. soft–output is very helpful for the analysis and comparison of coding schemes which will be used as components of concatenated codes [HHJF01]. Secondly, the IPC_I can be the result of a convergence analysis performed with EXIT charts [tB99] or the AMCA [HH02b].
   In the following we will show how the IPC_I can be obtained for concatenated coding schemes, and a relationship between the IPC_I and the bit error ratio of the hard–output Û[i] will be derived.

    III. INFORMATION PROCESSING CHARACTERISTIC AS RESULT OF ASYMPTOTICAL ANALYSIS

   To obtain the Information Processing Characteristic for symbol–by–symbol decoding and interleaving, IPC_I(C), we first have to determine the mutual information between the source symbols U and the post-decoding soft–output V of the decoder using EXIT charts or the AMCA. It is possible to obtain both a lower bound, achieved with infinite interleaving and infinitely many iterations, and estimates of the mutual information after an arbitrary number of iterations. As long as the number of iterations is thereby restricted such that the cycles in the graph of the code do not dominate the decoding performance, the result will be close to the bit error performance that can be measured if the whole coding scheme is simulated.
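The upper bound (3) is straightforward to evaluate numerically. A minimal sketch (the function name and the example values are illustrative choices of ours, not taken from the paper):

```python
def ipc_upper_bound(C, R):
    """Upper bound (3) on the IPC of a rate-R coding scheme over a
    channel of capacity C: IPC_I(C) <= min(C/R, 1), in bit per source
    symbol. An ideal coding scheme meets this bound with equality."""
    return min(C / R, 1.0)

# Example for a rate-1/2 code (R = K/N = 0.5):
print(ipc_upper_bound(0.25, 0.5))  # below capacity: C/R = 0.5
print(ipc_upper_bound(0.75, 0.5))  # above capacity: saturates at 1.0
```

For an ideal coding scheme this piecewise-linear curve is the whole story; the IPC of a real scheme lies strictly below it.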
   Results or intermediate results of EXIT charts and the AMCA are the mutual information between the source symbols U (in parallel concatenation) or the encoded symbols X of the outer encoder (in serial concatenation) and the respective extrinsic soft–output at the decoder side, Z resp. Q. The post–decoding information V, which is the final result at the output of an iterative decoder, is created by maximum ratio combining [Bre59] of the extrinsic informations of all constituent decoders on a symbol basis. This can be modelled statistically by information combining [HH02c].
   For serial concatenation we also have to assume systematic encoding of the outer code, to ensure that the post–decoding mutual information w.r.t. the info bits U is the same as the post–decoding mutual information w.r.t. the code bits X of the outer encoder.
   As an example, IPC_I(C) for the serial concatenation of Fig. 2 will be determined in the following.

Figure 2: Encoder for serial concatenation of a rate–1/2 MFD convolutional code of memory ν = 1 with a ν = 2 scrambler (Gr = 07, G = 01).

   As the concatenation is of extremely low complexity, it can be assumed that even in practical implementations the limit of an infinite number of iterations will be closely approximated. Hence, we first determine the intersection points of the transfer characteristics within EXIT charts over the range of signal–to–noise ratios. Then the post–decoding mutual information is calculated using information combining.

Figure 3: EXIT charts for the concatenation of Fig. 2.

   Fig. 3 shows some EXIT charts used to determine the IPC_I(C) for the concatenation of Fig. 2. Circles mark the intersection points of the transfer characteristics. Assuming infinitely many iterations, the decoding process gets stuck exactly at these points. Hence, from the abscissa and ordinate values of these points a lower bound on IPC_I(C) can be obtained. This asymptotical IPC_I(C) is shown in Fig. 4.

Figure 4: IPC_I(C) for the concatenation of Fig. 2, obtained by EXIT charts. For comparison the IPC_I(C) of a ν = 8 convolutional code is also given.

   For a realistic coding scheme, 25 iterations are sufficient to closely approximate this behavior. In every iteration the outer decoder decodes a 2–state trellis of length K, and the inner one visits 4 states in each of the 2K trellis segments. Hence, the decoding complexity then is 25 · (2 + 2 · 4) = 250 visited states per decoded bit, which is approximately the complexity of decoding a memory ν = 8 convolutional code. For comparison, the IPC_I(C) of a ν = 8 convolutional code is also plotted in Fig. 4. This IPC_I(C) is directly obtained via Monte Carlo simulation.
   Obviously, there is a substantial difference in the behavior of the two coding schemes. The concatenation shows the turbo–cliff which is typical for iteratively decoded concatenations, whereas the convolutional code shows a more constant improvement of its output as the channel capacity is raised. Hence, if high mutual information (e.g., I(U; V) > 0.999) is required, which corresponds to a low bit error ratio, the concatenation outperforms the convolutional code of equal complexity.

                IV. BOUNDING BIT ERROR PROBABILITY BY CAPACITY

   Although there is no direct relationship between the probability of bit error and the capacity of a channel, an upper and a lower bound can be given. Furthermore, as channels are known which satisfy these bounds with equality, the bounds are tight.
   We consider memoryless symmetric channels with binary input U ∈ {0, 1} and discrete or continuous output alphabet V. The capacity

        I(U; V) = H(U) − H(U|V)                                          (4)

is achieved by equiprobable signaling, i.e. H(U) = 1.
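The asymptotical analysis described above amounts to tracking the extrinsic mutual information through the two transfer characteristics until it gets stuck at their intersection. A toy sketch of this fixed-point iteration; the two transfer functions below are invented stand-ins (real characteristics are measured for the constituent decoders at each signal–to–noise ratio):

```python
def exit_fixed_point(T_inner, T_outer, max_iter=10000, tol=1e-12):
    """Iterate the extrinsic mutual information through the inner and
    outer decoder transfer characteristics until the decoding process
    gets stuck, i.e. until an intersection point is reached."""
    I = 0.0
    for _ in range(max_iter):
        I_next = T_outer(T_inner(I))
        if abs(I_next - I) < tol:
            return I_next
        I = I_next
    return I

# Invented, monotone transfer characteristics for illustration only:
inner = lambda I: 0.4 + 0.5 * I    # inner decoder (would depend on Es/N0)
outer = lambda I: I ** 0.9         # outer decoder
I_star = exit_fixed_point(inner, outer)
```

The abscissa/ordinate values of such stuck points, collected over the range of signal–to–noise ratios, yield the asymptotical IPC_I(C) curve after information combining.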
   With Fano's inequality [Fan61], which reads

        e2(BER) ≥ H(U|V) = 1 − I(U; V),                                  (5)

we have a lower bound on the probability of error. Here, e2(·) denotes the binary entropy function

        e2(x) := −x log2(x) − (1 − x) log2(1 − x),                       (6)

x ∈ (0, 1); e2^(−1) is its inverse for x ∈ (0, 1/2).
   This minimum bit error ratio is achieved by a Binary Symmetric Channel (BSC). Hence, all channels with a larger output alphabet have no lower probability of error at the same capacity.
   To derive a lower bound on the capacity [HR70] we need the a–posteriori probability of U = 0 having received V = v,

        p = Pr(U = 0 | V = v),                                           (7)

and the hard decision

        Û = 1 for p ≤ 0.5,   Û = 0 for p > 0.5.                          (8)

   There is an equivalent binary symmetric channel from U to Û. The crossover probability of this channel is equal to the a–posteriori probability p, as uniform signaling is assumed.
   Depending on the actually received V = v and the actual hard decision, which deterministically depends on v, a bit error occurs with probability

        Pb = Pr(U = 0 | V = v) for Û = 1,   Pr(U = 1 | V = v) for Û = 0
           = p for Û = 1,   1 − p for Û = 0
           = p for p ≤ 0.5,   1 − p for p > 0.5
           = min[p, 1 − p].                                              (9)

   The channel's bit error ratio BER is given by the expectation over the bit error probability Pb of the actual channels:

        BER = E{Pb} = E{min[p, 1 − p]}.                                  (10)

   (10) is a fundamental result for simulations. If, instead of counting the actually occurred error events during a simulation of transmission, which is the classical method to determine the bit error probability of a coding scheme or a channel, (10) is evaluated for every transmitted symbol, the variance of the estimated bit error ratio is significantly smaller [HLS00]. Furthermore, (10) can be used as a test to determine whether the reliability estimation of an algorithm yields the true a–posteriori probability [HR90]. If the BER determined by the two different methods does not coincide, the soft–output of the investigated algorithm is a suboptimum reliability measure.
   (10) describes two line segments. For any p ∈ [0, 1] the expression min[p, 1 − p] can be upper bounded by (1/2) e2(p), as e2(p) is a strictly convex function and the line segments touch (1/2) e2(p) at p = 0, 1/2, 1, where (1/2) e2(p) = 0, 1/2, 0, see Fig. 5:

        min[p, 1 − p] ≤ (1/2) e2(p).                                     (11)

Figure 5: Graphs of min[p, 1 − p] and (1/2) e2(p).

   For any given channel output V = v the entropy of the binary variable U is given by

        e2(p) = H(U | V = v),                                            (12)

as U = 0 occurs with probability p and U = 1 occurs with probability 1 − p. Hence, the average entropy of U given V can be expressed as the expectation over the binary entropy function of the a–posteriori probability p:

        H(U|V) = E{H(U | V = v)}.                                        (13)

Inserting into (4) yields

        I(U; V) = 1 − H(U|V)
                = 1 − E{H(U | V = v)}
                = 1 − E{e2(p)}
                ≤ 1 − 2 E{min[p, 1 − p]}
                = 1 − 2 · BER.                                           (14)

   Hence, the bit error ratio can be upper bounded by

        BER ≤ (1/2) (1 − I(U; V)),                                       (15)

or equivalently

        I(U; V) ≤ 1 − 2 · BER.                                           (16)

   (16) is satisfied with equality for a Binary Erasure Channel (BEC). On average, an erasure results in half a bit error, ER/2 = BER. Hence, the BEC is the worst case channel, as it maximizes the probability of error for a given capacity.
   Both bounds are shown in Fig. 6. As they directly correspond to a BSC resp. a BEC, they are tight over the whole range I(U; V) ∈ [0, 1].

    V. ESTIMATION OF BIT ERROR PROBABILITY OF CONCATENATED CODING SCHEMES

   After having determined the IPC_I(C) for the concatenations, we are able to lower bound the achievable bit error ratio using (5). Furthermore, it is possible to give an achievability bound by (15). If sufficiently large interleavers are used and sufficiently many iterations are performed, it can be expected
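The two bounds derived above, together with the soft BER estimator (10), are easy to evaluate numerically. A sketch (function names are ours; bisection is one of several possible ways to invert the binary entropy function):

```python
import math

def e2(x):
    """Binary entropy function (6), in bit."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def ber_bounds(I_uv):
    """BER bounds for a binary-input symmetric channel of capacity I(U;V):
    lower bound BER >= e2^(-1)(1 - I) from Fano's inequality (5), tight
    for the BSC; upper bound BER <= (1 - I)/2 from (15), tight for the BEC."""
    target = 1.0 - I_uv
    lo, hi = 0.0, 0.5              # e2 is monotone increasing on [0, 1/2]
    for _ in range(100):           # bisection for e2^(-1)(target)
        mid = 0.5 * (lo + hi)
        if e2(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), 0.5 * (1.0 - I_uv)

def soft_ber(posteriors):
    """Soft BER estimator (10): mean of min[p, 1-p] over the
    a-posteriori probabilities p of the transmitted symbols."""
    return sum(min(p, 1.0 - p) for p in posteriors) / len(posteriors)

lower, upper = ber_bounds(0.999)   # e.g. I(U;V) = 0.999
```

For I(U; V) = 0.999 the upper (BEC) bound is (1 − 0.999)/2 = 5·10^−4, and the lower (BSC) bound is smaller still, so a single capacity value already pins the BER into a narrow corridor.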
Figure 6: Upper and lower bound on the bit error probability of a binary input channel.

that the simulation results for the concatenation lie between the bounds.
   Fig. 7 shows a comparison of bit error performance simulation results with the bounds for the investigated concatenation of Fig. 2. A block length of K = 100000 resp. N = 200000 has been chosen. 25 iterations are sufficient; more iterations would not significantly improve the bit error performance.
interleaving. A block length of K = 100000 is not sufficient to be below the achievability bound for all signal–to–noise ratios.

                               VI. CONCLUSIONS

   The information processing characteristic IPC_I(C) of concatenated coding schemes can be directly obtained from asymptotical analysis. Without simulation of the iterative decoding process, which is quite complex, it is possible to entirely characterize the behavior of a coding scheme.
   As the bit error ratio, which is the important performance measure for applications, can also be approximately determined from the IPC_I(C), this analysis is sufficient to decide whether a coding scheme is appropriate for the intended application.
   For theoretical considerations the IPC_I(C) has further advantages. As it characterizes a coding scheme w.r.t. soft–output and has a scaling that magnifies differences between coding schemes operated below capacity (resulting in bit error ratios close to 50%), it gives much more insight than a bit error ratio curve.

                                REFERENCES

[Bre59] D. G. Brennan. Linear diversity combining techniques. Proceedings of the IRE, vol. 47, pp. 1075–1102, Jun. 1959.

[tB99] S. ten Brink. Convergence of iterative decoding. IEE Electronics Letters, vol. 35, no. 10, pp. 806–808, May 1999.

[tB00b] S. ten Brink. Iterative decoding trajectories of parallel concatenated codes. Proc. 3rd ITG Conference on Source and Channel Coding, pp. 75–80, Munich, Germany, Jan. 2000.

[Fan61] R. M. Fano. Transmission of Information: A Statistical Theory of Communication. John Wiley & Sons, Inc., New York, 1961.

[HR70] M. E. Hellman and J. Raviv. Probability of error, equivocation, and the Chernoff bound. IEEE Transactions on Information Theory, vol. 16, no. 4, pp. 368–372, Jul. 1970.

[HLS00] P. Hoeher, I. Land and U. Sorger. Log–likelihood values and Monte Carlo simulation – some fundamental results. In Proceedings of
 BER




                                                                                             the International Symposium on Turbo Codes, pp. 43–46, Brest, France,
        −3
       10
                                                                                             Sept. 2000.
                                                                                          [HR90] J. B. Huber and A. Rueppel. Zuverl¨ ssigkeitssch¨ tzung f¨ r die Aus-
                                                                                                                                   a             a        u
                                                                                                                                                    ¨
                                                                                             gangssymbole von Trellis–Decodern [in German]. AEU Int. J. Electron.
                     upper bound
                     simulation                                                              Commun., No.1, pp. 8–21, Jan. 1990.
                     lower bound
        −4
       10
                0                     0.5                     1                     1.5   [HHJF01] S. Huettinger, J. B. Huber, R. Johannesson and R. Fischer. In-
                            10 log10 (Eb /Æ0 ) [dB]                                          formation Processing in Soft–Output Decoding. In Proceedings of
                                                                                             39rd Allerton Conference on Communications, Control and Computing,
                                                                                             Oct. 2001.
Figure 7: Comparison of BER obtained via simulation and
                                                                                          [HHFJ02] S. Huettinger, J. Huber, R. Fischer and R. Johannesson. Soft-
estimation from IPC for the concatenation of Fig. 2.                                         Output-Decoding: Some Aspects From Information Theory. In Proceed-
                                                                                             ings of 4. ITG Conference Source and Channel Coding, pp. 81–89, Berlin,
   There are two main observations. Fistly, the prediction                                   Jan. 2002.
of the bit error ratio via asymptotical analysis, IPCI (C) and
                                                                                          [HH02b] S. Huettinger and J. Huber Design of “Multiple–Turbo–Codes”
bounds on the bit error probability of memoryless channels                                   with Transfer Characteristics of Component Codes In Proceedings of
are very close to the bit error ratios observed in simulations,                              Conference on Information Sciences and Systems (CISS ’2002), Prince-
which are much more complex to perform. Secondly, the more                                   ton, Mar. 2002.
the turbo–cliff is pronounced by the concatenation, the closer                            [HH02c] S. Huettinger and J. Huber. Extrinsic and Intrinsic Information in
the bounds become, and hence the technique becomes more                                      Systematic Coding. In Proceedings of International Symposium on In-
                                                                                             formation Theory 2002, Lausanne, Jul. 2002.
valueable for concatenations that are difficult to simulate.
   But, Fig. 7 also shows, that even for constituent codes of                             [WFH99] U. Wachsmann, R. Fischer and J. B. Huber. Multilevel codes:
                                                                                             Theoretical concepts and practical design rules. IEEE Transactions on
small memory, which have a relatively small decoding hori-                                   Information Theory, IT-45: pp. 1361–1391, Jul. 1999.
zon, quite large interleavers are needed to approximate infinite

  • 1. ITW2003, Paris, France, March 31 – April 4, 2003

Performance estimation for concatenated coding schemes

Simon Huettinger and Johannes Huber
Chair of Information Transmission, University Erlangen-Nürnberg, Germany
e-mail: {huettinger,huber}@LNT.de

Abstract — Asymptotical analysis of concatenated codes with EXIT charts [tB99] or the AMCA [HH02b] is proven to be a powerful tool for the design of power-efficient communication systems. But usually the result of the asymptotical analysis is a binary decision whether convergence of iterative decoding is possible at the chosen signal-to-noise ratio, or not. In this paper it is shown how to obtain the Information Processing Characteristic (IPC) introduced in [HHJF01] for concatenated coding schemes. If asymptotical analysis is performed under the assumption of infinite interleaving and infinitely many iterations, this IPC will be a lower bound. Furthermore, it also is possible to estimate the performance of realistic coding schemes by restricting the number of iterations. Finally, the IPC can be used to estimate the resulting bit error ratio for the concatenated coding scheme. As an upper and a lower bound on the bit error ratio for a given IPC exist, we are able to lower bound the performance of any concatenated coding scheme and give an achievability bound, i.e. it is possible to determine a performance that can surely be achieved if sufficiently many iterations are performed and a large interleaver is used.

I. SYSTEM MODEL

In the following we analyze the properties of a digital communications system consisting of a binary Bernoulli source, a channel coder, a channel, a decoder and a sink. Without loss of generality we assume that the source emits a block of K binary information symbols U[i], i ∈ {1, 2, ..., K}. The encoder maps the information vector U to a codeword X which consists of N symbols X[n], n ∈ {1, 2, ..., N}. The rate of the code, which is supposed to be time-invariant, is R = K/N, measured in bit per channel symbol. The codeword X is transmitted over a memoryless channel that corrupts the message by substitution errors, e.g., the binary symmetric channel (BSC) or the additive white Gaussian noise (AWGN) channel. Modulator and demodulator are considered as being part of the channel. Additionally we introduce a (theoretically infinite) interleaver π∞ before encoding that converts the end-to-end channel between U and V into a memoryless channel.

Figure 1: System model.

The corrupted received sequence Y is processed by the decoder. The decoder output is the soft-output w.r.t. symbol U[i], which is the a-posteriori probability taking the received vector Y and the code constraints into account, i.e.,

    V[i] := Pr{U[i] = 0 | Y}.    (1)

Estimated symbols Û[i] can be obtained from the vector of soft-output values V.

II. INFORMATION PROCESSING CHARACTERISTICS

The Information Processing Characteristic [HHJF01] for symbol-by-symbol decoding and interleaving,

    IPCI(C) := Ī(U; V) := (1/K) · sum_{i=1}^{K} I(U[i]; V[i]),    (2)

characterizes a coding scheme w.r.t. soft-output, i.e. IPCI is the capacity of the memoryless end-to-end channel from U to the soft-output V.

In [HHFJ02] we proved by information-theoretic bounding that the IPC of any coding scheme can be upper bounded by

    IPCI(C) ≤ min(C/R, 1).    (3)

A coding scheme fulfilling (3) with equality is called an ideal coding scheme.

The IPCI is important for two reasons. Firstly, the characterization w.r.t. soft-output is very helpful for the analysis and comparison of coding schemes which will be used as components of concatenated codes [HHJF01]. Secondly, the IPCI can be a result of a convergence analysis performed with EXIT charts [tB99] or the AMCA [HH02b]. In the following we will show how the IPCI can be obtained for concatenated coding schemes, and a relationship between the IPCI and the bit error ratio of the hard output Û[i] will be derived.

III. INFORMATION PROCESSING CHARACTERISTIC AS RESULT OF ASYMPTOTICAL ANALYSIS

To obtain the Information Processing Characteristic for symbol-by-symbol decoding and interleaving, IPCI(C), firstly we have to determine the mutual information between the source symbols U and the post-decoding soft-output V of the decoder using EXIT charts or the AMCA. It is possible to obtain both a lower bound, achieved by infinite interleaving and infinitely many iterations, as well as estimations of the mutual information after an arbitrary number of iterations. As long as thereby the number of iterations is restricted such that the cycles in the graph of the code do not dominate the decoding performance, the result will be close to the bit error performance that can be measured if the whole coding scheme is simulated.
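Definition (2) can also be estimated directly by Monte Carlo simulation, as the paper later does for a reference convolutional code. The sketch below is an illustration of that idea, not code from the paper (the helper names `h2` and `mutual_info_from_app` are invented here): for equiprobable input and a decoder that emits true a-posteriori probabilities, H(U|V) equals the expected binary entropy of the soft output, so I(U;V) = 1 − E[h2(V)]. An uncoded BSC stands in for the end-to-end channel, so the estimate should match the BSC capacity.

```python
import math
import random

def h2(p):
    """Binary entropy function in bit."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def mutual_info_from_app(app):
    """Estimate I(U;V) in bit per symbol from a-posteriori
    probabilities V[i] = Pr{U[i] = 0 | Y}, cf. (1) and (2).
    For equiprobable U and true APPs, H(U|V) = E[h2(V)],
    hence I(U;V) = H(U) - H(U|V) = 1 - mean of h2(V)."""
    return 1.0 - sum(h2(v) for v in app) / len(app)

# Toy end-to-end channel: uncoded transmission over a BSC(eps).
# Given a received bit y, the APP is (1 - eps) if y = 0, else eps.
random.seed(42)
eps = 0.1
app = []
for _ in range(10_000):
    u = random.randint(0, 1)
    y = u ^ (random.random() < eps)           # BSC substitution error
    app.append(1.0 - eps if y == 0 else eps)  # Pr{U = 0 | y}

print(mutual_info_from_app(app))  # 1 - h2(0.1), the BSC capacity, ~0.531
```

Because h2(eps) = h2(1 − eps), every sample contributes the same entropy here, so the estimate equals 1 − h2(eps) exactly; for a real iterative decoder the soft outputs vary and the average genuinely has to be estimated.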
  • 2. Results or intermediate results of EXIT charts and the infinitely many iterations, the decoding process gets stuck ex- AMCA are the mutual information between the source sym- actly at these points. Hence, from the abscissa and ordinate bols U in parallel concatenation or the encoded symbols of values of these points a lower bound on the IPCI (C) can be the outer encoder X in serial concatenation and the respective obtained. This asymptotical IPCI (C) is shown in Fig. 4. extrinsic soft–output at the decoder side Z resp. Q. The post– decoding information V , which is the final result at the output 1 −1.5 dB of an iterative decoder, is created by maximum ratio combin- 0.9 ing [Bre59] of the extrinsic informations of all consituent de- coders on a symbol basis. This can be modelled statistically 0.8 by information combining [HH02c]. For serial concatenation we also have to assume systematic 0.7   encoding of the outer code, to ensure that the post–decoding IPCI (C) [bit per source symbol] mutual information w.r.t. info bits U is the same as the post– 0.6 me decoding mutual information w.r.t. code bits X of the outer che 0.5 gs encoder. din l co Exemplary, IPCI (C) for the serial concatenation of Fig. 2 0.4 a ide will be determined in the following. 0.3 −2.5 dB 0.2 10log (E /N )=−3 dB 10 s 0 0.1 Ser. Concat. ν=8 CC Figure 2: Encoder for serial concatenation of a rate–1/2 MFD 0 convolutional code of memory ν = 1 with a ν = 2 scrambler 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 (Gr = 07, G = 01). C [bit per channel symbol]   As the concatenation is of extremely low complexity, it can Figure 4: IPCI (C) for the concatenation of Fig. 2, obtained be assumed that even in practical implementations the limit by EXIT charts. For comparison also the IPCI (C) of a ν = 8 of infinite number of iterations will be closely approximated. convolutional code is given. 
Hence, we first determine the intersection point of the transfer characteristics within EXIT charts for the range of signal-to-noise ratios. Then the post-decoding mutual information is calculated using information combining.

Figure 3: EXIT charts for the concatenation of Fig. 2 (axes: I(U;Z) = I(X;Q) and I(U;E) = I(X;Y), each in bit per symbol; curves for 10 log10(Es/N0) = -1.5, -2.5 and -3 dB).

For a realistic coding scheme 25 iterations are sufficient to closely approximate this behavior. In every iteration the outer decoder decodes a 2-state trellis of length K, and the inner one visits 4 states in each of the 2K trellis segments. Hence, the decoding complexity then is 25 · (2 + 2 · 4) = 250 visited states per decoded bit, which is approximately the complexity of decoding a memory ν = 8 convolutional code. For comparison the IPCI(C) of a ν = 8 convolutional code is also plotted in Fig. 4. This IPCI(C) is directly obtained via Monte Carlo simulation.

Obviously, there is a substantial difference in the behavior of the two coding schemes. The concatenation shows the turbo-cliff, which is typical for iteratively decoded concatenations, whereas the convolutional code shows a more constant improvement of its output as channel capacity is raised. Hence, if high mutual information (e.g., I(U;V) > 0.999) is required, which corresponds to a low bit error ratio, the concatenation outperforms the convolutional code of equal complexity.

IV. BOUNDING BIT ERROR PROBABILITY BY CAPACITY

Although there is no direct relationship between the probability of bit error and the capacity of a channel, an upper and a lower bound can be given. Furthermore, as channels are known which satisfy these bounds with equality, they are tight. We consider memoryless symmetric channels with binary input U ∈ {0, 1} and discrete or continuous output alphabet V.
The capacity

I(U;V) = H(U) − H(U|V)    (4)

is achieved by equiprobable signaling, i.e., H(U) = 1.

Fig. 3 shows some EXIT charts used to determine the IPCI(C) for the concatenation of Fig. 2. Circles mark the intersection points of the transfer characteristics.
With Fano's inequality [Fan61], which reads

e2(BER) ≥ H(U|V) = 1 − I(U;V),    (5)

we have a lower bound on the probability of error. Here, e2(·) denotes the binary entropy function

e2(x) := −x log2(x) − (1 − x) log2(1 − x),  x ∈ (0, 1),    (6)

and e2^(−1) is its inverse for x ∈ (0, 1/2). This minimum bit error ratio is achieved by a Binary Symmetric Channel (BSC). Hence, all channels with a larger output alphabet have no lower probability of error at the same capacity.

To derive a lower bound on the capacity [HR70] we need the a-posteriori probability of U = 0 having received V = v:

p = Pr(U = 0 | V = v)    (7)

and the hard decision

Û = 1 for p ≤ 0.5,  Û = 0 for p > 0.5.    (8)

There is an equivalent binary symmetric channel from U to Û. The crossover probability of this channel is equal to the a-posteriori probability p, as uniform signaling is assumed. Depending on the actually received V = v and the actual hard decision, which deterministically depends on v, a bit error occurs with probability

Pb = Pr(U = 0 | V = v) for Û = 1,  Pr(U = 1 | V = v) for Û = 0
   = p for Û = 1,  1 − p for Û = 0
   = p for p ≤ 0.5,  1 − p for p > 0.5
   = min[p, 1 − p].    (9)

Figure 5: Graphs of min[p, 1 − p] and (1/2) e2(p) over p.

For any given channel output V = v the entropy of the binary variable U is given by

e2(p) = H(U | V = v),    (12)

as U = 0 occurs with probability p and U = 1 occurs with probability 1 − p. Hence, the average entropy of U given V can be expressed as the expectation over the binary entropy function of the a-posteriori probability p:

H(U|V) = E[ H(U | V = v) ].    (13)

Inserting into (4) yields

I(U;V) = 1 − H(U|V)
       = 1 − E[ H(U | V = v) ]
       = 1 − E[ e2(p) ]
       ≤ 1 − 2 E[ min[p, 1 − p] ]
       = 1 − 2 · BER,    (14)

where (11) and the definition of the bit error ratio in (10) have been used.
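Equations (5) and (6) translate directly into code. Since the binary entropy function has no closed-form inverse, the sketch below (our own helper functions, not from the paper) inverts it by bisection on [0, 1/2], where e2 is strictly increasing:

```python
import math

def e2(x):
    """Binary entropy function (6), with e2(0) = e2(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def e2_inv(y):
    """Inverse of e2 on [0, 1/2], found by bisection (e2 is strictly
    increasing on this interval)."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if e2(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def fano_min_ber(mutual_info):
    """Fano lower bound (5): e2(BER) >= 1 - I(U;V), so the bit error
    ratio cannot be smaller than e2_inv(1 - I(U;V))."""
    return e2_inv(1.0 - mutual_info)
```

For instance, a channel with I(U;V) = 0.5 bit per symbol cannot have a bit error ratio below e2_inv(0.5), which a BSC attains with equality.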
Hence, the bit error ratio can be upper bounded by

BER ≤ (1 − I(U;V)) / 2    (15)

or equivalently

I(U;V) ≤ 1 − 2 · BER.    (16)

(16) is satisfied with equality for a Binary Erasure Channel (BEC): on average an erasure results in half a bit error, ER/2 = BER. Hence, the BEC is the worst case channel, as it maximizes the probability of error for a given capacity.

Both bounds are shown in Fig. 6. As they directly correspond to a BSC resp. a BEC, they are tight in the whole range of I(U;V) ∈ [0, 1].

The channel's bit error ratio BER is given by the expectation over the bit error probability Pb of the actual channels:

BER = E[Pb] = E[ min[p, 1 − p] ].    (10)

(10) is a fundamental result for simulations. If, instead of counting the actually occurring error events during a simulation of transmission, which is the classical method to determine the bit error probability of a coding scheme or a channel, (10) is evaluated for every transmitted symbol, the variance of the estimated bit error ratio is significantly smaller [HLS00]. Furthermore, (10) can be used as a test to determine whether the reliability estimation of an algorithm yields the true a-posteriori probability [HR90]: if the BERs determined by the two different methods do not coincide, the soft-output of the investigated algorithm is a suboptimum reliability measure.

min[p, 1 − p] describes two line segments. For any p ∈ [0, 1] it can be upper bounded by (1/2) e2(p), as e2(p) is a strictly concave function and the line segments touch (1/2) e2(p) at p = 0, 1/2, 1, where (1/2) e2(p) = 0, 1/2, 0, see Fig. 5.

V. ESTIMATION OF BIT ERROR PROBABILITY OF CONCATENATED CODING SCHEMES

After having determined the IPCI(C) for the concatenations, we are able to lower bound the achievable bit error ratio using (5). Furthermore, it is possible to give an achievability bound by (15).
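The variance advantage of (10) is easy to reproduce in a toy experiment: uncoded BPSK over an AWGN channel (our own setup, not the scheme of Fig. 2), where the true a-posteriori probability p is available in closed form:

```python
import math
import random

def soft_ber_demo(n=200000, sigma=1.0, seed=1):
    """Compare the classical error-counting BER estimate with the soft
    estimate E[min(p, 1-p)] of (10) on a binary-input AWGN channel."""
    rng = random.Random(seed)
    errors = 0
    soft_sum = 0.0
    for _ in range(n):
        u = rng.randrange(2)              # source bit
        x = 1.0 - 2.0 * u                 # BPSK: 0 -> +1, 1 -> -1
        y = x + rng.gauss(0.0, sigma)     # AWGN
        llr = 2.0 * y / sigma**2          # L = ln Pr(U=0|y)/Pr(U=1|y)
        p = 1.0 / (1.0 + math.exp(-llr))  # p = Pr(U = 0 | V = v), cf. (7)
        u_hat = 0 if p > 0.5 else 1       # hard decision (8)
        errors += (u_hat != u)
        soft_sum += min(p, 1.0 - p)       # soft error probability (9)
    return errors / n, soft_sum / n

hard, soft = soft_ber_demo()
# both estimate the uncoded BPSK error ratio Q(1/sigma), the soft
# estimate with visibly smaller variance across seeds
```

Running the demo with different seeds shows the soft estimate fluctuating far less than the counted one, which is exactly the point made by [HLS00]; a mismatch between the two estimates would indicate suboptimum reliability values.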
min[p, 1 − p] ≤ (1/2) e2(p)    (11)

If sufficiently large interleavers are used and sufficiently many iterations are performed, it can be expected that the simulation results for the concatenation lie between the bounds.
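Both extremal channels can be checked numerically: a BSC with crossover probability eps has I(U;V) = 1 − e2(eps) and BER = eps, meeting the Fano bound (5) with equality, while a BEC with erasure probability eps has I(U;V) = 1 − eps and BER = eps/2, meeting (15) with equality. A small consistency sketch (function names are ours):

```python
import math

def e2(x):
    """Binary entropy function (6)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def check_bounds(eps):
    """Return the gap to the lower bound (5) for a BSC and the gap to
    the upper bound (15) for a BEC; both gaps should vanish."""
    # BSC(eps): capacity 1 - e2(eps), bit error ratio eps
    i_bsc, ber_bsc = 1.0 - e2(eps), eps
    gap_bsc = e2(ber_bsc) - (1.0 - i_bsc)   # (5) with equality -> 0
    # BEC(eps): capacity 1 - eps, an erasure costs half a bit error
    i_bec, ber_bec = 1.0 - eps, eps / 2.0
    gap_bec = (1.0 - i_bec) / 2.0 - ber_bec  # (15) with equality -> 0
    return gap_bsc, gap_bec

# both gaps vanish for any eps in (0, 1/2)
```

Any other binary-input symmetric channel falls strictly between the two bounds, which is why they sandwich the simulated bit error ratios in Fig. 7.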
Figure 6: Upper and lower bound on the bit error probability of a binary input channel (BER over I(U;V) [bit per symbol]).

Fig. 7 shows a comparison of bit error performance simulation results with the bounds for the investigated concatenation of Fig. 2. A block length of K = 100000, i.e., N = 200000, has been chosen. 25 iterations are sufficient; more iterations would not significantly improve the bit error performance. A block length of K = 100000 is, however, not sufficient to be below the achievability bound for all signal-to-noise ratios.

VI. CONCLUSIONS

The information processing characteristic IPCI(C) of concatenated coding schemes can be directly obtained from asymptotical analysis. Without simulation of the iterative decoding process, which is quite complex, it is possible to entirely characterize the behavior of a coding scheme.

As also the bit error ratio, which is the important performance measure for applications, can approximately be determined from the IPCI(C), this analysis is sufficient to decide whether a coding scheme is appropriate for the intended application.

For theoretical considerations the IPCI(C) has further advantages. As it characterizes a coding scheme w.r.t. soft-output, and has a scaling that magnifies differences between coding schemes operated below capacity, resulting in bit error ratios close to 50%, it gives much more insight than a bit error ratio curve.

REFERENCES

[Bre59] D. G. Brennan. Linear diversity combining techniques. In Proceedings of the IRE, vol. 47, pp. 1075–1102, Jun. 1959.

[tB99] S. ten Brink. Convergence of iterative decoding. IEE Electronics Letters, vol. 35, no. 10, pp. 806–808, May 1999.

[tB00b] S. ten Brink. Iterative Decoding Trajectories of Parallel Concatenated Codes. In Proc. of 3rd ITG Conference on Source and Channel Coding, pp. 75–80, Munich, Germany, Jan. 2000.
[Fan61] R. M. Fano. Transmission of Information: A Statistical Theory of Communication. John Wiley & Sons, Inc., New York, 1961.

[HR70] M. E. Hellman and J. Raviv. Probability of Error, Equivocation, and the Chernoff Bound. IEEE Transactions on Information Theory, vol. 16, no. 4, pp. 368–372, Jul. 1970.

[HLS00] P. Hoeher, I. Land and U. Sorger. Log-Likelihood Values and Monte Carlo Simulation – Some Fundamental Results. In Proceedings of the International Symposium on Turbo Codes, pp. 43–46, Brest, France, Sept. 2000.

[HR90] J. B. Huber and A. Rueppel. Zuverlässigkeitsschätzung für die Ausgangssymbole von Trellis-Decodern [Reliability estimation for the output symbols of trellis decoders; in German]. AEÜ Int. J. Electron. Commun., no. 1, pp. 8–21, Jan. 1990.

[HHJF01] S. Huettinger, J. B. Huber, R. Johannesson and R. Fischer. Information Processing in Soft-Output Decoding. In Proceedings of the 39th Allerton Conference on Communications, Control and Computing, Oct. 2001.

[HHFJ02] S. Huettinger, J. Huber, R. Fischer and R. Johannesson. Soft-Output Decoding: Some Aspects from Information Theory. In Proceedings of the 4th ITG Conference on Source and Channel Coding, pp. 81–89, Berlin, Jan. 2002.

[HH02b] S. Huettinger and J. Huber. Design of "Multiple-Turbo-Codes" with Transfer Characteristics of Component Codes. In Proceedings of the Conference on Information Sciences and Systems (CISS 2002), Princeton, Mar. 2002.

Figure 7: Comparison of BER obtained via simulation and estimation from IPC for the concatenation of Fig. 2 (BER over 10 log10(Eb/N0) [dB]).

There are two main observations. Firstly, the predictions of the bit error ratio via asymptotical analysis, IPCI(C) and the bounds on the bit error probability of memoryless channels are very close to the bit error ratios observed in simulations, which are much more complex to perform. Secondly, the more
the turbo-cliff is pronounced by the concatenation, the closer the bounds become, and hence the technique becomes the more valuable for concatenations that are difficult to simulate.

But Fig. 7 also shows that, even for constituent codes of small memory, which have a relatively small decoding horizon, quite large interleavers are needed to approximate infinite interleaving.

[HH02c] S. Huettinger and J. Huber. Extrinsic and Intrinsic Information in Systematic Coding. In Proceedings of the International Symposium on Information Theory 2002, Lausanne, Jul. 2002.

[WFH99] U. Wachsmann, R. Fischer and J. B. Huber. Multilevel codes: Theoretical concepts and practical design rules. IEEE Transactions on Information Theory, vol. 45, pp. 1361–1391, Jul. 1999.