Channel Coding for Quantum Key Distribution

                   Gottfried Lechner

            gottfried.lechner@unisa.edu.au


           Institute for Telecommunications Research
                   University of South Australia


                  July 7, 2011
            HiPANQ Workshop, Vienna



                                                       1 / 26
Outline


   Basics
      Channel Coding
      Slepian-Wolf Coding
      Binning and the Dual Channel
      Linear Block Codes, Syndrome and Rates


   QKD Reconciliation
     System Setup
     Example
     Optimisation


   Conclusions




                                               2 / 26
Channel Coding

Shannon: Communication in the Presence of Noise, 1949

[Scanned excerpt of the paper: Fig. 1 — general communications system; definition of the channel capacity C = lim_{T→∞} (log₂ M)/T, where M is the number of reliably resolvable signals; beginning of Section II, the sampling theorem.]

     transmit data reliably from source to destination
     maximise the code rate Rc such that the probability of error Pe stays small
     the maximum rate is given by the capacity of the channel C

   Channel Coding Theorem [Shannon 1948]
   For any ε > 0 and Rc < C, for large enough N, there exists a code of length N
   and rate Rc and a decoding algorithm, such that the maximal probability of
   block error is less than ε.

                                                                                   3 / 26
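The deck later specialises to the binary symmetric channel (BSC), for which the capacity appearing in the theorem is the standard expression C = 1 − h(p); this formula is textbook knowledge rather than something stated on the slide, and the crossover probability below is a hypothetical value. A minimal numerical check in Python (numpy assumed):

import numpy as np

def bsc_capacity(p):
    """Capacity of a binary symmetric channel: C = 1 - h(p) bits per channel use."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return 1.0 - h

print(bsc_capacity(0.05))   # hypothetical crossover probability; C is roughly 0.71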
Channel Coding

   Typical Approach
     choose a family of channels with a single parameter (e.g., AWGN, BSC, BEC, ...)
     fix a code rate
     optimise the code such that it achieves vanishing error probability close to capacity

[Figure: bit error rate versus Eb/N0 for an irregular code under several decoders — SPA, MSA, MSA with variable scaling, MSA with fixed scaling 0.60 and 0.70, MSA universal; taken from a chapter "Simplifying Component Decoders", embedded caption: "Figure 4.10: Bit error rates for irregular code with and without post-processing."]

                                                                                   4 / 26
Slepian-Wolf Coding

[Scanned excerpt: D. Slepian and J. K. Wolf, "Noiseless Coding of Correlated Information Sources," 1973 — Fig. 1: correlated source coding configuration; Fig. 2: admissible rate region in the Rx-Ry plane, with H(X|Y), H(X) and H(X,Y) marked on the Rx axis.]

     transmit two correlated sources over two noiseless channels
     joint encoding and decoding: H(X, Y)
     separate encoding and decoding: H(X) + H(Y) ≥ H(X, Y)

   Slepian-Wolf Theorem (1973)
   The admissible rate region is given by the rate pairs satisfying

                        Rx ≥ H(X|Y)
                        Ry ≥ H(Y|X)
                   Rx + Ry ≥ H(X, Y)

     There is no penalty if X and Y are encoded separately!

                                                                                   5 / 26
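As a concrete illustration of the corner point, consider the simple case used later in the deck: X uniform binary and Y obtained from X through a BSC. Then H(X|Y) = h(p), the binary entropy of the crossover probability. The numbers below are a minimal sketch in Python (numpy assumed); the crossover probability is a hypothetical value.

import numpy as np

def h2(p):
    """Binary entropy in bits."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

p = 0.05                      # hypothetical crossover probability of the correlating BSC

H_X_given_Y = h2(p)           # corner point: Rx >= H(X|Y) = h(p) for uniform X
H_XY = 1.0 + h2(p)            # H(X,Y) = H(Y) + H(X|Y), with H(Y) = 1 bit

print(f"Rx >= {H_X_given_Y:.3f} bits/symbol instead of H(X) = 1 bit")
print(f"sum-rate bound: Rx + Ry >= {H_XY:.3f} bits/symbol")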
Slepian-Wolf Coding

[Scanned excerpt: Figs. 1 and 2 of Slepian and Wolf — the correlated source coding configuration and the admissible rate region in the Rx-Ry plane.]

     assume that Y is transmitted at H(Y)
     we operate at a corner point of the Slepian-Wolf region
     for this corner point we can use the syndrome of a channel code as a
     binning scheme

                                                                                   6 / 26
Binning with Syndrome




[Figure: sequences grouped into bins — Bin 1, Bin 2, Bin 3.]



     encoding of X can be done by random binning
     the syndrome of a linear code is used for binning




                                                         7 / 26
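To make the binning concrete, here is a minimal sketch in Python (numpy assumed). The small parity-check matrix is hypothetical; the point is that the bin index of a word x is its syndrome s = xH^T over GF(2), so all 2^(N−M) words with the same syndrome share a bin.

from itertools import product

import numpy as np

# Hypothetical small parity-check matrix: M = 3 checks, N = 6 bits.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def syndrome(x):
    """Bin index of the word x: its syndrome s = x H^T over GF(2)."""
    return tuple((x @ H.T) % 2)

# Every length-6 word falls into one of 2^M = 8 bins of 2^(N-M) = 8 words each;
# the bin with the all-zero syndrome is the code itself.
bins = {}
for x in product([0, 1], repeat=6):
    bins.setdefault(syndrome(np.array(x, dtype=np.uint8)), []).append(x)

print(len(bins), "bins with sizes", sorted({len(v) for v in bins.values()}))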
Dual Channel

  Correlated sources:
      assume sources X and Y with P(X, Y) = P(X)P(Y|X)
      generate X according to P(X)
      transmit X over the channel P(Y|X) to obtain Y


  What is the channel that is seen by the channel decoder?
      in general it is the dual channel, which is equal to neither P(Y|X)
      nor P(X|Y)
      the channel seen by the decoder is always a symmetric channel
      with uniform input
      therefore, linear codes can be used
      for the simple case of two binary sources correlated via a BSC all
      these channels are the same



                                                                           8 / 26
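For the BSC case mentioned in the last bullet, the following sketch (Python, numpy assumed) generates the correlated pair and the channel log-likelihood ratios a Slepian-Wolf decoder would start from; the crossover probability and block length are hypothetical values.

import numpy as np

rng = np.random.default_rng(0)
p, N = 0.05, 10_000                  # hypothetical BSC crossover probability and block length

x = rng.integers(0, 2, N)            # Alice's bits (uniform input)
e = (rng.random(N) < p).astype(int)  # BSC "errors" modelling the correlation
y = x ^ e                            # Bob's observation

# For this symmetric case the decoder input is the usual BSC log-likelihood ratio:
# L_i = log P(x_i = 0 | y_i) / P(x_i = 1 | y_i) = (1 - 2*y_i) * log((1 - p) / p)
llr = (1 - 2 * y) * np.log((1 - p) / p)

print("empirical crossover:", float((x != y).mean()))
print("LLR magnitude:", float(np.log((1 - p) / p)))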
Linear Block Codes, Syndrome and Rates

   code defined by an M × N parity-check matrix H:

              C  = { x ∈ {0,1}^N : x H^T = 0 },        Rc = (N − M)/N = 1 − M/N

   coset with syndrome s:

              Cs = { x ∈ {0,1}^N : x H^T = s },        Rs = M/N = 1 − Rc

   efficiency parameter:

              f = M / (N · H(X|Y)) = Rs / H(X|Y)

                                                                                   9 / 26
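A small numeric illustration of these rate definitions in Python; the code dimensions and the BSC crossover probability are hypothetical, and the comment on f ≥ 1 simply restates the Slepian-Wolf bound Rs ≥ H(X|Y) from the earlier slide.

import numpy as np

def h2(p):
    """Binary entropy in bits."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

N, M = 2000, 600          # hypothetical code dimensions
p = 0.05                  # hypothetical BSC crossover probability, so H(X|Y) = h(p)

Rc = (N - M) / N          # code rate
Rs = M / N                # syndrome (compression) rate, Rs = 1 - Rc
f = Rs / h2(p)            # efficiency; Slepian-Wolf requires Rs >= H(X|Y), i.e. f >= 1

print(f"Rc = {Rc:.3f}, Rs = {Rs:.3f}, H(X|Y) = {h2(p):.3f}, f = {f:.3f}")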
LDPC Codes

         ⎡ 0 0 0 0 0 0 0 0   1 0 1 0 1 0 1 0 ⎤
         ⎢ 0 0 0 0 0 0 1 1   0 1 0 1 0 0 0 0 ⎥
         ⎢ 0 1 1 0 0 0 0 1   0 0 1 0 0 0 0 0 ⎥
         ⎢ 1 1 0 1 0 0 0 0   0 0 0 0 0 1 0 0 ⎥
   H  =  ⎢ 1 0 0 0 0 1 0 0   0 0 0 1 0 0 1 0 ⎥
         ⎢ 0 0 0 0 1 0 0 0   0 1 0 0 1 1 0 0 ⎥
         ⎢ 0 0 0 1 1 0 0 0   1 0 0 0 0 0 0 1 ⎥
         ⎣ 0 0 1 0 0 1 1 0   0 0 0 0 0 0 0 1 ⎦

   N columns (variable nodes), M rows (check nodes)
   dv: column weight (variable-node degree), dc: row weight (check-node degree)

                                                                                  10 / 26
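A sketch of how such a matrix might be inspected in software (Python, numpy assumed); the matrix is copied from the slide, and the degree computation just makes the dv/dc annotations explicit.

import numpy as np

# The parity-check matrix from the slide: M = 8 checks, N = 16 variables.
H = np.array([
    [0,0,0,0,0,0,0,0, 1,0,1,0,1,0,1,0],
    [0,0,0,0,0,0,1,1, 0,1,0,1,0,0,0,0],
    [0,1,1,0,0,0,0,1, 0,0,1,0,0,0,0,0],
    [1,1,0,1,0,0,0,0, 0,0,0,0,0,1,0,0],
    [1,0,0,0,0,1,0,0, 0,0,0,1,0,0,1,0],
    [0,0,0,0,1,0,0,0, 0,1,0,0,1,1,0,0],
    [0,0,0,1,1,0,0,0, 1,0,0,0,0,0,0,1],
    [0,0,1,0,0,1,1,0, 0,0,0,0,0,0,0,1],
], dtype=np.uint8)

M, N = H.shape
dv = H.sum(axis=0)    # variable-node degrees (column weights)
dc = H.sum(axis=1)    # check-node degrees (row weights)

print(f"N = {N}, M = {M}, code rate Rc = {(N - M) / N:.2f}")
print("dv values:", sorted(set(dv.tolist())), "dc values:", sorted(set(dc.tolist())))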
Outline


   Basics
      Channel Coding
      Slepian-Wolf Coding
      Binning and the Dual Channel
      Linear Block Codes, Syndrome and Rates


   QKD Reconciliation
     System Setup
     Example
     Optimisation


   Conclusions




                                               11 / 26
Quantum Key Distribution



[Diagram: Alice and Bob connected by a public channel and by a quantum channel; the quantum channel gives Alice the observation X and Bob the observation Y.]




     Alice and Bob generate a common key
     they communicate via a quantum channel and a public channel
     Eve attempts to gain knowledge of the key
                                                                   12 / 26
System Setup


[Diagram: Alice's observation X is fed to an encoder whose output is sent over the public channel to Bob; Bob also observes Y.]




     the quantum channel creates a correlated source
     Alice observes X and Bob observes Y
     Alice has to communicate at least H(X|Y) over the public channel
     this corresponds to the corner point of the Slepian-Wolf region




                                                                        13 / 26
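Putting the pieces so far together, here is a minimal end-to-end sketch of syndrome-based reconciliation in Python (numpy assumed). The parity-check matrix, crossover probability and block length are hypothetical, and the decoder is left as a stub because the deck discusses several possible choices for it.

import numpy as np

rng = np.random.default_rng(0)

def reconcile(x_alice, y_bob, H, decode):
    """Alice sends s = x H^T over the public channel; Bob decodes x from (y, s)."""
    s = (x_alice @ H.T) % 2                           # Alice's only transmission: M syndrome bits
    x_hat = decode(y_bob, s, H)                       # Bob runs a syndrome (Slepian-Wolf) decoder
    ok = bool(np.array_equal((x_hat @ H.T) % 2, s))   # check; on failure the key is discarded
    return x_hat, ok

# Toy instantiation with hypothetical parameters and a stub decoder.
H = rng.integers(0, 2, size=(4, 12)).astype(np.uint8)
p = 0.03
x = rng.integers(0, 2, 12).astype(np.uint8)
y = x ^ (rng.random(12) < p).astype(np.uint8)

def stub_decoder(y, s, H):
    # Placeholder: a real system would run an LDPC message-passing decoder here.
    return y

x_hat, ok = reconcile(x, y, H, stub_decoder)
print("syndromes match:", ok)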
Aims



  Aims of QKD:
       Alice and Bob want to create a common key
       the goal is to maximise the key generation rate
       this does not necessarily require error free communication (as
       long as the errors are detectable)
       the key generation rate can be limited by
           the quantum channel (quantum source)
           the data rate over the public channel
           the processing capabilities of Bob




                                                                        14 / 26
Example

[Figure: word error rate (WER), key rate and decoding time plotted against the syndrome rate rs (0.15 to 0.45), for two decoding algorithms (Algorithm 1, left; Algorithm 2, right).]

                                                                                  15 / 26
Optimisation Problem

     maximum achievable key rate:   rk,max = fk(rs, pX,Y)
     word error probability:        pe = fe(rs, pX,Y, A)
     decoding complexity:           td = ft(rs, pX,Y, A)

   Optimisation Problem

              rk = max { rk,max · (1 − pe) }

              subject to td < td,max

   where the maximisation is taken over
     0 < rs < 0.5
     all decoding algorithms A
                                                                                  16 / 26
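One way to carry out this optimisation is a plain grid search over the syndrome rate and a small set of decoders. The Python sketch below is only an illustration of the search structure: the functions f_k, f_e and f_t are hypothetical stand-ins (the deck does not fix their form), the dependence on pX,Y is suppressed, and in practice pe and td would be measured by simulation for each (rate, decoder) pair.

import itertools

# Hypothetical stand-ins for fk, fe and ft.
def f_k(rs):       return max(0.0, 0.5 - rs)                      # toy key-rate model
def f_e(rs, alg):  return 0.5 if rs < 0.25 else {"SPA": 0.01, "MSA": 0.05}[alg]
def f_t(rs, alg):  return {"SPA": 3.0, "MSA": 1.0}[alg]           # relative decoding time

t_d_max = 2.0
best = None
for rs, alg in itertools.product((x / 100 for x in range(16, 50)), ["SPA", "MSA"]):
    if f_t(rs, alg) >= t_d_max:                 # complexity constraint td < td,max
        continue
    rk = f_k(rs) * (1.0 - f_e(rs, alg))         # objective rk = rk,max * (1 - pe)
    if best is None or rk > best[0]:
        best = (rk, rs, alg)

print("best (rk, rs, algorithm):", best)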
Optimisation


  The decoding algorithm can either be
      fixed
      chosen from a fixed set of algorithms
      adaptively changed during the decoding process (e.g., gear-shift
      decoding)



  The coding rate can either be
      fixed
      chosen from a fixed set of rates (rate-compatible codes)
      adaptively changed during the decoding process (rateless codes)




                                                                         17 / 26
Message-Passing Decoders
[Figure: factor-graph segment showing a variable node with channel message Lch and the messages Lvc / Lcv exchanged with its check nodes.]

     Sum-Product Algorithm (SPA)

        Lvc,i = Lch + Σ_{j=1, j≠i}^{dv} Lcv,j        Lcv,i = 2 tanh⁻¹( Π_{j=1, j≠i}^{dc} tanh(Lvc,j / 2) )

     Min-Sum Algorithm (MSA)

        Lvc,i = Lch + Σ_{j=1, j≠i}^{dv} Lcv,j        Lcv,i = α · min_{j≠i} |Lvc,j| · Π_{j=1, j≠i}^{dc} sign(Lvc,j)

     Binary Message-Passing (BMP)

        mvc,i = majority(mch, mcv,j)                 mcv,i = xor_{j≠i} mvc,j



                                                                                             18 / 26
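A compact sketch of the two check-node rules above (Python, numpy assumed), operating on the incoming variable-to-check messages of a single check node; the scaling factor α for the min-sum rule is a hypothetical value.

import numpy as np

def spa_check_update(L_vc):
    """Sum-product check-node rule: L_cv,i = 2 atanh( prod_{j != i} tanh(L_vc,j / 2) )."""
    t = np.tanh(L_vc / 2.0)
    out = np.empty_like(L_vc)
    for i in range(len(L_vc)):
        prod = np.prod(np.delete(t, i))
        out[i] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return out

def msa_check_update(L_vc, alpha=0.8):
    """Scaled min-sum rule: alpha * min_{j != i} |L_vc,j| * prod_{j != i} sign(L_vc,j)."""
    out = np.empty_like(L_vc)
    for i in range(len(L_vc)):
        rest = np.delete(L_vc, i)
        out[i] = alpha * np.min(np.abs(rest)) * np.prod(np.sign(rest))
    return out

L_vc = np.array([+2.1, -0.4, +1.3, -3.0])    # example incoming messages at a degree-4 check node
print(spa_check_update(L_vc))
print(msa_check_update(L_vc))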
Gear-Shift Decoding
[Scanned excerpt: Fig. 2 of Ardakani and Kschischang — a simple gear-shifting trellis of size six with three algorithms; some vertices have fewer than three outgoing edges, which happens when an algorithm has a closed EXIT chart at that message-error rate, or when two algorithms result in a parallel edge (only the lower-complexity algorithm is retained). Every gear-shifting sequence corresponds to a path through the trellis.]

  from Ardakani and Kschischang, "Gear-shift decoding," IEEE Trans. Commun., 2006
                                                                                             19 / 26
Fixed Rate vs Rateless




     if the error rate on the quantum channel is known and the block length is
     large: transmit the syndrome over the public channel and discard the key if
     decoding is not successful (one bit of feedback)
     if the error rate on the quantum channel varies: too little data on the
     public channel leads to a high error rate, while too much data reduces the
     key rate — see the sketch below




                                                                      20 / 26
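The last two bullets motivate rate-adaptive or rateless operation, as listed on the earlier Optimisation slide. The Python sketch below is a hypothetical illustration of one such incremental loop — the chunked reveal of syndrome bits, the step size and the decoder interface are all assumptions, not a scheme taken from the deck.

import numpy as np

def incremental_reconcile(x_alice, y_bob, H, decode, step=64):
    """Hypothetical rate-adaptive loop: reveal syndrome bits in chunks until Bob's
    decoder succeeds; the fewer bits revealed, the higher the remaining key rate."""
    s_full = (x_alice @ H.T) % 2
    revealed = 0
    while revealed < len(s_full):
        revealed = min(revealed + step, len(s_full))
        H_part, s_part = H[:revealed], s_full[:revealed]
        x_hat = decode(y_bob, s_part, H_part)            # decode with the checks revealed so far
        if x_hat is not None and np.array_equal((x_hat @ H_part.T) % 2, s_part):
            return x_hat, revealed
    return None, revealed        # failure even at full rate: discard the key (one bit feedback)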
Literature


   Information Theory




       David Slepian and Jack K Wolf.
       Noiseless coding of correlated information sources.
       IEEE Transactions on Information Theory, 19(4):471 – 480, 1973.
       Aaron D Wyner.
       Recent results in the Shannon theory.
       IEEE Transactions on Information Theory, 20(1):2 – 10, 1974.
       Jun Chen, Da-ke He, and Ashish Jagmohan.
       On the duality between Slepian–Wolf coding and channel coding under mismatched
       decoding.
       IEEE Transactions on Information Theory, 55(9):4006 – 4018, 2009.




                                                                                        21 / 26
Literature

   Coding



      Robert G Gallager.
      Low-density parity-check codes.
      IEEE Transactions on Information Theory, 8(1):21 – 28, 1962.
      Michael Luby.
      LT codes.
      In IEEE Symposium on Foundations of Computer Science, 2002, pages 271 – 280, 2002.
      Amin Shokrollahi.
      Raptor codes.
      IEEE Transactions on Information Theory, 52(6):2551 – 2567, 2006.
      T. Richardson and R. Urbanke.
      Modern Coding Theory.
      Cambridge University Press, 2008.




                                                                                           22 / 26
Literature

   QKD Basics


      Gilles Brassard and Louis Salvail.
      Secret-key reconciliation by public discussion.
      In Advances in Cryptology EUROCRYPT’93, pages 410–423, 1994.
      Tomohiro Sugimoto and Kouichi Yamazaki.
      A study on secret key reconciliation protocol ”cascade”.
      In IEICE Transactions on Fundamentals of Electronics, Communications and Computer
      Sciences, volume E83-A, pages 1987–1991, 2000.
      W T Buttler, S K Lamoreaux, J R Torgerson, G H Nickel, C H Donahue, and C G Peterson.
      Fast, efficient error reconciliation for quantum cryptography.
      arXiv, quant-ph, 2002.
      Hao Yan, Xiang Peng, Xiaxiang Lin, Wei Jiang, Tian Liu, and Hong Guo.
      Efficiency of Winnow protocol in secret key reconciliation.
      In Computer Science and Information Engineering, 2009 WRI World Congress on, volume 3,
      pages 238 – 242, 2009.




                                                                                               23 / 26
Literature
   Coding for QKD (non-exhaustive)
       David Elkouss, Anthony Leverrier, Romain Alleaume, and Joseph J Boutros.
       Efficient reconciliation protocol for discrete-variable quantum key distribution.
       In International Symposium on Information Theory, pages 1879–1883, 2009.
       David Elkouss, Jesus Martinez-Mateo, Daniel Lancho, and Vicente Martin.
       Rate compatible protocol for information reconciliation: An application to QKD.
       In Information Theory Workshop (ITW), 2010 IEEE, pages 1 – 5, 2010.
       David Elkouss, Jesus Martinez-Mateo, and Vicente Martin.
       Efficient reconciliation with rate adaptive codes in quantum key distribution.
       arXiv, quant-ph, 2010.
       David Elkouss, Jesus Martinez-Mateo, and Vicente Martin.
       Secure rate-adaptive reconciliation.
       In Information Theory and its Applications (ISITA), 2010 International Symposium on, pages
       179 – 184, 2010.
       Kenta Kasai, Ryutaroh Matsumoto, and Kohichi Sakaniwa.
       Information reconciliation for QKD with rate-compatible non-binary LDPC codes.
       In Information Theory and its Applications (ISITA), 2010 International Symposium on, pages
       922 – 927, 2010.
       Jesus Martinez-Mateo, David Elkouss, and Vicente Martin.
       Interactive reconciliation with low-density parity-check codes.
       In Turbo Codes and Iterative Information Processing (ISTC), 2010 6th International
       Symposium on, pages 270 – 274, 2010.

                                                                                                    24 / 26
Literature
   Implementation (non-exhaustive)

       Chip Elliott, Alexander Colvin, David Pearson, Oleksiy Pikalo, John Schlafer, and Henry Yeh.
       Current status of the DARPA quantum network.
       arXiv, 2005.
       Jerome Lodewyck, Matthieu Bloch, Raul Garcia-Patron, Simon Fossier, Evgueni Karpov,
       Eleni Diamanti, Thierry Debuisschert, Nicolas J Cerf, Rosa Tualle-Brouri, Steven W
       McLaughlin, and Philippe Grangier.
       Quantum key distribution over 25 km with an all-fiber continuous-variable system.
       arXiv, quant-ph, 2007.
       Simon Fossier, Eleni Diamanti, Thierry Debuisschert, André Villing, Rosa Tualle-Brouri, and
       Philippe Grangier.
       Field test of a continuous-variable quantum key distribution prototype.
       arXiv, quant-ph, 2008.
       Simon Fossier, J Lodewyck, Eleni Diamanti, Matthieu Bloch, Thierry Debuisschert, Rosa
       Tualle-Brouri, and Philippe Grangier.
       Quantum key distribution over 25 km, using a fiber setup based on continuous variables.
       In Lasers and Electro-Optics, 2008 and 2008 Conference on Quantum Electronics and Laser
       Science. CLEO/QELS 2008, pages 1 – 2, 2008.
       A Dixon, Z Yuan, J Dynes, A Sharpe, and Andrew Shields.
       Megabit per second quantum key distribution using practical InGaAs APDs.
       In Lasers and Electro-Optics, 2009 and 2009 Conference on Quantum Electronics and Laser
       Science. CLEO/QELS 2009, pages 1 – 2, 2009.


                                                                                                      25 / 26
Conclusions



     Reconciliation for QKD is a Slepian-Wolf coding problem (operated at a
     corner point of the rate region)
     linear codes are sufficient for the optimal solution
     maximising the key rate is not necessarily equivalent to
     minimising the error rate
     complexity constraints may lead to a non-trivial optimisation
     problem to find the best codes and decoding algorithms
     rate-adaptive or rateless schemes might be necessary in cases
     where the error rate on the quantum channel is unknown




                                                                     26 / 26

 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 

Channel coding for quantum key distribution

  • 1. Channel Coding for Quantum Key Distribution Gottfried Lechner gottfried.lechner@unisa.edu.au Institute for Telecommunications Research University of South Australia July 7, 2011 HiPANQ Workshop, Vienna 1 / 26
  • 2. Outline Basics Channel Coding Slepian-Wolf Coding Binning and the Dual Channel Linear Block Codes, Syndrome and Rates QKD Reconciliation System Setup Example Optimisation Conclusions 2 / 26
  • 3. Channel Coding
       Shannon: Communication in the Presence of Noise (1949)
       - transmit data reliably from source to destination
       - maximise the code rate Rc subject to a small probability of error
       [Background: scan of Shannon's 1949 paper, including Fig. 1, "General communications system"] 3 / 26
  • 4. Channel Coding
       Shannon: Communication in the Presence of Noise (1949)
       - transmit data reliably from source to destination
       - maximise the code rate Rc subject to a small probability of error
       Channel Coding Theorem [Shannon 1948]: For any ε > 0 and Rc < C, for large enough N, there exists a code of length N and rate Rc and a decoding algorithm, such that the maximal probability of block error is less than ε.
       [Background: scan of Shannon's 1949 paper, Fig. 1, "General communications system"] 3 / 26
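
As a quick numerical illustration of the theorem (a minimal sketch, not part of the slides): for a binary symmetric channel the capacity is C = 1 − h(p), and any rate Rc < C is achievable; the values of p and Rc below are arbitrary examples.

import math

def binary_entropy(p: float) -> float:
    # h(p) = -p*log2(p) - (1-p)*log2(1-p), with h(0) = h(1) = 0
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    # capacity of a binary symmetric channel with crossover probability p
    return 1.0 - binary_entropy(p)

p = 0.05   # example crossover probability (assumed value)
Rc = 0.7   # example code rate (assumed value)
C = bsc_capacity(p)
print(f"C = {C:.4f}, Rc = {Rc}: achievable = {Rc < C}")
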
  • 5. Typical Approach
       - choose a family of channels with a single parameter (e.g., AWGN, BSC, BEC, ...)
       - fix a code rate
       - optimise the code such that it achieves vanishing error probability close to capacity
       [Figure: bit error rate vs Eb/N0 for an irregular code, comparing SPA with MSA variants (variable scaling, fixed scaling 0.60 / 0.70, universal)] 4 / 26
  • 6. Slepian-Wolf Coding
       - transmit two correlated sources over two noiseless channels
       - joint encoding and decoding: H(X, Y)
       - separate encoding and decoding: H(X) + H(Y) ≥ H(X, Y)
       [Background: scan of Slepian and Wolf (1973): Fig. 1, "Correlated source coding configuration", and Fig. 2, "Admissible rate region"] 5 / 26
  • 7. Slepian-Wolf Coding
       - transmit two correlated sources over two noiseless channels
       - joint encoding and decoding: H(X, Y)
       - separate encoding and decoding: H(X) + H(Y) ≥ H(X, Y)
       Slepian-Wolf Theorem (1973): The admissible rate region is given by the rate pairs satisfying
           Rx ≥ H(X|Y),   Ry ≥ H(Y|X),   Rx + Ry ≥ H(X, Y)
       There is no penalty if X and Y are encoded separately!
       [Background: scan of Slepian and Wolf (1973), Fig. 1 and Fig. 2] 5 / 26
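
For the standard binary example, where X is a uniform bit and Y is X observed through a binary symmetric channel with crossover probability p, the corner-point rates of the theorem can be evaluated directly. A minimal Python sketch (the value of p is an assumption for illustration):

import math

def h2(p: float) -> float:
    # binary entropy function in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.08                  # assumed crossover probability of the BSC correlation
H_Y = 1.0                 # Y is uniform, so H(Y) = 1 bit
H_X_given_Y = h2(p)       # H(X|Y) = h(p) for this model
H_XY = H_Y + H_X_given_Y  # H(X, Y) = H(Y) + H(X|Y)

print(f"corner point: Ry = H(Y) = {H_Y}, Rx = H(X|Y) = {H_X_given_Y:.4f}")
print(f"sum-rate bound: Rx + Ry >= H(X, Y) = {H_XY:.4f}")
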
  • 8. Slepian-Wolf Coding
       [Slide shows the scanned figures of Slepian and Wolf (1973): Fig. 1, the correlated source coding configuration, and Fig. 2, the admissible rate region] 6 / 26
  • 9. Slepian-Wolf Coding
       - assume that Y is transmitted at H(Y)
       - we operate at a corner point of the Slepian-Wolf region
       - for this corner point we can use the syndrome of a channel code as a binning scheme
       [Background: Fig. 2 of Slepian and Wolf (1973), the admissible rate region] 6 / 26
  • 10. Binning with Syndrome
       [Figure: the set of sequences partitioned into Bin 1, Bin 2, Bin 3]
       - encoding of X can be done by random binning
       - the syndrome of a linear code is used for binning
       7 / 26
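
A minimal sketch of syndrome binning; the parity-check matrix H and the word x are arbitrary toy values (not from the talk), and all words with the same syndrome s = x·Hᵀ belong to the same bin.

import numpy as np

# toy parity-check matrix H (M x N) over GF(2)
H = np.array([[1, 0, 1, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 0, 0, 0, 1]], dtype=np.uint8)

x = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)  # Alice's word
s = (H @ x) % 2                                   # syndrome = label of the bin containing x
print("syndrome (bin label):", s)
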
  • 11. Dual Channel
       Correlated sources:
       - assume sources X and Y with P(X, Y) = P(X) P(Y|X)
       - generate X according to P(X)
       - transmit X over the channel P(Y|X) to obtain Y
       8 / 26
  • 12. Dual Channel
       Correlated sources:
       - assume sources X and Y with P(X, Y) = P(X) P(Y|X)
       - generate X according to P(X)
       - transmit X over the channel P(Y|X) to obtain Y
       What is the channel that is seen by the channel decoder?
       - in general it is the dual channel, which is equal to neither P(Y|X) nor P(X|Y)
       - the channel seen by the decoder is always a symmetric channel with uniform input; therefore, linear codes can be used
       - for the simple case of two binary sources correlated via a BSC, all these channels are the same
       8 / 26
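
For the binary/BSC case mentioned above, the correlated source is easy to simulate, and the channel LLR fed to a syndrome-based decoder depends only on Bob's bit and on the crossover probability. A small sketch with an assumed value of p:

import numpy as np

rng = np.random.default_rng(seed=1)
p = 0.08                                    # assumed crossover probability
n = 100_000
x = rng.integers(0, 2, size=n)              # Alice's bits, uniform
e = (rng.random(n) < p).astype(np.int64)    # error pattern of the virtual BSC
y = x ^ e                                   # Bob's bits

# per-bit channel LLR seen by the decoder (sign given by y, magnitude by p)
llr = (1 - 2 * y) * np.log((1 - p) / p)

print("empirical crossover rate:", (x != y).mean())
print("LLR magnitude log((1-p)/p):", np.log((1 - p) / p))
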
  • 13. Linear Block Codes, Syndrome and Rates
       C = { x ∈ {0, 1}^N : x Hᵀ = 0 }
       Rc = (N − M) / N = 1 − M / N
       9 / 26
  • 14. Linear Block Codes, Syndrome and Rates
       C  = { x ∈ {0, 1}^N : x Hᵀ = 0 },    Rc = (N − M) / N = 1 − M / N
       Cs = { x ∈ {0, 1}^N : x Hᵀ = s },    Rs = M / N = 1 − Rc
       efficiency parameter:  f = M / (N · H(X|Y)) = Rs / H(X|Y)
       9 / 26
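
A quick sketch relating the syndrome rate and the efficiency parameter, with assumed values for N, M and for the BSC correlation p; f = 1 corresponds to operating exactly at the Slepian-Wolf limit H(X|Y).

import math

def h2(p: float) -> float:
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

N, M = 4096, 1024        # assumed code length and number of checks
p = 0.04                 # assumed BSC correlation between X and Y

Rs = M / N               # syndrome rate: bits disclosed per source bit
Rc = 1 - Rs              # rate of the underlying channel code
f = Rs / h2(p)           # efficiency; f slightly above 1 for a working scheme

print(f"Rs = {Rs:.3f}, Rc = {Rc:.3f}, H(X|Y) = {h2(p):.3f}, efficiency f = {f:.3f}")
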
  • 15. LDPC Codes
       Example parity-check matrix (M = 8, N = 16; variable degree dv = 2, check degree dc = 4):
           H = [ 0 0 0 0 0 0 0 0 1 0 1 0 1 0 1 0
                 0 0 0 0 0 0 1 1 0 1 0 1 0 0 0 0
                 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0
                 1 1 0 1 0 0 0 0 0 0 0 0 0 1 0 0
                 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0
                 0 0 0 0 1 0 0 0 0 1 0 0 1 1 0 0
                 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1
                 0 0 1 0 0 1 1 0 0 0 0 0 0 0 0 1 ]
       10 / 26
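
The degree profile of this matrix is easy to check numerically; the following short sketch verifies that every column has weight dv = 2 and every row has weight dc = 4, and prints the resulting syndrome rate.

import numpy as np

# the (8 x 16) parity-check matrix from the slide
H = np.array([
    [0,0,0,0,0,0,0,0,1,0,1,0,1,0,1,0],
    [0,0,0,0,0,0,1,1,0,1,0,1,0,0,0,0],
    [0,1,1,0,0,0,0,1,0,0,1,0,0,0,0,0],
    [1,1,0,1,0,0,0,0,0,0,0,0,0,1,0,0],
    [1,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0],
    [0,0,0,0,1,0,0,0,0,1,0,0,1,1,0,0],
    [0,0,0,1,1,0,0,0,1,0,0,0,0,0,0,1],
    [0,0,1,0,0,1,1,0,0,0,0,0,0,0,0,1]], dtype=np.uint8)

M, N = H.shape
print("variable degrees dv:", sorted(set(map(int, H.sum(axis=0)))))  # -> [2]
print("check degrees dc:   ", sorted(set(map(int, H.sum(axis=1)))))  # -> [4]
print("syndrome rate Rs = M/N =", M / N)
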
  • 16. Outline Basics Channel Coding Slepian-Wolf Coding Binning and the Dual Channel Linear Block Codes, Syndrome and Rates QKD Reconciliation System Setup Example Optimisation Conclusions 11 / 26
  • 17. Quantum Key Distribution
       [Figure: Alice and Bob connected by a quantum channel (Alice holds X, Bob holds Y) and by a public channel]
       - Alice and Bob generate a common key
       - they communicate via a quantum channel and a public channel
       - Eve attempts to gain knowledge of the key
       12 / 26
  • 18. System Setup
       [Figure: Alice encodes X and sends the result over the public channel to Bob, who observes Y]
       - the quantum channel creates a correlated source
       - Alice observes X and Bob observes Y
       - Alice has to communicate at least H(X|Y) over the public channel
       - this corresponds to the corner point of the Slepian-Wolf region
       13 / 26
  • 19. Aims
       Aims of QKD:
       - Alice and Bob want to create a common key
       - the goal is to maximise the key generation rate
       - this does not necessarily require error-free communication (as long as the errors are detectable)
       The key generation rate can be limited by
       - the quantum channel (quantum source)
       - the data rate over the public channel
       - the processing capabilities of Bob
       14 / 26
  • 20. Example
       [Plots: word error rate (WER) vs syndrome rate rs for Algorithm 1 and Algorithm 2] 15 / 26
  • 21. Example
       [Plots: WER vs rs for Algorithm 1 and Algorithm 2] 15 / 26
  • 22. Example
       [Plots: WER and key rate vs rs for Algorithm 1 and Algorithm 2] 15 / 26
  • 23. Example
       [Plots: WER, key rate and decoding time vs rs for Algorithm 1 and Algorithm 2] 15 / 26
  • 24. Optimisation Problem
       - maximum achievable key rate:  rk,max = fk(rs, pX,Y)
       - word error probability:       pe = fe(rs, pX,Y, A)
       - decoding complexity:          td = ft(rs, pX,Y, A)
       16 / 26
  • 25. Optimisation Problem
       - maximum achievable key rate:  rk,max = fk(rs, pX,Y)
       - word error probability:       pe = fe(rs, pX,Y, A)
       - decoding complexity:          td = ft(rs, pX,Y, A)
       Optimisation problem:
           rk = max { rk,max · (1 − pe) }   subject to   td < td,max
       where the maximisation is taken over
       - 0 < rs < 0.5
       - all decoding algorithms A
       16 / 26
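
A hedged sketch of the grid search this problem implies. The functions fk, fe and ft below are toy placeholder models, not the real key-rate, error-rate and complexity curves (those would come from simulating actual codes and decoders); only the structure of the search is meant to be illustrative, and the algorithm labels are simply the three decoder families discussed later in the talk.

def fk(rs):            # toy model of the maximum achievable key rate
    return max(0.0, 0.70 - rs)

def fe(rs, alg):       # toy model of the word error probability
    gap = rs - 0.30    # distance from the Slepian-Wolf limit H(X|Y) (toy value)
    pe = 0.5 if gap <= 0 else max(0.0, 0.5 - 5.0 * gap)
    return min(1.0, pe * (1.2 if alg == "MSA" else 1.0))

def ft(rs, alg):       # toy model of the decoding complexity (arbitrary units)
    return {"SPA": 3.0, "MSA": 1.0, "BMP": 0.3}[alg]

td_max = 2.0
candidates = [(rs / 100.0, alg)
              for rs in range(1, 50)               # 0 < rs < 0.5
              for alg in ("SPA", "MSA", "BMP")
              if ft(rs / 100.0, alg) < td_max]     # complexity constraint

rs_best, alg_best = max(candidates, key=lambda c: fk(c[0]) * (1.0 - fe(c[0], c[1])))
print("best operating point (rs, algorithm):", rs_best, alg_best)
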
  • 26. Optimisation
       The decoding algorithm can either be
       - fixed
       - chosen from a fixed set of algorithms
       - adaptively changed during the decoding process (e.g., gear-shift decoding)
       The coding rate can either be
       - fixed
       - chosen from a fixed set of rates (rate-compatible codes)
       - adaptively changed during the decoding process (rateless codes)
       17 / 26
  • 27. Message-Passing Decoders
       [Figure: variable and check nodes with channel LLR Lch and edge messages Lvc, Lcv]
       (sums and products run over the dv or dc incident edges, excluding edge i)
       Sum-Product Algorithm (SPA):
           Lvc,i = Lch + Σ_{j≠i} Lcv,j
           Lcv,i = 2 tanh⁻¹( Π_{j≠i} tanh(Lvc,j / 2) )
       Min-Sum Algorithm (MSA):
           Lvc,i = Lch + Σ_{j≠i} Lcv,j
           Lcv,i = α · min_{j≠i} |Lvc,j| · Π_{j≠i} sign(Lvc,j)
       Binary Message-Passing (BMP):
           mvc,i = majority(mch, {mcv,j}_{j≠i})
           mcv,i = xor_{j≠i} mvc,j
       18 / 26
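
As a concrete illustration of the two soft-decision update rules, a minimal sketch of the SPA and scaled min-sum check-node updates for a single check node; the incoming LLRs are arbitrary example values and α = 0.8 is an assumed scaling factor.

import numpy as np

def spa_check_update(L_vc):
    # sum-product check-node update: extrinsic output for each incident edge
    t = np.tanh(np.asarray(L_vc, dtype=float) / 2.0)
    out = np.empty_like(t)
    for i in range(len(t)):
        prod = np.prod(np.delete(t, i))                   # leave-one-out product
        out[i] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return out

def msa_check_update(L_vc, alpha=0.8):
    # min-sum check-node update with scaling factor alpha
    L = np.asarray(L_vc, dtype=float)
    out = np.empty_like(L)
    for i in range(len(L)):
        rest = np.delete(L, i)                            # leave-one-out
        out[i] = alpha * np.min(np.abs(rest)) * np.prod(np.sign(rest))
    return out

L_vc = [1.2, -0.4, 2.5, 0.7]   # example incoming variable-to-check LLRs
print("SPA:", spa_check_update(L_vc))
print("MSA:", msa_check_update(L_vc))
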
  • 28. Gear-Shift Decoding
       [Fig. 2 from Ardakani and Kschischang, "Gear-shift decoding," IEEE Trans. Commun., 2006: a simple gear-shifting trellis of size six with three algorithms; some vertices have fewer than three outgoing edges, which happens when an algorithm has a closed EXIT chart at that message-error rate, or when two algorithms result in a parallel edge (in which case only the lower-complexity algorithm is retained)] 19 / 26
  • 29. Fixed Rate vs Rateless
       Error rate on the quantum channel known and block length large:
       - transmit the syndrome over the public channel and discard the key if decoding is not successful (one bit of feedback)
       Error rate on the quantum channel varies:
       - not enough data on the public channel leads to a high error rate
       - too much data on the public channel reduces the key rate
       20 / 26
  • 30. Literature: Information Theory
       - David Slepian and Jack K. Wolf. Noiseless coding of correlated information sources. IEEE Transactions on Information Theory, 19(4):471–480, 1973.
       - Aaron D. Wyner. Recent results in the Shannon theory. IEEE Transactions on Information Theory, 20(1):2–10, 1974.
       - Jun Chen, Da-ke He, and Ashish Jagmohan. On the duality between Slepian–Wolf coding and channel coding under mismatched decoding. IEEE Transactions on Information Theory, 55(9):4006–4018, 2009.
       21 / 26
  • 31. Literature: Coding
       - Robert G. Gallager. Low-density parity-check codes. IEEE Transactions on Information Theory, 8(1):21–28, 1962.
       - Michael Luby. LT codes. In IEEE Symposium on Foundations of Computer Science, pages 271–280, 2002.
       - Amin Shokrollahi. Raptor codes. IEEE Transactions on Information Theory, 52(6):2551–2567, 2006.
       - T. Richardson and R. Urbanke. Modern Coding Theory. Cambridge University Press, 2008.
       22 / 26
  • 32. Literature: QKD Basics
       - Gilles Brassard and Louis Salvail. Secret-key reconciliation by public discussion. In Advances in Cryptology EUROCRYPT '93, pages 410–423, 1994.
       - Tomohiro Sugimoto and Kouichi Yamazaki. A study on secret key reconciliation protocol "Cascade". IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E83-A:1987–1991, 2000.
       - W. T. Buttler, S. K. Lamoreaux, J. R. Torgerson, G. H. Nickel, C. H. Donahue, and C. G. Peterson. Fast, efficient error reconciliation for quantum cryptography. arXiv, quant-ph, 2002.
       - Hao Yan, Xiang Peng, Xiaxiang Lin, Wei Jiang, Tian Liu, and Hong Guo. Efficiency of Winnow protocol in secret key reconciliation. In Computer Science and Information Engineering, 2009 WRI World Congress on, volume 3, pages 238–242, 2009.
       23 / 26
  • 33. Literature: Coding for QKD (non-exhaustive)
       - David Elkouss, Anthony Leverrier, Romain Alleaume, and Joseph J. Boutros. Efficient reconciliation protocol for discrete-variable quantum key distribution. In International Symposium on Information Theory, pages 1879–1883, 2009.
       - David Elkouss, Jesus Martinez-Mateo, Daniel Lancho, and Vicente Martin. Rate compatible protocol for information reconciliation: An application to QKD. In Information Theory Workshop (ITW), 2010 IEEE, pages 1–5, 2010.
       - David Elkouss, Jesus Martinez-Mateo, and Vicente Martin. Efficient reconciliation with rate adaptive codes in quantum key distribution. arXiv, quant-ph, 2010.
       - David Elkouss, Jesus Martinez-Mateo, and Vicente Martin. Secure rate-adaptive reconciliation. In Information Theory and its Applications (ISITA), 2010 International Symposium on, pages 179–184, 2010.
       - Kenta Kasai, Ryutaroh Matsumoto, and Kohichi Sakaniwa. Information reconciliation for QKD with rate-compatible non-binary LDPC codes. In Information Theory and its Applications (ISITA), 2010 International Symposium on, pages 922–927, 2010.
       - Jesus Martinez-Mateo, David Elkouss, and Vicente Martin. Interactive reconciliation with low-density parity-check codes. In Turbo Codes and Iterative Information Processing (ISTC), 2010 6th International Symposium on, pages 270–274, 2010.
       24 / 26
  • 34. Literature: Implementation (non-exhaustive)
       - Chip Elliott, Alexander Colvin, David Pearson, Oleksiy Pikalo, John Schlafer, and Henry Yeh. Current status of the DARPA quantum network. arXiv, 2005.
       - Jerome Lodewyck, Matthieu Bloch, Raul Garcia-Patron, Simon Fossier, Evgueni Karpov, Eleni Diamanti, Thierry Debuisschert, Nicolas J. Cerf, Rosa Tualle-Brouri, Steven W. McLaughlin, and Philippe Grangier. Quantum key distribution over 25 km with an all-fiber continuous-variable system. arXiv, quant-ph, 2007.
       - Simon Fossier, Eleni Diamanti, Thierry Debuisschert, André Villing, Rosa Tualle-Brouri, and Philippe Grangier. Field test of a continuous-variable quantum key distribution prototype. arXiv, quant-ph, 2008.
       - Simon Fossier, J. Lodewyck, Eleni Diamanti, Matthieu Bloch, Thierry Debuisschert, Rosa Tualle-Brouri, and Philippe Grangier. Quantum key distribution over 25 km, using a fiber setup based on continuous variables. In Lasers and Electro-Optics, 2008 and 2008 Conference on Quantum Electronics and Laser Science (CLEO/QELS 2008), pages 1–2, 2008.
       - A. Dixon, Z. Yuan, J. Dynes, A. Sharpe, and Andrew Shields. Megabit per second quantum key distribution using practical InGaAs APDs. In Lasers and Electro-Optics, 2009 and 2009 Conference on Quantum Electronics and Laser Science (CLEO/QELS 2009), pages 1–2, 2009.
       25 / 26
  • 35. Conclusions
       - Reconciliation for QKD is a Slepian-Wolf coding problem (in a corner point)
       - linear codes are sufficient for the optimal solution
       - maximising the key rate is not necessarily equivalent to minimising the error rate
       - complexity constraints may lead to a non-trivial optimisation problem to find the best codes and decoding algorithms
       - rate-adaptive or rateless schemes might be necessary in cases where the error rate on the quantum channel is unknown
       26 / 26