Sparse Regularization of Inverse Problems
Gabriel Peyré
www.numerical-tours.com
Overview

• Inverse Problems Regularization

• Sparse Synthesis Regularization

• Examples: Sparse Wavelet Regularizations

• Iterative Soft Thresholding

• Sparse Seismic Deconvolution
Inverse Problems
Forward model:  y = K f0 + w ∈ R^P
    y: observations,  K : R^Q → R^P: operator,  f0: (unknown) input,  w: noise.
Denoising: K = Id_Q, P = Q.
Inpainting: Ω = set of missing pixels, P = Q − |Ω|,
    (K f)(x) = 0 if x ∈ Ω,  f(x) if x ∉ Ω.
Super-resolution: K f = (f ⋆ k) ↓_s,  P = Q/s.
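The inpainting operator is a diagonal mask, hence its own adjoint; a minimal NumPy sketch (array sizes and the 40% missing-pixel rate are illustrative):

```python
import numpy as np

def inpainting_operator(mask):
    """Return the masking operator K and its adjoint K*.

    mask is boolean: True on observed pixels, False on the
    missing set Omega, so (Kf)(x) = 0 on Omega and f(x) elsewhere.
    K is diagonal, hence self-adjoint: K* = K.
    """
    K = lambda f: f * mask
    return K, K

# Hide roughly 40% of the pixels of a small test image.
rng = np.random.default_rng(0)
f0 = rng.standard_normal((8, 8))
mask = rng.random((8, 8)) > 0.4
K, Ks = inpainting_operator(mask)
y = K(f0)
```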
Inverse Problem in Medical Imaging
Tomography: K f = (p_{θk})_{1≤k≤K}, projections of f along K directions θk.
Magnetic resonance imaging (MRI): K f = (f̂(ω))_{ω∈Ω}, Fourier samples on a set Ω.
Other examples: MEG, EEG, ...
Inverse Problem Regularization

Noisy measurements: y = K f0 + w.
Prior model: J : R^Q → R assigns a score to images.

    f* ∈ argmin_{f∈R^Q}  (1/2) ||y − K f||² + λ J(f)
                         (data fidelity)    (regularity)

Choice of λ: tradeoff between the noise level ||w|| and the regularity J(f0) of f0.

No noise (λ → 0+): minimize  f* ∈ argmin_{f∈R^Q, Kf = y} J(f).
Smooth and Cartoon Priors

Smooth (Sobolev) prior:          J(f) = ∫ ||∇f(x)||² dx
Cartoon (total variation) prior:  J(f) = ∫ ||∇f(x)|| dx
Co-area formula:  J(f) = ∫_R length(C_t) dt, where C_t are the level sets of f.

[Figure: a smooth image with |∇f|² and a cartoon image with |∇f|.]
Inpainting Example

[Figures: input y = K f0 + w, Sobolev inpainting, total variation inpainting.]
Overview

• Inverse Problems Regularization

• Sparse Synthesis Regularization

• Examples: Sparse Wavelet Regularizations

• Iterative Soft Thresholding

• Sparse Seismic Deconvolution
Redundant Dictionaries
Dictionary Ψ = (ψm)m ∈ R^{Q×N}, N ≥ Q.
Fourier: ψm = e^{i⟨ωm, ·⟩},  m indexes the frequency ωm.
Wavelets: ψm = ψ(2^{−j} R_θ x − n),  m = (j, θ, n) = (scale, orientation, position).
Also: DCT, curvelets, bandlets, ...
Synthesis: f = Σm xm ψm = Ψ x,  mapping coefficients x ∈ R^N to the image f = Ψx ∈ R^Q.
Sparse Priors
Ideal sparsity: for most m, xm = 0.
    J0(x) = # {m  xm ≠ 0}
Sparse approximation: f = Ψx where
    x ∈ argmin_{x∈R^N} ||f0 − Ψx||² + T² J0(x)
Orthogonal Ψ (Ψ Ψ* = Ψ* Ψ = Id_N): the solution is hard thresholding,
    xm = ⟨f0, ψm⟩ if |⟨f0, ψm⟩| > T,  0 otherwise,
i.e.  f = Ψ S⁰_T(Ψ* f0).
Non-orthogonal Ψ: the problem is NP-hard.
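In an orthobasis the ℓ0 approximation is explicit; a small sketch using a random orthogonal matrix as a stand-in dictionary (not an actual wavelet basis, sizes illustrative):

```python
import numpy as np

def hard_threshold(u, T):
    """Zero every coefficient of magnitude <= T."""
    return u * (np.abs(u) > T)

rng = np.random.default_rng(1)
# A random orthogonal dictionary (Psi* Psi = Id), standing in
# for an orthogonal wavelet basis.
Psi, _ = np.linalg.qr(rng.standard_normal((64, 64)))
coeffs = np.concatenate([rng.standard_normal(5), np.zeros(59)])
f0 = Psi @ coeffs                    # a 5-sparse image in this basis

T = 0.1
x = hard_threshold(Psi.T @ f0, T)    # analysis + hard threshold
f = Psi @ x                          # synthesis
```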
Convex Relaxation: L1 Prior
    J0(x) = # {m  xm ≠ 0}
Image with 2 pixels:
    J0(x) = 0 → null image;  J0(x) = 1 → sparse image;  J0(x) = 2 → non-sparse image.
ℓq priors:  Jq(x) = Σm |xm|^q   (convex for q ≥ 1)
[Figure: unit balls of Jq in the (x1, x2) plane for q = 0, 1/2, 1, 3/2, 2.]
Sparse ℓ1 prior:  J1(x) = Σm |xm|.
L1 Regularization

    x0 ∈ R^N (coefficients)  →  f0 = Ψ x0 ∈ R^Q (image)  →  y = K f0 + w ∈ R^P (observations)

Combined operator:  Φ = K Ψ ∈ R^{P×N}.

Sparse recovery: f = Ψ x*, where x* solves
    min_{x∈R^N}  (1/2) ||y − Φx||² + λ ||x||₁
                 (fidelity)      (regularization)
Noiseless Sparse Regularization
Noiseless measurements: y = Φ x0.

ℓ1 recovery:  x* ∈ argmin_{Φx=y} Σm |xm|        ℓ2 recovery:  x* ∈ argmin_{Φx=y} Σm |xm|²
[Figure: the ℓ1 ball touches the affine set {x  Φx = y} at a sparse point; the ℓ2 ball does not.]

ℓ1 minimization is a convex linear program. Solvers:
    Interior points, cf. [Chen, Donoho, Saunders] "basis pursuit".
    Douglas-Rachford splitting, see [Combettes, Pesquet].
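Basis pursuit can be cast as a linear program by splitting x = u − v with u, v ≥ 0; a sketch with scipy.optimize.linprog (the random Φ, sparsity level and sizes are illustrative, not from the slides):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
P, N = 20, 40
Phi = rng.standard_normal((P, N))
x0 = np.zeros(N)
x0[rng.choice(N, 3, replace=False)] = rng.standard_normal(3)
y = Phi @ x0

# Split x = u - v with u, v >= 0:
#   min sum(u) + sum(v)  s.t.  Phi u - Phi v = y,  u, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x = res.x[:N] - res.x[N:]
```

With this many Gaussian measurements relative to the sparsity, ℓ1 minimization recovers x0 exactly with overwhelming probability.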
Noisy Sparse Regularization
Noisy measurements: y = Φ x0 + w.

Penalized form:
    x* ∈ argmin_{x∈R^N}  (1/2) ||y − Φx||² + λ ||x||₁
                         (data fidelity)   (regularization)
Equivalent constrained form (for a suitable λ ↔ ε correspondence):
    x* ∈ argmin_{||Φx − y|| ≤ ε} ||x||₁

Algorithms:
    Iterative soft thresholding (forward-backward splitting), see [Daubechies et al], [Pesquet et al], etc.
    Nesterov multi-step schemes.
Overview

• Inverse Problems Regularization

• Sparse Synthesis Regularization

• Examples: Sparse Wavelet Regularizations

• Iterative Soft Thresholding

• Sparse Seismic Deconvolution
Image De-blurring

[Figures: original f0; observations y = h ⋆ f0 + w; Sobolev result, SNR = 22.7 dB; sparsity result, SNR = 24.7 dB.]

Sobolev regularization:  f* = argmin_{f∈R^N} ||f ⋆ h − y||² + λ ||∇f||²
Explicit solution in Fourier:
    f̂*(ω) = ĥ(ω)* ŷ(ω) / ( |ĥ(ω)|² + λ |ω|² )

Sparsity regularization: Ψ = translation invariant wavelets,
    f* = Ψ x*  where  x* ∈ argmin_x (1/2) ||h ⋆ (Ψx) − y||² + λ ||x||₁.
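The Sobolev deconvolution has a closed form in Fourier; a 1-D NumPy sketch (the periodic Gaussian blur, noise level and λ are all illustrative choices):

```python
import numpy as np

def sobolev_deconv(y, h, lam):
    """Sobolev deconvolution in Fourier:
    f_hat = conj(h_hat) * y_hat / (|h_hat|^2 + lam * |omega|^2)."""
    N = y.size
    h_hat, y_hat = np.fft.fft(h), np.fft.fft(y)
    omega = 2 * np.pi * np.fft.fftfreq(N)   # frequency grid
    f_hat = np.conj(h_hat) * y_hat / (np.abs(h_hat) ** 2 + lam * omega ** 2)
    return np.real(np.fft.ifft(f_hat))

# Smooth test signal, periodic Gaussian blur, mild noise.
N = 256
t = np.arange(N)
f0 = np.sin(2 * np.pi * t / N) + 0.5 * np.sin(6 * np.pi * t / N)
h = np.exp(-0.5 * (np.minimum(t, N - t) / 10.0) ** 2)
h /= h.sum()
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f0)))
y += 0.01 * np.random.default_rng(3).standard_normal(N)
f = sobolev_deconv(y, h, lam=0.01)
```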
Comparison of Regularizations

[Figure: SNR as a function of λ for L2, Sobolev and sparsity regularizations, each curve peaking at its optimal λ_opt.]

    L2: SNR = 21.7 dB        Sobolev: SNR = 22.7 dB
    Sparsity: SNR = 23.7 dB  Translation invariant: SNR = 24.7 dB
Inpainting Problem

    (K f)(x) = 0 if x ∈ Ω,  f(x) if x ∉ Ω.

Measurements: y = K f0 + w.
Image Separation
Model: f = f1 + f2 + w, (f1, f2) components, w noise.

Union dictionary: Ψ = [Ψ1, Ψ2] ∈ R^{Q×(N1+N2)}.
Recovered components: fi = Ψi xi*, where
    (x1*, x2*) ∈ argmin_{x=(x1,x2)∈R^N}  (1/2) ||f − Ψx||² + λ ||x||₁.
Examples of Decompositions
Cartoon+Texture Separation
Overview

• Inverse Problems Regularization

• Sparse Synthesis Regularization

• Examples: Sparse Wavelet Regularizations

• Iterative Soft Thresholding

• Sparse Seismic Deconvolution
Sparse Regularization Denoising
Denoising: y = Ψ x0 + w ∈ R^N, K = Id.
Orthogonal basis Ψ (Ψ*Ψ = Id_N): work on the coefficients x̃ = Ψ* y.
Regularization-based denoising:
    x* = argmin_{x∈R^N}  (1/2) ||x − x̃||² + λ J(x)
Sparse regularization:  J(x) = Σm |xm|^q   (where |a|⁰ = 1 if a ≠ 0, else 0)
The solution is a pointwise thresholding:
    x*m = S^q_λ(x̃m),
with S⁰_λ the hard threshold and S¹_λ the soft threshold S¹_λ(a) = sign(a) max(|a| − λ, 0).
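The soft-thresholding operator, with a numerical check that it solves the scalar problem argmin_x (1/2)(x − v)² + λ|x| (values illustrative):

```python
import numpy as np

def soft_threshold(u, lam):
    """S_lam(u) = sign(u) * max(|u| - lam, 0), applied entrywise."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.array([-1.0, -0.2, 0.0, 0.3, 2.0])
x = soft_threshold(u, 0.5)   # -> [-0.5, 0, 0, 0, 1.5]
```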
Surrogate Functionals
Sparse regularization:
    x* ∈ argmin_{x∈R^N}  E(x) = (1/2) ||y − Φx||² + λ ||x||₁
Surrogate functional: for τ < 1/||Φ*Φ||,
    E(x, x̃) = E(x) − (1/2) ||Φ(x − x̃)||² + (1/(2τ)) ||x − x̃||²
so that E(·, x̃) ≥ E(·) and E(·, x̃) is separable in the coordinates xm.

Theorem:  argmin_x E(x, x̃) = S_{λτ}(u),  where  u = x̃ − τ Φ*(Φx̃ − y).

Proof: up to a constant,  E(x, x̃) = (1/(2τ)) ||u − x||² + λ ||x||₁.
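A numerical sanity check of this theorem on a random small instance (all sizes, the step τ and λ are illustrative): the closed-form soft-threshold point should minimize the surrogate.

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(4)
P, N = 10, 15
Phi = rng.standard_normal((P, N))
y = rng.standard_normal(P)
lam = 0.5
tau = 0.9 / np.linalg.norm(Phi, 2) ** 2       # tau < 1 / ||Phi* Phi||
x_tilde = rng.standard_normal(N)

def E_surr(x):
    """Surrogate E(x, x_tilde)."""
    E = 0.5 * np.sum((y - Phi @ x) ** 2) + lam * np.sum(np.abs(x))
    return (E - 0.5 * np.sum((Phi @ (x - x_tilde)) ** 2)
              + np.sum((x - x_tilde) ** 2) / (2 * tau))

# Closed-form minimizer from the theorem.
u = x_tilde - tau * Phi.T @ (Phi @ x_tilde - y)
x_star = soft_threshold(u, lam * tau)
```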
Iterative Thresholding
Algorithm: x^(ℓ+1) = argmin_x E(x, x^(ℓ)).
    Initialize x^(0), set ℓ = 0.
    u^(ℓ) = x^(ℓ) − τ Φ*(Φ x^(ℓ) − y)
    x^(ℓ+1) = S_{λτ}(u^(ℓ))

Remarks:
    x^(ℓ) ↦ u^(ℓ) is a gradient-descent step on (1/2) ||Φx − y||².
    S_{λτ} is the proximal step associated to λ ||x||₁.

Theorem: if τ < 2/||Φ*Φ||, then x^(ℓ) → x*.
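The iteration above fits in a few lines of NumPy; a sketch on a random sparse recovery problem (matrix sizes, λ and the iteration count are illustrative):

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def ista(Phi, y, lam, n_iter=3000):
    """Minimize (1/2)||y - Phi x||^2 + lam*||x||_1 by
    iterative soft thresholding (forward-backward splitting)."""
    tau = 1.0 / np.linalg.norm(Phi, 2) ** 2     # tau < 2 / ||Phi* Phi||
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        u = x - tau * Phi.T @ (Phi @ x - y)     # gradient step on fidelity
        x = soft_threshold(u, lam * tau)        # prox step on lam*||x||_1
    return x

rng = np.random.default_rng(5)
P, N = 30, 60
Phi = rng.standard_normal((P, N))
x0 = np.zeros(N)
x0[:4] = [2.0, -1.5, 1.0, 3.0]
y = Phi @ x0 + 0.05 * rng.standard_normal(P)
x = ista(Phi, y, lam=0.1)
```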
Overview

• Inverse Problems Regularization

• Sparse Synthesis Regularization

• Examples: Sparse Wavelet Regularizations

• Iterative Soft Thresholding

• Sparse Seismic Deconvolution
Seismic Imaging
1D Idealization
Initial condition: "wavelet" = band-pass filter h.
1D propagation = convolution:  f ↦ h ⋆ f.
Observations:  y = h ⋆ f0 + w ∈ R^P.
[Figure: spike train f0, wavelet h(x) and its spectrum ĥ(ω).]
Pseudo Inverse
Pseudo-inverse:
    f̂+(ω) = ŷ(ω) / ĥ(ω)   ⟺   f+ = h+ ⋆ y = f0 + h+ ⋆ w,  where ĥ+(ω) = ĥ(ω)^{−1}.
Stabilization: set ĥ+(ω) = 0 where |ĥ(ω)| < ε.
[Figure: ĥ(ω) and 1/ĥ(ω); observations y = h ⋆ f0 + w and pseudo-inverse f+.]
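A stabilized pseudo-inverse via the FFT; a sketch where the band-pass "wavelet" is a difference of Gaussians and the threshold ε is illustrative:

```python
import numpy as np

def pseudo_inverse(y, h, eps):
    """Stabilized pseudo-inverse: divide by h_hat in Fourier
    where |h_hat| >= eps, and set the rest to zero."""
    h_hat = np.fft.fft(h)
    hp_hat = np.zeros_like(h_hat)
    keep = np.abs(h_hat) >= eps
    hp_hat[keep] = 1.0 / h_hat[keep]
    return np.real(np.fft.ifft(hp_hat * np.fft.fft(y)))

N = 128
t = np.arange(N)
g = lambda s: np.exp(-0.5 * (np.minimum(t, N - t) / s) ** 2)
h = g(2.0) / g(2.0).sum() - g(6.0) / g(6.0).sum()   # band-pass "wavelet"
f0 = np.zeros(N)
f0[[10, 40, 90]] = [1.0, -2.0, 1.5]                 # spike train
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f0)))  # noiseless here
fp = pseudo_inverse(y, h, eps=1e-3)
```

In the noiseless case f+ equals f0 on the kept frequency band and is zeroed on the discarded one.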
Sparse Spikes Deconvolution

[Figure: a spike train f0 with small ||f0||₀ and its observations y = f ⋆ h + w.]

Sparsity basis: Diracs,  ψm[x] = δ[x − m].
    f* = argmin_{f∈R^N}  (1/2) ||f ⋆ h − y||² + λ Σm |f[m]|.

Algorithm: iterative soft thresholding with τ < 2/||Φ*Φ|| = 2/max_ω |ĥ(ω)|².
    Inversion: f̃^(k) = f^(k) − τ h̄ ⋆ (h ⋆ f^(k) − y),  where h̄[x] = h[−x].
    Thresholding: f^(k+1)[m] = S_{λτ}(f̃^(k)[m]).
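The full sparse-spike deconvolution, with the convolution and its adjoint computed by FFT; a sketch where the smoothing kernel, spike positions, λ and iteration count are all illustrative choices:

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def sparse_deconv(y, h, lam, n_iter=2000):
    """Minimize (1/2)||h * f - y||^2 + lam*||f||_1 by iterative
    soft thresholding; periodic convolutions done in Fourier."""
    h_hat = np.fft.fft(h)
    conv = lambda u, g_hat: np.real(np.fft.ifft(np.fft.fft(u) * g_hat))
    tau = 1.0 / np.max(np.abs(h_hat)) ** 2      # tau < 2 / max |h_hat|^2
    f = np.zeros_like(y)
    for _ in range(n_iter):
        r = conv(f, h_hat) - y                  # residual h * f - y
        f = soft_threshold(f - tau * conv(r, np.conj(h_hat)), lam * tau)
    return f

rng = np.random.default_rng(6)
N = 128
t = np.arange(N)
f0 = np.zeros(N)
f0[[20, 50, 100]] = [1.5, -1.0, 2.0]            # sparse spike train
h = np.exp(-0.5 * (np.minimum(t, N - t) / 2.0) ** 2)
h /= h.sum()                                    # idealized smoothing "wavelet"
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f0)))
y += 0.01 * rng.standard_normal(N)
f = sparse_deconv(y, h, lam=0.01)
```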
Numerical Example

Choosing the optimal λ: oracle, minimize ||f0 − f_λ||.
[Figure: spike train f0, observations y = f ⋆ h + w, SNR(f0, f_λ) as a function of λ, and the recovered f_λ.]
Convergence Study
Sparse deconvolution: f* = argmin_{f∈R^N} E(f),
    E(f) = (1/2) ||h ⋆ f − y||² + λ Σm |f[m]|.
E is not strictly convex  ⟹  no guaranteed convergence speed for the iterates.
[Figure: log10(E(f^(k))/E(f*) − 1) and log10(||f^(k) − f*|| / ||f0||) versus the iteration k.]
Conclusion

Weitere ähnliche Inhalte

Was ist angesagt?

サーバー実装いろいろ
サーバー実装いろいろサーバー実装いろいろ
サーバー実装いろいろ
kjwtnb
 

Was ist angesagt? (20)

暗認本読書会7
暗認本読書会7暗認本読書会7
暗認本読書会7
 
画像処理の高性能計算
画像処理の高性能計算画像処理の高性能計算
画像処理の高性能計算
 
【関東GPGPU勉強会#3】OpenCVの新機能 UMatを先取りしよう
【関東GPGPU勉強会#3】OpenCVの新機能 UMatを先取りしよう【関東GPGPU勉強会#3】OpenCVの新機能 UMatを先取りしよう
【関東GPGPU勉強会#3】OpenCVの新機能 UMatを先取りしよう
 
文字列アルゴリズム
文字列アルゴリズム文字列アルゴリズム
文字列アルゴリズム
 
世界最速の正規表現JITエンジンの実装
世界最速の正規表現JITエンジンの実装世界最速の正規表現JITエンジンの実装
世界最速の正規表現JITエンジンの実装
 
サーバー実装いろいろ
サーバー実装いろいろサーバー実装いろいろ
サーバー実装いろいろ
 
[GTCJ2018]CuPy -NumPy互換GPUライブラリによるPythonでの高速計算- PFN奥田遼介
[GTCJ2018]CuPy -NumPy互換GPUライブラリによるPythonでの高速計算- PFN奥田遼介[GTCJ2018]CuPy -NumPy互換GPUライブラリによるPythonでの高速計算- PFN奥田遼介
[GTCJ2018]CuPy -NumPy互換GPUライブラリによるPythonでの高速計算- PFN奥田遼介
 
Protocol Buffers 入門
Protocol Buffers 入門Protocol Buffers 入門
Protocol Buffers 入門
 
論文紹介:”Playing hard exploration games by watching YouTube“
論文紹介:”Playing hard exploration games by watching YouTube“論文紹介:”Playing hard exploration games by watching YouTube“
論文紹介:”Playing hard exploration games by watching YouTube“
 
キメるClojure
キメるClojureキメるClojure
キメるClojure
 
Domain Modeling Made Functional (DevTernity 2022)
Domain Modeling Made Functional (DevTernity 2022)Domain Modeling Made Functional (DevTernity 2022)
Domain Modeling Made Functional (DevTernity 2022)
 
Pythonによる黒魔術入門
Pythonによる黒魔術入門Pythonによる黒魔術入門
Pythonによる黒魔術入門
 
「日本語LaTeX」が多すぎる件について
「日本語LaTeX」が多すぎる件について「日本語LaTeX」が多すぎる件について
「日本語LaTeX」が多すぎる件について
 
π計算
π計算π計算
π計算
 
私を SKI に連れてって
私を SKI に連れてって私を SKI に連れてって
私を SKI に連れてって
 
ラムダ計算入門
ラムダ計算入門ラムダ計算入門
ラムダ計算入門
 
Jupyter 簡介—互動式的筆記本系統
Jupyter 簡介—互動式的筆記本系統Jupyter 簡介—互動式的筆記本系統
Jupyter 簡介—互動式的筆記本系統
 
Pythonが動く仕組み(の概要)
Pythonが動く仕組み(の概要)Pythonが動く仕組み(の概要)
Pythonが動く仕組み(の概要)
 
関数型プログラミングのデザインパターンひとめぐり
関数型プログラミングのデザインパターンひとめぐり関数型プログラミングのデザインパターンひとめぐり
関数型プログラミングのデザインパターンひとめぐり
 
Binary Search Tree and Heap in Prolog
Binary Search Tree and Heap in PrologBinary Search Tree and Heap in Prolog
Binary Search Tree and Heap in Prolog
 

Andere mochten auch

Andere mochten auch (9)

Signal Processing Course : Convex Optimization
Signal Processing Course : Convex OptimizationSignal Processing Course : Convex Optimization
Signal Processing Course : Convex Optimization
 
Signal Processing Course : Approximation
Signal Processing Course : ApproximationSignal Processing Course : Approximation
Signal Processing Course : Approximation
 
Signal Processing Course : Inverse Problems Regularization
Signal Processing Course : Inverse Problems RegularizationSignal Processing Course : Inverse Problems Regularization
Signal Processing Course : Inverse Problems Regularization
 
Digital Signal Processing - Practical Techniques, Tips and Tricks Course Sampler
Digital Signal Processing - Practical Techniques, Tips and Tricks Course SamplerDigital Signal Processing - Practical Techniques, Tips and Tricks Course Sampler
Digital Signal Processing - Practical Techniques, Tips and Tricks Course Sampler
 
C2.5
C2.5C2.5
C2.5
 
Signal Processing Course : Theory for Sparse Recovery
Signal Processing Course : Theory for Sparse RecoverySignal Processing Course : Theory for Sparse Recovery
Signal Processing Course : Theory for Sparse Recovery
 
Signal Processing Course : Compressed Sensing
Signal Processing Course : Compressed SensingSignal Processing Course : Compressed Sensing
Signal Processing Course : Compressed Sensing
 
Energy proposition
Energy propositionEnergy proposition
Energy proposition
 
EEG: Basics
EEG: BasicsEEG: Basics
EEG: Basics
 

Ähnlich wie Signal Processing Course : Sparse Regularization of Inverse Problems

Analisis Korespondensi
Analisis KorespondensiAnalisis Korespondensi
Analisis Korespondensi
dessybudiyanti
 
On gradient Ricci solitons
On gradient Ricci solitonsOn gradient Ricci solitons
On gradient Ricci solitons
mnlfdzlpz
 

Ähnlich wie Signal Processing Course : Sparse Regularization of Inverse Problems (20)

Sparsity and Compressed Sensing
Sparsity and Compressed SensingSparsity and Compressed Sensing
Sparsity and Compressed Sensing
 
02 2d systems matrix
02 2d systems matrix02 2d systems matrix
02 2d systems matrix
 
Fourier transform
Fourier transformFourier transform
Fourier transform
 
03 image transform
03 image transform03 image transform
03 image transform
 
Color Img at Prisma Network meeting 2009
Color Img at Prisma Network meeting 2009Color Img at Prisma Network meeting 2009
Color Img at Prisma Network meeting 2009
 
Signal Processing Course : Orthogonal Bases
Signal Processing Course : Orthogonal BasesSignal Processing Course : Orthogonal Bases
Signal Processing Course : Orthogonal Bases
 
Theoretical Spectroscopy Lectures: real-time approach 2
Theoretical Spectroscopy Lectures: real-time approach 2Theoretical Spectroscopy Lectures: real-time approach 2
Theoretical Spectroscopy Lectures: real-time approach 2
 
Linear response theory
Linear response theoryLinear response theory
Linear response theory
 
Mcgill3
Mcgill3Mcgill3
Mcgill3
 
Low Complexity Regularization of Inverse Problems - Course #1 Inverse Problems
Low Complexity Regularization of Inverse Problems - Course #1 Inverse ProblemsLow Complexity Regularization of Inverse Problems - Course #1 Inverse Problems
Low Complexity Regularization of Inverse Problems - Course #1 Inverse Problems
 
Analisis Korespondensi
Analisis KorespondensiAnalisis Korespondensi
Analisis Korespondensi
 
A Review of Proximal Methods, with a New One
A Review of Proximal Methods, with a New OneA Review of Proximal Methods, with a New One
A Review of Proximal Methods, with a New One
 
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
 
Decimation in time and frequency
Decimation in time and frequencyDecimation in time and frequency
Decimation in time and frequency
 
NMR Spectroscopy
NMR SpectroscopyNMR Spectroscopy
NMR Spectroscopy
 
On gradient Ricci solitons
On gradient Ricci solitonsOn gradient Ricci solitons
On gradient Ricci solitons
 
Deblurring in ct
Deblurring in ctDeblurring in ct
Deblurring in ct
 
WAVELET-PACKET-BASED ADAPTIVE ALGORITHM FOR SPARSE IMPULSE RESPONSE IDENTIFI...
WAVELET-PACKET-BASED ADAPTIVE ALGORITHM FOR  SPARSE IMPULSE RESPONSE IDENTIFI...WAVELET-PACKET-BASED ADAPTIVE ALGORITHM FOR  SPARSE IMPULSE RESPONSE IDENTIFI...
WAVELET-PACKET-BASED ADAPTIVE ALGORITHM FOR SPARSE IMPULSE RESPONSE IDENTIFI...
 
Dsp U Lec10 DFT And FFT
Dsp U   Lec10  DFT And  FFTDsp U   Lec10  DFT And  FFT
Dsp U Lec10 DFT And FFT
 
Mesh Processing Course : Geodesics
Mesh Processing Course : GeodesicsMesh Processing Course : Geodesics
Mesh Processing Course : Geodesics
 

Mehr von Gabriel Peyré

Mehr von Gabriel Peyré (20)

Low Complexity Regularization of Inverse Problems - Course #3 Proximal Splitt...
Low Complexity Regularization of Inverse Problems - Course #3 Proximal Splitt...Low Complexity Regularization of Inverse Problems - Course #3 Proximal Splitt...
Low Complexity Regularization of Inverse Problems - Course #3 Proximal Splitt...
 
Low Complexity Regularization of Inverse Problems - Course #2 Recovery Guaran...
Low Complexity Regularization of Inverse Problems - Course #2 Recovery Guaran...Low Complexity Regularization of Inverse Problems - Course #2 Recovery Guaran...
Low Complexity Regularization of Inverse Problems - Course #2 Recovery Guaran...
 
Low Complexity Regularization of Inverse Problems
Signal Processing Course : Sparse Regularization of Inverse Problems

  • 1. Sparse Regularization of Inverse Problems. Gabriel Peyré, www.numerical-tours.com
  • 2. Overview • Inverse Problems Regularization • Sparse Synthesis Regularization • Examples: Sparse Wavelet Regularizations • Iterative Soft Thresholding • Sparse Seismic Deconvolution
  • 3. Inverse Problems. Forward model: y = K f0 + w ∈ R^P, where y are the observations, K : R^Q → R^P the operator, f0 the (unknown) input and w the noise.
  • 4. Inverse Problems. Forward model: y = K f0 + w ∈ R^P. Denoising: K = Id_Q, P = Q.
  • 5. Inverse Problems. Forward model: y = K f0 + w ∈ R^P. Denoising: K = Id_Q, P = Q. Inpainting: for a set Ω of missing pixels, P = Q − |Ω| and (Kf)(x) = 0 if x ∈ Ω, f(x) if x ∉ Ω.
  • 6. Inverse Problems. Forward model: y = K f0 + w ∈ R^P. Denoising: K = Id_Q, P = Q. Inpainting: for a set Ω of missing pixels, P = Q − |Ω| and (Kf)(x) = 0 if x ∈ Ω, f(x) if x ∉ Ω. Super-resolution: Kf = (f ⋆ k) ↓_s, P = Q/s.
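Masking operators like the inpainting K above are easy to prototype; a minimal numpy sketch (the 4×4 image and the mask layout are made-up illustrations, not from the slides):

```python
import numpy as np

def make_inpainting_operator(mask):
    """K zeroes the pixels in the missing set Omega (mask == True), keeps the rest."""
    def K(f):
        g = f.copy()
        g[mask] = 0.0
        return g
    return K

# Toy 4x4 image with |Omega| = 4 missing pixels, so P = Q - 4 observed values.
rng = np.random.default_rng(0)
f0 = rng.standard_normal((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = mask[2, 3] = mask[0, 2] = mask[3, 0] = True

K = make_inpainting_operator(mask)
y = K(f0)
print(np.count_nonzero(y == 0.0))  # the 4 masked pixels are zeroed
```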
  • 7. Inverse Problem in Medical Imaging. Tomography: Kf = (p_{θ_k})_{1 ≤ k ≤ K}, projections of f along K directions.
  • 8. Inverse Problem in Medical Imaging. Tomography: Kf = (p_{θ_k})_{1 ≤ k ≤ K}. Magnetic resonance imaging (MRI): Kf = (f̂(ω))_{ω ∈ Ω}, partial measurements of the Fourier transform f̂.
  • 9. Inverse Problem in Medical Imaging. Tomography: Kf = (p_{θ_k})_{1 ≤ k ≤ K}. Magnetic resonance imaging (MRI): Kf = (f̂(ω))_{ω ∈ Ω}. Other examples: MEG, EEG, ...
  • 10. Inverse Problem Regularization. Noisy measurements: y = K f0 + w. Prior model: J : R^Q → R assigns a score to images.
  • 11. Inverse Problem Regularization. Noisy measurements: y = K f0 + w. Prior model: J : R^Q → R assigns a score to images. f⋆ ∈ argmin_{f ∈ R^Q} 1/2 ‖y − K f‖² + λ J(f): data fidelity plus regularity.
  • 12. Inverse Problem Regularization. Noisy measurements: y = K f0 + w. Prior model: J : R^Q → R. f⋆ ∈ argmin_{f ∈ R^Q} 1/2 ‖y − K f‖² + λ J(f). Choice of λ: tradeoff between the noise level ‖w‖ and the regularity J(f0).
  • 13. Inverse Problem Regularization. Noisy measurements: y = K f0 + w. Prior model: J : R^Q → R. f⋆ ∈ argmin_{f ∈ R^Q} 1/2 ‖y − K f‖² + λ J(f). Choice of λ: tradeoff between ‖w‖ and J(f0). No noise: λ → 0⁺, and one minimizes f⋆ ∈ argmin_{f ∈ R^Q, Kf = y} J(f).
  • 14. Smooth and Cartoon Priors. Sobolev prior: J(f) = ∫ ‖∇f(x)‖² dx (plot of |∇f|²).
  • 15. Smooth and Cartoon Priors. Sobolev prior: J(f) = ∫ ‖∇f(x)‖² dx. Total variation prior: J(f) = ∫ ‖∇f(x)‖ dx = ∫_R length(C_t) dt, where C_t are the level sets of f (plots of |∇f|² and |∇f|).
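The contrast between the two priors can be checked numerically with finite differences; a sketch using the simpler anisotropic TV variant Σ|∂₁f| + Σ|∂₂f| (the test image is a made-up cartoon edge):

```python
import numpy as np

def sobolev_prior(f):
    """Discrete J(f) = sum ||grad f||^2 via forward differences."""
    gx, gy = np.diff(f, axis=0), np.diff(f, axis=1)
    return np.sum(gx ** 2) + np.sum(gy ** 2)

def tv_prior(f):
    """Discrete (anisotropic) J(f) = sum ||grad f||."""
    gx, gy = np.diff(f, axis=0), np.diff(f, axis=1)
    return np.sum(np.abs(gx)) + np.sum(np.abs(gy))

# A cartoon image with a single jump of height 2: the Sobolev prior grows
# quadratically with the jump height, TV only linearly, so TV tolerates edges.
f = np.zeros((16, 16))
f[:, 8:] = 2.0
print(sobolev_prior(f), tv_prior(f))  # 64.0 vs 32.0
```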
  • 16. Inpainting Example. Input y = K f0 + w; reconstructions with the Sobolev and total variation priors.
  • 17. Overview • Inverse Problems Regularization • Sparse Synthesis Regularization • Examples: Sparse Wavelet Regularizations • Iterative Soft Thresholding • Sparse Seismic Deconvolution
  • 18. Redundant Dictionaries. Dictionary Φ = (φ_m)_m ∈ R^{Q×N}, N ≥ Q.
  • 19. Redundant Dictionaries. Dictionary Φ = (φ_m)_m ∈ R^{Q×N}, N ≥ Q. Fourier: φ_m = e^{i ω_m ·}, m indexes the frequency.
  • 20. Redundant Dictionaries. Dictionary Φ = (φ_m)_m ∈ R^{Q×N}, N ≥ Q. Fourier: φ_m = e^{i ω_m ·}. Wavelets: φ_m = ψ(2^{−j} R_θ x − n) with m = (j, θ, n) indexing scale, orientation (θ = 1, 2) and position.
  • 21. Redundant Dictionaries. Dictionary Φ = (φ_m)_m ∈ R^{Q×N}, N ≥ Q. Fourier: φ_m = e^{i ω_m ·}. Wavelets: φ_m = ψ(2^{−j} R_θ x − n), m = (j, θ, n). DCT, curvelets, bandlets, ...
  • 22. Redundant Dictionaries. Dictionary Φ = (φ_m)_m ∈ R^{Q×N}, N ≥ Q. Fourier: φ_m = e^{i ω_m ·}. Wavelets: φ_m = ψ(2^{−j} R_θ x − n), m = (j, θ, n). DCT, curvelets, bandlets, ... Synthesis: f = Σ_m x_m φ_m = Φ x, mapping the coefficients x to the image f = Φx.
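To make the synthesis f = Φx concrete, here is a toy redundant dictionary, the union of the Dirac basis and DCT-like cosine atoms (sizes and atom choices are illustrative, not from the slides):

```python
import numpy as np

Q = 32
# Redundant dictionary: union of the Dirac basis and a cosine basis, N = 2Q >= Q.
diracs = np.eye(Q)
k, t = np.meshgrid(np.arange(Q), np.arange(Q), indexing="ij")
cosines = np.cos(np.pi * (t + 0.5) * k / Q).T   # DCT-II-like atoms as columns
cosines /= np.linalg.norm(cosines, axis=0)      # unit-norm atoms
Phi = np.hstack([diracs, cosines])              # Q x N with N = 2Q

# Synthesis f = Phi x: one spike plus one oscillation, i.e. a 2-sparse x.
x = np.zeros(2 * Q)
x[5] = 1.0        # a Dirac atom
x[Q + 3] = 2.0    # a cosine atom
f = Phi @ x
print(f.shape, np.count_nonzero(x))
```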
  • 23. Sparse Priors. Ideal sparsity: for most m, x_m = 0. J0(x) = #{m : x_m ≠ 0} (coefficients x of the image f0).
  • 24. Sparse Priors. Ideal sparsity: J0(x) = #{m : x_m ≠ 0}. Sparse approximation: f⋆ = Φ x⋆ where x⋆ ∈ argmin_{x ∈ R^N} ‖f0 − Φx‖² + T² J0(x).
  • 25. Sparse Priors. Sparse approximation: f⋆ = Φ x⋆ where x⋆ ∈ argmin_{x ∈ R^N} ‖f0 − Φx‖² + T² J0(x). Orthogonal Φ: Φ Φ* = Φ* Φ = Id_N, and the solution is hard thresholding: x⋆_m = ⟨f0, φ_m⟩ if |⟨f0, φ_m⟩| > T, 0 otherwise; f⋆ = S^H_T(f0).
  • 26. Sparse Priors. Sparse approximation: x⋆ ∈ argmin_{x ∈ R^N} ‖f0 − Φx‖² + T² J0(x). Orthogonal Φ: hard thresholding, f⋆ = S^H_T(f0). Non-orthogonal Φ: NP-hard.
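In an orthogonal basis the combinatorial problem reduces to hard thresholding of the analysis coefficients; a sketch with a random orthonormal basis Psi (the basis and the sparsity level are illustrative assumptions):

```python
import numpy as np

def hard_threshold(a, T):
    """Keep coefficients with |a_m| > T, zero out the rest (solves the J0 problem)."""
    return a * (np.abs(a) > T)

rng = np.random.default_rng(1)
Psi, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # orthonormal: Psi.T @ Psi = Id

coeffs = np.concatenate([rng.standard_normal(4), np.zeros(60)])
f0 = Psi @ coeffs                # an exactly 4-sparse image in the basis Psi

a = Psi.T @ f0                   # analysis coefficients <f0, psi_m>
x = hard_threshold(a, T=1e-6)    # hard thresholding
f = Psi @ x                      # sparse approximation, here essentially exact
print(np.count_nonzero(x), np.linalg.norm(f - f0))
```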
  • 27. Convex Relaxation: L1 Prior. J0(x) = #{m : x_m ≠ 0}. Image with 2 pixels: J0(x) = 0 (null image), J0(x) = 1 (sparse image), J0(x) = 2 (non-sparse image); level set shown in the (x1, x2) plane for q = 0.
  • 28. Convex Relaxation: L1 Prior. J0(x) = #{m : x_m ≠ 0}; null, sparse and non-sparse 2-pixel images as above. ℓ^q priors: Jq(x) = Σ_m |x_m|^q (convex for q ≥ 1); level sets shown for q = 0, 1/2, 1, 3/2, 2.
  • 29. Convex Relaxation: L1 Prior. ℓ^q priors: Jq(x) = Σ_m |x_m|^q (convex for q ≥ 1). Sparse ℓ¹ prior: J1(x) = Σ_m |x_m|.
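A quick numeric comparison of the J_q scores on a sparse and a dense vector with the same ℓ¹ norm (both vectors made up for illustration) shows why q ≤ 1 promotes sparsity:

```python
import numpy as np

def Jq(x, q):
    """l^q prior J_q(x) = sum_m |x_m|^q, with J_0 counting the nonzeros."""
    if q == 0:
        return np.count_nonzero(x)
    return np.sum(np.abs(x) ** q)

sparse = np.array([3.0, 0.0, 0.0, 0.0])
dense = np.array([0.75, 0.75, 0.75, 0.75])   # same l1 norm as `sparse`
for q in (0, 0.5, 1, 2):
    print(q, Jq(sparse, q), Jq(dense, q))
# J_1 ties; q < 1 scores the sparse vector lower, q > 1 scores the dense one lower.
```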
  • 30. L1 Regularization. x0 ∈ R^N : coefficients.
  • 31. L1 Regularization. x0 ∈ R^N (coefficients) ↦ f0 = Φ x0 ∈ R^Q (image).
  • 32. L1 Regularization. x0 ∈ R^N (coefficients) ↦ f0 = Φ x0 ∈ R^Q (image) ↦ y = K f0 + w ∈ R^P (observations).
  • 33. L1 Regularization. x0 ∈ R^N ↦ f0 = Φ x0 ∈ R^Q ↦ y = K f0 + w ∈ R^P. Composed operator: K Φ ∈ R^{P×N}.
  • 34. L1 Regularization. Sparse recovery: f⋆ = Φ x⋆ where x⋆ solves min_{x ∈ R^N} 1/2 ‖y − K Φ x‖² + λ ‖x‖₁: data fidelity plus regularization.
  • 35. Noiseless Sparse Regularization. Noiseless measurements: y = Φ x0. x⋆ ∈ argmin_{Φx = y} Σ_m |x_m|.
  • 36. Noiseless Sparse Regularization. Noiseless measurements: y = Φ x0. ℓ¹: x⋆ ∈ argmin_{Φx = y} Σ_m |x_m|, versus ℓ²: x⋆ ∈ argmin_{Φx = y} Σ_m |x_m|².
  • 37. Noiseless Sparse Regularization. x⋆ ∈ argmin_{Φx = y} Σ_m |x_m|: a convex linear program. Interior points, cf. [Chen, Donoho, Saunders] "basis pursuit"; Douglas-Rachford splitting, see [Combettes, Pesquet].
  • 38. Noisy Sparse Regularization. Noisy measurements: y = Φ x0 + w. x⋆ ∈ argmin_{x ∈ R^N} 1/2 ‖y − Φx‖² + λ ‖x‖₁: data fidelity plus regularization.
  • 39. Noisy Sparse Regularization. x⋆ ∈ argmin_x 1/2 ‖y − Φx‖² + λ ‖x‖₁. Equivalence with the constrained formulation: x⋆ ∈ argmin_{‖Φx − y‖ ≤ ε} ‖x‖₁.
  • 40. Noisy Sparse Regularization. x⋆ ∈ argmin_x 1/2 ‖y − Φx‖² + λ ‖x‖₁, equivalently argmin_{‖Φx − y‖ ≤ ε} ‖x‖₁. Algorithms: iterative soft thresholding / forward-backward splitting, see [Daubechies et al.], [Pesquet et al.], etc.; Nesterov multi-step schemes.
  • 41. Overview • Inverse Problems Regularization • Sparse Synthesis Regularization • Examples: Sparse Wavelet Regularizations • Iterative Soft Thresholding • Sparse Seismic Deconvolution
  • 43. Image De-blurring. Original f0; observations y = h ⋆ f0 + w; Sobolev result, SNR = 22.7 dB. Sobolev regularization: f⋆ = argmin_{f ∈ R^N} ‖f ⋆ h − y‖² + λ ‖∇f‖², solved in Fourier by f̂⋆(ω) = ĥ(ω)* ŷ(ω) / (|ĥ(ω)|² + λ |ω|²).
  • 44. Image De-blurring. Sobolev regularization as above (SNR = 22.7 dB). Sparsity regularization: Ψ = translation invariant wavelets; f⋆ = Ψ x⋆ where x⋆ ∈ argmin_x 1/2 ‖h ⋆ (Ψ x) − y‖² + λ ‖x‖₁; SNR = 24.7 dB.
  • 45. Comparison of Regularizations. With the optimal λ for each method: L2 regularization SNR = 21.7 dB, Sobolev SNR = 22.7 dB, Sparsity SNR = 23.7 dB, Translation-invariant sparsity SNR = 24.7 dB.
  • 46. Inpainting Problem. (Kf)(x) = 0 if x ∈ Ω, f(x) if x ∉ Ω. Measurements: y = K f0 + w.
  • 47. Image Separation. Model: f = f1 + f2 + w, with (f1, f2) the components and w the noise.
  • 48. Image Separation. Model: f = f1 + f2 + w, with (f1, f2) the components and w the noise (example decomposition).
  • 49. Image Separation. Union dictionary: Φ = [Φ1, Φ2] ∈ R^{Q×(N1+N2)}. Recovered components: fi⋆ = Φi xi⋆, where (x1⋆, x2⋆) ∈ argmin_{x = (x1, x2) ∈ R^N} 1/2 ‖f − Φx‖² + λ ‖x‖₁.
  • 52. Overview • Inverse Problems Regularization • Sparse Synthesis Regularization • Examples: Sparse Wavelet Regularizations • Iterative Soft Thresholding • Sparse Seismic Deconvolution
  • 53. Sparse Regularization Denoising. Denoising: y = x0 + w ∈ R^N, K = Id. Orthogonal basis: Φ* Φ = Id_N, x = Φ* f. Regularization-based denoising: x⋆ = argmin_{x ∈ R^N} 1/2 ‖x − y‖² + λ J(x). Sparse regularization: J(x) = Σ_m |x_m|^q (with the convention |a|⁰ = 1_{a ≠ 0}(a)).
  • 54. Sparse Regularization Denoising. With J(x) = Σ_m |x_m|^q, the solution is coordinate-wise thresholding: x⋆_m = S^q_λ(y_m).
  • 55. Surrogate Functionals. Sparse regularization: x⋆ ∈ argmin_{x ∈ R^N} E(x) = 1/2 ‖y − Φx‖² + λ ‖x‖₁. Surrogate functional, for τ < 1/‖Φ* Φ‖: E(x, x̃) = E(x) − 1/2 ‖Φ(x − x̃)‖² + 1/(2τ) ‖x − x̃‖².
  • 56. Surrogate Functionals. Theorem: argmin_x E(x, x̃) = S_{λτ}(u), where u = x̃ − τ Φ*(Φ x̃ − y). Proof: E(x, x̃) ∝ 1/2 ‖u − x‖² + λτ ‖x‖₁ + cst.
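The thresholding operator S_T of the theorem is soft thresholding; one can check numerically that it minimizes 1/2‖u − x‖² + T‖x‖₁ (the random test values are arbitrary):

```python
import numpy as np

def soft_threshold(u, T):
    """S_T(u) = sign(u) * max(|u| - T, 0): the proximal operator of T*||.||_1."""
    return np.sign(u) * np.maximum(np.abs(u) - T, 0.0)

# Check the prox property: S_T(u) minimizes (1/2)||u - x||^2 + T*||x||_1.
rng = np.random.default_rng(2)
u, T = rng.standard_normal(5), 0.3
x_star = soft_threshold(u, T)
obj = lambda x: 0.5 * np.sum((u - x) ** 2) + T * np.sum(np.abs(x))
for _ in range(1000):
    assert obj(x_star) <= obj(x_star + 0.1 * rng.standard_normal(5)) + 1e-12
print(x_star)
```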
  • 57. Iterative Thresholding. Algorithm: x^(ℓ+1) = argmin_x E(x, x^(ℓ)). Initialize x^(0) and set ℓ = 0; iterate u^(ℓ) = x^(ℓ) − τ Φ*(Φ x^(ℓ) − y), x^(ℓ+1) = S_{λτ}(u^(ℓ)).
  • 58. Iterative Thresholding. Remark: x^(ℓ) ↦ u^(ℓ) is a gradient descent step on 1/2 ‖Φx − y‖², and S_{λτ} is the proximal step associated to λ ‖x‖₁.
  • 59. Iterative Thresholding. Theorem: if τ < 2/‖Φ* Φ‖, then x^(ℓ) → x⋆.
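Putting the two steps together gives the whole algorithm in a few lines of numpy; a sketch on a synthetic sparse-recovery problem (the dimensions, λ and the random Φ are illustrative choices, not from the slides):

```python
import numpy as np

def ista(Phi, y, lam, n_iter=3000):
    """Iterative soft thresholding for min_x 0.5*||y - Phi x||^2 + lam*||x||_1."""
    tau = 1.0 / np.linalg.norm(Phi, 2) ** 2            # tau < 2/||Phi||^2 holds
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        u = x - tau * Phi.T @ (Phi @ x - y)            # gradient step on fidelity
        x = np.sign(u) * np.maximum(np.abs(u) - lam * tau, 0.0)  # prox of lam*||.||_1
    return x

# Recover a 3-sparse x0 from P = 40 random measurements in dimension N = 100.
rng = np.random.default_rng(3)
P, N = 40, 100
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[[7, 30, 62]] = [2.0, -1.5, 3.0]
y = Phi @ x0
x = ista(Phi, y, lam=0.01)
print(np.linalg.norm(x - x0) / np.linalg.norm(x0))  # small relative error
```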
  • 60. Overview • Inverse Problems Regularization • Sparse Synthesis Regularization • Examples: Sparse Wavelet Regularizations • Iterative Soft Thresholding • Sparse Seismic Deconvolution
  • 62. 1D Idealization. Initial condition: the "wavelet" h is a band-pass filter, and 1D propagation acts as a convolution f ↦ h ⋆ f (plots of h(x) and ĥ(ω)). Observations: y = f0 ⋆ h + w ∈ R^P.
  • 63. Pseudo Inverse. Pseudo-inverse: f̂⁺(ω) = ŷ(ω)/ĥ(ω), i.e. f⁺ = h⁺ ⋆ y = f0 + h⁺ ⋆ w with ĥ⁺(ω) = 1/ĥ(ω). Stabilization: set ĥ⁺(ω) = 0 if |ĥ(ω)| < ε (plots of ĥ, y = h ⋆ f0 + w, 1/ĥ and f⁺).
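A Fourier-domain sketch of the stabilized pseudo-inverse (the band-pass spectrum below is a made-up stand-in for the seismic wavelet ĥ):

```python
import numpy as np

def pseudo_inverse(y, h, eps=1e-2):
    """f+ = h+ * y with h+_hat = 1/h_hat where |h_hat| > eps, 0 elsewhere."""
    h_hat, y_hat = np.fft.fft(h), np.fft.fft(y)
    hp = np.zeros_like(h_hat)
    keep = np.abs(h_hat) > eps
    hp[keep] = 1.0 / h_hat[keep]
    return np.real(np.fft.ifft(hp * y_hat))

n = 64
w = np.fft.fftfreq(n)
h_hat = np.exp(-(np.abs(w) - 0.15) ** 2 / (2 * 0.05 ** 2))  # band-pass "wavelet"
h = np.real(np.fft.ifft(h_hat))

rng = np.random.default_rng(4)
f0 = rng.standard_normal(n)
y = np.real(np.fft.ifft(h_hat * np.fft.fft(f0)))  # noiseless y = h * f0

f_plus = pseudo_inverse(y, h)
# Frequencies inside the pass-band are recovered exactly; the rest are lost, and
# with noise w the division by a small h_hat would amplify it by 1/|h_hat|.
keep = np.abs(h_hat) > 1e-2
print(np.max(np.abs(np.fft.fft(f_plus)[keep] - np.fft.fft(f0)[keep])))
```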
  • 64. Sparse Spikes Deconvolution. Seek f with small ‖f‖₀ from y = f ⋆ h + w. Sparsity basis: Diracs, φ_m[x] = δ[x − m]. f⋆ = argmin_{f ∈ R^N} 1/2 ‖f ⋆ h − y‖² + λ Σ_m |f[m]|.
  • 65. Sparse Spikes Deconvolution. Algorithm, with step size τ < 2/‖Φ* Φ‖ = 2/max_ω |ĥ(ω)|². Inversion: f̃^(k) = f^(k) − τ h̄ ⋆ (h ⋆ f^(k) − y), with h̄[x] = h[−x]. Thresholding: f^(k+1)[m] = S_{λτ}(f̃^(k)[m]).
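The inversion and thresholding steps translate directly into numpy with FFT-based circular convolutions (signal length, blur width and λ are illustrative; noiseless data for simplicity):

```python
import numpy as np

def sparse_spike_deconv(y, h, lam, n_iter=2000):
    """ISTA for min_f 0.5*||h * f - y||^2 + lam*sum_m |f[m]| (circular conv)."""
    h_hat = np.fft.fft(h)
    tau = 1.0 / np.max(np.abs(h_hat)) ** 2          # satisfies tau < 2/max|h_hat|^2
    conv = lambda g_hat, u: np.real(np.fft.ifft(g_hat * np.fft.fft(u)))
    f = np.zeros_like(y)
    for _ in range(n_iter):
        ft = f - tau * conv(np.conj(h_hat), conv(h_hat, f) - y)    # inversion step
        f = np.sign(ft) * np.maximum(np.abs(ft) - lam * tau, 0.0)  # thresholding step
    return f

n = 128
t = np.arange(n)
h = np.exp(-((t - n / 2) ** 2) / (2 * 2.0 ** 2))
h /= h.sum()                      # Gaussian "wavelet"
h = np.roll(h, -n // 2)           # center it at index 0

f0 = np.zeros(n)
f0[[20, 60, 100]] = [2.0, -1.5, 1.0]                       # sparse spike train
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f0)))   # noiseless y = h * f0

f = sparse_spike_deconv(y, h, lam=0.01)
print(np.linalg.norm(f - f0) / np.linalg.norm(f0))  # small relative error
```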
  • 66. Numerical Example. Choosing the optimal λ: oracle, minimizing ‖f0 − f⋆‖. Plots: f0, the observations y = f ⋆ h + w, SNR(f0, f⋆) as a function of λ, and the recovered f⋆.
  • 67. Convergence Study. Sparse deconvolution: f⋆ = argmin_{f ∈ R^N} E(f), with energy E(f) = 1/2 ‖h ⋆ f − y‖² + λ Σ_m |f[m]|. E is not strictly convex, so there is no guaranteed convergence speed. Plots: log10(E(f^(k))/E(f⋆) − 1) and log10(‖f^(k) − f⋆‖/‖f0‖) as functions of k.