Keywords: Error-correcting codes, low-degree polynomials, randomness, linear algebra questions begging to be resolved
A Puzzle

Your friend picks a polynomial f(x) ∈ F_2[x1,…,xm] of degree r ≈ ∛m.
She gives you the entire truth-table of f, i.e. the value of f(v) for every v ∈ {0,1}^m.
Can you recover f? Of course! Just interpolate.
But suppose she corrupts 49% of the bits. Can you still recover f?
Impossible if the errors are adversarial. But what if the 49% are corrupted randomly?
This talk: Yes, and we can do it efficiently!
("Efficiently decoding Reed-Muller codes from random errors")
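The "just interpolate" step is concrete: over F_2, every function on {0,1}^m is a unique multilinear polynomial, and each coefficient is a subset-XOR (Möbius transform) of the truth table. A minimal sketch of this, with function and variable names of my own choosing (not from the slides):

```python
def interpolate(truth_table, m):
    """Recover the multilinear coefficients of f: {0,1}^m -> F_2 from its
    full truth table. truth_table[v] is f at the point whose i-th bit
    (bit i of the integer v) is the value of x_i. Over F_2, the coefficient
    of the monomial prod_{i in S} x_i is the XOR of f over all points
    supported inside S (a Moebius transform)."""
    coeffs = {}
    for s in range(2 ** m):          # s encodes a monomial as a bitmask
        c, t = 0, s
        while True:                  # enumerate all submasks t of s
            c ^= truth_table[t]
            if t == 0:
                break
            t = (t - 1) & s
        coeffs[s] = c
    return coeffs

# f(x1, x2) = x2 + x1*x2: truth table indexed by the integer v = (x2 x1)_2
tt = [0, 0, 1, 0]
print(interpolate(tt, 2))  # {0: 0, 1: 0, 2: 1, 3: 1}, i.e. x2 + x1*x2
```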
Efficiently decoding Reed-Muller codes from random errors

Ramprasad Saptharishi (TIFR), Amir Shpilka (Tel Aviv University), Ben Lee Volk (Tel Aviv University)
Typical Setting

Alicia Machado sends: "Don't vote for Trump"
The channel corrupts the message...
Bobby Jindal receives: "Dude vote for Trump"
We want to encode messages with redundancy to make them resilient to errors.
Reed-Muller Codes: RM(m, r)

Message: A polynomial f ∈ F_2[x1,…,xm] of degree at most r.
Encoding: The evaluation of f at all points in {0,1}^m.
Example: f(x1, x2, x3, x4) = x1x2 + x3x4 encodes to its 16 values at 0000, 0001, …, 1111, namely 0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0.
A linear code, with
▶ Block Length: 2^m =: n
▶ Distance: 2^{m−r} (lightest codeword: x1x2⋯xr)
▶ Dimension: (m choose 0) + (m choose 1) + ⋯ + (m choose r) =: (m choose ≤ r)
▶ Rate: dimension/block length = (m choose ≤ r)/2^m
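The encoding map and the parameters above can be checked directly for small m. A sketch (helper names are mine):

```python
from itertools import product
from math import comb

def rm_encode(monomials, m):
    """Evaluate the F_2 polynomial given as a list of monomials (each a tuple
    of variable indices) at every point of {0,1}^m, in lexicographic order
    of points with x1 as the leftmost coordinate."""
    word = []
    for point in product([0, 1], repeat=m):
        val = 0
        for mono in monomials:
            val ^= int(all(point[i] for i in mono))
        word.append(val)
    return word

# f(x1,x2,x3,x4) = x1*x2 + x3*x4, as on the slide
print(rm_encode([(0, 1), (2, 3)], m=4))
# [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0]

# parameters of RM(m, r)
m, r = 4, 2
n = 2 ** m                                   # block length
dim = sum(comb(m, k) for k in range(r + 1))  # dimension (m choose <= r)
dist = 2 ** (m - r)                          # minimum distance
print(n, dim, dist)                          # 16 11 4
```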
Reed-Muller Codes: RM(m, r)

Generator Matrix: E(m, r), the evaluation matrix of degree-≤ r monomials. Rows are indexed by monomials M of degree at most r, columns by points v ∈ F_2^m, and the (M, v) entry is M(v). (Every codeword is spanned by the rows.)
Also called the inclusion matrix: M(v) = 1 if and only if "M ⊂ v", i.e. every variable of M is set to 1 in v.
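The generator/inclusion matrix is easy to materialize for small parameters. A sketch (names are mine):

```python
from itertools import combinations, product

def eval_matrix(m, r):
    """E(m, r): one row per monomial of degree <= r (a subset of variables),
    one column per point v of {0,1}^m; the (M, v) entry is M evaluated at v,
    which is 1 iff every variable of M is set in v ("M subset of v")."""
    monos = [s for k in range(r + 1) for s in combinations(range(m), k)]
    points = list(product([0, 1], repeat=m))
    return [[int(all(v[i] for i in s)) for v in points] for s in monos]

E = eval_matrix(2, 1)
# rows: monomials 1, x1, x2; columns: points 00, 01, 10, 11
print(E)  # [[1, 1, 1, 1], [0, 0, 1, 1], [0, 1, 0, 1]]
```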
Decoding RM(m, r)

Worst-Case Errors: Up to d/2, where d = 2^{m−r} is the minimum distance (algorithm by [Reed54]).
List Decoding: the maximum radius with a constant number of codewords. [Gopalan-Klivans-Zuckerman08, Bhowmick-Lovett15] List-decoding radius = d.
Two schools of study

▶ Hamming Model, a.k.a. worst-case errors
  ▶ Generally the model of interest for complexity theorists,
  ▶ Reed-Muller codes are not the best for these (far from optimal rate-distance tradeoffs).
▶ Shannon Model, a.k.a. random errors
  ▶ The standard model for coding theorists,
  ▶ Recent breakthroughs (e.g. Arıkan's polar codes),
  ▶ An ongoing research endeavor: how do Reed-Muller codes perform in the Shannon model?
Models for random corruptions (channels)

Binary Erasure Channel — BEC(p): each bit is independently replaced by '?' with probability p.
Binary Symmetric Channel — BSC(p): each bit is independently flipped with probability p.
(Almost) equivalent: a fixed number t ≈ pn of random errors.
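Both channels are one-liners to simulate. A sketch (names are mine; a shared `rng` is passed in for reproducibility):

```python
import random

def bec(word, p, rng):
    """Binary erasure channel: each bit independently becomes '?' w.p. p."""
    return ['?' if rng.random() < p else b for b in word]

def bsc(word, p, rng):
    """Binary symmetric channel: each bit is independently flipped w.p. p."""
    return [b ^ 1 if rng.random() < p else b for b in word]

rng = random.Random(0)
print(bec([0, 0, 1, 1, 0, 0, 0, 1], 0.3, rng))
print(bsc([0, 0, 1, 1, 0, 0, 0, 1], 0.3, rng))
```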
Channel Capacity

Question: Given a channel, what is the best rate we can hope for?
If X^n is transmitted through the channel and received as Y^n, how many bits of information about X^n do we get from Y^n?
▶ BEC(p): intuitively, (1 − p)n.
▶ BSC(p): intuitively, (1 − H(p))n (since (n choose pn) ≈ 2^{H(p)·n}).
[Shannon48] The maximum rate that enables decoding (w.h.p.) is 1 − p for BEC(p) and 1 − H(p) for BSC(p). Codes achieving this bound are called capacity achieving.
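The two capacity formulas, for reference (a sketch; function names are mine):

```python
from math import log2

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def capacity_bec(p):
    return 1 - p                       # Shannon capacity of BEC(p)

def capacity_bsc(p):
    return 1 - binary_entropy(p)       # Shannon capacity of BSC(p)

print(capacity_bec(0.25))  # 0.75
print(capacity_bsc(0.5))   # 0.0 -- no information gets through
```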
Motivating questions for this talk

How well do Reed-Muller codes perform in the Shannon model? In BEC(p)? In BSC(p)?
Are they as good as polar codes?
Outline

▶ So far
▶ RM codes and erasures
▶ Decoding errors, efficiently
▶ Conclusions and Q&A
(Diagram not to scale)
Dual of a code

Any linear space can be specified by a generating basis, or as the solution set of a system of constraints.
C⊥ = {u : ⟨v, u⟩ = 0 for every v ∈ C}
Parity Check Matrix: a basis for C⊥ stacked as rows. Then (PCM)·v = 0 ⟺ v ∈ C.
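For a toy code, the dual and the parity-check condition can be checked by brute force. A sketch (names are mine):

```python
from itertools import product

def span_f2(rows):
    """All F_2 linear combinations of the given rows."""
    n = len(rows[0])
    return {tuple(sum(c * r[i] for c, r in zip(coeffs, rows)) % 2
                  for i in range(n))
            for coeffs in product([0, 1], repeat=len(rows))}

def dual_code(code, n):
    """C-perp = {u : <v, u> = 0 over F_2 for every v in C}."""
    return {u for u in product([0, 1], repeat=n)
            if all(sum(a * b for a, b in zip(u, v)) % 2 == 0 for v in code)}

C = span_f2([(1, 1, 1)])   # the 3-bit repetition code {000, 111}
D = dual_code(C, 3)        # its dual: the even-weight words
print(sorted(D))           # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
assert dual_code(D, 3) == C   # (C-perp)-perp = C for linear codes
```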
Linear codes and erasures

Received: 0 0 1 1 0 ? ? ?
Question: When can we decode from a pattern of erasures?
We cannot if two distinct codewords agree on all the unerased positions, i.e. if some nonzero codeword is supported entirely inside the erased positions.
Observation: A pattern of erasures is correctable if and only if the corresponding columns of the parity check matrix are linearly independent.
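The observation turns erasure decoding into a rank computation over F_2. A sketch (the bitmask representation and names are mine):

```python
def gf2_rank(vecs):
    """Rank over F_2 of a list of vectors given as integer bitmasks,
    via Gaussian elimination keyed on leading-bit positions."""
    pivots = {}                      # leading-bit position -> basis vector
    for v in vecs:
        while v:
            lead = v.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = v
                break
            v ^= pivots[lead]
    return len(pivots)

def erasures_correctable(parity_check_columns):
    """True iff the erased positions' parity-check columns are independent."""
    return gf2_rank(parity_check_columns) == len(parity_check_columns)

# 3-bit repetition code with parity checks x1+x2 and x2+x3;
# column j of the parity check matrix, with row i stored as bit i
cols = [0b01, 0b11, 0b10]
print(erasures_correctable([cols[0], cols[1]]))   # True: any 2 erasures ok
print(erasures_correctable(cols))                 # False: can't erase all 3
```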
Reed-Muller codes under erasures

Cool Fact: The dual of RM(m, r) is RM(m, r′) where r′ = m − r − 1.
Hence, the parity check matrix of RM(m, r) is the generator matrix E(m, r′).
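The Cool Fact can be verified numerically for small m: every row of E(m, r) is orthogonal to every row of E(m, m − r − 1), and the dimensions add up to 2^m. A sketch:

```python
from itertools import combinations, product

def eval_rows(m, r):
    """Rows of E(m, r), each as a bitmask over the 2^m points of {0,1}^m."""
    points = list(product([0, 1], repeat=m))
    rows = []
    for k in range(r + 1):
        for s in combinations(range(m), k):
            bits = 0
            for j, v in enumerate(points):
                if all(v[i] for i in s):
                    bits |= 1 << j
            rows.append(bits)
    return rows

m, r = 4, 1
G = eval_rows(m, r)              # generator of RM(4, 1)
H = eval_rows(m, m - r - 1)      # generator of the claimed dual, RM(4, 2)
# <g, h> over F_2 is the parity of the AND of the two bitmasks
assert all(bin(g & h).count("1") % 2 == 0 for g in G for h in H)
assert len(G) + len(H) == 2 ** m     # 5 + 11 = 16: dimensions match too
print("dual(RM(4,1)) = RM(4,2): checks pass")
```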
Reed-Muller codes under erasures

The parity check matrix E(m, r′) has rows indexed by monomials of degree ≤ r′ (subsets of [m] of size ≤ r′) and columns indexed by points v ∈ {0,1}^m, with entries M(v).
Question: Let R = (m choose ≤ r′). Suppose you pick (0.99)R columns at random. Are they linearly independent (with high probability)?
From erasures to errors

Theorem [ASW]: Any pattern correctable from erasures in RM(m, m − r − 1) is correctable from errors in RM(m, m − 2r − 2).
Remark: If r falls in the green zone, then RM(m, m − r − 1) can correct ≈ (m choose ≤ r) random erasures.
(The slides show a number line of the r-ranges for which this holds; the figure is not reproduced here.)
What we want to prove

Theorem [S-Shpilka-Volk]: There exists an efficient algorithm with the following guarantee: given a corrupted codeword of RM(m, m − 2r − 2) whose error pattern is correctable as erasures in RM(m, m − r − 1) (which holds w.h.p. for random errors), it recovers the original codeword.
What we have access to

Received word is w := v + err_S for some v ∈ RM(m, m − 2r − 2) and S = {u1,…, ut}.
The parity check matrix of RM(m, m − 2r − 2) is E(m, 2r + 1), since the dual code is RM(m, 2r + 1).
Because E(m, 2r + 1)·v = 0, we get E(m, 2r + 1)·w = E(m, 2r + 1)·err_S: the sum of the columns of E(m, 2r + 1) indexed by S.
In other words, the syndrome tells us Σ_{u_i ∈ S} f(u_i) for every polynomial f of degree ≤ 2r + 1.
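Because the codeword lies in the kernel of the parity check matrix, the syndrome depends only on the error positions. A toy check with r = 0, so the code is RM(3, 1) and the parity check is E(3, 1) (a sketch; names are mine):

```python
from itertools import combinations, product

def eval_matrix(m, r):
    """E(m, r): row per monomial of degree <= r, column per point of {0,1}^m."""
    monos = [s for k in range(r + 1) for s in combinations(range(m), k)]
    pts = list(product([0, 1], repeat=m))
    return [[int(all(v[i] for i in s)) for v in pts] for s in monos], pts

m, r = 3, 0
E, pts = eval_matrix(m, 2 * r + 1)     # parity check of RM(3, m - 2r - 2)
v = [p[0] for p in pts]                # codeword: evaluation of f = x1
w = list(v)
w[5] ^= 1                              # err_S with S = {pts[5]}
syndrome = [sum(row[j] * w[j] for j in range(2 ** m)) % 2 for row in E]
assert syndrome == [row[5] for row in E]   # E*w = sum of columns indexed by S
print(syndrome)  # [1, 1, 0, 1]
```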
Erasure Correctable Patterns

A pattern of erasures is correctable if and only if the corresponding columns in the parity check matrix are linearly independent.
The Decoding Algorithm

Input: Received word w (= v + err_S)
Lemma: Assume that S is a pattern of erasures correctable in RM(m, m − r − 1). Then the error set S can be recovered efficiently from the syndrome E(m, 2r + 1)·w.
Proof of Lemma

Claim 1: Let u1,…, ut ∈ {0,1}^m be such that the corresponding columns of E(m, r) (the degree-≤ r evaluation vectors of u1,…, ut) are linearly independent. Then, for each i ∈ [t], there is a polynomial g_i of degree ≤ r with g_i(u_i) = 1 and g_i(u_j) = 0 for all j ≠ i.
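Claim 1 is itself just linear algebra: the coefficient vector of each indicator polynomial g_i solves a linear system over F_2 whose rows are the monomial evaluations at u_1,…, u_t. A sketch (the solver and all names are mine; it assumes the full-rank hypothesis of the claim):

```python
from itertools import combinations

def monomials(m, r):
    return [s for k in range(r + 1) for s in combinations(range(m), k)]

def gf2_solve(A, b):
    """Solve A x = b over F_2 by Gaussian elimination (assumes solvable)."""
    rows = [row[:] + [bi] for row, bi in zip(A, b)]
    ncols = len(rows[0]) - 1
    piv_cols, r_i = [], 0
    for c in range(ncols):
        p = next((i for i in range(r_i, len(rows)) if rows[i][c]), None)
        if p is None:
            continue
        rows[r_i], rows[p] = rows[p], rows[r_i]
        for i in range(len(rows)):
            if i != r_i and rows[i][c]:
                rows[i] = [a ^ bb for a, bb in zip(rows[i], rows[r_i])]
        piv_cols.append(c)
        r_i += 1
    x = [0] * ncols
    for i, c in enumerate(piv_cols):
        x[c] = rows[i][-1]
    return x

# points u1, u2, u3 in {0,1}^2 with independent degree-<=1 columns
us = [(1, 0), (0, 1), (1, 1)]
monos = monomials(2, 1)
A = [[int(all(u[i] for i in s)) for s in monos] for u in us]
g1 = gf2_solve(A, [1, 0, 0])           # want g1(u1)=1, g1(u2)=g1(u3)=0
evals = [sum(g1[k] * int(all(u[i] for i in s))
             for k, s in enumerate(monos)) % 2 for u in us]
print(g1, evals)   # [1, 0, 1] (i.e. g1 = 1 + x2) and [1, 0, 0]
```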
Decoding Algorithm

▶ Input: Received word w (= v + err_S)
▶ Compute E(m, 2r + 1)·w to get the value of Σ_{u_i ∈ S} f(u_i) for every f of degree ≤ 2r + 1.
▶ Use these values, as in the Lemma, to recover the error set S by linear algebra; flip the positions in S and output the corrected codeword.
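Putting it together on the smallest instance (r = 0, code RM(3, 1)): with a single error, the Lemma's linear algebra degenerates to matching the syndrome against the parity-check columns. A toy sketch of that special case, not the full algorithm of the paper:

```python
from itertools import combinations, product

def eval_matrix(m, r):
    monos = [s for k in range(r + 1) for s in combinations(range(m), k)]
    pts = list(product([0, 1], repeat=m))
    return [[int(all(v[i] for i in s)) for v in pts] for s in monos], pts

def decode_one_error(w, m):
    """Correct RM(m, m-2) (the r = 0 case) against a single flipped bit:
    the syndrome E(m,1)*w equals the parity-check column at the error,
    and the columns of E(m,1) are distinct and nonzero."""
    E, pts = eval_matrix(m, 1)
    n = 2 ** m
    syn = [sum(row[j] * w[j] for j in range(n)) % 2 for row in E]
    if not any(syn):
        return w                                  # already a codeword
    j = next(j for j in range(n) if [row[j] for row in E] == syn)
    return w[:j] + [w[j] ^ 1] + w[j + 1:]

m = 3
_, pts = eval_matrix(m, 1)
v = [p[0] ^ p[2] for p in pts]    # codeword: evaluation of f = x1 + x3
w = list(v); w[6] ^= 1            # one error
assert decode_one_error(w, m) == v
print("recovered")
```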
ICYMI

Remark: If r falls in the green zone, then RM(m, m − r − 1) can correct ≈ (m choose ≤ r) random erasures.
(The accompanying number line of r-ranges is not reproduced here.)
The obvious open question

Question: Let R = (m choose ≤ r). Suppose you pick (0.99)R columns of E(m, r) at random. Are they linearly independent (with high probability)?
Abstract

Consider the following setting. Suppose we are given as input a "corrupted" truth-table of a polynomial f(x1,…,xm) of degree r = o(√m), with a random set of a 1/2 − o(1) fraction of the evaluations flipped. Can the polynomial f be recovered? Turns out we can, and we can do it efficiently! The above question is a demonstration of the reliability of Reed-Muller codes under the random error model. For Reed-Muller codes over F_2, the message is thought of as an m-variate polynomial of degree r, and its encoding is just its evaluation over all of F_2^m.

In this talk, we shall study the resilience of RM codes in the *random error* model. We shall see that under random errors, RM codes can efficiently decode many more errors than their minimum distance suggests. (For example, in the above toy example, the minimum distance is about 1/2^{√m} but we can correct close to a 1/2-fraction of random errors.) This builds on a recent work of [Abbe-Shpilka-Wigderson-2015], who established strong connections between decoding erasures and decoding errors. The main results in this talk are constructive versions of those connections that yield efficient decoding algorithms. This is joint work with Amir Shpilka and Ben Lee Volk.
Efficiently decoding Reed-Muller codes from random errors

  1. 1. Keywords: Error-correcting codes, low-degree polynomials, randomness, linear algebra questions begging to be resolved
  2. 2. A Puzzle Your friend picks a polynomial f (x) ∈ 2[x1,··· , xm] of degree r ≈ 3 m.
  3. 3. A Puzzle Your friend picks a polynomial f (x) ∈ 2[x1,··· , xm] of degree r ≈ 3 m. She gives you the entire truth-table of f , i.e. the value of f (v) for every v ∈ {0,1}m . 1 0 1 1 0 0 01 0 1 1 0 0 0 1 1 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
  4. 4. A Puzzle Your friend picks a polynomial f (x) ∈ 2[x1,··· , xm] of degree r ≈ 3 m. She gives you the entire truth-table of f , i.e. the value of f (v) for every v ∈ {0,1}m . 1 0 1 1 0 0 01 0 1 1 0 0 0 1 1 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 Can you recover f ?
  5. 5. A Puzzle Your friend picks a polynomial f (x) ∈ 2[x1,··· , xm] of degree r ≈ 3 m. She gives you the entire truth-table of f , i.e. the value of f (v) for every v ∈ {0,1}m . 1 0 1 1 0 0 01 0 1 1 0 0 0 1 1 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 Can you recover f ? Of course! Just interpolate.
  6. 6. A Puzzle Your friend picks a polynomial f (x) ∈ 2[x1,··· , xm] of degree r ≈ 3 m. She gives you the entire truth-table of f , i.e. the value of f (v) for every v ∈ {0,1}m . But she corrupts 49% of the bits. 1 0 1 1 0 0 01 0 1 1 0 0 0 1 1 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 Can you recover f ?
  7. 7. A Puzzle Your friend picks a polynomial f (x) ∈ 2[x1,··· , xm] of degree r ≈ 3 m. She gives you the entire truth-table of f , i.e. the value of f (v) for every v ∈ {0,1}m . But she corrupts 49% of the bits. 1 0 1 1 0 0 01 0 1 1 0 0 0 1 1 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 Can you recover f ?
  8. 8. A Puzzle Your friend picks a polynomial f (x) ∈ 2[x1,··· , xm] of degree r ≈ 3 m. She gives you the entire truth-table of f , i.e. the value of f (v) for every v ∈ {0,1}m . But she corrupts 49% of the bits. 0 1 0 0 1 1 11 0 1 1 0 0 0 1 1 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 Can you recover f ?
  9. 9. A Puzzle Your friend picks a polynomial f (x) ∈ 2[x1,··· , xm] of degree r ≈ 3 m. She gives you the entire truth-table of f , i.e. the value of f (v) for every v ∈ {0,1}m . But she corrupts 49% of the bits. 0 1 0 0 1 1 11 0 1 1 0 0 0 1 1 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 Can you recover f ?
  10. 10. A Puzzle Your friend picks a polynomial f (x) ∈ 2[x1,··· , xm] of degree r ≈ 3 m. She gives you the entire truth-table of f , i.e. the value of f (v) for every v ∈ {0,1}m . But she corrupts 49% of the bits. 0 1 0 0 1 1 11 0 1 1 0 0 0 1 1 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 Can you recover f ? Impossible if errors are adversarial...
  11. 11. A Puzzle Your friend picks a polynomial f (x) ∈ 2[x1,··· , xm] of degree r ≈ 3 m. She gives you the entire truth-table of f , i.e. the value of f (v) for every v ∈ {0,1}m . But she corrupts 49% of the bits randomly. 0 1 0 0 1 1 11 0 1 1 0 0 0 1 1 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 Can you recover f ?
  12. 12. A Puzzle Your friend picks a polynomial f (x) ∈ 2[x1,··· , xm] of degree r ≈ 3 m. She gives you the entire truth-table of f , i.e. the value of f (v) for every v ∈ {0,1}m . But she corrupts 49% of the bits randomly. 0 1 0 0 1 1 11 0 1 1 0 0 0 1 1 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 Can you recover f ? This talk: Yes, and we can do it efficiently!
  13. 13. A Puzzle Your friend picks a polynomial f (x) ∈ 2[x1,··· , xm] of degree r ≈ 3 m. She gives you the entire truth-table of f , i.e. the value of f (v) for every v ∈ {0,1}m . But she corrupts 49% of the bits randomly. 0 1 0 0 1 1 11 0 1 1 0 0 0 1 1 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 Can you recover f ? This talk: Yes, and we can do it efficiently! (“Efficiently decoding Reed-Muller codes from random errors”)
  14. 14. Efficiently decoding Reed-Muller codes from random errors Ramprasad Saptharishi Amir Shpilka Ben Lee Volk TIFR Tel Aviv University Tel Aviv University
  15. 15. Typical Setting Alicia Machado Bobby Jindal
  16. 16. Typical Setting Alicia Machado sends Dont vote for Trump Bobby Jindal
  17. 17. Typical Setting Alicia Machado sends Dont vote for Trump Channel corrupts the message... Bobby Jindal
  18. 18. Typical Setting Alicia Machado sends Dont vote for Trump Channel corrupts the message... Bobby Jindal receives Dude vote for Trump
  19. 19. Typical Setting Alicia Machado sends Dont vote for Trump Channel corrupts the message... Bobby Jindal receives Dude vote for Trump Want to encode messages with redundancies to make it resilient to errors
  20. 20. Reed-Muller Codes: RM(m, r) Message: A degree polynomial f ∈ 2[x1,··· , xm] of degree at most r. Encoding: The evaluation of f on all points in {0,1}m . f (x1, x2, x3, x4) = x1x2 + x3x4 −→ 0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
  21. 21. Reed-Muller Codes: RM(m, r) Message: A degree polynomial f ∈ 2[x1,··· , xm] of degree at most r. Encoding: The evaluation of f on all points in {0,1}m . f (x1, x2, x3, x4) = x1x2 + x3x4 −→ 0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 A linear code, with
  22. 22. Reed-Muller Codes: RM(m, r) Message: A degree polynomial f ∈ 2[x1,··· , xm] of degree at most r. Encoding: The evaluation of f on all points in {0,1}m . f (x1, x2, x3, x4) = x1x2 + x3x4 −→ 0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 A linear code, with ▶ Block Length: 2m := n
  23. 23. Reed-Muller Codes: RM(m, r) Message: A degree polynomial f ∈ 2[x1,··· , xm] of degree at most r. Encoding: The evaluation of f on all points in {0,1}m . f (x1, x2, x3, x4) = x1x2 + x3x4 −→ 0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 A linear code, with ▶ Block Length: 2m := n ▶ Distance: 2m−r (lightest codeword: x1x2 ··· xr )
  24. 24. Reed-Muller Codes: RM(m, r) Message: A degree polynomial f ∈ 2[x1,··· , xm] of degree at most r. Encoding: The evaluation of f on all points in {0,1}m . f (x1, x2, x3, x4) = x1x2 + x3x4 −→ 0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 A linear code, with ▶ Block Length: 2m := n ▶ Distance: 2m−r (lightest codeword: x1x2 ··· xr ) ▶ Dimension: m 0 + m 1 + ··· + m r =: m ≤r
  25. 25. Reed-Muller Codes: RM(m, r) Message: A degree polynomial f ∈ 2[x1,··· , xm] of degree at most r. Encoding: The evaluation of f on all points in {0,1}m . f (x1, x2, x3, x4) = x1x2 + x3x4 −→ 0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 A linear code, with ▶ Block Length: 2m := n ▶ Distance: 2m−r (lightest codeword: x1x2 ··· xr ) ▶ Dimension: m 0 + m 1 + ··· + m r =: m ≤r ▶ Rate: dimension/block length = m ≤r /2m
  26. 26. Reed-Muller Codes: RM(m, r) Generator Matrix: evaluation matrix of deg ≤ r monomials: v ∈ m 2 M         := E(m, r)M(v)
  27. 27. Reed-Muller Codes: RM(m, r) Generator Matrix: evaluation matrix of deg ≤ r monomials: v ∈ m 2 M         := E(m, r)M(v) (Every codeword is spanned by the rows.)
  28. 28. Reed-Muller Codes: RM(m, r) Generator Matrix: evaluation matrix of deg ≤ r monomials: v ∈ m 2 M         := E(m, r)M(v) (Every codeword is spanned by the rows.) Also called the inclusion matrix (M(v) = 1 if and only if “M ⊂ v”).
  29. 29. Decoding RM(m, r) Worst Case Errors: d
  30. 30. Decoding RM(m, r) Worst Case Errors: Up to d/2 (d = 2m−r is minimal distance). d d/2
  31. 31. Decoding RM(m, r) Worst Case Errors: Up to d/2 (d = 2m−r is minimal distance). d d/2 (algorithm by [Reed54])
  32. 32. Decoding RM(m, r) Worst Case Errors: Up to d/2 (d = 2m−r is minimal distance). d d/2 (algorithm by [Reed54]) List Decoding: max radius with constant # of words
  33. 33. Decoding RM(m, r) Worst Case Errors: Up to d/2 (d = 2m−r is minimal distance). d d/2 (algorithm by [Reed54]) List Decoding: max radius with constant # of words d [Gopalan-Klivans-Zuckerman08, Bhowmick-Lovett15] List decoding radius = d.
  34. 34. Two schools of study ▶ Hamming Model a.k.a worst-case errors ▶ Generally the model of interest for complexity theorists, ▶ Reed-Muller codes are not the best for these (far from optimal rate-distance tradeoffs). ▶ Shannon Model a.k.a random errors ▶ The standard model for coding theorists, ▶ Recent breakthroughs (e.g. Arıkan’s polar codes),
  35. 35. Two schools of study ▶ Hamming Model a.k.a worst-case errors ▶ Generally the model of interest for complexity theorists, ▶ Reed-Muller codes are not the best for these (far from optimal rate-distance tradeoffs). ▶ Shannon Model a.k.a random errors ▶ The standard model for coding theorists, ▶ Recent breakthroughs (e.g. Arıkan’s polar codes), ▶ An ongoing research endeavor: How do Reed-Muller codes perform in the Shannon model?
  36. 36. Models for random corruptions (channels) Binary Erasure Channel — BEC(p) Each bit independently replaced by ‘?’ with probability p
  37. 37. Models for random corruptions (channels) Binary Erasure Channel — BEC(p) Each bit independently replaced by ‘?’ with probability p 0 0 1 1 00 0 1
  38. 38. Models for random corruptions (channels) Binary Erasure Channel — BEC(p) Each bit independently replaced by ‘?’ with probability p 0 0 1 1 0? ? ?
  39. 39. Models for random corruptions (channels) Binary Erasure Channel — BEC(p) Each bit independently replaced by ‘?’ with probability p 0 0 1 1 0? ? ? Binary Symmetric Channel — BSC(p) Each bit independently flipped with probability p
  40. 40. Models for random corruptions (channels) Binary Erasure Channel — BEC(p) Each bit independently replaced by ‘?’ with probability p 0 0 1 1 0? ? ? Binary Symmetric Channel — BSC(p) Each bit independently flipped with probability p 0 0 1 1 00 0 1
  41. 41. Models for random corruptions (channels) Binary Erasure Channel — BEC(p) Each bit independently replaced by ‘?’ with probability p 0 0 1 1 0? ? ? Binary Symmetric Channel — BSC(p) Each bit independently flipped with probability p 0 0 1 1 01 1 0
  42. 42. Models for random corruptions (channels) Binary Erasure Channel — BEC(p) Each bit independently replaced by ‘?’ with probability p 0 0 1 1 0? ? ? Binary Symmetric Channel — BSC(p) Each bit independently flipped with probability p 0 0 1 1 01 1 0 (almost) equiv: fixed number t ≈ pn of random errors
  43. 43. Channel Capacity Question: Given a channel, what is the best rate we can hope for?
  44. 44. Channel Capacity Question: Given a channel, what is the best rate we can hope for? Binary Erasure Channel — BEC(p) Each bit independently replaced by ‘?’ with probability p 0 0 1 1 0? ? ?
  45. 45. Channel Capacity Question: Given a channel, what is the best rate we can hope for? Binary Erasure Channel — BEC(p) Each bit independently replaced by ‘?’ with probability p 0 0 1 1 0? ? ? If X n is transmitted to the channel and received as Y n , how many bits of information about X n do we get from Y n ?
  46. 46. Channel Capacity Question: Given a channel, what is the best rate we can hope for? Binary Erasure Channel — BEC(p) Each bit independently replaced by ‘?’ with probability p 0 0 1 1 0? ? ? If X n is transmitted to the channel and received as Y n , how many bits of information about X n do we get from Y n ? Intuitively, (1 − p)n.
47–49. Channel Capacity
Question: Given a channel, what is the best rate we can hope for?
Binary Symmetric Channel — BSC(p): each bit is independently flipped with probability p.
If X^n is transmitted through the channel and received as Y^n, how many bits of information about X^n do we get from Y^n? Intuitively, (1 − H(p))n, since (n choose pn) ≈ 2^{H(p)·n}.
50. Channel Capacity
[Shannon48] The maximum rate that enables decoding (w.h.p.) is: 1 − p for BEC(p), and 1 − H(p) for BSC(p). Codes achieving this bound are called capacity-achieving.
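Both capacity formulas are one-liners once the binary entropy function H is in place; a quick sketch (names are mine):

```python
from math import log2

def H(p):
    # binary entropy: H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def capacity_bec(p):
    return 1 - p        # capacity of BEC(p)

def capacity_bsc(p):
    return 1 - H(p)     # capacity of BSC(p)
```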
51–52. Motivating questions for this talk
How well do Reed-Muller codes perform in the Shannon model? In BEC(p)? In BSC(p)? Are they as good as polar codes?
53–57. Outline: So far · RM codes and erasures · Decoding errors, efficiently · Conclusions and Q&A. (Diagram not to scale.)
58–61. Dual of a code
Any linear space can be specified by a generating basis, or as the solution set of a system of constraints.
C⊥ = {u : ⟨v, u⟩ = 0 for every v ∈ C}
Parity Check Matrix (a basis for C⊥ stacked as rows): PCM · v = 0 ⇐⇒ v ∈ C.
62–70. Linear codes and erasures
Question: When can we decode from a pattern of erasures?
Decodable if and only if no non-zero codeword is supported on the erasures; this depends only on the erasure pattern.
Observation: A pattern of erasures is decodable if and only if the corresponding columns of the Parity Check Matrix are linearly independent.
In order for a code to be good for BEC(p), the Parity Check Matrix of the code must be “robustly high-rank”.
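The observation gives a direct decodability test: take the parity-check columns at the erased positions and check their GF(2) linear independence. A sketch (the int-bitmask representation and the Hamming-code example are mine):

```python
def gf2_rank(vecs):
    # rank over GF(2); each vector is an int bitmask
    basis = []
    for v in vecs:
        for b in basis:
            v = min(v, v ^ b)   # clears the leading bit of b if set in v
        if v:
            basis.append(v)
    return len(basis)

def erasures_decodable(pcm_columns, erased):
    # decodable iff the erased columns of the parity check matrix are independent
    sub = [pcm_columns[i] for i in erased]
    return gf2_rank(sub) == len(sub)

# parity check matrix of the [7,4] Hamming code: column i is the binary
# expansion of i+1, so any two columns are independent, but e.g. 1 ^ 2 == 3
hamming_cols = list(range(1, 8))
```

Here `erasures_decodable(hamming_cols, [0, 1])` holds, while `[0, 1, 2]` is not decodable, since columns 1 ⊕ 2 = 3.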
71–73. Reed-Muller codes under erasures
Cool Fact: The dual of RM(m, r) is RM(m, r′) where r′ = m − r − 1.
Hence, the Parity Check Matrix of RM(m, r) is the generator matrix of RM(m, r′).
(Picture: the matrix with rows indexed by monomials M of degree ≤ r′, columns indexed by points v ∈ {0,1}^m, and entry M(v).)
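The Cool Fact can be checked directly for small m: build the evaluation matrix of RM(m, r) (rows = monomials of degree ≤ r, columns = points of {0,1}^m) and verify that every row is orthogonal over GF(2) to every row of RM(m, m − r − 1). A sketch (function names are mine):

```python
from itertools import combinations, product

def rm_generator(m, r):
    # rows indexed by monomials (subsets of variables) of degree <= r,
    # columns indexed by points of {0,1}^m; entry = monomial evaluated at point
    pts = list(product([0, 1], repeat=m))
    return [[int(all(p[i] for i in S)) for p in pts]
            for d in range(r + 1) for S in combinations(range(m), d)]

def orthogonal(G1, G2):
    # every row of G1 orthogonal to every row of G2 over GF(2)
    return all(sum(a * b for a, b in zip(u, w)) % 2 == 0
               for u in G1 for w in G2)
```

For m = 4 and r = 1, `orthogonal(rm_generator(4, 1), rm_generator(4, 2))` holds, matching r′ = m − r − 1 = 2; RM(4, 2) is not orthogonal to itself, as expected.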
74–77. Reed-Muller codes under erasures
(Matrix: rows indexed by monomials M of degree ≤ r′, columns by points v ∈ {0,1}^m, entry M(v).)
Question: Let R = (m choose ≤ r′), the number of monomials of degree ≤ r′. Suppose you pick 0.99·R columns of this matrix at random. Are they linearly independent with high probability?
(Number line for r′ between 0 and m/2: YES for r′ = o(√m / log m) [ASW-15] and for r′ = O(√m) [KMSU+KP-16]; YES for r′ within o(m) of m/2 [ASW-15]; OPEN in the remaining ranges.)
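The question can be probed empirically for small parameters: sample 0.99·R of the 2^m columns and compute their GF(2) rank. A small experiment sketch (all names and the particular parameters are mine; this is only a sanity check, not evidence for the asymptotic question):

```python
import random
from itertools import combinations, product

def rm_columns(m, r):
    # columns of the generator matrix of RM(m, r), one int bitmask per point:
    # bit k of the column at v is the k-th monomial of degree <= r evaluated at v
    monos = [S for d in range(r + 1) for S in combinations(range(m), d)]
    cols = [sum(1 << k for k, S in enumerate(monos) if all(v[i] for i in S))
            for v in product([0, 1], repeat=m)]
    return cols, len(monos)

def gf2_rank(vecs):
    # rank over GF(2); each vector is an int bitmask
    basis = []
    for v in vecs:
        for b in basis:
            v = min(v, v ^ b)
        if v:
            basis.append(v)
    return len(basis)

def independent_fraction(m, r, frac=0.99, trials=100, seed=1):
    # fraction of random frac*R column samples that are linearly independent
    cols, R = rm_columns(m, r)
    k = max(1, int(frac * R))
    rng = random.Random(seed)
    hits = sum(gf2_rank(rng.sample(cols, k)) == k for _ in range(trials))
    return hits / trials
```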
78–83. From erasures to errors
Theorem [ASW]: Any pattern correctable from erasures in RM(m, m − r − 1) is correctable from errors in RM(m, m − 2r − 2).
Remark: If r falls in the green zone, then RM(m, m − r − 1) can correct ≈ (m choose ≤ r) random erasures.
Corollary #1 (high-rate): Decodable from (1 − o(1))·(m choose ≤ r) random errors in RM(m, m − 2r) if r = o(√m / log m). (The min distance of RM(m, m − 2r) is 2^{2r}.)
Corollary #2 (low-rate): Decodable from (1/2 − o(1))·2^m random errors in RM(m, o(√m)). (If r = m/2 − o(√m), then m − 2r − 2 = o(√m); the min distance of RM(m, √m) is 2^{m − √m}.)
[S-Shpilka-Volk]: Efficient decoding from errors.
84–85. Outline (recap): So far · RM codes and erasures · Decoding errors, efficiently.
86. What we want to prove
Theorem [S-Shpilka-Volk]: There exists an efficient algorithm with the following guarantee: given a corrupted codeword w = v + err_S of RM(m, m − 2r − 2), if S happens to be a correctable erasure pattern in RM(m, m − r − 1), then the algorithm correctly decodes v from w.
87–93. What we have access to
The received word is w := v + err_S for some v ∈ RM(m, m − 2r − 2) and S = {u_1, ..., u_t}.
The Parity Check Matrix for RM(m, m − 2r − 2) is the generator matrix E(m, 2r + 1) of RM(m, 2r + 1). Hence E(m, 2r + 1) · v = 0, while E(m, 2r + 1) · w gives, in the row of each monomial M, the value M(u_1) + ··· + M(u_t).
By linearity, we have access to ∑_{i∈S} f(u_i) for every polynomial f with deg(f) ≤ 2r + 1.
94–97. Erasure Correctable Patterns
A pattern of erasures is correctable if and only if the corresponding columns in the parity check matrix are linearly independent.
The parity check matrix for RM(m, m − r − 1) is the generator matrix for RM(m, r).
Corollary: A pattern S = {u_1, ..., u_t} is erasure-correctable in RM(m, m − r − 1) if and only if u_1^r, ..., u_t^r are linearly independent, where u_i^r is the vector of all monomials of degree ≤ r evaluated at u_i.
98–101. The Decoding Algorithm
Input: received word w (= v + err_S).
Lemma: Assume that S is a pattern of erasures correctable in RM(m, m − r − 1). For any u ∈ {0,1}^m, we have u ∈ S if and only if there exists a polynomial g with deg(g) ≤ r such that ∑_{i∈S} (f · g)(u_i) = f(u) for every f with deg(f) ≤ r + 1.
This can be checked by solving a system of linear equations; the algorithm is then straightforward.
102–121. Proof of Lemma
Claim 1: Let u_1, ..., u_t ∈ {0,1}^m be such that u_1^r, ..., u_t^r are linearly independent. Then, for each i ∈ [t], there is a polynomial h_i such that:
▶ deg(h_i) ≤ r,
▶ h_i(u_j) = 1 if i = j, and 0 otherwise.
Proof: Stack u_1^r, ..., u_t^r as the columns of a matrix; since they are independent, row operations bring it to the form [I; 0], and the rows of the transformation give the coefficient vectors of the h_i.
Main Lemma (⇒): If u ∈ S, then there is a polynomial g with deg(g) ≤ r such that ∑_{u_i ∈ S} (f · g)(u_i) = f(u) for every f with deg(f) ≤ r + 1. Indeed, if u = u_i, then g = h_i satisfies the conditions.
Claim 2: If u ∉ {u_1, ..., u_t}, then there is a polynomial f such that deg(f) ≤ r + 1 and f(u) = 1, but f(u_i) = 0 for all i = 1, ..., t.
Main Lemma (⇐): If u ∉ S, then there is no polynomial g with deg(g) ≤ r such that ∑_{u_i ∈ S} (f · g)(u_i) = f(u) for every f with deg(f) ≤ r + 1: for the f given by Claim 2, the left-hand side is 0 for every g, while the right-hand side is f(u) = 1.
Proof of Claim 2.
Case 1: Suppose h_i(u) = 1 for some i. Since u ≠ u_i, we have u_ℓ ≠ (u_i)_ℓ for some coordinate ℓ, and f(x) = h_i(x) · (x_ℓ − (u_i)_ℓ) works.
Case 2: Suppose h_i(u) = 0 for all i. Then ∑ h_i(x) takes the value 0 at u and 1 at each u_j, so f(x) = 1 − ∑ h_i(x) works.
Therefore, if u_1^r, ..., u_t^r are linearly independent, then there exists a polynomial g with deg(g) ≤ r satisfying ∑_{u_i ∈ S} (f · g)(u_i) = f(u) for every polynomial f with deg(f) ≤ r + 1, if and only if u = u_i for some i.
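Claim 1 is constructive: the h_i are obtained by solving the linear system whose rows are the vectors u_j^r. A sketch over GF(2) (all function names are mine):

```python
from itertools import combinations

def monomials(m, r):
    # all monomials (as variable subsets) of degree <= r
    return [S for k in range(r + 1) for S in combinations(range(m), k)]

def eval_monos(monos, u):
    # the vector u^r: every monomial of degree <= r evaluated at the point u
    return [int(all(u[i] for i in S)) for S in monos]

def gf2_solve(A, b):
    # one solution of Ax = b over GF(2), or None (Gauss-Jordan elimination)
    rows = [row[:] + [bb] for row, bb in zip(A, b)]
    piv, piv_cols = 0, []
    for col in range(len(A[0])):
        pr = next((r for r in range(piv, len(rows)) if rows[r][col]), None)
        if pr is None:
            continue
        rows[piv], rows[pr] = rows[pr], rows[piv]
        for r in range(len(rows)):
            if r != piv and rows[r][col]:
                rows[r] = [x ^ y for x, y in zip(rows[r], rows[piv])]
        piv_cols.append(col)
        piv += 1
    if any(all(x == 0 for x in row[:-1]) and row[-1] for row in rows):
        return None
    x = [0] * len(A[0])
    for r, col in enumerate(piv_cols):
        x[col] = rows[r][-1]
    return x

def indicator_polys(points, m, r):
    # Claim 1: coefficient vectors of h_i with h_i(u_j) = 1 iff i = j,
    # found by solving against the matrix whose row j is u_j^r
    monos = monomials(m, r)
    A = [eval_monos(monos, u) for u in points]
    return monos, [gf2_solve(A, [int(j == i) for j in range(len(points))])
                   for i in range(len(points))]
```

For example, the three points (0,0,0), (1,0,0), (0,1,0) with r = 1 have independent u^r vectors, and the resulting h_i evaluate to the identity pattern on them.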
122–125. Decoding Algorithm
▶ Input: received word w (= v + err_S).
▶ Compute E(m, 2r + 1) · w to get the value of ∑_{u_i ∈ S} f(u_i) for every polynomial f with deg(f) ≤ 2r + 1.
▶ For each u ∈ {0,1}^m, solve for a polynomial g with deg(g) ≤ r satisfying ∑_{u_i ∈ S} (f · g)(u_i) = f(u) for every f with deg(f) ≤ r + 1. If there is a solution, add u to Corruptions.
▶ Flip the coordinates in Corruptions and interpolate.
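For tiny parameters the whole algorithm fits in a few dozen lines. The sketch below (all names are mine) follows the three steps: the inner products ⟨eval(f·g), w⟩ play the role of ∑_{u_i ∈ S}(f·g)(u_i), since eval(f·g) lies in RM(m, 2r+1) = RM(m, m−2r−2)⊥ and is therefore orthogonal to the codeword part of w. It is a brute-force illustration (every monomial is re-evaluated at all 2^m points), not the paper's efficient implementation.

```python
from itertools import combinations, product

def monomials(m, d):
    return [S for k in range(d + 1) for S in combinations(range(m), k)]

def mono_eval(S, pts):
    # truth table of the monomial prod_{i in S} x_i over all points
    return [int(all(p[i] for i in S)) for p in pts]

def gf2_solvable(A, b):
    # is Ax = b solvable over GF(2)? (Gauss-Jordan on the augmented matrix)
    rows = [row[:] + [bb] for row, bb in zip(A, b)]
    piv = 0
    for col in range(len(A[0])):
        pr = next((r for r in range(piv, len(rows)) if rows[r][col]), None)
        if pr is None:
            continue
        rows[piv], rows[pr] = rows[pr], rows[piv]
        for r in range(len(rows)):
            if r != piv and rows[r][col]:
                rows[r] = [x ^ y for x, y in zip(rows[r], rows[piv])]
        piv += 1
    return not any(all(x == 0 for x in row[:-1]) and row[-1] for row in rows)

def decode(w, m, r):
    # w: corrupted codeword of RM(m, m - 2r - 2), listed over {0,1}^m
    pts = list(product([0, 1], repeat=m))
    gs, fs = monomials(m, r), monomials(m, r + 1)
    corruptions = []
    for idx, u in enumerate(pts):
        A, b = [], []
        for f in fs:
            row = []
            for g in gs:
                fg = tuple(sorted(set(f) | set(g)))   # the monomial f * g
                ev = mono_eval(fg, pts)
                # <eval(f*g), w> = sum over the error set of (f*g)(u_i)
                row.append(sum(e & x for e, x in zip(ev, w)) % 2)
            A.append(row)
            b.append(int(all(u[i] for i in f)))
        if gf2_solvable(A, b):       # Lemma: solvable iff u is an error position
            corruptions.append(idx)
    out = w[:]
    for i in corruptions:
        out[i] ^= 1                  # flip; one would then interpolate
    return out
```

For m = 4 and r = 1 the code is RM(4, 0), the 16-bit repetition code, and any two distinct error positions form a correctable erasure pattern in RM(4, 2), so the decoder recovers the codeword exactly.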
126–127. Outline (recap): So far · RM codes and erasures · Decoding errors, efficiently · Conclusions and Q&A.
128–131. ICYMI
Remark: If r falls in the green zone, then RM(m, m − r − 1) can correct ≈ (m choose ≤ r) random erasures.
Theorem [S-Shpilka-Volk]: Any pattern that is erasure-correctable in RM(m, m − r − 1) is efficiently error-correctable in RM(m, m − 2r − 2).
Corollary #1 (high-rate): Efficiently decodable from (1 − o(1))·(m choose ≤ r) random errors in RM(m, m − 2r) if r = o(√m / log m). (The min distance of RM(m, m − 2r) is 2^{2r}.)
Corollary #2 (low-rate): Efficiently decodable from (1/2 − o(1))·2^m random errors in RM(m, o(√m)). (The min distance of RM(m, √m) is 2^{m − √m}.)
132–135. The obvious open question
(Matrix: rows indexed by monomials M of degree ≤ r, columns by points v ∈ {0,1}^m, entry M(v).)
Question: Let R = (m choose ≤ r), the number of monomials of degree ≤ r. Suppose you pick 0.99·R columns of this matrix at random. Are they linearly independent with high probability?
(Known for r = o(√m / log m) [ASW-15], for r = O(√m) [KMSU+KP-16], and for r within o(m) of m/2 [ASW-15]; OPEN in the remaining ranges.)
